Richard Searle's Blog

Thoughts about software

Archive for the ‘Uncategorized’ Category

Moved to github

Posted by eggsearle on January 26, 2014

Moved to github 

Posted in Uncategorized | Leave a Comment »

Working around Scala parser memory leak.

Posted by eggsearle on January 26, 2014

This ticket covers an ongoing issue with the parser combinators. The last fix replaced the thread safety issue with a memory leak. That was unfortunate, since it was fairly easy to deal with the thread safety limitation. Resolving the memory leak requires more heroic efforts, illustrated by this code.

This issue has forced me to remain on Scala 2.9.x, which is becoming less tenable as time passes. The ticket implies that 2.11 will contain a fix, but there is no evidence of any change in the current code.

Reworking the code to use the phrase combinator side-steps the memory leak and allows an upgrade to Scala 2.10.x.

def rawParse(reader: Reader[Char]) = parse(rootParser, reader)


def rawParse(reader: Reader[Char]) = {
  parse(phrase(capture(rootParser) ~ dropDead), reader) match {
    case Success(wrapped, _) =>
      wrapped match {
        case result ~ in =>
          result match {
            case ns: NoSuccess => ns
            case _             => Success(result, in)
          }
      }
    case ns: NoSuccess => ns // should not be seen
  }
}

// Consumes the rest of the input so that phrase always succeeds,
// releasing the parser's reference to the reader.
private val dropDead = Parser { in => Success(in, new CharSequenceReader("")) }

// Captures a failure as a successful result so the ~ dropDead step still runs.
private def capture[T](p: => Parser[T]) = Parser { in =>
  p(in) match {
    case ns: NoSuccess => Success(ns, in)
    case s             => s
  }
}

Posted in Uncategorized | Leave a Comment »

Unexpected attribute handling when querying datomic txReport

Posted by eggsearle on October 12, 2013

Performing queries gives the results indicated in the comments:
Peer.q("[:find ?value :in $ :where [_ :db/doc ?value]]", tx.get(Connection.TX_DATA)); // returns []
Peer.q("[:find ?value :in $ :where [_ :db/doc ?value]]", db); // returns values
Peer.q("[:find ?value :in $ :where [_ 61 ?value]]", tx.get(Connection.TX_DATA)); // returns values
Peer.q("[:find ?value :in $ :where [_ 61 ?value]]", db); // returns values
So :db/doc is mapped to 61 for a query against the database but not when referencing the txReport.
This initially looks rather strange but is a direct consequence of how Datomic is implemented. 
:db/doc is merely an entity stored in the database, just like any other entity. It is not a special constant, baked into the implementation.
The txReport value does not contain the entity and thus cannot perform the mapping to attribute id. 
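One workaround (a sketch, not from the original post) is to resolve the attribute's numeric id against the database value first, using the peer API's Database.entid, and pass it to the query as a bound input; the db, tx, Util and Peer names follow the set-up below.

```scala
// Sketch: map :db/doc to its numeric attribute id via the db value,
// then query the txReport data with that id as a bound input.
val attrId = db.entid(Util.read(":db/doc")) // 61 in a stock schema
val results = Peer.q(
  "[:find ?value :in $ ?a :where [_ ?a ?value]]",
  tx.get(Connection.TX_DATA), attrId)
```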
Set up the test:
uri = "datomic:mem://hello";
datom = Util.list("db/add",
                  "hello world");
queue = conn.txReportQueue();
db = conn.db();

Posted in Uncategorized | Leave a Comment »

datomic history

Posted by eggsearle on October 7, 2013

The history API returns a Database that contains asserts and retractions.

This seems to duplicate the Log API, so further study is required.

Posted in Uncategorized | Leave a Comment »

Query datoms via transaction id fails due to full scan error

Posted by eggsearle on October 6, 2013

Recent versions of Datomic provide a log interface that gives access to the transactions that have been processed.
That interface is unfortunately not implemented for memory databases, which complicates testing.

The datoms stored by Datomic are quads, which include the identity of the transaction.
It is thus possible to locate datoms asserted by a specific transaction:

[:find ?e ?name  :in $ ?t  :where [ ?e :person/name ?name ?t]]

will locate all datoms that asserted a person's name for a transaction id ?t.

One might then expect that this query would retrieve all the datoms asserted by a transaction:

[:find ?e ?a ?v  :in $ ?t  :where [ ?e ?a ?v ?t]]

That query fails with “Insufficient binding of db clause: [?e ?a ?v ?t] would cause full scan”.

That would make sense if Datomic does not provide an index over transaction ids.
However, it is not clear how the log interface can provide reasonable performance without that index.




Posted in Uncategorized | Leave a Comment »

Some concerns with akka.js

Posted by eggsearle on August 17, 2013

The akka.js project provides a direct linkage from angular.js to Akka and was referenced by Jonas Bonér.

However, there are some concerns with the architecture and the implementation. The latter would be irrelevant if this project is merely a POC.

  1. The WebSocketClientStore object contains a mutable Map shared between two actors.
  2. ActorRefs are added to the Map, but never removed.
  3. The akka module contains a replyTo  map, to which a promise is added, but never removed.
  4. The actor protocol is defined in terms of JsonObject, which is essentially untyped.
  5. The Akka actors are directly exposed to untrustworthy client code, without any mechanism for authentication, authorization, etc.


Posted in Uncategorized | Leave a Comment »

Akka IO using byte[]

Posted by eggsearle on February 2, 2013

Used the code from the referenced examples to send data serialized using ProtoBufs.

These examples are all string based, primarily to make the code easy to test with curl.

The string representation of the length can be replaced with its 4-byte binary representation.
The payload is still a byte[], derived from a String for convenience:


case s: String => handle.foreach { h =>
  val payload = s.getBytes("US-ASCII")
  val bb = ByteBuffer.allocate(4)
  bb.putInt(payload.length) // 4 byte binary length prefix
  bb.flip()
  h write ByteString(bb)
  h write ByteString(payload)
}

def readMessage: IO.Iteratee[String] =
  for {
    lengthBytes <- take(4)
    len = lengthBytes.asByteBuffer.getInt()
    bytes <- take(len)
  } yield bytes.decodeString("US-ASCII")
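The framing itself can be exercised without Akka; a minimal sketch using plain NIO (frame and unframe are invented names):

```scala
import java.nio.ByteBuffer

// Prefix a payload with its 4-byte big-endian length.
def frame(payload: Array[Byte]): Array[Byte] = {
  val bb = ByteBuffer.allocate(4 + payload.length)
  bb.putInt(payload.length)
  bb.put(payload)
  bb.array()
}

// Read the length prefix, then the payload.
def unframe(framed: Array[Byte]): Array[Byte] = {
  val bb = ByteBuffer.wrap(framed)
  val len = bb.getInt()
  val payload = new Array[Byte](len)
  bb.get(payload)
  payload
}

val msg = "hello".getBytes("US-ASCII")
println(new String(unframe(frame(msg)), "US-ASCII")) // prints hello
```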



Posted in Uncategorized | 6 Comments »

LMAX Disruptor and h.264 processing

Posted by eggsearle on January 13, 2013

A recent project required the decryption of an MPEG TS multicast stream.

The first attempt simply looped, reading from a MulticastChannel, decrypting with a Cipher and writing via a DatagramChannel.
The decryption turned out to have negligible cost, which was surprising.

Unfortunately, this simple implementation dropped too many packets. The resultant video was unwatchable, being riddled with artifacts.

Some buffering was obviously required.

The standard j.u.c classes did not resolve the problem. Packet loss remained an issue, perhaps due to ongoing garbage collection.

The LMAX Disruptor provides a ring buffer containing preallocated elements, eliminating GC.
Its design provides very small and consistent latency, orders of magnitude better than ArrayBlockingQueue.

A simple two-thread design turned out to suffice:

  1. Blocking read of a packet into a ByteBuffer in the ring buffer
  2. Decrypt the ByteBuffer in-place and transmit from the ring buffer

This approach generates zero garbage.
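The two-thread design can be sketched, without the Disruptor dependency, as a single-producer/single-consumer ring of preallocated buffers; this is a simplified illustration of the pattern (all names invented), not the production code:

```scala
import java.nio.ByteBuffer
import java.util.concurrent.atomic.AtomicLong

// One reader thread claims a slot, fills it and publishes; one worker
// decrypts in place and transmits. Buffers are preallocated up front,
// so the hot path allocates nothing and generates no garbage.
final class PacketRing(size: Int, packetBytes: Int) {
  require((size & (size - 1)) == 0, "size must be a power of two")
  private val slots = Array.fill(size)(ByteBuffer.allocateDirect(packetBytes))
  private val head  = new AtomicLong(0) // next slot the producer will publish
  private val tail  = new AtomicLong(0) // next slot the consumer will release

  def claim(): ByteBuffer = {           // producer: wait for a free slot
    while (head.get - tail.get >= size) Thread.onSpinWait()
    slots((head.get % size).toInt)
  }
  def publish(): Unit = head.incrementAndGet()

  def take(): ByteBuffer = {            // consumer: wait for a published slot
    while (tail.get >= head.get) Thread.onSpinWait()
    slots((tail.get % size).toInt)
  }
  def release(): Unit = tail.incrementAndGet()
}

val ring = new PacketRing(1024, 1500)   // illustrative sizes
val in = ring.claim(); in.clear(); in.putInt(42); ring.publish()
val out = ring.take(); out.flip(); println(out.getInt()); ring.release() // prints 42
```

A real deployment would block in claim/take rather than spin, but the preallocation and in-place reuse are the essence of why the Disruptor-based version produced no garbage.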

Any other design caused packet loss. That includes separating the decryption into a third thread.

The scheduling of the reading thread appears to be the primary factor in avoiding packet loss. The system thus works best when there are more free cores than active threads. Further improvement would require manipulation of the scheduler, e.g. by pinning the thread to a specific core.

The result was very pleasing, requiring less than 100 lines of source code.
A quad-core 3.5 GHz Xeon suffices to handle four 1080p 30 Hz signals encrypted using AES-128.


Posted in Uncategorized | 1 Comment »

Play2 websocket performance

Posted by eggsearle on December 9, 2012

The realtime UI design was recreated using websockets and Play2.

Latency is ~ 1.6 ms and CPU load ~85%.

The latency is 10x better than the alternatives, which would be expected.
The loading did not increase 10x, which is helpful. 

Posted in Uncategorized | Leave a Comment »

Akka HTTP Server example

Posted by eggsearle on November 10, 2012

I was unable to find the source code anywhere except scattered through the documentation page.

Posted in Uncategorized | Leave a Comment »