High performance middleware wars: ZeroMQ vs Enduro/X benchmark

Recently, for my financial transaction processing project, I was evaluating high performance message queues for sub-millisecond response times. I came across two middleware platforms that promise high message throughput:

  • ZeroMQ – middleware which uses TCP/IP (among other transports) for message interchange;
  • Enduro/X – middleware which uses Unix kernel queues for message passing between processes.

These are two different technologies, but let's benchmark them anyway, using the tools provided with each platform. Enduro/X ships testing tools that include benchmark plotting, and ZeroMQ provides similar tools that benchmark message IPC by sending packets from one process to another.

The test has the following characteristics:

  • A message is sent from one process to another without a reply, i.e. one-way message delivery;
  • Message sizes: 1 KB, 5 KB, 9 KB, …, 53 KB (in 4 KB steps).

Results are logged to a data file, from which a chart is later plotted.
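As a sketch of the plotting step (assuming gnuplot is available; the file name plot.gp is illustrative, and the column layout matches the test.out rows shown below):

```shell
#!/bin/bash
# Generate a gnuplot script that charts CallsPerSec vs MsgSize from test.out.
# test.out rows look like:  "ZeroMQ" 5 158614  (label, size in KB, msgs/sec)
cat > plot.gp <<'EOF'
set title "Throughput by message size"
set xlabel "Message size (KB)"
set ylabel "Calls per second"
set term png
set output "benchmark.png"
plot "test.out" using 2:3 with linespoints title "ZeroMQ"
EOF
# Render the chart (requires gnuplot):
# gnuplot plot.gp
echo "gnuplot script written to plot.gp"
```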

Test System

Testing is performed on Linux Mint MATE 17.2 (based on Ubuntu 14.04) with a v4.5 kernel, 64-bit, 8 GB RAM, and an Intel i5-2520M processor.

Software for testing:

  • Enduro/X version 3.2.2
  • ZeroMQ version 4.1.5

ZeroMQ Benchmark

For ZeroMQ result plotting the following script was used (it follows the benchmarking approach described in the ZeroMQ performance testing guide). The test uses local_thr and remote_thr over localhost TCP.

#!/bin/bash
size=(1 5 9 13 17 21 25 29 33 37 41 45 49 53)
for i in "${size[@]}"; do
 j="$(($i * 1024))"
 echo $j
 killall local_thr remote_thr
 sleep 1
 # start the local (bind) side
 ./local_thr tcp://lo:5555 $j 1000000 > tmp.out &
 # start the remote (connect) side
 ./remote_thr tcp://127.0.0.1:5555 $j 1000000
 echo "*** TEST OUTPUT ***"
 cat tmp.out
 MSG_PER_SEC=$(grep "mean throughput" tmp.out | grep msg | awk '{print $3}')
 echo "MSG/SEC: $MSG_PER_SEC"
 echo "*******************"
 echo "\"ZeroMQ\" $i $MSG_PER_SEC" >> test.out
 # kill test programs
 killall local_thr remote_thr
done
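The MSG_PER_SEC extraction above relies on local_thr printing a line like "mean throughput: 658781 [msg/s]" (the exact wording is an assumption based on the stock ZeroMQ perf tools); awk '{print $3}' then picks the third whitespace-separated field:

```shell
#!/bin/bash
# Sample local_thr output line (format assumed from the ZeroMQ perf tools):
line='mean throughput: 658781 [msg/s]'
# Field 1: "mean", field 2: "throughput:", field 3: the number itself.
MSG_PER_SEC=$(echo "$line" | grep "mean throughput" | grep msg | awk '{print $3}')
echo "MSG/SEC: $MSG_PER_SEC"   # prints: MSG/SEC: 658781
```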

The results are appended to the test.out file. The results I got were as follows:

"Configuration" "MsgSize" "CallsPerSec"
"ZeroMQ" 1 658781 
"ZeroMQ" 5 158614 
"ZeroMQ" 9 105379 
"ZeroMQ" 13 89496 
"ZeroMQ" 17 64921 
"ZeroMQ" 21 61736 
"ZeroMQ" 25 60590 
"ZeroMQ" 29 58622 
"ZeroMQ" 33 60852 
"ZeroMQ" 37 56381 
"ZeroMQ" 41 55865 
"ZeroMQ" 45 52895 
"ZeroMQ" 49 52143 
"ZeroMQ" 53 46434

Enduro/X Benchmark

The Enduro/X benchmark is based on the Enduro/X Benchmark document. Execution was done by running the doc/benchmark/build.sh script. Output was joined in "04_tpacall.txt" and the chart regenerated with "$ build.sh r".

The results on my Linux platform were as follows:

"Configuration" "MsgSize" "CallsPerSec"
"Enduro/X (linux 4.5)" 1 402753
"Enduro/X (linux 4.5)" 5 366041
"Enduro/X (linux 4.5)" 9 237587
"Enduro/X (linux 4.5)" 13 251288
"Enduro/X (linux 4.5)" 17 138942
"Enduro/X (linux 4.5)" 21 115218
"Enduro/X (linux 4.5)" 25 97640
"Enduro/X (linux 4.5)" 29 85601
"Enduro/X (linux 4.5)" 33 76632
"Enduro/X (linux 4.5)" 37 69131
"Enduro/X (linux 4.5)" 41 62630
"Enduro/X (linux 4.5)" 45 57488
"Enduro/X (linux 4.5)" 49 52648
"Enduro/X (linux 4.5)" 53 48646

Results

Here comes the most interesting part: how do these two compare? See the chart:

[Chart: zeromq_vs_endurox - ZeroMQ tcp:// vs Enduro/X calls per second by message size]

From the test we can see that at small message sizes ZeroMQ performs better, but in the range of ~5 KB to ~30 KB Enduro/X clearly shows its strength, with higher throughput. Enduro/X winning at larger sizes may be related to the fact that it uses the operating system's queues (which are basically shared memory): as the message size grows, the chain of operations stays the same, essentially a constant number of memcpy() calls in user space and kernel space. For ZeroMQ the chain of operations can change depending on the system's TCP/IP stack MTU settings, checksumming, and other TCP processing overhead.

For my financial transaction processing application I have selected the Enduro/X platform, because messages in this size range are more common in the problem area. Enduro/X also gives nice application server functionality out of the box.

15/11/2016 update: comparison with the ZeroMQ ipc:// facility

In the http://stackoverflow.com/questions/2854004/ipc-speed-and-compare/40606375#40606375 post it was pointed out that I had compared ZeroMQ tcp:// against Enduro/X IPC, so let's run the test with the ZeroMQ ipc:// facility, which on Linux runs over Unix domain sockets. The script was the following:

#!/bin/bash
size=(1 5 9 13 17 21 25 29 33 37 41 45 49 53)
for i in "${size[@]}"; do
 j="$(($i * 1024))"
 echo $j
 killall local_thr remote_thr
 sleep 1
 # start the local (bind) side
 ./local_thr ipc:///tmp/y $j 1000000 > tmp.out &
 # start the remote (connect) side
 ./remote_thr ipc:///tmp/y $j 1000000
 echo "*** TEST OUTPUT ***"
 cat tmp.out
 MSG_PER_SEC=$(grep "mean throughput" tmp.out | grep msg | awk '{print $3}')
 echo "MSG/SEC: $MSG_PER_SEC"
 echo "*******************"
 echo "\"ZeroMQ\" $i $MSG_PER_SEC" >> test.out
 # kill test programs
 killall local_thr remote_thr
done

The output (with the label updated by hand to "ZeroMQ ipc://"):

$ cat test.out
"ZeroMQ ipc://" 1 1179170
"ZeroMQ ipc://" 5 301360
"ZeroMQ ipc://" 9 189343
"ZeroMQ ipc://" 13 150115
"ZeroMQ ipc://" 17 127244
"ZeroMQ ipc://" 21 100667
"ZeroMQ ipc://" 25 102241
"ZeroMQ ipc://" 29 92593
"ZeroMQ ipc://" 33 96148
"ZeroMQ ipc://" 37 85642
"ZeroMQ ipc://" 41 88643
"ZeroMQ ipc://" 45 78705
"ZeroMQ ipc://" 49 66414
"ZeroMQ ipc://" 53 70142

If we combine this with the above results, we get the following chart:

[Chart: zmq_ipc - combined throughput results, including ZeroMQ ipc://]

From these results we see that for very small packet sizes ZeroMQ ipc:// runs a lot faster, but the Enduro/X POSIX queues still show better results in the 5-20 KB range.
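To compare the two transports numerically rather than by eye, the result files can be joined on the size column with awk (the file names endurox.dat and zmq_ipc.dat are illustrative; the inline sample rows are taken from the tables above):

```shell
#!/bin/bash
# Join Enduro/X and ZeroMQ ipc:// results on message size and print the ratio.
cat > endurox.dat <<'EOF'
"Enduro/X (linux 4.5)" 5 366041
"Enduro/X (linux 4.5)" 17 138942
EOF
cat > zmq_ipc.dat <<'EOF'
"ZeroMQ ipc://" 5 301360
"ZeroMQ ipc://" 17 127244
EOF
# First pass stores Enduro/X calls/sec keyed by size (next-to-last field);
# second pass looks the size up and prints the throughput ratio.
awk 'NR==FNR { ex[$(NF-1)] = $NF; next }
     { size = $(NF-1); printf "%s KB: Enduro/X is %.2fx ZeroMQ ipc\n", size, ex[size]/$NF }' \
    endurox.dat zmq_ipc.dat
# prints:
# 5 KB: Enduro/X is 1.21x ZeroMQ ipc
# 17 KB: Enduro/X is 1.09x ZeroMQ ipc
```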
