Very very slow result

Hi sapiens!
I'm testing a few MQTT brokers for our needs with these simple Python scripts:

This one consumes messages:

from datetime import datetime as dt
import paho.mqtt.client as mqtt
from pickle import loads

user_data = {'npp': 0, 'avg': 0, 'fcnt': 0}

def on_connect(client, userdata, flags, rc, properties):
    print("Connected with result code " + str(rc))

    # Subscribing in on_connect() means that if we lose the connection and
    # reconnect then subscriptions will be renewed.
    client.subscribe(topic="xxxx01", qos=0)

def on_message(client, userdata, msg):
    x = loads(msg.payload)
    # running average of (receive time - send time), i.e. the latency
    userdata['avg'] = ((dt.utcnow().timestamp() - x[2]) + userdata['avg']) / 2
    if userdata['npp'] % 100 == 0:
        print(f' msg.topic: {msg.topic} avg_ts: {userdata["avg"]:.12f}'
              f' df>> {dt.utcfromtimestamp(x[2]).strftime("%Y-%m-%d %H:%M:%S.%f")}'
              f' now() >>{dt.now().strftime("%Y-%m-%d %H:%M:%S.%f")}')
    userdata['npp'] += 1

client = mqtt.Client(client_id="XXX2", userdata=user_data, protocol=mqtt.MQTTv5, transport="tcp")
client.on_connect = on_connect
client.on_message = on_message
client.connect(host="", port=1883)
client.loop_forever()


And this one produces them:

import os
import paho.mqtt.client as mqtt
from datetime import datetime
from pickle import dumps

SIZE = 1024 * 1024 * 7

client = mqtt.Client(client_id="XXX", userdata=None, protocol=mqtt.MQTTv5, transport="tcp")
client.connect(host="", port=1883)
client.loop_start()

i = 0
while i <= 100000:
    # payload is (label, random bytes, send timestamp); the consumer reads x[2]
    message_info = client.publish(
        topic='xxxx01',
        payload=dumps((f'my message numbero: {i}', os.urandom(SIZE), datetime.utcnow().timestamp())),
        retain=False,
        qos=0,
    )
    i += 1

And we got the following results:

msg.topic: xxxx01 avg_ts: 0.052382999198 df>> 2022-08-03 08:05:41.531240 now() >>2022-08-03 11:05:41.584121
 msg.topic: xxxx01 avg_ts: 0.058017518886 df>> 2022-08-03 08:05:41.571821 now() >>2022-08-03 11:05:41.635525
 msg.topic: xxxx01 avg_ts: 0.057231320326 df>> 2022-08-03 08:05:41.609001 now() >>2022-08-03 11:05:41.665505

So the average latency between publish and consume is about 55 ms, which is even slower than with Mosquitto.
I have a dream that we can find an MQTT broker with a latency of about 1-2 ms :frowning:

Perhaps I need to do some tuning?

Additional info:

fs.file-max = 9223372036854775807
fs.nr_open = 2097152
net.core.somaxconn = 32768
net.ipv4.tcp_max_syn_backlog = 16384
net.core.netdev_max_backlog = 16384
net.core.rmem_default = 262144
net.core.wmem_default = 262144
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.optmem_max = 16777216
net.ipv4.tcp_rmem = 1024 4096 16777216
net.ipv4.tcp_wmem = 1024 4096 16777216
net.ipv4.tcp_max_tw_buckets = 1048576

... limits:
root             -       nofile          unlimited
*                -       nofile          unlimited

My PC has 64 GB RAM and 24 cores (Intel(R) Xeon(R) CPU E5-2673 v3 @ 2.40GHz).
emqx 5.0.4 on Debian 11

And I don't need persistence in my broker, but I couldn't find how to turn it off.

What am I doing wrong?

  1. Maybe you can move the os.urandom(SIZE) out of the loop.
  2. (a1 + a2 + a3 + ... + an) / n is not equal to (((a1 + a2) / 2 + a3) / 2 + ... + an) / 2.
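To illustrate point 2: the update `avg = (x + avg) / 2` used in the consumer's `on_message` is an exponential moving average, not the arithmetic mean — each step halves the weight of all earlier samples. A minimal sketch (with made-up sample values):

```python
# The script's running "average" vs. the true arithmetic mean.
samples = [1.0, 2.0, 3.0, 4.0]

true_mean = sum(samples) / len(samples)  # 2.5

ema = 0.0
for x in samples:
    ema = (x + ema) / 2  # what the on_message callback computes

print(true_mean)  # 2.5
print(ema)        # 3.0625: recent samples dominate
```

So the printed `avg_ts` is biased toward the most recent latencies rather than the overall average.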

Thank you for the reply, but I really need a random array of bytes. In the real case it will be an np.ndarray with a video frame from OpenCV :slight_smile:

About the second note, I will check, thank you, but it's just an average.

Thanks for the awesome information.

Interested to hear if there is any update on this, @athathoth. Did EMQX lose the competition? If so, which MQTT broker did you pick?

Mates, I realised I'm just an idiot :wink: Look, if I wanted to push all the uncompressed video frames over the network, it would consume 10000000000 Gb per second from 100 video cameras. I switched our architecture to work only with shared memory to process the video streams.
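For reference, a back-of-envelope check with the 7 MB payload from the test script and a hypothetical 30 fps per camera (the frame rate is an assumption, not from the thread):

```python
# Rough uncompressed-video bandwidth estimate.
frame_bytes = 7 * 1024 * 1024   # SIZE in the producer script
fps = 30                        # assumed frame rate per camera
cameras = 100

bytes_per_second = frame_bytes * fps * cameras
print(f"{bytes_per_second / 1e9:.1f} GB/s")  # 22.0 GB/s
```

Even at these conservative numbers, that is far beyond what a single broker on commodity hardware can move, which makes shared memory the sensible choice.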


Oh wow! Yeah, we can’t push that much data yet. Glad you figured it out! I’d love to hear more, actually. I work for EMQ’s marketing department. Would you be interested in sitting down for an interview?

Hi Josh!

To be honest, my English is very ugly, and this interview would be just torture for you :slight_smile: And about marketing: I don't have a "success story" about your product. What plan do you have for this interview?

No worries! Just trying to learn more about how people are using our tools. Could I email you some questions, instead?

Thanks, my issue has been fixed.