Unable to receive data from multiple UDP devices - C++

I am using the EIPScanner library to talk to 2 TURCK I/O modules via EtherNet/IP. The library works fine with one module, but the moment I instantiate the driver for the second one, the first one times out.
I tried to modify the library by passing a port number to listen on, but it didn't seem to make a difference:
In ConnectionManager.cpp
IOConnection::WPtr
ConnectionManager::forwardOpen(const SessionInfoIf::SPtr& si, ConnectionParameters connectionParameters, int port, bool isLarge) {
    ....
    if (o2tSockAddrInfo != additionalItems.end())
    {
        Buffer sockAddrBuffer(o2tSockAddrInfo->getData());
        sockets::EndPoint endPoint("", 0);
        sockAddrBuffer >> endPoint;
        endPoint.setPort(port);
        if (endPoint.getHost() == "0.0.0.0") {
            ioConnection->_socket = std::make_unique<UDPSocket>(
                si->getRemoteEndPoint().getHost(), endPoint.getPort());
        } else {
            ioConnection->_socket = std::make_unique<UDPSocket>(endPoint);
        }
    }
I tried adding a timeout to the recvfrom() function, but that also didn't make a difference.
In UDPSocket.cpp
UDPSocket::UDPSocket(EndPoint endPoint)
        : BaseSocket(EndPoint(std::move(endPoint))) {
    _sockedFd = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
    if (_sockedFd < 0) {
        throw std::system_error(BaseSocket::getLastError(), BaseSocket::getErrorCategory());
    }

    /*struct timeval tv;
    tv.tv_sec = 0;
    tv.tv_usec = 1;
    setsockopt(_sockedFd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));*/

    Logger(LogLevel::DEBUG) << "Opened UDP socket fd=" << _sockedFd;
}
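For what it's worth, the commented-out block asks for a 1 microsecond timeout (tv_usec = 1) and never checks setsockopt's return value. The receive-timeout mechanism itself is easy to verify in isolation; a minimal Python sketch, independent of EIPScanner, with socket.settimeout playing the role of SO_RCVTIMEO:

```python
import socket

# Bound UDP socket with a short receive timeout; nothing will ever arrive.
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.bind(("127.0.0.1", 0))
s.settimeout(0.1)  # conceptually what SO_RCVTIMEO with 100 ms does

timed_out = False
try:
    s.recvfrom(1024)
except socket.timeout:
    timed_out = True
s.close()
print(timed_out)  # True
```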
Output:
[DEBUG] Opened TCP socket fd=7
[DEBUG] Connecting to 192.168.1.130:44818
[INFO] Registered session 103
[DEBUG] Opened TCP socket fd=8
[DEBUG] Connecting to 192.168.1.131:44818
[INFO] Registered session 103
2021-06-18 13:03:02,996 Process-1
2021-06-18 13:03:02,997 Process-1 [INFO] Send request: service=0x54 epath=[classId=6 objectId=1]
[INFO] Open IO connection O2T_ID=1781321728 T2O_ID=915406849 SerialNumber 1
[DEBUG] Opened UDP socket fd=15
[INFO] Open UDP socket to send data to 192.168.1.130:2223
[DEBUG] Opened UDP socket fd=16
[INFO] Will look for id: 915406849 from endPoint: 192.168.1.130:2222
[DEBUG] Received data from connection T2O_ID=915406849(host: 192.168.1.130:2223)
[DEBUG] Received: seq=2 data=[0][0][0][0]
[DEBUG] Received data from connection T2O_ID=915406849(host: 192.168.1.130:2223)
[DEBUG] Received: seq=3 data=[0][0][0][0]
2021-06-18 13:03:03,112 Process-2
2021-06-18 13:03:03,113 Process-2
[INFO] Send request: service=0x54 epath=[classId=6 objectId=1]
[INFO] Open IO connection O2T_ID=1782571776 T2O_ID=1911881729 SerialNumber 1
[DEBUG] Received data from connection T2O_ID=1911881729(host: 192.168.1.130:2223)
[DEBUG] Received: seq=1 data=[0][0][4][0]
[DEBUG] Opened UDP socket fd=20
[INFO] Open UDP socket to send data to 192.168.1.131:2224
[DEBUG] Opened UDP socket fd=21
[INFO] Will look for id: 1911881729 from endPoint: 192.168.1.131:2222
[DEBUG] Received data from connection T2O_ID=915406849(host: 192.168.1.131:2224)
[DEBUG] Received: seq=4 data=[0][0][0][0]
[DEBUG] Received data from connection T2O_ID=1911881729(host: 192.168.1.131:2224)
[DEBUG] Received: seq=2 data=[0][0][4][0]
As you can see, once the device at 192.168.1.131 starts streaming, the 192.168.1.130 device stops receiving, and eventually it times out and shuts down.
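The cross-talk in the log above (datagrams for one T2O_ID showing up under the other device's host/port) is what you would expect if both connections end up reading from the same local socket or port. As a sanity check outside the library, two UDP receivers bound to distinct local ports each see only their own traffic; a minimal Python sketch, independent of EIPScanner:

```python
import socket

# Two independent UDP receivers, each bound to its own ephemeral port.
rx1 = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx2 = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx1.bind(("127.0.0.1", 0))
rx2.bind(("127.0.0.1", 0))
rx1.settimeout(2.0)
rx2.settimeout(2.0)
port1 = rx1.getsockname()[1]
port2 = rx2.getsockname()[1]

# One sender, addressing each receiver separately.
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"for-rx1", ("127.0.0.1", port1))
tx.sendto(b"for-rx2", ("127.0.0.1", port2))

# Each receiver only ever sees datagrams addressed to its own port.
data1, _ = rx1.recvfrom(1024)
data2, _ = rx2.recvfrom(1024)
print(data1, data2)  # b'for-rx1' b'for-rx2'

for s in (rx1, rx2, tx):
    s.close()
```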


Django Login suddenly stopped working - timing out

My Django project worked perfectly fine for the last 90 days.
There has been no new code deployment during this time.
I'm running supervisor -> gunicorn to serve the application, with nginx in front.
Unfortunately it just stopped serving the login page (standard framework login).
I wrote a small view that checks if the DB connection is working and it comes up within seconds.
def updown(request):
    from django.shortcuts import HttpResponse
    from django.db import connections
    from django.db.utils import OperationalError

    status = True
    # Check database connection
    if status is True:
        db_conn = connections['default']
        try:
            c = db_conn.cursor()
        except OperationalError:
            status = False
            error = 'No connection to database'
        else:
            status = True
    if status is True:
        message = 'OK'
    elif status is False:
        message = 'NOK' + ' \n' + error
    return HttpResponse(message)
This delivers back an OK.
But the second I am trying to reach /admin or anything else requiring the login, it times out.
wget http://127.0.0.1:8000
--2022-07-20 22:54:58-- http://127.0.0.1:8000/
Connecting to 127.0.0.1:8000... connected.
HTTP request sent, awaiting response... 302 Found
Location: /business/dashboard/ [following]
--2022-07-20 22:54:58-- http://127.0.0.1:8000/business/dashboard/
Connecting to 127.0.0.1:8000... connected.
HTTP request sent, awaiting response... 302 Found
Location: /account/login/?next=/business/dashboard/ [following]
--2022-07-20 22:54:58-- http://127.0.0.1:8000/account/login/?next=/business/dashboard/
Connecting to 127.0.0.1:8000... connected.
HTTP request sent, awaiting response... No data received.
Retrying.
--2022-07-20 22:55:30-- (try: 2) http://127.0.0.1:8000/account/login/?next=/business/dashboard/
Connecting to 127.0.0.1:8000... connected.
HTTP request sent, awaiting response...
Supervisor / Gunicorn Log is not helpful at all:
[2022-07-20 23:06:34 +0200] [980] [INFO] Starting gunicorn 20.1.0
[2022-07-20 23:06:34 +0200] [980] [INFO] Listening at: http://127.0.0.1:8000 (980)
[2022-07-20 23:06:34 +0200] [980] [INFO] Using worker: sync
[2022-07-20 23:06:34 +0200] [986] [INFO] Booting worker with pid: 986
[2022-07-20 23:08:01 +0200] [980] [CRITICAL] WORKER TIMEOUT (pid:986)
[2022-07-20 23:08:02 +0200] [980] [WARNING] Worker with pid 986 was terminated due to signal 9
[2022-07-20 23:08:02 +0200] [1249] [INFO] Booting worker with pid: 1249
[2022-07-20 23:12:26 +0200] [980] [CRITICAL] WORKER TIMEOUT (pid:1249)
[2022-07-20 23:12:27 +0200] [980] [WARNING] Worker with pid 1249 was terminated due to signal 9
[2022-07-20 23:12:27 +0200] [1515] [INFO] Booting worker with pid: 1515
Nginx is just giving:
502 Bad Gateway
I don't see anything in the logs, I don't see any error when running the dev server from Django, also Sentry is not showing anything. Totally lost.
I am running Django 4.0.x and all libraries are updated.
The check-up script for the database was only checking the connection. Due to a misconfiguration of the database replication, the DB accepted connections and reads, but hung on writes.
The login page tries to write a session row to the tables, and that write is what failed in this case.
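The takeaway generalizes: a health check should probe writes as well as reads. A minimal sketch of the idea using sqlite3 (not Django code; the probe table is made up for illustration):

```python
import sqlite3

# A read-only check can pass while writes hang or fail; probe both.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE probe (id INTEGER)")
conn.commit()

def db_ok(conn):
    try:
        conn.execute("SELECT 1")                      # read probe (what the updown view does)
        conn.execute("INSERT INTO probe VALUES (1)")  # write probe (what login needs)
        conn.rollback()                               # leave no trace behind
    except sqlite3.OperationalError:
        return False
    return True

print(db_ok(conn))  # True
```

With a real deployment, the write probe should also have a timeout around it, since a hanging write (as here) never raises at all.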

Trying to block visitors using regex on fail2ban

I am receiving a large number of requests for PHP files that do not exist in my WordPress install.
They show up in the nginx error log as in the following two examples:
2019/06/24 03:16:43 [error] 4201#4201: *17573871 FastCGI sent in stderr: "Unable to open primary script: /var/www/html/vendor/phpunit/phpunit/src/Util/PHP/eval-stdin.php (No such file or directory)" while reading response header from upstream, client: 172.68.189.50, server: mywebsite.net, request: "GET /vendor/phpunit/phpunit/src/Util/PHP/eval-stdin.php HTTP/1.1", upstream: "fastcgi://unix:/var/run/php/php7.2-fpm.sock:", host: "mywebsite.net"
2019/06/24 03:16:43 [error] 4201#4201: *17573871 FastCGI sent in stderr: "Unable to open primary script: /var/www/html/vendor/phpunit/phpunit/src/Util/PHP/eval-stdin.php (No such file or directory)" while reading response header from upstream, client: 172.68.189.50, server: mywebsite.net, request: "POST /vendor/phpunit/phpunit/src/Util/PHP/eval-stdin.php HTTP/1.1", upstream: "fastcgi://unix:/var/run/php/php7.2-fpm.sock:", host: "mywebsite.net"
I have tried making a noscript filter.
In file /etc/fail2ban/jail.local I put:
[nginx-noscript]
enabled = true
port = http,https
filter = nginx-noscript
logpath = /var/log/nginx/error.log
maxretry = 2
In File /etc/fail2ban/filter.d/nginx-noscript.conf I put:
[Definition]
failregex = \[error\] \d+#\d+: \*\d+ (FastCGI sent in stderr: "Unable to open primary script:)
ignoreregex =
But this filter is not catching this type of 404. After systemctl restart fail2ban, the fail2ban log shows these error messages:
2019-06-24 16:11:05,548 fail2ban.filter [6182]: ERROR No failure-id group in '\[error\] \d+#\d+: \*\d+ (FastCGI sent in stderr: "Unable to open primary script:)'
2019-06-24 16:11:05,548 fail2ban.transmitter [6182]: WARNING Command ['set', 'nginx-noscript', 'addfailregex', '\\[error\\] \\d+#\\d+: \\*\\d+ (FastCGI sent in stderr: "Unable to open primary script:)'] has failed. Received RegexException('No failure-id group in \'\\[error\\] \\d+#\\d+: \\*\\d+ (FastCGI sent in stderr: "Unable to open primary script:)\'',)
2019-06-24 16:11:05,549 fail2ban [6182]: ERROR NOK: ('No failure-id group in \'\\[error\\] \\d+#\\d+: \\*\\d+ (FastCGI sent in stderr: "Unable to open primary script:)\'',)
What am I doing wrong? What would be the full regex for such nginx error logs?
This should work (for fail2ban >= 0.10):
failregex = ^\s*\[error\] \d+#\d+: \*\d+ FastCGI sent in stderr: "Unable to open primary script: [^"]*" while reading response header from upstream, client: <ADDR>
If you have an older version (0.9 or below), use <HOST> instead of <ADDR> (and preferably disable DNS lookups for the jail with usedns = no).
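The pattern can be tried outside fail2ban by swapping the <ADDR> tag for an ordinary capture group (and dropping the ^\s* anchor, since fail2ban strips the date prefix before matching, while re.search below sees the raw line):

```python
import re

# The failregex from the answer, with <ADDR> replaced by a named capture group.
pattern = (r'\[error\] \d+#\d+: \*\d+ FastCGI sent in stderr: '
           r'"Unable to open primary script: [^"]*" while reading response '
           r'header from upstream, client: (?P<addr>[^,\s]+)')

# First sample line from the nginx error log in the question.
line = ('2019/06/24 03:16:43 [error] 4201#4201: *17573871 FastCGI sent in stderr: '
        '"Unable to open primary script: /var/www/html/vendor/phpunit/phpunit/src/'
        'Util/PHP/eval-stdin.php (No such file or directory)" while reading response '
        'header from upstream, client: 172.68.189.50, server: mywebsite.net')

m = re.search(pattern, line)
print(m.group('addr'))  # 172.68.189.50
```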

Python Orion Context Broker Token problems

I've been developing the following code:
datos = {
    "id": "1",
    "type": "Car",
    "bra": "0",
}
jsonData = json.dumps(datos)
url = 'http://130.456.456.555:1026/v2/entities'
head = {
    "Content-Type": "application/json",
    "Accept": "application/json",
    "X-Auth-Token": token
}
response = requests.post(url, data=jsonData, headers=head)
My problem is that I can't establish a connection between my computer and my FIWARE Lab instance.
The error is:
requests.exceptions.ConnectionError: HTTPConnectionPool(host='130.206.113.177', port=1026): Max retries exceeded with url: /v1/entities (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f02c97c1f90>: Failed to establish a new connection: [Errno 110] Connection timed out',))
This seems to be a network connectivity problem.
Assuming there actually is an Orion process listening on port 1026 at IP 130.206.113.177 (this should be checked, e.g. with a curl localhost:1026/version command executed in the same VM where Orion runs), the most probable causes of Orion connection problems are:
- Something in the Orion host (e.g. a firewall or security group) is blocking the incoming connection
- Something in the client host (e.g. a firewall) is blocking the outgoing connection
- Some other network issue is causing the connection problem
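To narrow down which of these it is, it helps to distinguish "connection refused" (something answered with a reset) from a silent timeout like the Errno 110 in the traceback (typical of a dropping firewall). A small stdlib-only sketch; the host and port below are placeholders:

```python
import socket

def reachable(host, port, timeout=3.0):
    """Quick TCP reachability probe, separating 'refused' from 'filtered'."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "open"
    except ConnectionRefusedError:
        return "refused"    # host reachable, but nothing listening (or RST from a firewall)
    except (socket.timeout, OSError):
        return "filtered"   # silent drop: firewall, security group, or routing issue

# A local port nothing listens on is refused immediately,
# unlike the long Errno 110 timeout seen against the Orion host.
print(reachable("127.0.0.1", 1))
```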

Kafka and zookeeper won't connect in a stable fashion on AWS ECS

I'm trying to run Kafka and ZooKeeper containers on AWS ECS, but I'm struggling to get them into a stable state.
Zookeeper - i'm using the official zookeeper:3.5 image and having ECS service discovery service give it an ip address at zk0.ecs.demo.com
Kafka - i'm using the latest wurstmeister/kafka image and having ECS service discovery service give it an ip address at kafka1.ecs.demo.com
I'm configuring the KAFKA_ZOOKEEPER_CONNECT env var to point at zk0.ecs.demo.com, and KAFKA_ADVERTISED_HOST_NAME is set to kafka1.ecs.demo.com:9092.
Zookeeper starts up and seems to do so in a stable way; Kafka then starts up and, once it resolves zk0.ecs.demo.com, connects, causing some errors in the ZooKeeper logs.
They stay connected until about a minute later, when ZooKeeper dies; Kafka then can't find it at the old IP address and dies too, and this all repeats in a circular fashion.
The networking is wide open with all traffic allowed, so I'm not sure that this is the problem.
I've tried running them on t2.large machines with 2048 MB of memory, so I don't think it's a resource problem.
I've tried
- running ZK as a cluster of 3
- mapping the ports mapped to the hosts and unmapped
- using the ECS EC2 instance ip address instead of private dns
I can't seem to get it to work.
I've collapsed the terraform scripts I'm using into a single file, in case anyone wants to try to recreate it (you will need an ssh key at ~/.ssh/id_rsa.pub): https://gist.github.com/chestercodes/480dda384229adbd75fefea8fdf483d4
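Given that Kafka keeps reconnecting to the ZooKeeper name, one thing worth checking is whether service discovery is actually handing out a stable address for zk0.ecs.demo.com over time. A minimal Python sketch that re-resolves a name repeatedly (shown against localhost as a stand-in for the real service-discovery name):

```python
import socket
import time

def watch(name, rounds=3, delay=0.1):
    """Resolve a hostname several times; a changing answer means the
    registered task IP is moving underneath the clients."""
    seen = []
    for _ in range(rounds):
        seen.append(socket.gethostbyname(name))
        time.sleep(delay)
    return seen

ips = watch("localhost")
print(ips)  # a stable name returns the same address every round
```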
ZK logs
15:53:03 ZooKeeper JMX enabled by default
15:53:04 Using config: /conf/zoo.cfg
53:04,044 [myid:] - INFO [main:QuorumPeerConfig#117] - Reading configuration from: /conf/zoo.cfg
53:04,051 [myid:] - INFO [main:QuorumPeerConfig#317] - clientPort is not set
53:04,051 [myid:] - INFO [main:QuorumPeerConfig#331] - secureClientPort is not set
53:04,058 [myid:] - WARN [main:QuorumPeerConfig#590] - No server failure will be tolerated. You need at least 3 servers.
53:04,072 INFO [main:DatadirCleanupManager#78] - autopurge.snapRetainCount set to 3
53:04,072 INFO [main:DatadirCleanupManager#79] - autopurge.purgeInterval set to 0
53:04,072 INFO [main:DatadirCleanupManager#101] - Purge task is not scheduled.
53:04,073 INFO [main:ManagedUtil#46] - Log4j found with jmx enabled.
53:04,085 INFO [main:QuorumPeerMain#138] - Starting quorum peer
53:04,104 INFO [main:NIOServerCnxnFactory#673] - Configuring NIO connection handler with 10s sessionless connection timeout, 1 selector thread(s), 4 worker threads, and 64 kB direct buffers.
53:04,112 INFO [main:NIOServerCnxnFactory#686] - binding to port /0.0.0.0:2181
53:04,149 INFO [main:Log#186] - Logging initialized #753ms
53:04,204 WARN [main:ContextHandler#1339] - o.e.j.s.ServletContextHandler#8bd1b6a{/,null,null} contextPath ends with /*
53:04,204 WARN [main:ContextHandler#1350] - Empty contextPath
53:04,225 INFO [main:QuorumPeer#1349] - Local sessions disabled
53:04,225 INFO [main:QuorumPeer#1360] - Local session upgrading disabled
53:04,226 INFO [main:QuorumPeer#1327] - tickTime set to 2000
53:04,227 INFO [main:QuorumPeer#1371] - minSessionTimeout set to 4000
53:04,227 INFO [main:QuorumPeer#1382] - maxSessionTimeout set to 40000
53:04,227 INFO [main:QuorumPeer#1397] - initLimit set to 5
53:04,243 INFO [main:FileTxnSnapLog#320] - Snapshotting: 0x0 to /data/version-2/snapshot.0
53:04,246 INFO [main:QuorumPeer#798] - currentEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation
53:04,249 INFO [main:QuorumPeer#813] - acceptedEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation
53:04,256 INFO [main:Server#327] - jetty-9.2.18.v20160721
53:04,289 INFO [main:ContextHandler#744] - Started o.e.j.s.ServletContextHandler#8bd1b6a{/,null,AVAILABLE}
53:04,311 INFO [main:AbstractConnector#266] - Started ServerConnector#14ab98c3{HTTP/1.1}{0.0.0.0:8080}
53:04,312 INFO [main:Server#379] - Started #921ms
53:04,312 INFO [main:JettyAdminServer#112] - Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands
53:04,319 INFO [QuorumPeerListener:QuorumCnxManager$Listener#636] - My election bind port: localhost/127.0.0.1:3888
53:04,323 INFO [QuorumPeer[myid=1](plain=/0.0.0.0:2181)(secure=disabled):QuorumPeer#1055] - LOOKING
53:04,324 INFO [QuorumPeer[myid=1](plain=/0.0.0.0:2181)(secure=disabled):FastLeaderElection#894] - New election. My id = 1, proposed zxid=0x0
53:04,326 INFO [WorkerReceiver[myid=1]:FastLeaderElection#688] - Notification: 2 (message format version), 1 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEPoch), LOOKING (my state)0 (n.config version)
53:04,527 INFO [...)(secure=disabled):MBeanRegistry#128] - Unregister MBean [org.apache.ZooKeeperService:name0=ReplicatedServer_id1,name1=replica.1,name2=LeaderElection]
53:04,528 INFO [...)(secure=disabled):QuorumPeer#1143] - LEADING
53:04,531 INFO [...)(secure=disabled):Leader#63] - TCP NoDelay set to: true
53:04,531 INFO [...)(secure=disabled):Leader#83] - zookeeper.leader.maxConcurrentSnapshots = 10
53:04,532 INFO [...)(secure=disabled):Leader#85] - zookeeper.leader.maxConcurrentSnapshotTimeout = 5
53:04,538 INFO [...)(secure=disabled):Environment#109] - Server environment:zookeeper.version=3.5.3-beta-8ce24f9e675cbefffb8f21a47e06b42864475a60, built on 04/03/2017 16:19 GMT
53:04,539 INFO [...)(secure=disabled):Environment#109] - Server environment:host.name=ip-10-0-1-117.eu-west-1.compute.internal
53:04,539 INFO [...] - Server environment:java.version=1.8.0_151
53:04,539 INFO [...] - Server environment:java.vendor=Oracle Corporation
53:04,539 INFO [...] - Server environment:java.home=/usr/lib/jvm/java-1.8-openjdk/jre
53:04,539 INFO [...] - Server environment:java.class.path=/zookeeper-3.5.3-beta/bin/../build/classes:/zookeeper-3.5.3-beta/bin/../build/lib/*.jar:/zookeeper-3.5.3-beta/bin/../lib/slf4j-log4j12-1.7.5.jar:/zookeeper-3.5.3-beta/bin/../lib/slf4j-api-1.7.5.jar:/zookeeper-3.5.3-beta/bin/../lib/netty-3.10.5.Fin
53:04,539 INFO [...] - Server environment:java.library.path=/usr/lib/jvm/java-1.8-openjdk/jre/lib/amd64/server:/usr/lib/jvm/java-1.8-openjdk/jre/lib/amd64:/usr/lib/jvm/java-1.8-openjdk/jre/../lib/amd64:/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
53:04,539 INFO [...] - Server environment:java.io.tmpdir=/tmp
53:04,539 INFO [...] - Server environment:java.compiler=<NA>
53:04,539 INFO [...] - Server environment:os.name=Linux
53:04,540 INFO [...] - Server environment:os.arch=amd64
53:04,540 INFO [...] - Server environment:os.version=4.9.85-38.58.amzn1.x86_64
53:04,540 INFO [...] - Server environment:user.name=zookeeper
53:04,540 INFO [...] - Server environment:user.home=/home/zookeeper
53:04,540 INFO [...] - Server environment:user.dir=/zookeeper-3.5.3-beta
53:04,540 INFO [...] - Server environment:os.memory.free=53MB
53:04,540 INFO [...] - Server environment:os.memory.max=889MB
53:04,540 INFO [...] - Server environment:os.memory.total=59MB
53:04,541 INFO [...:ZooKeeperServer#907] - minSessionTimeout set to 4000
53:04,541 INFO [...:ZooKeeperServer#916] - maxSessionTimeout set to 40000
53:04,542 INFO [...:ZooKeeperServer#159] - Created server with tickTime 2000 minSessionTimeout 4000 maxSessionTimeout 40000 datadir /datalog/version-2 snapdir /data/version-2
53:04,543 INFO [...:Leader#414] - LEADING - LEADER ELECTION TOOK - 15 MS
53:04,547 INFO [...:FileTxnSnapLog#320] - Snapshotting: 0x0 to /data/version-2/snapshot.0
53:04,564 INFO [...:Leader#1258] - Have quorum of supporters, sids: [ [1],[1] ]; starting up and setting last processed zxid: 0x100000000
53:04,578 INFO [...:CommitProcessor#255] - Configuring CommitProcessor with 2 worker threads.
53:04,587 INFO [...:ContainerManager#64] - Using checkIntervalMs=60000 maxPerMinute=10000
54:21,619 INFO [NIOServerCxnFactory.AcceptThread:/0.0.0.0:2181:NIOServerCnxnFactory$AcceptThread#296] - Accepted socket connection from /10.0.1.91:44016
54:21,627 INFO [NIOWorkerThread-1:ZooKeeperServer#1013] - Client attempting to establish new session at /10.0.1.91:44016
54:21,630 INFO [SyncThread:1:FileTxnLog#204] - Creating new log file: log.100000001
54:21,638 INFO [CommitProcWorkThread-1:ZooKeeperServer#727] - Established session 0x1000001141b0000 with negotiated timeout 6000 for client /10.0.1.91:44016
54:21,708 INFO [ProcessThread(sid:1 cport:-1)::PrepRequestProcessor#880] - Got user-level KeeperException when processing sessionid:0x1000001141b0000 type:create cxid:0x2 zxid:0x100000003 txntype:-1 reqpath:n/a Error Path:/brokers Error:KeeperErrorCode = NoNode for /brokers
54:21,725 INFO [ProcessThread(sid:1 cport:-1)::PrepRequestProcessor#880] - Got user-level KeeperException when processing sessionid:0x1000001141b0000 type:create cxid:0x6 zxid:0x100000007 txntype:-1 reqpath:n/a Error Path:/config Error:KeeperErrorCode = NoNode for /config
54:21,733 INFO [ProcessThread(sid:1 cport:-1)::PrepRequestProcessor#880] - Got user-level KeeperException when processing sessionid:0x1000001141b0000 type:create cxid:0x9 zxid:0x10000000a txntype:-1 reqpath:n/a Error Path:/admin Error:KeeperErrorCode = NoNode for /admin
54:22,012 INFO [ProcessThread(sid:1 cport:-1)::PrepRequestProcessor#880] - Got user-level KeeperException when processing sessionid:0x1000001141b0000 type:create cxid:0x15 zxid:0x100000015 txntype:-1 reqpath:n/a Error Path:/cluster Error:KeeperErrorCode = NoNode for /cluster
54:22,818 INFO [ProcessThread(sid:1 cport:-1)::PrepRequestProcessor#880] - Got user-level KeeperException when processing sessionid:0x1000001141b0000 type:setData cxid:0x22 zxid:0x10000001b txntype:-1 reqpath:n/a Error Path:/controller_epoch Error:KeeperErrorCode = NoNode for /controller_epoch
54:23,031 INFO [ProcessThread(sid:1 cport:-1)::PrepRequestProcessor#880] - Got user-level KeeperException when processing sessionid:0x1000001141b0000 type:delete cxid:0x32 zxid:0x10000001e txntype:-1 reqpath:n/a Error Path:/admin/preferred_replica_election Error:KeeperErrorCode = NoNode for /admin/preferred_replica_election
Kafka logs
15:54:22 log.roll.hours = 168
15:54:22 log.roll.jitter.hours = 0
15:54:22 log.roll.jitter.ms = null
15:54:22 log.roll.ms = null
15:54:22 log.segment.bytes = 1073741824
15:54:22 log.segment.delete.delay.ms = 60000
15:54:22 max.connections.per.ip = 2147483647
15:54:22 max.connections.per.ip.overrides =
15:54:22 max.incremental.fetch.session.cache.slots = 1000
15:54:22 message.max.bytes = 1000012
15:54:22 metric.reporters = []
15:54:22 metrics.num.samples = 2
15:54:22 metrics.recording.level = INFO
15:54:22 metrics.sample.window.ms = 30000
15:54:22 min.insync.replicas = 1
15:54:22 num.io.threads = 8
15:54:22 num.network.threads = 3
15:54:22 num.partitions = 1
15:54:22 num.recovery.threads.per.data.dir = 1
15:54:22 num.replica.alter.log.dirs.threads = null
15:54:22 num.replica.fetchers = 1
15:54:22 offset.metadata.max.bytes = 4096
15:54:22 offsets.commit.required.acks = -1
15:54:22 offsets.commit.timeout.ms = 5000
15:54:22 offsets.load.buffer.size = 5242880
15:54:22 offsets.retention.check.interval.ms = 600000
15:54:22 offsets.retention.minutes = 1440
15:54:22 offsets.topic.compression.codec = 0
15:54:22 offsets.topic.num.partitions = 50
15:54:22 offsets.topic.replication.factor = 1
15:54:22 offsets.topic.segment.bytes = 104857600
15:54:22 password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
15:54:22 password.encoder.iterations = 4096
15:54:22 password.encoder.key.length = 128
15:54:22 password.encoder.keyfactory.algorithm = null
15:54:22 password.encoder.old.secret = null
15:54:22 password.encoder.secret = null
15:54:22 port = 9092
15:54:22 principal.builder.class = null
15:54:22 producer.purgatory.purge.interval.requests = 1000
15:54:22 queued.max.request.bytes = -1
15:54:22 queued.max.requests = 500
15:54:22 quota.consumer.default = 9223372036854775807
15:54:22 quota.producer.default = 9223372036854775807
15:54:22 quota.window.num = 11
15:54:22 quota.window.size.seconds = 1
15:54:22 replica.fetch.backoff.ms = 1000
15:54:22 replica.fetch.max.bytes = 1048576
15:54:22 replica.fetch.min.bytes = 1
15:54:22 replica.fetch.response.max.bytes = 10485760
15:54:22 replica.fetch.wait.max.ms = 500
15:54:22 replica.high.watermark.checkpoint.interval.ms = 5000
15:54:22 replica.lag.time.max.ms = 10000
15:54:22 replica.socket.receive.buffer.bytes = 65536
15:54:22 replica.socket.timeout.ms = 30000
15:54:22 replication.quota.window.num = 11
15:54:22 replication.quota.window.size.seconds = 1
15:54:22 request.timeout.ms = 30000
15:54:22 reserved.broker.max.id = 1000
15:54:22 sasl.enabled.mechanisms = [GSSAPI]
15:54:22 sasl.jaas.config = null
15:54:22 sasl.kerberos.kinit.cmd = /usr/bin/kinit
15:54:22 sasl.kerberos.min.time.before.relogin = 60000
15:54:22 sasl.kerberos.principal.to.local.rules = [DEFAULT]
15:54:22 sasl.kerberos.service.name = null
15:54:22 sasl.kerberos.ticket.renew.jitter = 0.05
15:54:22 sasl.kerberos.ticket.renew.window.factor = 0.8
15:54:22 sasl.mechanism.inter.broker.protocol = GSSAPI
15:54:22 security.inter.broker.protocol = PLAINTEXT
15:54:22 socket.receive.buffer.bytes = 102400
15:54:22 socket.request.max.bytes = 104857600
15:54:22 socket.send.buffer.bytes = 102400
15:54:22 ssl.cipher.suites = []
15:54:22 ssl.client.auth = none
15:54:22 ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
15:54:22 ssl.endpoint.identification.algorithm = null
15:54:22 ssl.key.password = null
15:54:22 ssl.keymanager.algorithm = SunX509
15:54:22 ssl.keystore.location = null
15:54:22 ssl.keystore.password = null
15:54:22 ssl.keystore.type = JKS
15:54:22 ssl.protocol = TLS
15:54:22 ssl.provider = null
15:54:22 ssl.secure.random.implementation = null
15:54:22 ssl.trustmanager.algorithm = PKIX
15:54:22 ssl.truststore.location = null
15:54:22 ssl.truststore.password = null
15:54:22 ssl.truststore.type = JKS
15:54:22 transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000
15:54:22 transaction.max.timeout.ms = 900000
15:54:22 transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
15:54:22 transaction.state.log.load.buffer.size = 5242880
15:54:22 transaction.state.log.min.isr = 1
15:54:22 transaction.state.log.num.partitions = 50
15:54:22 transaction.state.log.replication.factor = 1
15:54:22 transaction.state.log.segment.bytes = 104857600
15:54:22 transactional.id.expiration.ms = 604800000
15:54:22 unclean.leader.election.enable = false
15:54:22 zookeeper.connect = zk0.ecs.demo.com:2181
15:54:22 zookeeper.connection.timeout.ms = 6000
15:54:22 zookeeper.max.in.flight.requests = 10
15:54:22 zookeeper.session.timeout.ms = 6000
15:54:22 zookeeper.set.acl = false
15:54:22 zookeeper.sync.time.ms = 2000
15:54:22 (kafka.server.KafkaConfig)
15:54:22 [2018-05-04 15:54:22,162] INFO [ThrottledRequestReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledRequestReaper)
15:54:22 [2018-05-04 15:54:22,162] INFO [ThrottledRequestReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledRequestReaper)
15:54:22 [2018-05-04 15:54:22,163] INFO [ThrottledRequestReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledRequestReaper)
15:54:22 [2018-05-04 15:54:22,198] INFO Log directory '/kafka/kafka-logs-ip-10-0-1-91.eu-west-1.compute.internal' not found, creating it. (kafka.log.LogManager)
15:54:22 [2018-05-04 15:54:22,209] INFO Loading logs. (kafka.log.LogManager)
15:54:22 [2018-05-04 15:54:22,221] INFO Logs loading complete in 12 ms. (kafka.log.LogManager)
15:54:22 [2018-05-04 15:54:22,234] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
15:54:22 [2018-05-04 15:54:22,239] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
15:54:22 [2018-05-04 15:54:22,611] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.Acceptor)
15:54:22 [2018-05-04 15:54:22,642] INFO [SocketServer brokerId=1001] Started 1 acceptor threads (kafka.network.SocketServer)
15:54:22 [2018-05-04 15:54:22,669] INFO [ExpirationReaper-1001-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
15:54:22 [2018-05-04 15:54:22,674] INFO [ExpirationReaper-1001-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
15:54:22 [2018-05-04 15:54:22,674] INFO [ExpirationReaper-1001-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
15:54:22 [2018-05-04 15:54:22,692] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
15:54:22 [2018-05-04 15:54:22,713] INFO Creating /brokers/ids/1001 (is it secure? false) (kafka.zk.KafkaZkClient)
15:54:22 [2018-05-04 15:54:22,717] INFO Result of znode creation at /brokers/ids/1001 is: OK (kafka.zk.KafkaZkClient)
15:54:22 [2018-05-04 15:54:22,718] INFO Registered broker 1001 at path /brokers/ids/1001 with addresses: ArrayBuffer(EndPoint(kafka1.ecs.demo.com:9092,9092,ListenerName(PLAINTEXT),PLAINTEXT)) (kafka.zk.KafkaZkClient)
15:54:22 [2018-05-04 15:54:22,720] WARN No meta.properties file under dir /kafka/kafka-logs-ip-10-0-1-91.eu-west-1.compute.internal/meta.properties (kafka.server.BrokerMetadataCheckpoint)
15:54:22 [2018-05-04 15:54:22,789] INFO [ExpirationReaper-1001-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
15:54:22 [2018-05-04 15:54:22,789] INFO [ExpirationReaper-1001-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
15:54:22 [2018-05-04 15:54:22,801] INFO Creating /controller (is it secure? false) (kafka.zk.KafkaZkClient)
15:54:22 [2018-05-04 15:54:22,801] INFO [ExpirationReaper-1001-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
15:54:22 [2018-05-04 15:54:22,813] INFO Result of znode creation at /controller is: OK (kafka.zk.KafkaZkClient)
15:54:22 [2018-05-04 15:54:22,828] INFO [GroupCoordinator 1001]: Starting up. (kafka.coordinator.group.GroupCoordinator)
15:54:22 [2018-05-04 15:54:22,829] INFO [GroupCoordinator 1001]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
15:54:22 [2018-05-04 15:54:22,838] INFO [GroupMetadataManager brokerId=1001] Removed 0 expired offsets in 8 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
15:54:22 [2018-05-04 15:54:22,859] INFO [ProducerId Manager 1001]: Acquired new producerId block (brokerId:1001,blockStartProducerId:0,blockEndProducerId:999) by writing to Zk with path version 1 (kafka.coordinator.transaction.ProducerIdManager)
15:54:22 [2018-05-04 15:54:22,929] INFO [TransactionCoordinator id=1001] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
15:54:22 [2018-05-04 15:54:22,936] INFO [TransactionCoordinator id=1001] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
15:54:22 [2018-05-04 15:54:22,937] INFO [Transaction Marker Channel Manager 1001]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
15:54:23 [2018-05-04 15:54:23,047] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
15:54:23 [2018-05-04 15:54:23,070] INFO Kafka version : 1.1.0 (org.apache.kafka.common.utils.AppInfoParser)
15:54:23 [2018-05-04 15:54:23,070] INFO Kafka commitId : fdcf75ea326b8e07 (org.apache.kafka.common.utils.AppInfoParser)
15:54:23 [2018-05-04 15:54:23,071] INFO [KafkaServer id=1001] started (kafka.server.KafkaServer)
15:56:37 [2018-05-04 15:56:37,770] INFO Unable to read additional data from server sessionid 0x1000001141b0000, likely server has closed socket, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
[2018-05-04 15:56:39,772] INFO Opening socket connection to server ip-10-0-1-117.eu-west-1.compute.internal/10.0.1.117:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2018-05-04 15:56:40,798] WARN Session 0x1000001141b0000 for server null, unexpected error, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
    at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1141)
[2018-05-04 15:56:42,475] INFO Opening socket connection to server ip-10-0-1-117.eu-west-1.compute.internal/10.0.1.117:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2018-05-04 15:56:48,482] WARN Client session timed out, have not heard from server in 7583ms for sessionid 0x1000001141b0000 (org.apache.zookeeper.ClientCnxn)
[2018-05-04 15:56:48,482] INFO Client session timed out, have not heard from server in 7583ms for sessionid 0x1000001141b0000, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
[2018-05-04 15:56:50,254] INFO Opening socket connection to server ip-10-0-1-117.eu-west-1.compute.internal/10.0.1.117:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2018-05-04 15:56:56,257] WARN Client session timed out, have not heard from server in 7674ms for sessionid 0x1000001141b0000 (org.apache.zookeeper.ClientCnxn)
[2018-05-04 15:56:56,258] INFO Client session timed out, have not heard from server in 7674ms for sessionid 0x1000001141b0000, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
[2018-05-04 15:56:57,591] INFO Opening socket connection to server ip-10-0-1-117.eu-west-1.compute.internal/10.0.1.117:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2018-05-04 15:57:03,596] WARN Client session timed out, have not heard from server in 7238ms for sessionid 0x1000001141b0000 (org.apache.zookeeper.ClientCnxn)
[2018-05-04 15:57:03,596] INFO Client session timed out, have not heard from server in 7238ms for sessionid 0x1000001141b0000, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
[2018-05-04 15:57:04,890] INFO Opening socket connection to server ip-10-0-1-117.eu-west-1.compute.internal/10.0.1.117:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2018-05-04 15:57:10,894] WARN Client session timed out, have not heard from server in 7198ms for sessionid 0x1000001141b0000 (org.apache.zookeeper.ClientCnxn)
[2018-05-04 15:57:10,894] INFO Client session timed out, have not heard from server in 7198ms for sessionid 0x1000001141b0000, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
[2018-05-04 15:57:12,388] INFO Opening socket connection to server ip-10-0-1-117.eu-west-1.compute.internal/10.0.1.117:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2018-05-04 15:57:18,392] WARN Client session timed out, have not heard from server in 7398ms for sessionid 0x1000001141b0000 (org.apache.zookeeper.ClientCnxn)
[2018-05-04 15:57:18,392] INFO Client session timed out, have not heard from server in 7398ms for sessionid 0x1000001141b0000, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
[2018-05-04 15:57:20,162] INFO Opening socket connection to server ip-10-0-1-117.eu-west-1.compute.internal/10.0.1.117:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2018-05-04 15:57:24,254] WARN Session 0x1000001141b0000 for server null, unexpected error, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
java.net.NoRouteToHostException: Host is unreachable
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
    at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1141)
[2018-05-04 15:57:25,482] INFO Opening socket connection to server ip-10-0-1-117.eu-west-1.compute.internal/10.0.1.117:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2018-05-04 15:57:50,783] INFO Opening socket connection to server ip-10-0-1-117.eu-west-1.compute.internal/10.0.1.117:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2018-05-04 15:57:51,902] WARN Session 0x1000001141b0000 for server null, unexpected error, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
java.net.NoRouteToHostException: Host is unreachable
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
    at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1141)
[2018-05-04 15:57:52,203] INFO Terminating process due to signal SIGTERM (kafka.Kafka$)
[2018-05-04 15:57:52,205] INFO [KafkaServer id=1001] shutting down (kafka.server.KafkaServer)
[2018-05-04 15:57:52,207] INFO [KafkaServer id=1001] Starting controlled shutdown (kafka.server.KafkaServer)
[2018-05-04 15:57:53,962] INFO Opening socket connection to server ip-10-0-1-117.eu-west-1.compute.internal/10.0.1.117:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2018-05-04 15:57:54,974] WARN Session 0x1000001141b0000 for server null, unexpected error, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
java.net.NoRouteToHostException: Host is unreachable
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
    at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1141)
[2018-05-04 15:57:55,076] INFO [ZooKeeperClient] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
[2018-05-04 15:57:56,978] INFO Opening socket connection to server ip-10-0-1-117.eu-west-1.compute.internal/10.0.1.117:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2018-05-04 15:57:58,046] WARN Session 0x1000001141b0000 for server null, unexpected error, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)

Filter Libcurl Debug Message

Is it possible to filter the text passed to the debug callback? I want to display only the commands at one time and the full output at another (for example, I want to filter out lines like Adding handle: send: 0). Right now I get a flood of messages. I would like something clean like FileZilla's short message log.
Here is my code for the debug callback, with a sample of its output below it. I have verbose enabled.
int Uploader::DebugDataCallBack(CURL* handle, curl_infotype infotype, char* msg, size_t size, void* f)
{
    int level = 1; // debug info: 0 - none, 1 - necessary, 2 - all. TODO: should come from config
    switch (level) // error level
    {
        case 0:
            break; // do nothing
        case 1:
            // only the necessary text lines; skip headers and data
            if (infotype == CURLINFO_TEXT)
            {
                // msg is not null-terminated, so pass the length explicitly
                static_cast<Uploader*>(f)->SendMessage(wxString(msg, size));
            }
            break; // without this break, case 1 falls through and sends everything twice
        default:
            // full debug messages
            static_cast<Uploader*>(f)->SendMessage(wxString(msg, size));
            break;
    }
    return 0; // must return 0
}
----------Thu Dec 26 14:14:40 2013----------
[14:14:40] STATE: INIT => CONNECT handle 0x7fffd0001a08; line 998 (connection #-5000)
[14:14:40] Rebuilt URL to: ftp://ftp.mysite.com/
[14:14:40] About to connect() to ftp.mysite.com port 21 (#0)
[14:14:40] Trying 31.170.162.203...
[14:14:40] Adding handle: conn: 0x7fffd0013b48
[14:14:40] Adding handle: send: 0
[14:14:40] Adding handle: recv: 0
[14:14:40] Curl_addHandleToPipeline: length: 1
[14:14:40] 0x7fffd0001a08 is at send pipe head!
[14:14:40] - Conn 0 (0x7fffd0013b48) send_pipe: 1, recv_pipe: 0
[14:14:40] STATE: CONNECT => WAITCONNECT handle 0x7fffd0001a08; line 1045 (connection #0)
[14:14:40] Connected to ftp.mysite.com (31.170.162.203) port 21 (#0)
[14:14:40] FTP 0x7fffd0013fe0 (line 3174) state change from STOP to WAIT220
[14:14:40] STATE: WAITCONNECT => PROTOCONNECT handle 0x7fffd0001a08; line 1158 (connection #0)
First enable verbose output by setting CURLOPT_VERBOSE to 1L with curl_easy_setopt. Then set the debug callback, CURLOPT_DEBUGFUNCTION, to receive the debug messages, and use the infotype codes to filter out what you want. If you want the command/response exchange, as I did, just take the messages of type header in/out. Here is a piece of code to show it:
switch (infotype)
{
    case CURLINFO_HEADER_OUT:
    {
        // the debug data is not null-terminated; pass the length explicitly
        wxString message = _("COMMAND: ") + wxString(msg, size);
        SendMessage(message);
        break;
    }
    case CURLINFO_HEADER_IN:
    {
        wxString message = _("RESPONSE: ") + wxString(msg, size);
        SendMessage(message);
        break;
    }
}