I am using QuestDB and posting records like this:
influxDB = InfluxDBFactory.connect("http://localhost:9000", username, password);
influxDB.setDatabase(database);
influxDB.enableBatch(BatchOptions.DEFAULTS);
influxDB.write(Point.measurement(TABLE_NAME)
        .addField("ID", ir.getid())
        .addField(....)
        .build());
We do not send a timestamp, so QuestDB inserts the server time, and we run the server with the one-worker-thread option.
Each ID is unique, but when I query QuestDB I sometimes see 5 records with the same ID, as if QuestDB were creating duplicates.
What could be wrong here?
The duplicates all have the same values, except that the timestamps are roughly 10 s apart from one another.
QuestDB does not support an HTTP connection here; it supports TCP instead.
What probably happens is that the Influx library opens an HTTP connection to the port: this opens an underlying TCP socket and sends HTTP headers, QuestDB ignores the headers as invalid messages and parses the valid Influx line protocol lines. The Influx library then never receives a response, so it re-sends the message after some configured interval, and again, and again. That is where your duplicates come from.
Switch to a TCP connection and do not use the Influx library to send the messages; use something like https://questdb.io/docs/develop/insert-data/ or Telegraf, or use UDP.
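For illustration, a minimal sketch (in C++ rather than the Java client from the question) of sending one Influx line protocol (ILP) row to QuestDB over a raw TCP socket; port 9009 is assumed to be the default ILP/TCP port, and the table and field names are placeholders for the ones in your Point builder:

#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>
#include <string>

// Sends a single ILP line, e.g. "my_table ID=42i\n".
// Leaving the timestamp out, as in the question, lets the server assign it.
bool send_ilp_line(const std::string &line)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return false;

    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port   = htons(9009);                 // assumed default ILP/TCP port
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

    bool ok = connect(fd, reinterpret_cast<sockaddr *>(&addr), sizeof(addr)) == 0
              && send(fd, line.c_str(), line.size(), 0) == (ssize_t)line.size();
    close(fd);
    return ok;   // ILP over TCP is fire-and-forget: no response is expected
}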
I have two servers running on one machine (one for database connections and one or more for client connections). The job of the database connection server is to fetch data from a MySQL database and hand it to the other servers on request.
Right now the data transfer between the two servers happens via JSON (json_spirit) (I don't know why I designed it this way).
I am reaching a stage where the data loaded from the MySQL DB is huge at server startup and again every minute, with thousands of smaller queries in between.
I can see the impact JSON is having, since I have to convert the MYSQL_RES to JSON, transmit it from the DB connection server to the client connection server, and then parse the JSON back into a data set.
I am looking to serialize my data, or do something other than parsing, since the overhead is slowing down the client connection server while it waits on the response from the DB connection server.
What would you suggest for serializing the MYSQL_RES struct?
I have read about Protobuf, FlatBuffers and serialization in general, but I simply cannot make a decision.
In our client/server application, we use the TLS/TCP protocol for messaging. A message shift occurs between the applications after a while (messages are sent and received in the correct order at the beginning), i.e. the client sends the 1000th message to the server and receives the response to the 999th message. The suspect is the client side, where we implement the TCP and TLS layers independently, i.e. we do not bind the TCP socket to the SSL object (via SSL_set_fd()) but use BIOs instead. When the client app gets the response from the server (we are pretty sure the message is processed correctly in the server, the client TCP layer receives the message correctly, etc.), the message is forwarded to the SSL layer. The client app first writes the message to a BIO:
BIO_write (readBio, data, length);
Then, in another function of the SSL layer, the message is read using SSL_read():
res = SSL_read (ssl, buffer, length);
The read operation succeeds, but my goal is to check whether there are more records to be read from the BIO. I considered using SSL_pending(), but it seems that this should be used to check whether there are still bytes left in the SAME record. If our suspicions are correct, I would like to check whether there is another record in the BIO so that all messages are processed without any delay. Can you help me with this? Thanks in advance.
SSL_pending() tells you whether there are data from the current decrypted SSL record that have not yet been read by SSL_read(). BIO_pending() can be used to find out whether there are data already in the BIO that have not been processed by the SSL layer. To find out whether there are unread data at the socket level, use recv() with MSG_PEEK.
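A minimal sketch of how those checks can be combined, assuming the memory-BIO setup from the question (readBio is the SSL object's read BIO and is filled from the TCP layer with BIO_write()); the function name is made up and error handling is simplified:

#include <openssl/ssl.h>
#include <openssl/bio.h>

// Returns the number of plaintext bytes placed in buffer, or -1 on error.
static int read_all_buffered(SSL *ssl, BIO *readBio, char *buffer, int length)
{
    int total = 0;
    do {
        int res = SSL_read(ssl, buffer + total, length - total);
        if (res > 0) {
            total += res;
        } else if (SSL_get_error(ssl, res) == SSL_ERROR_WANT_READ) {
            // The BIO holds at most a partial record; push more TCP data
            // in with BIO_write() and call this function again later.
            break;
        } else {
            return -1;  // genuine error
        }
        // SSL_pending(): plaintext of the current record not yet returned.
        // BIO_pending(): encrypted bytes (possibly further records) still
        // sitting in readBio, not yet consumed by the SSL layer.
    } while (total < length &&
             (SSL_pending(ssl) > 0 || BIO_pending(readBio) > 0));
    return total;
}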
I am doing a TCP client-server simulation. In the simulation I have created 2 clients and 2 servers, and I have programmed it so that read requests go to server 1 and write requests go to server 2. Thus, the client always renews its socket and makes a new connection to the servers.
However, after the client has made 66561 connections to the server, instead of sending request packets it just sends some empty ACK packets.
I expected both clients to be able to send up to millions of requests, but currently both clients are only able to send up to 13k requests. Can anyone give me tips or advice?
Nagle's algorithm.
Solutions:
Do not use small packets in your application protocol.
Use the socket option TCP_NODELAY on both the client and the server side (see the sketch below).
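A minimal sketch of setting TCP_NODELAY, assuming POSIX sockets; apply it to the socket on both ends right after it is created:

#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <cstdio>

// Disables Nagle's algorithm on an existing TCP socket.
bool disable_nagle(int sock_fd)
{
    int flag = 1;
    if (setsockopt(sock_fd, IPPROTO_TCP, TCP_NODELAY,
                   &flag, sizeof(flag)) != 0) {
        perror("setsockopt(TCP_NODELAY)");
        return false;
    }
    return true;
}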
It sounds like most of the previously created connections are still holding resources (they have not been released by the system yet). From the information you give,
However, after the client has made 66561 connections to the server, instead of sending request packets it just sends some empty ACK packets.
it looks like only about 1000+ connections have been released, probably because their 2MSL (TIME_WAIT) timer has expired. If this is the case, I suggest you explicitly release a connection before you create a new one.
Copying and pasting your client/server code would help the analysis.
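For illustration, a sketch of explicitly releasing the old socket before connecting again. The SO_LINGER part is an optional, aggressive way to avoid accumulating TIME_WAIT sockets (it makes close() send an RST), so whether it is acceptable depends on your protocol; the function name and address handling are placeholders:

#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>

// Closes old_fd (if valid) and opens a fresh connection to server_addr.
int reconnect(int old_fd, const sockaddr_in &server_addr)
{
    if (old_fd >= 0) {
        // Optional: skip TIME_WAIT on this side by sending an RST on close.
        linger lin{};
        lin.l_onoff  = 1;
        lin.l_linger = 0;
        setsockopt(old_fd, SOL_SOCKET, SO_LINGER, &lin, sizeof(lin));

        shutdown(old_fd, SHUT_RDWR);   // explicitly release the old connection
        close(old_fd);
    }

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;
    if (connect(fd, reinterpret_cast<const sockaddr *>(&server_addr),
                sizeof(server_addr)) != 0) {
        close(fd);
        return -1;
    }
    return fd;
}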
I am designing a Data Distributor (say, one generating random numbers) that will serve multiple clients.
A client C first sends the list of numbers it is interested in to the DD over TCP and then listens for data on UDP. After some time (a few minutes) the client may renew its subscription list by sending more numbers to the DD.
I can design this in 2 ways.
FIRST:
New_Client_Connected_Thread(int sock_fd)
{
--Get Subscription
--Add to UDP Publisher List
--close(sock_fd)
}
Every time a client wants to subscribe to a new set of data, it will establish a new TCP connection.
SECOND:
New_Client_Connected_Thread(int sock_fd)
{
while(true)
{
--wait for new subscription list
--Get subscription
--Add to UDP Publisher List.
}
}
Here only 1 TCP connection per client would be required.
However, if the client does not send a new request, the client thread would be waiting unnecessarily for a long time.
Given that my Data Distributor would be serving lots of clients, which of these seems to be the more efficient way?
Libevent (or libev), which provides an event-driven loop, is probably more appropriate for the TCP portion of this.
You can avoid the threading and have a single loop for the TCP portion that adds your clients to the publisher list. Libevent is very efficient at managing lots and lots of connections and socket teardowns per second, and is used by things like Tor (the onion router).
It seems like the TCP connection in your application is more of a "control plane" connection, so how often your clients need to "control" your server is probably what should decide whether to leave the socket open or close it after each control exchange. Keep in mind that keeping thousands of TCP connections open permanently costs kernel resources on the host, but on the other hand, opening and closing connections all the time adds some latency due to connection setup.
See https://github.com/libevent/libevent/blob/master/sample/hello-world.c for an example of a libevent TCP server.
Since you're coding in C++, you will probably be interested in the http://llucax.com.ar/proj/eventxx/ wrapper for libevent.
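For illustration, a stripped-down sketch of that single-loop TCP control-plane server using libevent's listener and bufferevent APIs, loosely following the hello-world sample linked above; the port and the subscription handling are placeholders for your own logic:

#include <event2/event.h>
#include <event2/listener.h>
#include <event2/bufferevent.h>
#include <event2/buffer.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <cstdio>

static void read_cb(struct bufferevent *bev, void *)
{
    char line[256];
    int n = evbuffer_remove(bufferevent_get_input(bev), line, sizeof(line) - 1);
    if (n <= 0)
        return;
    line[n] = '\0';
    // Placeholder: parse the subscription list in `line` and add the
    // client to the UDP publisher list here.
    printf("subscription: %s\n", line);
}

static void event_cb(struct bufferevent *bev, short events, void *)
{
    if (events & (BEV_EVENT_EOF | BEV_EVENT_ERROR))
        bufferevent_free(bev);        // client went away: tear the socket down
}

static void accept_cb(struct evconnlistener *listener, evutil_socket_t fd,
                      struct sockaddr *, int, void *)
{
    struct event_base *base = evconnlistener_get_base(listener);
    struct bufferevent *bev =
        bufferevent_socket_new(base, fd, BEV_OPT_CLOSE_ON_FREE);
    bufferevent_setcb(bev, read_cb, nullptr, event_cb, nullptr);
    bufferevent_enable(bev, EV_READ);
}

int main()
{
    struct event_base *base = event_base_new();

    sockaddr_in sin{};
    sin.sin_family = AF_INET;                     // 0.0.0.0
    sin.sin_port = htons(5555);                   // example port

    struct evconnlistener *listener = evconnlistener_new_bind(
        base, accept_cb, nullptr,
        LEV_OPT_REUSEABLE | LEV_OPT_CLOSE_ON_FREE, -1,
        reinterpret_cast<sockaddr *>(&sin), sizeof(sin));
    if (!listener)
        return 1;

    event_base_dispatch(base);                    // the single event loop
    return 0;
}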
Hey gang. I have just written a client and server in C++ using sys/socket. I need to handle a situation where the client is still active but the server is down. One suggested way to do this is to use a heartbeat to periodically assert connectivity, and if there is none, to try to reconnect every X seconds for Y period of time and then to time out.
Is this "heartbeat" the best way to check for connectivity?
The socket I am using might have data on it; is there a way to check that there is a connection without messing with the buffer?
If you're using TCP sockets over an IP network, you can use the TCP protocol's keepalive feature, which will periodically check the socket to make sure the other end is still there. (This also has the advantage of keeping the forwarding record for your socket valid in any NAT routers between your client and your server.)
Here's a TCP keepalive overview which outlines some of the reasons you might want to use TCP keepalive; this Linux-specific HOWTO describes how to configure your socket to use TCP keepalive at runtime.
It looks like you can enable TCP keepalive in Windows sockets by setting SIO_KEEPALIVE_VALS using the WSAIoctl() function.
If you're using UDP sockets over IP, you'll need to build your own heartbeat into your protocol.
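A minimal sketch of enabling TCP keepalive on a connected socket. SO_KEEPALIVE itself is portable, while the TCP_KEEPIDLE/TCP_KEEPINTVL/TCP_KEEPCNT tuning options are Linux-specific (the HOWTO mentioned above covers them), and the values below are only examples:

#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <cstdio>

bool enable_keepalive(int sock_fd)
{
    int on = 1;
    if (setsockopt(sock_fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on)) != 0) {
        perror("setsockopt(SO_KEEPALIVE)");
        return false;
    }
#ifdef TCP_KEEPIDLE
    // Linux-only tuning: start probing after 60 s idle, probe every 10 s,
    // give up after 5 unanswered probes.
    int idle = 60, interval = 10, count = 5;
    setsockopt(sock_fd, IPPROTO_TCP, TCP_KEEPIDLE,  &idle,     sizeof(idle));
    setsockopt(sock_fd, IPPROTO_TCP, TCP_KEEPINTVL, &interval, sizeof(interval));
    setsockopt(sock_fd, IPPROTO_TCP, TCP_KEEPCNT,   &count,    sizeof(count));
#endif
    return true;
}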
Yes, this heartbeat is the best way. You'll have to build it into the protocol the server and client use to communicate.
The simplest solution is to have the client send data periodically and the server close the connection if it hasn't received any data from the client in a particular period of time. This works perfectly for query/response protocols where the client sends queries and the server sends responses.
For example, you can use the following scheme:
The server responds to every query. If the server does not receive a query for two minutes, it closes the connection.
The client sends queries and keeps the connection open after each one.
If the client has not sent a query for one minute, it sends an "are you there" query. The server responds with "yes I am". This resets the server's two-minute timer and confirms to the client that the connection is still available.
It may be simpler to just have the client close the connection if it hasn't needed to send a query for the past minute. Since all operations are initiated by the client, it can always just open a new connection if it needs to perform a new operation. That reduces it to just this:
The server closes the connection if it hasn't received a query in two minutes.
The client closes the connection if it hasn't needed to send a query in one minute.
However, this doesn't assure the client that the server is present and ready to accept a query at all times. If you need this capability, you will have to implement an "are you there" / "yes I am" query/response into your protocol.
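For illustration, a sketch of the client side of that "are you there" / "yes I am" exchange, assuming a blocking POSIX socket; the wire format, the 60-second idle threshold and the 5-second reply timeout are placeholder choices, and real message framing is omitted:

#include <sys/socket.h>
#include <poll.h>
#include <ctime>
#include <cstring>

// Returns false if the connection should be considered dead.
bool heartbeat_if_idle(int sock_fd, time_t &last_query_time)
{
    const time_t now = time(nullptr);
    if (now - last_query_time < 60)
        return true;                              // recently active, nothing to do

    const char ping[] = "are you there\n";        // hypothetical wire format
    if (send(sock_fd, ping, sizeof(ping) - 1, 0) < 0)
        return false;                             // connection is gone

    pollfd pfd{};
    pfd.fd = sock_fd;
    pfd.events = POLLIN;
    if (poll(&pfd, 1, 5000) <= 0)                 // wait up to 5 s for the reply
        return false;                             // timed out or error

    char reply[64];
    ssize_t n = recv(sock_fd, reply, sizeof(reply) - 1, 0);
    if (n <= 0)
        return false;                             // server closed or error
    reply[n] = '\0';

    last_query_time = now;                        // the server answered
    return strncmp(reply, "yes I am", 8) == 0;
}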
If the other side has gone away (i.e. the process has died, the machine has gone down, etc.), attempting to receive data from the socket should result in an error. However, if the other side is merely hung, the socket will remain open. In this case, having a heartbeat is useful. Make sure that whatever protocol you are using (on top of TCP) supports some kind of "do-nothing" request or packet - each side can use this to keep track of the last time it received something from the other side, and can then close the connection if too much time elapses between packets.
Note that this is assuming you're using TCP/IP. If you're using UDP, then that's a whole other kettle of fish, since it's connectionless.
Ok, I don't know what your program does or anything, so maybe this isn't feasible, but I suggest that you avoid trying to always keep the socket open. It should only be open when you are using it, and should be closed when you are not.
If you are between reads and writes waiting on user input, close the socket. Design your client/server protocol (assuming you're doing this by hand and not using any standard protocols like HTTP and/or SOAP) to handle this.
Sockets will error if the connection is dropped; write your program such that you don't lose any information in the case of such an error during a write to the socket and that you don't gain any information in the case of an error during a read from the socket. Transactionality and atomicity should be rolled into your client/server protocol (again, assuming you're designing it yourself).
Maybe this will help you: the TCP Keepalive HOWTO.
Or this: SO_SOCKET.