I have a project in which I communicate with DNS servers; for example, I used Google's DNS server. After connecting to the server and sending a message, everything works well and the server returns an answer. But by the time I send the second query, the server has already closed the connection on its own (it sends a FIN), so I end up sending a message to an invalid fd. Is there a known solution to this problem?
From the DNS over TCP RFC:
The server should assume that the client will initiate connection
closing, and should delay closing its end of the connection until
all outstanding client requests have been satisfied.
This means that if you send multiple requests simultaneously, the connection will stay open until all the requests have been replied to. But once there are no more pending requests, the connection can be closed.
If you want to make multiple requests, you either need to send them all at once, or open a new connection for each request.
I'm working on a blockchain project, but I have a problem implementing a peer-to-peer network between nodes. I found a Udemy course in which Redis pub/sub was used for the peer-to-peer network, but isn't that only available on a local network? Another article says there are some main nodes that run 24/7, so the other nodes first make a connection with them; but isn't that a kind of server-based network?
My question is: how can I actually implement a peer-to-peer network in which many nodes around the world can communicate with each other without any central server?
The usual implementation of a P2P connection is to have one predefined port (for example, in the case of Bitcoin Core it's 8333), and the applications periodically broadcast their messages on this particular port.
It's also usual to have in your app a preset list of nodes that are likely to be online 24/7, so that the app can listen to their messages right from the startup and doesn't have to wait for other nodes to broadcast their presence.
The app can keep a list of currently active nodes (for example the ping period is 60 seconds, so any node that has pinged within the last 60 seconds is considered active) in case it needs to communicate with the other nodes directly.
But most communication is usually done via broadcasting and listening to messages on the predefined port.
I have a client/server system implemented with Boost.Asio in C++, in which a client sends a request to the server. The server then registers this client in its list of alive clients and keeps sending data to it over UDP. However, the server should keep track of alive clients and stop sending data to a disconnected or dead client.
I wonder how I can implement UDP session/socket management here, since UDP is a connectionless protocol and cannot give us any information about alive clients. Should I use another library for UDP session management in C++, or should I use another application-layer protocol on top of UDP for session management?
I know there is a Java library called Verax IPMI (https://en.wikipedia.org/wiki/Verax_IPMI) which provides this ability. But what about C++?
Thanks for reading my question.
Just keep a list of endpoints that you've seen recently (meaning they sent you a datagram). Usually, you allow for a grace time (e.g. 30s) before removing a client from the list.
That way, if some datagrams were dropped you don't immediately forget the "connection".
I have been using the Firebase C++ SDK's Auth and Realtime Database (for Windows) in a simple test application. After a successful authentication, every new message (node) arrives from the cloud within just a few milliseconds, until the following happens:
I leave my computer untouched in idle state.
Due to the energy settings it goes to sleep after 10-15 minutes. (I don't want to change the settings!)
After I wake it up again, the network connection is re-established for all other background applications (like Skype, Outlook, etc.).
It seems Firebase's connection is NOT re-established.
Is there any built-in function to get a notification from Firebase when it has lost the connection, and to re-login and reconnect to the database, either automatically or manually?
I guess it has a background keep-alive connection to check the network status, but I couldn't find any useful information about it. The documentation says it can keep everything synced even in offline mode.
any built-in function to get notification from Firebase when it's lost the connection[?]
For that you'd attach a listener to the virtual .info/connected node, as shown here: https://firebase.google.com/docs/database/android/offline-capabilities#section-connection-state. Somehow this section is missing from the C++ documentation, which is why I linked you to the Android version.
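A sketch of what that listener might look like with the Firebase C++ SDK. The class and method names below follow the SDK's database API, but this fragment is not runnable as-is: it assumes the SDK is installed and an already-initialized `firebase::App` named `app`.

```cpp
// Sketch only: requires the Firebase C++ SDK headers and an
// initialized firebase::App.
#include "firebase/app.h"
#include "firebase/database.h"

class ConnectionListener : public firebase::database::ValueListener {
 public:
  void OnValueChanged(const firebase::database::DataSnapshot& snapshot) override {
    bool connected = snapshot.value().bool_value();
    // React here: e.g. queue writes while offline, refresh the UI,
    // or trigger a re-login when the connection comes back.
    (void)connected;
  }
  void OnCancelled(const firebase::database::Error& error,
                   const char* error_message) override {}
};

void WatchConnection(firebase::App* app) {
  firebase::database::Database* db =
      firebase::database::Database::GetInstance(app);
  static ConnectionListener listener;
  // The virtual .info/connected node flips between true and false as
  // the client's connection to the Firebase backend changes.
  db->GetReference(".info/connected").AddValueListener(&listener);
}
```

Note that `.info/connected` reports the client's own view of the connection, so after a sleep/wake cycle it should fire with `false` once the SDK notices the broken connection.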
What exactly does the bind() API do in a server program?
I am very new to socket programming.
The documentation says: bind() binds the socket to an IP address and port.
So what exactly happens if I call bind() with argument 1 = AF_INET, argument 2 = (sockaddr *)&hint (a struct sockaddr), and argument 3 = sizeof(hint)?
In short: bind() specifies the address & port on the local side of the connection. If you don't call bind(), the operating system will automatically assign you an available port number.
Each time an IP datagram containing TCP data is sent over the network, the datagram contains a 'local address', 'remote address', 'local port', and 'remote port'. This is the only information that IP has to figure out who ends up getting the packet.
So, both the client and the server port numbers need to be filled in before the connection can work. Data that is directed to the server needs a 'destination' port, so that the data can get sent to the appropriate program running on the server. Likewise, it needs a 'source' so that the server knows who to send data back to, and also so that if there are many connections from the same computer, the server can keep them separate by looking at the source port number.
Since the connection is initiated by the client program, the client program needs to know the server's port number before it can make a connection. For this reason, servers are placed on 'well-known' port numbers. For example, a telnet server is always on port 23, and an HTTP server is always on port 80.
The bind() API call assigns the 'local' port number. That is, the port number that is used as the 'source port' on outgoing datagrams, and the 'destination port' on incoming datagrams.
I have a C++ server and client.
I am using the poll() system call to monitor sockets on the server for read-ready, write-ready and errors.
For some of the connections, I see that poll() detects an ECONNRESET after sending out a bunch of data, and the send fails midway. On the client side, too, an ECONNRESET is reported.
So essentially both sides are reporting that the remote side closed the connection.
How can this happen?
How do I debug this? Is there any tcp layer logging that I can enable?
Is there any tcp layer logging that I can enable?
The most common tool for seeing what's going on at the low-level IP transport layer is Wireshark.
You can inspect, in detail, any packets sent and received over your NIC with that tool.
Another one is tcpdump for Linux systems.