I'm using Apache Thrift version 0.13.0.
As soon as the time between two calls reaches approximately 1.5 seconds, the connection is closed.
The timeout varies from 1.3 to 1.8 seconds.
keepAlive is set on both server and client. I tried different rx/tx timeout values, but this did not change anything.
My client code used for testing is below.
The client is running Windows and the server is running Linux.
for (int i = 0; i < 100'000; i += 50) {
    remote_method();
    auto sleep = std::chrono::milliseconds(i);
    std::cout << "Sleep: " << i << "\n";
    std::this_thread::sleep_for(sleep);
}
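For completeness, the client socket options are set roughly like this (a sketch assuming the standard TSocket/TBufferedTransport/TBinaryProtocol stack; the host, port, and timeout values are placeholders rather than my actual configuration):

#include <memory>
#include <thrift/transport/TSocket.h>
#include <thrift/transport/TBufferTransports.h>
#include <thrift/protocol/TBinaryProtocol.h>

using namespace apache::thrift::transport;
using namespace apache::thrift::protocol;

// Hypothetical client setup: keepAlive plus explicit rx/tx timeouts on the socket.
std::shared_ptr<TProtocol> make_client_protocol() {
    auto socket = std::make_shared<TSocket>("server-host", 9090);  // placeholder address
    socket->setKeepAlive(true);
    socket->setRecvTimeout(5000);  // rx timeout in ms
    socket->setSendTimeout(5000);  // tx timeout in ms
    auto transport = std::make_shared<TBufferedTransport>(socket);
    auto protocol = std::make_shared<TBinaryProtocol>(transport);
    transport->open();
    return protocol;
}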
Thrift will throw an exception in the code snippet below, which is located in TSocket.cpp
// Timed out!
if (errno_copy == THRIFT_ETIMEDOUT) {
    throw TTransportException(TTransportException::TIMED_OUT, "THRIFT_ETIMEDOUT");
}
It looks like something is resetting the connection after this time.
If the method is called at a high frequency, no timeout occurs.
Thrift is working correctly; other socket-based communication shows this behavior as well. The root cause was the VMware virtual machine the server was running in. The network mode (bridged, NAT, or host-only) did not make a difference. Moving the server to a physical machine solved the problem. Most likely the network configuration of the Linux guest was faulty.
I am trying to implement a gRPC server/client for the first time using the Windows Subsystem for Linux kernel and CLion as the IDE (on Windows). My code does not have any other bugs/issues except this communication failure.
The following lines of code
if (status.ok()) {
    cv::imshow("Rotated image", decrypt_img);
} else {
    std::cout << status.error_code() << " : " << status.error_message() << std::endl;
}
yield the following message:
14 : failed to connect to all addresses
This is a fairly generic error message from gRPC which can have multiple causes.
In my experience, it can be one of the following things:
Your server isn't running (either you forgot to call grpc::ServerBuilder::BuildAndStart or you didn't start your server application at all).
When running the server for the first time, Windows Firewall should ask you whether you want to allow your application to access the network (I don't recall the exact wording). You want to accept this, of course.
You have the wrong address specified in your client application (i.e. a different one than you set in your server application via grpc::ServerBuilder::AddListeningPort).
Not knowing your actual server and client code, these are just assumptions I can make based on my experience with gRPC; a minimal sketch of a matching server/client setup is shown below.
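For illustration, here is a minimal sketch of how the listening address on the server and the channel target on the client need to line up (the 0.0.0.0:50051 / localhost:50051 addresses and the helper names are placeholders, not taken from your code):

#include <memory>
#include <grpcpp/grpcpp.h>

// Server side: the address passed to AddListeningPort must be reachable by the client.
void RunServer(grpc::Service* service) {
    grpc::ServerBuilder builder;
    builder.AddListeningPort("0.0.0.0:50051", grpc::InsecureServerCredentials());
    builder.RegisterService(service);
    std::unique_ptr<grpc::Server> server = builder.BuildAndStart();  // easy to forget
    server->Wait();
}

// Client side: the channel target must point at the same host/port the server listens on.
std::shared_ptr<grpc::Channel> MakeChannel() {
    return grpc::CreateChannel("localhost:50051", grpc::InsecureChannelCredentials());
}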
My C++ software is creating SYN packets (using Boost) to my server with specific outgoing ports (chosen according to the IANA port-assignment standards).
I am picking the outgoing ports for internal purposes.
For some reason, after checking my application on many machines, I am seeing the issue below on one specific machine:
The outgoing port being used isn't the one I assigned; it looks like the OS (Windows 10) is changing it.
What can be the issue?
Below is the relevant code I am using to assign a specific outgoing port:
std::string exceptionFormat = "exception. Error message: ";

error_code socket_set_option_error_code;
socket->set_option(tcp::socket::reuse_address(true), socket_set_option_error_code);
if (socket_set_option_error_code) {
    throw SocketException("Got socket reuse set option " + exceptionFormat + socket_set_option_error_code.message());
}

const auto source_endpoint = tcp::endpoint(tcp::v4(), source_port);
error_code bind_socket_error_code;
socket->bind(source_endpoint, bind_socket_error_code);
if (bind_socket_error_code) {
    throw SocketException("Got socket bind " + exceptionFormat + bind_socket_error_code.message());
}
Apparently, there were two antivirus products installed on the machine, and one of them (Kaspersky) changed the outgoing port.
The packets might also be flowing through a NAT module (NAPT) or a firewall, which is another common reason why port numbers can change.
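To narrow down whether the rewrite happens on the sending machine itself or somewhere on the path, one option is to check which port the socket actually ended up with after connecting (a sketch only; socket and source_port refer to the same variables as in the snippet above):

// Continuation of the snippet above, to run after the socket has connected:
// compare the port we asked for with the port the socket is actually using.
boost::system::error_code endpoint_error_code;
const auto local_endpoint = socket->local_endpoint(endpoint_error_code);
if (!endpoint_error_code && local_endpoint.port() != source_port) {
    // If this triggers, the local stack (or a filtering driver such as an antivirus)
    // rebound the port; otherwise the rewrite happens further along the path (NAT/firewall).
    std::cout << "Requested source port " << source_port
              << " but the socket is using " << local_endpoint.port() << std::endl;
}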
My problem seems to be related to https://svn.boost.org/trac10/ticket/10496.
Simply opening a Boost serial port and waiting for data causes one core of the embedded CPU to run at 100% usage.
My hardware is the redpitaya STEMLAB 125-14 running Ubuntu 16.04. Some relevant
code snippets below:
// In the header
namespace ba = boost::asio;

typedef std::shared_ptr<boost::asio::serial_port> serial_port_ptr;
typedef std::shared_ptr<boost::asio::io_service::work> work_ptr;

class SerialPort {
    boost::asio::io_service m_io_service;
    serial_port_ptr m_port;
    work_ptr m_work;

    bool open(const std::string &portname);
};
// in the source file,
bool SerialPort::open(const std::string &portname) {
    m_port = serial_port_ptr(new ba::serial_port(m_io_service));
    m_work = work_ptr(new ba::io_service::work(m_io_service));

    boost::system::error_code ec;
    m_port->open(portname, ec);
    if (ec) {
        std::cout << "open failed" << std::endl;
        return false;
    }

    m_io_service.reset();
    std::thread t(boost::bind(&ba::io_service::run, &m_io_service));
    t.detach();
    return true;
}
If I comment out the last two lines, I get 0% CPU utilization. The code works fine; I can read from and write to the serial port. The serial port is a USB-to-serial device (can this have any effect?).
Has anyone else experienced this before, and is there a workaround until it is 'fixed' by the Boost developers, or am I doing something wrong?
EDIT: I initially thought maybe the hardware is not 'powerful enough', so I installed Ubuntu on VMware on my i7 laptop and ran only the serial code on it with the USB serial device. I got the same result: 100% CPU usage on one core.
I understand it is difficult to help without seeing the full code, so I created a simple program that demonstrates the problem: it can be downloaded from goo.gl/FD5RNE . For simplicity, the code only looks for a USB serial device and tries to open the first one it finds.
If you remove the comment from sp.open(), you will see the CPU usage go close to full utilisation. Thanks.
Edit: Still no solution to this, but since I am not expecting the serial device to send unsolicited messages to the client, I have changed the program to always close the port, i.e. each write/read pair is executed with the port opened before the write and closed after the read.
This has worked well so far and I am getting an average CPU usage of around 0-0.8%. Maybe a future update to Boost will solve the issue.
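For illustration, the open/transact/close workaround described above looks roughly like this (a sketch only; the real program uses whatever reply framing the device expects, and portname/request are placeholders):

#include <boost/asio.hpp>
#include <string>

// Open the port, perform one write/read exchange, and close it again,
// so no read is left pending between transactions.
std::string transact(const std::string &portname, const std::string &request) {
    boost::asio::io_service io;
    boost::asio::serial_port port(io);
    boost::system::error_code ec;

    port.open(portname, ec);
    if (ec) return {};

    boost::asio::write(port, boost::asio::buffer(request), ec);

    char reply[256] = {0};
    const std::size_t n = ec ? 0 : port.read_some(boost::asio::buffer(reply), ec);

    port.close(ec);
    return std::string(reply, n);
}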
I have an RTSP media server that sends video via TCP streaming. I am using an rtsp-sink (RTSPStreamer) DirectShow.Net filter based RTSP server developed in C++, whereas the wrapper app is developed in C#.
The problem I am facing is that the moment the RTSP server starts streaming, it affects the system-level internet connection and drops the internet connection speed by 90 percent.
I wanted to get your input on how this would be possible (if at all), because it impacts the system-level internet connection, not just the app level.
For example: my normal internet connection speed is 25 Mbps. It suddenly drops to 2 Mbps whenever the RTSP streaming is started in the app's server tab.
Sometimes it even disables the internet connection on the system (computer) where the app is running.
I'm asking you because I consider you an expert, so please bear with me on this "maybe wild" question, and thanks ahead.
...of all the things I've lost, I miss my mind the most.
Code Snippet of RTSPSender.CPP
//////////////////////////////////////////////////////
// CStreamingServer
//////////////////////////////////////////////////////
UsageEnvironment* CStreamingServer::s_pUsageEnvironment = NULL;
CHAR CStreamingServer::s_szDefaultBroadCastIP[] = "239.255.42.42";
//////////////////////////////////////////////////////
CStreamingServer::CStreamingServer(HANDLE hQuit)
: BasicTaskScheduler(10000)
, m_hQuit(hQuit)
, m_Streams(NAME("Streams"))
, m_bStarting(FALSE)
, m_bSessionReady(FALSE)
, m_pszURL(NULL)
, m_rtBufferingTime(UNITS * 2)
{
    s_pUsageEnvironment = BasicUsageEnvironment::createNew(*this);
    rtspServer = NULL;
    strcpy_s(m_szAddress, "");
    strcpy_s(m_szStreamName, "stream");
    strcpy_s(m_szInfo, "media");
    strcpy_s(m_szDescription, "Session streamed by \"RTSP Streamer DirectShow Filter\"");
    m_nRTPPort = 6666;
    m_nRTSPPort = 8554;
    m_nTTL = 1;
    m_bIsSSM = FALSE;
}
Edited: Wireshark logs captured at the time the RTSP streaming started (screenshot).
I have established connection to a Siemens S7-300 PLC (simulated via PlcSIM) using the libnodave library. There are no issues connecting and writing data to the PLC. However, I am unable to change the status of the PLC from Start/Stop. I am attempting to use the following libnodave methods for such actions:
int daveStatus = daveStart(dc);
int daveStatus = daveStop(dc);
Both function calls return the same error: 33794 (0x8402 in hex).
nodave.c cites the error as the following:
case 0x8402: return "CPU already in RUN or already in STOP ?";
The use of the daveStart() and daveStop() functions can be viewed in the example testS7online.c:
if (doStop) {
    daveStop(dc);
}
if (doRun) {
    daveStart(dc);
}
In the examples the start/stop functions are only called when MPI connections to the PLC are made. Does anyone know if the start/stop functions are supported for use with TCP connections? If so, any suggestions as to what may be causing my error?
I have just tried dc.start() and dc.stop() using libnodave 8.4 and the NetToPlcSim tool. It worked perfectly. Possibly you are not using the NetToPlcSim tool, which makes the connection to PLCSim via TCP/IP (i.e. 127.0.0.1, port 102), hence dc can't even connect. So if your lines don't work, you must be doing something wrong elsewhere.
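For comparison, this is roughly how an ISO-over-TCP connection to PLCSim through NetToPlcSim is set up with the plain C API before calling daveStart/daveStop (a sketch based on the libnodave examples such as testISO_TCP; the IP, MPI address, rack, and slot values are assumptions, not taken from your setup):

#include "nodave.h"
#include "openSocket.h"

int main(void) {
    _daveOSserialType fds;
    fds.rfd = openSocket(102, "127.0.0.1");   // NetToPlcSim forwards this to PLCSim
    fds.wfd = fds.rfd;

    daveInterface *di = daveNewInterface(fds, "IF1", 0, daveProtoISOTCP, daveSpeed187k);
    daveInitAdapter(di);

    daveConnection *dc = daveNewConnection(di, 2, 0, 2);  // MPI 2, rack 0, slot 2 (assumed)
    if (daveConnectPLC(dc) == 0) {
        int daveStatus = daveStart(dc);       // or daveStop(dc)
    }
    return 0;
}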