Libnodave - daveStart() Error using TCP Connection - c++

I have established a connection to a Siemens S7-300 PLC (simulated via PLCSim) using the libnodave library. There are no issues connecting and writing data to the PLC. However, I am unable to change the status of the PLC between Start and Stop. I am attempting to use the following libnodave methods for these actions:
int daveStatus = daveStart(dc);
int daveStatus = daveStop(dc);
Both function calls return the same error: 33794.
nodave.c cites this error (33794 = 0x8402) as the following:
case 0x8402: return "CPU already in RUN or already in STOP ?";
The use of the daveStart() and daveStop() functions can be viewed in the example testS7online.c:
if(doStop) {
    daveStop(dc);
}
if(doRun) {
    daveStart(dc);
}
In the examples the start/stop functions are only called when MPI connections to the PLC are made. Does anyone know if the start/stop functions are supported for use with TCP connections? If so, any suggestions as to what may be causing my error?

I have just tried dc.start() and dc.stop() using libnodave 8.4 and the NetToPlcSim tool, and it worked perfectly. Possibly you are not using the NetToPlcSim tool, which exposes PLCSim over TCP/IP (i.e. 127.0.0.1, port 102), in which case dc cannot even connect in the first place. So if those lines don't work, something else in your setup is likely wrong.
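For reference, a minimal sketch of the ISO-over-TCP path, following the pattern of the testISO_TCP example shipped with libnodave; the rack/slot values (0/2 here) and the NetToPlcSim address are assumptions that depend on your station configuration:

#include <cstdio>
#include "nodave.h"
#include "openSocket.h"

int main() {
    _daveOSserialType fds;
    fds.rfd = openSocket(102, (char*)"127.0.0.1");   // NetToPlcSim listens on port 102
    fds.wfd = fds.rfd;
    if (fds.rfd <= 0) {                              // no TCP connection at all
        printf("could not open socket\n");
        return 1;
    }

    daveInterface *di = daveNewInterface(fds, (char*)"IF1", 0, daveProtoISOTCP, daveSpeed187k);
    daveSetTimeout(di, 5000000);
    daveConnection *dc = daveNewConnection(di, 2, 0, 2); // MPI addr, rack, slot (assumed)

    if (daveConnectPLC(dc) == 0) {
        int res = daveStop(dc);                      // or daveStart(dc)
        printf("daveStop returned %d (%s)\n", res, daveStrerror(res));
    }
    return 0;
}

Note that if the CPU is already in the requested state you will still get 0x8402 (that is literally what the error text says), so check the current CPU state in the PLCSim window before treating the return code as a failure.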

Related

Connect to multiple queue managers in different servers

I am trying to connect a C++ application (using MQCONNX), based on a PaaS IBM MQ client, to two different queue managers, each on a different server (one on a PaaS server and the other on a Unix server). Unfortunately I am not able to do it: when I try to connect to the second server I get a message saying that it is not possible because the application is already connected to the first queue manager. I am using two different MQHCONN connections, one for each queue manager, but the problem is still there.
I have taken a look at this link, but I still have some doubts, for example: from which server should I copy the CCDT to the client?
https://www.ibm.com/support/pages/connecting-mq-clients-multiple-queue-managers-client-channel-definition-table-ccdt
Any help would be much appreciated, or even a quick sample of how to use CCDT, as right now I am completely stuck.
Many thanks in advance for any help.
Assuming Queue Manager 1 is called MQG1 and Queue Manager 2 is called MQG2, that they can be reached using connection names of machine1.com(1701) and machine2.com(1702) respectively, and that they use channel names MQG1.SVRCONN and MQG2.SVRCONN respectively, you can create your CCDT, on your client application machine, thus:-
runmqsc -n
issue these commands into runmqsc:-
DEFINE CHANNEL(MQG1.SVRCONN) CHLTYPE(CLNTCONN) CONNAME('machine1.com(1701)') QMNAME(MQG1)
DEFINE CHANNEL(MQG2.SVRCONN) CHLTYPE(CLNTCONN) CONNAME('machine2.com(1702)') QMNAME(MQG2)
Then you can code your 2 x MQCONN (or MQCONNX if you need to specify any additional things on the connection) thus:-
#include <stdio.h>   /* printf */
#include <cmqc.h>    /* Includes for MQI constants */
#include <cmqstrc.h> /* Convert MQRC into string */

MQHCONN hConn1 = MQHC_UNUSABLE_HCONN;
MQHCONN hConn2 = MQHC_UNUSABLE_HCONN;
MQCHAR  QMName1[MQ_Q_MGR_NAME_LENGTH] = "MQG1";
MQCHAR  QMName2[MQ_Q_MGR_NAME_LENGTH] = "MQG2";
MQLONG  CompCode, Reason;

/* Connect to the first queue manager */
MQCONN(QMName1,
       &hConn1,
       &CompCode,
       &Reason);
if (CompCode)
{
    printf("MQCONN to %s failed with reason code %s (%d)\n", QMName1, MQRC_STR(Reason), Reason);
}

/* Connect to the second queue manager on its own handle */
MQCONN(QMName2,
       &hConn2,
       &CompCode,
       &Reason);
if (CompCode)
{
    printf("MQCONN to %s failed with reason code %s (%d)\n", QMName2, MQRC_STR(Reason), Reason);
}
Take care with how you are linking your program. If you try to make two local connections, you will get a return code of MQRC_ANOTHER_Q_MGR_CONNECTED. Ensure you either link with the client library, set the connection option MQCNO_CLIENT (which means you must use MQCONNX, as sketched below), or set the environment variable MQ_CONNECT_TYPE=CLIENT.
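For illustration, a minimal MQCONNX sketch of that option, reusing the declarations above; the actual constant in cmqc.h is MQCNO_CLIENT_BINDING, so check the header shipped with your MQ version:

MQCNO cno = {MQCNO_DEFAULT};           /* default connect options                */
cno.Options |= MQCNO_CLIENT_BINDING;   /* force a client connection even if the  */
                                       /* program is linked with server bindings */
MQCONNX(QMName1,
        &cno,
        &hConn1,
        &CompCode,
        &Reason);
if (CompCode)
{
    printf("MQCONNX to %s failed with reason code %s (%d)\n", QMName1, MQRC_STR(Reason), Reason);
}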
You might find the following blog post useful additional reading:-
IBM MQ Little Gem #30: MQ_CONNECT_TYPE

SSH local port forwarding using libssh

Problem
I am trying to do local port forwarding using libssh with the libssh C++ wrapper. My intention is to forward port localhost:3306 on a server to localhost:3307 on my machine via SSH, so that I can connect MySQL clients to localhost:3307.
void ssh_session::forward(){
    ssh::Channel channel(this->session);
    // remotehost, remoteport, localhost, localport
    channel.openForward("localhost", 3306, "localhost", 3307);
    std::cout << "Channel is " << (channel.isOpen() ? "open!" : "closed!") << std::endl;
}
with session in the constructor of ssh::Channel being of type ssh::Session.
The code above prints Channel is open!. If I try to connect to localhost:3307 using the MySQL Connector/C++ I get
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (61)
Observations
If I use the shell command $ ssh -L 3307:localhost:3306 me@myserver.com everything works fine and I can connect.
If I use the ssh::Session session passed to the ssh::Channel constructor, or the channel itself, to execute remote shell commands, everything works, so the session itself is fine.
The documentation of libssh (which is very thin for the C++ wrapper libsshpp.hpp, since many public member functions are undocumented and you have to read the source code) shows that ssh::Channel::openForward() is a wrapper for the C function ssh_channel_open_forward().
The documentation of ssh_channel_open_forward() states
Warning
This function does not bind the local port and does not automatically forward the content of a socket to the channel. You still have to use channel_read and channel_write for this.
I think that could be causing the problem. I have no trouble reading from and writing to the ssh::Channel myself, but that is not how the MySQL Connector/C++ works.
Question
How can I achieve the same behaviour produced by the common shell command
$ ssh -L 3307:localhost:3306 me@myserver.com
using libssh?
Warning
This function does not bind the local port and does not automatically forward the content of a socket to the channel. You still have to use channel_read and channel_write for this.
This is telling you that you need to write your own local socket code. Unfortunately, it doesn't do it for you.
The simplest implementation would be to bind a local socket and use ssh_select to listen for events (e.g. a new connection to accept, or socket/channel events). You can keep your socket fds and ssh_channels in a vector for easy management.
When you get any event, just loop over all the operations in a non-blocking way, i.e.
try to accept a new connection, and append the fd and a new ssh_channel (created as in your question) to your vectors;
try to read all the socket fds, and forward anything you get to the corresponding ssh channel using ssh_channel_write (make sure to setsockopt SO_RCVTIMEO to 0);
try to read all the channels using ssh_channel_read_nonblocking, and forward the result to the socket fd using write.
You also need to handle errors everywhere, and close the corresponding fd and ssh_channel.
Overall a complete implementation is probably too much code for a StackOverflow answer, but a stripped-down, single-connection sketch of the loop is below.
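A rough, single-connection sketch of that idea, using the C libssh API from C++; ports and hosts match the question, error handling and multi-client bookkeeping are trimmed, and session is assumed to be an already-connected, authenticated ssh_session:

#include <libssh/libssh.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>

void forward_one_connection(ssh_session session) {
    // 1. Bind and listen on the local port ourselves - libssh will not do it.
    int listener = socket(AF_INET, SOCK_STREAM, 0);
    int yes = 1;
    setsockopt(listener, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof(yes));
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port = htons(3307);
    bind(listener, (sockaddr*)&addr, sizeof(addr));
    listen(listener, 1);

    int client = accept(listener, nullptr, nullptr);    // e.g. the MySQL connector

    // 2. Open the forwarding channel to localhost:3306 on the remote side.
    ssh_channel channel = ssh_channel_new(session);
    if (ssh_channel_open_forward(channel, "localhost", 3306, "localhost", 3307) != SSH_OK) {
        ssh_channel_free(channel);
        close(client);
        close(listener);
        return;
    }

    // 3. Pump bytes in both directions until either side goes away.
    char buf[4096];
    while (ssh_channel_is_open(channel) && !ssh_channel_is_eof(channel)) {
        timeval tv{0, 50000};                           // poll every 50 ms
        fd_set fds;
        FD_ZERO(&fds);
        FD_SET(client, &fds);
        if (select(client + 1, &fds, nullptr, nullptr, &tv) > 0) {
            ssize_t n = recv(client, buf, sizeof(buf), 0);
            if (n <= 0) break;                          // local client closed
            ssh_channel_write(channel, buf, n);         // socket -> channel
        }
        int m = ssh_channel_read_nonblocking(channel, buf, sizeof(buf), 0);
        if (m > 0) write(client, buf, m);               // channel -> socket
        if (m == SSH_ERROR) break;
    }
    ssh_channel_send_eof(channel);
    ssh_channel_close(channel);
    ssh_channel_free(channel);
    close(client);
    close(listener);
}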
The tempting alternative to all that would be to just run ssh -L ... as a subprocess using fork & exec, avoiding all that boilerplate socket code, and benefitting from an efficient, bug-free implementation.
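If you go that route, the subprocess boils down to something like this (the host name is the placeholder from the question; -N tells ssh not to run a remote command):

#include <sys/types.h>
#include <unistd.h>

pid_t spawn_tunnel() {
    pid_t pid = fork();
    if (pid == 0) {
        // child: becomes the ssh tunnel process
        execlp("ssh", "ssh", "-N", "-L", "3307:localhost:3306",
               "me@myserver.com", (char*)nullptr);
        _exit(127);                  // only reached if exec fails
    }
    return pid;                      // parent: keep the pid; kill() it later to tear the tunnel down
}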

C++: One client communicating with multiple servers

I was wondering if it is possible to let one client communicate with multiple servers at the same time. As far as I know, common browsers such as Firefox do exactly this.
The problem I have now is that the client has to listen and wait for data from the servers, rather than requesting it itself, and it has to listen to multiple servers at once. Is this even possible? What happens if the client is listening to server 1 and server 2 sends something? Is the packet lost, or will it be resent until the client acknowledges successful receipt? The protocol used is TCP.
edit: platform is Windows. Thanks for pointing this out Arunmu.
This is no different from regular socket programming using select/poll/epoll, a thread pool, a process per connection, or whatever model you know.
I can show you a rough pseudo-code on how to do it with epoll.
NOTE: None of these functions exist as written in C++; this is just for explanation purposes. I am also assuming that you are on Linux, since you have mentioned nothing about the platform.
socket sd = connect("server.com", 8080);
sd.set_nonblocking(1);

epoll_event event;
event.data.fd = sd;
epoll_ctl(ADD, event);
...
...
while (true) {
    auto n = epoll_wait(events, 1);
    for (int i : 1...n) {
        if (events[i].data.fd == sd) // The socket added in epoll_ctl
        {
            // Call the read in another thread or the same thread
            std::thread(&Session::read_handler, rd_hndler_, sd);
        }
    }
}
I hope you got the gist. In essence, think of the server like a client and the client like a server, and you have your problem (kind of) solved. Check out the link below to learn more about epoll:
https://banu.com/blog/2/how-to-use-epoll-a-complete-example-in-c/
To see a fully functional server design using epoll, check out:
https://github.com/arun11299/cpp-reactor-server/blob/master/epoll/reactor.cc
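Since the question is specifically about one client watching several servers, here is a runnable Linux sketch of the same idea with real epoll calls; the addresses, ports and the connect_to() helper are placeholders, and on Windows (as per the question's edit) the same structure works with select() or WSAPoll() instead of epoll:

#include <sys/epoll.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>
#include <cstdio>

// Hypothetical helper: open a TCP connection to ip:port, return the fd (or -1).
static int connect_to(const char* ip, int port) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, ip, &addr.sin_addr);
    if (connect(fd, (sockaddr*)&addr, sizeof(addr)) < 0) { close(fd); return -1; }
    return fd;
}

int main() {
    int s1 = connect_to("192.0.2.1", 8080);   // server 1 (example address)
    int s2 = connect_to("192.0.2.2", 8080);   // server 2 (example address)

    int ep = epoll_create1(0);
    epoll_event ev{};
    ev.events = EPOLLIN;
    ev.data.fd = s1; epoll_ctl(ep, EPOLL_CTL_ADD, s1, &ev);
    ev.data.fd = s2; epoll_ctl(ep, EPOLL_CTL_ADD, s2, &ev);

    epoll_event events[8];
    char buf[4096];
    for (;;) {
        int n = epoll_wait(ep, events, 8, -1);      // block until either server sends data
        for (int i = 0; i < n; ++i) {
            int fd = events[i].data.fd;
            ssize_t got = recv(fd, buf, sizeof(buf), 0);
            if (got <= 0) { close(fd); continue; }  // server closed or error
            printf("got %zd bytes from %s\n", got, fd == s1 ? "server 1" : "server 2");
        }
    }
}

Nothing is lost while you are busy with the other server: TCP buffers received data in the kernel (and retransmits at the protocol level) until you read it, which is why waiting in a single epoll_wait on both sockets is enough.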

Boost UDP socket issue on unix - bind: address already in use

First of all, I know there are several other threads on the same theme, but I was unable to find anything in them that could help me, so I'll try to be very specific about my situation.
I have set up a simple UDP client / UDP server pair that is responsible for sending data between several parallel simulations. That is, every instance of the simulator runs in a separate thread and sends data on a UDP socket. In the master thread, the server runs and routes the messages between the simulations.
The (for this problem) important parts of the server code look like this:
UDPServer::UDPServer(boost::asio::io_service &m_io_service) :
    m_socket(m_io_service, udp::endpoint(udp::v4(), PORT_NUMBER)),
    m_endpoint(boost::asio::ip::address::from_string("127.0.0.1"), PORT_NUMBER)
{
    this->start_receive();
};

void UDPServer::start_receive() {
    // Set SO_REUSEADDR to true
    boost::asio::socket_base::reuse_address option(true);
    this->m_socket.set_option(option);
    // Specify what happens when a message is received (it should call the handle_receive function)
    this->m_socket.async_receive_from(boost::asio::buffer(this->recv_buffer),
        this->m_endpoint,
        boost::bind(&UDPServer::handle_receive, this,
                    boost::asio::placeholders::error,
                    boost::asio::placeholders::bytes_transferred));
};
This works fine on my Windows workstation.
The thing is, I want to be able to run this on a Linux cluster, which is why I compiled it and tried to run it on a cluster node. The code compiled without a hitch, but when I try to run it I get the error
bind: address already in use
I use a port number above 1024 and have verified that it is not in use by another program. And, as seen above, I also set the reuse_address option, so I really don't know what else could be wrong.
To portably use SO_REUSEADDR you need to set the option before binding the socket to the wildcard address:
UDPServer::UDPServer(boost::asio::io_service &m_io_service) :
    m_socket(m_io_service, udp::v4()),
    m_endpoint()
{
    boost::asio::socket_base::reuse_address option(true);
    this->m_socket.set_option(option);
    this->m_socket.bind(udp::endpoint(udp::v4(), PORT_NUMBER));
    this->start_receive();
}
In your original code, the constructor that takes an endpoint constructs, opens and binds the socket in a single line - it's concise but not very flexible. Here we're constructing and opening the socket in the constructor call, and then binding it later after we set the option.
As an aside, there's not much point initialising m_endpoint if you're just going to use it as the out argument of async_receive_from anyway.
Try running the following command on Linux to see if the port is already being used by another program.
netstat -antup | grep 1024
If you are getting "address already in use" then the port is definitely being used by some other program. If the above command yields a result, kill the process ID it reports. If this does not work, try changing the port number to some other arbitrary port and check whether the problem persists.

Socket in use error when reusing sockets

I am writing an XMLRPC client in C++ that is intended to talk to a Python XMLRPC server.
Unfortunately, at this time, the Python XMLRPC server is only capable of fielding one request per connection before it shuts down; I discovered this thanks to mhawke's response to my previous query about a related subject.
Because of this, I have to create a new socket connection to my Python server every time I want to make an XMLRPC request, which means the creation and deletion of a lot of sockets. Everything works fine until I approach ~4000 requests. At this point I get socket error 10048, "socket in use".
I've tried sleeping the thread to let Winsock fix its file descriptors (a trick that worked when a Python client of mine had an identical issue), to no avail.
I've tried the following
int err = setsockopt(s_,SOL_SOCKET,SO_REUSEADDR,(char*)TRUE,sizeof(BOOL));
with no success.
I'm using Winsock 2.0, so WSADATA::iMaxSockets shouldn't come into play, and either way, I checked and it's set to 0 (I assume that means infinity).
4000 requests doesn't seem like an outlandish number of requests to make during the run of an application. Is there some way to use SO_KEEPALIVE on the client side while the server continually closes and reopens?
Am I totally missing something?
The problem is being caused by sockets hanging around in the TIME_WAIT state which is entered once you close the client's socket. By default the socket will remain in this state for 4 minutes before it is available for reuse. Your client (possibly helped by other processes) is consuming them all within a 4 minute period. See this answer for a good explanation and a possible non-code solution.
Windows dynamically allocates port numbers in the range 1024-5000 (3977 ports) when you do not explicitly bind the socket address. This Python code demonstrates the problem:
import socket

sockets = []
while True:
    s = socket.socket()
    try:
        s.connect(('some_host', 80))
    except socket.error:
        break   # no dynamic ports left - they are all in TIME_WAIT
    sockets.append(s.getsockname())
    s.close()

print len(sockets)
sockets.sort()
print "Lowest port: ", sockets[0][1], " Highest port: ", sockets[-1][1]

# on Windows you should see something like this...
# 3960
# Lowest port:  1025   Highest port:  5000
If you try to run this again immediately, it should fail very quickly since all the dynamic ports are in the TIME_WAIT state.
There are a few ways around this:
Manage your own port assignments and use bind() to explicitly bind your client socket to a specific port that you increment each time you create a socket. You'll still have to handle the case where a port is already in use, but you will not be limited to dynamic ports. e.g.
port = 5000
while True:
    s = socket.socket()
    s.bind(('your_host', port))
    s.connect(('some_host', 80))
    s.close()
    port += 1
Fiddle with the SO_LINGER socket option. I have found that this sometimes works in Windows (although not exactly sure why):
s.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, 1)
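On the C++/Winsock side, the comparable (hedged) trick is an abortive close: setting SO_LINGER with l_onoff = 1 and l_linger = 0 makes closesocket() reset the connection instead of leaving it in TIME_WAIT, at the cost of discarding any unsent data:

#include <winsock2.h>

void abortive_close(SOCKET s)
{
    linger lin;
    lin.l_onoff  = 1;   // enable linger
    lin.l_linger = 0;   // 0-second timeout => RST on close, no TIME_WAIT
    setsockopt(s, SOL_SOCKET, SO_LINGER, (const char*)&lin, sizeof(lin));
    closesocket(s);
}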
I don't know if this will help in your particular application, however, it is possible to send multiple XMLRPC requests over the same connection using the multicall method. Basically this allows you to accumulate several requests and then send them all at once. You will not get any responses until you actually send the accumulated requests, so you can essentially think of this as batch processing - does this fit in with your application design?
Update:
I tossed this into the code and it seems to be working now.
if(::connect(s_, (sockaddr *) &addr, sizeof(sockaddr)))
{
    int err = WSAGetLastError();
    if(err == 10048) // if "socket in use" error, force kill and reopen the socket
    {
        closesocket(s_);
        WSACleanup();
        WSADATA info;
        WSAStartup(MAKEWORD(2,0), &info);
        s_ = socket(AF_INET, SOCK_STREAM, 0);
        BOOL x = TRUE;  // value for SO_REUSEADDR
        setsockopt(s_, SOL_SOCKET, SO_REUSEADDR, (char*)&x, sizeof(BOOL));
    }
}
Basically, if you encounter the 10048 error (socket in use), you can simply close the socket, call WSACleanup, restart with WSAStartup, then recreate the socket and set its socket options.
(The last setsockopt may not be necessary.)
I must have been missing the WSACleanup/WSAStartup calls before, because closesocket() and socket() were definitely being called.
This error only occurs roughly once every 4000 calls.
I am curious as to why this may be, even though this seems to fix it. If anyone has any input on the subject I would be very curious to hear it.
Do you close the sockets after using them?