I have a device on my network (Wi-Fi with only static IPs) with a static IP address of 192.168.1.17. I use it as input for part of my code in a C++ program on Linux, but if it disconnects or is powered off, the program stops responding because it tries to pull data from a non-existent location. Is there a way I can check whether it has disconnected so that I can stop the program before it goes out of control? Thanks for the helpful responses I know are coming!
Use ioctl SIOCGIFFLAGS to check whether the interface is UP and RUNNING:
struct ifreq ifr;
memset( &ifr, 0, sizeof(ifr) );
strncpy( ifr.ifr_name, ifrname, IFNAMSIZ - 1 );   // interface name, e.g. "eth0"
if( ioctl( dummy_fd, SIOCGIFFLAGS, &ifr ) != -1 )
{
    // both IFF_UP and IFF_RUNNING must be set for a usable interface
    up_and_running = (ifr.ifr_flags & ( IFF_UP | IFF_RUNNING )) == ( IFF_UP | IFF_RUNNING );
}
else
{
    // error: ioctl() failed, inspect errno
}
The input variable is ifrname. It should be the interface name: "eth0", "eth1", "ppp0", ....
Because ioctl() needs a file descriptor as a parameter, you can use, for example, a temporary UDP socket for that:
dummy_fd = socket( AF_INET, SOCK_DGRAM, 0 );
Remember to close the socket when it is no longer needed.
See how to go very low-level and use ioctl(); lsif by Adam Risi is a working example.
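Putting the snippets above together, a minimal self-contained sketch might look like this (the interface name, the helper name and the error handling are illustrative only):
#include <cstdio>
#include <cstring>
#include <net/if.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <unistd.h>

// Returns true if the named interface is both UP and RUNNING.
static bool interface_up_and_running( const char* ifrname )
{
    // Any socket works as the ioctl() target; a throwaway UDP socket is cheap.
    int dummy_fd = socket( AF_INET, SOCK_DGRAM, 0 );
    if( dummy_fd == -1 )
        return false;

    struct ifreq ifr;
    memset( &ifr, 0, sizeof(ifr) );
    strncpy( ifr.ifr_name, ifrname, IFNAMSIZ - 1 );

    bool result = false;
    if( ioctl( dummy_fd, SIOCGIFFLAGS, &ifr ) != -1 )
        result = (ifr.ifr_flags & ( IFF_UP | IFF_RUNNING )) == ( IFF_UP | IFF_RUNNING );

    close( dummy_fd );   // don't leak the temporary socket
    return result;
}

int main()
{
    printf( "%s\n", interface_up_and_running( "eth0" ) ? "up and running" : "down" );
    return 0;
}
Note that this only tells you whether the local interface is up, not whether 192.168.1.17 itself is reachable; combine it with a reachability check (such as the ping approach below) if you need that.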
Try to ping 192.168.1.17 before proceeding:
int status = system("ping -c 2 192.168.1.17");
if (status != -1)
{
    int ping_ret = WEXITSTATUS(status);
    if (ping_ret == 0)
        cout << "Ping successful" << endl;      // proceed
    else
        cout << "Ping not successful" << endl;  // sleep and check the ping again
}
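If you want the "sleep and check the ping again" behaviour from the comment above, one option is to wrap the check in a retry loop. A minimal sketch; wait_for_device, the retry count and the delay are all made up for illustration:
#include <cstdlib>     // system()
#include <sys/wait.h>  // WEXITSTATUS()
#include <unistd.h>    // sleep()

// Returns true once the device answers the ping, false after max_tries attempts.
bool wait_for_device( int max_tries = 5 )
{
    for( int i = 0; i < max_tries; ++i )
    {
        int status = system( "ping -c 2 192.168.1.17" );
        if( status != -1 && WEXITSTATUS( status ) == 0 )
            return true;   // ping successful, safe to proceed
        sleep( 5 );        // wait a bit before checking again
    }
    return false;          // device still unreachable, let the caller bail out
}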
Related
I am using a C++ code snippet for port forwarding. The requirement is to do a handshake between two ports, with two-way communication: forward whatever is coming in on the source port to the destination port, and then forward the response from the destination port back to the source port.
This piece of code works as expected on my Mac system. But when I run it on a Linux system I am facing one issue.
Issue:
The C++ code that I am using has three parts:
establish_connection_to_source();
open_connection_to_destination();
processconnetion();
On Linux, establish_connection_to_source(); and open_connection_to_destination(); work perfectly fine, but processconnetion(); has one issue.
Following is the process connection method:
void processconnetion()
{
buffer *todest = new buffer(socket_list[e_source].fd,socket_list[e_dest].fd);
buffer *tosrc = new buffer(socket_list[e_dest].fd,socket_list[e_source].fd);
if (todest == NULL || tosrc == NULL){
fprintf(stderr,"out of memory\n");
exit(-1);
}
unsigned int loopcnt = 0;
profilecommuncation srcprofile(COMM_BUFSIZE);
profilecommuncation destprofile(COMM_BUFSIZE);
while (true) {
int withevent = poll(socket_list, 2, -1);
loopcnt++;
fprintf(stderr,"loopcnt %d socketswith events = %d source:0x%x dest:0x%x\n", loopcnt, withevent, socket_list[e_source].revents, socket_list[e_dest].revents);
if ((socket_list[e_source].revents | socket_list[e_dest].revents) & (POLLHUP | POLLERR)) {
// one of the connections has a problem or has Hungup
fprintf(stderr,"socket_list[e_source].revents= 0x%X\n", socket_list[e_source].revents);
fprintf(stderr,"socket_list[e_dest].revents= 0x%X\n", socket_list[e_dest].revents);
fprintf(stderr,"POLLHUP= 0x%X\n", POLLHUP);
fprintf(stderr,"POLLERR= 0x%X\n", POLLERR);
int result;
socklen_t result_len = sizeof(result);
getsockopt(socket_list[e_dest].fd, SOL_SOCKET, SO_ERROR, &result, &result_len);
fprintf(stderr, "result = %d\n", result);
fprintf(stderr,"exiting as one connection had an issue\n");
break;
}
if (socket_list[e_source].revents & POLLIN) {
srcprofile.increment_size(todest->copydata());
}
if (socket_list[e_dest].revents & POLLIN) {
destprofile.increment_size(tosrc->copydata());
}
}
delete todest;
delete tosrc;
close(socket_list[e_source].fd);
close(socket_list[e_dest].fd);
srcprofile.dumpseensizes("source");
destprofile.dumpseensizes("destination");
}
Here it prints the error "exiting as one connection had an issue", which means that if ((socket_list[e_source].revents | socket_list[e_dest].revents) & (POLLHUP | POLLERR)) is returning true. The issue is with the destination socket, not the source.
Note:
Variables used in the processconnetion(); method:
socket_list is an array of struct pollfd. Following is the description:
struct pollfd {
int fd;
short events;
short revents;
};
pollfd socket_list[3];
#define e_source 0
#define e_dest 1
#define e_listen 2
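For context (the initialisation of the two forwarding entries is not shown in the question), poll() only reports the events requested in .events, plus POLLERR, POLLHUP and POLLNVAL, which are always reported. So the source and destination entries are presumably set up along these lines; source_fd and dest_fd are placeholder names, not taken from the question:
socket_list[e_source].fd     = source_fd;  // placeholder: fd from establish_connection_to_source()
socket_list[e_source].events = POLLIN;     // interested in readability
socket_list[e_dest].fd       = dest_fd;    // placeholder: fd from open_connection_to_destination()
socket_list[e_dest].events   = POLLIN;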
Following is the output at the time of exit:
connecting to destination: destination IP / 32001.
connected...
loopcnt 1 socketswith events = 1 source:0x0 dest:0x10
socket_list[e_source].revents= 0x0
socket_list[e_dest].revents= 0x10
POLLHUP= 0x10
POLLERR= 0x8
result = 0
exiting as one connection had an issue
In int withevent = poll(socket_list, 2, -1); the withevent value returned is 1.
Socket List Initialisation:
guard( (socket_list[e_listen].fd = socket( PF_INET, SOCK_STREAM, IPPROTO_TCP )), "Failed to create socket listen, error: %s\n", "created listen socket");
void guard(int n, char *msg, char *success)
{
if (n < 0) {
fprintf(stderr, msg, strerror(errno) );
exit(-1);
}
fprintf(stderr,"n = %d %s\n",n, success);
}
I am not able to figure out the issue, as it works fine on Mac. Any leads on why this behaviour occurs on Linux would be highly appreciated. Thanks in advance.
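Not necessarily the explanation for the Mac/Linux difference above, but one poll() detail worth knowing on Linux: POLLHUP in revents means the peer has closed its end, and there may still be buffered data that read() can return before EOF. A loop that breaks on POLLHUP before servicing POLLIN can therefore drop the tail of the stream. A sketch of the other ordering, reusing the names from the code above purely for illustration:
while (true) {
    int withevent = poll(socket_list, 2, -1);
    if (withevent < 0)
        break;                                  // poll() itself failed

    short src_ev  = socket_list[e_source].revents;
    short dest_ev = socket_list[e_dest].revents;

    // Forward any pending data first ...
    if (src_ev & POLLIN)
        srcprofile.increment_size(todest->copydata());
    if (dest_ev & POLLIN)
        destprofile.increment_size(tosrc->copydata());

    // ... and only treat hangup/error as final once there is nothing left to read.
    if (((src_ev | dest_ev) & (POLLHUP | POLLERR)) && !((src_ev | dest_ev) & POLLIN))
        break;
}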
I am trying to get the Bluetooth address of the local device using the Microsoft Bluetooth stack. I am targeting a Windows CE 6.0 device.
On MSDN I found the following code example:
SOCKADDR_BTH sab;
int len = sizeof(sab);
if (0 == getsockname (s, &sab, &len)) {
wprintf (L"Local Bluetooth device is %04x%08x, server channel = %d\n",
GET_NAP(sab.btAddr), GET_SAP(sab.btAddr), sab.port);
}
At : https://msdn.microsoft.com/en-us/library/ee495768(v=winembedded.60).aspx
In this example, s is not declared, so I assume it's just a valid socket...
Here is the code I've written based on this example:
SOCKET s = socket( AF_BTH, SOCK_STREAM, BTHPROTO_RFCOMM );
if( s == INVALID_SOCKET )
{
return false;
}
SOCKADDR_BTH sab;
int len = sizeof( sab );
if( 0 == getsockname(s, (sockaddr *) & sab, & len ) )
{
// Use the BT address here
closesocket( s );
return true;
}
else
{
closesocket( s );
return false;
}
I had to cast the SOCKADDR_BTH * to a sockaddr *, since the compiler wouldn't let me compile it otherwise, unlike what the example suggests.
This gets me error code 10022, indicating that I'm providing an invalid argument, which doesn't really surprise me given the weird cast I had to do.
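For what it's worth, the cast itself is normal Winsock practice; getsockname() takes a sockaddr*, so every address family casts its own structure into it. On desktop Winsock, error 10022 (WSAEINVAL) from getsockname() usually means the socket has not been bound yet. Assuming the CE 6.0 stack follows the same rule (I cannot verify that here), one thing to try is binding the socket to any local radio and channel first; the field and constant names below come from the desktop ws2bth.h and may need adjusting for the CE headers:
SOCKADDR_BTH local;
memset( &local, 0, sizeof( local ) );
local.addressFamily = AF_BTH;
local.btAddr = 0;           // 0 = any local radio
local.port = BT_PORT_ANY;   // let the stack pick an RFCOMM channel

if( 0 != bind( s, (sockaddr *) &local, sizeof( local ) ) )
{
    closesocket( s );
    return false;
}

SOCKADDR_BTH sab;
int len = sizeof( sab );
if( 0 == getsockname( s, (sockaddr *) &sab, &len ) )
{
    // sab.btAddr should now hold the local Bluetooth address
}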
I also tried another method involving the function BthReadLocalAddr, which is documented on MSDN at: https://msdn.microsoft.com/en-us/library/ms887876.aspx
The function is indeed declared in Bt_api.h, but there is no Btdrt.lib in the CE 6.0 SDK. However, there is a Btd.lib, but it doesn't seem to contain the definition of that function, since I'm getting an unresolved external error, which is not surprising.
How can I get this to work? And maybe find valid documentation about the MS Bluetooth API that doesn't refer to files that do not exist? Thank you.
Consider the following EXAMPLE code:
#include <sys/socket.h>
#include <unistd.h>   // close(), read(), sleep()
int main()
{
int sv[ 2 ] = { 0 };
socketpair( AF_UNIX, SOCK_STREAM | SOCK_CLOEXEC | SOCK_NONBLOCK, 0, sv );
for( unsigned ii = 0; ii < 5; ++ii )
{
int* msg = new int( 123 );
if( -1 == send( sv[ 0 ], &msg, sizeof(int*), MSG_NOSIGNAL ) )
{
delete msg;
}
}
close( sv[0] );
sleep( 1 );
int* msg = 0;
int r;
while( ( r = read( sv[ 1 ], &msg, sizeof(int*) ) ) > 0 )
{
delete msg;
}
return 0;
}
Obviously, the code works fine, but that doesn't mean it's not UB.
I couldn't find anything in the man pages that guarantees that when sv[ 0 ] is closed, the read will still be able to read everything from sv[ 1 ] that was sent by the send.
Maybe the question could be asked like this: since read returns 0 for EOF and the socketpair is SOCK_STREAM, I expect EOF will be "hit" only once everything has been read from the socket and the other side is closed. Is this correct?
AFAIK it can work, but it smells like UB.
The correct way is a graceful shutdown:
shutdown(s, 1), or better shutdown(s, SHUT_WR)
read until EOF on input
only then call close (a sketch applying this to the example above follows the references).
(References: http://msdn.microsoft.com/en-us/library/windows/desktop/ms738547%28v=vs.85%29.aspx, Graceful Shutdown Server Socket in Linux)
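Applied to the socketpair example, that sequence looks roughly like this (same variable names as the question, error handling omitted):
// Writing side: half-close, so nothing more can be sent, but data
// already queued stays readable on the other end.
shutdown( sv[0], SHUT_WR );

// Reading side: drain until read() returns 0, which is the EOF
// produced by the half-close above.
int* msg = 0;
ssize_t r;
while( ( r = read( sv[ 1 ], &msg, sizeof(int*) ) ) > 0 )
{
    delete msg;
}

// Only now tear both descriptors down.
close( sv[0] );
close( sv[1] );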
Edit :
After reading R..'s comment, I wondered if I was not a little confused, did some tests, and read the documentation again. And... I now think what I said is true for general socket usage (including AF_INET sockets), but not for the special AF_UNIX socketpair.
My test put a little more stress on the system: I sent 8 packets of 1024 bytes on a FreeBSD 9 system. I stopped there because sending more would have blocked. And after the close on sv[0] I could still successfully read my 8 packets.
So it works on different kernels, but I could not find a valid reference for it, except that AF_UNIX sockets do not support OOB data.
I could also confirm that using shutdown works fine.
Conclusion :
As far as I am concerned, I would stick to graceful shutdown for closing a socket, but mostly because I do not want to think about the underlying protocol.
I hope somebody else with more knowledge can give a piece of reference documentation.
We are using a USB-serial port converter to establish a serial port connection. We've tested it on a computer with no serial port and were able to initialize it and send commands through the converter to the device successfully. Once we release the .exe file to another PC with the same USB-serial converter, it fails to open the COM port.
The only thing we thought we needed to change in the code is the port number, which we made sure was correct from Device Manager: COM6 on the working computer, and COM11 on the non-working one. We also tried changing COM11 to COM2 (an unused port number). The PC we are trying to make it work on already has 3 real serial ports (COM1, 3 and 4); would they somehow be interfering with this port?
We are using SerialCommHelper.cpp code to initialize the port.
HRESULT CSerialCommHelper:: Init(std::string szPortName, DWORD dwBaudRate,BYTE byParity,BYTE byStopBits,BYTE byByteSize)
{
HRESULT hr = S_OK;
try
{
m_hDataRx = CreateEvent(0,0,0,0);
//open the COM Port
//LPCWSTR _portName =LPCWSTR( szPortName.c_str());
wchar_t* wString=new wchar_t[4096];
MultiByteToWideChar(CP_ACP, 0, szPortName.c_str(), -1, wString, 4096);
m_hCommPort = ::CreateFile(wString,
GENERIC_READ|GENERIC_WRITE,//access ( read and write)
0, //(share) 0:cannot share the COM port
0, //security (None)
OPEN_EXISTING,// creation : open_existing
FILE_FLAG_OVERLAPPED,// we want overlapped operation
0// no templates file for COM port...
);
if ( m_hCommPort == INVALID_HANDLE_VALUE )
{
TRACE ( "CSerialCommHelper : Failed to open COM Port Reason: %d",GetLastError());
ASSERT ( 0 );
std::cout << "This is where the error happens" << std::endl;
return E_FAIL;
}
And we call this using
if( m_serial.Init(comPort, 38400, 0, 1, 8) != S_OK )
where comPort is set correctly, but Init never returns S_OK.
Any help is appreciated! Thank you!
The COM port name syntax changes for COM10 and higher. You need: "\\.\COM10"
as documented here...
http://support.microsoft.com/kb/115831/en-us
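In the Init() code above that means building the device path with the \\.\ prefix before converting it to a wide string. A small sketch; the prefixed form is also accepted for COM1 through COM9, so it can be applied unconditionally:
// "COM11" -> "\\.\COM11" (in source code the prefix is written "\\\\.\\")
std::string devicePath = "\\\\.\\" + szPortName;

wchar_t wPath[ 64 ] = { 0 };
MultiByteToWideChar( CP_ACP, 0, devicePath.c_str(), -1, wPath, 64 );

m_hCommPort = ::CreateFile( wPath,
                            GENERIC_READ | GENERIC_WRITE,
                            0, 0, OPEN_EXISTING,
                            FILE_FLAG_OVERLAPPED, 0 );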
I've run into a rather strange problem:
I use select() in order to determine if a socket becomes readable. However, whenever a client connects, I get a segfault when I call FD_ISSET() to check if a given socket is present in the fd_set.
/* [...] */
while( /* condition */ ){
timeout.tv_sec = 0;
timeout.tv_usec = SELECT_TIMEOUT;
//this simply fills sockets with some file descriptors (passed in by clients - both parameters are passed by reference)
maxfd = this->build_fd_set( clients, sockets );
//wait until something relevant happens
readableCount = select( maxfd + 1, &sockets, (fd_set*)NULL, (fd_set*)NULL, &timeout );
if( readableCount > 0 ){
//Some sockets have become readable
printf( "\nreadable: %d, sockfd: %d, maxfd: %d\n",
readableCount, this->sockfd, maxfd );
//Check if listening socket has pending connections
// SEGFAULT OCCURS HERE
if( FD_ISSET( this->sockfd, &sockets ) ) {
DBG printf( "new connection incoming" );
this->handle_new_connection( clients );
/* [...] */
}else {
// Data is pending on some socket
/* [...] */
}
}else if( readableCount < 0 ) {
//An error occurred
/* [...] */
return;
}else {
// select has timed out
/* [...] */
}
}
EDIT:
Yeah, sorry for the sparse info: I've updated the code.
this->sockfd is set up to be a descriptor for a listening socket, created using this->sockfd = socket( AF_UNIX, SOCK_STREAM, 0 ); and then made listening via listen( this->sockfd, ACCEPT_BACKLOG ).
build_fd_set:
int SvcServer::build_fd_set( const vector<int>& clients, fd_set& sockets ) {
//build up the socket set
FD_ZERO( &sockets );
FD_SET( this->sockfd, &sockets ); //listening socket is always part of the set
int maxfd = this->sockfd;
//Add all currently connected sockets to the list
for( vector<int>::const_iterator it = clients.begin() ; it != clients.end() ; ++it ) {
FD_SET( *it, &sockets );
maxfd = max( maxfd, *it );
}
return maxfd;
}
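Not the cause of this particular crash (see the resolution at the end), but since fd_set has a fixed capacity, a bounds check while filling it is cheap insurance: passing a descriptor that is negative or >= FD_SETSIZE to FD_SET() is undefined behaviour and a classic source of exactly this kind of segfault. A sketch of the guarded loop:
for( vector<int>::const_iterator it = clients.begin() ; it != clients.end() ; ++it ) {
    if( *it < 0 || *it >= FD_SETSIZE ) {
        // FD_SET() on an out-of-range descriptor writes outside the fd_set
        continue;
    }
    FD_SET( *it, &sockets );
    maxfd = max( maxfd, *it );
}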
It really doesn't matter what clients is; it's just empty and meant to be filled once clients connect, which is not happening, since the whole thing segfaults on the first incoming connection.
Also, here's some sample output:
readable: 1, sockfd: 3, maxfd: 3
Segmentation fault
The things I can derive here are:
The call to select() works, readableCount is set correctly
Also, sockfd and maxfd are valid descriptors.
I'm afraid I can't provide you with any debugging info (e.g. gdb), since I'm cross-compiling and gdb is not available on the platform I'm compiling for.
Never mind, I figured it out. *stupid me*
Turns out the segfault was never actually occurring at the suspected position; the last printf before the segfault never got shown because stdout wasn't flushed. The actual segfault occurred a little later and was (of course) my mistake.
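A small takeaway for anyone debugging the same way: stdout is buffered, so a printf issued just before a crash can silently vanish. Either flush it explicitly or send diagnostics to stderr, which is unbuffered by default:
printf( "new connection incoming\n" );
fflush( stdout );   // make the message visible even if the process crashes right after

// or simply route debug output through stderr
fprintf( stderr, "new connection incoming\n" );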
Thanks nevertheless.