I am having trouble setting the timeout properly with nanomsg in Python using the PUSH/PULL pattern. I call the set-socket-option function and pass in a ctypes object, and both nn_setsockopt calls return success. Am I missing something?
The sockets send and receive correctly, but the timeout is effectively infinite. When I call the get-socket-option function it returns the value I just set, so my input does seem to have some effect.
NN_RCVTIMEO = 5
NN_SNDTIMEO = 4
NN_SOL_SOCKET = 0
message = ""
timeout = ctypes.create_string_buffer(b'500')
#Bind input socket
socket_in = nn_wrapper.nn_socket(AF_SP, PULL)
success = nn_wrapper.nn_setsockopt(socket_in, NN_SOL_SOCKET, NN_RCVTIMEO, timeout)
nn_wrapper.nn_bind(socket_in, 'tcp://127.0.0.1:64411')
time.sleep(0.2)
print("SUCCESS? " + str(success))
# Send inquiry
socket_out = nn_wrapper.nn_socket(AF_SP, PUSH)
success = nn_wrapper.nn_setsockopt(socket_out, NN_SOL_SOCKET, NN_SNDTIMEO, timeout)
nn_wrapper.nn_connect(socket_out, 'tcp://127.0.0.1:64411')
time.sleep(0.2)
print("SUCCESS? " + str(success))
nn_wrapper.nn_send(socket_out, b'HELLO', 0)
#Received...
bytes, message = nn_wrapper.nn_recv(socket_in, 0)
nn_wrapper.nn_close(socket_in)
nn_wrapper.nn_close(socket_out)
It looks like when working with C function wrappers that take arguments by reference (in this case an integer), you have to use the 'struct' module. This is how one would pass the integer value 500 to the nn_setsockopt() function:
from struct import Struct
from nanomsg import wrapper as nn_wrapper
from nanomsg import Socket, PULL, PUSH, AF_SP
import ctypes
NN_RCVTIMEO = 5
NN_SOL_SOCKET = 0
INT_PACKER = Struct(str('i'))
timeout = (ctypes.c_ubyte * INT_PACKER.size)()
INT_PACKER.pack_into(timeout, 0, 500)
socket_in = nn_wrapper.nn_socket(AF_SP, PULL)
nn_wrapper.nn_bind(socket_in, 'tcp://127.0.0.1:60000')
nn_wrapper.nn_setsockopt(socket_in, NN_SOL_SOCKET, NN_RCVTIMEO, timeout)
Late to the party; hopefully this will help the next person.
The author of the library already thought of these problems and provides a set_int_option method on the Python Socket object.
This worked for me:
import nanomsg
class ManagementClient:
def __init__(self, endpoint):
self._client_socket = nanomsg.Socket(nanomsg.REQ)
self._client_socket.set_int_option(nanomsg.SOL_SOCKET, nanomsg.RCVTIMEO, 5 * 1000)
self._client_socket.connect("tcp://" + endpoint)
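For the PULL side in the original question, the same approach would look roughly like the sketch below (a minimal example, assuming the nanomsg package's Socket, PULL, SOL_SOCKET, RCVTIMEO and NanoMsgAPIError names; the address is a placeholder). Once the 500 ms timeout is in effect, an idle recv() should raise an exception instead of blocking forever:

import nanomsg

# Minimal sketch: PULL socket with a 500 ms receive timeout via the
# high-level Socket API.
socket_in = nanomsg.Socket(nanomsg.PULL)
socket_in.set_int_option(nanomsg.SOL_SOCKET, nanomsg.RCVTIMEO, 500)
socket_in.bind('tcp://127.0.0.1:64411')

try:
    message = socket_in.recv()           # returns bytes, or raises on timeout
    print(message)
except nanomsg.NanoMsgAPIError as e:     # expected when nothing arrives in time
    print('receive timed out:', e)
finally:
    socket_in.close()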
I have provided the following Python code, but the problem is that when it does not receive any input it starts showing an error. How can I modify the code so that this error does not appear?
#!/usr/bin/env python
from roslib import message
import rospy
import sensor_msgs.point_cloud2 as pc2
from sensor_msgs.msg import PointCloud2, PointField
import numpy as np
import ros_numpy
from geometry_msgs.msg import Pose

# listener
def listen():
rospy.init_node('listen', anonymous=True)
rospy.Subscriber("/Filtered_points_x", PointCloud2, callback_kinect)
def callback_kinect(data):
pub = rospy.Publisher('lidar_distance',Pose, queue_size=10)
data_lidar = Pose()
xyz_array = ros_numpy.point_cloud2.pointcloud2_to_xyz_array(data)
print(xyz_array)
mini_data = min(xyz_array[:,0])
print("mini_data", mini_data)
data_lidar.position.x = mini_data
pub.publish(data_lidar)
print("data_points", data_lidar.position.x)
height = int (data.height / 2)
middle_x = int (data.width / 2)
middle = read_depth (middle_x, height, data) # do stuff with middle
def read_depth(width, height, data) :
if (height >= data.height) or (width >= data.width) :
return -1
data_out = pc2.read_points(data, field_names= ('x','y','z'), skip_nans=True, uvs=[[width, height]])
int_data = next(data_out)
rospy.loginfo("int_data " + str(int_data))
return int_data
if __name__ == '__main__':
try:
listen()
rospy.spin()
except rospy.ROSInterruptException:
pass
The following is the error that I mentioned:
[[ 7.99410915 1.36072445 -0.99567264]]
('mini_data', 7.994109153747559)
('data_points', 7.994109153747559)
[INFO] [1662109961.035894]: int_data (7.994109153747559, 1.3607244491577148, -0.9956726431846619)
[]
[ERROR] [1662109961.135572]: bad callback: <function callback_kinect at 0x7f9346d44230>
Traceback (most recent call last):
File "/opt/ros/kinetic/lib/python2.7/dist-packages/rospy/topics.py", line 750, in _invoke_callback
cb(msg)
File "/home/masoumeh/catkin_ws/src/yocs_velocity_smoother/test4/distance_from_pointcloud.py", line 27, in callback_kinect
mini_data = min(xyz_array[:,0])
ValueError: min() arg is an empty sequence
The code is still receiving input, but specifically it is receiving an empty array. Calling min() on a slice of that empty array is what causes the error. Instead, you should check that the array has elements before the line min(xyz_array[:,0]). It can be as simple as:
if xyz_array.size == 0:
    return
As another note, you are creating a publisher inside the callback. You should not do this, as a new publisher is created every time the callback is invoked. Instead, create it once and reuse it, for example as a module-level (global) variable; see the sketch below.
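A minimal sketch of how both fixes could fit together, reusing the topic names and message types from the question (only the relevant parts are shown; rospy.spin() is still called from __main__ as in the original):

import rospy
import ros_numpy
from sensor_msgs.msg import PointCloud2
from geometry_msgs.msg import Pose

pub = None  # created once in listen(), reused by every callback invocation

def callback_kinect(data):
    xyz_array = ros_numpy.point_cloud2.pointcloud2_to_xyz_array(data)
    if xyz_array.size == 0:      # empty cloud: nothing to publish this cycle
        return
    data_lidar = Pose()
    data_lidar.position.x = min(xyz_array[:, 0])
    pub.publish(data_lidar)

def listen():
    global pub
    rospy.init_node('listen', anonymous=True)
    pub = rospy.Publisher('lidar_distance', Pose, queue_size=10)
    rospy.Subscriber("/Filtered_points_x", PointCloud2, callback_kinect)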
I am trying to write a bool value to a PLC tag (to turn a relay on or off) over OPC UA, acting as a client writing to the OPC UA server running on a Siemens S7-1512 PLC. The client must be implemented in C/C++.
I have tried a few different GUI clients with no issues.
I have also tried Python SDKs, including freeopcua. I had slight issues, but I was able to write the value after setting an attribute in the write request. With open62541, however, I cannot find a solution: the status code is good, but the value is not changed (reading values works fine).
Python working request:
node.set_attribute(ua.AttributeIds.Value, ua.DataValue(not node.get_value()))
The C request code that does not work:
UA_WriteRequest request;
UA_WriteRequest_init(&request);
request.nodesToWrite = UA_WriteValue_new();
request.nodesToWriteSize = 1;
request.nodesToWrite[0].nodeId = UA_NODEID_STRING_ALLOC(3, "\"VALUE\"");
request.nodesToWrite[0].attributeId = UA_ATTRIBUTEID_VALUE;
request.nodesToWrite[0].value.hasValue = true;
request.nodesToWrite[0].value.value.type = &UA_TYPES[UA_TYPES_BOOLEAN];
request.nodesToWrite[0].value.value.storageType = UA_VARIANT_DATA_NODELETE;
request.nodesToWrite[0].value.hasServerTimestamp = true;
request.nodesToWrite[0].value.hasSourceTimestamp = true;
request.nodesToWrite[0].value.sourceTimestamp = UA_DateTime_now();
request.nodesToWrite[0].value.value.data = val;
request.requestHeader.timestamp = UA_DateTime_now();
request.requestHeader.authenticationToken = UA_NODEID_NUMERIC(0, UA_NS0ID_SESSIONAUTHENTICATIONTOKEN);
//write to client service
UA_WriteResponse wResp = UA_Client_Service_write(client, request);
I expect the PLC tag to be changed to the opposite value, or at least some information on why it will not work.
Better to use the client high-level API:
UA_NodeId nodeid = UA_NODEID_STRING_ALLOC(3, "\"VALUE\"");
UA_Boolean value = true;
UA_Variant *var= UA_Variant_new();
UA_Variant_setScalarCopy(var, &value, &UA_TYPES[UA_TYPES_BOOLEAN]);
UA_StatusCode ret = UA_Client_writeValueAttribute(client, nodeid, var);
....
UA_Variant_delete(var);
The reason it is rejected is that you try to set the timestamps in your write request. Most servers refuse that.
I am trying to read data from the serial connection and do some work when the data matches my string, but I am getting errors when I close the serial connection port.
For some reason I do not see this error if I use the serial.readline() method.
import time
import serial
from Queue import Queue
from threading import Thread
class NonBlocking:
def __init__(self, serial_connection, radio_serial_connection):
self._s = serial_connection
self._q = Queue()
self.buf = bytearray()
def _populateQueue(serial_connection, queue):
if type(serial_connection) == str:
return
self.s = serial_connection
while True:
i = self.buf.find(b"\n")
if i >= 0:
r = self.buf[:i + 1]
self.buf = self.buf[i + 1:]
queue.put(r)
while True:
i = max(1, min(2048, self.s.in_waiting))
data = self.s.read(i)
i = data.find(b"\n")
if i >= 0:
r = self.buf + data[:i + 1]
self.buf[0:] = data[i + 1:]
a = r.split('\r\n')
for item in a:
if item:
queue.put(item)
else:
self.buf.extend(data)
self._t = Thread(target=_populateQueue, args=(self._s, self._q))
self._t.daemon = True
self._t.start()
def read_all(self, timeout=None):
data = list()
if self._q.empty():
pass
while not self._q.empty():
data.append(self._q.get(block=timeout is not None, timeout=timeout))
return data
class SerialCommands:
def __init__(self, port, baudrate):
self.serial_connection = serial.Serial(port, baudrate)
self.queue_data = NonBlocking(self.serial_connection, '')
def read_data(self):
returned_info = self.queue_data.read_all()
return returned_info
def close_q(self):
self.serial_connection.close()
class qLibrary:
def __init__(self):
self.q = None
self.port = None
def close_q_connection(self):
self.q.close_q()
def establish_connection_to_q(self, port, baudrate=115200, delay=2):
self.delay = int(delay)
self.port = port
try:
if not self.q:
self.q = SerialCommands(self.port, int(baudrate))
except IOError:
raise AssertionError('Unable to open {0}'.format(port))
def verify_event(self, data, timeout=5):
timeout = int(timeout)
data = str(data)
# print data
while timeout:
try:
to_analyze = self.q.read_data()
for item in to_analyze:
print "item: ", item
if str(item).find(str(data)) > -1:
print "Found data: '{0}' in string: '{1}'".format(data, item)
except:
pass
time.sleep(1)
timeout -= 1
if __name__ == '__main__':
q1 = qLibrary()
q1.establish_connection_to_q('COM5')
q1.verify_event("ATE")
q1.close_q_connection()
I expect the code to close the serial connection without any exceptions or errors.
The output is:
Exception in thread Thread-1:
Traceback (most recent call last):
File "C:\Python27\Lib\threading.py", line 801, in __bootstrap_inner
self.run()
File "C:\Python27\Lib\threading.py", line 754, in run
self.__target(*self.__args, **self.__kwargs)
File "C:/Program Files (x86)/serialtest1.py", >line 27, in _populateQueue
data = self.s.read(i)
File "C:\Program Files (x86)\venv\lib\site->packages\serial\serialwin32.py", line 283, in read
ctypes.byref(self._overlapped_read))
TypeError: byref() argument must be a ctypes instance, not 'NoneType'
If you define your serial port with no timeout, it gets the default setting timeout=None, which means a call to serial.read(x) will block until x bytes have been read.
If those x bytes never arrive, your code will be stuck there waiting forever, or at least until enough data arrives on the buffer to bring the total number of bytes received up to x.
If you mix that with threading, I am afraid you are quite likely closing the port while a read is still in progress.
You can probably fix this issue just by defining a sensible read timeout on your port, or by changing the way you read. The general advice is to set a timeout that works for your application and to read at least the maximum number of bytes you expect. Reading your code, that seems to be what you wanted to do; if so, you simply forgot to set the timeout.
If you have a reason not to set a timeout, or you want to keep your reading routine as it is, you can make your code work by cancelling the read before closing. You can do that with Serial.cancel_read(); see the sketch below.
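Both options, as a minimal sketch (the port name and timings are placeholders; cancel_read() requires pyserial 3.1 or newer):

import serial

# Option 1: open the port with a read timeout so read() can never block forever.
ser = serial.Serial('COM5', 115200, timeout=1)    # timeout is in seconds
chunk = ser.read(2048)       # returns after at most ~1 s with whatever arrived
ser.close()

# Option 2: keep blocking reads, but cancel the pending read before closing.
ser = serial.Serial('COM5', 115200)               # timeout=None, blocking reads
# ... a reader thread is blocked inside ser.read(...) ...
ser.cancel_read()            # makes the pending read return immediately
ser.close()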
I am trying to receive a stream from a gRPC server whose code is not accessible to me.
Here is the code of my gRPC client:
class Client():
def __init__(self, grpcIP, grpcPort, params):
self.channel = grpc.insecure_channel('%s:%d' % (grpcIP, grpcPort))
grpc.channel_ready_future(self.channel).result()
self.stub = pb2_grpc.DataCloneStub(self.channel)
self.params = params
self.host = grpcIP
def StartSender(self):
params = pb2.StartParameters(**self.params)
try:
res = self.stub.Start(params)
print(type(res))
for pr in res:
print(pr.current_progress)
except grpc.RpcError as e:
print(e)
Here are snippets from proto file that is used.
Method:
rpc Start (StartParameters) returns (stream Progress) {}
Message in stream:
message Progress {
double current_progress = 1;
uint64 total_sent = 2;
uint64 total_size = 3;
string file_name = 4;
Error error = 5;
}
As I understand it, self.stub.Start(params) should return an iterator yielding objects of type Progress. The problem is that it returns something of type grpc._channel._Rendezvous. I can't iterate through the response, and it doesn't raise any exceptions either.
Has anyone experienced such behavior? Is it possible that the issue does not come from the client side?
The class grpc._channel._Rendezvous is itself an iterator; you can check the streaming RPC example here.
The code you posted is a correct implementation. If the program blocks at the for pr in res line, it is possible that the server is not sending any response (or is taking a long time), hence the client side is blocked. A deadline on the call, as sketched below, helps tell the two apart.
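A sketch of the questioner's StartSender with a deadline added (the 30-second value is arbitrary; pb2 and self.stub come from the original code). If the server never sends anything, the iteration then fails with an RpcError carrying StatusCode.DEADLINE_EXCEEDED instead of hanging silently:

import grpc

def StartSender(self):
    params = pb2.StartParameters(**self.params)
    try:
        # timeout (in seconds) puts a deadline on the whole streaming call
        res = self.stub.Start(params, timeout=30)
        for pr in res:                      # _Rendezvous is iterable
            print(pr.current_progress)
    except grpc.RpcError as e:
        # A silent or stalled server surfaces here as DEADLINE_EXCEEDED
        print(e.code(), e.details())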
I am trying to send a file across the network using Twisted with the LineReceiver protocol. The issue I am seeing is that when I read a binary file and try to send the chunks, they simply do not get sent.
I am reading and sending the file using the following code:
import json
import math
import time
import threading
from twisted.internet import reactor, threads
from twisted.protocols.basic import LineReceiver
from twisted.internet import protocol
MaximumMsgSize = 15500
trySend = True
connectionToServer = None
class ClientInterfaceFactory(protocol.Factory):
def buildProtocol(self, addr):
return WoosterInterfaceProtocol(self._msgProcessor, self._logger)
class ClientInterfaceProtocol(LineReceiver):
def connectionMade(self):
connectionToServer = self
def _DecodeMessage(self, rawMsg):
header, body = json.loads(rawMsg)
return (header, json.loads(body))
def ProcessIncomingMsg(self, rawMsg, connObject):
# Decode raw message.
decodedMsg = self._DecodeMessage(rawMsg)
self.ProccessTransmitJobToNode(decodedMsg, connObject)
def _BuildMessage(self, id, msgBody = {}):
msgs = []
fullMsgBody = json.dumps(msgBody)
msgBodyLength = len(fullMsgBody)
totalParts = 1 if msgBodyLength <= MaximumMsgSize else \
int(math.ceil(msgBodyLength / MaximumMsgSize))
startPoint = 0
msgBodyPos = 0
for partNo in range(totalParts):
msgBodyPos = (partNo + 1) * MaximumMsgSize
header = {'ID' : id, 'MsgParts' : totalParts,
'MsgPart' : partNo }
msg = (header, fullMsgBody[startPoint:msgBodyPos])
jsonMsg = json.dumps(msg)
msgs.append(jsonMsg)
startPoint = msgBodyPos
return (msgs, '')
def ProccessTransmitJobToNode(self, msg, connection):
rootDir = '../documentation/configs/Wooster'
exportedFiles = ['consoleLog.txt', 'blob.dat']
params = {
'Status' : 'buildStatus',
'TaskID' : 'taskID',
'Name' : 'taskName',
'Exports' : len(exportedFiles),
}
msg, statusStr = self._BuildMessage(101, params)
connection.sendLine(msg[0])
for filename in exportedFiles:
with open (filename, "rb") as exportFileHandle:
data = exportFileHandle.read().encode('base64')
params = {
ExportFileToMaster_Tag.TaskID : taskID,
ExportFileToMaster_Tag.FileContents : data,
ExportFileToMaster_Tag.Filename : filename
}
msgs, _ = self._BuildMessage(MsgID.ExportFileToMaster, params)
for m in msgs:
connection.sendLine(m)
def lineReceived(self, data):
threads.deferToThread(self.ProcessIncomingMsg, data, self)
def ConnectFailed(reason):
print 'Connection failed..'
reactor.callLater(20, reactor.callFromThread, ConnectToServer)
def ConnectToServer():
print 'Connecting...'
from twisted.internet.endpoints import TCP4ClientEndpoint
endpoint = TCP4ClientEndpoint(reactor, 'localhost', 8181)
deferItem = endpoint.connect(factory)
deferItem.addErrback(ConnectFailed)
netThread = threading.Thread(target=reactor.run, kwargs={"installSignalHandlers": False})
netThread.start()
reactor.callFromThread(ConnectToServer)
factory = ClientInterfaceFactory()
protocol = ClientInterfaceProtocol()
while 1:
time.sleep(0.01)
if connectionToServer == None: continue
if trySend == True:
protocol.ProccessTransmitJobToNode(None, None)
trySend = False
Is there something I am doing wrong? If a single write occurs then the file is sent; it is when the write is multi-part, or there is more than one file, that it struggles.
Note: I have updated the question with a crude piece of sample code in the hope it makes sense.
_BuildMessage returns a two-tuple: (msgs, '').
Your network code iterates over this:
msgs = self._BuildMessage(MsgID.ExportFileToMaster, params)
for m in msgs:
So your network code first tries to send a list of JSON-encoded data and then tries to send the empty string. It most likely raises an exception, because you cannot send a list of anything using sendLine. If you aren't seeing the exception, you've forgotten to enable logging. You should always enable logging so you can see any exceptions that occur.
Also, you're using time.sleep and you shouldn't do this in a Twisted-based program. If you're doing it to try to avoid overloading the receiver, you should use TCP's native backpressure instead, by registering a producer which can receive pause and resume notifications (a sketch of such a producer follows). Regardless, time.sleep (and your loop over all the data) will block the entire reactor thread and prevent any progress from being made. The consequence is that most of the data will be buffered locally before being sent.
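For reference, a registered push producer might look roughly like this sketch (ChunkProducer and its wiring are illustrative, not part of Twisted or the original code). The transport calls pauseProducing when its buffers fill and resumeProducing once they drain:

from zope.interface import implementer
from twisted.internet.interfaces import IPushProducer

@implementer(IPushProducer)
class ChunkProducer(object):
    """Feeds pre-built lines to a protocol while honouring TCP backpressure."""
    def __init__(self, proto, lines):
        self._proto = proto
        self._lines = iter(lines)
        self._paused = False

    def resumeProducing(self):
        self._paused = False
        for line in self._lines:
            self._proto.sendLine(line)
            if self._paused:           # transport asked us to stop for now
                return
        self._proto.transport.unregisterProducer()   # all lines sent

    def pauseProducing(self):
        self._paused = True

    def stopProducing(self):
        self._paused = True

# Wiring, e.g. inside the protocol once the messages are built:
# producer = ChunkProducer(self, msgs)
# self.transport.registerProducer(producer, True)   # True = push/streaming
# producer.resumeProducing()                        # start sending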
Also, your code calls LineReceiver.sendLine from a non-reactor thread. This has undefined results, but you can probably count on it not to work. If you must trigger a send from another thread, hand it off to the reactor with reactor.callFromThread, as in the sketch below.
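A minimal sketch of that hand-off (assuming conn refers to the connected protocol instance created by the factory, not the standalone one constructed at module level):

from twisted.internet import reactor

# From the worker thread, do not call conn.sendLine(...) directly; instead
# schedule the call to run on the reactor thread:
reactor.callFromThread(conn.sendLine, 'HELLO')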
This loop runs in the main thread:
while 1:
time.sleep(0.01)
if connectionToServer == None: continue
if trySend == True:
protocol.ProccessTransmitJobToNode(None, None)
trySend = False
while the reactor runs in another thread:
netThread = threading.Thread(target=reactor.run, kwargs={"installSignalHandlers": False})
netThread.start()
ProccessTransmitJobToNode simply calls sendLine on the connection:
def ProccessTransmitJobToNode(self, msg, connection):
rootDir = '../documentation/configs/Wooster'
exportedFiles = ['consoleLog.txt', 'blob.dat']
params = {
'Status' : 'buildStatus',
'TaskID' : 'taskID',
'Name' : 'taskName',
'Exports' : len(exportedFiles),
}
msg, statusStr = self._BuildMessage(101, params)
connection.sendLine(msg[0])
You should probably remove the use of threading entirely from the application. Time-based events are better managed using reactor.callLater; your main-thread loop effectively generates a call to ProccessTransmitJobToNode one hundred times a second (modulo the effects of the trySend flag). A single-threaded sketch of that follows.
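A minimal sketch of the single-threaded shape this could take (the 0.01 s interval mirrors the original loop and is otherwise arbitrary; it assumes connectionToServer is actually assigned by the connected protocol, which in the original code would also require a global statement in connectionMade):

from twisted.internet import reactor

def maybe_send():
    # Runs on the reactor thread, so sending from here is safe.
    global trySend
    if connectionToServer is not None and trySend:
        connectionToServer.ProccessTransmitJobToNode(None, None)
        trySend = False
    reactor.callLater(0.01, maybe_send)   # re-schedule instead of sleeping

reactor.callWhenRunning(ConnectToServer)  # connect once the reactor is up
reactor.callLater(0.01, maybe_send)
reactor.run()                             # single thread: no netThread needed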
You may also want to take a look at https://github.com/twisted/tubes as a better way to manage large amounts of data with Twisted.