I have a TCP server and client. At some point in the server script, I start a process, which needs to be able to get every new connection and send data to it. In order to do so, I have a multiprocessing.Queue(), to which I want to put every new connection from the main process, so that the process I opened can get the connections from it and send data to them. However, it seems that you cannot pass anything you want to a Queue. When I try to pass the connection (a socket object), I get:
Traceback (most recent call last):
File "/usr/lib/python2.7/multiprocessing/queues.py", line 266, in _feed
send(obj)
TypeError: expected string or Unicode object, NoneType found
Are there any alternatives that I could use?
Sending a socket through a multiprocessing.Queue works fine starting with Python 3.4, because from that version on a ForkingPickler is used to serialize the objects put in the queue, and that pickler knows how to serialize sockets and other objects containing a file handle.
The multiprocessing.reduction.ForkingPickler class already exists in Python 2.7 and can pickle sockets, it's just not used by multiprocessing.Queue.
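For comparison, on Python 3.4+ the socket can be put on the queue directly; a minimal sketch (it reuses the httpbin.org request from the example further down):
from multiprocessing import Queue, Process
from socket import socket

def handle(q):
    sock = q.get()  # the socket arrives ready to use; ForkingPickler did the work
    print('rest:', sock.recv(2048))

if __name__ == '__main__':
    sock = socket()
    sock.connect(('httpbin.org', 80))
    sock.send(b'GET /get\r\n')
    q = Queue()
    proc = Process(target=handle, args=(q,))
    proc.start()
    q.put(sock)  # no manual pickling needed on Python 3.4+
    proc.join()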
If you can't switch to Python 3.4+ and really need similar functionality in Python 2.7, a workaround is to create a function that uses the ForkingPickler to serialize objects, e.g.:
from multiprocessing.reduction import ForkingPickler
import StringIO

def forking_dumps(obj):
    buf = StringIO.StringIO()
    ForkingPickler(buf).dump(obj)
    return buf.getvalue()
Instead of sending the socket directly you then need to send its pickled version and unpickle it in the consumer. Simple example:
from multiprocessing import Queue, Process
from socket import socket
import pickle

def handle(q):
    sock = pickle.loads(q.get())
    print 'rest:', sock.recv(2048)

if __name__ == '__main__':
    sock = socket()
    sock.connect(('httpbin.org', 80))
    sock.send(b'GET /get\r\n')
    # first bytes read in parent
    print 'first part:', sock.recv(50)

    q = Queue()
    proc = Process(target=handle, args=(q,))
    proc.start()

    # use the function from above to serialize socket
    q.put(forking_dumps(sock))
    proc.join()
Making sockets pickleable only makes sense here in the context of multiprocessing: it would not make sense to write a pickled socket to a file and use it later, or to try to use it on a different PC or after the original process has ended. Therefore it wouldn't be a good idea to make sockets pickleable globally (e.g. by using the copyreg mechanisms).
Related
I am creating a simple encryption software. The problem I currently have is that sending AES-encrypted file data through a socket doesn't work. At the receiving end, the file that should be written to is empty. I have looked through my code for a good while and can't seem to solve it.
I have made a version without networking.
I have been able to send a small file up to 8 KB on a different version
My program is function based, so it branches off from the main menu to other menus and functions. Since there is a bit of jumping, it would be best to show all the code.
https://github.com/BaconBombz/Dencryptor/blob/Version-2.0/Dencryptor.py
The socket connects, and all required data is sent. Then, the file is AES-encrypted and sent through the socket. The receiving end writes the encrypted data to a file and decrypts it. The program says the file was sent, but on the receiving end it spits out a struct error because the file that should contain the encrypted data is empty.
The linked code is far from minimal, so here's a minimal example of downloading an unencrypted file. Also, TCP is a streaming protocol, and using sleeps to separate your data is incorrect. Define a protocol for the byte stream instead. This is the protocol of my example:
Open the connection.
Send the UTF-8-encoded filename followed by a newline.
Send the encoded file size in decimal followed by a newline.
Send the file bytes.
Close the connection.
Note this is Python 3 code since Python 2 is obsolete and support has ended.
server.py
from socket import *
import os

CHUNKSIZE = 1_000_000

# Make a directory for the received files.
os.makedirs('Downloads', exist_ok=True)

sock = socket()
sock.bind(('', 5000))
sock.listen(1)
with sock:
    while True:
        client, addr = sock.accept()
        # Use a socket.makefile() object to treat the socket as a file.
        # Then, readline() can be used to read the newline-terminated metadata.
        with client, client.makefile('rb') as clientfile:
            filename = clientfile.readline().strip().decode()
            length = int(clientfile.readline())
            print(f'Downloading {filename}:{length}...')
            path = os.path.join('Downloads', filename)
            # Read the data in chunks so it can handle large files.
            with open(path, 'wb') as f:
                while length:
                    chunk = min(length, CHUNKSIZE)
                    data = clientfile.read(chunk)
                    if not data: break  # socket closed
                    f.write(data)
                    length -= len(data)
            if length != 0:
                print('Invalid download.')
            else:
                print('Done.')
client.py
from socket import *
import os

CHUNKSIZE = 1_000_000

filename = input('File to upload: ')

sock = socket()
sock.connect(('localhost', 5000))
with sock, open(filename, 'rb') as f:
    sock.sendall(filename.encode() + b'\n')
    sock.sendall(f'{os.path.getsize(filename)}'.encode() + b'\n')
    # Send the file in chunks so large files can be handled.
    while True:
        data = f.read(CHUNKSIZE)
        if not data: break
        sock.sendall(data)
I'm trying to do an integration via an HTTP socket. I'm using Python to create the socket client and send data to a socket server written in C.
As you can see in the following images, the integration documentation gives an example in C that shows how I must send the data to the server:
Integration documentation example:
1- define record / structure types for the message header and for each message format
2- Declare / Create a client socket object
3- Open the socket component in non blocking mode
4- declare a variable of the data structure type relevant to the API function you wish to call – then fill it with the correct data (including header). Copy the structure data to a byte array and send it through the socket
I've tried to do that using the ctypes module from python:
import ctypes
import socket

class SPMSifHdr(ctypes.Structure):
    _fields_ = [
        ('ui32Synch1', ctypes.c_uint32),
        ('ui32Synch2', ctypes.c_uint32),
        ('ui16Version', ctypes.c_uint16),
        ('ui32Cmd', ctypes.c_uint32),
        ('ui32BodySize', ctypes.c_uint32)
    ]

class SPMSifRegisterMsg(ctypes.Structure):
    _fields_ = [
        ('hdr1', SPMSifHdr),
        ('szLisence', ctypes.c_char*20),
        ('szApplName', ctypes.c_char*20),
        ('nRet', ctypes.c_int)
    ]

body_len = ctypes.sizeof(SPMSifRegisterMsg)

header = SPMSifHdr(ui32Synch1=0x55555555, ui32Synch2=0xaaaaaaaa, ui16Version=1, ui32Cmd=1, ui32BodySize=body_len)
body = SPMSifRegisterMsg(hdr1=header, szLisence='12345', szApplName='MyPmsTest', nRet=1)

socket_connection = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# config is a dict with the socket server connection params
socket_connection.connect((config.get('ip'), int(config.get('port'))))
socket_connection.sendall(bytearray(body))
socket_connection.recv(1024)
When I call the socket recv function it never receives anything, so I used a Windows tool to check the data that I sent, and as you can see in the next image it seems no data is sent:
Socket sniff
I've tried to send even a simple "Hello! world" string and the result is always the same.
The socket connection is open. I know it because I can see how many connections are open from the server panel.
What am I doing wrong?
The error was that the SocketSniff program only shows the sent data if the server returns a response. In this case the server never returned anything because some bytes were missing.
I found this by creating my own socket echo server and checking that the data I was sending was incomplete.
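For reference, a throwaway echo server along those lines might look like this (a minimal sketch, not the actual script used; the port is arbitrary):
import socket

# Minimal echo server: print exactly what arrives, then send it back
# so the client-side sniffer also sees a response.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(('', 9999))
srv.listen(1)
conn, addr = srv.accept()
while True:
    data = conn.recv(1024)
    if not data:
        break
    print('received %d bytes: %r' % (len(data), data))
    conn.sendall(data)
conn.close()
srv.close()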
Mystery solved. :D
I recently encountered ZeroMQ (pyzmq) and I found this very useful piece of code on a website, Client Server with REQ and REP, and I modified it to make only a single request. My code is:
import zmq
import sys
import time
from multiprocessing import Process

port = 5556

def server():
    context = zmq.Context()
    socket = context.socket(zmq.REP)
    socket.bind("tcp://*:%s" % port)
    print "Running server on port: %s" % port
    # serves only 5 requests and dies
    #for reqnum in range(4):
    # Wait for next request from client
    message = socket.recv()
    print "Received request : %s from client" % message
    socket.send("ACK from %s" % port)

def client():
    context = zmq.Context()
    socket = context.socket(zmq.REQ)
    #for port in ports:
    socket.connect("tcp://localhost:%s" % port)
    #for request in range(20):
    print "client Sending request to server"
    socket.send("Hello")
    message = socket.recv()
    print "Received ACK from server" "[", message, "]"
    time.sleep(1)

if __name__ == "__main__":
    Process(target=server, args=()).start()
    Process(target=client, args=()).start()
    time.sleep(1)
I realise that ZeroMQ is powerful, especially with multiprocessing/Multi-threading calls, but I was wondering if it is possible to call the server and client methods without calling them as a Process in __main__. For example, I tried calling them like:
if __name__ == "__main__":
    server()
    client()
For some reason the server started but not the client and I had to hard exit the program.
Is there any way to achieve this without Process calling? If not, then is there a socket program ( with or without a client server type architecture ) that functions exactly like the one above? ( I want a single program, not 2 programs running in different terminals as a classic CL-SE program ).
Using Ubuntu 14.04, 32-bit VM with Python-2.7
Simply put: the server() processing got to start, but the client() never did.
Why?
Because the purely [SERIAL] process scheduling stepped into the server() code, where a Context instance was instantiated and a Socket instance was created, and then the call to the socket.recv() method hung the whole process in an unlimited and uncontrollable waiting state, expecting to receive some message, with the REP-LY formal behaviour pattern ready on the local side but no live counterparty that could have sent any such expected message yet.
Yes, distributed computing has several new dimensions (degrees of freedom) to care about -- the elementary (non-)presence and ordering of events is the one showing up in this trivial scenario.
Wherever I can advocate, I do: NEVER use a blocking form of .recv(), and read about the risk of a principally un-salvageable REQ/REP mutual deadlock (you do not know when it will happen, but you have a certainty that it will, and a certainty that you cannot salvage the mutually deadlocked counterparties once it happens).
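For illustration, a non-blocking receive on the REP side could use a zmq.Poller instead of a bare .recv(); a sketch only, with an arbitrary port and timeout:
import zmq

context = zmq.Context()
socket = context.socket(zmq.REP)
socket.bind("tcp://*:5556")

poller = zmq.Poller()
poller.register(socket, zmq.POLLIN)

# Poll with a timeout instead of blocking forever in .recv()
events = dict(poller.poll(timeout=1000))  # wait at most 1000 ms
if socket in events:
    message = socket.recv()
    socket.send(b"ACK")
else:
    # nothing arrived within the timeout; the process keeps control
    pass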
So, welcome to the realms of distributed-processing reality.
I am using Python 2.7 and pySerial 2.6 to listen to a serial port. I can listen to the port just fine, but I cannot write to it.
import serial

cp = 5
br = 9600
ser = serial.Serial(cp, br)
a = ser.readline()
Using this, I can listen to the outcoming data stream. However, if I want to change the status of the instrument (e.g. set GPS to off) I would write a command:
ser.write('gps=off')
When I do this, I get "6L" returned and the GPS stays on. However, if I connect via TeraTerm I can see the data stream in real time. While the data streams in I can type gps=off followed by a return and suddenly my GPS is off. Why is my command in Python not working like it does in TeraTerm?
UPDATE
If I instead do
a = ser.write('gps=on')
"a" is assigned value of 6. I also tried sending a "junk" command via
a = ser.write('lkjdflksdjflksdjf')
with "a" assigned a value of 17, so it seems to be assigning the length of the string to a, which does not make sense.
I think the problem was that the ser.write commands were getting stuck in the buffer (I am not sure of that, but that is my suspicion). When I checked the input buffer I found it to be full. After flushing it out I was able to write to the instrument.
import serial

ser = serial.Serial(5, 9600)

# the buffer immediately receives data, so ensure it is empty before writing command
while ser.inWaiting() > 0:
    ser.read(1)

# now issue command
ser.write('gps=off\r')
That works.
In the blocking way I can do this:
from scapy.all import *
sniff(filter="tcp and port 80", count=10, prn=lambda x: x.summary())
# Below code will be executed only after 10 packets have been received
do_stuff()
do_stuff2()
do_stuff3()
I want to be able to sniff packets with scapy in a non blocking way, something like this:
def packet_received_event(p):
    print "Packet received event!"
    print p.summary()

# The "event_handler" parameter is my wishful thinking
sniff(filter="tcp and port 80", count=10, prn=lambda x: x.summary(),
      event_handler=packet_received_event)
#I want this to be executed immediately
do_stuff()
do_stuff2()
do_stuff3()
To sum up: my question is pretty clear, I want to be able to continue executing code without the sniff function blocking it.
One option is to open a separate thread for this, but I would like to avoid it and use scapy native tools if possible.
Environment details:
python: 2.7
scapy: 2.1.0
os: ubuntu 12.04 64bit
This functionality was added in https://github.com/secdev/scapy/pull/1999.
It'll be available with Scapy 2.4.3+ (or the github branch). Have a look at the doc over at: https://scapy.readthedocs.io/en/latest/usage.html#asynchronous-sniffing
>>> t = AsyncSniffer(prn=lambda x: x.summary(), store=False, filter="tcp")
>>> t.start()
>>> time.sleep(20)
>>> t.stop()
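Applied to the pattern in the question, that might look like this (a sketch; do_stuff() and friends are the question's placeholders):
from scapy.all import AsyncSniffer

t = AsyncSniffer(filter="tcp and port 80", count=10, prn=lambda x: x.summary())
t.start()
# These run immediately while sniffing continues in the background.
do_stuff()
do_stuff2()
do_stuff3()
t.join()  # optionally wait until the 10 packets have been captured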
Scapy doesn't have an async version of the sniff function. You're going to have to fire threads.
There may be other issues with this, mostly having to do with resource locking.
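A minimal sketch of that thread-based approach (assuming the filter from the question; the rest of the script continues immediately after start()):
import threading
from scapy.all import sniff

def background_sniff():
    # The blocking sniff runs in its own thread, so the main thread stays free.
    sniff(filter="tcp and port 80", count=10, prn=lambda x: x.summary())

t = threading.Thread(target=background_sniff)
t.daemon = True
t.start()
# code here executes right away while sniffing happens concurrently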