IPv6 destination options header - python-2.7

I'm working on a software-defined networking research project, and what I need is to make a simple UDP server that puts a data tag into the destination options field (IPv6) of the UDP packet. I was expecting to do this with either the sendmsg()/recvmsg() calls or with setsockopt() and getsockopt(). However, Python 2.7 doesn't have sendmsg() or recvmsg(), and while I can get setsockopt() to correctly load a tag into the packet (I see it in Wireshark), the getsockopt() call just returns a zero, even when the header is there.
#Python 2.7 client
#This code does put the dest opts header onto the packet correctly
#dst_header is a packed binary string (construction details irrelevant--
# it appears correctly formatted and parsed in Wireshark)
addr = ("::", 5000, 0, 0)
s = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
s.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_DSTOPTS, dst_header)
s.sendto('This is my message ', addr)
#Python 2.7 server
addr = ("::", 5000, 0, 0)
s = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
s.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_RECVDSTOPTS, 1)
s.bind(addr)
data, remote_address = s.recvfrom(MAX)
header_data = s.getsockopt(socket.IPPROTO_IPV6, socket.IPPROTO_DSTOPTS, 1024)
I also tried this in Python 3.4, which does have sendmsg() and recvmsg(), but I just get "OSError: [Errno 22] Invalid argument", even though I'm passing it (apparently) correct types:
s.sendmsg(["This is my message"], (socket.IPPROTO_IPV6, socket.IPV6_DSTOPTS, dst_header), 0, addr) #dst_header is same string as for 2.7 version
It looks like 99% of the usage of sendmsg() and recvmsg() is for passing UNIX file descriptors, which isn't what I want to do. Anybody got any ideas? I thought this would be just a four or five line nothing-special program, but I'm stumped.

OK, I'm going to partially answer my own question here, on the off chance that a search engine will bring somebody here with the same issues as I had.
I got the Python 3.4 code working. The problem was not the header, it was the message body. Specifically, both the message body and the header options value fields must be bytes (or bytearray) objects, stored in an iterable container (here, a list). By passing it ["This is my message"] I was sending in a string, not a bytes object; Python let it go, but the OS couldn't cope with that.
You might say I was "byted" by the changes in the handling of strings in Python 3.X...
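For reference, a minimal sketch of the corrected Python 3.4 calls on both ends (dst_header is assumed to be the same packed bytes object as in the question; addresses and buffer sizes are just placeholders):
import socket

MAX = 4096  # placeholder receive buffer size

# client: the message body goes in a list of bytes objects, the ancillary
# data in a list of (level, type, data) tuples
addr = ("::1", 5000, 0, 0)
s = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
s.sendmsg([b"This is my message"],
          [(socket.IPPROTO_IPV6, socket.IPV6_DSTOPTS, dst_header)],
          0, addr)

# server: ask the kernel to deliver the destination options as ancillary data
r = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
r.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_RECVDSTOPTS, 1)
r.bind(("::", 5000, 0, 0))
data, ancdata, msg_flags, remote_address = r.recvmsg(MAX, socket.CMSG_SPACE(1024))
# ancdata is a list of (cmsg_level, cmsg_type, cmsg_data) tuples;
# the destination options header, if present, shows up there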

Related

C++ HTTP client hangs on read() call after GET request

std::string HTTPrequest = "GET /index.html HTTP/1.1\r\nHost: www.yahoo.com\r\nConnection: close\r\n\r\n";
write(socket, HTTPrequest.c_str(), sizeof(HTTPrequest));
char pageReceived[4096];
int bytesReceived = read(socket, pageReceived, 4096);
I've got an HTTP client program that I run from my terminal. I've also got a webserver program. Using the webserver as a test, I can verify that the socket creation and attachment works correctly.
I create the request as shown above, then write to the socket. Using print statements, I can see that the code moves beyond the write call. However, it hangs on the read call.
I can't figure out what's going on - my formatting looks correct on the request.
Any ideas? Everything seems to work perfectly fine when I connect to my webserver, but both www.yahoo.com and www.google.com cause a hang. I'm on Linux.
In C and C++, sizeof gives you the number of bytes required to hold a type, regardless of its contents. So you are not sending the full request, only sizeof(std::string) bytes. You want HTTPrequest.size() (which gives you the number of bytes the value stored in HTTPrequest takes), not sizeof(HTTPrequest) (which gives you the number of bytes a std::string object itself requires).
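In other words, the one-line fix, keeping everything else in the snippet the same:
write(socket, HTTPrequest.c_str(), HTTPrequest.size());
Note also that write() and read() may transfer fewer bytes than requested, so robust code should check their return values and loop.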

Binding custom layers in scapy

I have a Python script which assembles and sends AVB (IEEE 1722) packets into a network.
The packets are captured with Wireshark.
With another Python script I iterate through the capture file.
But I can't access a few parameters in some layers because Scapy doesn't know them.
So I have to add those layers to Scapy.
Here's the packet in Wireshark:
I added the following code to the file "python2.7/dist-packages/scapy/layers/l2.py"
class ieee1722(Packet):
    name = "IEEE 1722 Packet"
    fields_desc = [XByteField("subtype", 0x00),
                   XByteField("svfield", 0x81),
                   XByteField("verfield", 0x81)]
bind_layers(Dot1Q, ieee1722, type=0x22f0)
When I execute the python script which should grab the parameters in the new layer (IEEE 1722 Protocol), the following error occurs:
"IndexError: Layer [ieee1722] not found"
What's wrong?
OK, I found the solution by editing the type value:
bind_layers(Dot1Q, ieee1722, type=0x88f7)  # ---> works
Dot1Q is the layer shown above the created ieee1722 layer in Wireshark.
You can see the type value by clicking on the layer of a packet in Wireshark.
This question is old; maybe the doc page didn't exist back then, but it does now:
"Adding new protocols"
https://scapy.readthedocs.io/en/latest/build_dissect.html
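For completeness, a minimal sketch of the same binding done in your own script instead of editing the installed l2.py (the field definitions and EtherType are taken from the question; adjust the type= value to whatever Wireshark shows for your capture):
from scapy.all import Packet, XByteField, bind_layers
from scapy.layers.l2 import Dot1Q

class ieee1722(Packet):
    name = "IEEE 1722 Packet"
    fields_desc = [XByteField("subtype", 0x00),
                   XByteField("svfield", 0x81),
                   XByteField("verfield", 0x81)]

# bind to the preceding layer (Dot1Q here), keyed on its EtherType
bind_layers(Dot1Q, ieee1722, type=0x22f0)
# once a packet dissects with that EtherType, its fields are accessible,
# e.g. pkt[ieee1722].subtype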

Python 2.7 -- Reconstruct packets to print HTML

Using Wireshark, I could see the HTML page I was requesting (segment reconstruction). I was not able to use pyshark to do this task, so I turned to Scapy. Using Scapy and sniffing wlan0, I am able to print request headers with this code:
from scapy.all import *

def http_header(packet):
    http_packet = str(packet)
    if 'GET' in http_packet:
        return GET_print(packet)

def GET_print(packet1):
    ret = packet1.sprintf("{Raw:%Raw.load%}\n")
    return ret

sniff(iface='wlan0', prn=http_header, filter="tcp port 80")
Now, I wish to be able to reconstruct the full request to find images and print the html page requested.
What you are searching for is IP packet defragmentation and TCP stream reassembly (see here).
scapy
provides best-effort IP defragmentation via defragment([list_of_packets]) but does not provide generic TCP stream reassembly. Anyway, here's a very basic TCPStreamReassembler that may work for your use case, but it operates on the invalid assumption that a consecutive stream will be split into segments of the maximum segment size (MSS). It concatenates segments == MSS until a segment < MSS is found, and then spits out a reassembled TCP packet with the full payload.
Note that TCP stream reassembly is not trivial, as you have to take care of retransmissions, ordering, ACKs, ...
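The original TCPStreamReassembler isn't reproduced here, but a rough sketch of the heuristic it describes might look like this (assuming in-order segments and the MSS assumption above; names are illustrative):
from scapy.all import TCP, Raw

def reassemble_streams(packets, mss=1460):
    # naively concatenate payloads of full-sized segments; a short segment ends the stream
    streams, current = [], b""
    for pkt in packets:
        if not (pkt.haslayer(TCP) and pkt.haslayer(Raw)):
            continue
        payload = bytes(pkt[Raw].load)
        current += payload
        if len(payload) < mss:
            streams.append(current)
            current = b""
    if current:
        streams.append(current)
    return streams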
tshark
According to this answer, tshark has a command-line option equivalent to Wireshark's "Follow TCP Stream" that takes a pcap and creates multiple output files for all the TCP sessions/"conversations".
Since it looks like pyshark is only an interface to the tshark binary, it should be pretty straightforward to implement that functionality if it is not already there.
With Scapy 2.4.3+, you can use
sniff([...], session=TCPSession)
to reconstruct the HTTP packets
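A minimal sketch of that approach, assuming Scapy 2.4.3+ with its bundled HTTP layer (interface name and callback are placeholders):
from scapy.all import sniff, TCPSession
from scapy.layers.http import HTTPRequest, HTTPResponse  # importing this registers the HTTP dissector

def show(pkt):
    if pkt.haslayer(HTTPRequest):
        req = pkt[HTTPRequest]
        print(req.Method, req.Host, req.Path)
    elif pkt.haslayer(HTTPResponse):
        # the reassembled response body (the HTML) follows as the layer's payload
        print(bytes(pkt[HTTPResponse].payload))

sniff(iface="wlan0", prn=show, session=TCPSession, filter="tcp port 80")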

Python serial fails to write

I am using Python 2.7 and pyserial 2.6 to listen to a serial port. I can listen to the port just fine, but I cannot write to it.
import serial

cp = 5      # serial port number
br = 9600   # baud rate
ser = serial.Serial(cp, br)
a = ser.readline()
Using this, I can listen to the incoming data stream. However, if I want to change the status of the instrument (e.g. set GPS to off), I would write a command:
ser.write('gps=off')
When I do this, I get "6L" returned and the GPS stays on. However, if I connect via TeraTerm I can see the data streaming in, in real time. While the data streams in, I can type gps=off followed by a return and suddenly my GPS is off. Why is my command in Python not working like it does in TeraTerm?
UPDATE
If I instead do
a = ser.write('gps=on')
"a" is assigned value of 6. I also tried sending a "junk" command via
a = ser.write('lkjdflksdjflksdjf')
with "a" assigned a value of 17, so it seems to be assigning the length of the string to a, which does not make sense.
I think the problem was that the ser.write commands were getting stuck in the buffer (I am not sure of that, but that is my suspicion). When I checked the input buffer I found it to be full. After flushing it out, I was able to write to the instrument.
import serial

ser = serial.Serial(5, 9600)
# the buffer immediately receives data, so ensure it is empty before writing the command
while ser.inWaiting() > 0:
    ser.read(1)
# now issue the command
ser.write('gps=off\r')
That works.
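A small sketch of the same fix wrapped in a helper, assuming (as above) that the instrument expects commands terminated with a carriage return; the helper name is just illustrative:
import serial

def send_command(ser, cmd):
    # drain whatever is already sitting in the input buffer
    ser.flushInput()                 # reset_input_buffer() in pyserial 3.x
    # write() returns the number of bytes written -- that is the 6 / 17 seen above
    return ser.write(cmd + '\r')

ser = serial.Serial(5, 9600)
send_command(ser, 'gps=off')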

Serialize and deserialize the message using google protobuf in socket programming in C++

The message format to send to the server side is as below:
package test;
message Test {
    required int32 id = 1;
    required string name = 2;
}
Server.cpp does the encoding:
string buffer;
test::Test original;
original.set_id(0);
original.set_name("original");
original.AppendToString(&buffer);
send(acceptfd,buffer.c_str(), buffer.size(),0);
With this send() call it should send the data to the client, I hope, and I am not getting any errors from this particular code either.
But my concern is the following: how do I decode the above message on the client side using Google Protocol Buffers, so that I can see/print the message?
You should send more than just the protobuf message to be able to decode it on the client side.
A simple solution would be to send the value of buffer.size() over the socket as a 4-byte integer in network byte order, and then send the buffer itself.
The client should first read the buffer's size from the socket and convert it from network to host byte order. Let's denote the resulting value s. The client must then preallocate a buffer of size s and read s bytes from the socket into it. After that, just use MessageLite::ParseFromString to reconstruct your protobuf.
See here for more info on protobuf message methods.
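A minimal sketch of that framing in C++, assuming a connected TCP socket on each side (the read_n() and receive_message() helpers are illustrative, not part of the protobuf API):
#include <arpa/inet.h>
#include <sys/socket.h>
#include <cstdint>
#include <string>
#include "test.pb.h"   // generated from the .proto above

// Read exactly n bytes from the socket (recv may return short reads).
static bool read_n(int fd, char* dst, size_t n) {
    size_t got = 0;
    while (got < n) {
        ssize_t r = recv(fd, dst + got, n - got, 0);
        if (r <= 0) return false;          // error or connection closed
        got += static_cast<size_t>(r);
    }
    return true;
}

// Server side: prefix the serialized message with its length.
//   uint32_t len = htonl(static_cast<uint32_t>(buffer.size()));
//   send(acceptfd, &len, sizeof(len), 0);
//   send(acceptfd, buffer.c_str(), buffer.size(), 0);

// Client side: read the length, then the payload, then parse.
bool receive_message(int sockfd, test::Test* out) {
    uint32_t len_net = 0;
    if (!read_n(sockfd, reinterpret_cast<char*>(&len_net), sizeof(len_net)))
        return false;
    const uint32_t len = ntohl(len_net);
    std::string payload(len, '\0');
    if (len > 0 && !read_n(sockfd, &payload[0], len))
        return false;
    return out->ParseFromString(payload);  // now out->id() / out->name() can be printed
}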
Also, this document discourages the usage of required:
You should be very careful about marking fields as required. If at
some point you wish to stop writing or sending a required field, it
will be problematic to change the field to an optional field – old
readers will consider messages without this field to be incomplete and
may reject or drop them unintentionally. You should consider writing
application-specific custom validation routines for your buffers
instead. Some engineers at Google have come to the conclusion that
using required does more harm than good; they prefer to use only
optional and repeated. However, this view is not universal.