I am working on a ping tool and I am consistently getting an access violation around my sending buffer while calculating the ICMP checksum when using a packet size of 45041 or greater (including ICMP header). Any packet with size 45040 or below throws no error and properly transmits with correct checksum. Abbreviated code is below; the access violation occurs when dereferencing the buffer within the while loop in the checksum function on the first iteration.
typedef struct ICMPHeader
{
    BYTE type;       // ICMP packet type
    BYTE code;       // Type sub code
    USHORT checksum;
    USHORT id;
    USHORT seq;
} ICMPHeader;

typedef struct echoRequest
{
    ICMPHeader icmpHead;
    char *data;
} EchoRequest;
// ...
EchoRequest *sendBuffer = new EchoRequest();
sendBuffer->data = new char[packetSize];
memset((void *)sendBuffer->data, 0xfa, packetSize);
sendBuffer->icmpHead.checksum = ipChecksum((USHORT *)sendBuffer,
                                           packetSize + sizeof(sendBuffer->icmpHead));
// ...
// checksum function
USHORT ipChecksum(USHORT *buffer, unsigned long size)
{
    unsigned long cksum = 0;
    while (size > 1)
    {
        cksum += *buffer++;
        size -= sizeof(USHORT);
    }
    if (size)
        cksum += *(UCHAR *)buffer;
    cksum = (cksum >> 16) + (cksum & 0xffff);
    cksum += (cksum >> 16);
    return (USHORT)(~cksum);
}
Any ideas as to why this is happening?
Exact error wording: Unhandled exception at 0x009C2582 in PingProject.exe: 0xC0000005: Access violation reading location 0x004D5000.
Using Visual Studio Professional 2012 with platform toolset v100 for .NET 4.0
Your ipChecksum function expects a pointer to the data it's supposed to checksum, not a pointer to a structure that contains a pointer to the data to checksum. So first it checksums icmpHead, which is good. But then it checksums the pointer to data, which makes no sense. And then it checksums off the end of the EchoRequest structure.
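The usual fix is to make the buffer contiguous, so the header is immediately followed by the payload and the checksum walks real data. A minimal sketch (names are illustrative, and the id/seq setup is omitted):

// Sketch: one contiguous allocation, header followed by payload.
const size_t totalSize = sizeof(ICMPHeader) + packetSize;
char *raw = new char[totalSize];
ICMPHeader *head = reinterpret_cast<ICMPHeader *>(raw);
head->type = 8;      // ICMP echo request (illustrative)
head->code = 0;
head->checksum = 0;  // must be zero while the checksum is computed
memset(raw + sizeof(ICMPHeader), 0xfa, packetSize);
head->checksum = ipChecksum(reinterpret_cast<USHORT *>(raw), totalSize);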
If you want this code to be read as C++ you need to fix a few things (a sketch applying them follows this list):
- memset, really? In C++ you can value-initialize the buffer instead: new char[packetSize]().
- Use reinterpret_cast to convert one pointer type to another.
- It's generally considered much better practice to use size_t instead of unsigned long.
- Use smart pointers instead of raw new/delete.
- Use static_cast to convert unsigned long to USHORT.
- USHORT is not guaranteed to be 16 bits; use a fixed-width type such as uint16_t instead.
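Putting those notes together, the checksum routine might look like this (a sketch only, keeping the same one's-complement algorithm, not a drop-in replacement):

#include <cstdint>
#include <cstddef>

// Sketch: same one's-complement sum, with fixed-width types,
// size_t, and C++ casts instead of C-style casts.
uint16_t ipChecksum(const uint16_t *buffer, std::size_t size)
{
    uint32_t cksum = 0;
    while (size > 1)
    {
        cksum += *buffer++;
        size -= sizeof(uint16_t);
    }
    if (size) // odd trailing byte
        cksum += *reinterpret_cast<const uint8_t *>(buffer);
    cksum = (cksum >> 16) + (cksum & 0xffff);
    cksum += (cksum >> 16);
    return static_cast<uint16_t>(~cksum);
}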
Edit: You're way above the MTU. Keep your packets under 1 KB; IEEE 802.3 expects 1492 bytes, though this value might vary.
Related
Can someone explain to me how these lines work:

template <class T>
...const T& value)...
// ...
const uint8_t* p = (const uint8_t*)(const void*)&value;

in this code (I2C byte write for an EEPROM)?
template <class T>
uint16_t writeObjectSimple(uint8_t i2cAddr, uint16_t addr, const T& value){
    const uint8_t* p = (const uint8_t*)(const void*)&value;
    uint16_t i;
    for (i = 0; i < sizeof(value); i++){
        Wire.beginTransmission(i2cAddr);
        Wire.write((uint16_t)(addr >> 8));   // MSB
        Wire.write((uint16_t)(addr & 0xFF)); // LSB
        Wire.write(*p++);
        Wire.endTransmission();
        addr++;
        delay(5); // max time for writing in 24LC256
    }
    return i;
}
template <class T>
uint16_t readObjectSimple(uint8_t i2cAddr, uint16_t addr, T& value){
    uint8_t* p = (uint8_t*)(void*)&value;
    uint8_t objSize = sizeof(value);
    uint16_t i;
    for (i = 0; i < objSize; i++){
        Wire.beginTransmission(i2cAddr);
        Wire.write((uint16_t)(addr >> 8));   // MSB
        Wire.write((uint16_t)(addr & 0xFF)); // LSB
        Wire.endTransmission();
        Wire.requestFrom(i2cAddr, (uint8_t)1);
        if(Wire.available()){
            *p++ = Wire.read();
        }
        addr++;
    }
    return i;
}
I think these lines work like pointers? I can't understand how the code stores each type of data correctly when I do this:
struct data{
    uint16_t yr;
    uint8_t mont;
    uint8_t dy;
    uint8_t hr;
    uint8_t mn;
    uint8_t ss;
};
// ...
data myString;
writeObjectSimple(0x50,0,myString);
And then recover the values correctly using
data myStringRead;
readObjectSimple(0x50,0,myStringRead);
Does the I2C byte-write function detect some special character between each data type so it stores everything in the correct place?
Thanks
First I have to state that this code was written by someone not fully familiar with the differences between how C++ and C deal with pointer types. My impression is that this person has a strong C background and was simply trying to stop a C++ compiler from throwing warnings.
Let's break down what this line of code does
const uint8_t* p = (const uint8_t*)(const void*)&value;
The intent here is to take a buffer of an arbitrary type – which we don't even know here, because it's a template type – and treat it as if it were a buffer of unsigned 8 bit integers. The reason is that later on the contents of this buffer are sent over a wire bit by bit (this is called "bit banging").
In C the way to do this would have been to write
const uint8_t* p = (const void*)&value;
This works because in C it is perfectly valid to assign a void* typed pointer to a non-void pointer and vice versa. The important rule set by the C language, however, is that – technically – when you convert a void* pointer to a non-void type, the void* pointer must have been obtained by taking the address (& operator) of an object of that same type. In practice, implementations allow casting a void* typed pointer to any type that is alignment-compatible with the original object, and on most – but not all! – architectures uint8_t buffers may be aligned to any address.
In C++, however, this back-and-forth assignment of void* pointers is not allowed implicitly. C++ requires an explicit cast (which is also why you often see C++ programmers write, in C code, something like struct foo *p = (struct foo*)malloc(…)).
So what you'd write in C++ is
const uint8_t* p = (const uint8_t*)&value;
and that actually works and doesn't throw any warnings. However, some static linter tools will frown upon it. So the double cast (casts are read from right to left) first discards the original typing by casting to void* to satisfy the linter, then casts to the target type to satisfy the compiler.
The proper C++ idiom however would have been to use a reinterpret_cast which most linters will also accept
const uint8_t* p = reinterpret_cast<const uint8_t*>(&value);
However, all this casting still invokes implementation-defined behavior, and when it comes to bit banging you will be hit by endianness issues (at the very least).
Bit banging itself works by extracting each bit of a value one by one and tickling the wires that go in and out of a processor's port accordingly. The operators used here are >> to shift bits around and binary & to "select" particular bits.
So for example when you see a statement like
(v & (1<<x))
then what it does is check whether bit number x is set in the variable v. You can also mask whole subsets of the bits in a variable by masking (= applying the binary & operator – not to be confused with the unary "address of" operator that yields a pointer).
Similarly you can use the | operator to "overlay" the bits of several variables onto each other. Combined with the shift operators you can use this to build the contents of a variable bit-by-bit (with the bits coming in from a port).
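To make the shifting and masking concrete, here is a small illustrative snippet (not taken from the code above):

#include <cstdint>

// Illustrative only: testing, masking, and rebuilding bits in a byte.
void bitDemo()
{
    uint8_t v = 0xA5;                      // 1010 0101
    bool bit2 = (v & (1 << 2)) != 0;       // true: bit 2 of 0xA5 is set
    uint8_t lowNibble = v & 0x0F;          // 0x05: mask a whole subset of bits
    uint8_t rebuilt = 0;
    for (int i = 7; i >= 0; --i)
        rebuilt = (rebuilt << 1) | ((v >> i) & 1); // overlay bits one by one
    (void)bit2; (void)lowNibble; (void)rebuilt;    // rebuilt == 0xA5 again
}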
The target device is an I2C EEPROM, so the general form for writing is to send the destination address followed by some data. To read data from the EEPROM, you write the source address and then switch to a read mode to clock out the data.
First of all, the line:
const uint8_t* p = (const uint8_t*)(const void*)&value;
is simply taking the templated type T and casting away its type, and converting it to a byte array (uint8_t*). This pointer is used to advance one byte at a time through the memory containing value.
In the writeObjectSimple method, it first writes the 16-bit destination address (in big-endian format) followed by a data byte (where p is a data pointer into value):
Wire.write(*p++);
This writes the current byte from value and moves the pointer along one byte. It repeats this for however many bytes are in the type of T. After writing each byte, the destination address is also incremented, and it repeats.
When you code:
data myString;
writeObjectSimple(0x50,0,myString);
the templated writeObjectSimple will be instantiated over the data type, and will write its contents (one byte at a time) starting at address 0, to the device with address 0x50. It uses sizeof(data) to know how many bytes to iterate over.
The read operation works very much the same way, but writes the source address and then requests a read (which is implicit in the LSB of the I2C address) and reads one byte at a time back from the device.
Does the I2C byte-write function detect some special character between each data type so it stores everything in the correct place?
Not really, each transaction simply contains the address followed by the data.
[addr_hi] [addr_lo] [data]
Having explained all that, operating one byte at a time is a very inefficient way of achieving this. The device is a 24LC256, and this 24LC family of EEPROMs supports sequential writes (up to a page in size) in a single I2C transaction. So you can easily send the entire data structure in one transfer and avoid having to retransmit the address (2 bytes for every byte of data). Have a look in the datasheet for the full details.
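As a hedged sketch of what a sequential write could look like with the Arduino Wire API, assuming the 24LC256's 64-byte page and the classic 32-byte AVR Wire buffer (which leaves room for 30 data bytes per transaction after the two address bytes):

#include <Wire.h>

// Sketch only: one address phase, several data bytes per transaction.
// Writes must not cross a 64-byte page boundary on the 24LC family.
uint16_t writeBytesSequential(uint8_t i2cAddr, uint16_t addr,
                              const uint8_t *p, uint16_t len)
{
    uint16_t written = 0;
    while (written < len) {
        uint16_t chunk = len - written;
        uint16_t pageRemain = 64 - (addr % 64);  // stay within one page
        if (chunk > pageRemain) chunk = pageRemain;
        if (chunk > 30) chunk = 30;              // AVR Wire buffer limit
        Wire.beginTransmission(i2cAddr);
        Wire.write((uint8_t)(addr >> 8));        // MSB
        Wire.write((uint8_t)(addr & 0xFF));      // LSB
        for (uint16_t i = 0; i < chunk; i++)
            Wire.write(p[written + i]);
        Wire.endTransmission();
        delay(5);                                // 24LC256 max write-cycle time
        addr += chunk;
        written += chunk;
    }
    return written;
}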
I'm using WinSock to send UDP packets to the server, I need to send the data in big endian. I'm not sure how to convert the byte order of my structure before sending.
I have a struct like this:
struct ConnectIn
{
    std::int64_t ConnectionID = 0x41727101980;
    std::int32_t Action = 0;
    std::int32_t TransactionID;

    ConnectIn(std::int32_t transactionID)
    {
        TransactionID = transactionID;
    }
};
And at the moment I'm sending like this:
ConnectIn msg(123);
int len = sizeof(msg);
int bytesSent = sendto(s, (char*)&msg, len, 0, (SOCKADDR*)&dest, sizeof(address));
How can I convert the byte order of msg to big endian before sending?
If you're curious, the data I'm sending is for the Bit Torrent UDP tracker protocol.
If you want to do this manually then what you do is swap each member individually. You convert the members from the host computer's byte ordering to the network's byte ordering. On Win32 htonll() is for 64-bit integers and htonl() is for 32-bit integers:
#include <Winsock2.h>
ConnectIn msg(123);
msg.ConnectionID = htonll(msg.ConnectionID);
msg.Action = htonl(msg.Action);
msg.TransactionID = htonl(msg.TransactionID);
Then you might also want to send the members individually, to avoid relying on the host system's struct layout. The Windows ABI doesn't insert any padding in this struct, but perhaps for some other struct you use it does. So here's the basic idea:
char buf[sizeof msg.ConnectionID + sizeof msg.Action + sizeof msg.TransactionID];
char *bufi = buf;
std::memcpy(bufi, &msg.ConnectionID, sizeof msg.ConnectionID);
bufi += sizeof msg.ConnectionID;
std::memcpy(bufi, &msg.Action, sizeof msg.Action);
bufi += sizeof msg.Action;
std::memcpy(bufi, &msg.TransactionID, sizeof msg.TransactionID);
bufi += sizeof msg.TransactionID;
int len = sizeof buf;
int bytesSent = sendto(s, buf, len, 0, (SOCKADDR*)&dest, sizeof(address));
Then on the receiving side you use the appropriate ntoh*() functions for 64-bit and 32-bit types to convert from the network's byte ordering to the receiving host's byte ordering.
Yes, the Network Byte Order (NBO) is Big Endian, so you need to find a way to send that structure over the network in that order.
What you're currently doing won't work: you're sending the whole struct but the receiver may have a different endianness, padding and so on.
The easiest options are:
- Sending each field individually with a protocol-defined layout
- Third-party libraries that handle serialization: Google Protobuf is one of the most common ones.
For the first option, there are some functions in the Winsock2 library that take care of this:
- (WSA)ntohx (Network to Host, where x can be s for short or l for long)
- (WSA)htonx (Host to Network, where x can be s for short or l for long)
The WSA variants are slightly different and Windows-only.
The Network Programming Guide
Winsock Reference
One option is to convert each of the numbers individually.
For GCC:
int32_t __builtin_bswap32 (int32_t x)
int64_t __builtin_bswap64 (int64_t x)
For MSVC:
unsigned short _byteswap_ushort(unsigned short value);
unsigned long _byteswap_ulong(unsigned long value);
unsigned __int64 _byteswap_uint64(unsigned __int64 value);
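As a hedged example of applying the MSVC intrinsics to the struct from the question (note these swap unconditionally, so unlike hton* they only produce network order on a little-endian host):

#include <cstdlib> // MSVC declares _byteswap_* here

ConnectIn msg(123);
// Unconditional swaps: correct for network order only on little-endian hosts.
msg.ConnectionID  = _byteswap_uint64(msg.ConnectionID);
msg.Action        = _byteswap_ulong(msg.Action);
msg.TransactionID = _byteswap_ulong(msg.TransactionID);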
I am trying to send a set of three variables, a 64-bit integer and two 32-bit integers, using Boost.Asio. I know how to send data with Boost.Asio, but I am struggling to convert the three variables into something I can send. Any ideas?
The types I'm using for the variables are as follows:
boost::uint64_t
boost::uint32_t
boost::uint32_t
The purpose of this is to send the data as UDP Tracker Connect Request (Bittorrent Protocol), a description of which can be found here: http://www.bittorrent.org/beps/bep_0015.html#udp-tracker-protocol
Offset  Size            Name            Value
0       64-bit integer  connection_id   0x41727101980
8       32-bit integer  action          0 // connect
12      32-bit integer  transaction_id
16
Create a raw memory buffer. Use endian-aware copy functions to place the integers in the buffer. Send the buffer.
What endianness does the BitTorrent protocol use? It's big endian, so any solution here that relies on casting won't work on your typical consumer electronics these days, because those use little-endian format in memory. In creating your buffer to send, you therefore also have to swap the bytes.
Okay, you're trying to match an existing network protocol that has documented its expected byte offset and endianness for each field. This is one of the times where you want to use a raw buffer of uint8_t. Your code should look something like this:
// This is *not* necessarily the same as sizeof(struct containing 1 uint64_t
// and 2 uint32_t).
#define BT_CONNECT_REQUEST_WIRE_LEN 16
// ...
uint8_t send_buf[BT_CONNECT_REQUEST_WIRE_LEN];
cpu_to_be64(connection_id, &send_buf[ 0]);
cpu_to_be32(0 /*action=connect*/, &send_buf[ 8]);
cpu_to_be32(transaction_id, &send_buf[12]);
// transmit 'send_buf' using boost::asio
The cpu_to_be32 function should look like this:
void
cpu_to_be32(uint32_t n, uint8_t *dest)
{
    dest[0] = uint8_t((n & 0xFF000000) >> 24);
    dest[1] = uint8_t((n & 0x00FF0000) >> 16);
    dest[2] = uint8_t((n & 0x0000FF00) >> 8);
    dest[3] = uint8_t((n & 0x000000FF) >> 0);
}
The inverse (be32_to_cpu) and the analogue (cpu_to_be64) are left as exercises. You might also like to try your hand at writing template functions that deduce the appropriate size from their first argument, but personally I think having an explicit indication of the size in the function name makes this kind of code more self-documenting.
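For reference, those exercises might come out like this (the same pattern, sketched under the same assumptions):

void
cpu_to_be64(uint64_t n, uint8_t *dest)
{
    for (int i = 0; i < 8; i++)
        dest[i] = uint8_t(n >> (56 - 8 * i)); // most significant byte first
}

uint32_t
be32_to_cpu(const uint8_t *src)
{
    return (uint32_t(src[0]) << 24) | (uint32_t(src[1]) << 16) |
           (uint32_t(src[2]) <<  8) |  uint32_t(src[3]);
}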
It's easy to convert a structure to array/vector/string which can be sent via boost::asio. For example:
struct Type
{
    boost::uint64_t v1;
    boost::uint32_t v2;
    boost::uint32_t v3;
};
Type t;
std::string str( reinterpret_cast<char*> (&t), sizeof(t) );
I don't know architecture of your application, but it's also possible to create asio::buffer just from memory:
boost::asio::buffer( &t, sizeof(t) );
In this case you should be careful about the lifetime of t.
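For completeness, a hedged sketch of sending that buffer over UDP (the tracker address and port here are placeholders, and the fields would still need byte-swapping to big endian first, as the other answers explain):

#include <boost/asio.hpp>

boost::asio::io_service io; // io_context in newer Boost
boost::asio::ip::udp::socket sock(io);
sock.open(boost::asio::ip::udp::v4());
boost::asio::ip::udp::endpoint dest(
    boost::asio::ip::address::from_string("203.0.113.1"), 6969); // placeholder
Type t; // fill t's fields (byte-swapped) before sending
sock.send_to(boost::asio::buffer(&t, sizeof(t)), dest);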
I'm streaming data from a server. The server sends various big-endian variables, but it also sends plain bytes (representing numbers). One of my SocketClient.read overloads accepts (int length, char* array). I want to pass a pointer to an integer variable to this function and get a 0-255 value (an unsigned byte) in it.
What have I tried:
unsigned int UNSIGNED_BYTE;
socket.read(1, &((char*)&UNSIGNED_BYTE)[0]); //I am changing 1st byte of a variable - C++ uses little endian
//I know that function reads 6, and that is what I find in 1st byte
std::cout<<(int)((char*)&UNSIGNED_BYTE)[0]<<")\n"; //6 - correct
std::cout<<UNSIGNED_BYTE<<")\n"; //3435973638 -What the hell?
According to the above, I am changing the wrong part of the int. But what else should I change?
My class declaration and implementation:
/*Declares:*/
bool read(int bytes, char *text);

/*Implements:*/
bool SocketClient::read(int bytes, char *text) {
    //boost::system::error_code error;
    char buffer = 0;
    int length = 0;
    while(bytes > 0) {
        try
        {
            size_t len = sock.receive(boost::asio::buffer(&buffer, 1)); //Read a byte into the buffer
        }
        catch(const boost::system::system_error& ex) {
            std::cout<<"Socket exception: "<<ex.code()<<'\n';
            return false; //Happens when the peer disconnects, for example
        }
        if(byteEcho)
            std::cout<<(int)(unsigned char)buffer<<' ';
        bytes--; //Decrease amount left to read
        text[length] = buffer;
        length++;
    }
    return true;
}
So firstly:
unsigned int UNSIGNED_BYTE;
Probably isn't very helpfully named, since I very much doubt the architecture you're using defines an int as an 8-bit unsigned integer. Additionally, you're not initializing it to zero, and later you write to only part of it, leaving the rest as garbage. An int is likely to be 32 or 64 bits in size on most modern compilers/architectures.
Secondly:
socket.read(1, &((char*)&UNSIGNED_BYTE)[0])
Is reading 8 bits into a (probably) 32-bit memory location, and the correct end to put those 8 bits in is not down to C++ (as you say in your comments). It's actually down to your CPU, since endianness is a property of the CPU, not the language. Why don't you read the value into an actual char and then simply assign that to an int? That will handle the conversion for you and make your code portable.
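A minimal sketch of that suggestion, using the read signature from the question:

// Read into a real one-byte variable, then let the assignment widen it:
// no casts into the middle of an int, no endianness concerns.
unsigned char byte = 0;
socket.read(1, reinterpret_cast<char *>(&byte));
unsigned int value = byte; // always 0-255
std::cout << value << '\n';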
The problem was that I did not initialise the int. Though the 1st byte was changed, the other 3 bytes had random values.
This makes the solution very simple (and also makes my question likely to be closed as too localised):
unsigned int UNSIGNED_BYTE = 0;
Let's say I want to send the following data to a socket using C or C++, all in one packet:
Headers
-------
Field 1: 2 byte hex
Field 2: 2 byte hex
Field 3: 4 byte hex
Data
----
Field1 : 2 byte hex
Field1 : 8 byte hex
What would the code typically look like to create and send the packet containing all this data?
Let's suppose that your program is already organized to have the header in one struct and the data in another struct. For example, you might have these data structures:
#include <stdint.h>

struct header {
    uint16_t f1;
    uint16_t f2;
    uint32_t f3;
};

struct data {
    uint16_t pf1;
    uint64_t pf2;
};
Let's call this organization "host format". It really doesn't matter to me what the host format is, as long as it is useful to the rest of your program. Let's call the format that you will pass to the send() call "network format". (I chose these names to match the htons (host-to-network-short) and htonl (host-to-network-long) names.)
Here are some conversion functions that we might find handy. Each of these converts your host format structures to a network format buffer.
#include <arpa/inet.h>
#include <string.h>

void htonHeader(struct header h, char buffer[8]) {
    uint16_t u16;
    uint32_t u32;
    u16 = htons(h.f1);
    memcpy(buffer+0, &u16, 2);
    u16 = htons(h.f2);
    memcpy(buffer+2, &u16, 2);
    u32 = htonl(h.f3);
    memcpy(buffer+4, &u32, 4);
}

void htonData(struct data d, char buffer[10]) {
    uint16_t u16;
    uint32_t u32;
    u16 = htons(d.pf1);
    memcpy(buffer+0, &u16, 2);
    u32 = htonl(d.pf2 >> 32);   /* high 32 bits */
    memcpy(buffer+2, &u32, 4);
    u32 = htonl(d.pf2);         /* low 32 bits */
    memcpy(buffer+6, &u32, 4);  /* note: &u32, not u32 */
}

void htonHeaderData(struct header h, struct data d, char buffer[18]) {
    htonHeader(h, buffer+0);
    htonData(d, buffer+8);
}
To send your data, do this:
...
char buffer[18];
htonHeaderData(myPacketHeader, myPacketData, buffer);
send(sockfd, buffer, 18, 0);
...
Again, you don't have to use the header and data structs that I defined. Just use whatever your program needs. The key is that you have a conversion function that writes all of the data, at well-defined offsets, in a well-defined byte order, to a buffer, and that you pass that buffer to the send() function.
On the other side of the network connection, you will need a program to interpret the data it receives. On that side, you need to write the corresponding functions (ntohHeader, etc). Those functions will memcpy the bits out of a buffer and into a local variable, which they can pass to ntohs or ntohl. I'll leave those functions for you to write.
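For illustration, the first of those might look like this (same offsets as htonHeader, opposite direction; ntohData follows the same pattern):

void ntohHeader(const char buffer[8], struct header *h) {
    uint16_t u16;
    uint32_t u32;
    memcpy(&u16, buffer+0, 2);
    h->f1 = ntohs(u16);
    memcpy(&u16, buffer+2, 2);
    h->f2 = ntohs(u16);
    memcpy(&u32, buffer+4, 4);
    h->f3 = ntohl(u32);
}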
Well, typically it would look like preparing that packet structure in a memory buffer (making judicious calls to the htonl family of functions).
It would then use the send, sendto, sendmsg or write functions, hopefully with a lot of care taken with the length of the buffer and good error handling/reporting.
(Or one of the Win32 APIs for the send, if that is the target platform.)
You'll find a good presentation about all this at Beej's Guide to Network Programming.
Specifically for the byte-packing part (with endianness considerations), look at the serialization topic. (There's way more detail in that section than you need for plain fixed-size integer data types.)
The code would look different depending on the OS's networking library (*nix uses Berkeley sockets, Windows uses Winsock, etc.). However, you could create a struct containing all the data you wanted to send in a packet, e.g.,
typedef struct
{
    short field1;
    short field2;
    int field3;
} HeaderStruct;

typedef struct
{
    short field1;
    long long field2;
} PacketDataStruct;
assuming a 32-bit int size.
Edit:
As someone kindly reminded me in the comments, don't forget about converting to and from network byte order. Networking libraries have functions to assist with this, such as ntohs, ntohl, htons, and htonl.
One simple answer is that it would be sent in the format that the receiver expects; that begs the question a bit, though. Assuming the data is a fixed size as shown and the receiving end expects exactly that, you could use a packed (1-byte-alignment) structure and store the data in each field. The reason for using 1-byte alignment is that it is typically easier to make sure both ends are expecting the same data; without it, the structure's layout could differ based on compiler options, 32-bit versus 64-bit architecture, and so on. And, typically, it is expected that you would send the values in network byte order if the hex values are integers. You can use functions such as htons and htonl (and possibly htobe64, if available) to convert them.
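To illustrate the 1-byte-alignment idea, here is a hedged sketch using the common pack pragma (the field names are made up to match the layout in the question):

#include <stdint.h>

// With 1-byte packing the in-memory layout matches the wire layout exactly.
#pragma pack(push, 1)
typedef struct
{
    uint16_t hdr_field1;  // 2 byte hex
    uint16_t hdr_field2;  // 2 byte hex
    uint32_t hdr_field3;  // 4 byte hex
    uint16_t data_field1; // 2 byte hex
    uint64_t data_field2; // 8 byte hex
} WirePacket;
#pragma pack(pop)
// sizeof(WirePacket) == 18; without packing it would likely be 24.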
Assuming that the data is in the structure with the desired byte order, then the send call may be something like this:
ret = send( socket, &mystruct, sizeof( mystruct ), 0 );
That assumes that mystruct is declared as an instance of the structure as opposed to a pointer to the structure.