Populating a struct resembling an IP header with nibbles/sub-8-bit values - C++

I've been wanting to get into networking with raw sockets recently and have decided to perform a generic ICMP ping using C++ and raw sockets.
I started with making a struct called IP_Header, defined as such:
struct IP_Header {
uint8_t version : 4;
uint8_t IHL : 4;
uint8_t DSCP : 6;
uint8_t ECN : 2;
uint16_t total_len;
uint16_t ident;
uint16_t flags : 3;
uint16_t frag_offset : 13;
uint8_t ttl;
uint8_t proto;
uint16_t header_chksum;
uint32_t src;
uint32_t dst;
};
I then populated this struct with some default values:
void populateHeaderDefault(IP_Header* ip) {
ip->version = 4;
ip->IHL = 5;
ip->DSCP = 0;
ip->ECN = 0;
ip->total_len = htons(20); //header only
ip->ident = htons(1);
ip->flags = 2;
ip->frag_offset = 0;
ip->ttl = 64;
ip->proto = IPPROTO::IPPROTO_HOPOPTS;
ip->header_chksum = 0;
IN_ADDR ia;
inet_pton(AF_INET, "192.168.178.31", &ia);
ip->src = ia.S_un.S_addr;
inet_pton(AF_INET, "127.0.0.1", &ia);
ip->dst = ia.S_un.S_addr;
ip->header_chksum = header_checksum(ip, sizeof(IP_Header));
}
However, the resulting IP header Version/IHL byte contains 0x54 instead of 0x45, and the short that holds the flags and fragmentation offset is 0x0200 instead of 0x4000. (I'm comparing my values against an exact copy of the packet recreated in Scapy.)
So my question is: how would I fix these values? I know manually assigning the right value would probably work, but I'd like to use nibbles for better accessibility.

So my question is how I would fix these values?
The values in the struct are already correct. The problem is in how you read them back out: you simply cannot rely on the order or packing of bit fields, because their layout is implementation-defined.
Here is a correct way to get 0x45 from the Version/IHL:
uint8_t verIHL = ip->version << 4 | ip->IHL;

You can declare the version and header-length bit fields based on the host machine's byte order, similar to the Linux IP header shown below, which ensures the right values are set.
#if defined(__LITTLE_ENDIAN_BITFIELD)
__u8 ihl:4,
version:4;
#elif defined (__BIG_ENDIAN_BITFIELD)
__u8 version:4,
ihl:4;
#else
#error "Please fix <asm/byteorder.h>"
#endif
Secondly, you can define the supported IP flags and set the required values as below:
#define IP_FLAG_RF 0x8000 /* reserved fragment flag */
#define IP_FLAG_DF 0x4000 /* dont fragment flag */
#define IP_FLAG_MF 0x2000 /* more fragments flag */
ip->flags = htons(IP_FLAG_DF); /* assumes a Linux-style combined 16-bit frag_off field; a separate 3-bit flags field would truncate this value */
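Since bit-field layout is implementation-defined, a portable alternative is to pack the sub-byte fields by hand with shifts and masks. The helpers below are a sketch (the names are mine, not from the question):

```cpp
#include <cstdint>

// Pack the version/IHL nibbles without relying on bit-field layout:
// version in the high nibble, IHL in the low nibble -> 0x45 for IPv4 / 5 words.
inline uint8_t pack_ver_ihl(uint8_t version, uint8_t ihl) {
    return static_cast<uint8_t>((version << 4) | (ihl & 0x0F));
}

// Flags in the top 3 bits, fragment offset in the low 13 bits.
// The result is in host order, so apply htons() before sending.
inline uint16_t pack_flags_frag(uint16_t flags, uint16_t frag_offset) {
    return static_cast<uint16_t>((flags << 13) | (frag_offset & 0x1FFF));
}
```

With the question's values, pack_ver_ihl(4, 5) yields the expected 0x45 and pack_flags_frag(2, 0) yields 0x4000, matching the Scapy reference packet.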

Related

Data corruption after casting byte array to struct in C

I have what should be a simple project I'm attempting to complete on two ARM Cortex Arduinos. I need to send data as a packed byte array over the air to a listening board on the other side. I'm using two RFM69HCW transceivers to send and receive the data. The library I'm using expects data of uint8_t* type -- an alias of unsigned char* -- to send, and the same to receive. The same code using strings instead of structs works without issue, so it must be something in the way I'm converting to/from the struct. Here is my simplified example:
TX.c
/**
* TX.C
* Send a packed struct to a listening transceiver
*/
#include <RH_RF69.h>
#include <SPI.h>
#define RFM69_CS 53
#define RFM69_INT 7
#define RFM69_RST 5
typedef struct __attribute__((packed)) RP {
double x, y, z;
unsigned int time;
uint8_t sound[4];
} RadioPacket;
// Singleton instance of the radio driver
RH_RF69 rf69(RFM69_CS, RFM69_INT);
void setup() {
Serial.begin(9600);
while(!Serial) delay(1);
// setup failed. Cannot progress beyond this point
if ( !(rf69.init() && rf69.setFrequency(915.0)) ) while(1) ;
}
void loop() {
// only populate x,y,z currently. Other values are unknown
RadioPacket radiopacket = RadioPacket { .x = 0.001, .y = 0.02, .z = 9.1 };
// Send a message!
rf69.send((uint8_t *)&radiopacket, sizeof(radiopacket));
rf69.waitPacketSent();
delay(100);
}
RX.C
/**
* RX.C
* Receive a byte array and transform it to a struct
*/
#include <RH_RF69.h>
#include <SPI.h>
#define RFM69_CS 53
#define RFM69_INT 7
#define RFM69_RST 5
typedef struct __attribute__((packed)) RP {
double x, y, z;
unsigned int time;
uint8_t sound[4];
} RadioPacket;
// Singleton instance of the radio driver
RH_RF69 rf69(RFM69_CS, RFM69_INT);
void setup() {
Serial.begin(9600);
while(!Serial) delay(1);
// setup failed. Cannot progress beyond this point
if ( !(rf69.init() && rf69.setFrequency(915.0)) ) while(1) ;
}
void loop() {
if (rf69.available()) {
// Should be a message for us now
uint8_t buf[60];
uint8_t len = sizeof(buf);
if (rf69.recv(buf, &len)) {
if (!len) return;
buf[len] = 0;
// size of packet
if(len == 32) {
RadioPacket packet;
// populate packet
memcpy((void *)&packet, &buf, sizeof(RadioPacket));
Serial.println(packet.x); // ovf
Serial.println(packet.y); // 2.01
Serial.println(packet.z); // 1.11
}
}
Instead of the values I sent, I get completely different values after conversion. I've also tried casting directly without memcpy to save memory, but the result is the same.
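Since the RX code gates on len == 32, one quick sanity check (a sketch, not from the question) is to assert the packed size at compile time on both boards; any disagreement between sender and receiver shifts every field after the mismatch:

```cpp
#include <cstdint>

// Same layout as in the question (with the array declarator fixed).
typedef struct __attribute__((packed)) RP {
    double x, y, z;     // 8 bytes each on ARM Cortex / x86-64
    unsigned int time;  // assumed 4 bytes
    uint8_t sound[4];   // 4 bytes
} RadioPacket;

// If this fires on either board, the two sides disagree on the layout
// and the received fields will be garbage.
static_assert(sizeof(RadioPacket) == 32, "RadioPacket layout mismatch");
```

This assumes an 8-byte double and a 4-byte unsigned int, which holds on ARM Cortex-M with the standard EABI; a board where either size differs would produce exactly the kind of shifted garbage described above.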

How to access PE NT headers from DOS headers?

I'm trying to read a .exe PE file into memory and access NT headers. I can already access DOS headers but can't resolve NT headers from it.
Here's what I have so far:
static constexpr uint16_t DOS_HDR_MAGIC = 0x5A4D; // "MZ"
static constexpr uint32_t NT_HDR_MAGIC = 0x00004550; // "PE\x0\x0"
struct nt_headers_t
{
uint32_t signature;
file_header_t file_header;
optional_header_x64_t optional_header;
};
struct dos_header_t
{
uint16_t e_magic;
uint16_t e_cblp;
uint16_t e_cp;
uint16_t e_crlc;
uint16_t e_cparhdr;
uint16_t e_minalloc;
uint16_t e_maxalloc;
uint16_t e_ss;
uint16_t e_sp;
uint16_t e_csum;
uint16_t e_ip;
uint16_t e_cs;
uint16_t e_lfarlc;
uint16_t e_ovno;
uint16_t e_res[ 4 ];
uint16_t e_oemid;
uint16_t e_oeminfo;
uint16_t e_res2[ 10 ];
uint32_t e_lfanew;
};
int main(void) {
std::ifstream input("./stuff.exe", std::ios::in | std::ios::binary );
input.seekg(0, std::ios::end);
int file_size = input.tellg();
input.seekg(0, std::ios::beg);
std::byte *file = new std::byte[file_size];
input.read((char *)file, file_size);
struct dos_header_t *dos_header = (struct dos_header_t *)file;
assert(dos_header->e_magic == DOS_HDR_MAGIC);
struct nt_headers_t *nt_headers = (struct nt_headers_t *)file + dos_header->e_lfanew;
assert(nt_headers->signature == NT_HDR_MAGIC);
}
e_lfanew should contain the offset of the start of the NT headers. I simply add this value to the file start: (struct nt_headers_t *)file + dos_header->e_lfanew;
Am I doing that wrong? Attached picture says that e_lfanew contains the NT headers offset in reverse order. How should I reverse it?
I simply add this value to file start: (struct nt_headers_t *)file + dos_header->e_lfanew;
Am I doing that wrong?
Yes, but for a "boring reason" that has nothing to do with PE headers: since you've done the cast before the addition, the offset is scaled by the size of nt_headers_t. The offset needs to be added unscaled, so add it first, and cast afterwards.
Attached picture says that e_lfanew contains the NT headers offset in reverse order. How should I reverse it?
It's in little-endian byte order, and you're probably running the code on a little-endian machine (that's most of them nowadays), so you don't need to do anything; just reading the value will interpret it correctly.
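A minimal sketch of the corrected arithmetic (the helper name is mine): add the unscaled byte offset to the raw buffer pointer first, then cast the result.

```cpp
#include <cstddef>
#include <cstdint>

struct nt_headers_t;  // layout as defined in the question

// Casting before adding scales the offset by sizeof(nt_headers_t),
// which is the bug described above; adding first keeps it in bytes.
inline nt_headers_t* get_nt_headers(std::byte* file, uint32_t e_lfanew) {
    return reinterpret_cast<nt_headers_t*>(
        reinterpret_cast<unsigned char*>(file) + e_lfanew);
}
```
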

Boost Asio accessing asio::ip::address underlying data and byte ordering

My goal is to create a unique ID for every IP address/port pair. The UID must be the same across systems (no conflicts between systems of different endianness). The size of the UID is 6 bytes for IPv4 and 18 bytes for IPv6.
uint8_t sourcePair[18]; /*ipv4=(4+2) bytes or ipv6=(16+2) bytes*/
I have two functions that will take the remote endpoint of a socket and get the desired UID. The design is as follows.
void CmdInterpreter::makeSourcePairV4(asio::ip::tcp::endpoint& remoteEp, unsigned short portNum, unsigned char(&binSourcePair)[18])
{
auto addressClass = remoteEp.address().to_v4();
auto ipBin = addressClass.to_uint();
memcpy(&binSourcePair[0], &ipBin, 4);
memcpy(&binSourcePair[4], &portNum, 2);
}
void CmdInterpreter::makeSourcePairV6(asio::ip::tcp::endpoint& remoteEp, unsigned short portNum, unsigned char(&binSourcePair)[18])
{
auto addressClass = remoteEp.address().to_v6();
auto ipBin = addressClass.to_bytes();
memcpy(&binSourcePair[0], &ipBin[0], 16);
memcpy(&binSourcePair[16], &portNum, 2);
}
This is how these functions are called:
remoteEp = socketPtr->remote_endpoint();
if (remoteEp.address().is_v4())
CmdInterpreter::makeSourcePairV4(remoteEp, remoteEp.port(), sourcePair);
else
CmdInterpreter::makeSourcePairV6(remoteEp, remoteEp.port(), sourcePair);
Here the problem is that the only way to access the underlying IPv6 data is to_bytes(), which returns the data in network byte order. Also, I am doing a memcpy of an unsigned short, which is multiple bytes long. Does this work? Is it safe? Are there any workarounds?
void CmdInterpreter::makeSourcePairV4(asio::ip::tcp::endpoint& remoteEp, unsigned short portNum, uint8_t(&sourcePair)[18])
{
auto addressClass = remoteEp.address().to_v4();
auto ipBin = addressClass.to_bytes();
memcpy(&sourcePair[0], &ipBin[0], 4);
#ifdef BOOST_ENDIAN_LITTLE_BYTE
byteSwap(portNum);
#endif
memcpy(&sourcePair[4], &portNum, 2);
}
void CmdInterpreter::makeSourcePairV6(asio::ip::tcp::endpoint& remoteEp, unsigned short portNum, uint8_t(&sourcePair)[18])
{
auto addressClass = remoteEp.address().to_v6();
auto ipBin = addressClass.to_bytes();
memcpy(&sourcePair[0], &ipBin[0], 16);
#ifdef BOOST_ENDIAN_LITTLE_BYTE
byteSwap(portNum);
#endif
memcpy(&sourcePair[16], &portNum, 2);
}
For both IPv4 and IPv6 addresses, use the to_bytes() function to get the remote endpoint address in big-endian format. On a little-endian host the port number will cause an endianness problem, which can be fixed by swapping its bytes. To encode the result to Base64 I used the cppcodec library:
UID = cppcodec::base64_rfc4648::encode(sourcePair, 6);
UID = cppcodec::base64_rfc4648::encode(sourcePair, 18);
The template function used to swap the port number is:
template <typename T>
void byteSwap(T& portNumber)
{
char* startIndex = reinterpret_cast<char*>(&portNumber);
char* endIndex = startIndex + sizeof(T);
std::reverse(startIndex, endIndex);
}
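As a usage sketch (the template is re-declared here so the snippet is self-contained), swapping a 16-bit port simply reverses its two bytes, and the resulting value is the same regardless of host endianness:

```cpp
#include <algorithm>
#include <cstdint>

// Re-declared from the answer above so this snippet stands alone.
template <typename T>
void byteSwap(T& value) {
    char* startIndex = reinterpret_cast<char*>(&value);
    std::reverse(startIndex, startIndex + sizeof(T));
}
```

For example, applying byteSwap to the port 0x1F90 (8080) yields 0x901F, i.e. the two bytes exchanged, which is what memcpy then writes into the big-endian UID buffer.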

I am trying to create an NTP client that connects to an NTP server to sync its clock

I have tried creating an NTP client by building a packet with an NTP request and sending it to the server, but I am unable to write the code that calculates the offset and sets the time accordingly. I do get some offset value from what I have done so far, but I am not sure that value is correct. If anyone can help me, thanks in advance.
#include <stdio.h>
#define NTP_HDR_LEN 48
#define FRAC 4294967296
struct ntp_packet
{
uint8_t li_vn_mode; // Eight bits. li, vn, and mode.
// li. Two bits. Leap indicator.
// vn. Three bits. Version number of the protocol.
// mode. Three bits. Client will pick mode 3 for client.
uint8_t stratum; // Eight bits. Stratum level of the local clock.
uint8_t poll; // Eight bits. Maximum interval between successive messages.
uint8_t precision; // Eight bits. Precision of the local clock.
uint32_t rootDelay; // 32 bits. Total round trip delay time.
uint32_t rootDispersion; // 32 bits. Max error aloud from primary clock source.
uint32_t refId; // 32 bits. Reference clock identifier.
uint32_t refTm_s;
uint32_t refTm_f; // 64 bits. Reference time-stamp seconds.
uint32_t origTm_s;
uint32_t origTm_f; // 64 bits. Originate time-stamp seconds.
uint32_t rxTm_s;
uint32_t rxTm_f; // 64 bits. Received time-stamp seconds.
uint32_t txTm_s; // 32 bits and the most important field the client cares about. Transmit time-stamp seconds.
uint32_t txTm_f; // 32 bits. Transmit time-stamp fraction of a second.
}packet;
char sendBuf[2048];
char rcvBuf[2048];
struct timeval txt, trecv, tsend;
void adjust_time(signed long long off)
{
unsigned long long ntp_tym;
struct timeval unix_time;
gettimeofday(&unix_time, NULL);
ntp_tym = off + ((ntohl(txt.tv_sec + 2208988800)) + (ntohl(txt.tv_usec) /
1e6));
unix_time.tv_sec = ntp_tym >> 32;
unix_time.tv_usec = (long)(((ntp_tym - unix_time.tv_sec) << 32) / FRAC *
1e6);
settimeofday(&unix_time, NULL);
}
void CreateSocket()
{
sourceIp = "192.168.1.109";
hostNameInteractive = "139.143.5.30";
source_dest_Port = 123;
uint32_t s_recv_s, s_recv_f, s_trans_s, s_trans_f;
unsigned long long t1, t2, t3, t4;
signed long long offs;
double offset;
if ((sock_raw = socket (PF_PACKET, SOCK_RAW, htons (ETH_P_ALL))) < 0)
{
perror ("socket() failed to get socket descriptor for using ioctl() ");
exit(1);
}
CreateNtpHeader();
SendRecv();
gettimeofday(&trecv, NULL);
struct ntp_packet *npkt = (struct ntp_packet*)(rcvBuf);
s_recv_s = ntohl(npkt->rxTm_s);
s_recv_f = ntohl(npkt->rxTm_f);
s_trans_s = ntohl(npkt->txTm_s);
s_trans_f = ntohl(npkt->txTm_f);
t1 = ((ntohl(txt.tv_sec + 2208988800)) + (ntohl(txt.tv_usec) /
1000000));
t4 = ((ntohl(trecv.tv_sec + 2208988800)) + (ntohl(trecv.tv_usec) /
1000000));
t2 = ((s_recv_s) + (s_recv_f / 1e12));
t3 = ((s_trans_s) + (s_trans_f / 1e12));
offs = (((t2 - t1) + (t3 - t4))/2);
offset = ((double)offs) / FRAC;
std::cout<<"offset : "<<offset<<" sec"<<std::endl;
adjust_time(offs);
close(sock_raw);
}
void CreateNtpHeader()
{
struct ntp_packet *pkt = (struct ntp_packet *)(sendBuf);
gettimeofday(&txt, NULL);
pkt->li_vn_mode = 0xe3;
pkt->stratum = 0;
pkt->poll = 3;
pkt->precision = 0xfa;
pkt->rootDelay = 0x00000100;
pkt->rootDispersion = 0x00000100;
pkt->refId = 0;
pkt->refTm_s = 0;
pkt->refTm_f = 0;
pkt->origTm_s = 0;
pkt->origTm_f = 0;
pkt->rxTm_s = 0;
pkt->rxTm_f = 0;
pkt->txTm_s = htonl(txt.tv_sec + 2208988800);
pkt->txTm_f = htonl(txt.tv_usec);
}
int main(int arg, char* args[])
{
CreateSocket();
return 0;
}
This is not the complete code, just the part where I am having the problem. I have included all the needed headers. If anything else or any explanation is needed, please let me know.
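For reference, the fraction word of an NTP timestamp counts units of 1/2^32 seconds, so it should be divided by 2^32, not by 1e12 as in the t2/t3 lines above. A minimal sketch of the conversion (my helper, not part of the question's code):

```cpp
#include <cstdint>

// An NTP timestamp is a 32-bit seconds word plus a 32-bit fraction
// word in units of 1/2^32 s; both arrive in network byte order and
// must go through ntohl() before this conversion.
double ntp_to_seconds(uint32_t seconds, uint32_t fraction) {
    return static_cast<double>(seconds)
         + static_cast<double>(fraction) / 4294967296.0;
}
```

Keeping t1..t4 as doubles in seconds this way also avoids the unsigned integer truncation that the current t2/t3 computation suffers from.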

Bit fields Keil HardFault after restarting

When I use this struct just after flashing the device it works well, but after restarting (power off/on), using this struct (assigning to any bit) causes a HardFault IRQ. I use Keil uVision with an STM32F205. Why does it not work, and what should I change/remove/add to fix it? Using GPIOC->ODR directly doesn't cause any problems, so what is wrong with bit fields in Keil?
#pragma anon_unions
typedef union {
struct {
__IO uint16_t Data_Bus:8; // 0-7 data bus
__IO uint16_t Ctr_Pins:6; // 8-13 control pins
__IO uint16_t :2; // 14-15 unused here
};
struct {
__IO uint16_t D0:1; // 0 data bus pin
__IO uint16_t D1:1; // 1 data bus pin
__IO uint16_t D2:1; // 2 data bus pin
__IO uint16_t D3:1; // 3 data bus pin
__IO uint16_t D4:1; // 4 data bus pin
__IO uint16_t D5:1; // 5 data bus pin
__IO uint16_t D6:1; // 6 data bus pin
__IO uint16_t D7:1; // 7 data bus pin
// --------------------------------
__IO uint16_t RS:1; // 8 reset
__IO uint16_t CS:1; // 9 chip select
__IO uint16_t CD:1; // 10 control / data
__IO uint16_t RD:1; // 11 read tick
__IO uint16_t WR:1; // 12 write tick
__IO uint16_t EN:1; // 13 enable display
// ---------------------------------
__IO uint16_t :1; // 14 unused
__IO uint16_t LD:1; // 15 led
};
} *PC_STRUCT_PTR, PC_STRUCT;
PC_STRUCT_PTR __TMP = (PC_STRUCT_PTR)(GPIOC_BASE+0x14);
#define PINOUTS (*__TMP)
it's used like this:
void Write_Reg(unsigned char command)
{
PINOUTS.CD = 0; PINOUTS.RD = 1; PINOUTS.CS = 0; PINOUTS.WR = 0;
PINOUTS.Data_Bus = command; wait();
PINOUTS.WR = 1; PINOUTS.CS = 1; PINOUTS.CD = 1; wait();
}
In file 'startup_stm32f20x.s', make sure that you have the following piece of code:
EXTERN HardFault_Handler_C ; this declaration is probably missing
__tx_vectors ; this declaration is probably there
DCD HardFault_Handler
Then, in the same file, add the following interrupt handler (where all other handlers are located):
PUBWEAK HardFault_Handler
SECTION .text:CODE:REORDER(1)
HardFault_Handler
TST LR, #4
ITE EQ
MRSEQ R0, MSP
MRSNE R0, PSP
B HardFault_Handler_C
Then, in file 'stm32f2xx.c', add the following ISR:
void HardFault_Handler_C(unsigned int* hardfault_args)
{
printf("R0 = 0x%.8X\r\n",hardfault_args[0]);
printf("R1 = 0x%.8X\r\n",hardfault_args[1]);
printf("R2 = 0x%.8X\r\n",hardfault_args[2]);
printf("R3 = 0x%.8X\r\n",hardfault_args[3]);
printf("R12 = 0x%.8X\r\n",hardfault_args[4]);
printf("LR = 0x%.8X\r\n",hardfault_args[5]);
printf("PC = 0x%.8X\r\n",hardfault_args[6]);
printf("PSR = 0x%.8X\r\n",hardfault_args[7]);
printf("BFAR = 0x%.8X\r\n",*(unsigned int*)0xE000ED38);
printf("CFSR = 0x%.8X\r\n",*(unsigned int*)0xE000ED28);
printf("HFSR = 0x%.8X\r\n",*(unsigned int*)0xE000ED2C);
printf("DFSR = 0x%.8X\r\n",*(unsigned int*)0xE000ED30);
printf("AFSR = 0x%.8X\r\n",*(unsigned int*)0xE000ED3C);
printf("SHCSR = 0x%.8X\r\n",SCB->SHCSR);
while (1);
}
If you can't use printf at the point in the execution when this specific Hard-Fault interrupt occurs, then save all the above data in a global buffer instead, so you can view it after reaching the while (1).
Then refer to the 'Cortex-M Fault Exceptions and Registers' section at http://www.keil.com/appnotes/files/apnt209.pdf to understand the problem, or publish the output here if you want further assistance.