I use Visual Micro (the Arduino plugin for Visual Studio) to program an Arduino Mega1280 AVR in C++.
I keep all my mixed-type config parameters in a struct that is persisted in the 4096-byte EEPROM (in addition to other information saved in the EEPROM).
After every new program upload, I want to check the new config against the config saved in EEPROM, which I do byte by byte, updating only the EEPROM bytes that differ.
My problem is that while the code compiles OK (no errors or warnings), I do not get what I expect: when reading the struct byte by byte, I only get part of the expected info. I need help making it work properly. Thanks.
Here is my simplified code:
#include <avr/eeprom.h>
#include <EEPROM.h>
#include <Arduino.h>

char configVersion[5] = "VS41";
unsigned int deviceID = 0xF0F0;
char deviceRev[5] = "ABCD";
char revDate[11] = "09-08-2014";
unsigned int dataArraySize = 150;

struct defineConfigs {
    char configVersion[5];
    unsigned int deviceID;
    char deviceRev[5];
    char revDate[11];
    unsigned int dataArraySize;
    byte mode;
    byte refpage;
    byte tolerance;
    byte externalTrigger;
    byte laserEnable;
}
configData =
{
    {configVersion[5]},
    deviceID,
    {deviceRev[5]},
    {revDate[11]},
    dataArraySize,
    1,
    2,
    0,
    0,
    0
};
unsigned int configBaseAddress = 3840; // 0x0F00

void Setup()
{
    loadConfig(configBaseAddress);
}

void loadConfig(int configAddress) // EA
{
    for (int s = 0; s < sizeof(configData); s++) {
        // Serial.println(*((char*)(&configData) + s), HEX); // Serial.print(","); // for testing
        if (*((char*)(&configData) + s) == EEPROM.read(configAddress + s)) {
        } else {
            EEPROM.write(configAddress + s, *((byte*)(&configData) + s));
            Serial.println(EEPROM.read(configAddress + s), HEX); // read back for testing
        }
    }
}
The initialization of configData uses uninitialized data:
{configVersion[5]}
reads configVersion[5], the first character after the end of the array configVersion[0 ... 4] (an out-of-bounds access). It is not, as you expect, the literal string "VS41".
You can check this by printing configData.configVersion:
Serial.println(configData.configVersion);
Try modifying the configData initialization like this:
struct defineConfigs {
    char configVersion[5];
    unsigned int deviceID;
    char deviceRev[5];
    char revDate[11];
    unsigned int dataArraySize;
    byte mode;
    byte refpage;
    byte tolerance;
    byte externalTrigger;
    byte laserEnable;
} configData = {
    "VS41",
    deviceID,
    "ABCD",
    "09-08-2014",
    dataArraySize,
    1,
    2,
    0,
    0,
    0
};
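As a side note: the sketch already includes <avr/eeprom.h>, and avr-libc's eeprom_update_block() performs the same compare-before-write update as the manual loop in loadConfig(). A minimal sketch of a matching save routine (the saveConfig name is mine):

// Writes only the bytes that differ from what the EEPROM already holds,
// like the byte-by-byte loop above, but in a single avr-libc call.
void saveConfig(unsigned int configAddress)
{
    eeprom_update_block(&configData, (void *)configAddress, sizeof(configData));
}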
I'm trying to build a raw socket, and I've built a structure to hold every header: ETH, IP, etc.
I just started by assigning the ETH fields, but when I run just this part of the code, I get a segmentation fault:
typedef struct Network_frame_test {
    unsigned char dst_mac_addr[CONF_MAC_ADDRESS_SIZE];
    unsigned char src_mac_addr[CONF_MAC_ADDRESS_SIZE];
    struct ethhdr *ethh;
    struct iphdr *iph;
    struct udphdr *udph;
    unsigned char buffer[SIZE_BUFFER];
} Network_frame_test;

int main(void)
{
    Network_frame_test frame_test;
    const unsigned char message[] = {'a','a','a','a','a','a','a','a','a','a','a','a','a','a','a','a','a','a'};
    int message_size = sizeof(message) / sizeof(message[0]);
    printf("message size : %d", message_size);
    unsigned char* sendbuff;
    printf(" message %.2x", message[0]);
    memset(&sendbuff, 0, 43);
    printf(" %d", 0);
    for (int i = 0; i < 6; i++)
    {
        frame_test.dst_mac_addr[i] = message[i + 6];
    }
    frame_test.ethh = (struct ethhdr *)(sendbuff);
    for (int i = 0; i < CONF_MAC_ADDRESS_SIZE; i++)
    {
        frame_test.ethh->h_dest[i] = frame_test.dst_mac_addr[i];
    }
}
sendbuff is a pointer to char, never allocated or pointed at anything. By using memset on its address, you set the pointer's value to 0, which means a null pointer. Later on you assign it to frame_test.ethh and try to dereference it; I believe that is where you see the issue.
Why is sendbuff an unsigned char pointer and not simply of type struct ethhdr?
Also, why is the memset done for 43 bytes? I'd change it to be the size of struct ethhdr.
After that, simply use:
struct ethhdr sendbuff;
memset(&sendbuff, 0, sizeof(sendbuff));
...
frame_test.ethh = &sendbuff;
...
If you have to use it with unsigned char, then:
#define SIZE 43 /* or sizeof(struct ethhdr) */
...
unsigned char sendbuff[SIZE];
memset(sendbuff, 0, SIZE);
...
frame_test.ethh = (struct ethhdr *)(sendbuff);
...
Hope that helps.
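For reference, here is a minimal self-contained version of that array-based fix, assuming a Linux target; the buffer length and the example MAC bytes are illustrative:

#include <string.h>
#include <linux/if_ether.h> /* struct ethhdr, ETH_ALEN, ETH_FRAME_LEN */

int main(void)
{
    /* A real buffer, not a dangling pointer: zero the buffer itself. */
    unsigned char sendbuff[ETH_FRAME_LEN];
    memset(sendbuff, 0, sizeof(sendbuff));

    /* Overlay the Ethernet header on the start of the buffer. */
    struct ethhdr *ethh = (struct ethhdr *)sendbuff;

    /* Illustrative destination MAC address. */
    const unsigned char dst[ETH_ALEN] = {0xaa, 0xbb, 0xcc, 0xdd, 0xee, 0xff};
    memcpy(ethh->h_dest, dst, ETH_ALEN);
    return 0;
}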
I am on a Windows XP platform, using C++98. Here is my problem:
I receive a number encoded in base64 in a char array. The encoded number is 16 bytes long.
After decoding from base64, I end up with a number 12 bytes long.
Those 12 bytes (still in a char array) should be converted to an ASCII decimal representation (a requirement).
An example:
Reception of data: "J+QbMuDxYkxxtSRA" (in a char array: 16 bytes)
Decoding from base64: 0x27 0xE4 0x1B 0x32 0xE0 0xF1 0x62 0x4C 0x71 0xB5 0x24 0x40 (hex in a char array: 12 bytes)
Conversion to an ASCII decimal string: "12345678912345678912345678912" (in a char array: 29 bytes)
I am currently stuck at the last step; I have no idea how to do it.
When I searched, the only thing I found was how to convert an int to a string (so for an int containing the value 10, sprintf(buffer, "%d", 10) would do the trick, I guess). Since my number is larger than 8 bytes/64 bits (it is 12 bytes) and is not stored in an int/unsigned long long, I am pretty sure I need to do something else to convert it.
Any help would be greatly appreciated.
EDIT, here is a code example (the base64 library is libb64: http://libb64.sourceforge.net/):
#include <iostream>
#include <cstring>
#include "b64/decode.h"

class Tokenizer {
public:
    Tokenizer();
    void decodeFrame(char *buffer, int bufferSize);
    void retrieveFrame(char *buffer);
private:
    char m_frame[255];
};

using namespace std;

int main(int argc, char *argv[])
{
    Tokenizer token;
    char base64Message[] = "J+QbMuDxYkxxtSRA"; // base64 message of 16 (relevant) bytes
    char asciiMessage[30] = {0};               // final message, a number in ASCII representation of 29 bytes
    cout << "begin, message = " << base64Message << endl;
    token.decodeFrame(base64Message, 16); // decode from base64 and convert to ASCII
    token.retrieveFrame(asciiMessage);    // retrieve the ASCII message
    cout << "final message : " << asciiMessage << endl; // the message should be "12345678912345678912345678912"
    return 0;
}

Tokenizer::Tokenizer()
{
}

void Tokenizer::decodeFrame(char *buffer, int bufferSize)
{
    memset(m_frame, 0x00, 255);
    char decodedFrame[255] = {0}; // there is a maximum of 255 bytes ENCODED (base64); the decoded size will automatically be lower
    int decodedSize = 0;
    base64::decoder base64Decoder; // base64 decoder, found at http://libb64.sourceforge.net/
    decodedSize = base64Decoder.decode(buffer, bufferSize, decodedFrame); // int decode(const char* code_in, const int length_in, char* plaintext_out)
    // the frame now must be converted to ASCII; I have no idea how to do it
    for (int i = 0; i < 30; i++) { m_frame[i] = decodedFrame[i]; }
}

void Tokenizer::retrieveFrame(char *buffer)
{
    if (buffer != NULL) {
        for (int i = 0; m_frame[i] != 0; i++) {
            buffer[i] = m_frame[i];
        }
    }
}
Building on an earlier answer, you can easily build an arbitrary-length integer from a bunch of bytes.
const char *example = "\x27\xE4\x1B\x32\xE0\xF1\x62\x4C\x71\xB5\x24\x40";
Bignum bn = 0;
for (int i = 0; i < 12; ++i)
{
    bn *= 256;
    bn += example[i] & 0xff;
}
std::cout << bn << std::endl;
See the whole code here: http://coliru.stacked-crooked.com/a/d1d9f39a6d575686
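If pulling in a bignum type is not an option, the same decimal string can be produced with schoolbook long division directly on the 12 raw bytes: repeatedly divide the big-endian byte array by 10 and collect the remainders as digits. A minimal C++98-compatible sketch (the bytesToDecimal name is mine):

#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

// Converts a big-endian byte array to its decimal ASCII representation by
// repeatedly dividing the whole number by 10 and collecting the remainders.
std::string bytesToDecimal(const unsigned char *bytes, int size)
{
    std::vector<unsigned char> num(bytes, bytes + size);
    std::string result;
    bool done = false;
    while (!done) {
        done = true;
        int remainder = 0;
        for (size_t i = 0; i < num.size(); ++i) {
            int value = remainder * 256 + num[i];
            num[i] = (unsigned char)(value / 10);
            remainder = value % 10;
            if (num[i] != 0) done = false;
        }
        result += (char)('0' + remainder); // least significant digit first
    }
    std::reverse(result.begin(), result.end());
    return result;
}

int main()
{
    const unsigned char example[12] = {0x27, 0xE4, 0x1B, 0x32, 0xE0, 0xF1,
                                       0x62, 0x4C, 0x71, 0xB5, 0x24, 0x40};
    std::cout << bytesToDecimal(example, 12) << std::endl; // expect 12345678912345678912345678912
    return 0;
}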
@Farouk: the error in your reasoning comes from converting the base64 string to decimal (base10) directly. That conversion changes the original number; you have skipped a step. For more details, follow the example below:
example:
received base64: J+QbMuDxYkxxtSRA
your steps:
base64 -> dec: 39 18162 3170 76 113 22784
dec -> hex: 153D8AE2CE845159A0
logical steps:
base64 -> hex: 27e41b32e0f1624c71b52440
hex -> dec: 12345678912345678912345678912
dec -> hex: 27E41B32E0F1624C71B52440
hex -> base64: J+QbMuDxYkxxtSRA
For the C++ side, to convert base64 to hex I think you can find a library on the web; if not, I can code this conversion for you later.
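In the meantime, the byte-to-hex step itself is short; a minimal sketch (the bytesToHex name is mine), assuming the 12 raw bytes from the base64 decoder are already in hand:

#include <cstdio>
#include <string>

// Formats raw bytes as a lowercase hex string, two digits per byte.
std::string bytesToHex(const unsigned char *bytes, int size)
{
    std::string hex;
    char buf[3];
    for (int i = 0; i < size; ++i) {
        std::sprintf(buf, "%02x", (unsigned int)bytes[i]);
        hex += buf;
    }
    return hex;
}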
#include <fstream>
#include <iostream>
#include <cstring>
#include <string>

using namespace std;

class Address {
public:
    char addr[6];
    Address() {}
    Address(string address) {
        size_t pos = address.find(":");
        int id = stoi(address.substr(0, pos));
        short port = (short)stoi(address.substr(pos + 1, address.size() - pos - 1));
        memcpy(addr, &id, sizeof(int));
        memcpy(&addr[4], &port, sizeof(short));
    }
};

enum MsgTypes {
    JOINREQ,
    JOINREPLY,
    DUMMYLASTMSGTYPE,
    HEARTBEAT
};

/**
 * STRUCT NAME: MessageHdr
 *
 * DESCRIPTION: Header and content of a message
 */
typedef struct MessageHdr {
    enum MsgTypes msgType;
} MessageHdr;

typedef struct en_msg {
    // Number of bytes after the class
    int size;
    // Source node
    Address from;
    // Destination node
    Address to;
} en_msg;

//class Testing{
void send(Address *myaddr, Address *toaddr, char *data, int size);

int main()
{
    MessageHdr *msg = new MessageHdr();
    size_t msgsize = sizeof(MessageHdr) + sizeof(Address) + sizeof(long) + 1;
    msg = (MessageHdr *)malloc(msgsize * sizeof(char));
    int id = 233;
    short port = 22;
    long heartbeat = 1;
    msg = (MessageHdr *)malloc(msgsize * sizeof(char));
    string s = to_string(id) + ":" + to_string(port);
    string s1 = to_string(id + 1) + ":" + to_string(port + 1);
    cout << s << '\n';
    cout << s1 << '\n';
    Address *addr = new Address(s);
    for (int i = 0; i < 6; i++)
        cout << addr->addr[i];
    Address *toaddr = new Address(s1);
    msg->msgType = JOINREQ;
    //cout << (char *)msg->msgType;
    memcpy((char *)(msg + 1), addr, sizeof(addr));
    memcpy((char *)(msg + 1) + 1 + sizeof(addr), &heartbeat, sizeof(long));
    send(addr, toaddr, (char *)msg, msgsize);
    return 0;
}

void send(Address *myaddr, Address *toaddr, char *data, int size) {
    cout << "inside send" << '\n';
    en_msg *em;
    //static char temp[2048];
    em = (en_msg *)malloc(sizeof(en_msg) + size);
    em->size = size;
    memcpy(&(em->from), &(myaddr), sizeof(em->from));
    memcpy(&(em->to), &(toaddr), sizeof(em->from));
    memcpy(em + 1, data, size);
    cout << (char *)(em + 1);
}
This is my program; partway through, I am trying to check what is being stored in my char array. But upon printing the array, it gives some strange output: two strange symbols after printing the values of s and s1.
I am trying to store the id:port in the char array of the Address class, but apparently without success. Please help.
The printing code I am referring to is in the main function, about ten lines down.
Say my id is 233 and my port is 22, so the address is 233:22. I want to retrieve 233:22 back and print it. How do I do that here?
Thanks in advance :)
The problem is in this line:
cout << addr->addr[i];
Since addr->addr is an array of char, each element will be printed as the character it represents. If you'd rather print the integer value of each, simply cast it to int first.
cout << static_cast<int>(addr->addr[i]); // or old-fashioned: (int)addr->addr[i];
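One wrinkle worth noting (my addition): char is typically signed, so a byte like 0xE9 (decimal 233) would print as -23. Casting through unsigned char first avoids that:

for (int i = 0; i < 6; i++)
    cout << static_cast<int>(static_cast<unsigned char>(addr->addr[i])) << ' ';
// with id=233 and port=22 on a little-endian machine this prints: 233 0 0 0 22 0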
Given the following code:
for (int i = 0; i < 6; i++)
    cout << addr->addr[i];
And given Address's constructor:
size_t pos = address.find(":");
int id = stoi(address.substr(0, pos));
short port = (short)stoi(address.substr(pos + 1, address.size()-pos-1));
memcpy(addr, &id, sizeof(int));
memcpy(&addr[4], &port, sizeof(short));
It's clear that you are printing the bytes that make up a number.
addr->addr is a char array which contains two integer variables, one of four bytes (int) and the other of two bytes (short).
So, if the number is, let's say, 436, you are printing:
0xB4 0x01 0x00 0x00
<crazy char> SOH NULL NULL
You must understand what you are printing, and what you want to print, in order to print it properly.
Note: the most popular setup is assumed here, which means:
Little-endian architecture
4-byte int
2-byte short
Update
How to get the address and port back (note that this needs memcpy, not memset):
int address;
unsigned short port;
memcpy(&address, addr->addr, 4);
memcpy(&port, addr->addr + 4, 2);
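Putting it together for the asker's example, a small self-contained sketch (variable names are mine) that stores 233:22 the same way and prints it back:

#include <cstdio>
#include <cstring>

int main()
{
    // Simulate the Address layout: a 4-byte id followed by a 2-byte port.
    char addr[6];
    int id = 233;
    short port = 22;
    std::memcpy(addr, &id, sizeof(int));
    std::memcpy(&addr[4], &port, sizeof(short));

    // Copy the bytes back out into properly typed variables, then print.
    int idBack;
    short portBack;
    std::memcpy(&idBack, addr, sizeof(int));
    std::memcpy(&portBack, &addr[4], sizeof(short));
    std::printf("%d:%d\n", idBack, (int)portBack); // prints 233:22
    return 0;
}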
I am trying to learn more about binary files, so I started with HexEdit: I manually wrote a file and created a template for it.
Now, I am working on a console application in C++ Win32 to read the contents of that file and make them look friendly. Here is part of my code:
#include <stdio.h>
#include <conio.h>

typedef unsigned char BYTE;

long getFileSize(FILE *file)
{
    long lCurPos, lEndPos;
    lCurPos = ftell(file);
    fseek(file, 0, SEEK_END);
    lEndPos = ftell(file);
    fseek(file, lCurPos, SEEK_SET);
    return lEndPos;
}

int main()
{
    const char *filePath = "D:\\Applications\\ColorTableApplication\\file.clt";
    BYTE *fileBuf;     // Pointer to our buffered data
    FILE *file = NULL; // File pointer

    if ((file = fopen(filePath, "rb")) == NULL)
        printf_s("Could not open specified file\n");
    else {
        printf_s("File opened successfully\n");
        printf_s("Path: %s\n", filePath);
        printf_s("Size: %d bytes\n\n", getFileSize(file));
    }

    long fileSize = getFileSize(file);
    fileBuf = new BYTE[fileSize];
    fread(fileBuf, fileSize, 1, file);
    for (int i = 0; i < 100; i++) {
        printf("%X ", fileBuf[i]);
    }
    _getch();
    delete[] fileBuf;
    fclose(file); // Almost forgot this
    return 0;
}
(I provided that much code because I want to be clear, to help you get the idea of what I am trying to do.)
First of all, I need to get the first 14 bytes and write them to the console as text; then, in a for loop, I need to write something like this for each color:
black col_id = 1; R = 00; G = 00; B = 00;
red col_id = 2; R = FF; G = 00; B = 00;
etc...
How can I read and translate these bytes?
Writing out the 14 bytes as you have it is correct.
A technique is to create a struct with the layout of your records, then cast, e.g. (C-style):
typedef struct
{
    char name[10];
    long col_id;
    unsigned char R;
    unsigned char G;
    unsigned char B;
} rec;

rec* Record = (rec*)(fileBuf + StartOffsetOfRecords);

Now you can get the contents of the first record:
Record->name, ...
Getting the next record is just a matter of moving Record forward:
++Record;
You could also have a struct for the header to make it more convenient to pick out the number of records. It is good to use stdint.h in order to get well-defined sizes, and to pack structures on byte boundaries to make sure no padding is added by the compiler, i.e. #pragma pack(1) at the top of your source.
typedef struct
{
    char signature[14];
    uint32_t tableaddress;
    uint32_t records;
} header;

typedef struct
{
    char name[10];
    uint32_t col_id;
    unsigned char R;
    unsigned char G;
    unsigned char B;
} rec;
So instead, when you read, you could do it like this:
header Header;
rec* Record;
fread(&Header, sizeof(header), 1, file);
fread(fileBuf, 1, fileSize, file);
Record = (rec*)(fileBuf); // the first record can now be accessed through Record
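Building on that, a short sketch of the per-color output loop the question asks for, assuming the header's records field holds the record count and the field layout above:

// Print each record in the requested "name col_id = N; R = ..; G = ..; B = ..;" form.
for (uint32_t i = 0; i < Header.records; ++i) {
    printf("%.10s col_id = %u; R = %02X; G = %02X; B = %02X;\n",
           Record[i].name, (unsigned)Record[i].col_id,
           Record[i].R, Record[i].G, Record[i].B);
}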
I'm trying to work with length-preceded TCP messages using Qt. I have the following method:
QByteArray con::read()
{
    QByteArray s;
    s = _pSocket->read(4);
    if (s.length() == 4) {
        int size = char_to_int32(s);
        s = _pSocket->read(size);
    }
    return s;
}
Well, it does not work. It looks like I lose all data after reading the first 4 bytes: the first read works fine, but read(size) returns nothing. Is there a way to solve this?
The char_to_int32 is:
int char_to_int32(QByteArray s)
{
    int size = 0;
    size |= (s.at(0) << 24);
    size |= (s.at(1) << 16);
    size |= (s.at(2) << 8);
    size |= (s.at(3));
    return size;
}
EDIT :
The sending function (plain C):
int send(int connfd, const unsigned char* message, unsigned int size) {
    int c;
    unsigned char* bytes = (unsigned char*) malloc(4 + size);
    int32_to_char(size, bytes); // converts message size to 4 bytes
    memcpy(bytes + 4, message, size);
    c = write(connfd, bytes, 4 + size);
    free(bytes);
    if (c <= 0)
        return -1;
    else
        return 0;
}
By the way, when I call _pSocket->readAll(), the entire packet is read, including the 4-byte size and the message itself.
EDIT :
void int32_to_char(uint32_t in, char* bytes) {
    bytes[0] = (in >> 24) & 0xFF;
    bytes[1] = (in >> 16) & 0xFF;
    bytes[2] = (in >> 8) & 0xFF;
    bytes[3] = in & 0xFF;
    return;
}
As you are using the QByteArray QIODevice::read(qint64 maxSize) function, you may not be detecting errors correctly:
This function has no way of reporting errors; returning an empty QByteArray() can mean either that no data was currently available for reading, or that an error occurred.
Some things to try:
Use the qint64 QIODevice::read(char* data, qint64 maxSize) which reports errors:
If an error occurs ... this function returns -1.
Call QIODevice::errorString and QAbstractSocket::error to find out what is going wrong.
For bonus points, listen to the QAbstractSocket::error error signal.
If this is a new protocol you are creating, try using QDataStream for serialization; it automatically handles length prefixes and is platform independent (see the sketch after this list). Your char_to_int32 will break if you mix platforms with different endianness, and may break between different OSs or compilers, as int is not guaranteed to be 32 bits (it is defined as at least 16 bits).
If you can't use QDataStream, at least use the htons, ntohs ... functions.
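As an illustration of the QDataStream suggestion, here is one way the asker's con::read could be reworked to wait for a complete length-prefixed message. This is a sketch, not a drop-in fix: it assumes a big-endian quint32 prefix (QDataStream's default byte order, which matches the C sender above) and that blocking with waitForReadyRead is acceptable here:

QByteArray con::read()
{
    QDataStream in(_pSocket);

    // Block until the whole 4-byte length prefix has arrived.
    while (_pSocket->bytesAvailable() < (qint64)sizeof(quint32)) {
        if (!_pSocket->waitForReadyRead(3000))
            return QByteArray(); // timeout or socket error
    }

    quint32 size;
    in >> size; // QDataStream reads big-endian by default

    // Block until the whole payload has arrived, then read it in one go.
    while (_pSocket->bytesAvailable() < (qint64)size) {
        if (!_pSocket->waitForReadyRead(3000))
            return QByteArray();
    }

    QByteArray payload(size, 0);
    in.readRawData(payload.data(), size);
    return payload;
}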
Edit
Here is some example code showing hton/ntoh usage. Note that uint32_t and not int is used as it's guaranteed to be 32 bits. I've also used memcpy rather than pointer casts in the encode/decode to prevent aliasing and alignment problems (I've just done a cast in the test function for brevity).
#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <arpa/inet.h>

void encode(uint32_t in, char* out)
{
    /* Host to Network long (32 bits) */
    const uint32_t t = htonl(in);
    memcpy(out, &t, sizeof(t));
}

uint32_t decode(char* in)
{
    uint32_t t;
    memcpy(&t, in, sizeof(t));
    /* Network to Host long (32 bits) */
    return ntohl(t);
}

void test(uint32_t v)
{
    char buffer[4];
    printf("Host Input: %08x\n", v);
    encode(v, buffer);
    printf("Network: %08x\n", *((uint32_t*)buffer));
    printf("Host Output: %08x\n\n", decode(buffer));
}

int main(int argc, char** argv)
{
    test(0);
    test(1);
    test(0x55);
    test(0x55000000);
    return 0;
}