I am having a very hard time figuring out how to solve the following problem. I am on an embedded system with very little memory and want to minimize memory usage.
Pointers have always confused the heck out of me and probably always will.
I have a whole bunch of defines for register addresses:
#define GPIO_PORTA_BASE (*((volatile unsigned long *)0x40004000))
#define GPIO_PORTB_BASE (*((volatile unsigned long *)0x40005000))
//etc..
These registers are directly accessible, e.g.:
GPIO_PORTA_BASE &= 0x01;
What I need is an array that contains the above registers so that I can easily map them to an index. e.g:
not_sure_what_to_declare_the_array_as port_base_array[] = {
GPIO_PORTA_BASE,
GPIO_PORTB_BASE,
//etc
}
What I need to end up being able to do is something like this:
volatile unsigned long *reg_a;
reg_a = port_base_array[0];
*reg_a &= 0x1;
I am using gcc to compile my code for arm cortex m3.
Any insight would be appreciated.
I don't know why @Etienne deleted his answer, but it contained the essential information: the address is cast to volatile unsigned long *. That's what you need an array of.
typedef volatile unsigned long* reg_addr;
reg_addr registers[] = {
&GPIO_PORTA_BASE,
&GPIO_PORTB_BASE,
// ...
};
We need to take the address again (&GPIO_PORTA_BASE), since the macros already dereference the pointers. Access as:
*registers[i] &= your_value;
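Putting it together as a complete, compilable sketch (the macros are copied from the question; the function name and the loop are just illustration):

#include <stddef.h>

#define GPIO_PORTA_BASE (*((volatile unsigned long *)0x40004000))
#define GPIO_PORTB_BASE (*((volatile unsigned long *)0x40005000))

typedef volatile unsigned long *reg_addr;

static reg_addr const registers[] = {
    &GPIO_PORTA_BASE,   /* & undoes the dereference hidden in the macro */
    &GPIO_PORTB_BASE,
};

/* Clear bit 0 of every port's base register through the lookup table. */
void clear_bit0_on_all_ports(void)
{
    size_t i;
    for (i = 0; i < sizeof registers / sizeof registers[0]; ++i)
        *registers[i] &= ~0x1UL;
}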
The usual way is to declare a struct, for example:
struct RegsAtAddrA
{
unsigned int array1[10];
char val1;
// etc
};
then access it:
volatile RegsAtAddrA *pRegsA = (volatile RegsAtAddrA *) 0x40004000;
pRegsA->val1= 'a';
//etc
EDIT: I just realized that I haven't answered the question. So, here it is:
#include <iostream>
unsigned long a=1;
unsigned long b=2;
volatile unsigned long *port_base_array[] = {
&a,
&b,
//etc
};
int main()
{
std::cout<<"a="<<*port_base_array[0]<<std::endl;
std::cout<<"b="<<*port_base_array[1]<<std::endl;
}
What I think you're trying to do is something like this:
volatile unsigned long * gpio_porta = &GPIO_PORTA_BASE;
If you're using C++, you could also bind a reference, in either of these equivalent ways:
volatile unsigned long & reg_foo = (&GPIO_PORTA_BASE)[3];
volatile unsigned long & reg_foo = gpio_porta[3];
And use it as:
reg_foo &= 0x1;
However, most times I would expect a base address register to actually be stored as a pointer, rather than as the dereference of the pointer. Because of that, I would probably want your macros to be defined as:
#define GPIO_PORTA_BASE ((volatile unsigned long *) 0x40004000)
#define GPIO_PORTB_BASE ((volatile unsigned long *) 0x40005000)
And then you could simply access them as
GPIO_PORTA_BASE[3] &= 0x1;
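For what it's worth, a small sketch of the lookup table the question asks for, assuming these pointer-style macros (the function name and the register index are mine, not from the question):

#define GPIO_PORTA_BASE ((volatile unsigned long *) 0x40004000)
#define GPIO_PORTB_BASE ((volatile unsigned long *) 0x40005000)

static volatile unsigned long *const port_base_array[] = {
    GPIO_PORTA_BASE,
    GPIO_PORTB_BASE,
};

/* AND register 3 of the port selected by idx with 0x1. */
void mask_port_reg3(unsigned idx)
{
    port_base_array[idx][3] &= 0x1;
}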
If I'm getting you right, this should be enough:
volatile unsigned long* GPIO_PORTA = (volatile unsigned long*) 0x40004000;
You could use that as
volatile unsigned long regxx = GPIO_PORTA[0x17];
// even
GPIO_PORTA[10] &= 0xF000;
I need to create a structure with an optional field:
typedef struct pkt_header{
unsigned short Packet_Type;
unsigned short Unprotected_Payload_Length;
unsigned short Protected_Payload_Length; // optional (present/not present)
unsigned short Version;
} PKT_HEADER;
How can I sometimes use pkt_header->Protected_Payload_Length and sometimes omit this field from the struct when it is not present?
My first idea was to declare unsigned char *Protected_Payload_Length, pass NULL when I don't use the field, and use the unsigned char * to store my unsigned short value.
typedef struct pkt_header{
unsigned short Packet_Type;
unsigned short Unprotected_Payload_Length;
unsigned char * Protected_Payload_Length; // optional
unsigned short Version;
} PKT_HEADER;
I prepare my packet like this (and send it):
PKT_HEADER header;
header.Packet_Type = 0x0001;
header.Unprotected_Payload_Length = 0x0b00;
header.Protected_Payload_Length = NULL;
header.Version = 0x0000;
I receive the response and do this:
PKT_HEADER * header= (PKT_HEADER*)recvbuf;
printf("Packet_Type : %04x\n", header->Packet_Type);
printf("Unprotected_Payload_Length : %04x\n", header->Unprotected_Payload_Length);
printf("Version : %04x\n", header->Version);
But in this case, if I understand correctly, unsigned char *Protected_Payload_Length is a pointer that occupies 4 bytes, so header->Protected_Payload_Length still takes up 4 bytes, whereas I need 0 bytes because the value/field is not present in this case.
Do I have to declare a separate structure for each data format, or is there some other way to play with the structures?
Thanks for your help.
Beware: structs can have padding, so members are not necessarily adjacent in memory. Moreover, reinterpreting something as a PKT_HEADER when that something is not a PKT_HEADER object is not allowed. Instead of casting:
PKT_HEADER * header= (PKT_HEADER*)recvbuf;
you probably should use memcpy. Having said this, now to your actual question...
If you rely on members having a specific order in the struct, then inheritance is not an option. In memory the base object comes first, then the derived members; you cannot interleave them. For example:
struct foo {
int x;
};
struct bar : foo {
int y;
int z;
};
Then a bar object will have in memory
| x | optional padding | y | optional padding | z | optional padding |
There is no simple way to get | y | x | z |.
If you want two different layouts, the easiest approach is to define two different types:
struct PKT_HEADER_A {
unsigned short Packet_Type;
unsigned short Unprotected_Payload_Length;
unsigned short Protected_Payload_Length; // present
unsigned short Version;
};
struct PKT_HEADER_B {
unsigned short Packet_Type;
unsigned short Unprotected_Payload_Length;
//unsigned short Protected_Payload_Length; // not present
unsigned short Version;
};
Note that your way to typedef the struct is a C-ism. It is not necessary (and not recommended) in C++.
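Putting both points together (memcpy instead of the pointer cast, plus the two concrete header types), here is a rough decoding sketch; the Packet_Type value used for the dispatch is an assumption for illustration, not something defined by the question:

#include <stdio.h>
#include <string.h>

struct PKT_HEADER_A {            /* Protected_Payload_Length present */
    unsigned short Packet_Type;
    unsigned short Unprotected_Payload_Length;
    unsigned short Protected_Payload_Length;
    unsigned short Version;
};

struct PKT_HEADER_B {            /* Protected_Payload_Length absent */
    unsigned short Packet_Type;
    unsigned short Unprotected_Payload_Length;
    unsigned short Version;
};

void decode(const unsigned char *recvbuf)
{
    unsigned short type;
    memcpy(&type, recvbuf, sizeof type);   /* Packet_Type is the first field */

    if (type == 0x0002) {                  /* assumed "protected" packet type */
        struct PKT_HEADER_A h;
        memcpy(&h, recvbuf, sizeof h);
        printf("Protected_Payload_Length : %04x\n", h.Protected_Payload_Length);
    } else {
        struct PKT_HEADER_B h;
        memcpy(&h, recvbuf, sizeof h);
        printf("Version : %04x\n", h.Version);
    }
}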
You should probably take a look at the packing done by nanopb or Protobuf, because it sounds like you have a packing problem. Data should be pieced together before sending, and Packet_Type would encode which header to decode/encode with.
If you can't properly pack/unpack, an alternative is to create both
typedef struct {
unsigned short Packet_Type;
unsigned short Unprotected_Payload_Length;
unsigned short Protected_Payload_Length;
unsigned short Version;
} PKT_HEADER_FULL;
typedef struct {
unsigned short Packet_Type;
unsigned short Unprotected_Payload_Length;
unsigned short Version;
} PKT_HEADER_SHORT;
then create your packet header:
typedef union u{
PKT_HEADER_FULL full;
PKT_HEADER_SHORT concat;
}PKT_HEADER;
// or like this
typedef struct {
    unsigned short Protected_Payload_Length;
    unsigned short Version;
} longform;
typedef struct {
    unsigned short Packet_Type;
    unsigned short Unprotected_Payload_Length;
    union {
        longform l;
        unsigned short Version;
    };
} PKT_HEADER;
Then the data coming in can be decoded either way (again, depending on Packet_Type), and the remaining space can be ignored. A caveat to this method is that you can't use sizeof(PKT_HEADER), because the struct size will always be the larger value.
I've searched through many sites and can not seem to find anything relevant.
I would like to be able to take the individual bytes of each default data type, such as short, unsigned short, int, unsigned int, float and double, and store each byte (its binary contents) into successive indices of an unsigned char array. How can this be achieved?
For example:
int main() {
short sVal = 1;
unsigned short usVal = 2;
int iVal = 3;
unsigned int uiVal = 4;
float fVal = 5.0f;
double dVal = 6.0;
const unsigned int uiLengthOfShort = sizeof(short);
const unsigned int uiLengthOfUShort = sizeof(unsigned short);
const unsigned int uiLengthOfInt = sizeof(int);
const unsigned int uiLengthOfUInt = sizeof(unsigned int);
const unsigned int uiLengthOfFloat = sizeof(float);
const unsigned int uiLengthOfDouble = sizeof(double);
unsigned char ucShort[uiLengthOfShort];
unsigned char ucUShort[uiLengthOfUShort];
unsigned char ucInt[uiLengthOfInt];
unsigned char ucUInt[uiLengthOfUInt];
unsigned char ucFloat[uiLengthOfFloat];
unsigned char ucDouble[uiLengthOfDouble];
// Above I declared a variable val for each data type to work with
// Next I created a const unsigned int of each type's size.
// Then I created unsigned char[] using each data types size respectively
// Now I would like to take each individual byte of the above val's
// and store them into the indexed location of each unsigned char array.
// For Example: - I'll not use int here since the int is
// machine and OS dependent.
// I will use a data type that is common across almost all machines.
// Here I will use the short as my example
// We know that a short is 2-bytes or has 16 bits encoded
// I would like to take the 1st byte of this short:
// (the first 8 bit sequence) and to store it into the first index of my unsigned char[].
// Then I would like to take the 2nd byte of this short:
// (the second 8 bit sequence) and store it into the second index of my unsigned char[].
// How would this be achieved for any of the data types?
// A Short in memory is 2 bytes here is a bit representation of an
// arbitrary short in memory { 0101 1101, 0011 1010 }
// I would like ucShort[0] = sVal's { 0101 1101 } &
// ucShort[1] = sVal's { 0011 1010 }
// ucShort[0] = sVal's first byte  (first 8-bit sequence)
// ucShort[1] = sVal's second byte (second 8-bit sequence)
// ... and so on for each data type.
return 0;
}
Ok, so first, don't do that if you can avoid it. It's dangerous and can be extremely dependent on architecture.
The commenters above are correct: a union is the safest way to do it. You still have the endian problem, yes, but at least you don't have the stack alignment problem (I assume this is for network code, so stack alignment is another potential architecture problem).
This is what I've found to be the most straightforward way to do it:
uint32_t example_int;
char array[4];
//No endian switch
array[0] = ((char*) &example_int)[0];
array[1] = ((char*) &example_int)[1];
array[2] = ((char*) &example_int)[2];
array[3] = ((char*) &example_int)[3];
//Endian switch
array[0] = ((char*) &example_int)[3];
array[1] = ((char*) &example_int)[2];
array[2] = ((char*) &example_int)[1];
array[3] = ((char*) &example_int)[0];
If you're trying to write cross-architecture code, you will need to deal with endian problems one way or another. My suggestion is to construct a short endian test and build functions to "pack" and "unpack" byte arrays based on the above method. It should be noted that to "unpack" a byte array, simply reverse the above assignment statements.
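Here is a rough sketch of that suggestion, using the byte-cast method above plus a runtime endian check; the function names and the choice of big-endian (network) byte order on the wire are mine:

#include <stdint.h>

/* Returns 1 on a little-endian machine, 0 on a big-endian one. */
int is_little_endian(void)
{
    uint32_t probe = 1;
    return ((const unsigned char *)&probe)[0] == 1;
}

/* Pack a 32-bit value into out[0..3] in big-endian (network) order,
 * using the byte-cast method from the answer. */
void pack_u32(uint32_t value, unsigned char out[4])
{
    const unsigned char *p = (const unsigned char *)&value;
    if (is_little_endian()) {
        out[0] = p[3]; out[1] = p[2]; out[2] = p[1]; out[3] = p[0];
    } else {
        out[0] = p[0]; out[1] = p[1]; out[2] = p[2]; out[3] = p[3];
    }
}

/* Unpack: reverse the assignments, as noted above. */
uint32_t unpack_u32(const unsigned char in[4])
{
    uint32_t value;
    unsigned char *p = (unsigned char *)&value;
    if (is_little_endian()) {
        p[3] = in[0]; p[2] = in[1]; p[1] = in[2]; p[0] = in[3];
    } else {
        p[0] = in[0]; p[1] = in[1]; p[2] = in[2]; p[3] = in[3];
    }
    return value;
}

You would write one such pair per fixed-width type you need to send, so that both ends of the connection agree on the byte order regardless of their native endianness.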
The simplest correct way is:
// static_assert(sizeof ucShort == sizeof sVal);
memcpy( &ucShort, &sVal, sizeof ucShort);
The stuff you wrote in the comments is not correct: all types other than the character types have machine-dependent sizes.
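For example, applied to a couple of the types from the question (a minimal sketch, assuming you only want the raw in-memory bytes, in whatever order the machine stores them):

#include <stdio.h>
#include <string.h>

int main(void)
{
    short sVal = 1;
    double dVal = 6.0;

    unsigned char ucShort[sizeof sVal];
    unsigned char ucDouble[sizeof dVal];

    memcpy(ucShort, &sVal, sizeof ucShort);     /* byte-for-byte copy */
    memcpy(ucDouble, &dVal, sizeof ucDouble);

    printf("%02x %02x\n", ucShort[0], ucShort[1]);
    return 0;
}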
With the help of Raw N, who provided a website, I did a search on byte manipulation and found this thread - http://www.cplusplus.com/forum/articles/12/ - which presents a solution similar to what I am looking for; however, I would have to repeat this process for every default data type.
After doing some testing, this is what I have come up with so far. It is dependent on the machine architecture, but the concept is the same on other machines.
typedef struct packed_2bytes {
unsigned char c0;
unsigned char c1;
} packed_2bytes;
typedef struct packed_4bytes {
unsigned char c0;
unsigned char c1;
unsigned char c2;
unsigned char c3;
} packed_4bytes;
typedef struct packed_8bytes {
unsigned char c0;
unsigned char c1;
unsigned char c2;
unsigned char c3;
unsigned char c4;
unsigned char c5;
unsigned char c6;
unsigned char c7;
} packed_8bytes;
typedef union {
short s;
packed_2bytes bytes;
} packed_short;
typedef union {
unsigned short us;
packed_2bytes bytes;
} packed_ushort;
typedef union { // 32bit machine, os, compiler only
int i;
packed_4bytes bytes;
} packed_int;
typedef union { // 32 bit machine, os, compiler only
unsigned int ui;
packed_4bytes bytes;
} packed_uint;
typedef union {
float f;
packed_4bytes bytes;
} packed_float;
typedef union {
double d;
packed_8bytes bytes;
} packed_double;
There is no implementation of their use here, only the declarations or definitions of these types. I do think that they should record which endianness is being used, but whoever uses them has to know this ahead of time, just as they have to know the architecture's sizes for each of the default types. I am not sure whether there would be a problem with signed int due to one's complement, two's complement or sign-bit implementations, but it could be something to consider.
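A small usage sketch of the unions above, with the float variant repeated so the example is self-contained; the bytes you see printed are still machine-dependent:

#include <stdio.h>

typedef struct packed_4bytes {
    unsigned char c0;
    unsigned char c1;
    unsigned char c2;
    unsigned char c3;
} packed_4bytes;

typedef union {
    float f;
    packed_4bytes bytes;
} packed_float;

int main(void)
{
    packed_float pf;
    pf.f = 5.0f;                              /* write as a float ...        */
    printf("%02x %02x %02x %02x\n",           /* ... read the raw bytes back */
           pf.bytes.c0, pf.bytes.c1, pf.bytes.c2, pf.bytes.c3);
    return 0;
}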
Go easy on me, I'm still a newb with C/C++. I know this has been asked a few times, and I've tried following the solutions given to no avail. This code is for a NetBurner processor; DWORD is 32-bit unsigned, WORD is 16-bit unsigned.
header func.h:
class funcs
{
// ...
private:
void myfunc();
WORD data001;
DWORD data002[100];
DWORD data003[100];
// ...
};
I have this function that calls upon that data in my class, funcs.cpp. Assume all variables have been initialized:
void funcs::myfunc()
{
data001++;
data002[data001] = x; // random x for this example
data003[data001] = y;
}
My compiler is complaining: "error: invalid types 'DWORD[WORD]' for array subscript". I've changed the array subscript type to "int", "unsigned int" and every other type I could think of, and still get the error. I tried the solutions given in previous posts:
void funcs::myfunc()
{
data001++;
this->data002[data001] = x; // random x for this example
this->data003[data001] = y;
}
but it was to no avail. I've also tried containing myfunc definition within the class, same error. Any ideas/solutions? I'm stumped. Thanks guys!!
Edit: data types provided in a header file:
typedef unsigned char BOOL;
typedef unsigned char BOOLEAN;
typedef unsigned char BYTE; /* Unsigned 8 bit quantity */
typedef signed short SHORT;/* Signed 16 bit quantity */
typedef unsigned short WORD; /* Unsigned 16 bit quantity */
typedef unsigned long DWORD;/* Unsigned 32 bit quantity */
typedef signed long LONG; /* Signed 32 bit quantity */
typedef volatile unsigned char VBOOLEAN;
typedef volatile unsigned char VBYTE; /* Unsigned 8 bit quantity */
typedef volatile short VSHORT; /* Signed 16 bit quantity */
typedef volatile unsigned short VWORD; /* Unsigned 16 bit quantity */
typedef volatile unsigned long VDWORD; /* Unsigned 32 bit quantity */
typedef volatile signed long VLONG; /* Signed 32 bit quantity */
Screenshot:
Your real code (transcribed from the screenshot) is:
DWORD u_data002;
WORD u_data003;
u_data002[u_data_003] = whatever;
which tries to index an integer as if it were an array or pointer.
Presumably, either u_data002 is supposed to be an array, or you meant to write something other than u_data002.
Why is this piece of code needed?
typedef struct corr_id_{
unsigned int size:8;
unsigned int valueType:8;
unsigned int classId:8;
unsigned int reserved:8;
} CorrId;
I did some investigation and found that this way we limit memory consumption to just what we need.
For example:
#include <iostream>

typedef struct corr_id_new{
unsigned int size;
unsigned int valueType;
unsigned int classId;
unsigned int reserved;
} CorrId_NEW;
typedef struct corr_id_{
unsigned int size:8;
unsigned int valueType:8;
unsigned int classId:8;
unsigned int reserved:8;
} CorrId;
int main(){
CorrId_NEW Obj1;
CorrId Obj2;
std::cout<<sizeof(Obj1)<<std::endl;
std::cout<<sizeof(Obj2)<<std::endl;
}
Output:-
16
4
I want to understand the real use case for such scenarios. Why can't we declare the struct like this instead?
typedef struct corr_id_new{
unsigned _int8 size;
unsigned _int8 valueType;
unsigned _int8 classId;
unsigned _int8 reserved;
} CorrId_NEW;
Does this have something to do with compiler optimizations? Or what are the benefits of declaring the structure that way?
I want to understand the real use case of such scenarios?
For example, the status register of some CPU may pack several single-bit flags and a few wider fields into one 32-bit word. In order to represent it via a structure, you can use a bit-field:
struct CSR
{
unsigned N: 1;
unsigned Z: 1;
unsigned C: 1;
unsigned V: 1;
unsigned : 20;
unsigned I: 1;
unsigned : 2;
unsigned M: 5;
};
You can see here that the fields are not multiples of 8 bits, so you can't use int8_t or anything similar.
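For completeness, a small sketch of how such a bit-field struct is used; keep in mind that the placement of bit-fields within the word is implementation-defined, so the exact bit positions depend on the compiler/ABI:

#include <stdio.h>

struct CSR {
    unsigned N: 1;
    unsigned Z: 1;
    unsigned C: 1;
    unsigned V: 1;
    unsigned  : 20;   /* unnamed: reserved bits */
    unsigned I: 1;
    unsigned  : 2;
    unsigned M: 5;
};

int main(void)
{
    struct CSR csr = {0};
    csr.Z = 1;        /* set the zero flag */
    csr.M = 0x13;     /* 5-bit mode field  */
    printf("Z=%u M=%#x sizeof=%zu\n",
           (unsigned)csr.Z, (unsigned)csr.M, sizeof csr);
    return 0;
}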
Let's look at a simple scenario:
typedef struct student{
    unsigned int age:8;      // 8 bits is enough to store a student's age (up to 255 years)
    unsigned int roll_no:16; // max roll_no can be 2^16, which is large enough
    unsigned int classId:4;  // class ID can be 4 bits long (0-15), as per need
    unsigned int reserved:4; // reserved
} student;
In the above case, all the fields fit in just 32 bits.
If you used plain unsigned ints instead, it would take 4*32 bits.
If we store age as a 32-bit integer, it can hold values from 0 to 2^32 - 1. But a person's age is at most 100 or 150 (even for somebody still studying at that age), which needs at most 8 bits to store, so why waste the remaining 24 bits?
You are right, the last structure definition with unsigned _int8 is almost equivalent to the definition using :8. Almost, because byte order can make a difference here, so you might find that the memory layout is reversed in the two cases.
The main purpose of the :8 notation is to allow the use of fractional bytes, as in
struct foo {
uint32_t a:1;
uint32_t b:2;
uint32_t c:3;
uint32_t d:4;
uint32_t e:5;
uint32_t f:6;
uint32_t g:7;
uint32_t h:4;
};
To minimize padding, I strongly suggest learning the padding rules yourself; they are not hard to grasp. If you do, you will know that your version with unsigned _int8 does not add any padding. Or, if you don't feel like learning those rules, just use __attribute__((__packed__)) on your struct, though that may introduce a severe performance penalty.
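To illustrate the padding point, a tiny sketch (gcc/clang attribute syntax; the sizes in the comment are typical, not guaranteed):

#include <stdio.h>

struct with_padding {       /* natural alignment: padding likely after c */
    char c;
    unsigned int u;
};

struct __attribute__((__packed__)) no_padding {
    char c;
    unsigned int u;         /* may be misaligned; access can be slower   */
};

int main(void)
{
    printf("with padding: %zu, packed: %zu\n",
           sizeof(struct with_padding), sizeof(struct no_padding));
    /* Typically prints "with padding: 8, packed: 5". */
    return 0;
}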
It's often used with pragma pack to create bitfields with labels, e.g.:
#pragma pack(1)
struct eg {
    unsigned int one : 4;
    unsigned int two : 8;
    unsigned int three : 16;
};
Can be cast for whatever purpose to an int32_t, and vice versa. This might be useful when reading serialized data that follows a (language agnostic) protocol -- you extract an int and cast it to a struct eg to match the fields and field sizes defined in the protocol. You could also skip the conversion and just read an int sized chunk into such a struct, point being that the bitfield sizes match the protocol field sizes. This is extremely common in network programming -- if you want to send a packet following the protocol, you just populate your struct, serialize, and transmit.
Note that pragma pack is not standard C but it is recognized by various common compilers. Without pragma pack, however, the compiler is free to place padding between fields, reducing the use value for the purposes described above.
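A rough sketch of that round trip; memcpy is used for the reinterpretation to sidestep alignment/aliasing surprises, a 4-bit spare field is added so the bit-fields fill exactly 32 bits, and the exact bit layout on the wire is still compiler-dependent:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#pragma pack(push, 1)
struct eg {
    unsigned int one   : 4;
    unsigned int two   : 8;
    unsigned int three : 16;
    unsigned int spare : 4;   /* pad the bit-fields up to a full 32 bits */
};
#pragma pack(pop)

int main(void)
{
    struct eg msg = { .one = 0x5, .two = 0xAB, .three = 0x1234, .spare = 0 };

    uint32_t wire;
    memcpy(&wire, &msg, sizeof wire);    /* "serialize" to an int-sized chunk */
    printf("on the wire: 0x%08x\n", (unsigned)wire);

    struct eg back;
    memcpy(&back, &wire, sizeof back);   /* ... and reinterpret on receipt */
    printf("three = 0x%x\n", (unsigned)back.three);
    return 0;
}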
I am programming in linux, which is new to me. I am working on a project to design a 'layer 7' network protocol, and we have these packets that contain resources. And depending on the type of resource, the length of that resource would be different. I am kind of new to C/C++, and am not sure I understand unions all that well. The idea was that I would be able to make a "generic resource" type and depending on what resource it was I could just cast a void* as a pointer to this typedef structure and then call the data contained in it as anything I please and it would take care of the 'casting'. Anyways, here is what I came up with:
typedef struct _pktresource
{
unsigned char Type; // The type of the resource.
union {
struct { // This is used for variable length data.
unsigned short Size;
void *Data;
};
void *ResourceData; // Just a generic pointer to the data.
unsigned char Byte;
char SByte;
short Int16;
unsigned short UInt16;
int Int32;
unsigned int UInt32;
long long Int64;
unsigned long long UInt64;
float Float;
double Double;
unsigned int Time;
};
} pktresource, *ppktresource;
The principle behind this was simple. But when I do something like
pktresource.Size = XXXX
It starts out 4 bytes into the structure instead of 1 byte. Am I failing to grasp a major concept here? Because it feels like I am.
EDIT: Forgot to mention, when I reference
pktresource.Type
It starts at the beginning like its supposed to.
EDIT: The correction was to add pragma statements for proper alignment. After the fix, the code looks like this:
#pragma pack(push)
#pragma pack(1)
typedef struct _pktresource
{
unsigned char Type; // The type of the resource.
union {
struct { // This is used for variable length data.
unsigned short Size;
unsigned char Data[];
};
unsigned char ResourceData[]; // Just a generic pointer to the data.
unsigned char Byte;
char SByte;
short Int16;
unsigned short UInt16;
int Int32;
unsigned int UInt32;
long long Int64;
unsigned long long UInt64;
float Float;
double Double;
unsigned int Time;
};
} pktresource, *ppktresource;
#pragma pack(pop)
Am I failing to grasp a major concept here?
You're missing knowledge of structure alignment. Basically, it forces certain fields to be aligned by > 1 byte boundaries depending on their size. You can use #pragma to override this behavior, but that can cause interoperability issues if the structure is used anywhere outside your application.
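You can see the effect directly with offsetof; a tiny sketch mirroring the shape of the question's struct (field names borrowed from it, offsets are typical for a 32-bit target rather than guaranteed):

#include <stddef.h>
#include <stdio.h>

/* Same shape as the question's struct, without any packing pragma. */
struct unpacked {
    unsigned char Type;
    union {
        struct { unsigned short Size; void *Data; } v;
        double Double;
    } u;
};

#pragma pack(push, 1)
struct packed {
    unsigned char Type;
    union {
        struct { unsigned short Size; void *Data; } v;
        double Double;
    } u;
};
#pragma pack(pop)

int main(void)
{
    /* Size is the first member of the union, so its offset equals the union's:
     * typically 4 or 8 unpacked, 1 when packed. */
    printf("unpacked union offset: %zu\n", offsetof(struct unpacked, u));
    printf("packed   union offset: %zu\n", offsetof(struct packed, u));
    return 0;
}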
I think the problem is alignment. By default most compilers align to the word size of the machine / OS, in this case 32 bits / 4 bytes. So, since you have that unsigned char Type field up front, the compiler is pushing the Size field to the next 4-byte boundary.
try
#pragma pack(1)
ahead of your structure definitions.
I don't know what compiler you are using, but that's good old-fashioned C code that's been regularly in use for network programming since before most of these rude kids on StackOverflow were born.