I was not sure what to put in the title, so it may not describe my problem perfectly; I apologize for that.
First, a bit of context: I started learning network programming two weeks ago, for a 3D game.
Right now I am focusing on sending packets that use std::vector to transmit models such as .obj files, but that is not my problem.
The problem is that I am not receiving the contents of this vector on the other side.
Since a picture is worth a thousand words, here is my code (it just tests packing the data into a char[] on the 'server' side and reading it back on the 'client' side).
My C++ program:
#include <vector>
#include <iostream>
#include <cstring> // memcpy
#include <cstdlib> // system
int main()
{
/* -- server -- */
// variables to buffer
std::vector<int> vec = { 1, 25, 156, 0, 1 };
short type = 25;
int written = 0;
char buffer[256] = {};
memcpy(&buffer[written], &type, sizeof(type));
written += sizeof(type);
memcpy(&buffer[written], &vec, sizeof(vec));
written += sizeof(vec);
std::cout << "/* -- Server -- */\n" << std::endl;
std::cout << "[written] --> " << "Size : " << sizeof(written) << " | Value : " << written << std::endl;
std::cout << "[type] --> " << "Size : " << sizeof(type) << " | Value : " << type << std::endl;
std::cout << "[vec] --> " << "Size : " << sizeof(vec) << " | Value : " << vec.data() << std::endl;
// 'send' to client and 'receive' from server (simulated for this example)
char buffer2[256] = {};
memcpy(&buffer2, &buffer[0], sizeof(buffer2));
/* -- client -- */
// buffer to variables
std::vector<int>* vec2;
short type2 = 0;
int read = 0;
memcpy(&type2, &buffer[read], sizeof(type2));
read += sizeof(type2);
memcpy(&vec2, &buffer[read], sizeof(vec2));
read += sizeof(vec2);
std::cout << "\n\n";
std::cout << "/* -- Client -- */\n" << std::endl;
std::cout << "[read] --> " << "Size : " << sizeof(read) << " | Value : " << read << std::endl;
std::cout << "[type2] --> " << "Size : " << sizeof(type2) << " | Value : " << type2 << std::endl;
std::cout << "[vec2] --> " << "Size : " << sizeof(vec2) << " | Value : " << vec2->data() << std::endl;
std::cout << "\n"; system("pause");
return 0;
}
Console output:
/* -- Server -- */
[written] --> Size : 4 | Value : 18
[type] --> Size : 2 | Value : 25
[vec] --> Size : 16 | Value : 00589A00
/* -- Client -- */
[read] --> Size : 4 | Value : 6
[type2] --> Size : 2 | Value : 25
[vec2] --> Size : 4 | Value : 00000000
Press any key to continue...
memcpy(&buffer[written], &vec, sizeof(vec));
You already have problems here, even before receiving anything.
sizeof(vec) is the size of the std::vector<int> object itself, which will probably be 8, 16, or 24 bytes, or something like that, depending on the implementation. The size of the vector object will always be the same, whether the vector is empty or holds an image of every page in an encyclopedia.
A vector, and what's in a vector, are two completely different things.
A vector's size() method gives the number of values in the vector, so the above should obviously be:
memcpy(&buffer[written], vec.data(), vec.size()*sizeof(int));
The rest of the code should be adjusted accordingly.
Similarly, the process of deserializing into a vector is also wrong in the shown code:
std::vector<int>* vec2;
This is a pointer to a vector. In C++, before using a pointer it must be initialized to point to an existing instance of the same type. There's nothing in the shown code that does that.
memcpy(&vec2, &buffer[read], sizeof(vec2));
Since vec2 is a pointer, sizeof(vec2) will be the size of a pointer: either 4 or 8 bytes. This attempts to deserialize the raw memory address of a pointer. Again, this makes no sense.
What the shown code is attempting to do should be:
Declare your vector
std::vector<int> vec2;
Determine how many values will be read into the vector, and resize it. If, for example, you know that you have n bytes worth of raw integer data to deserialize:
vec2.resize(n / sizeof(int));
At this point you can copy the raw data into the vector's storage.
memcpy(vec2.data(), &buffer[read], n);
This approach is slightly inefficient, because resize() value-initializes elements that are about to be overwritten, but that's a secondary issue. It's also possible to implement this logic in more C++-friendly ways, but that's also secondary. The main issue is that sizeof will not magically give you the number of bytes that will be read into a vector; that is something you need to track yourself. C++ will do very little of this for you. You need to keep track of the actual number of bytes that were read, that comprise the contents of the vector, then resize and read into the vector accordingly.
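Putting the pieces together, here is a minimal sketch of the corrected round trip; the uint32_t length prefix is an assumption made for illustration, not something prescribed by the original code:
#include <cstdint>
#include <cstring>
#include <iostream>
#include <vector>
int main()
{
    std::vector<int> vec = { 1, 25, 156, 0, 1 };
    char buffer[256] = {};
    std::size_t written = 0;
    // Serialize: first the element count, then the raw elements.
    std::uint32_t count = static_cast<std::uint32_t>(vec.size());
    std::memcpy(&buffer[written], &count, sizeof(count));
    written += sizeof(count);
    std::memcpy(&buffer[written], vec.data(), count * sizeof(int));
    written += count * sizeof(int);
    // Deserialize: read the count back, size the vector, copy the elements.
    std::size_t read = 0;
    std::uint32_t count2 = 0;
    std::memcpy(&count2, &buffer[read], sizeof(count2));
    read += sizeof(count2);
    std::vector<int> vec2(count2);
    std::memcpy(vec2.data(), &buffer[read], count2 * sizeof(int));
    read += count2 * sizeof(int);
    for (int v : vec2)
        std::cout << v << ' ';   // prints: 1 25 156 0 1
    std::cout << '\n';
    return 0;
}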
int numRows = 5;
string s = "hellohi";
vector<string> rows(min(numRows, int(s.size())));
I think it is using the fill constructor. https://www.cplusplus.com/reference/vector/vector/vector/
but I don't know whether it creates a vector of NULL strings or a vector of empty strings?
And what is the size of NULL?
And what is the size of an empty string? 1 byte (the '\0' char)?
The constructor you're using will create empty strings. For example you can check with:
// check the number of entries in rows, should be 5
std::cout << rows.size() << std::endl;
// check the number of characters in first string, should be 0
std::cout << rows[0].size() << std::endl;
// after this assignment the size should be 11, since the string has 11 characters
rows[0] = "hello world";
std::cout << rows[0].size() << std::endl;
I believe the size of NULL is implementation-defined; you can check it (via the related nullptr) with:
std::cout << sizeof(nullptr) << std::endl;
I get 8 as the size (which is 64 bits)
Similar to nullptr, the size of an empty string is probably also implementation-defined; you can find it like this:
std::string test_string;
std::cout << sizeof(test_string) << "\n";
std::cout << test_string.size() << "\n"; // should be 0 since the string is empty
test_string = "hello world"; // it doesn't matter how long the string is, it's the same size
std::cout << sizeof(test_string) << "\n";
std::cout << test_string.size() << "\n"; // should be 11 since the string has data now
I get 32 bytes for the size. The reason sizeof(test_string) doesn't change is how std::string works behind the scenes: instead of storing the character data inside the object (most of the time; short strings may be stored inline via the small string optimization), it stores a pointer to the data, and a pointer is always a fixed size.
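If you want to see that behaviour for yourself, a quick experiment (results vary by implementation) is to watch capacity() as a string grows while sizeof stays fixed:
#include <iostream>
#include <string>
int main()
{
    std::string s;
    // With the small string optimization, short strings live inside
    // the string object itself.
    std::cout << "sizeof: " << sizeof(s)
              << ", capacity: " << s.capacity() << std::endl;
    s = "this is a much longer string that will not fit in the small buffer";
    // sizeof(s) is unchanged; only capacity() grows, because the
    // characters now live in a separate heap allocation.
    std::cout << "sizeof: " << sizeof(s)
              << ", capacity: " << s.capacity() << std::endl;
    return 0;
}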
The following code:
#include<iostream>
int main (void) {
int lista[5] = {0,1,2,3,4};
std::cout << lista << std::endl;
std::cout << &lista << std::endl;
std::cout << lista+1 << std::endl;
std::cout << &lista+1 << std::endl;
std::cout << lista+2 << std::endl;
std::cout << &lista+2 << std::endl;
std::cout << lista+3 << std::endl;
std::cout << &lista+3 << std::endl;
return (0);
}
Outputs:
0x22ff20
0x22ff20
0x22ff24
0x22ff34
0x22ff28
0x22ff48
0x22ff2c
0x22ff5c
I understood that an array name can be used like a pointer, except that we cannot change its address to point anywhere else after declaration. I also understood that the array's value is the address of its first position in memory; therefore, 0x22ff20 in this example is where the array starts and where the first element is stored.
What I did not understand is why the other addresses do not follow in sequence from the array's address. I mean, why is lista+1 different from &lista+1? Should they not be the same?
In pointer arithmetic, types matter.
While the value is the same for both lista and &lista, their types are different: lista (in the expression used in the cout call) decays to type int*, whereas &lista has type int (*)[5].
So when you add 1 to lista, it points to the "next" int. But &lista + 1 points to the location after 5 ints (which may not be a valid address).
Answering the question as asked:
std::cout << &lista+1 << std::endl;
In this code you take the address of the array lista and add 1 to the result. Since the size of the array is sizeof(int) * 5, incrementing a pointer to it by 1 adds sizeof(int) * 5 to the address, which yields the number you see.
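You can verify this arithmetic directly. A small sketch (the exact addresses will differ from run to run, but the differences will not):
#include <iostream>
int main()
{
    int lista[5] = {0, 1, 2, 3, 4};
    // lista decays to int*: adding 1 advances by sizeof(int) bytes.
    std::cout << (char*)(lista + 1) - (char*)lista << std::endl;   // 4 on typical platforms
    // &lista has type int (*)[5]: adding 1 advances by sizeof(int) * 5 bytes.
    std::cout << (char*)(&lista + 1) - (char*)&lista << std::endl; // 20 on typical platforms
    return 0;
}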
I'm trying to read pairs of values from a file in the constructor of an object.
The file looks like this:
4
1 1
2 2
3 3
4 4
The first number is the number of pairs to read.
On some lines the values seem to have been correctly written into the vector; on the next, they are gone. I am totally confused.
inline
BaseInterpolator::BaseInterpolator(std::string data_file_name)
{
std::ifstream in_file(data_file_name);
if (!in_file) {
std::cerr << "Can't open input file " << data_file_name << std::endl;
exit(EXIT_FAILURE);
}
size_t n;
in_file >> n;
xs_.reserve(n);
ys_.reserve(n);
size_t i = 0;
while(in_file >> xs_[i] >> ys_[i])
{
// this line prints correct values i.e. 1 1, 2 2, 3 3, 4 4
std::cout << xs_[i] << " " << ys_[i] << std::endl;
// this line prints xs_.size() = 0
std::cout << "xs_.size() = " << xs_.size() << std::endl;
if(i + 1 < n)
i += 1;
else
break;
// this line prints 0 0, 0 0, 0 0
std::cout << xs_[i] << " " << ys_[i] << std::endl;
}
// this line prints correct values i.e. 4 4
std::cout << xs_[i] << " " << ys_[i] << std::endl;
// this line prints xs_.size() = 0
std::cout << "xs_.size() = " << xs_.size() << std::endl;
}
The class is defined thus:
class BaseInterpolator
{
public:
~BaseInterpolator();
BaseInterpolator();
BaseInterpolator(std::vector<double> &xs, std::vector<double> &ys);
BaseInterpolator(std::string data_file_name);
virtual int interpolate(std::vector<double> &x, std::vector<double> &fx) = 0;
virtual int interpolate(std::string input_file_name,
std::string output_file_name) = 0;
protected:
std::vector<double> xs_;
std::vector<double> ys_;
};
You're experiencing undefined behaviour. It seems like it's half working, but that's twice as bad as not working at all.
The problem is this:
xs_.reserve(n);
ys_.reserve(n);
You are only reserving capacity, not creating elements.
Replace it by :
xs_.resize(n);
ys_.resize(n);
Now xs_[i] with i < n is actually valid.
If in doubt, use xs_.at(i) instead of xs_[i]. It performs an additional bounds check, which saves you the trouble of debugging without knowing where to start.
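For instance, with the reserve() bug above, at() would have failed loudly right away (a tiny illustration, not taken from the original code):
#include <iostream>
#include <stdexcept>
#include <vector>
int main()
{
    std::vector<double> xs;
    xs.reserve(4);              // capacity is 4, but size() is still 0
    try {
        xs.at(0) = 1.0;         // throws std::out_of_range immediately
    } catch (const std::out_of_range& e) {
        std::cout << "caught: " << e.what() << std::endl;
    }
    return 0;
}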
You're using reserve(), which increases capacity (storage space), but does not increase the size of the vector (i.e. it does not add any objects into it). You should use resize() instead. This will take care of size() being 0.
You're printing the xs_[i] and ys_[i] after you increment i. It's natural those will be 0 (or perhaps a random value) - you haven't initialised them yet.
vector::reserve reserves space for further operations but does not change the size of the vector; you should use vector::resize.
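For completeness, here is a sketch of how the constructor could look with reserve() used as intended, reading into locals and appending with push_back so that no out-of-range element is ever touched (the surrounding includes and class definition are assumed to be as in the question):
inline
BaseInterpolator::BaseInterpolator(std::string data_file_name)
{
    std::ifstream in_file(data_file_name);
    if (!in_file) {
        std::cerr << "Can't open input file " << data_file_name << std::endl;
        exit(EXIT_FAILURE);
    }
    size_t n;
    in_file >> n;
    xs_.reserve(n);   // reserve is fine here: push_back creates the elements
    ys_.reserve(n);
    double x, y;
    while (xs_.size() < n && in_file >> x >> y) {
        xs_.push_back(x);
        ys_.push_back(y);
    }
}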
The application I am working on receives C style structs from an embedded system whose code was generated to target a 16 bit processor. The application that speaks with the embedded system is built with either a 32 bit gcc compiler or a 32 bit MSVC C++ compiler. The communication between the application and the embedded system takes place via UDP packets over ethernet or modem.
The payload within the UDP packets consist of various different C style structs. On the application side a C++ style reinterpret_cast is capable of taking the unsigned byte array and casting it into the appropriate struct.
However, I run into problems with reinterpret_cast when the struct contains enumerated values. The 16 bit Watcom compiler treats enumerated values as a uint8_t type. However, on the application side the enumerated values are treated as 32 bit values. When I receive a packet with enumerated values in it, the data gets garbled because the struct on the application side is larger than the struct on the embedded side.
The solution to this problem, so far, has been to change the enumerated type within the struct on the application side to an uint8_t. However, this is not an optimal solution because we can no longer use the member as an enumerated type.
What I am looking for is a solution which will allow me to use a simple cast operation without having to tamper with the struct definition in the source on the application side. By doing so, I can use the struct as is in the upper layers of my application.
As noted, the correct way to deal with the issue is proper serialization and deserialization.
But it doesn't mean we can't try some hacks.
Option 1:
If your particular compiler supports packing the enum (in my case gcc 4.7 on Windows), this might work:
typedef enum { VALUE_1 = 1, VALUE_2, VALUE_3 }__attribute__ ((__packed__)) TheRealEnum;
Option 2:
If your particular compiler supports class sizes of < 4 bytes, you can use a HackedEnum class which uses operator overloading for the conversion (note the gcc attribute, which you might not want):
class HackedEnum
{
private:
uint8_t evalue;
public:
void operator=(const TheRealEnum v) { evalue = v; };
operator TheRealEnum() { return (TheRealEnum)evalue; };
}__attribute__((packed));
You would replace TheRealEnum in your structures with HackedEnum, but you would still continue using it as TheRealEnum.
A full example to see it working:
#include <iostream>
#include <stddef.h>
#include <stdint.h> // uint8_t, uint16_t
using namespace std;
#pragma pack(push, 1)
typedef enum { VALUE_1 = 1, VALUE_2, VALUE_3 } TheRealEnum;
typedef struct
{
uint16_t v1;
uint8_t enumValue;
uint16_t v2;
}__attribute__((packed)) ShortStruct;
typedef struct
{
uint16_t v1;
TheRealEnum enumValue;
uint16_t v2;
}__attribute__((packed)) LongStruct;
class HackedEnum
{
private:
uint8_t evalue;
public:
void operator=(const TheRealEnum v) { evalue = v; };
operator TheRealEnum() { return (TheRealEnum)evalue; };
}__attribute__((packed));
typedef struct
{
uint16_t v1;
HackedEnum enumValue;
uint16_t v2;
}__attribute__((packed)) HackedStruct;
#pragma pack(pop)
int main(int argc, char **argv)
{
cout << "Sizes: " << endl
<< "TheRealEnum: " << sizeof(TheRealEnum) << endl
<< "ShortStruct: " << sizeof(ShortStruct) << endl
<< "LongStruct: " << sizeof(LongStruct) << endl
<< "HackedStruct: " << sizeof(HackedStruct) << endl;
ShortStruct ss;
cout << "address of ss: " << &ss << " size " << sizeof(ss) <<endl
<< "address of ss.v1: " << (void*)&ss.v1 << endl
<< "address of ss.ev: " << (void*)&ss.enumValue << endl
<< "address of ss.v2: " << (void*)&ss.v2 << endl;
LongStruct ls;
cout << "address of ls: " << &ls << " size " << sizeof(ls) <<endl
<< "address of ls.v1: " << (void*)&ls.v1 << endl
<< "address of ls.ev: " << (void*)&ls.enumValue << endl
<< "address of ls.v2: " << (void*)&ls.v2 << endl;
HackedStruct hs;
cout << "address of hs: " << &hs << " size " << sizeof(hs) <<endl
<< "address of hs.v1: " << (void*)&hs.v1 << endl
<< "address of hs.ev: " << (void*)&hs.enumValue << endl
<< "address of hs.v2: " << (void*)&hs.v2 << endl;
uint8_t buffer[512] = {0};
ShortStruct * short_ptr = (ShortStruct*)buffer;
LongStruct * long_ptr = (LongStruct*)buffer;
HackedStruct * hacked_ptr = (HackedStruct*)buffer;
short_ptr->v1 = 1;
short_ptr->enumValue = VALUE_2;
short_ptr->v2 = 3;
cout << "Values of short: " << endl
<< "v1 = " << short_ptr->v1 << endl
<< "ev = " << (int)short_ptr->enumValue << endl
<< "v2 = " << short_ptr->v2 << endl;
cout << "Values of long: " << endl
<< "v1 = " << long_ptr->v1 << endl
<< "ev = " << long_ptr->enumValue << endl
<< "v2 = " << long_ptr->v2 << endl;
cout << "Values of hacked: " << endl
<< "v1 = " << hacked_ptr->v1 << endl
<< "ev = " << hacked_ptr->enumValue << endl
<< "v2 = " << hacked_ptr->v2 << endl;
HackedStruct hs1, hs2;
// hs1.enumValue = 1; // error, the value is not the wanted enum
hs1.enumValue = VALUE_1;
int a = hs1.enumValue;
TheRealEnum b = hs1.enumValue;
hs2.enumValue = hs1.enumValue;
return 0;
}
The output on my particular system is:
Sizes:
TheRealEnum: 4
ShortStruct: 5
LongStruct: 8
HackedStruct: 5
address of ss: 0x22ff17 size 5
address of ss.v1: 0x22ff17
address of ss.ev: 0x22ff19
address of ss.v2: 0x22ff1a
address of ls: 0x22ff0f size 8
address of ls.v1: 0x22ff0f
address of ls.ev: 0x22ff11
address of ls.v2: 0x22ff15
address of hs: 0x22ff0a size 5
address of hs.v1: 0x22ff0a
address of hs.ev: 0x22ff0c
address of hs.v2: 0x22ff0d
Values of short:
v1 = 1
ev = 2
v2 = 3
Values of long:
v1 = 1
ev = 770
v2 = 0
Values of hacked:
v1 = 1
ev = 2
v2 = 3
On the application side a C++ style reinterpret_cast is capable of taking the unsigned byte array and casting it into the appropriate struct.
The layout of structs is not required to be the same between different implementations. Using reinterpret_cast in this way is not appropriate.
The 16 bit Watcom compiler will treat enumerated values as an uint8_t type. However, on the application side the enumerated values are treated as 32 bit values.
The underlying type of an enum is chosen by the implementation, in an implementation-defined manner.
This is just one of the many potential differences between implementations that can cause problems with your reinterpret_cast. There are also actual alignment issues if you're not careful, where the data in the received buffer isn't appropriately aligned for the types (e.g., an integer that requires four byte alignment ends up one byte off), which can cause crashes or poor performance. Padding might differ between platforms, fundamental types might have different sizes, endianness can differ, etc.
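As a small illustration of the padding point (the sizes shown are typical, not guaranteed):
#include <stdint.h>
#include <iostream>
struct Unpacked
{
    uint8_t  tag;    // 1 byte, then usually 3 bytes of padding
    uint32_t value;  // aligned to 4 bytes on most ABIs
};
int main()
{
    // Commonly prints 8, not 5: the compiler inserts padding
    // so that 'value' is suitably aligned.
    std::cout << sizeof(Unpacked) << std::endl;
    return 0;
}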
What I am looking for is a solution which will allow me to use a simple cast operation without having to tamper with the struct definition in the source on the application side. By doing so, I can use the struct as is in the upper layers of my application.
C++11 introduces a new enum syntax that allows you to specify the underlying type. Or you can replace your enums with integral types along with a bunch of predefined constants with manually declared values. This only fixes the problem you're asking about and not any of the other ones you have.
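For instance, a minimal sketch of the C++11 fixed-underlying-type syntax (the names are illustrative, not from the question):
#include <stdint.h>
// The enumerators now occupy exactly one byte, matching the
// 16 bit compiler's layout, while remaining a real enumerated type.
enum class MessageKind : uint8_t
{
    HELLO = 1,
    DATA  = 2,
    BYE   = 3
};
static_assert(sizeof(MessageKind) == 1, "one byte on the wire");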
What you should really do is proper serialization and deserialization.
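A sketch of what field-by-field deserialization could look like for a layout such as LongStruct from the other answer; the byte offsets and little-endian wire order are assumptions for illustration:
#include <stdint.h>
enum TheRealEnum : uint8_t { VALUE_1 = 1, VALUE_2, VALUE_3 };
struct Message
{
    uint16_t    v1;
    TheRealEnum enumValue;
    uint16_t    v2;
};
// Deserialize from the assumed wire format of the 16 bit side:
// two bytes v1, one byte enum, two bytes v2, little-endian.
Message deserialize(const uint8_t* buf)
{
    Message m;
    m.v1 = static_cast<uint16_t>(buf[0] | (buf[1] << 8));
    m.enumValue = static_cast<TheRealEnum>(buf[2]);
    m.v2 = static_cast<uint16_t>(buf[3] | (buf[4] << 8));
    return m;
}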
Put your enumerated type inside of a union with a 32-bit number:
union
{
Enumerated val;
uint32_t valAsUint32;
};
This makes the embedded side expand the field to 32 bits. It should work as long as both platforms are little-endian and the structs are zero-filled initially. Note that this changes the wire format, though.
If by "simple cast operation" you mean something that's expressed in the source code, rather than something that's necessarily zero-copy, then you can write two versions of the struct -- one with enums, one with uint8_ts, and a constructor for one from the other that copies it element-by-element to repack it. Then you can use an ordinary type-cast in the rest of the code. Since the data sizes are fundamentally different (unless you use the C++11 features mentioned in another answer), you can't do this without copying things to repack them.
However, if you don't mind some small changes to the struct definition on the application side, there are a couple of options that don't involve dealing with bare uint8_t values. You could use aaronps's answer of a class that is the size of a uint8_t (assuming that's possible with your compiler) and implicitly converts to and from an enum. Alternately, you could store the values as uint8_ts and write some accessor methods for your enum values that take the uint8_t data in the struct and convert it to an enum before returning it.
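The accessor approach mentioned last could look roughly like this (a sketch; the field and type names are made up for illustration, and the packing still needs the compiler-specific attributes shown in the other answer):
#include <stdint.h>
enum Color { RED = 1, GREEN, BLUE };
struct WirePixel
{
    uint16_t x;
    uint8_t  color_raw;   // stored as the byte that arrives on the wire
    uint16_t y;
    // Accessors convert at the boundary, so the rest of the
    // application works with the real enumerated type.
    Color color() const { return static_cast<Color>(color_raw); }
    void  set_color(Color c) { color_raw = static_cast<uint8_t>(c); }
};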
How can we split a std::string and a null terminated character array into two halves such that both halves have the same length?
Please suggest an efficient method. You may assume that the length of the original string/array is always an even number.
By efficient I mean using fewer bytes in both cases; something using loops and a buffer is not what I am looking for.
std::string s = "string_split_example";
std::string half = s.substr(0, s.length()/2);
std::string otherHalf = s.substr(s.length()/2);
cout << s.length() << " : " << s << endl;
cout << half.length() << " : " << half << endl;
cout << otherHalf.length() << " : " << otherHalf << endl;
Output:
20 : string_split_example
10 : string_spl
10 : it_example
Online Demo : http://www.ideone.com/fmYrO
You've already received a C++ answer, but here's a C answer:
size_t len = strlen(strA);
char *strB = malloc(len/2 + 1);          /* room for the second half plus '\0' */
strncpy(strB, strA + len/2, len/2 + 1);  /* the second half is exactly len/2 chars plus the terminator */
strA[len/2] = '\0';                      /* truncate strA to the first half */
Obviously, this uses malloc() to allocate memory for the second string, which you will have to free() at some point.