C++ define preprocessor

I am learning C++ and right now we are covering the preprocessor. I am trying to solve a question from a quiz that has confused me a bit (or a lot). I tried to work it out on my own before running the program, and my output was:
System started...
Data at 2 is: 27 28 29 30
Data at 1 is: 23 24 25 26
The data is: 19
I checked the program in Xcode to see if my output was right, but the correct output is the following:
System started...
Data at 1 is: 0 0 0 19
Data at 0 is: 7 0 0 0
The data is: 19 0 0 0
This is the code...
#include <iostream>

namespace test{

#define COMPILE_FAST
#define PRINT_SPLIT(v) std::cout << (int)*((char*)(v)) << ' ' << \
    (int)*((char*)(v) + 1) << ' ' << (int)*((char*)(v) + 2) << ' ' << \
    (int)*((char*)(v) + 3) << std::endl

    typedef unsigned long long uint;

    namespace er{
        typedef unsigned int uint;
    }

    void debug(void* data, int size = 0){
        if(size == 0){
            std::cout << "The data is: ";
            PRINT_SPLIT(data);
        } else {
            while(size--){
                std::cout << "Data at " << size << " is: ";
                char* a = (char*)data;
                PRINT_SPLIT((a + (4+size)));
            }
        }
    }
}// End of Test namespace...

int main(){
    test::uint a = 19;
    test::er::uint b[] = {256,7};
    std::cout << "System started..." << std::endl;
    test::debug(b,2);
    test::debug(&a);
    std::cout << "Test complete";
    return 0;
}
My big doubt, or what I actually don't understand, is what's going on in this preprocessor macro, because clearly what I did is totally wrong:
#define PRINT_SPLIT(v) std::cout << (int)*((char*)(v)) << ' ' << \
(int)*((char*)(v) + 1) << ' ' << (int)*((char*)(v) +2) << ' ' << \
(int)*((char*)(v) + 3) << std::endl
If someone could be so kind as to give me a brief explanation, I would really appreciate it.

The macro prints the values (as ints) of 4 consecutive bytes. It allows you to see how a 4-byte int is laid out in memory.
Memory contents, by byte, look like this (base10):
0x22abf0: 0 1 0 0 7 0 0 0
0x22abf8: 19 0 0 0 0 0 0 0
0 1 0 0 is 256, i.e. b[0]
7 0 0 0 is 7, i.e. b[1]
19 0 0 0 0 0 0 0 is 19, i.e. a
sizeof(a) is different from sizeof(b[0]) because there are two different typedefs for uint: test::uint (unsigned long long) and test::er::uint (unsigned int).
The address of a is greater than the address of b[] even though b is declared after a, because the stack grows downwards in memory.
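To make the byte layout and the size difference easy to verify, here is a small standalone sketch (not from the original question) that uses plain unsigned long long and unsigned int in place of the two uint typedefs; the exact bytes shown assume a little-endian machine:
#include <iostream>

int main() {
    unsigned long long a = 19;    // what test::uint resolves to: 8 bytes here
    unsigned int b[] = {256, 7};  // what test::er::uint resolves to: 4 bytes each

    std::cout << sizeof(a) << " " << sizeof(b[0]) << std::endl;  // typically: 8 4

    // Dump b byte by byte: little-endian machines store 256 as 0 1 0 0
    // and 7 as 7 0 0 0, which is exactly what PRINT_SPLIT shows.
    const unsigned char* p = (const unsigned char*)b;
    for (unsigned i = 0; i < sizeof(b); ++i)
        std::cout << (int)p[i] << " ";
    std::cout << std::endl;  // typically: 0 1 0 0 7 0 0 0
    return 0;
}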
Finally, I would say the output reflects a defective program, because the output would more reasonably be:
System started...
Data at 1 is: 7 0 0 0
Data at 0 is: 0 1 0 0
The data is: 19 0 0 0
To get that output the program needs to be changed as follows:
while(size--){
    std::cout << "Data at " << size << " is: ";
    int* a = (int*)data;
    PRINT_SPLIT((a + size));
}

Related

How can I convert a std::string to std::vector?

I have some code in which I need to serialize a vector into bytes, then send it to a server. Later on, the server replies with bytes and I need to convert them back into a vector.
I have managed to serialize into bytes okay, but converting back into a vector is getting the wrong values:
#include <iostream>
#include <string>
#include <vector>

int main()
{
    std::vector<double> v = {1, 2, 3};
    std::cout << "Original vector: " << std::endl;
    for (auto i : v) {
        std::cout << i << " ";
    }
    std::cout << std::endl;

    std::string str((char *)v.data(), sizeof(v[0]) * v.size());
    std::cout << "Vector memory as string: " << std::endl << str << std::endl;

    std::cout << "Convert the string back to vector: " << std::endl;
    auto rV = std::vector<double>(&str[0], &str[str.size()]);
    for (auto i : rV) {
        std::cout << i << " ";
    }
    std::cout << std::endl;
    return 0;
}
This outputs:
Original vector:
1 2 3
Vector memory as string:
�?##
Convert the string back to vector:
0 0 0 0 0 0 -16 63 0 0 0 0 0 0 0 64 0 0 0 0 0 0 8 64
What is going wrong with my conversion from a string to a vector, and how can I fix it?
Here is a link to run the code.
Like this:
std::vector<double>((double*)str.data(), (double*)(str.data() + str.size()));
Basically the same as your code, but I've added some casts. In your version the chars get converted directly into doubles (as if you had written rV[0] = str[0] etc) and the vector is sizeof(double) times too big.
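For completeness, here is a minimal sketch of the full round trip with those casts in place (same idea as the program in the question, with only the final conversion changed):
#include <iostream>
#include <string>
#include <vector>

int main()
{
    std::vector<double> v = {1, 2, 3};

    // Serialize: copy the vector's raw bytes into a string.
    std::string str((char *)v.data(), sizeof(v[0]) * v.size());

    // Deserialize: reinterpret the string's bytes as doubles again.
    // The casts make the iterator range step in units of double, not char.
    auto rV = std::vector<double>((double *)str.data(),
                                  (double *)(str.data() + str.size()));

    for (auto i : rV) {
        std::cout << i << " ";  // prints: 1 2 3
    }
    std::cout << std::endl;
    return 0;
}
Keep in mind that this only round-trips safely within the same program and architecture, since it depends on the in-memory representation of double.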

How to get torch::Tensor shape

If we << a torch::Tensor
#include <torch/script.h>

int main()
{
    torch::Tensor input_torch = torch::zeros({2, 3, 4});
    std::cout << input_torch << std::endl;
    return 0;
}
we see
(1,.,.) =
0 0 0 0
0 0 0 0
0 0 0 0
(2,.,.) =
0 0 0 0
0 0 0 0
0 0 0 0
[ CPUFloatType{2,3,4} ]
How do I get the tensor shape (that 2, 3, 4)? I searched https://pytorch.org/cppdocs/api/classat_1_1_tensor.html?highlight=tensor for an API call but couldn't find one, and I also searched for the operator<< overload code and couldn't find it.
What works for me is:
#include <torch/script.h>
#include <cassert>

int main()
{
    torch::Tensor input_torch = torch::zeros({2, 3, 4});
    std::cout << "dim 0: " << input_torch.sizes()[0] << std::endl;
    std::cout << "dim 1: " << input_torch.sizes()[1] << std::endl;
    std::cout << "dim 2: " << input_torch.sizes()[2] << std::endl;
    assert(input_torch.sizes()[0] == 2);
    assert(input_torch.sizes()[1] == 3);
    assert(input_torch.sizes()[2] == 4);
    return 0;
}
Platform:
libtorch 1.11.0
CUDA 11.3
You can use the torch::sizes() method:
IntArrayRef sizes()
It's the equivalent of shape in Python. Furthermore, you can access the size at a given axis (dimension) by invoking torch::size(dim). Both functions are on the API page you linked.
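A minimal sketch combining the two calls (assuming the same libtorch setup as in the question; the exact print format of sizes() may differ by version):
#include <torch/script.h>
#include <iostream>

int main()
{
    torch::Tensor t = torch::zeros({2, 3, 4});

    // sizes() returns all dimensions at once as an IntArrayRef.
    std::cout << t.sizes() << std::endl;  // e.g. [2, 3, 4]

    // size(dim) returns a single dimension; dim() gives the number of dimensions.
    std::cout << t.size(0) << " " << t.size(1) << " " << t.size(2)
              << " (rank " << t.dim() << ")" << std::endl;  // 2 3 4 (rank 3)
    return 0;
}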
Well, I have been using torch::_shape_as_tensor(tensor), which gives you another tensor object.

Locally compiled c++ code is improperly looping

The following never terminates on my system.
#include <iostream>
using namespace std;

int main(){
    int solutions[1000][4] = {};
    for(int a=0; 3*a<=1000; a++){
        for(int b=0; 5*b<=1000; b++){
            for(int c=0; 7*c<=1000; c++){
                cout << "enter" << "\t" << a << "\t" << b << "\t" << c << endl;
                if (3*a+5*b+7*c > 1000) {break;}
                solutions[3*a+5*b+7*c][0] = a;
                solutions[3*a+5*b+7*c][1] = b;
                solutions[3*a+5*b+7*c][2] = c;
                solutions[3*a+5*b+7*c][3] = 1;
                cout << "exit" << "\t" << a << "\t" << b << "\t" << c << endl << endl;
            }
        }
    }
}
I'm completely stumped, so I decided to print a log of variable changes. It makes it to 4 iterations of b, and then when c hits 140 it loops back to 0. The log looks like this:
...
enter 0 4 137
exit 0 4 137
enter 0 4 138
exit 0 4 138
enter 0 4 139
exit 0 4 139
enter 0 4 140
exit 0 4 0
enter 0 4 1
exit 0 4 1
enter 0 4 2
exit 0 4 2
enter 0 4 3
exit 0 4 3
...
I compiled this using g++ B.cpp -o B.exe, and then just ran the executable. The exact code (with logging commented out) terminates properly online at http://cpp.sh/. My compiler version is g++ (i686-posix-dwarf-rev0, Built by MinGW-W64 project) 5.3.0. What could be going wrong here?
When a = 0, b = 4, c = 140, 3*a+5*b+7*c becomes 1000 and a write to the out-of-bounds element solutions[1000] happens. It seems this out-of-bounds write happened to clobber the loop counter.
Allocate one more element to avoid this out-of-bounds write.
int solutions[1001][4] = {};
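A sketch of the corrected program with that change (same loops as in the question, with the sum pulled into a named variable for readability):
#include <iostream>
using namespace std;

int main(){
    // 1001 rows, so a total of exactly 1000 (e.g. a=0, b=4, c=140)
    // still indexes a valid element.
    int solutions[1001][4] = {};
    for(int a=0; 3*a<=1000; a++){
        for(int b=0; 5*b<=1000; b++){
            for(int c=0; 7*c<=1000; c++){
                int total = 3*a + 5*b + 7*c;
                if (total > 1000) {break;}
                solutions[total][0] = a;
                solutions[total][1] = b;
                solutions[total][2] = c;
                solutions[total][3] = 1;
            }
        }
    }
    cout << "done" << endl;  // now terminates
}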

Why is my fortran routine passing an incorrect value to my C++ function?

I have a Fortran routine calling a C++ function in an external static library. I have an integer:
integer (c_int) :: processor_wall_point_count
That is simply being passed to a function:
print*, processor_wall_point_count ! gives 112
call fci_wmm_allocate_domain(processor_wall_point_count, wall_model_ray_point_count)
The C++ function:
void fci_wmm_allocate_domain(int* _processor_wall_point_count, int* _ray_point_count)
{
    std::cout << *_processor_wall_point_count << std::endl; // gives 70
}
The main code has an MPI environment, and when run on 10 processors with
print*, process_id, processor_wall_point_count !process id, variable
call MPI_Barrier()
call fci_wmm_allocate_domain(processor_wall_point_count, wall_model_ray_point_count)
and in C++:
void fci_wmm_allocate_domain(int* _processor_wall_point_count, int* _ray_point_count)
{
    std::cout << process_id << ", " << *_processor_wall_point_count << std::endl;
    MPI_Barrier(MPI_COMM_WORLD);
}
I get the following:
8 32
9 0
0 16
2 48
6 0
1 0
3 16
5 0
7 0
4 0
2, 48
8, 32
0, 10
3, 16
5, 0
9, 0
6, 0
1, 0
7, 0
4, 0
I.e., all the values are passed correctly except for processor 0. I have used the C bindings before without (too many) problems. What is going on here?
EDIT: here is the Fortran interface:
interface
    subroutine fci_wmm_allocate_domain(point_count_F, ray_points_F) bind (c)
        use iso_c_binding
        integer (c_int), intent(in) :: point_count_F, ray_points_F
    end subroutine fci_wmm_allocate_domain
end interface
ISSUE:
I'm not sure how I missed this, but I had the following function being called upstream, only on processor 0:
void print_node_info(void)
{
    if (is_node_root_process && verbose)
    {
        std::cout << "[I] Node " << node_name << " (0x";
        std::cout << std::hex << processor_node_id;
        std::cout << ") has root process " << current_node_root_process_id << std::endl;
    }
}
Changing to
void print_node_info(void)
{
    if (is_node_root_process && verbose)
    {
        std::cout << "[I] Node " << node_name << " (0x";
        std::cout << std::hex << processor_node_id << std::dec;
        std::cout << ") has root process " << current_node_root_process_id << std::endl;
    }
}
fixed the issue.
It is interesting that 112 = 0x70 and 16 = 0x10. Could it be a forgotten C++ stream manipulator (std::hex) somewhere?
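For illustration, here is a minimal sketch of why a forgotten std::hex matters: the manipulator is "sticky", so it keeps affecting every integer written to the stream until it is reset with std::dec (and 112 is 0x70, which matches the mismatch seen on processor 0):
#include <iostream>

int main()
{
    int point_count = 112;

    std::cout << std::hex << 255 << std::endl;           // some earlier debug print: ff

    // The stream is still in hex mode here, so 112 shows up as 70.
    std::cout << point_count << std::endl;               // prints: 70

    // Resetting the base restores the expected output.
    std::cout << std::dec << point_count << std::endl;   // prints: 112
    return 0;
}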

Trouble tracking recursive function

I have this code which outputs: 10 5 16 8 4 2 11
However, I don't have any clue where the 11 is coming from, since when tracing I get the following:
H(10)
H(5)
1+H(16) //does this result in 17?
H(8)
H(4)
H(2)
H(1) -> returns 0
Moreover, what happens to the 1 in 1+H(16)?
Thus, shouldn't my output of the n values be: 10 5 17 8 4 2 1?
#include <iostream>
using namespace std;

int H ( int n ) {
    cout << " " << n << " ";
    if ( n == 1 ) return 0;
    if ( n%2 != 0 ) return 1 + H ( 3*n + 1 );
    else return H ( n/2 );
}

int main() {
    // for ( int i=0; ++i<=20; )
    //     cout << H(i) << endl;
    cout << H(10) << endl;
}
At the end of the recursion the function prints 1, then the stack unwinds and main prints the returned value 1 (0 is returned at the bottom of the recursion, and only the call from H(5) adds one to the result). That final 1 from main lands right after the 1 printed inside H(1), which is what reads as 11.
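If it helps to see where the extra 1 comes from, here is the same function instrumented to print what each call returns (purely for tracing; the logic is unchanged):
#include <iostream>
using namespace std;

int H ( int n ) {
    cout << " " << n << " ";
    if ( n == 1 ) { cout << "[H(1) returns 0]" << endl; return 0; }
    int r = ( n%2 != 0 ) ? 1 + H ( 3*n + 1 ) : H ( n/2 );
    cout << "[H(" << n << ") returns " << r << "]" << endl;
    return r;
}

int main() {
    // H(16), H(8), H(4), H(2) all return 0, so H(5) returns 1 + 0 = 1 and
    // H(10) returns 1. In the original program, main prints that 1 right
    // after the 1 printed inside H(1), which is what reads as "11".
    cout << H(10) << endl;
}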