How to get torch::Tensor shape in C++

If we print a torch::Tensor with operator<<:
#include <torch/script.h>
#include <iostream>

int main()
{
    torch::Tensor input_torch = torch::zeros({2, 3, 4});
    std::cout << input_torch << std::endl;
    return 0;
}
we see
(1,.,.) =
0 0 0 0
0 0 0 0
0 0 0 0
(2,.,.) =
0 0 0 0
0 0 0 0
0 0 0 0
[ CPUFloatType{2,3,4} ]
How do I get the tensor shape (that 2,3,4)? I searched https://pytorch.org/cppdocs/api/classat_1_1_tensor.html?highlight=tensor for an API call but couldn't find one. I also searched for the operator<< overload code and couldn't find that either.

What works for me is:
#include <torch/script.h>
#include <iostream>
#include <cassert>

int main()
{
    torch::Tensor input_torch = torch::zeros({2, 3, 4});
    std::cout << "dim 0: " << input_torch.sizes()[0] << std::endl;
    std::cout << "dim 1: " << input_torch.sizes()[1] << std::endl;
    std::cout << "dim 2: " << input_torch.sizes()[2] << std::endl;
    assert(input_torch.sizes()[0] == 2);
    assert(input_torch.sizes()[1] == 3);
    assert(input_torch.sizes()[2] == 4);
    return 0;
}
Platform:
libtorch 1.11.0
CUDA 11.3

You can use the torch::Tensor::sizes() method:
IntArrayRef sizes()
It's the equivalent of shape in Python. Furthermore, you can access a specific size at a given axis (dimension) by invoking Tensor::size(dim). Both functions are on the API page you linked.
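For illustration, a minimal sketch (assuming libtorch is set up as in the question) showing both calls:

#include <torch/script.h>
#include <iostream>

int main()
{
    torch::Tensor t = torch::zeros({2, 3, 4});
    std::cout << t.sizes() << std::endl;  // prints the whole shape: [2, 3, 4]
    std::cout << t.size(0) << std::endl;  // size along dimension 0: 2
    std::cout << t.dim() << std::endl;    // number of dimensions: 3
    return 0;
}

sizes() returns an IntArrayRef you can index or iterate, so there is no need to query each dimension separately as in the workaround above.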

Well, I have been using torch::_shape_as_tensor(tensor), which gives you the shape as another tensor object.
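A quick sketch of that approach (the leading underscore marks it as an internal ATen helper, so the plain sizes() API above is usually preferable):

#include <torch/script.h>
#include <iostream>

int main()
{
    torch::Tensor t = torch::zeros({2, 3, 4});
    // Returns the shape as a 1-D int64 tensor, e.g. [2, 3, 4]
    torch::Tensor shape = torch::_shape_as_tensor(t);
    std::cout << shape << std::endl;
    return 0;
}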

Related

How can I convert a std::string to std::vector?

I have some code in which I need to serialize a vector into bytes, then send it to a server. Later on, the server replies with bytes and I need to deserialize it back into a vector.
I have managed to serialize into bytes okay, but converting back into a vector gives the wrong values:
#include <iostream>
#include <string>
#include <vector>

int main()
{
    std::vector<double> v = {1, 2, 3};
    std::cout << "Original vector: " << std::endl;
    for (auto i : v) {
        std::cout << i << " ";
    }
    std::cout << std::endl;
    std::string str((char *)v.data(), sizeof(v[0]) * v.size());
    std::cout << "Vector memory as string: " << std::endl << str << std::endl;
    std::cout << "Convert the string back to vector: " << std::endl;
    auto rV = std::vector<double>(&str[0], &str[str.size()]);
    for (auto i : rV) {
        std::cout << i << " ";
    }
    std::cout << std::endl;
    return 0;
}
This outputs:
Original vector:
1 2 3
Vector memory as string:
�?##
Convert the string back to vector:
0 0 0 0 0 0 -16 63 0 0 0 0 0 0 0 64 0 0 0 0 0 0 8 64
What is going wrong with my conversion from a string to a vector, and how can I fix it?
Like this:
std::vector<double>((double*)str.data(), (double*)(str.data() + str.size()));
Basically the same as your code, but I've added some casts. In your version each char gets converted directly into a double (as if you had written rV[0] = str[0], etc.), so the vector ends up sizeof(double) times too big.
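If you'd rather avoid the pointer casts entirely, a std::memcpy-based variant (a sketch, not part of the answer above) sizes the vector first and copies the raw bytes into it:

#include <cstring>
#include <iostream>
#include <string>
#include <vector>

int main()
{
    std::vector<double> v = {1, 2, 3};
    // Serialize: copy the vector's bytes into a string.
    std::string str(reinterpret_cast<const char*>(v.data()), sizeof(v[0]) * v.size());
    // Deserialize: create a vector of the right element count, then copy the bytes back.
    std::vector<double> rV(str.size() / sizeof(double));
    std::memcpy(rV.data(), str.data(), str.size());
    for (double d : rV) {
        std::cout << d << " ";  // prints: 1 2 3
    }
    std::cout << std::endl;
    return 0;
}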

Why is my Fortran routine passing an incorrect value to my C++ function?

I have a Fortran routine calling a C++ function in an external static library. I have an integer:
integer (c_int) :: processor_wall_point_count
That is simply being passed to a function:
print*, processor_wall_point_count ! gives 112
call fci_wmm_allocate_domain(processor_wall_point_count, wall_model_ray_point_count)
The C++ function:
void fci_wmm_allocate_domain(int* _processor_wall_point_count, int* _ray_point_count)
{
    std::cout << *_processor_wall_point_count << std::endl; // gives 70
}
The main code has an MPI environment, and when run on 10 processors with
print*, process_id, processor_wall_point_count !process id, variable
call MPI_Barrier()
call fci_wmm_allocate_domain(processor_wall_point_count, wall_model_ray_point_count)
and in C++:
void fci_wmm_allocate_domain(int* _processor_wall_point_count, int* _ray_point_count)
{
    std::cout << process_id << ", " << *_processor_wall_point_count << std::endl;
    MPI_Barrier(MPI_COMM_WORLD);
}
I get the following:
8 32
9 0
0 16
2 48
6 0
1 0
3 16
5 0
7 0
4 0
2, 48
8, 32
0, 10
3, 16
5, 0
9, 0
6, 0
1, 0
7, 0
4, 0
I.e., all the values are passed correctly except for processor 0. I have used the C bindings before without (too many) problems. What is going on here?
EDIT: here is the Fortran interface:
interface
    subroutine fci_wmm_allocate_domain(point_count_F, ray_points_F) bind (c)
        use iso_c_binding
        integer (c_int), intent(in) :: point_count_F, ray_points_F
    end subroutine fci_wmm_allocate_domain
end interface
ISSUE:
I'm not sure how I missed this, but I had the following function being called upstream only on processor 0:
void print_node_info(void)
{
    if (is_node_root_process && verbose)
    {
        std::cout << "[I] Node " << node_name << " (0x";
        std::cout << std::hex << processor_node_id;
        std::cout << ") has root process " << current_node_root_process_id << std::endl;
    }
}
Changing to
void print_node_info(void)
{
    if (is_node_root_process && verbose)
    {
        std::cout << "[I] Node " << node_name << " (0x";
        std::cout << std::hex << processor_node_id << std::dec;
        std::cout << ") has root process " << current_node_root_process_id << std::endl;
    }
}
fixed the issue.
It is interesting that 112 = 0x70 and 16 = 0x10. Could it be a forgotten C++ stream manipulator (std::hex) somewhere?
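For reference, a minimal sketch demonstrating why: std::hex is "sticky", so it stays in effect on the stream for every later integer until std::dec (or another base manipulator) resets it:

#include <iostream>

int main()
{
    std::cout << std::hex << 112 << std::endl;  // prints 70 (hex)
    std::cout << 112 << std::endl;              // still prints 70: the manipulator persists
    std::cout << std::dec << 112 << std::endl;  // prints 112 after resetting to decimal
    return 0;
}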

Why doesn't this out-of-bounds access segfault?

I was testing some code for a class that wraps a 2-dimensional array of structs.
WrapperClass x;
SomeStruct try1 = x.at(0, 0);
SomeStruct try2 = x.at('a', 1);
SomeStruct array[] = {try1, try2};
// There were originally 3 of these test variables above, but I forgot to
// change the loop upper bound when I deleted one
for (int i = 0; i < 3; i++) {
    // I added this line after noticing the non-error
    std::cout << &array[i] << '\n';
    std::cout << array[i].property1 << '\n';
    std::cout << array[i].property2 << '\n';
    std::cout << array[i].property3 << '\n';
    std::cout << "-\n";
}
return 0;
This outputs:
0x7ffdadface08
0
0
0
-
0x7ffdadface14
0
0
0
-
0x7ffdadface20
0
0
0
Why doesn't this code segfault with an "access out of bounds" error or something? I only created 2 structs in the array; why is there suddenly a third one that I can freely and safely access?
Because it's undefined behavior, and anything can happen, including no immediate error. Reading one element past the end of a small stack array usually lands on other valid stack memory, so there is nothing for the hardware to trap. I suggest you use containers such as std::vector, which are bounds-checked by good debug implementations (and std::vector::at() always checks).
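For example, a small sketch of the bounds-checked alternative: std::vector::at() throws std::out_of_range instead of silently reading past the end:

#include <iostream>
#include <stdexcept>
#include <vector>

int main()
{
    std::vector<int> v = {1, 2};
    try {
        std::cout << v.at(2) << '\n';  // index 2 is out of bounds for size 2
    } catch (const std::out_of_range& e) {
        std::cout << "caught: " << e.what() << '\n';
    }
    return 0;
}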

C++ #define preprocessor

I am learning C++ and right now we are covering preprocessors, but I am trying to solve a question from a quiz which has confused me a bit (or a lot). I tried to work it out on my own before running the program, and my output was:
System started...
Data at 2 is: 27 28 29 30
Data at 1 is: 23 24 25 26
The data is: 19
I checked the program in Xcode to see if my output was right, but the correct output is the following:
System started...
Data at 1 is: 0 0 0 19
Data at 0 is: 7 0 0 0
The data is: 19 0 0 0
This is the code...
#include <iostream>

namespace test{

#define COMPILE_FAST
#define PRINT_SPLIT(v) std::cout << (int)*((char*)(v)) << ' ' << \
    (int)*((char*)(v) + 1) << ' ' << (int)*((char*)(v) + 2) << ' ' << \
    (int)*((char*)(v) + 3) << std::endl

    typedef unsigned long long uint;

    namespace er{
        typedef unsigned int uint;
    }

    void debug(void* data, int size = 0){
        if(size == 0){
            std::cout << "The data is: ";
            PRINT_SPLIT(data);
        } else {
            while(size--){
                std::cout << "Data at " << size << " is: ";
                char* a = (char*)data;
                PRINT_SPLIT((a + (4 + size)));
            }
        }
    }
} // End of test namespace...

int main(){
    test::uint a = 19;
    test::er::uint b[] = {256, 7};
    std::cout << "System started..." << std::endl;
    test::debug(b, 2);
    test::debug(&a);
    std::cout << "Test complete";
    return 0;
}
What I really don't understand is what's going on in this preprocessor macro, because clearly what I worked out is totally wrong:
#define PRINT_SPLIT(v) std::cout << (int)*((char*)(v)) << ' ' << \
    (int)*((char*)(v) + 1) << ' ' << (int)*((char*)(v) + 2) << ' ' << \
    (int)*((char*)(v) + 3) << std::endl
If someone would be so kind as to give me a brief explanation, I would greatly appreciate it.
The macro prints the values (as ints) of 4 consecutive bytes. It lets you see how a 4-byte int is laid out in memory.
Memory contents, by byte, look like this (base10):
0x22abf0: 0 1 0 0 7 0 0 0
0x22abf8: 19 0 0 0 0 0 0 0
0 1 0 0 is 256, i.e. b[0]
7 0 0 0 is 7, i.e b[1]
19 0 0 0 0 0 0 0 is 19, i.e. a
The sizeof(a) is different from the sizeof(b[0]) because there are 2 different typedefs for uint: namely, test::uint and test::er::uint.
The address of a is greater than the address of b[] even though b is declared after a, because the stack grows downwards in memory.
Finally, I would say the output reflects a defective program, because the output would more reasonably be:
System started...
Data at 1 is: 7 0 0 0
Data at 0 is: 0 1 0 0
The data is: 19 0 0 0
To get that output the program needs to be changed as follows:
while(size--){
    std::cout << "Data at " << size << " is: ";
    int* a = (int*)data;
    PRINT_SPLIT((a + size));
}
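As a side note, the same byte-by-byte view can be had without the macro; here is a minimal sketch (assuming a little-endian machine, as in the quiz output):

#include <cstdint>
#include <iostream>

int main()
{
    std::uint32_t x = 256;  // 0x00000100
    const unsigned char* p = reinterpret_cast<const unsigned char*>(&x);
    for (int i = 0; i < 4; ++i) {
        std::cout << static_cast<int>(p[i]) << ' ';  // prints: 0 1 0 0 on little-endian
    }
    std::cout << std::endl;
    return 0;
}

The least significant byte comes first, which is why 256 shows up as 0 1 0 0.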

Values not written to vector

I'm trying to read pairs of values from a file in the constructor of an object.
The file looks like this:
4
1 1
2 2
3 3
4 4
The first number is number of pairs to read.
On some of the lines the values seem to have been correctly written into the vector; on the next they are gone. I am totally confused.
inline
BaseInterpolator::BaseInterpolator(std::string data_file_name)
{
    std::ifstream in_file(data_file_name);
    if (!in_file) {
        std::cerr << "Can't open input file " << data_file_name << std::endl;
        exit(EXIT_FAILURE);
    }

    size_t n;
    in_file >> n;
    xs_.reserve(n);
    ys_.reserve(n);

    size_t i = 0;
    while (in_file >> xs_[i] >> ys_[i])
    {
        // this line prints correct values i.e. 1 1, 2 2, 3 3, 4 4
        std::cout << xs_[i] << " " << ys_[i] << std::endl;
        // this line prints xs_.size() = 0
        std::cout << "xs_.size() = " << xs_.size() << std::endl;

        if (i + 1 < n)
            i += 1;
        else
            break;

        // this line prints 0 0, 0 0, 0 0
        std::cout << xs_[i] << " " << ys_[i] << std::endl;
    }

    // this line prints correct values i.e. 4 4
    std::cout << xs_[i] << " " << ys_[i] << std::endl;
    // this line prints xs_.size() = 0
    std::cout << "xs_.size() = " << xs_.size() << std::endl;
}
The class is defined thus:
class BaseInterpolator
{
public:
    ~BaseInterpolator();
    BaseInterpolator();
    BaseInterpolator(std::vector<double> &xs, std::vector<double> &ys);
    BaseInterpolator(std::string data_file_name);

    virtual int interpolate(std::vector<double> &x, std::vector<double> &fx) = 0;
    virtual int interpolate(std::string input_file_name,
                            std::string output_file_name) = 0;

protected:
    std::vector<double> xs_;
    std::vector<double> ys_;
};
You're experiencing undefined behaviour. It seems like it's half working, but that's twice as bad as not working at all.
The problem is this:
xs_.reserve(n);
ys_.reserve(n);
You are only reserving capacity, not creating elements.
Replace it with:
xs_.resize(n);
ys_.resize(n);
Now xs_[i] with i < n is actually valid.
If in doubt, use xs_.at(i) instead of xs_[i]. It performs an additional boundary check which saves you the trouble of debugging without knowing where to start.
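A minimal sketch of the difference (a standalone example, not the questioner's class):

#include <iostream>
#include <vector>

int main()
{
    std::vector<double> a, b;
    a.reserve(4);  // allocates storage; size stays 0, so a[0] is undefined behaviour
    b.resize(4);   // creates 4 value-initialized elements; b[0] through b[3] are valid
    std::cout << "a.size() = " << a.size()
              << ", a.capacity() = " << a.capacity() << std::endl;  // 0, at least 4
    std::cout << "b.size() = " << b.size()
              << ", b.capacity() = " << b.capacity() << std::endl;  // 4, at least 4
    return 0;
}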
You're using reserve(), which increases capacity (storage space), but does not increase the size of the vector (i.e. it does not add any objects into it). You should use resize() instead. This will take care of size() being 0.
You're printing the xs_[i] and ys_[i] after you increment i. It's natural those will be 0 (or perhaps a random value) - you haven't initialised them yet.
vector::reserve reserves space for further operations but doesn't change the size of the vector; you should use vector::resize instead.