Getting PCL Header when decoding with OctreePointCloudCompression - c++

I'm using a simple encode and decode application for sending point cloud data as a stream using TCP. My issue can actually be reproduced just using the code from the following link:
https://pcl.readthedocs.io/en/latest/compression.html
Before encoding, I check the input with:
std::cout << "Input time (us) = " << cloud->header.stamp << std::endl;
After the decode portion, I add:
std::cout << "Output time (us) = " << output->header.stamp << std::endl;
Instead of using OpenNI for the incoming point cloud, I am using an Ouster tof635 lidar sensor and placing the points into a point cloud pointer to be used in the callback. I have no issues with this part.
I get a valid integer value for the cloud in the callback, but the output time after decoding is always zero. My suspicion is that the decode only copies the actual point cloud data from the stream and does not copy the header data at all.
My question is:
"Is there a function already existing in PCL that provides a way to get the header from the encoded stream (if the header is encoded at all), or will I likely need to write my own deserializing algorithm to pull the time stamp from the header of the encoded point cloud?"
I don't actually have an issue with the code I have written; I am really looking for insight into how to use the PCL OctreePointCloudCompression class. I see that OctreePointCloudCompression has protected read- and write-frame-header members, which would lead me to believe they should be capturing the headers. Is it because "cloudOut" is a new point cloud and only the point data is copied to it?
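If the decoder really does restore only the point data, one workaround is to serialize the timestamp yourself alongside the compressed blob. Below is a minimal sketch of that idea, assuming a pcl::PointXYZ cloud and that sender and receiver agree on this framing; the function names are made up for illustration and the TCP plumbing is omitted:

#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/compression/octree_pointcloud_compression.h>
#include <cstdint>
#include <sstream>

using CloudT = pcl::PointCloud<pcl::PointXYZ>;

// Sender side: write the header stamp in front of the compressed data.
void encodeWithStamp(const CloudT::ConstPtr &cloud,
                     pcl::io::OctreePointCloudCompression<pcl::PointXYZ> &encoder,
                     std::stringstream &stream)
{
    std::uint64_t stamp = cloud->header.stamp;
    stream.write(reinterpret_cast<const char*>(&stamp), sizeof(stamp));
    encoder.encodePointCloud(cloud, stream);
}

// Receiver side: read the stamp back, decode, then restore it on the output cloud.
void decodeWithStamp(std::stringstream &stream,
                     pcl::io::OctreePointCloudCompression<pcl::PointXYZ> &decoder,
                     CloudT::Ptr &output)
{
    std::uint64_t stamp = 0;
    stream.read(reinterpret_cast<char*>(&stamp), sizeof(stamp));
    decoder.decodePointCloud(stream, output);
    output->header.stamp = stamp;  // the decoder does not set this for us
}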

Related

Cannot read .jpg binary data, buffer only has 4 bytes of data

My question is almost exactly the same as this one, which is unanswered. I am trying to read the binary data of a .jpg to send as an HTTP response on a simple web server using C++. The code for reading the data is below.
FILE *f = fopen(file.c_str(), "rb");
if (f) {
    fseek(f, 0, SEEK_END);
    long length = ftell(f);
    fseek(f, 0, SEEK_SET);
    char *buffer = (char*)malloc(length + 1);
    if (buffer) {
        size_t b = fread(buffer, 1, length, f);
        std::cout << "bytes read: " << b << std::endl;
        buffer[length] = '\0';  // only safe inside this check; note strlen() still won't give the data length
    }
    fclose(f);
    return buffer;  // may be NULL if malloc failed
}
return NULL;
When the request for the image is made and this code runs, fread() returns 25253 bytes read, which seems correct. However, when I perform strlen(buffer) I get only 4. Of course, this causes an error in the browser when the image tries to display. I have also tried manually setting the HTTP Content-Length to 25253, but I then receive a curl error 18, indicating the transfer ended early (as only 4 bytes were sent).
As the other poster mentioned in their question, the 5th byte of the image (and I assume most .jpg images) is 0x00, but I am unsure if this has an effect on saving to the buffer.
I have verified the .jpg images I am loading are in the directory, valid, and display properly when opened normally. I have also tried 2 different methods of loading the binary data, and both also give only 4 bytes, so I am really at a loss. Any help is much appreciated.
When the request for the image is made and this code runs, fread() returns 25253 bytes being read, which seems correct. However, when I perform strlen(buffer) I get only 4.
Well, there is your problem: you read binary data, not text. In binary data, special characters like newline or the null character do not indicate the structure of a text; they are simply numbers.
strlen() counts characters up to the first '\0' (simply the value 0). A binary file like a JPEG usually contains plenty of zero bytes, and because of the binary header structure there happens to be one at position 5, so strlen() stops at the first zero it finds and returns 4.
You also seem confused about sending this "text-interpreted" JPEG over HTTP. Of course it will complain: you cannot simply send binary data as text in HTTP. You either have to encode it (base64 is very popular) or send the raw bytes with the correct Content-Length header, and in both cases you have to tell the HTTP client/server the type by setting the proper MIME (Content-Type) header.
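A sketch of the usual fix, assuming the goal is to hand both the bytes and their length to the HTTP response code: read the file into a std::vector<char> and use its size() instead of strlen(). The readFile helper below is hypothetical, not part of any library:

#include <fstream>
#include <iterator>
#include <string>
#include <vector>

// Hypothetical helper: returns the raw bytes; the length is data.size(),
// so embedded zero bytes no longer matter.
std::vector<char> readFile(const std::string &path)
{
    std::ifstream in(path, std::ios::binary);
    return std::vector<char>((std::istreambuf_iterator<char>(in)),
                             std::istreambuf_iterator<char>());
}

// Usage sketch:
//   std::vector<char> body = readFile("image.jpg");
//   body.size() goes into the Content-Length header, "image/jpeg" into Content-Type,
//   and body.data() is sent as-is over the socket.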

Trying to find a way to LOG Graphical data in OpenCV/BOOST

To begin with: I am working on Image Processing using OpenCV C++. After loading a Mat image in a C++ program, I plotted a graph of the image using GNUPLOT.
Now the requirement is to log the graphical data of the Mat image.
To do this, I created a Boost C++ logger by including all the Boost libraries. Boost is an excellent library for testing and for logging as well, but the problem with its Log is that it can only log text messages. Correct me if I'm wrong.
Below is my code for plotting a graph using Gnuplot in OpenCV:
try
{
    Gnuplot g1("lines");
    std::vector<double> rowVector;
    std::vector<double> rowVectorExp;
    for (int i = 0; i < 50; i++)
    {
        rowVector.push_back((double)i);
        rowVectorExp.push_back((double)exp((float)i / 10.0));
    }
    cout << "*** user-defined lists of doubles" << endl;
    g1 << "set term png";
    g1 << "set output \"test.png\"";
    // type of plot pattern
    g1.set_grid().set_style("lines");
    g1.plot_xy(rowVector, rowVectorExp, "user-defined points 2d");
    waitKey(0);
}
catch (GnuplotException ge)
{
    cout << ge.what() << endl;
}
cout << endl << "*** end of gnuplot example" << endl;
Here is my Boost.Log code:
namespace logging = boost::log;

void PlainGetEdgeVector::init()
{
    logging::add_file_log("sample%3N.log");
}

BOOST_LOG_TRIVIAL(info) << "This is my first Log line";
The good news is that my Boost logger successfully logs the text message. It would be great if it could log my graphical data as well.
Any suggestions? If anyone knows how to implement this using Boost I would be very grateful, and if there are alternatives it would be good to know about them as well.
The solution to your problem greatly depends on the nature of the data and on how you want to use the logged data.
1. Re-consider converting binary data to text
For debugging purposes it is often more convenient to convert your binary data to text. Even with large amounts of data this approach can be useful because there are generally many more tools for text processing than for working with arbitrary binary data. For instance, you could compare two logs from different runs of your application with conventional merge/compare tools to see the difference. Text logs are also easier to filter with tools like grep or awk, which are readily available, as opposed to binary data for which you will likely have to write a parser.
There are many ways to convert binary data to text. The most direct approach is to use the dump manipulator, which efficiently produces a textual view of raw binary data. It suits graphical data as well, because such data tends to be large and is often easy enough to compare in its text representation (e.g. when a color sample fits in a byte).
std::vector< std::uint8_t > image;
// Outputs hex dump of the image
BOOST_LOG_TRIVIAL(info) << logging::dump(image.data(), image.size());
A more structured way to output binary data is to use other libraries, such as iterator_range from Boost.Range. This can be useful if your graphical data is composed of something more complex than raw bytes.
std::vector< double > image;
// Outputs all elements of the image vector
BOOST_LOG_TRIVIAL(info) << boost::make_iterator_range(image);
You can also write your own manipulator that will format the data the way you want, e.g. split the output by rows.
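For instance, a minimal sketch of such a manipulator (this is not part of Boost.Log; the ImageRows name and the fixed row width are assumptions for illustration) is just a small type with an operator<< that the log record stream picks up:

#include <cstddef>
#include <cstdint>
#include <ostream>
#include <vector>
#include <boost/log/trivial.hpp>

// Hypothetical manipulator: prints a byte image row by row.
struct ImageRows
{
    const std::vector<std::uint8_t> &data;
    std::size_t width;  // pixels per row, assumed known by the caller
};

inline std::ostream& operator<<(std::ostream &os, ImageRows const &img)
{
    for (std::size_t i = 0; i < img.data.size(); ++i)
    {
        os << static_cast<unsigned>(img.data[i])
           << ((i + 1) % img.width == 0 ? '\n' : ' ');
    }
    return os;
}

// Usage sketch:
//   BOOST_LOG_TRIVIAL(info) << ImageRows{ image, 640 };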
2. For binary data use attributes and a custom sink backend
If you intend to process the logged data with a more specialized piece of software, like an image viewer or editor, you might want to save the data in binary form. This can be done with Boost.Log, but it will require more effort because the sinks provided by the library are text-oriented and you cannot save binary data into a text file as is. You will have to write a sink backend that writes binary data in the format you want (e.g. if you plan to use an image editor, you might want to write files in a format supported by that editor). There is a tutorial here which shows the interface you have to implement and a sample implementation. The important bit is the consume function of the backend, which will receive a log record view with your data.
namespace sinks = boost::log::sinks;  // in addition to the earlier "namespace logging = boost::log;"

typedef boost::iterator_range< const double* > image_data;
BOOST_LOG_ATTRIBUTE_KEYWORD(a_image, "Image", image_data)

class image_writer_backend :
    public sinks::basic_sink_backend< sinks::synchronized_feeding >
{
public:
    void consume(logging::record_view const& rec)
    {
        // Extract the image data from the log record
        if (auto image = rec[a_image])
        {
            image_data const& im = image.get();
            // Write the image data to a file
        }
    }
};
In order to pass your image binary data to your sink you will need to attach it to the log record as an attribute. There are multiple ways to do that, but assuming you don't intend to filter log records based on the image, the easiest way to do this is to use the add_value manipulator.
std::vector< double > image;
BOOST_LOG_TRIVIAL(info) << logging::add_value(a_image, image) << "Catch my image";
Caveat: In order to avoid copying the potentially large image data, we're passing a lightweight iterator_range as the attribute value. This will only work with synchronous logging because the image vector needs to stay alive while the log record is being processed. For async logging you will have to pass the image by value or use reference counting.
If you do want to apply filters to the image data then you can use scoped attributes or add the attribute to a logger.
Note that by adding your new sink for writing binary data you do not preclude also writing textual logs with other sinks, so the "Catch my image" message can still be processed by a text sink. By using other attributes, like log record counters, you can associate log records in different files produced by different sinks.
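For completeness, a sketch of how such a backend might be registered with the logging core; this is standard Boost.Log sink setup, with a synchronous frontend to match the synchronized_feeding backend above:

#include <boost/log/core.hpp>
#include <boost/log/sinks/sync_frontend.hpp>
#include <boost/smart_ptr/make_shared.hpp>

void init_image_sink()
{
    typedef sinks::synchronous_sink< image_writer_backend > sink_t;
    boost::shared_ptr< sink_t > sink = boost::make_shared< sink_t >();
    logging::core::get()->add_sink(sink);
}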

Writing to a .csv file with C++?

TL;DR I am trying to take a stream of data and make it write to a .csv file. Everything is worked out except the writing part, which I think is simply due to me not referencing the .csv file correctly. But I'm a newbie to this stuff, and can't figure out how to correctly reference it, so I need help.
Hello, and a big thank you in advance to anyone that can help me out with this! Some advance info, my IDE is Xcode, using C++, and I'm using the Myo armband from Thalmic Labs as a device to collect data. There is a program (link for those interested enough to look at it) that is supposed to stream the EMG, accelerometer, gyroscope, and orientation values into a .csv file. I am so close to getting the app to work, but my lack of programming experience has finally caught up to me, and I am stuck on something rather simple. I know that the app can stream the data, as I have been able to make it print the EMG values in the debugging area. I can also get the app to open a .csv file, using this code:
const char *path= "/Users/username/folder/filename";
std::ofstream file(path);
std::string data("data to write to file");
file << data;
But no data ends up being streamed/printed into that file after I end the program. The only thing that I can think might be causing this is that the print function is not correctly referencing this file pathway. I would assume that to be a straightforward thing, but like I said, I am inexperienced, and do not know exactly how to address this. I am not sure what other information is necessary, so I'll just provide everything that I imagine might be helpful.
This is the function structure that is supposed to open the files: (Note: The app is intended to open the file in the same directory as itself)
void openFiles() {
    time_t timestamp = std::time(0);

    // Open file for EMG log
    if (emgFile.is_open())
    {
        emgFile.close();
    }
    std::ostringstream emgFileString;
    emgFileString << "emg-" << timestamp << ".csv";
    emgFile.open(emgFileString.str(), std::ios::out);
    emgFile << "timestamp,emg1,emg2,emg3,emg4,emg5,emg6,emg7,emg8" << std::endl;
This is the helper to print accelerometer and gyroscope data (There doesn't appear to be anything like this to print EMG data, but I know it does, so... Watevs):
void printVector(std::ofstream &path, uint64_t timestamp, const myo::Vector3< float > &vector)
{
    path << timestamp
         << ',' << vector.x()
         << ',' << vector.y()
         << ',' << vector.z()
         << std::endl;
}
And this is the function structure that utilizes the helper:
void onAccelerometerData(myo::Myo *myo, uint64_t timestamp, const myo::Vector3< float > &accel)
{
    printVector(accelerometerFile, timestamp, accel);
}
I spoke with a staff member at Thalmic Labs (the guy who made the app actually) and he said it sounded like, unless the app was just totally broken, I was potentially just having problems with the permissions on my computer. There are multiple users on this computer, so that may very well be the case, though I certainly hope not, and I'd still like to try and figure it out one more time before throwing in the towel. Again, thanks to anyone who can be of assistance! :)
My imagination is failing me. Have you tried writing to or reading from ostringstream or istringstream objects? That might be informative. Here's a line that's correct:
std::ofstream outputFile( strOutputFilename.c_str(), std::ios::app );
Note that C++ doesn't have any native support for streaming .csv data, though; you may have to do those conversions yourself. :( Things may also work better if you escape the path separators (on Windows that means doubled "\\" backslashes) ...
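A minimal sketch of the usual sanity check, assuming the problem is the path or permissions: open the file, verify it actually opened, and write one CSV row. The file name and values below are placeholders:

#include <fstream>
#include <iostream>

int main()
{
    std::ofstream out("emg-test.csv", std::ios::out | std::ios::app);
    if (!out.is_open())
    {
        // A silent failure here is exactly what a bad path or missing permissions looks like.
        std::cerr << "could not open emg-test.csv for writing" << std::endl;
        return 1;
    }
    out << "timestamp,emg1,emg2" << '\n';
    out << 12345 << ',' << 0.1 << ',' << 0.2 << std::endl;  // std::endl also flushes to disk
    return 0;
}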

vtkImageData to DcmDataset

I hold a volume image in a vtkImageData and need to convert it to DcmDataset (DCMTK). I know that I need to set general DICOM tags like patient data to the data set. That's not the problem.
Especially I'm interested in putting the pixel data to DcmDataset. Does anybody know an example or can explain how to do that?
Thanks in advance
Quoting from the DCMTK FAQ:
Is there a tool that converts common graphic formats like PGM/PPM, PNG, TIFF, JPEG or BMP to DICOM?
No, unfortunately, there is no such tool in DCMTK. Currently, you have to write your own little program for that purpose.
The following code snippet from the toolkit's documentation could be a starting point:
char uid[100];
DcmFileFormat fileformat;
DcmDataset *dataset = fileformat.getDataset();
dataset->putAndInsertString(DCM_SOPClassUID, UID_SecondaryCaptureImageStorage);
dataset->putAndInsertString(DCM_SOPInstanceUID, dcmGenerateUniqueIdentifier(uid, SITE_INSTANCE_UID_ROOT));
dataset->putAndInsertString(DCM_PatientsName, "Doe^John");
/* ... */
dataset->putAndInsertUint8Array(DCM_PixelData, pixelData, pixelLength);
OFCondition status = fileformat.saveFile("test.dcm", EXS_LittleEndianExplicit);
if (status.bad())
cerr << "Error: cannot write DICOM file (" << status.text() << ")" << endl;
The current snapshot of the DCMTK (> version 3.5.4) contains a new command line tool "img2dcm" that allows for converting JPEG images to certain DICOM image SOP classes.
I would perhaps look first at the source code for img2dcm (documented here) to see the general process and then post back with any specific questions. IMHO, DCMTK is very powerful but extremely difficult to understand.
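To address the pixel-data part specifically, here is a rough sketch of pulling the voxels out of vtkImageData and inserting them as DCM_PixelData. It assumes an unsigned 8-bit, single-component volume; the handful of image attributes set below is illustrative only, and a real data set needs more (photometric interpretation, number of frames for a multi-slice volume, and so on):

#include <vtkImageData.h>
#include <dcmtk/dcmdata/dctk.h>

void fillPixelData(vtkImageData *image, DcmDataset *dataset)
{
    int dims[3];
    image->GetDimensions(dims);  // x, y, z extents

    // Assumes the image was allocated as VTK_UNSIGNED_CHAR with one scalar component.
    const Uint8 *pixels = static_cast<const Uint8*>(image->GetScalarPointer());
    const unsigned long pixelCount =
        static_cast<unsigned long>(dims[0]) * dims[1] * dims[2];

    dataset->putAndInsertUint16(DCM_Columns, static_cast<Uint16>(dims[0]));
    dataset->putAndInsertUint16(DCM_Rows, static_cast<Uint16>(dims[1]));
    dataset->putAndInsertUint16(DCM_SamplesPerPixel, 1);
    dataset->putAndInsertUint16(DCM_BitsAllocated, 8);
    dataset->putAndInsertUint16(DCM_BitsStored, 8);
    dataset->putAndInsertUint16(DCM_HighBit, 7);
    dataset->putAndInsertUint16(DCM_PixelRepresentation, 0);

    dataset->putAndInsertUint8Array(DCM_PixelData, pixels, pixelCount);
}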

Comparing streams

I'm looking into generalizing my data sources in my C++ application by using streams. However, my code also uses a resource manager that functions in a manner similar to a factory, except its primary purpose is to ensure that the same resource doesn't get loaded twice into memory.
myown::ifstream data("image.jpg");
std::ifstream data2("image2.jpeg");
ResourcePtr<Image> img1 = manager.acquire(data);
ResourcePtr<Image> img2 = manager.acquire(data);
cout << img1 == img2; // True
ResourcePtr<Image> img3 = manager.acquire(data2);
cout << img1 == img3; // False
For it to do this, it obviously has to do some checks. Is there a reasonable way (readable and efficient) to implement this, if the resource manager has data streams as input?
You cannot "compare" data streams. Streams are not containers; they are flows of data.
BTW, cout << a == b is (cout << a) == b; I think you meant cout << (a==b).
The level of abstraction where the identity of the data lives is well above your streams. Think about what your stream would do with that information if it knew it: it could not act upon it; to the stream it is just a bunch of data. In terms of the interface, a stream doesn't necessarily even have an end. It would violate the principle of least surprise, at least for me, if you tried to tie identity to a stream at that level.
That sounds like a reasonable abstraction for your ResourcePtr, though. You could hash the data when you load it into ResourcePtr, but a key on the file path is probably just as good.
Like Tomalak said, you can't compare streams. You'll have to wrap them in some class which associates an ID with them, possibly based on the absolute path if they are all associated with files on the file system.
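A minimal sketch of that idea, assuming the resources all come from files and the manager can therefore key its cache on a canonical path (the class and member names here are made up for illustration):

#include <fstream>
#include <map>
#include <memory>
#include <string>

struct Image { /* decoded pixel data would live here */ };

// Hypothetical wrapper: a stream plus the identity it was opened from.
struct IdentifiedStream
{
    std::string id;        // e.g. an absolute path, or a content hash
    std::ifstream stream;

    explicit IdentifiedStream(const std::string &path)
        : id(path), stream(path, std::ios::binary) {}
};

class ResourceManager
{
public:
    // Returns the same Image instance for the same identity.
    std::shared_ptr<Image> acquire(IdentifiedStream &source)
    {
        auto it = cache_.find(source.id);
        if (it != cache_.end())
            return it->second;

        auto img = std::make_shared<Image>();  // real code would decode source.stream here
        cache_[source.id] = img;
        return img;
    }

private:
    std::map<std::string, std::shared_ptr<Image>> cache_;
};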