To begin with: I am working on image processing using OpenCV in C++. After loading a Mat image in a C++ program, I plotted a graph of the image using gnuplot.
Now, the requirement is to log the graphical data of the Mat image.
To do this, I created a Boost C++ logger by including the Boost libraries. Boost is an excellent library for testing and for logging data as well, but the problem with its log is that it seems to log only text messages. Correct me if I'm wrong.
Below is my CODE for plotting graph using GNUPlot in OpenCV:
try
{
    Gnuplot g1("lines");

    std::vector<double> rowVector;
    std::vector<double> rowVectorExp;
    for (int i = 0; i < 50; i++)
    {
        rowVector.push_back((double)i);
        rowVectorExp.push_back((double)exp((float)i / 10.0));
    }

    cout << "*** user-defined lists of doubles" << endl;
    g1 << "set term png";
    g1 << "set output \"test.png\"";
    // type of plot pattern
    g1.set_grid().set_style("lines");
    g1.plot_xy(rowVector, rowVectorExp, "user-defined points 2d");
    waitKey(0);
}
catch (GnuplotException ge)
{
    cout << ge.what() << endl;
}
cout << endl << "*** end of gnuplot example" << endl;
Here is my Boost.Log code:
namespace logging = boost::log;

void PlainGetEdgeVector::init()
{
    logging::add_file_log("sample%3N.log");
}

BOOST_LOG_TRIVIAL(info) << "This is my first Log line";
The good news is that my Boost logger successfully logs the text message. It would be great if it could log my graphical data as well.
Any suggestions? If anyone knows how to implement this using Boost, I would be very grateful; if there are any alternatives, that would be good to know as well.
The solution to your problem greatly depends on the nature of the data and how you want to use the logged data.
1. Re-consider converting binary data to text
For debugging purposes it is often more convenient to convert your binary data to text. Even with large amounts of data this approach can be useful because there are generally many more tools for text processing than for working with arbitrary binary data. For instance, you could compare two logs from different runs of your application with conventional merge/compare tools to see the difference. Text logs are also easier to filter with tools like grep or awk, which are readily available, as opposed to binary data for which you will likely have to write a parser.
There are many ways to convert binary data to text. The most direct approach is to use the dump manipulator, which efficiently produces a textual view of raw binary data. It suits graphical data as well because such data tends to be relatively large, and it is often easy enough to compare in text representation (e.g. when a color sample fits in a byte).
std::vector< std::uint8_t > image;
// Outputs a hex dump of the image (requires <boost/log/utility/manipulators/dump.hpp>)
BOOST_LOG_TRIVIAL(info) << logging::dump(image.data(), image.size());
A more structured way to output binary data is to use other libraries, such as iterator_range from Boost.Range. This can be useful if your graphical data is composed of something more complex than raw bytes.
std::vector< double > image;
// Outputs all elements of the image vector
BOOST_LOG_TRIVIAL(info) << boost::make_iterator_range(image);
You can also write your own manipulator that will format the data the way you want, e.g. split the output by rows.
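For example, a minimal sketch of such a hand-rolled helper could look like the following; the row_dump name and the fixed row width are illustrative assumptions, not Boost.Log facilities:

#include <ostream>
#include <cstdint>

// Prints a raw byte buffer row by row, row_width samples per line.
struct row_dump
{
    const std::uint8_t* data;
    std::size_t size;
    std::size_t row_width;
};

inline std::ostream& operator<<(std::ostream& os, row_dump const& d)
{
    for (std::size_t i = 0; i < d.size; ++i)
    {
        os << static_cast<unsigned>(d.data[i]) << ' ';
        if ((i + 1) % d.row_width == 0)
            os << '\n'; // start a new row after row_width samples
    }
    return os;
}

// Usage: BOOST_LOG_TRIVIAL(info) << row_dump{ image.data(), image.size(), 16 };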
2. For binary data use attributes and a custom sink backend
If you intend to process the logged data with a more specialized piece of software, like an image viewer or editor, you might want to save the data in binary form. This can be done with Boost.Log, but it will require more effort because the sinks provided by the library are text-oriented and you cannot save binary data into a text file as is. You will have to write a sink backend that writes binary data in the format you want (e.g. if you plan to use an image editor you might want to write files in a format supported by that editor). There is a tutorial here, which shows the interface you have to implement and a sample implementation. The important bit is the consume function of the backend, which will receive a log record view with your data.
namespace sinks = boost::log::sinks;

typedef boost::iterator_range< const double* > image_data;
BOOST_LOG_ATTRIBUTE_KEYWORD(a_image, "Image", image_data)

class image_writer_backend :
    public sinks::basic_sink_backend< sinks::synchronized_feeding >
{
public:
    void consume(logging::record_view const& rec)
    {
        // Extract the image data from the log record
        if (auto image = rec[a_image])
        {
            image_data const& im = image.get();
            // Write the image data to a file
        }
    }
};
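Once the backend is written, it has to be registered with the logging core through a sink frontend. A minimal sketch (the function name is illustrative; the synchronous frontend matches the synchronized_feeding requirement declared above):

#include <boost/log/core.hpp>
#include <boost/log/sinks/sync_frontend.hpp>
#include <boost/make_shared.hpp>

void init_image_sink()
{
    typedef sinks::synchronous_sink< image_writer_backend > sink_t;
    boost::shared_ptr< sink_t > sink = boost::make_shared< sink_t >();
    logging::core::get()->add_sink(sink);
}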
In order to pass your image binary data to your sink you will need to attach it to the log record as an attribute. There are multiple ways to do that, but assuming you don't intend to filter log records based on the image, the easiest way to do this is to use the add_value manipulator.
std::vector< double > image;
// Wrap the vector in a lightweight iterator_range (see the caveat below)
BOOST_LOG_TRIVIAL(info)
    << logging::add_value(a_image, image_data(image.data(), image.data() + image.size()))
    << "Catch my image";
Caveat: In order to avoid copying the potentially large image data, we're passing a lightweight iterator_range as the attribute value. This will only work with synchronous logging because the image vector needs to stay alive while the log record is being processed. For async logging you will have to pass the image by value or use reference counting.
If you do want to apply filters to the image data then you can use scoped attributes or add the attribute to a logger.
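For instance, a minimal sketch using a scoped thread attribute; the function name is illustrative, and the attribute name matches the a_image keyword declared above so that the same sink and any filters will see it:

#include <boost/log/attributes/scoped_attribute.hpp>
#include <boost/log/attributes/constant.hpp>

void log_with_scoped_attribute(image_data const& im)
{
    // The attribute is visible to filters for all records made in this scope.
    BOOST_LOG_SCOPED_THREAD_ATTR("Image", logging::attributes::constant< image_data >(im));
    BOOST_LOG_TRIVIAL(info) << "Catch my image";
}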
Note that by adding your new sink for writing binary data you do not preclude also writing textual logs with other sinks, so the "Catch my image" message can still be processed by a text sink. By using other attributes, like log record counters, you can associate log records in different files produced by different sinks.
Related
I'm using a simple encode and decode application for sending point cloud data as a stream using TCP. My issue can actually be reproduced just using the code from the following link:
https://pcl.readthedocs.io/en/latest/compression.html
Before encoding, I check the input with:
std::cout << "Input time (us) = " << cloud->header.stamp << std::endl;
After the decode portion, I add:
std::cout << "Output time (us) = " << output->header.stamp << std::endl;
Instead of using openNI for the incoming point cloud, I am using an Ouster tof635 lidar sensor and placing the points into a point cloud pointer to be used in the callback. I have no issues with this part.
I get a valid integer value for the cloud in the callback, but the output time after decoding is always zero. My suspicion is that the decode only copies the actual point cloud data from the stream and does not copy the header data at all.
My question is:
"Is there a function already existing in PCL that provides a way to get the header from the encoded stream (if the header is encoded at all), or will I likely need to write my own deserializing algorithm to pull the time stamp from the header of the encoded point cloud?"
I don't actually have an issue with the code I have written; I am more looking for insight into how to use the PCL OctreePointCloudCompression class. I see that in the OctreePointCloudCompression file there are protected member functions for reading and writing the frame header. This would lead me to believe these should be capturing the headers. Is it because "cloudOut" is a new point cloud and only the point data is copied to it?
I'm trying to pass the content of a binary file from C++ to Node using node-gyp. I have a process that creates a binary file in the .fit format, and I need to pass the content of the file to JS to process it. So my first approach was to extract the content of the file into a string and try to pass it to Node like this.
char c;
std::string content = "";
while (file.get(c)) {
    content += c;
}
I'm using the following code to pass it to Node
v8::Local<v8::ArrayBuffer> ab = v8::ArrayBuffer::New(args.GetIsolate(), (void*)content.data(), content.size());
args.GetReturnValue().Set(ab);
In Node I get an ArrayBuffer, but when I print its content to a file it is different from what the C++ cout shows.
How can I pass the binary data successfully?
Thanks.
Probably the best approach is to write your data to a binary disk file. Write to disk in C++; read from disk in NodeJS.
Very importantly, make sure you specify BINARY MODE.
For example:
myFile.open ("data2.bin", ios::out | ios::binary);
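For instance, a minimal sketch of the C++ side; the helper name and the byte buffer are illustrative:

#include <fstream>
#include <vector>

// Dumps raw bytes to disk in binary mode, so nothing is translated on write.
void write_binary(const std::vector<char>& bytes)
{
    std::ofstream out("data2.bin", std::ios::out | std::ios::binary);
    out.write(bytes.data(), static_cast<std::streamsize>(bytes.size()));
}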
Do not use "strings" (at least not unless you want to uuencode). Use buffers. Here is a good example:
How to read binary files byte by byte in Node.js
var fs = require('fs');

fs.open('file.txt', 'r', function(status, fd) {
    if (status) {
        console.log(status.message);
        return;
    }
    var buffer = new Buffer(100);
    fs.read(fd, buffer, 0, 100, 0, function(err, num) {
        ...
    });
});
You might also find these links helpful:
https://nodejs.org/api/buffer.html
<= Has good examples for specific Node APIs
http://blog.paracode.com/2013/04/24/parsing-binary-data-with-node-dot-js/
<= Good discussion of some of the issues you might face, including "endianness" and "interpreting numbers"
ADDENDUM:
The OP clarified that he's considering using C++ as a NodeJS add-on (not a standalone C++ program).
Consequently, using buffers is definitely an option. Here is a good tutorial:
https://community.risingstack.com/using-buffers-node-js-c-plus-plus/
If you choose to go this route, I would DEFINITELY download the example code and play with it first, before implementing buffers in your own application.
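If you go the add-on route, one way to avoid the lifetime problem in the snippet above (the ArrayBuffer points at memory owned by a temporary string) is to copy the bytes into a Node Buffer. A minimal sketch, where read_fit_file() is a hypothetical helper that reads the file in binary mode:

#include <string>
#include <node.h>
#include <node_buffer.h>

void GetFitFile(const v8::FunctionCallbackInfo<v8::Value>& args)
{
    // read_fit_file() is a hypothetical helper that returns the raw bytes
    // of the .fit file in a std::string (opened with std::ios::binary).
    std::string content = read_fit_file();

    // Buffer::Copy duplicates the bytes, so the returned Buffer stays valid
    // after content goes out of scope.
    v8::MaybeLocal<v8::Object> buf =
        node::Buffer::Copy(args.GetIsolate(), content.data(), content.size());
    args.GetReturnValue().Set(buf.ToLocalChecked());
}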
It depends, but you could for example use Redis. Quoting the Redis documentation:
Values can be strings (including binary data) of every kind, for
instance you can store a jpeg image inside a value. A value can't be
bigger than 512 MB.
If the file is bigger than 512MB, then you can store it in chunks.
But I wouldn't suggest that, since Redis is an in-memory data store.
It's easy to implement in both C++ and Node.js.
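On the C++ side, a minimal sketch with hiredis, using its binary-safe %b format; the key name and connection details are illustrative:

#include <hiredis/hiredis.h>
#include <vector>

// Stores a binary blob under a fixed key; %b is hiredis' binary-safe format
// and takes a pointer followed by a size_t length.
void store_image(const std::vector<unsigned char>& bytes)
{
    redisContext* c = redisConnect("127.0.0.1", 6379);
    if (c == nullptr || c->err)
        return;
    redisReply* reply = static_cast<redisReply*>(
        redisCommand(c, "SET %s %b", "image:1", bytes.data(), bytes.size()));
    if (reply)
        freeReplyObject(reply);
    redisFree(c);
}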
I have a program which performs some computations on images to produce other images. The system works with a small number of images as input, and I would like to know how to make it work for large amounts of input data, like a million or more images. My main concern is how to store the input images and how to store the output (produced) images.
I have a function compute(const std::vector<cv::Mat>&) which makes computations on 124 images at a time (due to GPU memory limitations). Because the algorithm is iterative, it takes a different number of iterations for each image to produce the output image.
Right now, if I provide more than 124 images, the function starts with the first 124 images, and when an image finishes its computation it is swapped with another one. I would like the algorithm to be usable with larger inputs, like a million or more images. The computation function returns one output image for each input image and is implemented like this:
std::vector<cv::Mat> compute(std::vector<cv::Mat>& image_vec) {
    std::vector<cv::Mat> output_images(image_vec.size());
    std::vector<cv::Mat> tmp_images(124);
    size_t processed_images = 0;
    while (processed_images < image_vec.size()) {
        // make computations and update output_images
        // for the images that are currently being processed;
        // remove the images that finished from tmp_images
        // and update the processed_images variable;
        // import new images into tmp_images unless there
        // are no more in the input vector
    }
    return output_images;
}
I am using boost::filesystem to read the images from a folder at the beginning of the program (and OpenCV to read and store each image):
std::vector<cv::Mat> read_images_from_dir(std::string dir_name) {
    std::vector<cv::Mat> image_vec;
    boost::filesystem::path p(dir_name);
    std::vector<boost::filesystem::path> tmp_vec;
    std::copy(boost::filesystem::directory_iterator(p),
              boost::filesystem::directory_iterator(),
              std::back_inserter(tmp_vec));
    std::vector<boost::filesystem::path>::const_iterator it = tmp_vec.begin();
    for (; it != tmp_vec.end(); ++it) {
        if (is_regular_file(*it)) {
            //std::cout << it->string() << std::endl;
            image_vec.push_back(read_image(it->string()));
        }
    }
    return image_vec;
}
And then the main program looks like this:
int main(int argc, char* argv[]) {
    // suppose for this example that argv[1] contains a correct
    // path to a folder which contains images
    std::vector<cv::Mat> input_images = read_images_from_dir(argv[1]);
    std::vector<cv::Mat> output_images = compute(input_images);
    // save the output_images
    return 0;
}
Here you can find the program in an online editor, if you wish.
Any suggestion that clarifies the question is welcome.
Edit: Some of the answers and comments pointed out useful design decisions that I have to make, so that you will be able to answer the question. I would like to mention that:
The images are/will be already stored on disk before the program starts.
The process will be done "offline", without new data coming in, and will run once every few hours (or days), because the parameters of the program will change after it finishes.
I can tolerate not having the fastest possible implementation at first, because I want to make things work first and consider optimizations later.
The code involves a lot of computation, so the I/O does not take much of the time so far.
I expect this code to run on a single machine, but I think it is better NOT to use multithreading in a first version, so that the code is more portable and so that I can integrate it into another program that does not use multithreading; I do not want to add more dependencies.
One implementation that I thought about is reading batches of data (say 5K images) and loading new data after computing their output, as sketched below. But I do not know if there is something far better without too much additional complexity. Of course, any answer is welcome.
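A minimal sketch of that batched variant, reusing compute() and read_image() from above; batch_size, the output directory and the file-naming scheme are assumptions, and "files" would be collected the same way as in read_images_from_dir but without loading the images (needs <algorithm> for std::min):

void process_in_batches(const std::vector<boost::filesystem::path>& files,
                        const std::string& out_dir,
                        std::size_t batch_size = 5000)
{
    for (std::size_t begin = 0; begin < files.size(); begin += batch_size)
    {
        std::size_t end = std::min(begin + batch_size, files.size());

        // Load only the current batch of images.
        std::vector<cv::Mat> batch;
        batch.reserve(end - begin);
        for (std::size_t i = begin; i < end; ++i)
            batch.push_back(read_image(files[i].string()));

        // Process the batch (internally this still feeds the GPU 124 images at a time).
        std::vector<cv::Mat> results = compute(batch);

        // Write the results; the batch is freed before the next one is loaded.
        for (std::size_t i = 0; i < results.size(); ++i)
            cv::imwrite(out_dir + "/" + files[begin + i].filename().string(), results[i]);
    }
}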
I hold a volume image in a vtkImageData and need to convert it to DcmDataset (DCMTK). I know that I need to set general DICOM tags like patient data to the data set. That's not the problem.
Especially I'm interested in putting the pixel data to DcmDataset. Does anybody know an example or can explain how to do that?
Thanks in advance
Quoting from the DCMTK FAQ:
Is there a tool that converts common graphic formats like PGM/PPM,
PNG, TIFF, JPEG or BMP to DICOM?
No, unfortunately, there is no such tool in DCMTK. Currently, you have to write your own little program for that purpose.
The following code snippet from the toolkit's documentation could be a starting point:
char uid[100];
DcmFileFormat fileformat;
DcmDataset *dataset = fileformat.getDataset();
dataset->putAndInsertString(DCM_SOPClassUID, UID_SecondaryCaptureImageStorage);
dataset->putAndInsertString(DCM_SOPInstanceUID, dcmGenerateUniqueIdentifier(uid, SITE_INSTANCE_UID_ROOT));
dataset->putAndInsertString(DCM_PatientsName, "Doe^John");
/* ... */
dataset->putAndInsertUint8Array(DCM_PixelData, pixelData, pixelLength);
OFCondition status = fileformat.saveFile("test.dcm", EXS_LittleEndianExplicit);
if (status.bad())
    cerr << "Error: cannot write DICOM file (" << status.text() << ")" << endl;
The current snapshot of the DCMTK (> version 3.5.4) contains a new
command line tool "img2dcm" that allows for converting JPEG images to
certain DICOM image SOP classes.
I would perhaps look first at the source code for img2dcm (documented here) to see the general process and then post back with any specific questions. IMHO, DCMTK is very powerful but extremely difficult to understand.
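As for the pixel-data part with vtkImageData specifically, here is a minimal sketch assuming a single-component unsigned 8-bit volume; a real converter must also set NumberOfFrames, PhotometricInterpretation, pixel spacing and the remaining mandatory attributes, and match BitsAllocated/PixelRepresentation to the actual scalar type of the volume:

#include <vtkImageData.h>
#include <dcmtk/dcmdata/dctk.h>

// Copies the voxel buffer of an (assumed) single-component 8-bit volume into
// the data set and fills the image-size attributes derived from it.
OFCondition copy_pixels(vtkImageData* volume, DcmDataset* dataset)
{
    int dims[3];
    volume->GetDimensions(dims);

    dataset->putAndInsertUint16(DCM_Columns, static_cast<Uint16>(dims[0]));
    dataset->putAndInsertUint16(DCM_Rows, static_cast<Uint16>(dims[1]));
    dataset->putAndInsertUint16(DCM_SamplesPerPixel, 1);
    dataset->putAndInsertUint16(DCM_BitsAllocated, 8);
    dataset->putAndInsertUint16(DCM_BitsStored, 8);
    dataset->putAndInsertUint16(DCM_HighBit, 7);

    const Uint8* pixels = static_cast<const Uint8*>(volume->GetScalarPointer());
    unsigned long length =
        static_cast<unsigned long>(dims[0]) * dims[1] * dims[2];
    return dataset->putAndInsertUint8Array(DCM_PixelData, pixels, length);
}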
I'm looking into generalizing my data sources in my C++ application by using streams. However, my code also uses a resource manager that functions in a manner similar to a factory, except its primary purpose is to ensure that the same resource doesn't get loaded twice into memory.
myown::ifstream data("image.jpg");
std::ifstream data2("image2.jpeg");
ResourcePtr<Image> img1 = manager.acquire(data);
ResourcePtr<Image> img2 = manager.acquire(data);
cout << img1 == img2; // True
ResourcePtr<Image> img3 = manager.acquire(data2);
cout << img1 == img3; // False
For it to do this, it obviously has to do some checks. Is there a reasonable way (readable and efficient) to implement this, if the resource manager has data streams as input?
You cannot "compare" data streams. Streams are not containers; they are flows of data.
BTW, cout << a == b is (cout << a) == b; I think you meant cout << (a==b).
The level of abstraction at which the identity of the data matters is well above your streams. Think about what your stream would do with that information if it knew it: it could not act upon it, it is just a bunch of data. In terms of the interface, a stream doesn't necessarily even have an end. You would be violating the principle of least surprise, for me, if you tried to tie identity to it at that level.
That sounds like a reasonable abstraction for your ResourcePtr, though. You could hash the data when you load it into ResourcePtr, but a key on the file path is probably just as good.
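A minimal sketch of such a path-keyed cache; ResourcePtr is modelled as std::shared_ptr here, and the loading function is a hypothetical placeholder:

#include <map>
#include <memory>
#include <string>

class Image;
std::shared_ptr<Image> load_image_from_file(const std::string& path); // hypothetical loader

class ResourceManager
{
public:
    std::shared_ptr<Image> acquire(const std::string& path)
    {
        // Reuse the already-loaded resource if it is still alive.
        auto it = cache_.find(path);
        if (it != cache_.end())
        {
            if (std::shared_ptr<Image> existing = it->second.lock())
                return existing;
        }
        std::shared_ptr<Image> fresh = load_image_from_file(path);
        cache_[path] = fresh;
        return fresh;
    }

private:
    std::map<std::string, std::weak_ptr<Image>> cache_;
};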
Like Tomalak said, you can't compare streams. You'll have to wrap them in some class which associates an ID with them, possibly based on the absolute path if they are all associated with files on the file system.