Trying to make a live data grapher with CImg library (C++) - c++

I'm new to CImg. I'm not sure whether there's already a live data plotter in the library, but I thought I'd go ahead and make one myself. If what I'm looking for already exists in the library, please point me to the function. Otherwise, here is my (super inefficient) code, which I hope you can help me with:
#include <iostream>
#include "CImg.h"
#include <ctime>
#include <cmath>
using namespace cimg_library;
int main()
{
    CImg<unsigned char> plot(400, 320, 1, 3, 0);
    CImgDisplay graph(plot, "f(x)");
    clock();
    const unsigned char red[] = {255, 0, 0};
    float* G = new float[plot.width()]; // array holding the values to be displayed on the graph
    while (1) {
        G[0] = ((plot.height() / 4) * sin(clock() / 1000.0)) + plot.height() / 2; // new f(t) value
        for (int i = 1; i <= plot.width() - 1; i++) {
            G[plot.width() - i] = G[plot.width() - i - 1]; // shift all the array values to current address+1
            plot.draw_point(plot.width() - 3 * i, G[i - 1], red, 1).display(graph);
        }
        plot.fill(0);
    }
    return 0;
}
Problems
The grapher traverses right to left very slowly, and I'm not sure how to draw a smooth curve, hence I went with points. How do you make a smooth curve?

There is already something for you in the library, the method CImg<T>::draw_graph(), as (briefly) explained here:
http://cimg.eu/reference/structcimg__library_1_1CImg.html#a2e629aadedc4518001f00333f25bfec8
There are a few examples provided with the library that use this method; see the files examples/tutorial.cpp and examples/plotter1d.cpp.

Related

Application for stitching bmp images together. Help needed

I'm supposed to create some code to stitch together N bmp images found in a folder. At the moment I just want to add the images together side by side; I don't care yet about common regions in them (I'm referring to how panoramic images are made).
I have tried to use some examples online for the different functions that I need, examples which I've partially understood. I'm currently stuck because I can't really figure out what's wrong.
The basis of the bmp.h file is this page:
https://solarianprogrammer.com/2018/11/19/cpp-reading-writing-bmp-images/
I'm attaching my code and a screenshot of the exception VS throws.
main:
#include "bmp.h"
#include <fstream>
#include <iostream>
#include <filesystem>
namespace fs = std::filesystem;
int main() {
    int totalImages = 0;
    int width = 0;
    int height = 0;
    int count = 0;
    std::string path = "imagini";
    // Here I count the total number of images in the directory.
    // I need this to know the width of the composed image that I have to produce.
    for (const auto& entry : fs::directory_iterator(path))
        totalImages++;
    // Here I thought about going through the directory and finding out the width of the images inside it.
    // I haven't managed to think of a better way to do this (which is probably why it doesn't work, I guess).
    // Ideally, I would have taken one image from the directory and multiplied its width
    // by the total number of images in said directory, thus getting the width of the resulting image I need.
    for (auto& p : fs::directory_iterator(path))
    {
        std::string s = p.path().string();
        const char* imageName = s.c_str();
        BMP image(imageName);
        width = width + image.bmp_info_header.width;
        height = image.bmp_info_header.height;
    }
    BMP finalImage(width, height);
    // Finally, I was going to pass over the directory again and, for each image inside it, call
    // the create_data function that I wrote in bmp.h.
    for (auto& p : fs::directory_iterator(path))
    {
        count++;
        std::string s = p.path().string();
        const char* imageName = s.c_str();
        BMP image(imageName);
        // I use count to point out which image the iterator is currently at.
        finalImage.create_data(count, image, totalImages);
    }
    // Here I write the finalImage to a bmp image.
    finalImage.write("textura.bmp");
}
bmp.h (I have only added the part I wrote, the rest of the code is found at the link I've provided above):
// This is where I try to copy the pixel RGBA values from the image passed as a parameter (from its data vector)
// to my resulting image (its data vector).
// The math should be right, I've gone through it with pen & paper, but right now I can't test it because the code doesn't work for other reasons.
void create_data(int count, BMP image, int totalImages)
{
    int w = image.bmp_info_header.width * 4;
    int h = image.bmp_info_header.height;
    int q = 0;
    int channels = image.bmp_info_header.bit_count / 8;
    int startJ = w * channels * (count - 1);
    int finalI = w * channels * totalImages * (h - 1);
    int incrementI = w * channels * totalImages;
    for (int i = 0; i <= finalI; i += incrementI)
        for (int j = i + startJ; j < i + startJ + w * channels; j += 4)
        {
            data[j]     = image.data[q];
            data[j + 1] = image.data[q + 1];
            data[j + 2] = image.data[q + 2];
            data[j + 3] = image.data[q + 3];
            q = q + 4;
        }
}
Error I get: https://imgur.com/7fq9BH4
This is the first time I've posted a question; I've only looked up answers here before. If I haven't provided enough info about my problem, or something I've done is not OK, I apologize.
Also, English is my second language, so I hope I got my points across clearly.
EDIT: Since I forgot to mention it: I would like to do this without using external libraries like OpenCV or ImageMagick.

Subtensor of a Tensorflow tensor (C++)

I have a tensorflow::Tensor batch in C++ with shape [2, 720, 1280, 3] (#images x height x width x #channels).
I want to get another tensor with only the first image, so I would have a tensor of shape [1, 720, 1280, 3]. In other words, I want:
tensorflow::Tensor first = batch[0]
What's the most efficient way to achieve it?
I know how to do this in Python, but the C++ API and documentation are not as good as Python's.
After spending some time trying to implement through copy, I realised that this operation is supported in the API as Slice:
tensorflow::Tensor first = batch.Slice(0, 1);
Note that, as documented, the returned tensor shares the internal buffer with the sliced one, and the alignment of both tensors may be different, if that is relevant to you.
EDIT:
Since I had already done it, here is my attempt at reproducing the same functionality, copy-based. I think it should work (it is pretty similar to what I use in another context).
#include <cassert>
#include <cstring>
#include <tensorflow/core/framework/tensor.h>
#include <tensorflow/core/framework/tensor_shape.h>

template <typename T>
tensorflow::Tensor get_element(const tensorflow::Tensor& data, unsigned int index, bool keepDim = false)
{
    using namespace tensorflow;

    // Check that the requested element type matches the tensor's dtype.
    auto dtype = DataTypeToEnum<T>::v();
    assert(dtype == data.dtype());

    // Build the shape of one element: drop the first dimension,
    // or replace it by 1 if keepDim is set.
    auto dataShape = data.shape();
    TensorShape elementShape;
    if (keepDim)
        elementShape.AddDim(1);
    for (int iDim = 1; iDim < dataShape.dims(); iDim++)
        elementShape.AddDim(dataShape.dim_size(iDim));

    // Copy the index-th slice out of the flat buffer.
    Tensor element(dtype, elementShape);
    const auto elementCount = elementShape.num_elements();
    std::memcpy(element.flat<T>().data(),
                data.flat<T>().data() + elementCount * index,
                elementCount * sizeof(T));
    return element;
}

int main()
{
    tensorflow::Tensor batch = ...;
    tensorflow::Tensor first = get_element<float>(batch, 0);
    return 0;
}
The code can also be changed if you just want to extract the data to, for example, a vector or something else.
This works fine:
#include "tensorflow/core/framework/tensor_slice.h"
Tensor t2 = t1.Slice(0, 1);

How to store a vector field with VTK? C++, VTKWriter

Let's say I have a vector field u, with components ux, uy and uz, defined at (unstructured) positions in space rx, ry and rz.
All I want is to store this vector field in the VTK format, i.e. with the class "vtkwriter" from libvtk, to enable visualization with Paraview.
I think I got the code for incorporating the positions right, but I can't figure out how to incorporate the data:
#include <vtkPoints.h>
#include <vtkPolyDataWriter.h>
#include <vtkSmartPointer.h>

void write_file(double* rx, double* ry, double* rz,
                double* ux, double* uy, double* uz,
                int n, const char* filename)
{
    vtkSmartPointer<vtkPoints> points =
        vtkSmartPointer<vtkPoints>::New();
    points->SetNumberOfPoints(n);
    for (int i = 0; i < n; ++i) {
        points->SetPoint(i, rx[i], ry[i], rz[i]);
    }
    // how to incorporate the vector field u?
    vtkSmartPointer<vtkPolyDataWriter> writer =
        vtkSmartPointer<vtkPolyDataWriter>::New();
    writer->SetFileName(filename);
    // how to tell the writer what to write?
    writer->Write();
}
The first question is: is the general approach correct, i.e. handling the coordinates with vtkPoints?
When searching the internet, I find many results showing how the final file should look.
I could probably generate that format by hand, but that isn't really what I want to do.
On the other hand, I somehow can't make sense of VTK's documentation: whenever I look up one class, it refers to the documentation of some other classes, and those classes' documentation refers back to the first one.
The same holds for the examples.
So far I haven't found one that explains how to handle vector-valued data defined at arbitrary positions, and the other examples are so complicated that I'm completely stuck.
I think the solution somehow involves vtkPolyData, but I can't figure out how to insert the data.
I suspect it needs a vtkDoubleArray, but I haven't found out how to make it vector-valued.
Thanks in advance.
OK, I got it done after enough trial and error.
The coordinates where the vector field is defined should be a vtkPoints, and the data of interest should be a vtkDoubleArray.
The incorporation into the final vtkPolyData object is done via vtkPolyData::GetPointData()->SetVectors(...).
Finally, the cell type needs to be set to vtkVertex:
#include <vtkCellArray.h>
#include <vtkDoubleArray.h>
#include <vtkPointData.h>
#include <vtkPoints.h>
#include <vtkPolyData.h>
#include <vtkPolyDataWriter.h>
#include <vtkSmartPointer.h>
#include <vtkVertex.h>
void VTKWriter::write_file(double* rx, double* ry, double* rz,
                           double* ux, double* uy, double* uz,
                           int n, const char* filename)
{
    vtkSmartPointer<vtkPoints> points =
        vtkSmartPointer<vtkPoints>::New();
    points->SetNumberOfPoints(n);
    vtkSmartPointer<vtkCellArray> vertices =
        vtkSmartPointer<vtkCellArray>::New();
    vertices->SetNumberOfCells(n);
    for (int i = 0; i < n; ++i) {
        points->SetPoint(i, rx[i], ry[i], rz[i]);
        vtkSmartPointer<vtkVertex> vertex =
            vtkSmartPointer<vtkVertex>::New();
        vertex->GetPointIds()->SetId(0, i);
        vertices->InsertNextCell(vertex);
    }
    vtkSmartPointer<vtkDoubleArray> u =
        vtkSmartPointer<vtkDoubleArray>::New();
    u->SetName("u");
    u->SetNumberOfComponents(3);
    u->SetNumberOfTuples(n);
    for (int i = 0; i < n; ++i) {
        u->SetTuple3(i, ux[i], uy[i], uz[i]);
    }
    vtkSmartPointer<vtkPolyData> polydata =
        vtkSmartPointer<vtkPolyData>::New();
    polydata->SetPoints(points);
    polydata->SetVerts(vertices);
    polydata->GetPointData()->SetVectors(u);
    vtkSmartPointer<vtkPolyDataWriter> writer =
        vtkSmartPointer<vtkPolyDataWriter>::New();
    writer->SetFileName(filename);
    writer->SetInputData(polydata);
    writer->Write();
}
The reason I didn't get this at first is that the interaction between points, cells, vertices, point data and polydata isn't easy to grasp when one is new to VTK; the tutorials don't really cover it, and VTK's Doxygen documentation is also of little help at this point.

Point Cloud Library 1.8 - DepthSense Grabber does not seem to provide RGB data for NaN XYZ points

Running the below code with both the Xtion Pro and the DS325 depth cameras gives very different results. The Xtion Pro shows both coloured point cloud and RGB perfectly, whereas the DS325 has many black fuzzy areas in the image, making it unusable for the OpenCV functionality I was intending (after conversion to Mat form).
The problem seems to occur when the XYZ data is captured as NaN. For example, the Xtion Pro shows the full RGB image fine even when pointed out the window (which makes the majority of the XYZ data NaN), whereas doing the same with the DS325 makes almost the whole RGB image show black.
Can someone tell me if this is just an imperfection in the new grabber code? Or is more inherently linked to the differences in mapping for the different hardware?
Running the depthsense viewer application (from the primesense SDK) confirms to me that both depth and full RGB data can be streamed simultaneously, so I'm slightly confused as to why the RGB data seems to be discarded. Any help would be greatly appreciated! Thanks.
Windows, VS2013, PCL 1.8
#include <iostream>
#include <mutex>
#include <boost/thread/mutex.hpp>
#include <boost/shared_ptr.hpp>
#include <boost/format.hpp>
#include <pcl/io/pcd_io.h>
#include <pcl/common/time.h>
#include <pcl/console/print.h>
#include <pcl/console/parse.h>
#include <pcl/io/io_exception.h>
#include <pcl/io/openni_grabber.h>
#include <pcl/io/depth_sense_grabber.h>
#include <pcl/visualization/pcl_visualizer.h>
#include <pcl/visualization/image_viewer.h>
using namespace pcl::console;
typedef pcl::PointCloud<pcl::PointXYZRGBA> PointCloudT;
std::mutex cloud_mutex;
void cloud_cb_(const PointCloudT::ConstPtr& callback_cloud, PointCloudT::Ptr& new_cloud_,
               bool* new_cloud_available_flag)
{
    cloud_mutex.lock();
    *new_cloud_ = *callback_cloud;
    cloud_mutex.unlock();
    *new_cloud_available_flag = true;
}

void PointXYZRGBAtoCharArray(pcl::PointCloud<pcl::PointXYZRGBA>::Ptr point_cloud_ptr, unsigned char* Image)
{
    for (int i = 0; i < point_cloud_ptr->height; i++)
    {
        for (int j = 0; j < point_cloud_ptr->width; j++)
        {
            Image[(i * point_cloud_ptr->width + j) * 3]     = point_cloud_ptr->points.at(i * point_cloud_ptr->width + j).r;
            Image[(i * point_cloud_ptr->width + j) * 3 + 1] = point_cloud_ptr->points.at(i * point_cloud_ptr->width + j).g;
            Image[(i * point_cloud_ptr->width + j) * 3 + 2] = point_cloud_ptr->points.at(i * point_cloud_ptr->width + j).b;
        }
    }
}

int main()
{
    boost::mutex new_cloud_mutex_;
    PointCloudT::Ptr cloud(new PointCloudT);
    bool new_cloud_available_flag = false;
    std::string device_id = "";
    boost::function<void(const typename PointCloudT::ConstPtr&)> f = boost::bind(&cloud_cb_, _1, cloud, &new_cloud_available_flag);
    boost::shared_ptr<pcl::Grabber> grabber;
    try
    {
        grabber.reset(new pcl::OpenNIGrabber);
        cout << "Using OpenNI Device" << endl;
    }
    catch (pcl::IOException& e)
    {
        grabber.reset(new pcl::DepthSenseGrabber);
        cout << "Using DepthSense Device" << endl;
    }
    grabber->registerCallback(f);
    grabber->start();
    // Image Viewer
    pcl::visualization::ImageViewer Imageviewer("Image Viewer");
    unsigned char* Image = new unsigned char[3 * cloud->height * cloud->width];
    Imageviewer.addRGBImage(Image, cloud->width, cloud->height);
    // Point Cloud Viewer
    pcl::visualization::PCLVisualizer viewer("PCL Viewer");
    viewer.setCameraPosition(0, 0, -2, 0, -1, 0, 0);
    for (;;)
    {
        if (new_cloud_available_flag)
        {
            new_cloud_available_flag = false;
            cloud_mutex.lock();
            // Update Image
            Imageviewer.removeLayer("rgb_image");
            PointXYZRGBAtoCharArray(cloud, Image);
            Imageviewer.addRGBImage(Image, cloud->width, cloud->height);
            Imageviewer.spinOnce();
            // Update Point Cloud
            viewer.removeAllPointClouds();
            viewer.addPointCloud<pcl::PointXYZRGBA>(cloud);
            cloud_mutex.unlock();
            viewer.spinOnce();
        }
    }
    grabber->stop();
}
The DepthSense grabber receives two streams from the driver: depth and color. They are merged into a single point cloud with colors, which is then returned to the end user. Due to the fact that the two sensors involved (IR and color camera) have a certain displacement and different resolutions (QVGA and VGA respectively), the mapping between depth and color pixels in the streams is not trivial. In fact, for each depth/color frame the camera driver additionally produces so-called UV-map, which can be used to establish correspondences. Unfortunately, it fails to set UV-coordinates for invalid depth pixels, which makes it impossible to find corresponding RGB values for the NaN points.
I would recommend using the DepthSense SDK directly to access the raw RGB images.

How do I change this variable into an array c++

I am writing code in C++ for a game in which a bucket controlled by the user collects raindrops that all have the same radius. I want to use an array to give each of the 16 raindrops a different size (radius). I have no clue how to change the variable into an array.
I am given a variable:
int radius = randomBetween( MARGIN / 4, MARGIN / 2 );
Here is an example that uses actual C++.
#include <algorithm>
#include <functional>
#include <random>
#include <vector>
std::mt19937 prng(seed); // seed can come from std::random_device; MARGIN is from the asker's code
std::uniform_int_distribution<> dist(MARGIN / 4, MARGIN / 2);
std::vector<int> radii(16);
std::generate(radii.begin(), radii.end(), std::bind(dist, std::ref(prng)));
You're probably going to want to use floats, but basically, if I understand you correctly...
int size_in_elements = 16;
float* a = new float[size_in_elements];
float maxvalue = 100.0f; // the maximum value to assign to each element
for (int i = 0; i < size_in_elements; i++)
{
    a[i] = fmodf((float)rand(), maxvalue);
}
delete[] a; // Don't forget the brackets here... delete[] is used for deleting arrays.
Hope I helped some.