Save the current view of a PCL visualizer as an image - C++

I am new to C++ and to the Point Cloud Library (PCL) (https://pointclouds.org/). At the moment I am able to display a point cloud using pcl::visualization::PCLVisualizer, and I was wondering whether it is possible to save an image of the viewer's current view.
At the moment I just take a screenshot manually of what it looks like. However, since I will be processing many point clouds, I would like a way to convert this viewer view to an image automatically.

Of course I posted this question after researching online; however, I had missed the very easy solution already available in PCL.
You just need to use the function:
void pcl::visualization::PCLVisualizer::saveScreenshot ( const std::string & file )
See the PCLVisualizer API documentation for details.
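For anyone landing here, a minimal sketch of how the call fits together. The file names and the PointXYZ point type are placeholders; note that at least one frame must be rendered before taking the screenshot:

```cpp
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
#include <pcl/visualization/pcl_visualizer.h>

int main ()
{
  // "cloud.pcd" is a placeholder; use whatever cloud you are
  // already feeding to the visualizer.
  pcl::PointCloud<pcl::PointXYZ>::Ptr cloud (new pcl::PointCloud<pcl::PointXYZ>);
  pcl::io::loadPCDFile ("cloud.pcd", *cloud);

  pcl::visualization::PCLVisualizer viewer ("viewer");
  viewer.addPointCloud<pcl::PointXYZ> (cloud, "cloud");

  viewer.spinOnce (100);               // render at least one frame first
  viewer.saveScreenshot ("view.png");  // writes the current render window
  return 0;
}
```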
I hope this will be helpful for someone else in the same situation.

Related

How to convert food-101 dataset into usable format for AWS SageMaker

I'm still very new to the world of machine learning and am looking for some guidance on how to continue a project that I've been working on. Right now I'm trying to feed the Food-101 dataset into the Image Classification algorithm in SageMaker, and later deploy the trained model onto an AWS DeepLens to have food-detection capabilities. Unfortunately, the dataset comes with only the raw image files organized in sub-folders, plus a .h5 file (I'm not sure whether I can feed that file type directly into SageMaker). From what I've gathered, neither of these is a suitable way to feed the dataset into SageMaker. Could anyone point me in the right direction for preparing the dataset properly, i.e. converting it to a .rec file or something else? Apologies if the scope of this question is very broad; I am still a beginner and simply do not know how to proceed, so any help would be fantastic. Thanks!
If you want to use the built-in algorithm for image classification, you can use either the Image format or the RecordIO format; see https://docs.aws.amazon.com/sagemaker/latest/dg/image-classification.html#IC-inputoutput
Image format is straightforward: just build a manifest file with the list of images. This could be an easy solution for you, since you already have images organized in folders.
RecordIO requires that you build files with the im2rec tool; see https://mxnet.incubator.apache.org/versions/master/faq/recordio.html.
Once your data set is ready, you should be able to adapt the sample notebooks available at https://github.com/awslabs/amazon-sagemaker-examples/tree/master/introduction_to_amazon_algorithms
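For the RecordIO route, the .lst file that im2rec consumes is plain text. Assuming the tab-separated index / label / path layout described on the MXNet page above, here is a small sketch of building it (C++ is used for consistency with the rest of this page, and `makeLst` is a hypothetical helper name):

```cpp
#include <sstream>
#include <string>
#include <utility>
#include <vector>

// Build the contents of an im2rec-style .lst file: one line per image,
// "<index>TAB<label>TAB<relative path>". Each pair is (class-label
// index, image path); with Food-101's one-folder-per-class layout,
// the label is just the position of the class folder in a sorted list.
std::string makeLst (const std::vector<std::pair<int, std::string>>& items)
{
    std::ostringstream out;
    for (std::size_t i = 0; i < items.size(); ++i)
        out << i << '\t' << items[i].first << '\t' << items[i].second << '\n';
    return out.str();
}
```

Write the returned string to, say, food101_train.lst, then run im2rec on it to pack the images into a .rec file.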

Can I receive a boundingPoly for LABEL_DETECTION results?

How can this be completed with the Google Vision API, please?
1. Send the image to the Vision API
2. Request: 'features': [{'type': 'LABEL_DETECTION', 'maxResults': 10}]
3. Receive the labels; the one I'm interested in is "clock"
4. Receive the boundingPoly so that I know the exact location of the clock within the image
5. Having received the boundingPoly, I would use it to create a dynamic AR marker to be tracked by the AR library
Currently it doesn't look like the Google Vision API supports a boundingPoly for LABELS, hence the question of whether there is a way to solve this with the Vision API.
Currently Label Detection does not provide this functionality. We are always looking at ways to enhance the API.
After two years, it's the same. I am facing similar challenges and am thinking of opting for other solutions. I think custom solutions like the TensorFlow Object Detection API or Darknet YOLO will do this job very easily.

OpenCV with uEye Cameras

I need to use OpenCV with a uEye Ethernet camera. The problem is that I couldn't find any useful tips or example code.
The source code provided with the installation is heavily tied to MFC, which is not what I want. It's really complicated to get rid of that; it was causing me so many problems (CWnd, Afx, dialogs...).
I would like to read some frames from the camera and record some snapshots.
You can find the whole SDK description here: https://en.ids-imaging.com/manuals-ueye-software.html
Simply make an account and you can access it. The documentation is really good.
I found this document on the internet:
http://master-ivi.univ-lille1.fr/fichiers/Cours/uEye_SDK_manual_enu.pdf
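Since the question asks for frames and snapshots without MFC, here is an untested sketch against the uEye C API, with the result wrapped in an OpenCV cv::Mat. The function names follow the SDK manual linked above; the resolution is a placeholder, so query your camera for the real values:

```cpp
#include <opencv2/opencv.hpp>
#include <ueye.h>   // IDS uEye SDK header

int main ()
{
    HIDS hCam = 0;                          // 0 = first available camera
    if (is_InitCamera (&hCam, NULL) != IS_SUCCESS)
        return 1;

    // Placeholder resolution -- query the sensor (is_GetSensorInfo)
    // for the real values of your camera model.
    int width = 1280, height = 1024, bitsPerPixel = 24;

    char* imageMem = NULL;
    int   memId    = 0;
    is_SetColorMode (hCam, IS_CM_BGR8_PACKED);   // matches OpenCV's BGR order
    is_AllocImageMem (hCam, width, height, bitsPerPixel, &imageMem, &memId);
    is_SetImageMem (hCam, imageMem, memId);

    if (is_FreezeVideo (hCam, IS_WAIT) == IS_SUCCESS)   // grab one frame
    {
        // Wrap the SDK buffer without copying. This assumes no line
        // padding; if the image looks skewed, pass the pitch from
        // is_GetImageMemPitch() as the cv::Mat step argument.
        cv::Mat frame (height, width, CV_8UC3, imageMem);
        cv::imwrite ("snapshot.png", frame);
    }

    is_FreeImageMem (hCam, imageMem, memId);
    is_ExitCamera (hCam);
    return 0;
}
```

For continuous capture, replace is_FreezeVideo with is_CaptureVideo and re-wrap the buffer each frame.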

How to batch download large number of high resolution satellite images from Google Map directly?

I'm helping a professor working on a satellite-image-analysis project. We need to stitch together 800 images covering a square area, each image at 8000x8000 resolution, from Google Maps. It is possible to download them one by one; however, I believe there must be a way to write a script for batch processing.
Here I would like to ask how I can implement this using a shell or Python script, and how I could download the images from a Google Maps URL.
Here is an example of the url:
https://maps.google.com.au/maps/myplaces?ll=-33.071009,149.554911&spn=0.027691,0.066047&ctz=-660&t=k&z=15
However, I'm not able to derive a direct image download link from this.
Update:
Actually, I solved this problem, however due to Google's intention, I would not post the way for doing this.
Have you tried the Google static maps API?
You get 25,000 free requests, but you're limited to 640x640, so you'll need to do ~160 requests at a higher zoom level.
I suggest downloading the images as shown here: Downloading a picture via urllib and python
URL to start with: http://maps.googleapis.com/maps/api/staticmap?center=-33.071009,149.554911&zoom=15&size=640x640&sensor=false&maptype=satellite
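To size the job: covering one 8000x8000 area with 640x640 tiles takes ceil(8000/640) = 13 tiles per side, i.e. 169 requests, matching the "~160" estimate above. A sketch of that arithmetic and of building the per-tile request URL (C++ only for consistency with the rest of this page; `tileUrl` and `tilesPerSide` are hypothetical helper names, shifting the center per tile and actually downloading are left to the caller):

```cpp
#include <iomanip>
#include <sstream>
#include <string>

// Number of tilePx-wide tiles needed per side to cover targetPx pixels
// (ceiling division), e.g. 8000 px / 640 px -> 13 tiles per side.
int tilesPerSide (int targetPx, int tilePx)
{
    return (targetPx + tilePx - 1) / tilePx;
}

// Build one Static Maps request URL. The endpoint and parameters
// mirror the example URL above; add an API key if your usage needs one.
std::string tileUrl (double lat, double lng, int zoom, int sizePx)
{
    std::ostringstream url;
    url << std::fixed << std::setprecision (6)
        << "http://maps.googleapis.com/maps/api/staticmap?center="
        << lat << ',' << lng
        << "&zoom=" << zoom
        << "&size=" << sizePx << 'x' << sizePx
        << "&sensor=false&maptype=satellite";
    return url.str();
}
```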
It's been a long time since I solved the problem; sorry for the delay.
I posted my code to GitHub here; please star or fork as you like :)
The idea is to use a virtual web browser at a very high resolution to load the Google Maps page, then capture the page. The drawback is that there will be Google watermarks scattered over each image; the solution is to oversample the resolution of each image, then use a stitching technique to join them all together.

How can I pass TIFF image data to JUCE (which does not support TIFF)?

I am learning GUI programming using the C++ JUCE library. That library has support for some image file formats (PNG, JPEG), but I want to learn how I can use other file formats, for example TIFF.
After googling, I found libtiff.
My question is: what would be the right approach for displaying this? Do I need to convert the .tiff file into JPEG/PNG first?
But I think this would require double processing.
Can anyone explain whether there is a raw/native/basic image file format, so that I could convert all images into that one type and use it directly?
I did find something in the WinAPI for dealing with images, where they work with the raw image data from the file.
It would be very helpful if someone could let me know the approach for handling image data and displaying it.
Can anyone explain whether there is a raw/native/basic image file format, so that I could convert all images into that one type and use it directly?
There is no "native" image file format, but RGB comes close (especially if you strip the headers to give just a Width×Height×Channels array of pixel values). You probably wouldn't want to use this for storing everything though as your buffers will be very large. Let your libraries handle storage.
It would be very helpful if someone could let me know the approach for handling image data and displaying it.
There is no "the" approach. C++ itself doesn't say anything about images, and there are loads of ways you can go about working with them. Your design will depend on your functional requirements specification and on what libraries you have available.
I am learning GUI programming using the C++ JUCE library. That library has support for some image file formats (PNG, JPEG), but I want to learn how I can use other file formats, for example TIFF.
After googling, I found libtiff.
My question is: what would be the right approach for displaying this? Do I need to convert the .tiff file into JPEG/PNG first?
But I think this would require double processing.
If you mean using libtiff to convert TIFF-format images to formats that JUCE supports, you're right in saying that this introduces an extra initial processing step. However, as far as you've said, it sounds like any possible performance hit through this will be vastly, wildly and hugely outweighed by the benefit of simplicity and clarity. So I'd just do that.
In order to do something like reading *.tiff images and using them in an application built with the JUCE framework, I would suggest creating a new class derived from the base interface ImageFileFormat.
class MyTiffFormat : public ImageFileFormat
{
private:
    // non-copyable
    MyTiffFormat( const MyTiffFormat& );
    MyTiffFormat& operator=( const MyTiffFormat& );

public:
    MyTiffFormat();
    ~MyTiffFormat();

    const String getFormatName();
    bool canUnderstand( InputStream& input );
    Image decodeImage( InputStream& input );
    bool writeImageToStream( const Image& source, OutputStream& dest );
};
Implementing the function "Image decodeImage( InputStream& input )" is the point where you need something like libtiff. In the JUCE source tree you will find the implementations for PNG and the other supported formats in the folder: \juce\src\gui\graphics\imaging
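To make that concrete, here is an untested sketch of what decodeImage() could look like with libtiff's high-level RGBA reader. libtiff wants a file, while JUCE hands us an InputStream, so this version spools the stream to a temporary file first (TIFFClientOpen() with custom read/seek callbacks would avoid the copy); the JUCE calls are from the API generation referenced in this thread, so check them against your version:

```cpp
#include <tiffio.h>   // libtiff

Image MyTiffFormat::decodeImage (InputStream& input)
{
    // Spool the stream to a temp file so libtiff can open it by path.
    File temp (File::createTempFile (".tiff"));
    {
        FileOutputStream out (temp);
        out.writeFromInputStream (input, -1);   // copy the whole stream
    }

    TIFF* tif = TIFFOpen (temp.getFullPathName().toUTF8(), "r");
    if (tif == nullptr)
        return Image();

    uint32_t w = 0, h = 0;
    TIFFGetField (tif, TIFFTAG_IMAGEWIDTH, &w);
    TIFFGetField (tif, TIFFTAG_IMAGELENGTH, &h);

    Image result (Image::ARGB, (int) w, (int) h, true);
    uint32_t* raster = (uint32_t*) _TIFFmalloc (w * h * sizeof (uint32_t));

    // TIFFReadRGBAImageOriented hides strip/tile and colour-space
    // handling and hands back 8-bit RGBA, top-left origin.
    if (raster != nullptr
         && TIFFReadRGBAImageOriented (tif, w, h, raster, ORIENTATION_TOPLEFT, 0))
    {
        for (uint32_t y = 0; y < h; ++y)
            for (uint32_t x = 0; x < w; ++x)
            {
                const uint32_t p = raster[y * w + x];
                result.setPixelAt ((int) x, (int) y,
                                   Colour ((uint8) TIFFGetR (p), (uint8) TIFFGetG (p),
                                           (uint8) TIFFGetB (p), (uint8) TIFFGetA (p)));
            }
    }

    _TIFFfree (raster);
    TIFFClose (tif);
    temp.deleteFile();
    return result;
}
```

setPixelAt is the slow-but-simple route; for large images you would write into the bitmap data directly.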
More information on extending JUCE features can be found in the JUCE user forum.
JUCE works great with PNGs, JPEGs, and GIFs (not animated), and they can be read from a file, or even "compiled in" with the BinaryBuilder.
For example, to load an image compiled into C++ with BinaryBuilder:
someImage = ImageFileFormat::loadFrom (AppResources::image_png, AppResources::image_pngSize);
Check out the Doxygen docs; they are quite helpful. To compile your images with BinaryBuilder, the syntax is:
./BinaryBuilder someFolder otherFolder ClassName