Having problems loading a JPG file using libjpeg

I need to load JPG files in my application. I used libjpeg to save JPGs (from processed raw files) and it works nicely.
Reading them, though, is a different issue. I am getting very weird results: the image is very distorted, split into 12 columns, and mostly grayscale.
I followed the example, and the only modification I made is how the data is put into my buffer (the put_scanline_someplace() function is missing from the example).
Here is my relevant code (I need the data in BGR format):
dest = 0;
while (cinfo.output_scanline < cinfo.output_height)
{
    jpeg_read_scanlines(&cinfo, buffer, 1);
    src = 0;
    for (i = 0; i < cinfo.output_width; i++)
    {
        image_buffer[dest*3+2] = buffer[src*3+0];
        image_buffer[dest*3+1] = buffer[src*3+1];
        image_buffer[dest*3+0] = buffer[src*3+2];
        src++;
        dest++;
    }
}
Is there something wrong with this code?

I found the solution. buffer is a JSAMPARRAY, i.e. a pointer to an array of row pointers (not a flat array), so the row has to be indexed first. The code that works looks like this:
image_buffer[dest*3+2] = buffer[0][src*3+0];
image_buffer[dest*3+1] = buffer[0][src*3+1];
image_buffer[dest*3+0] = buffer[0][src*3+2];
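For anyone landing here later, this is roughly what the whole decompress loop looks like with the fix folded in. A minimal sketch, assuming a 3-component (RGB) JPEG; the function name load_jpeg_bgr and the malloc'd output buffer are mine, not from the original code:

#include <cstdio>
#include <cstdlib>
#include <jpeglib.h>

// Sketch: decode `filename` into a caller-freed BGR buffer.
unsigned char* load_jpeg_bgr(const char* filename, int* width, int* height)
{
    FILE* infile = fopen(filename, "rb");
    if (!infile) return NULL;

    jpeg_decompress_struct cinfo;
    jpeg_error_mgr jerr;
    cinfo.err = jpeg_std_error(&jerr);
    jpeg_create_decompress(&cinfo);
    jpeg_stdio_src(&cinfo, infile);
    jpeg_read_header(&cinfo, TRUE);
    jpeg_start_decompress(&cinfo);

    *width = cinfo.output_width;
    *height = cinfo.output_height;
    int row_stride = cinfo.output_width * cinfo.output_components;

    // buffer is a JSAMPARRAY: an array of row pointers, hence buffer[0][x]
    JSAMPARRAY buffer = (*cinfo.mem->alloc_sarray)
        ((j_common_ptr)&cinfo, JPOOL_IMAGE, row_stride, 1);
    unsigned char* image_buffer =
        (unsigned char*)malloc((size_t)row_stride * cinfo.output_height);

    long dest = 0;
    while (cinfo.output_scanline < cinfo.output_height)
    {
        jpeg_read_scanlines(&cinfo, buffer, 1);
        for (unsigned int src = 0; src < cinfo.output_width; src++, dest++)
        {
            // libjpeg hands back RGB; swap to BGR on the way out
            image_buffer[dest*3+2] = buffer[0][src*3+0];
            image_buffer[dest*3+1] = buffer[0][src*3+1];
            image_buffer[dest*3+0] = buffer[0][src*3+2];
        }
    }

    jpeg_finish_decompress(&cinfo);
    jpeg_destroy_decompress(&cinfo);
    fclose(infile);
    return image_buffer;
}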

Related

Get raw buffer for in-memory dataset in GDAL C++ API

I have generated a GeoTiff dataset in-memory using GDALTranslate() with a /vsimem/ filepath. I need access to the buffer for the actual GeoTiff file to put it in a stream for an external API. My understanding is that this should be possible with VSIGetMemFileBuffer(), however I can't seem to get this to return anything other than nullptr.
My code is essentially as follows:
//^^ GDALDataset* srcDataset created somewhere up here ^^
//psOptions struct has "-b 4" and "-of GTiff" settings.
const char* filep = "/vsimem/foo.tif";
GDALDataset* gtiffData = GDALTranslate(filep, srcDataset, psOptions, nullptr);
vsi_l_offset size = 0;
GByte* buf = VSIGetMemFileBuffer(filep, &size, true); //<-- returns nullptr
gtiffData seems to be a real dataset on inspection: it has all the appropriate properties (number of bands, raster size, etc.). When I provide a real filesystem location to GDALTranslate() rather than the /vsimem/ path, and load the result in QGIS, it renders correctly too.
Looking at the source for VSIGetMemFileBuffer(), this should really only be returning nullptr if the file can't be found, which suggests I'm using it incorrectly. Does anyone know what the correct usage is?
Bonus points: Is there a better way to do this (stream the file out)?
Thanks!
I don't know anything about the C++ API, but in Python the snippet below is what I sometimes use to get the contents of an in-memory file. In my case it's mainly VRTs, but it shouldn't be any different for other formats.
That said, I don't know whether the VSI API translates one-to-one to C++.
from osgeo import gdal

filep = "/vsimem/foo.tif"
# get the file size
stat = gdal.VSIStatL(filep, gdal.VSI_STAT_SIZE_FLAG)
# open the file
vsifile = gdal.VSIFOpenL(filep, 'r')
# read the entire contents
vsimem_content = gdal.VSIFReadL(1, stat.size, vsifile)
# close the handle again
gdal.VSIFCloseL(vsifile)
In the case of a VRT the content would be text, shown with something like print(vsimem_content.decode()). For a tiff it would of course be binary data.
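For what it's worth, the same calls do exist on the C++ side (declared in cpl_vsi.h), so the snippet above should translate fairly directly. A minimal sketch, assuming the /vsimem/ file has already been fully written:

#include "cpl_vsi.h"
#include <vector>

// Sketch: read an entire /vsimem/ file into memory via the VSI API.
std::vector<GByte> ReadVsimemFile(const char* filep)
{
    VSIStatBufL stat;
    if (VSIStatL(filep, &stat) != 0)
        return {};                          // file not found

    VSILFILE* fp = VSIFOpenL(filep, "rb");
    if (!fp)
        return {};

    std::vector<GByte> content(static_cast<size_t>(stat.st_size));
    VSIFReadL(content.data(), 1, content.size(), fp);
    VSIFCloseL(fp);
    return content;
}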
I came back to this after putting in a workaround, and upon swapping things back over it seems to work fine. @mmomtchev suggested looking at the CPL_DEBUG output, which showed nothing unusual (and was silent during the actual VSIGetMemFileBuffer call).
In particular, for other reasons I had to put a GDALWarp call in between calling GDALTranslate and accessing the buffer, and it seems that this is what makes the difference. My guess is that GDALWarp calls VSIFOpenL internally (although I can't find this in the source) and that this does some kind of initialisation for VSIGetMemFileBuffer. Something to try for anyone else who encounters this.
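One more thing worth checking for anyone with the same symptom: the GTiff driver only guarantees everything has been written out once the dataset is closed, so closing (or at least flushing) the dataset returned by GDALTranslate() before grabbing the buffer is a sensible first step. A sketch of that ordering, reusing the names from the question:

GDALDatasetH gtiffData = GDALTranslate(filep, srcDataset, psOptions, nullptr);
GDALClose(gtiffData);  // flush everything to /vsimem/foo.tif

vsi_l_offset size = 0;
// TRUE = unlink and seize: the /vsimem/ file is deleted and the caller
// becomes responsible for freeing the returned buffer with VSIFree()
GByte* buf = VSIGetMemFileBuffer(filep, &size, TRUE);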

How do I load an image (raw bytes) with OpenCV?

I am using Mat input = imread(filename); to read an image, but I'd like to do it from memory instead. The source of the file is an HTTP server. To make it faster, instead of writing the file to disk and then using imread() to read it back, I'd like to skip a step and load it directly from memory. How do I go about doing this?
Updated to add the error:
I tried the following but I'm getting a segmentation fault:
char* do_stuff(char img[])
{
    vector<char> vec(img, img + strlen(img));
    Mat input = imdecode(Mat(vec), 1);
}
See the documentation for imdecode():
http://docs.opencv.org/modules/highgui/doc/reading_and_writing_images_and_video.html#imdecode
I had a similar problem. I needed to decode a JPEG image stream in memory and use the resulting Mat for further analysis.
The documentation on cv::imdecode did not give me enough information to solve the problem.
However, the code here by the OP worked for me. This is how I used it (in C++):
// Here pImageData is an unsigned char* that points to a JPEG-compressed image buffer,
// ImageDataSize is the size of the compressed content in the buffer,
// and the image here is grayscale.
std::vector<unsigned char> ImVec(pImageData, pImageData + ImageDataSize);
cv::Mat ImMat;
ImMat = cv::imdecode(ImVec, 1);
To check, I saved ImMat and was able to open the resulting file in an image viewer:
cv::imwrite("opencvDecodedImage.jpg", ImMat);
I used OpenCV 2.4.10 binaries for VC10 on x86.
I hope this information can help others.
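On the segmentation fault in the question itself: strlen() stops at the first zero byte, and compressed image data is full of zero bytes, so the real buffer size has to be passed in explicitly. A minimal sketch of that fix (the explicit size parameter is the only substantive change):

#include <opencv2/opencv.hpp>
#include <vector>

// Sketch: decode an in-memory image. `size` must be the actual byte count
// of the buffer; strlen() cannot measure binary data.
cv::Mat decode_image(const unsigned char* img, size_t size)
{
    std::vector<unsigned char> vec(img, img + size);
    cv::Mat input = cv::imdecode(vec, cv::IMREAD_COLOR);
    // input.empty() here means the stream was not a complete, supported image
    return input;
}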

Saving output frame as an image file CUDA decoder

I am trying to save the decoded frame back as a BMP image using the code in the CUDA Decoder project.
if (g_bReadback && g_ReadbackSID)
{
    CUresult result = cuMemcpyDtoHAsync(g_bFrameData[active_field], pDecodedFrame[active_field],
                                        (nDecodedPitch * nHeight * 3 / 2), g_ReadbackSID);
    long padded_size = (nWidth * nHeight * 3);
    CString output_file;
    output_file.Format(_T("image/sample_45.BMP"));
    SaveBMP(g_bFrameData[active_field], nWidth, nHeight, padded_size, output_file);
    if (result != CUDA_SUCCESS)
    {
        printf("cuMemcpyDtoHAsync returned %d\n", (int)result);
    }
}
But the saved image comes out garbled (screenshot not reproduced here).
Can anybody help me out here, what am I doing wrong? Thank you.
After investigating further, there were several modifications I made to your approach.
pDecodedFrame is actually in a non-RGB format; I think it is NV12, which I believe is a particular YUV variant.
pDecodedFrame gets converted to an RGB format on the GPU using a particular CUDA kernel.
The target buffer for this conversion will either be a surface provided by OpenGL if g_bUseInterop is specified, or else an ordinary region allocated by the driver API equivalent of cudaMalloc if interop is not specified.
The target buffer mentioned above is pInteropFrame (even in the non-interop case). So, to make an example for you, for simplicity I chose to use only the non-interop case, because it's much easier to grab the RGB buffer (pInteropFrame) in that case.
The method here copies pInteropFrame back to the host after it has been populated with the appropriate RGB image by cudaPostProcessFrame. There is also a routine to save the image as a bitmap file. All of my modifications are delineated with comments that include RMC, so search for that if you want to find all the changes/additions I made.
To use it, drop this file into the cudaDecodeGL project as a replacement for the videoDecodeGL.cpp source file, then rebuild the project. Run the executable normally to display the video. To capture a specific frame, run the executable with the nointerop command-line switch, e.g. cudaDecodeGL nointerop; the video will not display, but the decode operation and frame capture will take place, and the frame will be saved in a framecap.bmp file. If you want to change the specific frame number that is captured, modify the g_FrameCapSelect = 37; variable to some other number and recompile.
Here is the replacement for videoDecodeGL.cpp. I used pastebin because SO has a limit on the number of characters that can be entered in a post body.
Note that my approach is independent of whether readback is specified. I would recommend not using readback for this sequence.
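To see why writing the decoded buffer straight to a BMP produces garbage: NV12 is a planar YUV layout, a full-resolution luma plane followed by a half-height plane of interleaved U/V pairs, so the frame is pitch * height * 3/2 bytes of YUV rather than width * height * 3 bytes of RGB. A rough CPU-side sketch of the layout and a video-range BT.601 conversion (illustrative only; the sample does this on the GPU in cudaPostProcessFrame):

#include <algorithm>
#include <cstdint>

static uint8_t clamp8(int v) { return (uint8_t)std::min(255, std::max(0, v)); }

// Sketch: convert a pitch-aligned NV12 frame to packed RGB on the CPU,
// using the standard video-range BT.601 integer coefficients.
void nv12_to_rgb(const uint8_t* nv12, int pitch, int width, int height, uint8_t* rgb)
{
    const uint8_t* yPlane  = nv12;                            // height rows of luma
    const uint8_t* uvPlane = nv12 + (size_t)pitch * height;   // height/2 rows of interleaved U,V

    for (int yy = 0; yy < height; yy++)
    {
        for (int xx = 0; xx < width; xx++)
        {
            int Y = yPlane[yy * pitch + xx];
            // one U/V pair is shared by each 2x2 block of pixels
            const uint8_t* uv = &uvPlane[(yy / 2) * pitch + (xx & ~1)];
            int U = uv[0] - 128;
            int V = uv[1] - 128;

            int c = Y - 16;
            uint8_t* p = &rgb[((size_t)yy * width + xx) * 3];
            p[0] = clamp8((298 * c + 409 * V + 128) >> 8);           // R
            p[1] = clamp8((298 * c - 100 * U - 208 * V + 128) >> 8); // G
            p[2] = clamp8((298 * c + 516 * U + 128) >> 8);           // B
        }
    }
}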

Using libtiff's TIFFReadRawTile to get a jpeg tile without decompression/compression

I have a pyramidal tiled TIFF file, and I want to extract the tiles without decoding and re-encoding the JPEG data. I've seen that the TIFFReadRawTile() function lets you extract the raw tile without decoding it. How can I write the extracted buffer to a readable JPEG file?
The task you are up to is not a trivial one. You might want to take a closer look at the tiff2pdf utility's source code. The utility does what you need, and you could extract the relevant parts from it.
The problem is that the utility does many other things you will have to discard. Also, not every JPEG-in-TIFF can be successfully processed by the utility, basically because there are enough semi-broken TIFFs out there.
I've found that there is actually no way to get the encoded tile without directly messing with the Huffman tables of the TIFF, which is pretty tricky.
The only way I've found is to read the decoded tile and then do some magic with vips to output to JPEG directly:
tdata_t buf;
tsize_t len;
buf = _TIFFmalloc(TIFFTileSize(tif));
len = TIFFReadEncodedTile(tif, tile, buf, (tsize_t)-1);

VImage result((void*)buf, 256, 256, 3, VImage::FMTUCHAR);

void* outBuffer;
size_t outLen;  // renamed: the original reused `len`, which would not compile
vips_jpegsave_buffer(result, &outBuffer, &outLen, "Q", 90, NULL);

and then use cout to output the image after writing some headers.
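For reference, the vips8 C API mixes a bit more cleanly with vips_jpegsave_buffer() than the old C++ VImage class. A sketch of the same round trip under assumptions I'm making up here (256x256 RGB tiles, VIPS_INIT() already called); adjust to the file's actual tile geometry:

#include <vips/vips.h>
#include <tiffio.h>

// Sketch: read one decoded tile from `tif` and re-encode it as JPEG in memory.
int tile_to_jpeg(TIFF* tif, ttile_t tile, void** jpegBuf, size_t* jpegLen)
{
    tsize_t tileSize = TIFFTileSize(tif);
    tdata_t buf = _TIFFmalloc(tileSize);
    if (TIFFReadEncodedTile(tif, tile, buf, (tsize_t)-1) < 0)
        return -1;

    // wrap the raw pixels without copying them; buf must outlive the save
    VipsImage* im = vips_image_new_from_memory(buf, (size_t)tileSize,
                                               256, 256, 3, VIPS_FORMAT_UCHAR);
    if (!im)
        return -1;

    int ret = vips_jpegsave_buffer(im, jpegBuf, jpegLen, "Q", 90, NULL);
    g_object_unref(im);
    _TIFFfree(buf);
    return ret;
}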

converting a binary stream into a png format

I will try to be clear...
My project idea is as follows:
I implemented several compression algorithms in C++ and applied them to a text file, then applied several encryption algorithms to the compressed files. Now I am left with the final step, which is converting these encrypted files to some image format (I am thinking about PNG since it's the clearest one).
MY QUESTION IS:
How can I transform a binary stream into a PNG format?
I know the image will look like rubbish.
I want the binary stream to be converted to a PNG format so I can view it as an image.
I am using C++; I hope someone out there can help me.
(my previous thread, which was closed:
https://stackoverflow.com/questions/5773638/converting-a-text-file-to-any-format-of-images-png-etc-c )
Thanks in advance,
Help19
If you really must store your data inside a PNG, it's better to use a third-party library like OpenCV to do the work for you. OpenCV will let you store your data and save it to disk as a PNG or any other format it supports.
The code to do this would look something like this:
#include <cv.h>
#include <highgui.h>
#include <iostream>
#include <cstring>

IplImage* out_image = cvCreateImage(cvSize(width, height), IPL_DEPTH_8U, channels);
char* buff = new char[width * height * channels];
// ... then copy your data into buff ...
// Copy into the image's own storage rather than swapping the imageData
// pointer, so cvReleaseImage() can free its allocation safely. (This
// assumes rows are not padded; check out_image->widthStep otherwise.)
std::memcpy(out_image->imageData, buff, width * height * channels);
if (!cvSaveImage("fake_picture.png", out_image))
{
    std::cout << "ERROR: Failed cvSaveImage" << std::endl;
}
delete[] buff;
cvReleaseImage(&out_image);
The code above is just to give you an idea of how to do what you need using OpenCV.
I think you're better served by a two-dimensional barcode instead of converting your blob of data into a PNG image.
One of the codes you could use is the QR code.
To do what you have in mind (storing data in an image), you'll need a lossless image format. PNG is a good choice for this. libpng is the official PNG encoding library. It's written in C, so you should be able to interface it with your C++ code easily. The homepage I linked to contains links to the source code, so you can compile libpng into your project, as well as a manual on how to use it. A few quick notes on using libpng:
It uses setjmp and longjmp for error handling. It's a little weird if you haven't worked with C's long jump functionality before, but the manual provides a few good examples (see also the sketch below).
It uses zlib for compression, so you'll also have to compile that into your project.
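Since the setjmp error handling is the part that usually trips people up, here is a minimal sketch that packs an arbitrary byte buffer into an 8-bit grayscale PNG. The fixed column width and the zero-padding of the last row are choices I made for the example, not anything the asker specified:

#include <png.h>
#include <cstdio>
#include <cstring>
#include <vector>

// Sketch: write `size` bytes of arbitrary data as a width-column, 8-bit
// grayscale PNG. The last row is zero-padded if size is not a multiple of
// width. Returns 0 on success.
int bytes_to_png(const char* path, const unsigned char* data, size_t size, int width)
{
    int height = (int)((size + width - 1) / width);
    FILE* fp = fopen(path, "wb");
    if (!fp) return -1;

    png_structp png = png_create_write_struct(PNG_LIBPNG_VER_STRING, NULL, NULL, NULL);
    png_infop info = png_create_info_struct(png);

    // libpng reports errors by longjmp-ing back to this point
    if (setjmp(png_jmpbuf(png)))
    {
        png_destroy_write_struct(&png, &info);
        fclose(fp);
        return -1;
    }

    png_init_io(png, fp);
    png_set_IHDR(png, info, width, height, 8, PNG_COLOR_TYPE_GRAY,
                 PNG_INTERLACE_NONE, PNG_COMPRESSION_TYPE_DEFAULT,
                 PNG_FILTER_TYPE_DEFAULT);
    png_write_info(png, info);

    std::vector<unsigned char> row(width);
    for (int y = 0; y < height; y++)
    {
        size_t offset = (size_t)y * width;
        size_t n = (offset + width <= size) ? (size_t)width : size - offset;
        std::memcpy(row.data(), data + offset, n);
        std::memset(row.data() + n, 0, width - n);  // pad the final row
        png_write_row(png, row.data());
    }

    png_write_end(png, info);
    png_destroy_write_struct(&png, &info);
    fclose(fp);
    return 0;
}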