Trying to encode a GIF file using giflib - c++

I am given image data and color table I am trying to export it as a single frame GIF using giflib. I looked into the API, but can't get it to work. The program crashes even at the first function:
GifFileType image_out;
int errorCode = 0;
char* fileName = "SomeName.gif";
image_out = *EGifOpenFileName(fileName,true, &errorCode);
It is my understanding that I first need to open a file by specifying its name and then update it with the file handle. Then fill in the screen descriptor, the extension blocks, and the image data, and append the 0x3B trailer byte that ends the file. Then use EGifSpew to export the whole GIF. The problem is that I can't even use EGifOpenFileName(); the program crashes at that line.
Can someone help me with the giflib API? This problem is getting really frustrating.
Thanks.
EDIT:
For the purposes of simple encoding, I do not want to specify a color table, and I just want to encode a single-frame GIF.

The prototype is:
GifFileType *EGifOpenFileName(const char *GifFileName, const bool GifTestExistence, int *ErrorCode)
You should write it as:
GifFileType* image_out = EGifOpenFileName(fileName, true, &errorCode);
Note that GifFileType is not a POD type, so you should NOT copy it like that. More importantly, EGifOpenFileName() returns NULL on failure, and your code dereferences the result unconditionally; dereferencing that NULL pointer is what crashes the program, so always check the return value (and errorCode) first.
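Regarding the edit: a GIF needs at least one color table (global or local) to be displayable, so you can't omit it entirely; you can build one with GifMakeMapObject. For reference, here is a minimal, untested sketch of a single-frame encode against giflib 5.1+ (the function name, the 8-bit color resolution, and the error handling are my assumptions, not from your code):
#include <gif_lib.h>

// Writes width*height palette indices in `pixels` as a one-frame GIF.
// `colorMap` could come from GifMakeMapObject(256, someGifColorTypeArray).
bool writeSingleFrameGif(const char* fileName, const GifPixelType* pixels,
                         int width, int height, ColorMapObject* colorMap)
{
    int error = 0;
    GifFileType* gif = EGifOpenFileName(fileName, false, &error);
    if (gif == NULL)                 // NULL here is what made the original code crash
        return false;
    if (EGifPutScreenDesc(gif, width, height, 8, 0, colorMap) == GIF_ERROR ||
        EGifPutImageDesc(gif, 0, 0, width, height, false, NULL) == GIF_ERROR)
    {
        EGifCloseFile(gif, &error);
        return false;
    }
    for (int y = 0; y < height; ++y) // write the frame one scanline at a time
    {
        if (EGifPutLine(gif, const_cast<GifPixelType*>(pixels) + y * width, width) == GIF_ERROR)
        {
            EGifCloseFile(gif, &error);
            return false;
        }
    }
    return EGifCloseFile(gif, &error) == GIF_OK; // writes the 0x3B trailer and closes
}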

Related

Get raw buffer for in-memory dataset in GDAL C++ API

I have generated a GeoTiff dataset in-memory using GDALTranslate() with a /vsimem/ filepath. I need access to the buffer for the actual GeoTiff file to put it in a stream for an external API. My understanding is that this should be possible with VSIGetMemFileBuffer(), however I can't seem to get this to return anything other than nullptr.
My code is essentially as follows:
//^^ GDALDataset* srcDataset created somewhere up here ^^
//psOptions struct has "-b 4" and "-of GTiff" settings.
const char* filep = "/vsimem/foo.tif";
GDALDataset* gtiffData = (GDALDataset*)GDALTranslate(filep, srcDataset, psOptions, nullptr);
vsi_l_offset size = 0;
GByte* buf = VSIGetMemFileBuffer(filep, &size, true); // <-- returns nullptr
gtiffData seems to be a real dataset on inspection; it has all the appropriate properties (number of bands, raster size, etc.). When I provide a real filesystem location to GDALTranslate() rather than the /vsimem/ path and load the result in QGIS, it renders correctly too.
Looking at the source for VSIGetMemFileBuffer(), this should really only be returning nullptr if the file can't be found. This suggests I'm using it incorrectly. Does anyone know what the correct usage is?
Bonus points: Is there a better way to do this (stream the file out)?
Thanks!
I don't know anything about the C++ API, but in Python, the snippet below is what I sometimes use to get the contents of an in-memory file. In my case mainly VRTs, but it shouldn't be any different for other formats.
As said, though, I don't know whether the VSI API translates one-to-one to C++.
from osgeo import gdal

filep = "/vsimem/foo.tif"
# get the file size
stat = gdal.VSIStatL(filep, gdal.VSI_STAT_SIZE_FLAG)
# open file
vsifile = gdal.VSIFOpenL(filep, 'r')
# read entire contents
vsimem_content = gdal.VSIFReadL(1, stat.size, vsifile)
# close the handle when done
gdal.VSIFCloseL(vsifile)
In the case of a VRT the content would be text, shown with something like print(vsimem_content.decode()). For a tiff it would of course be binary data.
I came back to this after putting in a workaround, and upon swapping things back over it seems to work fine. #mmomtchev suggested looking at the CPL_DEBUG output, which showed nothing unusual (and was silent during the actual VSIGetMemFileBuffer call).
In particular, for other reasons I had to put a GDALWarp call in between calling GDALTranslate and accessing the buffer, and it seems that this is what makes the difference. My guess is that GDALWarp is calling VSIFOpenL internally - although I can't find this in the source - and this does some kind of initialisation for VSIGetMemFileBuffer. Something to try for anyone else who encounters this.
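For anyone trying this, a sketch of the safest ordering: make sure the dataset is flushed or closed before grabbing the buffer, since GDAL buffers writes and the /vsimem/ file may be incomplete until then. GDALClose, VSIGetMemFileBuffer, and CPLFree are real GDAL calls; the variable names mirror the code above:
// Close (or at least flush) the dataset first so the /vsimem/ file is complete.
GDALClose(gtiffData);

vsi_l_offset size = 0;
// true = take ownership of the buffer and unlink the /vsimem/ file
GByte* buf = VSIGetMemFileBuffer(filep, &size, true);
// ... stream out buf[0 .. size) to the external API here ...
CPLFree(buf); // the caller owns the buffer after seizing it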

Saving output frame as an image file CUDA decoder

I am trying to save the decoded image file back as a BMP image using the code in CUDA Decoder project.
if (g_bReadback && g_ReadbackSID)
{
    CUresult result = cuMemcpyDtoHAsync(g_bFrameData[active_field], pDecodedFrame[active_field], (nDecodedPitch * nHeight * 3 / 2), g_ReadbackSID);
    long padded_size = (nWidth * nHeight * 3);
    CString output_file;
    output_file.Format(_T("image/sample_45.BMP"));
    SaveBMP(g_bFrameData[active_field], nWidth, nHeight, padded_size, output_file);
    if (result != CUDA_SUCCESS)
    {
        printf("cuMemcpyDtoHAsync returned %d\n", (int)result);
    }
}
But the saved image comes out garbled (the original post included a screenshot of the broken output).
Can anybody help me out here, what am I doing wrong? Thank you.
After investigating further, there were several modifications I made to your approach.
pDecodedFrame is actually in a non-RGB format; I think it is NV12, which I believe is a particular YUV variant.
pDecodedFrame gets converted to an RGB format on the GPU using a particular CUDA kernel.
The target buffer for this conversion will either be a surface provided by OpenGL if g_bUseInterop is specified, or else an ordinary region allocated by the driver API equivalent of cudaMalloc (i.e. cuMemAlloc) if interop is not specified.
The target buffer mentioned above is pInteropFrame (even in the non-interop case). So to make an example for you, for simplicity I chose to only use the non-interop case, because it's much easier to grab the RGB buffer (pInteropFrame) in that case.
The method here copies pInteropFrame back to the host, after it has been populated with the appropriate RGB image by cudaPostProcessFrame. There is also a routine to save the image as a bitmap file. All of my modifications are delineated with comments that include RMC so search for that if you want to find all the changes/additions I made.
To use it, drop this file into the cudaDecodeGL project as a replacement for the videoDecodeGL.cpp source file, then rebuild the project. Run the executable normally to display the video. To capture a specific frame, run the executable with the nointerop command-line switch, e.g. cudaDecodeGL nointerop; the video will not display, but the decode operation and frame capture will take place, and the frame will be saved in a framecap.bmp file. If you want to change the specific frame number that is captured, modify the g_FrameCapSelect = 37; variable to some other number besides 37 and recompile.
Here is the replacement for videoDecodeGL.cpp. I used pastebin because SO has a limit on the number of characters that can be entered in a post.
Note that my approach is independent of whether readback is specified. I would recommend not using readback for this sequence.
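The core of the change is small. A hedged sketch of the capture step, not the exact code from the pastebin link (pInteropFrame, active_field, nWidth, nHeight, checkCudaErrors, and SaveBMP come from the SDK sample; the frame counter and the 4-bytes-per-pixel assumption are mine):
if (g_DecodeFrameCount == g_FrameCapSelect) // e.g. g_FrameCapSelect = 37
{
    size_t rgbaSize = (size_t)nWidth * nHeight * 4;   // post-processed frame, 4 bytes/pixel
    unsigned char* hostFrame = new unsigned char[rgbaSize];
    // synchronous copy: the RGB data is complete on the host when this returns
    checkCudaErrors(cuMemcpyDtoH(hostFrame, pInteropFrame[active_field], rgbaSize));
    SaveBMP(hostFrame, nWidth, nHeight, (long)rgbaSize, _T("framecap.bmp"));
    delete[] hostFrame;
}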

How do I write a Cairo surface to png to stdout?

I'm trying to write a CGI program that will output a PNG image to stdout. I can already do this from an image file (PNG or otherwise), but now I'm using Cairo to dynamically generate some image, then output it to the browser.
The problem I'm facing is this: Cairo writes a surface to a PNG using one of two functions. The first is Surface::write_to_png(string filename). This doesn't work for me, since I'm not writing to a file but to stdout. The second is Surface::write_to_png_stream(something-or-other write_func), as described here. I do not understand how this works, or even whether it is what I want. Is there a better way to accomplish this, and if not, how do I use this abysmal function?
Thanks
As it says in the documentation, write a function to handle the writing:
#include <cstdio> // for stdout
Cairo::ErrorStatus my_write_func(const unsigned char* data, unsigned int length)
{
    // fwrite returns the number of items written; with item size 1 that is the byte count
    return length == std::fwrite(data, 1, length, stdout) ? CAIRO_STATUS_SUCCESS : CAIRO_STATUS_WRITE_ERROR;
}
Usage:
my_surface.write_to_png_stream(my_write_func);
For those who need the answer to this question (if you exist), I've figured it out:
Kerrek actually gets most of the credit here, but I thought I would post my results, and what ended up working. Here's the write function:
Cairo::ErrorStatus write_stdout(const unsigned char* data, unsigned int length)
{
    return std::cout.write((const char*)data, length) ? CAIRO_STATUS_SUCCESS : CAIRO_STATUS_WRITE_ERROR;
}
Now, I wasn't sure at first whether this would return CAIRO_STATUS_WRITE_ERROR on error, but std::ostream::write returns the stream itself, which evaluates to false in a boolean context once a write fails, so the ternary reports errors correctly. In any case, this code does work.
To call it, I used:
surface->write_to_png_stream(&write_stdout);
surface was defined as such:
Cairo::RefPtr<Cairo::ImageSurface> surface =
Cairo::ImageSurface::create(Cairo::FORMAT_ARGB32, WIDTH, HEIGHT);
Basically, it's a normal surface. Anyways, thanks to Kerrek again, for answering, and I hope that helps someone.
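One CGI-specific note: the browser needs a Content-Type header before the raw PNG bytes, and stdout should be flushed at the end. The header line below is an assumption about a typical CGI setup, not part of the Cairo API:
std::cout << "Content-Type: image/png\r\n\r\n"; // HTTP header, then a blank line
surface->write_to_png_stream(&write_stdout);    // then the binary PNG body
std::cout.flush();                              // make sure it all reaches the server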

converting a binary stream into a png format

I will try to be clear...
My project idea is as follows:
I implemented several compression algorithms in C++ and applied them to a text file, then applied several encryption algorithms to the compressed files. Now I am left with the final step, which is converting these encrypted files to some image format (I am thinking about PNG since it is the clearest one).
MY QUESTION IS:
How could I transform a binary stream into PNG format?
I know the image will look rubbish.
I want the binary stream to be converted to a PNG so I can view it as an image.
I am using C++; I hope someone out there can help me.
( my previous thread which was closed )
https://stackoverflow.com/questions/5773638/converting-a-text-file-to-any-format-of-images-png-etc-c
Thanks in advance
Help19
If you really really must store your data inside a PNG, it's better to use a 3rd party library like OpenCV to do the work for you. OpenCV will let you store your data and save it on the disk as PNG or any other format that it supports.
The code to do this would look something like this:
#include <cv.h>
#include <highgui.h>
#include <cstring>
#include <iostream>

// channels: 1 for grayscale, 3 for BGR color (this is the third argument,
// the channel count, not bits per pixel)
IplImage* out_image = cvCreateImage(cvSize(width, height), IPL_DEPTH_8U, channels);
char* buff = new char[width * height * channels];
// ... then copy your data into buff ...
// copy into the image's own buffer (note: rows may be padded to widthStep)
std::memcpy(out_image->imageData, buff, width * height * channels);
if (!cvSaveImage("fake_picture.png", out_image))
{
    std::cout << "ERROR: Failed cvSaveImage" << std::endl;
}
cvReleaseImage(&out_image);
delete[] buff;
The code above is just to give you an idea of how to do what you need using OpenCV.
I think you're better served with a two-dimensional barcode instead of converting your blob of data into a PNG image.
One of the codes that you could use is the QR code.
To do what you have in mind (storing data in an image), you'll need a lossless image format. PNG is a good choice for this. libpng is the official PNG encoding library. It's written in C, so you should be able to interface it easily with your C++ code. The homepage I linked to contains links to the source code, so you can compile libpng into your project, as well as a manual on how to use it. A few quick notes on using libpng:
It uses setjmp and longjmp for error handling. It's a little weird if you haven't worked with C's long jump functionality before, but the manual provides a few good examples.
It uses zlib for compression, so you'll also have to compile that into your project.
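To make that concrete, here is a minimal, untested sketch that writes a raw byte buffer out as an 8-bit grayscale PNG (the function name and the grayscale choice are mine; see the libpng manual for the canonical version):
#include <png.h>
#include <cstdio>

// Write `data` (width*height bytes, one byte per pixel) as an 8-bit grayscale PNG.
bool write_png(const char* path, const unsigned char* data, int width, int height)
{
    FILE* fp = std::fopen(path, "wb");
    if (!fp) return false;

    png_structp png = png_create_write_struct(PNG_LIBPNG_VER_STRING, nullptr, nullptr, nullptr);
    png_infop info = png ? png_create_info_struct(png) : nullptr;
    if (!info || setjmp(png_jmpbuf(png))) {   // libpng reports errors via longjmp
        png_destroy_write_struct(&png, &info);
        std::fclose(fp);
        return false;
    }

    png_init_io(png, fp);
    png_set_IHDR(png, info, width, height, 8, PNG_COLOR_TYPE_GRAY,
                 PNG_INTERLACE_NONE, PNG_COMPRESSION_TYPE_DEFAULT, PNG_FILTER_TYPE_DEFAULT);
    png_write_info(png, info);

    for (int y = 0; y < height; ++y)          // one row of `width` bytes at a time
        png_write_row(png, const_cast<png_bytep>(data + y * width));

    png_write_end(png, nullptr);
    png_destroy_write_struct(&png, &info);
    std::fclose(fp);
    return true;
}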

Saving image to file with IImageEncoder

Do you have any working code to share?
I'm trying to figure out how to save an IBitmapImage image to a file.
I need to resize an existing .jpg file, and this seems to be the only imaging API for Windows Mobile. I managed to load the image, convert it to IImage -> IBitmapImage -> IBasicBitmapOps, and finally resize it, but I have no clue how to save it properly to a new file.
Use IBitmapImage::LockBits to get access to the image data via its BitmapData* lockedBitmapData parameter. Use the BitmapData to prepare a bitmap file header and info header, then write those headers and the image data in BitmapData::Scan0 to a file with regular file writing, e.g. ::WriteFile (or a higher-level API if you use one).
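A rough, untested sketch of that approach (LockBits, BITMAPFILEHEADER/BITMAPINFOHEADER, and ::WriteFile are real APIs; the 24bpp format, positive top-down stride, and variable names are my assumptions):
BitmapData bd;
RECT rect = { 0, 0, (LONG)width, (LONG)height };
bitmap->LockBits(&rect, ImageLockModeRead, PixelFormat24bppRGB, &bd);

// BMP headers built from the locked BitmapData
BITMAPINFOHEADER bih = { 0 };
bih.biSize = sizeof(bih);
bih.biWidth = bd.Width;
bih.biHeight = -(LONG)bd.Height;   // negative height: top-down rows, matching Scan0
bih.biPlanes = 1;
bih.biBitCount = 24;
bih.biCompression = BI_RGB;

DWORD imageSize = bd.Stride * bd.Height; // assumes a positive (top-down) stride
BITMAPFILEHEADER bfh = { 0 };
bfh.bfType = 0x4D42;               // "BM"
bfh.bfOffBits = sizeof(bfh) + sizeof(bih);
bfh.bfSize = bfh.bfOffBits + imageSize;

HANDLE file = ::CreateFile(_T("resized.bmp"), GENERIC_WRITE, 0, NULL,
                           CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
DWORD written = 0;
::WriteFile(file, &bfh, sizeof(bfh), &written, NULL);
::WriteFile(file, &bih, sizeof(bih), &written, NULL);
::WriteFile(file, bd.Scan0, imageSize, &written, NULL);
::CloseHandle(file);

bitmap->UnlockBits(&bd);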