How to apply image effects to a JPEG in a memory buffer using LEADTOOLS 19 - C++

I'm making a Windows executable using C++, LEADTOOLS 19, and VS2015 to read an image from a server, apply an image effect to it using LEADTOOLS, and display it in a browser.
The server provides me the image as an array of chars containing the JPEG encoding of the image (starting with "ÿØÿà") and the length of this buffer. Most of the LEADTOOLS functions read images from files, but I don't want to have to write it to disk just to read it as a bitmap.
The first thing I tried was the StartFeedLoad function:
//pImageData is the buffer of JPEG data, and imageLength is the
//server-provided size of pImageData in bytes
LBuffer buf((L_VOID *)pImageData, imageLength);
LFile imgFile;
LBitmap bitmap;
imgFile.SetBitmap(&bitmap);
// Initialize the file-load process
imgFile.StartFeedLoad(8, 0,
                      LOADFILE_ALLOCATE | LOADFILE_STORE, NULL);
imgFile.FeedLoad(&buf);
imgFile.StopFeedLoad();
With this code, I get this exception when trying to run StartFeedLoad:
Exception thrown at 0x000007F855BC2662 (ltwvcax.dll) in getimage.exe:
0xC0000005: Access violation reading location 0x0000000000000148.
I tried a few different things before calling StartFeedLoad, and tried changing the parameters I was passing it, but got that exception every time.
With that not working, the next method I tried was to save the buffer as an in-memory file using the LEADTOOLS library LMemoryFile class:
LBuffer buf((L_VOID *)pImageData, imageLength);
LMemoryFile imgmemfile;
BITMAPHANDLE pbit;
//The bitmap the image will be loaded into
LBitmap bitmap;
imgmemfile.SetBitmap(&bitmap);
//Load the buffer to the image
ret = imgmemfile.LoadMemory(buf, 0, ORDER_RGBORGRAY, LOADFILE_ALLOCATE | LOADFILE_STORE, NULL);
At this point, LoadMemory returns WRPERR_INVALID_PARAMETERS: One or more invalid parameters were specified. I've tried different bitsPerPixel values, color orders, and with or without adding another NULL parameter as fileInfo but still get the same error.
I feel like I need to do something else to "prep" the bitmap for loading, but I don't know its size or anything else needed to initialize it.
Thanks!
EDIT 5/9/16: Added "GetInfo" as indicated by LEADTOOLS:
//Load image
LBuffer buf((L_VOID *)pImageData, imageLength);
//LFile imgmemfile;
FILEINFO fileInfo = FILEINFO();
LMemoryFile imgmemfile;
BITMAPHANDLE pbit;
if ((LBase::GetLoadedLibraries() & LT_FIL) == 0)
    return false;
LBitmap bitmap;
imgmemfile.SetBitmap(&bitmap);
ret = imgmemfile.GetInfo(buf, &fileInfo, sizeof(FILEINFO), 0, NULL);
ret = imgmemfile.LoadMemory(buf, 0, ORDER_RGBORGRAY, LOADFILE_ALLOCATE | LOADFILE_STORE, NULL, &fileInfo);
ret = imgmemfile.Save(&buf, FILE_JPEG, 8, 30, NULL);
The code gets past the additional library check, but GetInfo returns -2041, indicating that LTFIL isn't loaded.

You should use LMemoryFile::GetInfo and LMemoryFile::LoadMemory if you have the whole file in memory at the start. If you don't, then FeedLoad is the way to go. There is an example here: https://www.leadtools.com/help/leadtools/v19/main/clib/lfile__startfeedload.html
You can find a full working example in your LEADTOOLS installation folder: C:\LEADTOOLS 19\Examples\ClassLibrary\MSVC\FeedLoad
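For reference, here is a minimal sketch of that whole-file-in-memory path, using the same calls and parameters already shown in the question's edit, only with the return codes checked (assuming the usual LEADTOOLS SUCCESS code; parameter lists should still be verified against the v19 class reference):
// Sketch only: the whole JPEG is already in memory.
// Parameters mirror the question's edit; this is not a verified reference implementation.
LBuffer buf((L_VOID *)pImageData, imageLength);
FILEINFO fileInfo = FILEINFO();
LMemoryFile imgmemfile;
LBitmap bitmap;
imgmemfile.SetBitmap(&bitmap);

L_INT ret = imgmemfile.GetInfo(buf, &fileInfo, sizeof(FILEINFO), 0, NULL);
if (ret != SUCCESS)          // SUCCESS as defined by LEADTOOLS
    return false;

ret = imgmemfile.LoadMemory(buf, 0, ORDER_RGBORGRAY,
                            LOADFILE_ALLOCATE | LOADFILE_STORE, NULL, &fileInfo);
if (ret != SUCCESS)
    return false;

// ... apply the desired image effect to 'bitmap' here ...

// Re-encode the result as JPEG back into the memory buffer (quality factor 30), as in the edit.
ret = imgmemfile.Save(&buf, FILE_JPEG, 8, 30, NULL);
return (ret == SUCCESS);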

The functions that LEADTOOLS Support gave me were correct, but I was still having the issue because I was linking to both the Unicode and ANSI versions of the LEADTOOLS C++ class library (Ltwvc_x.lib AND Ltwvc_ax.lib). When I removed the Unicode library (from my ANSI project), everything worked fine.
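One way to make that explicit in code rather than in the project settings (a sketch; the library name is the one from this project, and its folder still has to be on the linker's search path):
// ANSI (MBCS) project: link only the ANSI class library.
// Do NOT also link the Unicode version (Ltwvc_x.lib) into the same project.
#pragma comment(lib, "Ltwvc_ax.lib")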

Related

Cannot get an image handle from a bitmap within the resources when using "LoadImageA()" and cannot understand why

I am trying to load an image resource using the LoadImageA() function, yet it doesn't work and I don't understand why.
Here's a bit of my code:
bool isRessource = IS_INTRESOURCE(107);
// Load the resource to the HGLOBAL.
HGLOBAL imageResDataHandle = LoadImageA(
    NULL,
    MAKEINTRESOURCEA(107),
    IMAGE_BITMAP,
    0,
    0,
    LR_SHARED
);
HRESULT hr = (imageResDataHandle ? S_OK : E_FAIL);
The image I want to load is a bitmap saved in the resources, and represented as such within resources.h:
#define IDB_BITMAP1 107
When I execute the code, isRessource is equal to true, yet hr is equal to E_FAIL.
Any idea as to why this is happening? I am using Visual Studio 2019, and I made the image using Gimp.
After making the same image with the same format in another application (I used Krita) and importing it again, the image finally loads with the same code (I only changed the reference to the resource). I guess bitmaps exported from GIMP just won't work in Visual Studio (I tried most of the bitmap formats GIMP can export).
The first result when searching with "LoadImage gimp" as the keywords is enough to answer this question.
Here is the useful part:
The bitmap exported by GIMP has a broken header. Specifically, the
code seems to not write the RGBA masks, which AFAIK are not optional
in a BITMAPV5HEADER. This misaligns and changes the size of the entire
extended header, incidentally making it look like a BITMAPV4HEADER,
which explains why most programs will still open it fine. Without
having done any testing, I'd guess LoadImage() is more picky about the
values in this extended header; returning NULL is how it indicates
failure.
By the way, when you import the bitmap, doesn't Visual Studio warn you that the format of the image is unknown?
After testing, using LoadImage to load such an image returns NULL, and GetLastError also returns 0.
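If you want to verify which DIB header your resource actually carries before handing it to LoadImage, a small diagnostic sketch (the function name is just for illustration; 107 is the resource ID from the question) can check the biSize field of the in-module resource data:
#include <windows.h>
#include <stdio.h>

// Sketch: report which DIB header a BITMAP resource carries.
// 40 = BITMAPINFOHEADER, 108 = BITMAPV4HEADER, 124 = BITMAPV5HEADER.
void DumpBitmapHeaderSize(HINSTANCE hInst)
{
    HRSRC hRes = FindResource(hInst, MAKEINTRESOURCE(107), RT_BITMAP);
    if (!hRes) { printf("resource not found\n"); return; }

    HGLOBAL hData = LoadResource(hInst, hRes);
    const BITMAPINFOHEADER* bih = (const BITMAPINFOHEADER*)LockResource(hData);
    if (!bih) { printf("could not lock resource\n"); return; }

    // A GIMP export with the truncated V5 header described above will report
    // an unexpected value here.
    printf("biSize = %lu (40=INFO, 108=V4, 124=V5)\n", (unsigned long)bih->biSize);
}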

DirectShow universal media decoder

I am new to DirectShow API.
I want to decode a media file and get uncompressed RGB video frames using DirectShow.
I noted that all such operations should be completed through a GraphBuilder. Also, every processing block is called a filter, and there are many different filters for different media files. For example, for decoding H264 we should use the "Microsoft MPEG-2 Video Decoder", for AVI files the "AVI Splitter" filter, etc.
I would like to know if there is a general way (decoder) that can handle all those file types.
I would really appreciate it if someone could point out an example that goes from importing a local file to decoding it into uncompressed RGB frames. All the examples I found deal with window handles; they just configure it and call pGraph->Run(). I have also surfed through the Windows SDK samples, but couldn't find useful ones.
Thanks very much in advance.
A universal DirectShow decoder is, in general, against the concept of the DirectShow API. The whole idea is that individual filters are responsible for individual tasks (esp. decoding a certain encoding or demultiplexing a certain container format). The registry of filters and Intelligent Connect let you have the filters built into a chain to do the requested processing, in particular decoding from a compressed format to 24-bit RGB for video.
From this standpoint you don't need a universal decoder, and it is not expected that such a decoder exists. However, such a decoder (or something close) does exist: ffdshow, or one of its derivatives. Presently, you might want to look at LAVFilters, for example. They wrap FFmpeg, which itself can handle many formats, and connect it to the DirectShow API so that, as a filter, it can handle many formats/encodings.
There is no general rule for using or not using such a codec pack; in most cases you take various factors into consideration and decide what to do. If your application handles various scenarios, a good starting point into graph building is Overview of Graph Building.
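As a concrete illustration of Intelligent Connect, here is a minimal sketch (error handling trimmed; the file path is a placeholder) that lets the registered filters assemble the whole chain and play the file; which decoders end up in the graph depends entirely on what is registered on the machine:
#include <dshow.h>
#pragma comment(lib, "strmiids.lib")

// Sketch: Intelligent Connect in action -- RenderFile picks the source filter,
// splitter, decoders and renderer from the registered filters.
int main()
{
    CoInitialize(NULL);

    IGraphBuilder* pGraph   = NULL;
    IMediaControl* pControl = NULL;

    CoCreateInstance(CLSID_FilterGraph, NULL, CLSCTX_INPROC_SERVER,
                     IID_IGraphBuilder, (void**)&pGraph);

    if (SUCCEEDED(pGraph->RenderFile(L"C:\\media\\sample.avi", NULL)))
    {
        pGraph->QueryInterface(IID_IMediaControl, (void**)&pControl);
        pControl->Run();
        // A real application would now wait on IMediaEvent for completion.
        pControl->Release();
    }

    pGraph->Release();
    CoUninitialize();
    return 0;
}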
My goal is to accomplish the task using DirectShow in order to have no external dependencies. Do you know of a particular example that decompresses frames for some file type?
Your request is too broad and at the same time typical and, to some extent, fairly simple. If you spend some time playing with the GraphEdit SDK tool, or rather GraphStudioNext, which is a more powerful version of the former, you will be able to build filter graphs interactively, render media files of different types, and see which filters participate in rendering. You can accomplish the very same thing programmatically, since the interactive actions basically all have individual matching API calls.
You will be able to see that specific formats are handled by different filters, and that the Intelligent Connect mentioned above builds chains of filters in combination in order to satisfy the request and get the pipeline together.
The default use case is playback, and if you want to get video rendered to 24/32-bit RGB, your course of action is pretty much the same: you build a graph, which just needs to terminate in something other than a video window. A more flexible and sophisticated approach, typical for advanced development, is to supply a custom video renderer filter and accept decompressed RGB frames on it.
A simpler and very popular version of the solution is to use the Sample Grabber filter: initialize it to accept RGB, set up a callback on it so that your SampleCB callback method is called every time an RGB frame is decompressed, and use the Sample Grabber in the graph. (You will find a great many attempts to accomplish this if you search open source code and/or the web for the keywords ISampleGrabber, ISampleGrabberCB, SampleCB or BufferCB, MEDIASUBTYPE_RGB24.)
Using the Sample Grabber
DirectShow: Examples for Using SampleGrabber for Grabbing a Frame and Building a VU Meter
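Along the lines of those references, a sketch of the Sample Grabber fragment might look like this (the callback object and graph are supplied by the application, and on newer SDKs the ISampleGrabber declarations from qedit.h have to be brought in manually):
#include <dshow.h>
#include <qedit.h>   // ISampleGrabber / ISampleGrabberCB; copy the declarations
                     // in by hand if your SDK no longer ships qedit.h.

// Sketch: add a Sample Grabber that only accepts 24-bit RGB and calls
// BufferCB for every decoded frame. pMyCallback is a hypothetical
// ISampleGrabberCB implementation provided by the application.
void AddRgbGrabber(IGraphBuilder* pGraph, ISampleGrabberCB* pMyCallback)
{
    IBaseFilter*    pGrabberFilter = NULL;
    ISampleGrabber* pGrabber       = NULL;

    CoCreateInstance(CLSID_SampleGrabber, NULL, CLSCTX_INPROC_SERVER,
                     IID_IBaseFilter, (void**)&pGrabberFilter);
    pGraph->AddFilter(pGrabberFilter, L"Sample Grabber");
    pGrabberFilter->QueryInterface(IID_ISampleGrabber, (void**)&pGrabber);

    // Restricting the accepted type to RGB24 forces a decoder / color space
    // converter to be inserted upstream when the graph is connected.
    AM_MEDIA_TYPE mt = {};
    mt.majortype = MEDIATYPE_Video;
    mt.subtype   = MEDIASUBTYPE_RGB24;
    pGrabber->SetMediaType(&mt);

    // 1 = call ISampleGrabberCB::BufferCB with each frame's bytes
    // (0 would select SampleCB, which hands over the IMediaSample instead).
    pGrabber->SetCallback(pMyCallback, 1);

    // Connecting source -> grabber -> (null) renderer is omitted here; the
    // "Using the Sample Grabber" article linked above shows a ConnectFilters
    // helper that does it pin by pin.

    pGrabber->Release();
    pGrabberFilter->Release();
}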
Another more or less popular approach is to set up a playback pipeline, play the file, and read frames back from the video presenter. This is suggested in another answer to this question; it is relatively easy to do and does the job if you don't have performance requirements or a need to extract every single frame. That is, it is a good way to get a random RGB frame from the feed, but not every/all frames. See related:
Different approaches on getting captured video frames in DirectShow
You are looking for the vmr9 example in the DirectShow samples.
In your Windows SDK's install, look for this example:
Microsoft SDKs\Windows\v7.0\Samples\multimedia\directshow\vmr9\windowless\windowless.sln
Then search for the function CaptureImage; inside this method, IVMRWindowlessControl9::GetCurrentImage is exactly what you want.
It captures a video frame in bitmap (RGB) format.
Here is a copy of the CaptureImage code:
BOOL CaptureImage(LPCTSTR szFile)
{
    HRESULT hr;

    if(pWC && !g_bAudioOnly)
    {
        BYTE* lpCurrImage = NULL;

        // Read the current video frame into a byte buffer. The information
        // will be returned in a packed Windows DIB and will be allocated
        // by the VMR.
        if(SUCCEEDED(hr = pWC->GetCurrentImage(&lpCurrImage)))
        {
            BITMAPFILEHEADER hdr;
            DWORD dwSize, dwWritten;
            LPBITMAPINFOHEADER pdib = (LPBITMAPINFOHEADER) lpCurrImage;

            // Create a new file to store the bitmap data
            HANDLE hFile = CreateFile(szFile, GENERIC_WRITE, FILE_SHARE_READ, NULL,
                                      CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, 0);
            if (hFile == INVALID_HANDLE_VALUE)
                return FALSE;

            // Initialize the bitmap header
            dwSize = DibSize(pdib);
            hdr.bfType      = BFT_BITMAP;
            hdr.bfSize      = dwSize + sizeof(BITMAPFILEHEADER);
            hdr.bfReserved1 = 0;
            hdr.bfReserved2 = 0;
            hdr.bfOffBits   = (DWORD)sizeof(BITMAPFILEHEADER) + pdib->biSize +
                              DibPaletteSize(pdib);

            // Write the bitmap header and bitmap bits to the file
            WriteFile(hFile, (LPCVOID) &hdr, sizeof(BITMAPFILEHEADER), &dwWritten, 0);
            WriteFile(hFile, (LPCVOID) pdib, dwSize, &dwWritten, 0);

            // Close the file
            CloseHandle(hFile);

            // The app must free the image data returned from GetCurrentImage()
            CoTaskMemFree(lpCurrImage);

            // Give user feedback that the write has completed
            TCHAR szDir[MAX_PATH];
            GetCurrentDirectory(MAX_PATH, szDir);

            // Strip off the trailing slash, if it exists
            int nLength = (int) _tcslen(szDir);
            if (szDir[nLength-1] == TEXT('\\'))
                szDir[nLength-1] = TEXT('\0');

            Msg(TEXT("Captured current image to %s\\%s."), szDir, szFile);
            return TRUE;
        }
        else
        {
            Msg(TEXT("Failed to capture image! hr=0x%x"), hr);
            return FALSE;
        }
    }

    return FALSE;
}

Unreal Engine 4: save rendered frame to memory

Previously, I was looking to output frames rendered by UE4 to file.
I managed to do this and the details can be found in this StackOverflow post
The function to output the frame to a file is:
FScreenshotRequest::RequestScreenshot(filename, false, false);
Now, instead of writing to file, I would like to write to memory. I don't want to write to file and then read into memory.
I have been digging through the source code and found where screenshots are being made, but am having some trouble.
ViewportClient->ProcessScreenShots(this); is being called on line 1012 of UnrealClient.cpp
Following that, I found that the screenshot is actually being generated here:
bScreenshotSuccessful = GetViewportScreenShot(InViewport, Bitmap);
So, after finding all the bits that I think I need, I tried to recreate it in a custom Actor:
UGameViewportClient* gameViewport = GEngine->GameViewport;
FViewport* InViewport = gameViewport->Viewport;
TArray<FColor> Bitmap;
bool bScreenshotSuccessful = GetViewportScreenShot(InViewport, Bitmap);
if (bScreenshotSuccessful)
{
    FIntVector Size(InViewport->GetSizeXY().X, InViewport->GetSizeXY().Y, 0);
    TArray<uint8> CompressedBitmap;
    FString ScreenShotName = TEXT("out.png");
    FImageUtils::CompressImageArray(Size.X, Size.Y, Bitmap, CompressedBitmap);
    FFileHelper::SaveArrayToFile(CompressedBitmap, *ScreenShotName);
}
For some reason, bool bScreenshotSuccessful = GetViewportScreenShot(InViewport, Bitmap); throws an Access violation reading location 0x0000000000000020. exception.
I think the error has something to do with this line:
Viewport->ReadPixels(Bitmap, FReadSurfaceDataFlags(), ViewRect)
I've tried googling what an 'Access violation' is, and it seems to be something to do with a null pointer or something like that, but I am still not able to figure this out as I'm rather new to C++.
Question
How can I fix this so that bScreenshotSuccessful is true?
NOTE: I realise that FFileHelper::SaveArrayToFile(CompressedBitmap, *ScreenShotName); attempts to save to file, despite me saying that is not what I want to do, so please ignore that; once I have the bitmap, I can compress and stream it.
Your ViewRect has just three components instead of four, and it is the wrong type. Try this snippet:
FIntRect Rect(0, 0, InViewport->GetSizeXY().X, InViewport->GetSizeXY().Y);
bScreenshotSuccessful = GetViewportScreenShot(InViewport, Bitmap, Rect);
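Also, an access violation at an address as low as 0x20 usually means a null pointer is being dereferenced along the way (GEngine->GameViewport or its Viewport can be null early in startup), so it is worth guarding those lookups. A sketch using only the calls already in the question, with checks added:
// Sketch, with null checks added; needs "UnrealClient.h" (GetViewportScreenShot)
// and "ImageUtils.h" (FImageUtils).
TArray<FColor> Bitmap;

UGameViewportClient* GameViewport = GEngine ? GEngine->GameViewport : nullptr;
FViewport* InViewport = GameViewport ? GameViewport->Viewport : nullptr;
if (!InViewport)
{
    UE_LOG(LogTemp, Warning, TEXT("No game viewport available yet"));
    return;
}

const FIntPoint ViewportSize = InViewport->GetSizeXY();
const FIntRect Rect(0, 0, ViewportSize.X, ViewportSize.Y);

if (GetViewportScreenShot(InViewport, Bitmap, Rect))
{
    TArray<uint8> CompressedBitmap;
    FImageUtils::CompressImageArray(ViewportSize.X, ViewportSize.Y, Bitmap, CompressedBitmap);
    // CompressedBitmap now holds the PNG-encoded frame in memory.
}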

Saving output frame as an image file CUDA decoder

I am trying to save the decoded image file back as a BMP image using the code in CUDA Decoder project.
if (g_bReadback && g_ReadbackSID)
{
    CUresult result = cuMemcpyDtoHAsync(g_bFrameData[active_field], pDecodedFrame[active_field],
                                        (nDecodedPitch * nHeight * 3 / 2), g_ReadbackSID);

    long padded_size = (nWidth * nHeight * 3);
    CString output_file;
    output_file.Format(_T("image/sample_45.BMP"));
    SaveBMP(g_bFrameData[active_field], nWidth, nHeight, padded_size, output_file);

    if (result != CUDA_SUCCESS)
    {
        printf("cuMemAllocHost returned %d\n", (int)result);
    }
}
But the saved image comes out garbled.
Can anybody help me out here, what am I doing wrong? Thank you.
After investigating further, there were several modifications I made to your approach.
pDecodedFrame is actually in some non-RGB format; I think it is NV12, which I believe is a particular YUV variant.
pDecodedFrame gets converted to an RGB format on the GPU using a particular CUDA kernel
the target buffer for this conversion will either be a surface provided by OpenGL if g_bUseInterop is specified, or else an ordinary region allocated by the driver API version of cudaMalloc if interop is not specified.
The target buffer mentioned above is pInteropFrame (even in the non-interop case). So to make an example for you, for simplicity I chose to only use the non-interop case, because it's much easier to grab the RGB buffer (pInteropFrame) in that case.
The method here copies pInteropFrame back to the host, after it has been populated with the appropriate RGB image by cudaPostProcessFrame. There is also a routine to save the image as a bitmap file. All of my modifications are delineated with comments that include RMC so search for that if you want to find all the changes/additions I made.
To use it, drop this file into the cudaDecodeGL project as a replacement for the videoDecodeGL.cpp source file, then rebuild the project and run the executable normally to display the video. To capture a specific frame, run the executable with the nointerop command-line switch, e.g. cudaDecodeGL nointerop; the video will not display, but the decode operation and frame capture will take place, and the frame will be saved in a framecap.bmp file. If you want to change the specific frame number that is captured, modify the g_FrameCapSelect = 37; variable to some other number besides 37 and recompile.
Here is the replacement for videoDecodeGL.cpp. I used pastebin because SO has a limit on the number of characters that can be entered in a post.
Note that my approach is independent of whether readback is specified. I would recommend not using readback for this sequence.
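Just to make the NV12 layout from the points above concrete, here is a CPU-side sketch of the conversion the sample performs on the GPU (BT.601 coefficients, full-range approximation, no attention to performance); src is the host copy the question already makes and dst must hold nWidth * nHeight * 3 bytes:
// Sketch only: CPU-side NV12 -> 24-bit BGR, to illustrate why the decoded
// buffer cannot be written to a BMP as-is. The real sample does this on the
// GPU with cudaPostProcessFrame.
static unsigned char clamp8(int v) { return (unsigned char)(v < 0 ? 0 : (v > 255 ? 255 : v)); }

static void NV12ToBGR24(const unsigned char* src, unsigned char* dst,
                        int nWidth, int nHeight, int nDecodedPitch)
{
    const unsigned char* lumaPlane   = src;                            // Y, full resolution
    const unsigned char* chromaPlane = src + nDecodedPitch * nHeight;  // interleaved U/V, subsampled 2x2

    for (int y = 0; y < nHeight; ++y)
    {
        for (int x = 0; x < nWidth; ++x)
        {
            int Y = lumaPlane[y * nDecodedPitch + x];
            int U = chromaPlane[(y / 2) * nDecodedPitch + (x & ~1)]     - 128;
            int V = chromaPlane[(y / 2) * nDecodedPitch + (x & ~1) + 1] - 128;

            unsigned char* p = dst + (y * nWidth + x) * 3;
            p[0] = clamp8((int)(Y + 1.772 * U));             // B
            p[1] = clamp8((int)(Y - 0.344 * U - 0.714 * V)); // G
            p[2] = clamp8((int)(Y + 1.402 * V));             // R
        }
    }
}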

SDL and Visual Studio 2010 resources

I have a simple question. I use SDL and SDL_image in my C++ program, and image loading works fine from a single PNG file.
SDL_Surface *dot = NULL;
dot = load_image("dot.png");
But how can I load the PNG file if I add it to the resources? I don't want to store it in a PNG file next to the exe. Is it possible to load it from the resources?
Tried
dot = load_image(MAKEINTRESOURCE(IDB_PNG1));
but it didn't work.
It is fully possible to load an image or something else into SDL from a Windows resource. To do this you need to get the raw data and pass it to the appropriate SDL_RWops.
HMODULE hModule = GetModuleHandle(_T("myapp.exe"));
HRSRC hWhite = FindResource(hModule, MAKEINTRESOURCE(IDB_WHITE_PNG), _T("PNG"));
unsigned int white_size = SizeofResource(hModule, hWhite);
HGLOBAL hgWhite = LoadResource(hModule, hWhite);
unsigned char* white_data = (unsigned char*)LockResource(hgWhite);
SDL_Surface* white = IMG_Load_RW(SDL_RWFromConstMem(white_data, white_size), 1);
This assumes that you have something similar in your *.rc file:
IDB_WHITE_PNG PNG "White.png"
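Wrapped into a small helper with error checks (the function name is just for illustration; the resource type string matches the .rc entry above):
#include <windows.h>
#include <tchar.h>
#include <SDL.h>
#include <SDL_image.h>

// Sketch: load an SDL_Surface from a PNG stored as a custom "PNG" resource.
SDL_Surface* load_image_from_resource(int resourceId)
{
    HMODULE hModule = GetModuleHandle(NULL);   // NULL = the running executable
    HRSRC   hRes    = FindResource(hModule, MAKEINTRESOURCE(resourceId), _T("PNG"));
    if (!hRes)
        return NULL;

    DWORD   size = SizeofResource(hModule, hRes);
    HGLOBAL hMem = LoadResource(hModule, hRes);
    if (!hMem || size == 0)
        return NULL;

    void* data = LockResource(hMem);   // valid for the lifetime of the module
    if (!data)
        return NULL;

    // The trailing 1 asks SDL to free the RWops after loading.
    return IMG_Load_RW(SDL_RWFromConstMem(data, (int)size), 1);
}
With that in place, the call in the question becomes dot = load_image_from_resource(IDB_PNG1);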
According to the MAKEINTRESOURCE documentation:
The return value should be passed only to functions which explicitly indicate that they accept MAKEINTRESOURCE as a parameter.
You don't give the content of load_image (BTW, please include the content of the functions you use in your question; you'll get better answers), but I bet it's not using its parameter to call one of the Windows SDK functions which accept MAKEINTRESOURCE. As far as I know, these resources are supposed to hold specific Windows UI data like mouse cursors, icons, etc., for use with Windows functions, not with other libraries like SDL, so I'm not surprised it doesn't work.