Difficulties using SDL_CreateRGBSurface to create a new SDL_Surface - C++

I'm creating a basic console-based image processing tool in C++ and have so far found writing functions that operate on RGB values super easy!
What I'd like to do now is create functions for blurring, sharpening, and resizing the image. I know that in order to do this, I have to map the new pixels to a new image. I'm having a bit of a problem using this built-in SDL function to create a new blank surface onto which I intend to map the new pixels:
SDL_Surface *SDL_CreateRGBSurface(Uint32 flags, int width, int height, int depth, Uint32 Rmask, Uint32 Gmask, Uint32 Bmask, Uint32 Amask);
The rest of my program uses Uint8* rather than Uint32, so I'm not sure how this will affect proceedings. Also, I'm not 100% sure about all of the parameters and what they do/are used for: ie flags and depth.
Can someone give me a bit of advice on how to use this function to properly create a new SDL_Surface?

There's a code example here that you can look at: http://wiki.libsdl.org/moin.fcg/SDL_CreateRGBSurface
Specifically, the line you are looking for is:
surface = SDL_CreateRGBSurface(0, width, height, 32, 0, 0, 0, 0);
The flags are used for various things, but you can safely set them to 0 here. As for the depth, it refers to the number of bits per pixel. In this case, 4 channels × 8 bits = 32 (including alpha).
I presume you already have the image loaded. If so, you can use this loaded image to get the bits per pixel instead:
surface->format->BitsPerPixel

Related

Convert ARCore image (YUV_420_888) to cv::Mat in native c++ plugin

I'm currently struggling with the conversion of an image captured from ARCore in YUV_420_888 format to an OpenCV Mat object. I use ARCore as a plugin for Unity, from which I'm able to capture the bytes that represent the image.
I've also written marker detection in C++ using OpenCV that I planned on using as a native plugin. So far so good.
ARCore has the functionality of returning a GoogleARCore.CameraImageBytes object that contains the following information (taken from a single frame transmitted to the native plugin):
Width: 640
Height: 480
Y Row Stride: 640
UV Row Stride: -1008586736
UV Pixel Stride: 307200
as well as the image data as a System.IntPtr, which I'm able to convert to a byte[] and thus receive on the C++ side as an unsigned char *.
This information is passed to C++ via the following function signature:
extern "C" void findMarkersInImage(int width, int height, int y_row_stride,
                                   int uv_row_stride, int uv_pixel_stride,
                                   unsigned char *image_data, int buffer_size)
I realize that there are many answers on this and various other platforms that suggest an algorithm for the conversion. However, all of them employ functionality to directly gather image plane information from the image. That is, every solution relies on calling some function like getImageplanes(image, plane_number). Furthermore, all of the other solutions involve the use of an AImage, which is not available to me. Also, converting the gathered image bytes to an AImage and then to a cv::Mat seems like computational overkill for an algorithm that is supposed to run in real time without hurting overall performance.
So finally three main questions occur to me:
Is there a way to find the plane positions within the unsigned char * I receive, which contains the whole image?
The UV Row Stride value differs slightly from frame to frame; is that normal behaviour?
Moreover, the aforementioned value is negative.
Perhaps someone can suggest further literature that might help me, or has a direct answer to one of these questions.

OpenCV + OpenGL - Get OpenGL image as OpenCV camera

I would like to grab an OpenGL image and feed it to OpenCV for analysis (as a simulator for the OpenCV algorithms) but I am not finding much information about it, all I can find is the other way around (placing an OpenCV image inside OpenGL). Could someone explain how to do so?
EDIT:
I will be simulating a camera on top of a robot, so I will render a 3D environment in real time and display it in a Qt GUI for the user. I will give the user the option to use a real webcam feed or a simulated 3D scene (that changes as the robot moves); the OpenCV algorithm will be the same for both inputs, so the user can test their code without having to use a real robot all the time.
You are probably looking for the function glReadPixels. It will download whatever is currently displayed by OpenGL to a buffer.
unsigned char* buffer = new unsigned char[width*height*3];
glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, buffer);
cv::Mat image(height, width, CV_8UC3, buffer);
cv::imshow("Show Image", image);
For OpenCV you will probably also need to flip the image vertically and convert it from RGB to BGR.
Edit: Since just using glReadPixels is not a very efficient way to do it, here is some sample code using Framebuffers and Pixel Buffer Objects to efficiently transfer:
How to render offscreen on OpenGL?
I did this in a previous research project. There are no great difficulties here.
What you have to do is basically:
read the rendered texture from OpenGL into some pre-allocated memory buffer;
apply some geometric transform (flip X and/or Y coordinate) to account for the possibly different coordinate frames between OpenGL and OpenCV. It's a detail but it helps in visualization (hint: use a texture with an F letter inside to find quickly what coordinate you need to flip!);
you can build an OpenCV cv::Mat object directly around your pre-allocated memory buffer, and then process it in place or copy it to some other matrix object and process that.
As indicated in another answer, reading the OpenGL framebuffer is a simple matter of calling glReadPixels().
What you get is usually 3 or 4 channels with 8 bits per channel (RGB / RGBA), though it may depend on your actual OpenGL context.
If color is important to you, you may need (but it is not required) to convert the RGB image data to the BGR format (Blue - Green - Red). For historical reasons, this is the default color channel ordering in OpenCV.
You do this with a call to cv::cvtColor(source, dest, cv::COLOR_RGB2BGR) for example.
I needed this for my research.
I took littleimp's advice, but fixing the colors and flipping the image took valuable time to figure out.
Here is what I ended up with.
typedef Mat Image;

typedef struct {
    int width;
    int height;
    char* title;
    float field_of_view_angle;
    float z_near;
    float z_far;
} glutWindow;

glutWindow win;

Image glutTakeCVImage() {
    // take a picture within GLUT and return it formatted for use in OpenCV
    int width = win.width;
    int height = win.height;
    unsigned char* buffer = new unsigned char[width*height*3];
    glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, buffer);
    Image img(height, width, CV_8UC3, buffer);
    // OpenGL's origin is bottom-left, OpenCV's is top-left: flip vertically
    Image flipped_img;
    flip(img, flipped_img, 0);
    // OpenCV expects BGR channel order rather than OpenGL's RGB
    Image BGR_img;
    cvtColor(flipped_img, BGR_img, COLOR_RGB2BGR);
    delete[] buffer; // flip() made a copy, so the raw buffer can be freed
    return BGR_img;
}
I hope someone finds this useful.

Creating a Grayscale image in Visual C++ from a float array

I have an array of grayscale pixel values (floats as a fraction of 1) that I need to display, and then possibly save. The values just came from computations, so I have no libraries currently installed or anything. I've been trying to figure out the CImage libraries, but can't make much sense of what I need to do to visualize this data. Any help would be appreciated!
Thank you.
One possible approach, which I've used with some success, is to use D3DX's texture functions to create a Direct3D texture and fill it. There is some overhead in starting up D3D, but it gives you multi-thread-able texture creation and built-in-ish viewing, as well as saving to files without much more fuss.
If you're not interested in using D3D(X), some of the specifics here won't be useful, but the generator should help figure out how to output data for any other library.
For example, assuming an existing D3D9 device pDevice and a noise generator (or other texture data source) pGen:
IDirect3DTexture9 * pTexture = nullptr;
D3DXCreateTexture(pDevice, 255, 255, 0, 0, D3DFMT_R8G8B8, D3DPOOL_DEFAULT, &pTexture);
D3DXFillTexture(pTexture, &texFill, pGen);
D3DXSaveTextureToFile("texture.png", D3DXIFF_PNG, pTexture, NULL);
The generator function:
VOID WINAPI texFill(
    D3DXVECTOR4* pOut,
    CONST D3DXVECTOR2* pTexCoord,
    CONST D3DXVECTOR2* pTexelSize,
    LPVOID pData)
{
    // For a prefilled array: texture coordinates arrive normalized to
    // [0, 1], so scale them up to array indices first (255 = texture size).
    float * pArray = (float *)pData;
    int x = (int)(pTexCoord->x * 255);
    int y = (int)(pTexCoord->y * 255);
    float initial = pArray[y * 255 + x];
    // Alternatively, for a generator object passed as the third
    // parameter of D3DXFillTexture:
    // Generator * pGen = (Generator*)pData;
    // float initial = pGen->GetPixel(pTexCoord->x, pTexCoord->y);
    pOut->x = pOut->y = pOut->z = initial; // the fill callback expects [0, 1]
    pOut->w = 1.0f;                        // opaque alpha
}
D3DXCreateTexture: http://msdn.microsoft.com/en-us/library/windows/desktop/bb172800%28v=vs.85%29.aspx
D3DXFillTexture: http://msdn.microsoft.com/en-us/library/windows/desktop/bb172833(v=vs.85).aspx
D3DXSaveTextureToFile: http://msdn.microsoft.com/en-us/library/windows/desktop/bb205433(v=vs.85).aspx
Corresponding functions are available for volume/3D textures. As they are already set up for D3D, you can simply render the texture to a flat quad to view, or use as a source in whatever graphical application you may want.
So long as your generator is thread-safe, you can run the create/fill/save in one thread per texture, and generate multiple slices or frames simultaneously.
I found that the best solution for this problem was to use the SFML library (www.sfml-dev.org). Very simple to use, but must be compiled from source if you want to use it with VS2010.
You can use the PNM image format without any libraries whatsoever (the format itself is trivial). However, it's pretty archaic, and you'll need an image viewer that supports it. IrfanView, for example, supports it on Windows.

Fast conversion from gray level image to QImage

In an application I handle images where each pixel is either an unsigned int or a float, with each value representing a grey level.
I need to display/save and load these pictures using the qt framework. Currently the only way of handling the conversion is to get and set each pixel which is proving to be a bit slow.
Are there any other way one could convert these images?
Instead of using QImage::setPixel, you should access the image buffer directly.
After you create the image with the desired format, width, and height, you can use QImage::bits() to access the memory buffer, or QImage::scanLine() to retrieve a pointer to the beginning of each line in the image, and set the pixels directly in memory: this is much faster than calling setPixel() for each pixel.
QImage has a constructor that takes a pointer to an existing buffer/image:
QImage ( uchar * data, int width, int height, Format format )
It does not take ownership of the buffer nor does it copy the contents, so you are responsible that the buffer is valid throughout the lifetime of the QImage.
Note: QImage requires 32-bit aligned image rows, so you might need to copy the image row-wise into a new buffer with appropriate padding. Since your pixels are unsigned or float (already 32-bit values) this doesn't apply to you, but keep it in mind should you have different pixel types in the future.

Converting image to pixmap using ImageMagic libraries

My assignment is to get "images read into pixmaps which you will then convert to texture maps". So for the pixmap part only, hear me out and tell me if I have the right idea and if there's an easier way. Library docs I'm using: http://www.imagemagick.org/Magick++/Documentation.html
Read in image:
Image myimage;
myimage.read( "myimage.gif" );
I think this is the pixmap I need to read 'image' into:
GLubyte pixmap[TextureSize][TextureSize][3];
So I think I need a loop that, for every 'pixmap' pixel index, assigns R,G,B values from the corresponding 'image' pixel indices. I'm thinking the loop body is like this:
pixmap[i][j][0] = myimage.pixelColor(i,j).redQuantum();
pixmap[i][j][1] = myimage.pixelColor(i,j).greenQuantum();
pixmap[i][j][2] = myimage.pixelColor(i,j).blueQuantum();
But I think the above functions return Quantums where I need GLubytes, so can anyone offer help here?
-- OR --
Perhaps I can take care of both the pixmap and texture map by using OpenIL (docs here: http://openil.sourceforge.net/tuts/tut_10/index.htm). Think I could simply call these in sequence?
ilutOglLoadImage(char *FileName);
ilutOglBindTexImage(ILvoid);
You can copy the quantum values returned by pixelColor(x,y) into a ColorRGB and you will get normalized (0.0, 1.0) color values.
If you don't have to stick with Magick++ maybe you can try OpenIL, which can load and convert your image to OpenGL texture maps without too much hassle.