Creating a grayscale image in Visual C++ from a float array

I have an array of grayscale pixel values (floats as a fraction of 1) that I need to display, and then possibly save. The values just came from computations, so I have no libraries currently installed or anything. I've been trying to figure out the CImage libraries, but can't make much sense of what I need to do to visualize this data. Any help would be appreciated!
Thank you.

One possible approach I've used with some success is to use D3DX's texture functions to create a Direct3D texture and fill it. There is some overhead in starting up D3D, but it gives you multi-threadable texture creation, more-or-less built-in viewing, and saving to files without much more fuss.
If you're not interested in using D3D(X), some of the specifics here won't be useful, but the generator should help you figure out how to output data for any other library.
For example, assuming an existing D3D9 device pDevice and a noise generator (or other texture data source) pGen:
IDirect3DTexture9 * pTexture = nullptr;
D3DXCreateTexture(pDevice, 255, 255, 0, 0, D3DFMT_R8G8B8, D3DPOOL_DEFAULT, &pTexture);
D3DXFillTexture(pTexture, texFill, pGen);
D3DXSaveTextureToFile("texture.png", D3DXIFF_PNG, pTexture, NULL);
The generator function:
VOID WINAPI texFill(
    D3DXVECTOR4* pOut,
    CONST D3DXVECTOR2* pTexCoord,
    CONST D3DXVECTOR2* pTexelSize,
    LPVOID pData
) {
    // For a prefilled array (convert the normalized texel coords to array indices):
    float * pArray = (float *)pData;
    int x = (int)(pTexCoord->x * 255);
    int y = (int)(pTexCoord->y * 255);
    float initial = pArray[(y * 255) + x];
    // Or, for a generator object passed as the third param to D3DXFillTexture:
    // Generator * pGen = (Generator*)pData;
    // float initial = pGen->GetPixel(pTexCoord->x, pTexCoord->y);
    pOut->x = pOut->y = pOut->z = initial; // fill-function outputs are normalized [0, 1]
    pOut->w = 1.0f; // set alpha to opaque
}
D3DXCreateTexture: http://msdn.microsoft.com/en-us/library/windows/desktop/bb172800(v=vs.85).aspx
D3DXFillTexture: http://msdn.microsoft.com/en-us/library/windows/desktop/bb172833(v=vs.85).aspx
D3DXSaveTextureToFile: http://msdn.microsoft.com/en-us/library/windows/desktop/bb205433(v=vs.85).aspx
Corresponding functions are available for volume/3D textures. As the textures are already set up for D3D, you can simply render one to a flat quad to view it, or use it as a source in whatever graphical application you want.
So long as your generator is thread-safe, you can run the create/fill/save sequence in one thread per texture and generate multiple slices or frames simultaneously.
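For instance, a minimal sketch of that pattern with std::thread (makeSlice is a hypothetical wrapper around the create/fill/save calls above, and generators/paths are placeholder containers; the device would need to be created with D3DCREATE_MULTITHREADED for this to be safe):
#include <thread>
#include <vector>

// Hypothetical wrapper around D3DXCreateTexture / D3DXFillTexture / D3DXSaveTextureToFile
void makeSlice(IDirect3DDevice9 * pDevice, Generator * pGen, const char * path);

std::vector<std::thread> workers;
for (size_t i = 0; i < generators.size(); ++i)
    workers.emplace_back(makeSlice, pDevice, generators[i], paths[i]);
for (std::thread & w : workers)
    w.join(); // wait for every slice to finish saving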

I found that the best solution for this problem was to use the SFML library (www.sfml-dev.org). It is very simple to use, but it must be compiled from source if you want to use it with VS2010.
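For reference, a minimal sketch of that approach with SFML 2.x (untested; width, height and values stand in for your own data):
#include <SFML/Graphics.hpp>

sf::Image img;
img.create(width, height); // blank image
for (unsigned int y = 0; y < height; ++y)
    for (unsigned int x = 0; x < width; ++x) {
        // map a float in [0, 1] to a gray byte
        sf::Uint8 v = static_cast<sf::Uint8>(values[y * width + x] * 255.0f);
        img.setPixel(x, y, sf::Color(v, v, v));
    }
img.saveToFile("grayscale.png");
You can then load the image into an sf::Texture and draw it in a window to display it.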

You can use the PNM image format without any libraries whatsoever (the format itself is trivial). However, it's pretty archaic, and you'll need an image viewer that supports it; IrfanView, for example, supports it on Windows.
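For example, here is a minimal sketch that writes the float array as a binary PGM (P5), assuming values in [0, 1]; the function name is mine:
#include <cstdio>

void writePGM(const char* path, const float* data, int width, int height) {
    FILE* f = std::fopen(path, "wb");
    if (!f) return;
    std::fprintf(f, "P5\n%d %d\n255\n", width, height); // binary grayscale header
    for (int i = 0; i < width * height; ++i)
        std::fputc(static_cast<unsigned char>(data[i] * 255.0f + 0.5f), f);
    std::fclose(f);
}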

Related

How to use CImg functions with pixel data?

I am using Visual Studio and looking to find a useful image processing library that will take care of basic image processing functions such as rotation so that I don't have to keep coding them manually. I came across CImg and it supports this, as well as many other useful functions, along with interpolation.
However, all the examples I've seen show CImg being used by loading and using full images. I want to work with pixel data. So my loops are the typical:
for (x = 0; x < width; x++)
    for (y = 0; y < height; y++)
I want to perform bilinear or bicubic rotation in this instance, and I see CImg supports this. It provides rotate() and get_rotate() functions, among others.
I can't find any examples online that show how to use this with pixel data. Ideally, I could simply pass it the pixel color, x, y, and interpolation method, and have it return the result.
Could anyone provide any helpful suggestions? If CImg is not the right library for this type of thing, could anyone recommend a simple, lightweight, easy-to-use one?
Thank you!
You can copy pixel data into the CImg class using iterators, and copy it back when you are done.
std::vector<uint8_t> pixels_src, pixels_dst;
size_t width, height, n_colors;
// Copy from pixel data. Note that CImg stores channels as separate planes
// (all R values, then all G, then all B), so pixels_src must be in that
// planar layout for a straight copy to be correct.
cimg_library::CImg<uint8_t> image(width, height, 1, n_colors);
std::copy(pixels_src.begin(), pixels_src.end(), image.begin());
// Do image processing
// Copy back to pixel data
pixels_dst.resize(width * height * n_colors);
std::copy(image.begin(), image.end(), pixels_dst.begin());
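From there, the rotation the question asks about is a single call; a sketch, assuming a recent CImg where the second argument selects the interpolation (0 = nearest, 1 = linear, 2 = cubic):
// Rotate by 30 degrees with cubic interpolation; the canvas grows to fit
cimg_library::CImg<uint8_t> rotated = image.get_rotate(30.0f, 2, 0);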

OpenCV + OpenGL - Get OpenGL image as OpenCV camera

I would like to grab an OpenGL image and feed it to OpenCV for analysis (as a simulator for the OpenCV algorithms), but I am not finding much information about it; all I can find is the other way around (placing an OpenCV image inside OpenGL). Could someone explain how to do so?
EDIT:
I will be simulating a camera on top of a robot, so I will render a 3D environment in real time and display it in a Qt GUI for the user. I will give the user the option to use a real webcam feed or a simulated 3D scene (that changes as the robot moves), and the OpenCV algorithm will be the same for both inputs, so the user can test his code without having to use a real robot all the time.
You are probably looking for the function glReadPixels. It will download whatever is currently displayed by OpenGL to a buffer.
glPixelStorei(GL_PACK_ALIGNMENT, 1); // rows of GL_RGB data may not be 4-byte aligned
unsigned char* buffer = new unsigned char[width*height*3];
glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, buffer);
cv::Mat image(height, width, CV_8UC3, buffer); // wraps buffer; does not copy or take ownership
cv::imshow("Show Image", image);
For OpenCV you will probably also need to flip the image vertically (OpenGL's origin is the bottom-left corner) and convert it to BGR as well.
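Those two fixes are one call each, e.g.:
cv::Mat flipped, bgr;
cv::flip(image, flipped, 0);                   // OpenGL's rows are bottom-up
cv::cvtColor(flipped, bgr, cv::COLOR_RGB2BGR); // OpenCV expects BGR channel order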
Edit: Since just using glReadPixels is not a very efficient way to do it, here is some sample code using Framebuffers and Pixel Buffer Objects to transfer the data efficiently:
How to render offscreen on OpenGL?
I did it in a previous research project. There is nothing too difficult here.
What you have to do is basically:
read the texture from OpenGL back into a pre-allocated memory buffer;
apply some geometric transform (flip the X and/or Y coordinate) to account for the possibly different coordinate frames between OpenGL and OpenCV. It's a detail, but it helps in visualization (hint: use a texture with an F letter inside to quickly find out which coordinate you need to flip!);
build an OpenCV cv::Mat object directly around your pre-allocated memory buffer, then either process it directly or copy it to some other matrix object and process that.
As indicated in another answer, reading an OpenGL image back is a simple matter of calling the glReadPixels() function.
What you get is usually 3 or 4 channels with 8 bits per channel (RGB/RGBA), though it may depend on your actual OpenGL context.
If color is important to you, you may need (but it is not required) to convert the RGB image data to the BGR format (Blue - Green - Red). For historical reasons, this is the default color channel ordering in OpenCV.
You do this with a call to cv::cvtColor(source, dest, cv::COLOR_RGB2BGR) for example.
I needed this for my research.
I took littleimp's advice, but fixing the colors and flipping the image took valuable time to figure out.
Here is what I ended up with.
#include <GL/glut.h>
#include <opencv2/opencv.hpp>
using namespace cv;

typedef Mat Image;

typedef struct {
    int width;
    int height;
    char* title;
    float field_of_view_angle;
    float z_near;
    float z_far;
} glutWindow;

glutWindow win;

Image glutTakeCVImage() {
    // take a picture within glut and return it formatted for use in OpenCV
    int width = win.width;
    int height = win.height;
    unsigned char* buffer = new unsigned char[width*height*3];
    glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, buffer);
    Image img(height, width, CV_8UC3, buffer); // wraps buffer without copying
    Image flipped_img;
    flip(img, flipped_img, 0); // OpenGL rows are bottom-up
    Image BGR_img;
    cvtColor(flipped_img, BGR_img, COLOR_RGB2BGR);
    delete[] buffer; // flip/cvtColor made their own copies, so the raw buffer can go
    return BGR_img;
}
I hope someone finds this useful.

Difficulties while using SDL_CreateRGBSurface to create a new SDL_Surface

I'm creating a basic console-based image processing tool in C++ and have thus far found creating functions that operate on RGB values super easy!
What I'd like to do now is create functions for blurring, sharpening and resizing the image. I know that in order to do this, I have to map the new pixels to a new image. I am having a bit of a problem using this built-in SDL function to create a new blank surface onto which I intend to map the new pixels:
SDL_Surface *SDL_CreateRGBSurface(Uint32 flags, int width, int height, int depth, Uint32 Rmask, Uint32 Gmask, Uint32 Bmask, Uint32 Amask);
The rest of my program uses Uint8* rather than Uint32, so I'm not sure how this will affect proceedings. Also, I'm not 100% sure about all of the parameters and what they do/are used for: ie flags and depth.
Can someone give me a bit of advice on how to use this function to properly create a new SDL_Surface?
There's a code example here that you can look at: http://wiki.libsdl.org/moin.fcg/SDL_CreateRGBSurface
Specifically, the line you are looking for is:
surface = SDL_CreateRGBSurface(0,width,height,32,0,0,0,0);
The flags are used for various things, but you should be fine setting them to 0. As for the depth, this is the number of bits per pixel; in this case, 8 bits × 4 channels = 32 (including alpha).
I presume you already have the image loaded. If so, you can get the bits per pixel from the loaded image instead:
surface->format->BitsPerPixel
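To address the Uint8* concern: the 32-bit surface is still just bytes in memory, so per-byte access works as before. A sketch (x and y are placeholders for a pixel position):
SDL_Surface* dst = SDL_CreateRGBSurface(0, width, height, 32, 0, 0, 0, 0);
if (SDL_MUSTLOCK(dst)) SDL_LockSurface(dst);
Uint8* row = (Uint8*)dst->pixels + y * dst->pitch;   // start of row y (pitch = bytes per row)
Uint8* px  = row + x * dst->format->BytesPerPixel;   // pixel (x, y); 4 bytes on a 32-bit surface
// px[0..3] are the channel bytes; their order is given by the surface's masks
if (SDL_MUSTLOCK(dst)) SDL_UnlockSurface(dst);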

convert an rgb image into a matrix using C++ and Cimg library

I have this project in blind source separation where I need to represent an RGB image as a matrix using CImg, but I can't actually understand how to use CImg. I've looked through the documentation,
but there are TOO many functions and I wasn't able to tell which one to use! Really too many of them. I have never used CImg, so if anyone could explain to me what my procedure should be, please do!
I am programming in C++ and using Eclipse.
Thanks!
First, define your image:
CImg<float> img(320,200,1,3); // Define a 320x200 color image (3 channels).
Then fill it with your data:
cimg_forXYC(img,x,y,c) { // Do 3 nested loops
    img(x,y,c) = pixel_value_at(x,y,c);
}
Then you can do everything you want with it.
img.display("Display my image");
When c==0 you fill the red channel of your image, when c==1 the green one, and when c==2 the blue one. Nothing really hard.
I have experimented with a lot of image processing libraries, and CImg is probably one of the easiest to use. Look at the provided example files (folder CImg/examples/) to see how the whole thing works (particularly CImg/examples/tutorial.cpp).
Getting started with any 3rd party library, I find it useful to start with a tutorial, like this one: CImg Tutorial
Especially if you are new to C++/programming in general.
Don't get frustrated with the wealth of the interface or magnitude of code. Stick to what you are looking for and let Google be your friend.
To get you started, get acquainted with the CImg class. Then advance as your needs dictate...
If you're not forced to use CImg, I suggest DevIL; an example of working code looks like:
#include <IL/il.h>

ilInit(); // initialize DevIL once before use
ILuint image = 0;
ilGenImages(1,&image);
if(!image)
{
    // Error
}
ilBindImage(image);
if(!ilLoadImage("yourimage.png"))
{
    // Error
}
ILuint width = ilGetInteger(IL_IMAGE_WIDTH);
ILuint height = ilGetInteger(IL_IMAGE_HEIGHT);
// 4 bytes per pixel for RGBA
unsigned char* data = new unsigned char[width*height*4];
ilCopyPixels(0,0,0,width,height,1,IL_RGBA,IL_UNSIGNED_BYTE,data);
ilDeleteImages(1,&image);
image = 0;
// now you can use 'data' as a pointer to all your required data.
// You can access from data[0] up to data[ (width*height*4) - 1 ].
// First pixel's red value: data[0]
// Second pixel's green value: data[1 + (4 * 1)]
// Third pixel's alpha value: data[3 + (4 * 2)]
// Once you're done...
delete[] data;
data = 0;

OpenGL: Creating a texture atlas at run time?

So I've set up my framework as a neat little system that wraps SDL, OpenGL and Box2D together for a 2D game.
The way it works is that I create an object of the "GameObject" class and specify a source PNG, and it automatically creates an OpenGL texture and a Box2D body of the same dimensions.
Now I am worried about what happens if I start needing to render many different textures on screen.
Is it possible to load in all my sprite sheets at run time and then group them all together into one texture? If so, how? And what would be a good way to implement it (so that I wouldn't have to specify any parameters manually)?
The reason I want to do it at run time rather than ahead of time is so that I can easily load together all (or most) of the tiles, enemies, etc. of a certain level into this one texture, because every level won't have the same enemies. It'd also make the whole art creation process easier.
There are likely some libraries that already exist for creating texture atlases (optimal packing is a nontrivial problem) and converting old texture coordinates to the new ones.
However, if you want to do it yourself, you probably would do something like this:
Load all textures from disk (your "source PNG") and retrieve the raw pixel data buffer,
If necessary, convert all source textures into the same pixel format,
Create a new texture big enough to hold all the existing textures, along with a corresponding buffer to hold the pixel data
"Blit" the pixel data from the source images into the new buffer at a given offset (see below)
Create a texture as normal using the new buffer's data.
While doing this, determine the mapping from "old" texture coordinates into the "new" texture coordinates (should be a simple matter of recording the offsets for each element of the texture atlas and doing a quick transform). It would probably also be pretty easy to do it inside a pixel shader, but some profiling would be required to see if the overhead of passing the extra parameters is worth it.
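That transform is just a per-sub-image scale and offset; a sketch with hypothetical names:
// (offsetX, offsetY) = top-left texel of the sub-image inside the atlas
float u_new = (offsetX + u_old * subWidth)  / atlasWidth;
float v_new = (offsetY + v_old * subHeight) / atlasHeight;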
Obviously you also want to check to make sure you are not doing something silly like loading the same texture into the atlas twice, but that's a concern that's outside this procedure.
To "blit" (copy) from the source image to the target image you'd do something like this (assuming you're copying a 128x128 texture into a 512x512 atlas texture, starting at (128, 0) on the target):
unsigned char* source = new unsigned char[ 128 * 128 * 4 ]; // in reality, comes from your texture loader
unsigned char* target = new unsigned char[ 512 * 512 * 4 ];
int targetX = 128;
int targetY = 0;
for(int sourceY = 0; sourceY < 128; ++sourceY) {
    for(int sourceX = 0; sourceX < 128; ++sourceX) {
        int from = (sourceY * 128 * 4) + (sourceX * 4); // 4 bytes per pixel (assuming RGBA)
        int to = ((targetY + sourceY) * 512 * 4) + ((targetX + sourceX) * 4); // same format as source
        for(int channel = 0; channel < 4; ++channel) {
            target[to + channel] = source[from + channel];
        }
    }
}
This is a very simple brute force implementation: there are much faster, more succinct and more clever ways to copy an array, but the idea is that you are basically copying the contents of the source texture into the target texture at a given X and Y offset. In the end, you will have created a new texture which contains the old textures in it.
If the indexing math doesn't make sense to you, think about how a 2D array is actually indexed inside a 1D space (such as computer memory).
Please forgive any bugs. This isn't production code but instead something I wrote without checking if it compiles or runs.
Since you're using SDL, I should mention that it has a nice function that might be able to help you: SDL_BlitSurface. You can create an SDL_Surface entirely within SDL and simply use SDL_BlitSurface to copy your source surfaces into it, then convert the atlas surface into a GL texture.
It will take care of all the math, and can also do a format conversion for you on the fly.
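A rough sketch of that route (error handling omitted; sprite stands in for one of your loaded surfaces):
SDL_Surface* atlas = SDL_CreateRGBSurface(0, 512, 512, 32, 0, 0, 0, 0);
SDL_Rect dst = { 128, 0, 0, 0 };            // only x and y of dstrect are used
SDL_BlitSurface(sprite, NULL, atlas, &dst); // copies and format-converts the whole sprite
// ...repeat for the other sprites, then upload atlas->pixels with glTexImage2D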