I'm trying to use CCRenderTexture to create a heightmap to use with the Terrain class. I don't know if this is the best way to do it; I'm new to both OpenGL and cocos2d-x, so please bear with me.
auto* renderTexHeightMap = CCRenderTexture::create(width, height);
renderTexHeightMap->begin();
glRasterPos2i(0, 0);
glDrawPixels(width, height, GL_RGB, GL_FLOAT, pixelBuffer);
renderTexHeightMap->end();
renderTexHeightMap->saveToFile("heightmap.jpg", false);
I know that pixelBuffer contains the data that I want (greyscale pixel data), but whenever I call CCRenderTexture::saveToFile all I get is a black picture. What am I missing?
The render texture is rendered one frame later, so you need to call saveToFile on the next frame.
You can use a DelayTime action to wait, or any other way of deferring the call.
Here is how I do it; my code is in Lua:
local function save()
    renderTexture:saveToFile("heightmap.jpg", false)
end

-- wait one frame (any small delay works), then save the render texture
local callfunc = cc.CallFunc:create(save)
local delay = cc.DelayTime:create(0.01)
local seq = cc.Sequence:create(delay, callfunc)
node:runAction(seq)
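The same idea in C++, as a rough sketch against the code in the question (this assumes cocos2d-x 3.x style APIs and that renderTexHeightMap stays alive until the action runs, e.g. because it was added as a child; in 2.x the classes are CC-prefixed and CallFunc takes a selector instead of a lambda):

auto delay = cocos2d::DelayTime::create(0.01f);
auto save = cocos2d::CallFunc::create([renderTexHeightMap]() {
    // by now the render texture has actually been drawn, so saveToFile has something to save
    renderTexHeightMap->saveToFile("heightmap.jpg", false);
});
this->runAction(cocos2d::Sequence::create(delay, save, nullptr));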
I'm trying to build a sprite from an image. Using Image and Texture2D class and then later creating a sprite from the texture2D.
The image I am loading is 512x512 and I expected both versions of createWithTexture to behave the same, but they don't. Here is the code:
Image* image = new Image();
image->initWithImageFile(fileName);
Texture2D* texture = new Texture2D();
texture->initWithImage(image);
// If used this way, everything works as expected
Sprite* spr = Sprite::createWithTexture(texture);
// If used with a Rect, a weird result occurs.
Sprite* spr = Sprite::createWithTexture(texture, Rect(0, 0, 512, 512));
spr->setAnchorPoint(Vec2(0,0));
spr->setPosition(Vec2(0,0));
spr->setScale(1.0f,1.0f);
this->addChild(spr);
Here is the result of the first one, using a Rect:
And here is the second version, without a Rect:
Does anybody know what is happening? I need to use the method that takes the Rect because I will be creating a bunch of sprites from this image in the future.
Edit 1: After debugging both versions of the sprite, I noticed that the one created without the Rect reports a rect of 0,0,240,240 instead of the 0,0,512,512 I expected. Why 240?
Thanks in advance.
I managed to figure out what was happening. Cocos2d-x uses director->setContentScaleFactor and glview->setDesignResolutionSize to make multi-resolution/multi-device games easier. When you build the Rect to grab part (or all) of a texture, you must take the CC_CONTENT_SCALE_FACTOR() macro into account in order to get the correct target coordinates.
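For example, a rough sketch of what that could look like (names taken from the question; the division assumes the Rect is specified in points while the raw texture size is in pixels):

// hypothetical: convert a 512x512-pixel region into point coordinates
float scale = CC_CONTENT_SCALE_FACTOR();
Rect textureRect(0, 0, 512 / scale, 512 / scale);
Sprite* spr = Sprite::createWithTexture(texture, textureRect);

A scale factor of roughly 512/240 would also be consistent with the 0,0,240,240 rect mentioned in the edit.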
This can be checked at this link: http://www.cocos2d-x.org/wiki/Multi_resolution_support
Cheers.
If your vecSize is bigger than the image's real size, the image will be distorted.
So if you don't know the image's real size, don't set it.
I would like to grab an OpenGL image and feed it to OpenCV for analysis (as a simulator for the OpenCV algorithms), but I am not finding much information about it; all I can find is the other way around (placing an OpenCV image inside OpenGL). Could someone explain how to do so?
EDIT:
I will be simulating a camera on top of a robot, so I will render a 3D environment in real time and display it in a Qt GUI for the user. The user will have the option to use a real webcam feed or a simulated 3D scene (which changes as the robot moves), and the OpenCV algorithm will be the same for both inputs, so the user can test the code without having to use a real robot all the time.
You are probably looking for the function glReadPixels. It reads whatever is currently in the OpenGL framebuffer into a buffer in client memory.
unsigned char* buffer = new unsigned char[width * height * 3];
glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, buffer);
cv::Mat image(height, width, CV_8UC3, buffer);
cv::imshow("Show Image", image);
cv::waitKey(0);    // imshow needs the event loop pumped to actually display
delete[] buffer;   // cv::Mat does not take ownership of the buffer
For OpenCV you will probably also need to flip the image vertically and convert from RGB to BGR.
Edit: Since plain glReadPixels is not a very efficient way to do it, here is a question with sample code that uses framebuffers and Pixel Buffer Objects to transfer the data efficiently:
How to render offscreen on OpenGL?
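As a rough illustration of the PBO part (not taken from that link; a minimal sketch reusing width/height from the snippet above and assuming a context with PBO support, OpenGL 2.1 or later):

GLuint pbo;
glGenBuffers(1, &pbo);
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glBufferData(GL_PIXEL_PACK_BUFFER, width * height * 3, nullptr, GL_STREAM_READ);

// With a pack PBO bound, glReadPixels returns immediately; the copy happens asynchronously.
glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, 0);

// Ideally a frame later: map the buffer and wrap it in a cv::Mat without an extra copy.
unsigned char* ptr = (unsigned char*)glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
if (ptr) {
    cv::Mat image(height, width, CV_8UC3, ptr);
    // ... process or clone the image here ...
    glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
}
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);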
I did this in a previous research project. There are no major difficulties here.
What you have to do is basically:
read the texture (or framebuffer) from OpenGL into some pre-allocated memory buffer;
apply a geometric transform (flip the X and/or Y coordinate) to account for the possibly different coordinate frames between OpenGL and OpenCV. It's a detail, but it helps in visualization (hint: use a texture with an F letter inside to quickly find out which coordinate you need to flip!);
build an OpenCV cv::Mat object directly around your pre-allocated memory buffer, and then either process it directly or copy it to some other matrix object and process that.
As indicated in another answer, reading the OpenGL framebuffer is a simple matter of calling glReadPixels() (or glGetTexImage() if you want the contents of a texture object).
What you get is usually 3 or 4 channels with 8 bits per channel (RGB or RGBA), though it may depend on your actual OpenGL context.
If color is important to you, you may need (but it is not required) to convert the RGB image data to the BGR format (Blue - Green - Red). For historical reasons, this is the default color channel ordering in OpenCV.
You do this with a call to cv::cvtColor(source, dest, cv::COLOR_RGB2BGR) for example.
I needed this for my research.
I took littleimp's advice, but fixing the colors and flipping the image took valuable time to figure out.
Here is what I ended up with.
#include <opencv2/opencv.hpp>
#include <GL/glut.h>

using namespace cv;

typedef Mat Image;

typedef struct {
    int width;
    int height;
    char* title;
    float field_of_view_angle;
    float z_near;
    float z_far;
} glutWindow;

glutWindow win;

Image glutTakeCVImage() {
    // take a picture within glut and return it formatted for use in OpenCV
    int width = win.width;
    int height = win.height;
    unsigned char* buffer = new unsigned char[width * height * 3];
    glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, buffer);
    Image img(height, width, CV_8UC3, buffer);
    // OpenGL's origin is bottom-left, OpenCV's is top-left: flip vertically
    Image flipped_img;
    flip(img, flipped_img, 0);
    // OpenCV expects BGR channel ordering
    Image BGR_img;
    cvtColor(flipped_img, BGR_img, COLOR_RGB2BGR);
    delete[] buffer;   // flip/cvtColor allocated new data, so the raw buffer can go
    return BGR_img;
}
I hope someone finds this useful.
I have an array of grayscale pixel values (floats as a fraction of 1) that I need to display, and then possibly save. The values just came from computations, so I have no libraries currently installed or anything. I've been trying to figure out the CImage libraries, but can't make much sense of what I need to do to visualize this data. Any help would be appreciated!
Thank you.
One possible approach, which I've used with some success, is to use D3DX's texture functions to create a Direct3D texture and fill it. There is some overhead in starting up D3D, but it gives you multi-thread-able texture creation, built-in-ish viewing, and saving to files without much more fuss.
If you're not interested in using D3D(X), some of the specifics here won't be useful, but the generator should help figure out how to output data for any other library.
For example, assuming an existing D3D9 device pDevice and a noise generator (or other texture data source) pGen:
IDirect3DTexture9* pTexture = nullptr;
D3DXCreateTexture(pDevice, 255, 255, 0, 0, D3DFMT_R8G8B8, D3DPOOL_DEFAULT, &pTexture);
D3DXFillTexture(pTexture, &texFill, pGen);   // texFill is the generator callback below
D3DXSaveTextureToFile("texture.png", D3DXIFF_PNG, pTexture, NULL);
The generator function:
// Fill callback invoked by D3DXFillTexture once per texel.
// pTexCoord is normalized to [0,1], and pOut expects normalized color values.
VOID WINAPI texFill(
    D3DXVECTOR4* pOut,
    CONST D3DXVECTOR2* pTexCoord,
    CONST D3DXVECTOR2* pTexelSize,
    LPVOID pData)
{
    // For a prefilled array (255 floats per row, matching the texture above):
    float* pArray = (float*)pData;
    int x = (int)(pTexCoord->x * 255);
    int y = (int)(pTexCoord->y * 255);
    float initial = pArray[y * 255 + x];
    // Or, for a generator object passed as the last argument to D3DXFillTexture:
    // Generator* pGen = (Generator*)pData;
    // float initial = pGen->GetPixel(pTexCoord->x, pTexCoord->y);
    pOut->x = pOut->y = pOut->z = initial; // grayscale value, already in [0,1]
    pOut->w = 1.0f;                        // opaque alpha
}
D3DXCreateTexture: http://msdn.microsoft.com/en-us/library/windows/desktop/bb172800%28v=vs.85%29.aspx
D3DXFillTexture: http://msdn.microsoft.com/en-us/library/windows/desktop/bb172833(v=vs.85).aspx
D3DXSaveTextureToFile: http://msdn.microsoft.com/en-us/library/windows/desktop/bb205433(v=vs.85).aspx
Corresponding functions are available for volume/3D textures. Since the textures are already set up for D3D, you can simply render one onto a flat quad to view it, or use it as a source in whatever graphical application you want.
So long as your generator is thread-safe, you can run the create/fill/save in one thread per texture, and generate multiple slices or frames simultaneously.
I found that the best solution for this problem was to use the SFML library (www.sfml-dev.org). It is very simple to use, but must be compiled from source if you want to use it with VS2010.
You can use the PNM image format without any libraries whatsoever (the format itself is trivial). However, it's pretty archaic and you'll need an image viewer that supports it; IrfanView, for example, supports it on Windows.
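For example, a minimal sketch (assuming the grayscale floats are in [0,1] and stored row by row) that writes them as an 8-bit binary PGM, which PNM-capable viewers can open:

#include <cstdio>

// Write grayscale floats in [0,1] as a binary PGM (P5) file.
void savePgm(const char* path, const float* data, int width, int height) {
    FILE* f = fopen(path, "wb");
    if (!f) return;
    fprintf(f, "P5\n%d %d\n255\n", width, height);   // PGM header
    for (int i = 0; i < width * height; ++i) {
        unsigned char v = (unsigned char)(data[i] * 255.0f + 0.5f);
        fwrite(&v, 1, 1, f);
    }
    fclose(f);
}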
I'm using OpenGL in a Qt application. At some point I'm rendering to a QGLPixelBuffer. I need to get the depth buffer of the image, which I'd normally accomplish with glReadPixels(..., GL_DEPTH_COMPONENT, ...). I tried making the QGLPixelBuffer current and then using glReadPixels(), but all I get is a white image.
Here's my code
bufferCanvas->makeCurrent();
[ ...render... ]
QImage snapshot(QSize(_lastWidth, _lastHeight), QImage::Format_Indexed8);
glReadPixels(0, 0, _lastWidth, _lastHeight, GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, snapshot.bits());
snapshot.save("depth.bmp");
Anything obviously wrong with it?
Well, there is no guarantee that the underlying pixel data stored in a QImage (and obtained via its QImage::bits() function) is laid out in a way that is compatible with what OpenGL's glReadPixels() function writes.
Since you are using QGLPixelBuffer, what is wrong with QGLPixelBuffer::toImage() ?
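If it really is the depth values you need, a hedged sketch of sidestepping the QImage layout question entirely (reading into a temporary float buffer and building the QImage by hand; _lastWidth/_lastHeight are taken from the question, and std::vector/QImage headers are assumed to be included):

std::vector<float> depth(_lastWidth * _lastHeight);
glReadPixels(0, 0, _lastWidth, _lastHeight, GL_DEPTH_COMPONENT, GL_FLOAT, depth.data());

QImage snapshot(_lastWidth, _lastHeight, QImage::Format_RGB32);
for (int y = 0; y < _lastHeight; ++y) {
    for (int x = 0; x < _lastWidth; ++x) {
        // OpenGL rows start at the bottom, QImage rows at the top
        int v = int(depth[(_lastHeight - 1 - y) * _lastWidth + x] * 255.0f);
        snapshot.setPixel(x, y, qRgb(v, v, v));
    }
}
snapshot.save("depth.bmp");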
I have never used QImage directly, but I would look into the following areas:
Are you calling glClear() with the depth buffer bit before rendering and reading the image?
Does your QGLFormat have the depth buffer enabled?
Can you dump the glReadPixels() output directly and verify that it contains the expected data?
Does QImage::bits() guarantee sequential memory storage with the alignment glReadPixels() expects?
Hope this helps.
Wild guess follows.
You're creating an indexed bitmap using QImage, but you're not assigning a color table. My guess is that the default color table is making your image appear white. Try this before saving your image:
for (int i = 0; i <= 255; i++) {
    snapshot.setColor(i, qRgb(i, i, i));
}
I'm trying to set up something in SDL (in C++) where I can draw a one-pixel rectangle. I've got everything else in my code working except my second SDL_Surface, called rectangle; I'm having trouble initializing it. Here's the line where I try to initialize it:
rectangle = SDL_Surface(SDL_DOUBLEBUF | SDL_HWACCEL |
SDL_SRCALPHA | SDL_HWSURFACE,
screen->format, 1, 1, 16, NULL, clip_rect, 1);
Thank you for taking the time to read this and any answers you might choose to give.
I think the main problem you are having is that there is no SDL_Surface function. To create a new surface, use SDL_CreateRGBSurface. Be sure to call SDL_FreeSurface on the returned surface once you are done with it, or you will leak memory.
Additionally, I am not sure why you are creating a surface for the rectangle at all. A cleaner way to draw a solid-color rectangle is SDL_FillRect on the destination surface, without creating a new surface.
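For illustration, a minimal sketch of both approaches under the SDL 1.2 API (screen is assumed to be the display surface from your code; the coordinates are made up):

// Option 1: a 1x1 surface that you blit wherever you need it
SDL_Surface* rectangle = SDL_CreateRGBSurface(SDL_SWSURFACE, 1, 1,
                                              screen->format->BitsPerPixel,
                                              0, 0, 0, 0);
SDL_FillRect(rectangle, NULL, SDL_MapRGB(rectangle->format, 255, 255, 255));
// ... SDL_BlitSurface(rectangle, NULL, screen, &destRect); ...
SDL_FreeSurface(rectangle);

// Option 2 (simpler): fill a 1x1 rect directly on the screen surface
SDL_Rect px = { 10, 10, 1, 1 };   // x, y, w, h
SDL_FillRect(screen, &px, SDL_MapRGB(screen->format, 255, 255, 255));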