My screenshot file can't be opened; Windows Photo Viewer on the desktop says it "appears to be damaged, corrupted, or is too large." However, I am able to see the texture created by my getFullTexture() function in the game. Can anyone help?
My code is as follows:
public static Texture getFullTexture() {
    if (Gdx.app.getGraphics().getGLVersion().isVersionEqualToOrHigher(2, 0)) {
        Texture texture = new Texture(Gdx.graphics.getWidth(), Gdx.graphics.getHeight(), Pixmap.Format.RGB888);
        Gdx.gl.glEnable(GL20.GL_TEXTURE_2D);
        Gdx.gl.glActiveTexture(GL20.GL_TEXTURE0);
        texture.bind();
        // Copy the current framebuffer contents into the bound texture.
        Gdx.gl.glCopyTexImage2D(GL20.GL_TEXTURE_2D, 0, GL20.GL_RGB, 0, 0,
                Constant.REAL_DEVICE_WIDTH, Constant.REAL_DEVICE_HEIGHT, 0);
        Gdx.gl.glDisable(GL20.GL_TEXTURE_2D);
        return texture;
    }
    return null; // no GL 2.0 support
}
public static void saveImage(Texture photoTexture) {
    try {
        if (!photoTexture.getTextureData().isPrepared()) {
            photoTexture.getTextureData().prepare();
        }
        Pixmap pixmap = photoTexture.getTextureData().consumePixmap();
        // Find a file name that doesn't exist yet; use .png to match PixmapIO.writePNG.
        FileHandle fh;
        int counter = 0;
        do {
            fh = Gdx.files.external("test" + counter++ + ".png");
        } while (fh.exists());
        PixmapIO.writePNG(fh, pixmap);
        pixmap.dispose();
    } catch (Exception e) {
        Gdx.app.error("ScreenShotFactory", "Failed to save image", e);
    }
}
I tried my saveImage() function with other textures and got no errors; it only fails with the texture from getFullTexture().
This is how I call it:
ScreenShotFactory.saveImage(getFullTexture());
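For what it's worth, libGDX can also read the framebuffer straight back into a Pixmap (ScreenUtils.getFrameBufferPixmap), which avoids the Texture round trip entirely. The underlying GL call is glReadPixels; below is a minimal sketch of it in plain C++/OpenGL ES 2.0 (outside libGDX, the function name is illustrative, and a current GL context is assumed):
#include <GLES2/gl2.h>
#include <vector>

// Read the current framebuffer into CPU memory. GLES 2.0 guarantees that
// glReadPixels supports the GL_RGBA / GL_UNSIGNED_BYTE combination.
std::vector<unsigned char> readFramebuffer(int width, int height)
{
    std::vector<unsigned char> pixels(static_cast<size_t>(width) * height * 4);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
    return pixels; // rows come back bottom-up, per GL convention
}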
I just implemented an SDL_Renderer in my engine:
state_t init_rend(window_t *context, flag_t flags) {
    rend.renderer = NULL;
    rend.renderer = SDL_CreateRenderer(context, -1, flags);
    rend.index = -1;
    if (rend.renderer != NULL) {
        return TRUE;
    } else {
        return FALSE;
    }
}
In my client/test app:
// Init Base2D game
init_env(VIDEO|AUDIO);
// Init Display display
init_disp(640, 480, "Display", RESIZABLE|VISIBLE, make_color(255, 255, 255, 255));
// Init Renderer renderer
init_rend(display->window, SOFTWARE);
// Game Loop
state_t status;
while (TRUE) {
    update();
    status = listen();
    if (!status) {
        break;
    }
    /* User Event Handles */
}
And I could handle window resizing successfully with:
void resize_window() {
    printf("I was here!\n");
    SDL_FreeSurface(display->container);
    printf("Now I am here\n");
    display->container = SDL_GetWindowSurface(display->window);
    SDL_FillRect(
        display->container,
        NULL,
        SDL_MapRGBA(
            display->container->format,
            get_red(),
            get_green(),
            get_blue(),
            get_alpha()
        )
    );
}
However, since I implemented the renderer, whenever I attempt to resize my display it segfaults when calling SDL_FreeSurface(display->container).
As I have mentioned, the resizing worked fine until I implemented a renderer.
Why is this happening?
Following the link provided by user keltar, it seems the way to go with SDL2 is to use a renderer for drawing to the window instead of the old SDL1 method of using a surface.
I did just that: I removed the surface code and used a renderer only, and the code works without problems.
Thank you!
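For reference, the renderer-only resize path looks roughly like this (a sketch; the function and parameter names are illustrative, not from the engine above):
/* No window surface to free or re-fetch: the renderer tracks the window
   size, so a resize only needs a repaint with the background colour. */
void resize_window_with_renderer(SDL_Renderer *renderer) {
    SDL_SetRenderDrawColor(renderer, 255, 255, 255, 255);
    SDL_RenderClear(renderer);
    SDL_RenderPresent(renderer);
}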
I'm trying to render an OSG scene into an image in my Qt program, following the SnapImageDrawCallback example (https://www.mail-archive.com/osg-users@lists.openscenegraph.org/msg45360.html).
class SnapImageDrawCallback : public osg::CameraNode::DrawCallback {
public:
    SnapImageDrawCallback()
    {
        _snapImageOnNextFrame = false;
    }

    void setFileName(const std::string& filename) { _filename = filename; }
    const std::string& getFileName() const { return _filename; }
    void setSnapImageOnNextFrame(bool flag) { _snapImageOnNextFrame = flag; }
    bool getSnapImageOnNextFrame() const { return _snapImageOnNextFrame; }

    virtual void operator () (const osg::CameraNode& camera) const
    {
        if (!_snapImageOnNextFrame) return;

        int x, y, width, height;
        x = camera.getViewport()->x();
        y = camera.getViewport()->y();
        width = camera.getViewport()->width();
        height = camera.getViewport()->height();

        osg::ref_ptr<osg::Image> image = new osg::Image;
        image->readPixels(x, y, width, height, GL_RGB, GL_UNSIGNED_BYTE);

        if (osgDB::writeImageFile(*image, _filename))
        {
            std::cout << "Saved screen image to `" << _filename << "`" << std::endl;
        }

        _snapImageOnNextFrame = false;
    }

protected:
    std::string _filename;
    mutable bool _snapImageOnNextFrame;
};
I set this as the FinalDrawCallback on the osgViewer::Viewer's camera, but I get a blank image, and the warning "Warning: detected OpenGL error 'invalid operation' at start of State::apply()" when image->readPixels is invoked. My osgViewer::Viewer is embedded in a QQuickFramebufferObject. Can anyone give me some suggestions?
I'm not sure I can give you the right pointer; you should provide more details about your setup and what you're after.
As a general note, if you're trying to render with OSG into a QtQuick widget, the best approach is to have OSG render to an FBO in a separate shared GL context and copy the FBO contents back to the QtQuick widget.
I tested this approach some time ago; see the code here:
https://github.com/rickyviking/qmlosg
Another similar project here: https://github.com/podsvirov/osgqtquick
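For a rough idea of the render-to-FBO integration, here is a minimal sketch assuming Qt 5's QQuickFramebufferObject (which the question already uses); the class and member names are illustrative, not taken from either project, and the QQuickFramebufferObject item that creates this renderer is omitted:
#include <QQuickFramebufferObject>
#include <QOpenGLFramebufferObject>
#include <osgViewer/Viewer>

// Qt Quick supplies and binds the FBO before render(); OSG then draws one
// frame into it.
class OsgFboRenderer : public QQuickFramebufferObject::Renderer {
public:
    explicit OsgFboRenderer(osgViewer::Viewer* viewer) : _viewer(viewer) {}

    void render() override {
        _viewer->frame();   // draw into the FBO Qt Quick has bound
        update();           // schedule the next frame
    }

    QOpenGLFramebufferObject* createFramebufferObject(const QSize& size) override {
        QOpenGLFramebufferObjectFormat format;
        format.setAttachment(QOpenGLFramebufferObject::CombinedDepthStencil);
        return new QOpenGLFramebufferObject(size, format);
    }

private:
    osgViewer::Viewer* _viewer;
};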
You can use a PBO (pixel buffer object); reading into a PBO lets the driver transfer the pixels asynchronously instead of stalling inside glReadPixels:
// 'ext' is the OSG GL extensions wrapper; 4 bytes per pixel assumes an RGBA read-back.
GLuint pbo = 0;
ext->glGenBuffers(1, &pbo);
ext->glBindBuffer(GL_PIXEL_PACK_BUFFER_ARB, pbo);
ext->glBufferData(GL_PIXEL_PACK_BUFFER_ARB, _width * _height * 4, 0, GL_STREAM_READ);

// With a pack buffer bound, the last argument to glReadPixels is an offset into the PBO.
glReadPixels(0, 0, _width, _height, _pixelFormat, _type, 0);

GLubyte* src = (GLubyte*)ext->glMapBuffer(GL_PIXEL_PACK_BUFFER_ARB, GL_READ_ONLY_ARB);
if (src)
{
    memcpy(image->data(), src, _width * _height * 4);
    ext->glUnmapBuffer(GL_PIXEL_PACK_BUFFER_ARB);
}
ext->glBindBuffer(GL_PIXEL_PACK_BUFFER_ARB, 0);
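The asynchrony only pays off if you don't immediately map the buffer you just filled. A common refinement, sketched below under the same assumptions ('frame' is a frame counter; this is illustrative, not from the original answer), is to ping-pong two PBOs so each glMapBuffer touches the transfer started on the previous frame:
// Allocate two PBOs once:
GLuint pbos[2];
ext->glGenBuffers(2, pbos);
for (int i = 0; i < 2; ++i)
{
    ext->glBindBuffer(GL_PIXEL_PACK_BUFFER_ARB, pbos[i]);
    ext->glBufferData(GL_PIXEL_PACK_BUFFER_ARB, _width * _height * 4, 0, GL_STREAM_READ);
}

// Each frame: start this frame's read-back into one PBO...
ext->glBindBuffer(GL_PIXEL_PACK_BUFFER_ARB, pbos[frame % 2]);
glReadPixels(0, 0, _width, _height, _pixelFormat, _type, 0);

// ...then map the other PBO, whose transfer was started last frame, so
// glMapBuffer no longer waits on the copy issued just above.
ext->glBindBuffer(GL_PIXEL_PACK_BUFFER_ARB, pbos[(frame + 1) % 2]);
GLubyte* prev = (GLubyte*)ext->glMapBuffer(GL_PIXEL_PACK_BUFFER_ARB, GL_READ_ONLY_ARB);
if (prev)
{
    memcpy(image->data(), prev, _width * _height * 4);
    ext->glUnmapBuffer(GL_PIXEL_PACK_BUFFER_ARB);
}
ext->glBindBuffer(GL_PIXEL_PACK_BUFFER_ARB, 0);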
I am trying to play a video with SDL. I'm using OpenCV to load the video and grab the frames; then I only need to convert each frame to an SDL_Texture* and I'm ready to draw it on the screen.
That's my problem: I'm converting the frame to an SDL_Surface*, but the conversion to SDL_Texture fails and I'm not sure why. Here is my code:
void Cutscene::play()
{
    this->onLoop();
    this->onRender();
    while (!frameMat.empty())
    {
        this->onLoop();
        this->onRender();
    }
}

void Cutscene::onLoop()
{
    video >> frameMat;
    convertCV_MatToSDL_Texture();
}

void Cutscene::onRender()
{
    Image::onDraw(GameEngine::getInstance()->getRenderer(), frameTexture);
}
void Cutscene::convertCV_MatToSDL_Texture()
{
    IplImage opencvimg2 = (IplImage)frameMat;
    IplImage* opencvimg = &opencvimg2;

    //Convert to SDL_Surface
    frameSurface = SDL_CreateRGBSurfaceFrom((void*)opencvimg->imageData,
                       opencvimg->width, opencvimg->height,
                       opencvimg->depth*opencvimg->nChannels,
                       opencvimg->widthStep,
                       0xff0000, 0x00ff00, 0x0000ff, 0);
    if (frameSurface == NULL)
    {
        SDL_Log("Couldn't convert Mat to Surface.");
        return;
    }

    //Convert to SDL_Texture
    frameTexture = SDL_CreateTextureFromSurface(
        GameEngine::getInstance()->getRenderer(), frameSurface);
    if (frameTexture == NULL)
    {
        SDL_Log("Couldn't convert Mat(converted to surface) to Texture."); //<- ERROR!!
        return;
    }

    //cvReleaseImage(&opencvimg);
    //MEMORY LEAK?? opencvimg opencvimg2
}
I've used SDL_CreateTextureFromSurface in other parts of my project, and it works there. So the question is: do you know what the problem with my conversion is? If not, is there a better way to do what I'm trying to do?
I got it to work! I think the only problem was that I had to pass the cv::Mat by reference (&) instead of by value.
Here is my code, in case someone is interested:
SDL_Texture* Cutscene::convertCV_MatToSDL_Texture(const cv::Mat &matImg)
{
    IplImage opencvimg2 = (IplImage)matImg;
    IplImage* opencvimg = &opencvimg2;

    //Convert to SDL_Surface
    frameSurface = SDL_CreateRGBSurfaceFrom(
        (void*)opencvimg->imageData,
        opencvimg->width, opencvimg->height,
        opencvimg->depth*opencvimg->nChannels,
        opencvimg->widthStep,
        0xff0000, 0x00ff00, 0x0000ff, 0);
    if (frameSurface == NULL)
    {
        SDL_Log("Couldn't convert Mat to Surface.");
        return NULL;
    }

    //Convert to SDL_Texture
    frameTexture = SDL_CreateTextureFromSurface(
        GameEngine::getInstance()->getRenderer(), frameSurface);
    // The surface is no longer needed once the texture exists.
    SDL_FreeSurface(frameSurface);
    frameSurface = NULL;
    if (frameTexture == NULL)
    {
        SDL_Log("Couldn't convert Mat(converted to surface) to Texture.");
        return NULL;
    }
    SDL_Log("SUCCESS conversion");
    return frameTexture;
}
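Calling it per frame then looks something like this (a sketch reusing the member names from the question; note it destroys the previous frame's texture before replacing it, which the original code never did):
void Cutscene::onLoop()
{
    video >> frameMat;
    if (!frameMat.empty())
    {
        if (frameTexture != NULL)
            SDL_DestroyTexture(frameTexture); // don't leak last frame's texture
        frameTexture = convertCV_MatToSDL_Texture(frameMat);
    }
}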
Here is another way without IplImage:
cv::Mat m ...;

// I'm using SDL_TEXTUREACCESS_STREAMING because it's for a video player; you
// should pick whatever suits you most: https://wiki.libsdl.org/SDL_TextureAccess
// Remember to pick the right SDL_PIXELFORMAT_* !
SDL_Texture* tex = SDL_CreateTexture(
    ren, SDL_PIXELFORMAT_BGR24, SDL_TEXTUREACCESS_STREAMING, m.cols, m.rows);
SDL_UpdateTexture(tex, NULL, (void*)m.data, m.step1());

// do stuff with texture
SDL_RenderClear(...);
SDL_RenderCopy(...);
SDL_RenderPresent(...);

// cleanup (only after you're done displaying; you can call UpdateTexture
// repeatedly without destroying the texture)
SDL_DestroyTexture(tex);
I prefer this to the create-surface method because you don't need to free the surface, and it is more flexible: you can, for example, update the texture easily instead of destroying and re-creating it. I will also note that I could not combine the two approaches, i.e. create the texture with SDL_CreateRGBSurfaceFrom and then later update it; that resulted in gray stripes and a messed-up image.
EDIT: SOLVED
The problem was that I was calling the render-state functions I needed for alpha blending outside of the Sprite->Begin() and Sprite->End() code block.
I am creating my own 2D engine with DirectX 9.0. I am using sprites with corresponding sprite sheets to draw them. Now the problem is: if I set my blending to D3DXSPRITE_SORT_TEXTURE, I can see the texture without any problems (including transformation matrices), but if I set it to D3DXSPRITE_ALPHABLEND, the sprite won't display. I've tried several things: SetRenderState, changing the image format from .png to .tga, adding an alpha channel to the image with a black background, using an image from a 2D blending example, changing the D3DFMT_ parameter of my D3DManager, etc.
I tried searching for an answer here but didn't find any related to my question.
Here's some of my code which might be relevant:
D3DManager.cpp
parameters.BackBufferWidth = w; //Change Direct3D renderer size
parameters.BackBufferHeight = h;
parameters.BackBufferFormat = D3DFMT_UNKNOWN; //Colors
parameters.BackBufferCount = 1; //The amount of buffers to use
parameters.MultiSampleType = D3DMULTISAMPLE_NONE; //Anti-aliasing quality
parameters.MultiSampleQuality = 0;
parameters.SwapEffect = D3DSWAPEFFECT_DISCARD;
parameters.hDeviceWindow = window; //The window to tie the buffer to
parameters.Windowed = true; //Window mode, true or false
parameters.EnableAutoDepthStencil = NULL;
parameters.Flags = NULL; //Advanced flags
parameters.FullScreen_RefreshRateInHz = 0; //Fullscreen refresh rate, leave at 0 for auto and no risk
parameters.PresentationInterval = D3DPRESENT_INTERVAL_ONE; //How often to redraw
Sprite.cpp
void Sprite::draw(){
    D3DXVECTOR2 center2D = D3DXVECTOR2(center.x, center.y);
    D3DXVECTOR2 translation = D3DXVECTOR2(position.x, position.y); // stack object instead of a leaked 'new' each frame
    D3DXMatrixTransformation2D(&matrix, &center2D, NULL, &scale, &center2D, angle, &translation);
    sprite->SetTransform(&matrix);
    sprite->Begin(D3DXSPRITE_ALPHABLEND);
    if(!extended){
        sprite->Draw(texture, NULL, NULL, &position, 0xFFFFFF);
    }
    else{
        doAnimation();
        D3DXVECTOR3 origin(0, 0, 0);
        sprite->Draw(texture, &src, &center, &origin, color);
    }
    sprite->End();
}
Main.cpp
//Clear the scene for drawing
void renderScene(){
    d3dManager->getDevice().Clear(0, NULL, D3DCLEAR_TARGET, 0x161616, 1.0f, 0); //Clear entire backbuffer
    d3dManager->getDevice().BeginScene(); //Prepare scene for drawing
    render(); //Render everything
    d3dManager->getDevice().EndScene(); //Close off
    d3dManager->getDevice().Present(NULL, NULL, NULL, NULL); //Present everything on-screen
}
//Render everything
void render(){
    snake->draw();
}
I've got no clue at all. Any help would be appreciated.
The problem was that I was using the render-state functions I needed for alpha blending outside of the Sprite->Begin() and Sprite->End() code block.
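In other words, ID3DXSprite::Begin sets up (and captures) device state itself, so custom render states must be set between Begin() and End() or they get overridden. A sketch of the working order, assuming the question's sprite object and a 'device' pointer (what d3dManager->getDevice() returns); the blend states shown are the usual source-alpha setup, given for illustration:
sprite->Begin(D3DXSPRITE_ALPHABLEND);

// Custom blending goes *inside* Begin/End:
device->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
device->SetRenderState(D3DRS_SRCBLEND, D3DBLEND_SRCALPHA);
device->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA);

sprite->Draw(texture, NULL, NULL, &position, 0xFFFFFFFF); // 0xFFFFFFFF = full alpha in the modulating color

sprite->End();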
I'm attempting to load an image that I exported from Flash CS3 (it's a very cute face), but it loads in a weird, blueish way. Here is the code for the two files:
//main.cpp
#include <SDL/SDL.h>
#include <SDL/SDL_image.h>
#include "test.hpp"
int main(int argc, char *argv[])
{
    SDL_Init(SDL_INIT_VIDEO);
    // Activate video mode
    screen = SDL_SetVideoMode(640, 480, 32, SDL_SWSURFACE | SDL_DOUBLEBUF);
    image = IMG_Load("face.bmp");
    dest.x = 200;
    dest.y = 200;

    // Main loop
    while (Abierto)
    {
        // We draw
        Draw();
        // Events
        while (SDL_PollEvent(&event))
        {
            if (event.type == SDL_QUIT)
                Abierto = false;
        }
    }

    // We free the image
    SDL_FreeSurface(image);
    SDL_Quit();
    return 0;
}
Now the other one:
//test.hpp
SDL_Surface *image = NULL, *screen = NULL;
SDL_Rect dest;
SDL_Event event;
bool Abierto = true;
float PlaneX = 300, PlaneY = 200;
float velX = 0.1, velY = 0.1;

void Draw()
{
    Uint32 color;
    // Create a black background
    color = SDL_MapRGB(screen->format, 0, 0, 0);
    SDL_FillRect(screen, NULL, color);
    SDL_DisplayFormatAlpha(image);
    SDL_BlitSurface(image, NULL, screen, &dest);
    // Flip the working image buffer with the screen buffer
    SDL_Flip(screen);
}
I need help with this, please; I'm not that experienced with SDL. If you want to take a closer look, I uploaded the project here.
Oh, my bad, I should add: the image is 32-bit with alpha, according to Flash's export options.
According to the docs, SDL_DisplayFormatAlpha returns a new image and keeps the original intact, so try this in the first part, when you load the image:
SDL_Surface *origImage = IMG_Load("face.bmp");
image = SDL_DisplayFormatAlpha(origImage);
SDL_FreeSurface(origImage);
There is no need to call SDL_DisplayFormatAlpha each frame. Then, in the second part, just blit image without calling SDL_DisplayFormatAlpha.
UPDATE
I've just checked your picture, and it looks like it is a weird BMP. I've seen that before: the BMP format is such a mess that if you don't stick to the basics, chances are that different programs will interpret the data differently.
In your case:
display face.bmp shows correctly.
gthumb face.bmp shows nothing.
eog face.bmp says "bogus header data".
I strongly recommend using PNG files for all your game's cartoon-like pictures and JPG for all the photo-like ones.
So run (convert is ImageMagick's command-line tool):
$ convert face.bmp face.png
And use the PNG file. It will work better, and you will have a file about 20% of the size of the original.