How to display an image and let the user select a rectangle using OpenGL? - opengl

I want to write a simple application using OpenGL under Linux. I want to open an image and allow the user to interactively select a rectangle. After that, the user can save the selection to a specific location.
Could anyone give me some starting links or sample code?

From your question I take it that you think OpenGL is some kind of imaging library. This is not the case.
OpenGL is meant only for drawing pictures to the screen. It deals with neither image loading nor storing. It's also not meant for imaging operations like cropping (although that is actually quite easy to implement with OpenGL).
Regarding your question: OpenGL can be used for the "display the image" and "draw a rectangle around it" parts. Loading and saving the image, and doing the actual crop, should not be done with OpenGL.
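To make that split concrete, here is a minimal sketch (not a complete application): it uses stb_image to load the file, GLFW for the window and context, and OpenGL only to display the texture and draw a selection rectangle. The file name and the hard-coded rectangle are placeholders for your own input handling.

    // Sketch: display an image and overlay a selection rectangle.
    // Assumes GLFW and stb_image are installed; legacy fixed-function GL for brevity.
    #include <GLFW/glfw3.h>
    #define STB_IMAGE_IMPLEMENTATION
    #include "stb_image.h"

    int main() {
        glfwInit();
        GLFWwindow* win = glfwCreateWindow(640, 480, "image + selection", nullptr, nullptr);
        glfwMakeContextCurrent(win);

        // Load the image with stb_image (OpenGL itself cannot load files).
        int w, h, comp;
        unsigned char* pixels = stbi_load("input.png", &w, &h, &comp, 4);

        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
        stbi_image_free(pixels);

        while (!glfwWindowShouldClose(win)) {
            glClear(GL_COLOR_BUFFER_BIT);

            // Draw the image as a full-screen textured quad.
            // Texture v coordinates are flipped to compensate for stb_image's
            // top-to-bottom row order.
            glEnable(GL_TEXTURE_2D);
            glBegin(GL_QUADS);
            glTexCoord2f(0, 1); glVertex2f(-1, -1);
            glTexCoord2f(1, 1); glVertex2f( 1, -1);
            glTexCoord2f(1, 0); glVertex2f( 1,  1);
            glTexCoord2f(0, 0); glVertex2f(-1,  1);
            glEnd();
            glDisable(GL_TEXTURE_2D);

            // Draw a (here hard-coded) selection rectangle as a line loop;
            // in a real app its corners would come from mouse callbacks.
            glColor3f(1, 0, 0);
            glBegin(GL_LINE_LOOP);
            glVertex2f(-0.5f, -0.5f);
            glVertex2f( 0.5f, -0.5f);
            glVertex2f( 0.5f,  0.5f);
            glVertex2f(-0.5f,  0.5f);
            glEnd();
            glColor3f(1, 1, 1);

            glfwSwapBuffers(win);
            glfwPollEvents();
        }
        glfwTerminate();
    }

The actual crop and save would then be done on the CPU side (e.g. copying the selected pixel region out of the loaded buffer and writing it with an image library), not with OpenGL.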

Related

How to check if image should be flipped vertically before loading using stb_image

I'm writing a project using OpenGL, and loading the textures using stb_image.
Some of the textures are loaded flipped upside down (along the y-axis), so I use
stbi_set_flip_vertically_on_load to load them properly.
The problem is that some of the textures I load require flipping and some do not,
but of course my code flips them all.
How can I check (before loading, or at least before flipping) whether or not to flip the
image?
Short answer: always flip when loading an image from stb_image to an OpenGL texture. Longer answer: you can't know whether a user wants to flip the image themselves. As it was posed, I think your question is answered by the question Kai Burjack linked you to (Should I vertically flip the lines of an image loaded with stb_image to use in OpenGL?) because it clarifies the correct use of this feature of stb_image.
If you are going straight from an image file to an OpenGL texture, then you should always flip during import if you want the "up" of the imported texture to match what users see in their art programs. However, if you want to give users the option to load images upside down independent of how the image looks in the art program, you can totally do that, too. The catch is that the user has to tell you. There's no way to know what the user wants, and IMO artists who want their images upside-down are likely to just make them that way in their art programs anyways.
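In code, the import path described above is just one extra call before stbi_load. A minimal sketch (file name and texture parameters are illustrative, not from the question):

    // Sketch: always flip at load time when going straight from file to texture.
    #define STB_IMAGE_IMPLEMENTATION
    #include "stb_image.h"
    #include <GL/gl.h>

    GLuint loadTexture(const char* path) {
        // stb_image stores rows top-to-bottom, OpenGL expects bottom-to-top,
        // so request the flip for every file that goes straight to a texture.
        stbi_set_flip_vertically_on_load(1);

        int w, h, comp;
        unsigned char* data = stbi_load(path, &w, &h, &comp, 4);
        if (!data) return 0;

        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, data);
        stbi_image_free(data);
        return tex;
    }

If a user really wants an upside-down result, that has to come in as an explicit option; there is nothing in the file itself to detect.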

show tracked object in Video using OpenGL

I am extending an existing OpenGL project with new functionality.
I can play a video stream using OpenGL with FFMPEG.
Some objects are moving in the video stream. The coordinates of those objects are known to me.
I need to show tracking of motion for that object, like continuously drawing a point or rectangle around the object as it moves on the screen.
Any idea how to start with it?
Are you sure you want to use OpenGL for this task? Usually, for computer vision algorithms like motion tracking, one uses OpenCV. In that case you could simply use the drawing functions of OpenCV, as documented here.
If you are using OpenGL, you might have a look at this question, because in that case I guess you draw the frames as textures.
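If you do go the OpenCV route, a minimal sketch could look like this; the getObjectRect() helper is hypothetical and stands in for wherever your known coordinates come from, and the file name is illustrative:

    // Sketch: overlay a rectangle on each frame at the known object position.
    #include <opencv2/opencv.hpp>

    // Hypothetical helper: returns the object's rectangle for a given frame.
    cv::Rect getObjectRect(int frameIndex) {
        return cv::Rect(100 + frameIndex, 100, 80, 60);  // dummy moving box
    }

    int main() {
        cv::VideoCapture cap("input.mp4");
        cv::Mat frame;
        for (int i = 0; cap.read(frame); ++i) {
            // Draw a green, 2-pixel-thick rectangle around the tracked object.
            cv::rectangle(frame, getObjectRect(i), cv::Scalar(0, 255, 0), 2);
            cv::imshow("tracking", frame);
            if (cv::waitKey(30) == 27) break;  // ESC to quit
        }
    }

If the frames are already drawn as OpenGL textures, the equivalent is to draw a line loop on top of the textured quad each frame, using the same coordinates.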

C++ Zooming Graphical Content

I'm trying to make a program that handles graphics, and I am not quite sure how to implement zooming. I have done a zooming effect with primitive shapes such as lines and circles (with SDL_gfxPrimitives) by scaling them down, but that won't work for a picture. How would I implement zooming?
There is an SDL library that supports zooming (see the usage sketch after the component list below):
SDL2_gfx Library
The SDL_gfx library evolved out of the SDL_gfxPrimitives code, which provided basic drawing routines such as lines, circles or polygons, and SDL_rotozoom, which implemented an interpolating rotozoomer for SDL surfaces.
The current components of the SDL_gfx library are:
Graphic Primitives (SDL_gfxPrimitives.h)
Rotozoomer (SDL_rotozoom.h)
Framerate control (SDL_framerate.h)
MMX image filters (SDL_imageFilter.h)
Custom Blit functions (SDL_gfxBlitFunc.h)
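As a rough sketch of how the rotozoomer is used (assuming SDL2 with SDL2_image for loading and SDL2_gfx installed; the file name and include paths are illustrative and may differ on your system):

    // Sketch: load a picture and blit a zoomed copy of it.
    #include <SDL2/SDL.h>
    #include <SDL2/SDL_image.h>
    #include <SDL2/SDL2_rotozoom.h>

    int main() {
        SDL_Init(SDL_INIT_VIDEO);
        SDL_Window* win = SDL_CreateWindow("zoom", SDL_WINDOWPOS_CENTERED,
                                           SDL_WINDOWPOS_CENTERED, 800, 600, 0);
        SDL_Surface* screen = SDL_GetWindowSurface(win);
        SDL_Surface* picture = IMG_Load("picture.png");

        double zoom = 2.0;  // 2x magnification; values below 1.0 zoom out
        // zoomSurface returns a new, scaled surface; the last argument
        // enables smooth (interpolated) scaling.
        SDL_Surface* zoomed = zoomSurface(picture, zoom, zoom, SMOOTHING_ON);

        SDL_BlitSurface(zoomed, nullptr, screen, nullptr);
        SDL_UpdateWindowSurface(win);
        SDL_Delay(3000);

        SDL_FreeSurface(zoomed);
        SDL_FreeSurface(picture);
        SDL_DestroyWindow(win);
        SDL_Quit();
    }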
Your question is not specific enough to allow a specific answer that would give you what you appear to be looking for.
What I can offer you is the suggestion that you first come up with a way to represent zooming.
If you already know how to draw a picture, consider that in computer graphics, "zooming in" or "zooming out" is almost always nothing more than drawing your picture at a progressively larger or smaller size.
With that in mind, maybe you will begin to see that a reasonable way to represent the concept of zooming is with some form of Camera class that will unambiguously determine the size and location of the pictures you draw.
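A minimal sketch of that idea, with illustrative names only, might look like this:

    // Sketch: a Camera owns a zoom factor and a center point, and converts
    // world coordinates into screen coordinates before anything is drawn.
    struct Camera {
        float centerX = 0.0f, centerY = 0.0f;  // world point shown at screen center
        float zoom = 1.0f;                      // >1 zooms in, <1 zooms out
        int screenW = 800, screenH = 600;

        // World -> screen: everything drawn through this mapping scales
        // together, so "zooming" is just changing the zoom member.
        void worldToScreen(float wx, float wy, int& sx, int& sy) const {
            sx = int((wx - centerX) * zoom) + screenW / 2;
            sy = int((wy - centerY) * zoom) + screenH / 2;
        }
    };

With something like this, a mouse-wheel handler only has to adjust cam.zoom, and every shape and picture drawn through the camera scales consistently.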

Cinder: How to get a pointer to data\frame generated but never shown on screen?

There is this great library I want to use called libCinder. I looked through its docs, but I cannot figure out whether it is possible, and how, to render something without showing it first.
Say we want to create a simple random-color 640x480 canvas with 3 red, white and blue circles on it, and get an RGB/HSL/any char * pointer to the raw image data out of it without ever showing any window to the user (say we have a console application project type). I want to use such a feature for server-side live video stream generation, and for video streaming I would prefer to use ffmpeg, which is why I want a pointer to some RGB/HSV or whatever buffer with the actual image data. How can I do such a thing with libCinder?
You will have to use off-screen rendering. libcinder seems to be just a wrapper for OpenGL, as far as graphics go, so you can use OpenGL code to achieve this.
Since older OpenGL versions do not have a native mechanism for off-screen rendering, you'll have to use the framebuffer object (FBO) extension (core functionality since OpenGL 3.0). A tutorial for using it can be found here. You will have to modify renderer.cpp to use this extension's commands.
An alternative to using such an extension is to use Mesa 3D, which is an open-source implementation of OpenGL. Mesa has a software rendering engine which allows it to render into memory without using a video card. This means you don't need a video card, but on the other hand the rendering might be slow. Mesa has an example of rendering to a memory buffer at src/osdemos/ in the Demos zip file. This solution will probably require you to write a complete Renderer class, similar to Renderer2d and RendererGl, which will use Mesa's instructions instead of Windows' or macOS's.
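To make the FBO route concrete, here is a hedged sketch (it assumes an OpenGL context is already current and function pointers are loaded, e.g. via GLEW; it is not Cinder-specific code):

    // Sketch: render into an off-screen color attachment and read the pixels
    // back with glReadPixels, so nothing has to reach a visible window.
    #include <GL/glew.h>
    #include <vector>

    std::vector<unsigned char> renderOffscreen(int width, int height) {
        // Color texture that will receive the rendering.
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, nullptr);

        // Framebuffer object with the texture as its color attachment.
        GLuint fbo;
        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, tex, 0);

        glViewport(0, 0, width, height);
        glClearColor(0.2f, 0.4f, 0.8f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);
        // ... draw your circles / scene here ...

        // Read the raw RGBA pixels back; this buffer can be handed to ffmpeg.
        std::vector<unsigned char> pixels(width * height * 4);
        glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());

        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        glDeleteFramebuffers(1, &fbo);
        glDeleteTextures(1, &tex);
        return pixels;
    }

On a headless server the tricky part is usually getting the context in the first place, which is where Mesa/OSMesa, EGL, or a hidden window comes in.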

Animation with C++

Is there any way to build rich animation with C++?
I have been using OpenCV for object detection, and I want to show the detected object with rich animation. Is there an easy way to realize this?
I know Flash can be used to easily build rich animation, but can Flash be reliably integrated with C++, and how?
Also, can OpenGL help me with this? To my knowledge, OpenGL is good for 3D rendering, but I am more interested in showing 2D animations in an image, so I am not sure whether this is the right way to go.
Another question: how are those animations in augmented reality realized? What kind of libraries do they use?
Thank you in advance.
It's difficult to tell if this answer will be relevant, but depending on what sort of application you are creating, you may be able to use Simple DirectMedia Layer (SDL).
This is a cross-platform 2D and 3D (via OpenGL) media library for C, C++ and many other compatible languages.
It appears to me that you wish to produce an animated demo of your processing results. If I am wrong, let me know.
The simplest way to produce a demo of a vision algorithm is to dump the results to a distinct image file after each processed frame. After the processing session, these individual image files are used to prepare the video with e.g. mencoder. I used such a procedure to prepare this.
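A minimal sketch of that frame-dumping step, assuming OpenCV (which you already use) and an illustrative file-name pattern:

    // Sketch: write one numbered PNG per processed frame, then encode them
    // afterwards with mencoder or ffmpeg.
    #include <opencv2/opencv.hpp>
    #include <cstdio>

    void dumpFrame(const cv::Mat& annotatedFrame, int frameIndex) {
        char name[64];
        std::snprintf(name, sizeof(name), "frame_%06d.png", frameIndex);
        cv::imwrite(name, annotatedFrame);
    }

    // Afterwards, for example: ffmpeg -i frame_%06d.png -c:v libx264 demo.mp4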
Of course, your program can also produce OpenGL output. Many people dealing with 3D reconstruction do that. However, in my opinion that would be overkill for simple 2D detection. Producing Flash would be an even greater overkill.