I am making a simple game for fun and learning, using SFML for the 2D side. The game is rather simple. I'm loath to say it is a HOG (hidden object game), but I guess that is a way to get my point across quickly. Basically I am using SFML to load and display 2D still art and capture mouse events.
Anyway, I would like to add video clips to my project. All the art is pre-rendered, and, for example, if my image is of a park with a fountain, I would like a looping video of the water running so the image has some life even though it is just a still.
All I need is the ability to play videos in the window, preferably in a way compatible with SFML, but since I am still in the planning stage I can swap to something else if needed. The project will have a fixed resolution (not scalable), and I just want to load videos and play them at a certain pixel location (x, y). So if I have a 1200x720 image, I play a 100x100-pixel video on loop at a certain location to make the water of the fountain move.
I am also thinking I can load 2D sprites on top of the video, matching the background image, to do simple masking. Some formats like QuickTime can embed an alpha channel directly in the video, and if that is supported, awesome, but some planning in the set design should mean it is not really needed. Still, if it were supported, more options would open up in set design.
I am pretty good with video, as I am a 3D animator by profession, but new to programming as a learning hobby. So the format and container of the video are not really an issue, though I have been working with OGV a lot recently.
What I see it as needing is:
Load multiple videos at once
Play without any borders or anything
Play at specific locations in the window
Loop seamlessly
Allow z-depth so I can place sprites on top of it
Does anyone know where I would go to start looking into this? It seems like something that could well be covered by a library; preferably an open-source one, as this is just a for-fun project, nothing commercial.
Thanks in advance for any ideas you may have.
Related
I am trying to build a multimedia editor. It includes audio and relatively simple 2D graphics. I am using C++, and I would like as much of it as possible to be cross-platform.
I wrote audio interface classes for Android and Windows using a common API, so I have that under control for now, but I need a 2D graphics package and possibly a cross-platform GUI as well.
The big challenge is rendering the timeline. It needs to generate many rows of waveforms and intersperse them with characters and other shapes, some of which may include blends and transparencies. Or rather, the big challenge has been animating the timeline, as I often need to update it in real time. I have this working nicely using a lot of caching and shifting around of the pixels of off-screen bitmaps. If I have 50 lines on the screen and the screen is 1000 pixels wide, that translates to over 50,000 line-draw operations per frame. Actually, I use multi-segment lines that end up drawing three times as many segments. To generate each line of the audio waveform, it needs to look at a few hundred samples of audio and compute the max and min values, or maybe do an FFT to create a line of differently colored pixels if I want to offer that to the user some day. Various forms of caching let me do this with reasonable latency.
The animation side of things will include everything from moving polylines and polygons around in 2D to importing images and playing back multiple moving image sequences (video) at different, arbitrary frame rates. I don't think 3D is very useful, for now anyway.
At the moment I am using a crazy mix of GDI and GDI+ on Windows, running it all in a Win32 "thing". This is not great, as I cannot invert regions in off-screen bitmaps, and I cannot draw individual pixels quickly enough to, for instance, show a spectrogram in real time. I think these APIs were written in the 90s, so there must be something newer I can use to get better performance and cross-platform capability. I have been pulling out my remaining hair trying to figure out what to use.
I found another library on Android that lets me set pixels, and the performance actually seems a lot better, but it does not support drawing text. So I am hoping there is something else I could use for that. On Android the plan is to generate the bitmap and then blit it into an interface built with the native Android GUI. These solutions do not seem great, though the vast majority of the rest of the code can be ported without issue, being standard C++ with these horrors cleanly wrapped.
I have seen a few potential candidates: OpenGL and Vulkan seem to do 2D graphics as well as 3D, but perhaps they are much more complex than what I need.
For the GUI I looked at Qt but gave up on it (it seems to need half of my hard drive and has an incomprehensible licensing model). I recently started looking at IMGUI. They say it redraws on every frame; I don't know how that will play with my existing rendering system, or whether it would drain a phone's battery. A while ago I was able to get Visual Studio to create a cross-platform app that would run on Android, but for some reason I ditched that; perhaps I should revisit it.
For the timeline I need to draw a waveform. This could be done by drawing a lot of lines (50-150k per frame); they can just be vertical ones for the most part, they do not need fractional pixel widths or anti-aliasing, and their end points can be specified with plain integers. I also need to add some other lines, polygons, and text that does need to be anti-aliased. I may need to set a lot of pixels directly. Blends and transparencies would be nice but not essential. I also need to copy rectangular chunks of bitmap around, and I need sprites for things like the cursor; I am currently doing this by copying fragments of the bitmap on and off screen. It would also be very nice to be able to invert rectangular regions of off-screen bitmaps for showing selections. And I need to assemble all this off screen in a two- or three-buffer configuration, so I can reuse chunks of one bitmap to make the next one and present it to the user as a real-time animation. (All of this works with my GDI/GDI+ wrapper, though I have to work around the inversion problem.)
For the animation part I need to draw similar graphics primitives, though it would also be nice to draw characters at arbitrary angles and scales. As for video, if I can extract the images I guess I could blit them to the screen as needed; maybe I would need yet another package to composite them into the other parts of the frame. Further, it would be nice to be able to write the animation out in a higher-quality format, in non-real time, to make a video file of some kind. It would be nice if I did not have to wrap yet another framework to make this happen, though I can deal with it if I need to.
For the GUI, it does not have to be all that fancy. Ideally I would like two or three floating, dockable windows on the PC and a few screens on a phone. I will have to make slightly different UIs for each, but the timeline bitmap and the media-window bitmaps should be mostly reusable. I just need standard widgets, for the most part.
My needs are somewhere in between those of a game and those of a regular boring old forms app, except for the need to animate the waveform.
Does anyone have suggestions, and perhaps know these systems well enough to say whether they have a good chance of doing what I need?
I fear I would have to spend weeks learning each one just to see if they give me the capabilities I need.
Is IMGUI likely to eat the phone's battery just to make the cursor blink?
Any tips would be most welcome.
As a semester project, my group and I had to create a game. We decided to make a 2D racing game; what makes it different from a normal 2D racing game is that we use image processing to control the cars. The camera detects the user's hands, and based on their location it moves the car. So far we have been able to do most of the code, except one part.
My question is: how do we give the image (the car) functionality based on the location of the hand?
We are using OpenCV, and it has done everything so far, but as for attaching functionality to the image, we couldn't figure anything out. I heard that SDL is very good for making a 2D game like this, but I couldn't make it work with OpenCV.
I need suggestions on how to assign functionality to an image, or maybe turn the image into something else (a sprite/object) that makes it easier to control. I am open to any suggestions. I have tried doing this using only OpenCV/C++ methods, such as waitKey and mouse events, but they do not work when we need the data from the video input.
Thank you
Imagine I have a video playing. Can I have some sort of motion graphics playing over that video? Say the moving graphics are on an upper layer, with the video on a lower layer beneath it.
I am comfortable in C++ and Python, so a solution that uses either of these two would be highly appreciated.
Thank you in advance,
Rishi.
I'm not sure I understand the question correctly, but a video file is a sequence of pictures that you can extract (for instance with the OpenCV library's C++ interface) and then use wherever you want. You could play the video on the sides of an OpenGL 3D cube (shown in most OpenGL tutorials) with other 3D elements around it.
Of course you can also display it in a conventional 2D interface and draw stuff on top of it, but for this you need a graphical UI.
Is that what you meant, or am I completely lost?
Right now, what I'm trying to do is make a new GUI, essentially software using DirectX (more exactly, Direct3D), that displays streaming images from Axis IP cameras.
For the time being I figured that the flow for the entire program would be like this:
1. Get the Axis program to get streaming images
2. Pass the images to the Direct3D program.
3. Display the images on the screen.
Currently I have made a somewhat basic Direct3D app that loads and displays video frames from AVI files (for testing). I don't know how to load images directly from videos using DirectX, so I used OpenCV to save frames from the video to disk and had D3D upload them. Very slow.
Right now I have some unclear things:
1. How to get an Axis program that works in C++ (going to look up examples later, probably no big deal)
2. How to upload images directly from the Axis IP camera stream.
So guys, do you have any recommendations or suggestions on how to make my program work more efficiently? Anything just let me know.
Well, you may find it faster to use DirectShow and add a custom renderer at the far end that copies the decompressed video data directly into a Direct3D texture.
It's well worth double-buffering that texture, i.e. have texture 0 displaying and texture 1 being uploaded to, then swap the two over when a new frame is available (i.e. display texture 1 while uploading to texture 0).
This way you can decouple the video frame rate from the rendering frame rate, which makes dropped frames a little easier to handle.
I use in-place updates of Direct3D textures (using IDirect3DTexture9::LockRect) and it works very fast. Which part of your program is slow?
For capturing images from Axis cams you can use the iPSi C++ library: http://sourceforge.net/projects/ipsi/
It can be used for capturing images and control camera zoom and rotation (if available).
What is the best/easiest way to display a video (with sound!) in an application using XAudio2 and Direct3D9/10?
At the very least it needs to be able to stream potentially large videos and account for the window's aspect ratio differing from the video's (e.g. by adding letterboxing), although ideally I'd like the ability to embed the video into a 3D scene.
I could of course work out a way to load each frame into a texture, discarding/reusing the textures once rendered, and play the audio separately through XAudio2. However, as well as writing a loader for at least one format, I'd also have to deal with things like synchronising the video and audio streams, so hopefully there is an easier solution available, or even a ready-made free one with a suitable license (commercial distribution in binary form; dynamic linking is fine in the case of, say, the LGPL).
In the Windows SDK there is a DirectShow example for rendering video to a texture. It handles audio output too.
But there are limitations and I can't honestly call it easy.
Have you looked at Bink Video? It's what lots of games use for video playback. It works great, and you don't have to code all that video stuff yourself from scratch.