Are there any Autopano Server alternatives, or open-source automated panorama stitching / video panorama tools? - server-side

I haven't found any server-side tool that builds panoramas by stitching images or frames from a video. I would like an open-source alternative, but haven't found any. I just don't want to go through the hassle of developing all this on my own, and paid software is usually closed source and not very flexible.
I've seen some nifty panorama-from-video software on the iPhone and thought it would be easy to find on *nix systems, but no luck.
Any help will be appreciated. Thanks in advance.

The only option I know is to use panostart (which is part of hugin) from whatever server language you are using.
See here for more information and other command line tools that do parts of the process more specifically.
panostart only works with images, so if you want it to work with videos you would have to process them first with something like ffmpeg -i z.mov -f image2 export2%d.png to generate images to pass to panostart.
The other alternative which requires more effort is to write something that uses libpano13 and libffmpeg directly.
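For the command-line route, a rough sketch of that pipeline from a server-side script might look like the following, using Python purely as an example of "whatever server language you are using". The paths, frame rate, and panostart/make invocation are assumptions to verify against panostart --help and the hugin docs, not a tested recipe.

    # Hypothetical sketch: dump video frames with ffmpeg, then let panostart
    # (from hugin) drive the stitching. Names and flags are illustrative.
    import glob
    import os
    import subprocess

    def stitch_from_video(video_path, workdir="frames"):
        os.makedirs(workdir, exist_ok=True)

        # 1. Extract frames from the video (here one frame per second).
        subprocess.run(
            ["ffmpeg", "-i", video_path, "-r", "1", "-f", "image2",
             os.path.join(workdir, "frame%04d.png")],
            check=True,
        )

        # 2. Ask panostart to write a Makefile describing the stitching job.
        frames = sorted(glob.glob(os.path.join(workdir, "frame*.png")))
        subprocess.run(["panostart", "-o", "pano.mk"] + frames, check=True)

        # 3. Run the generated Makefile to produce the panorama.
        subprocess.run(["make", "-f", "pano.mk"], check=True)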

You can have a look at VideoStitch; a command-line interface is provided to automate the stitching.

Related

What is the path from BITMAP[+WAVE(s)] to RTSP (Twitch) via C/C++ in Windows?

So I'm trying to get a basic tool to output video/audio(s) to Twitch. I'm new to this side (AV) of programming, so I'm not even sure what to look for. I'm trying to use mainly Windows infrastructure, and third-party libraries where that isn't available.
What are the steps of getting raw bitmap and wave data into a codec, then into an RTSP client, and finally showing up on Twitch? I'm not looking for code; I'm looking for concepts I can search for, as I'm not absolutely sure what to search for. I'd rather not go through the OBS source code to figure it out, and would use that only as a last resort.
So I capture the monitor via Output Duplication, and also the system sound as one wave and the microphone as another wave. I'm trying to push this to Twitch. I know that there's Media Foundation on Windows, but I don't know how far toward streaming it can get, as I assume there is no network code integrated into it? There's also the libav* collection in FFmpeg.
What are the basic steps of sending bitmap/wave data to Twitch via any of the above libraries, or even others, as long as they work on Windows? Please don't add code; I just need a fairly short conceptual explanation and I'll take it from there. Also try to cover how bitrate and framerate get regulated (do I have to do it, or does the codec do it)?
Assume absolute noob level in this area (concept-wise not code-wise).

C++: Writing images to video file independent of installed codecs

I'm trying to save a series of images (16-bit grayscale PGM) as video. The video has to be compressed. My program has to be independent of the codecs installed on the system.
My initial idea was to use OpenCV for this; unfortunately, it depends on the codecs installed on the system (unless I'm missing something).
I feel like there should be a way to compile an encoder (H.264 or similar would be perfect) into the program, or redistribute it as a DLL with my program. I just can't find any good, up-to-date guidance/examples.
I've been swimming in the deep vast ocean of AV encoding for a couple of days and would really appreciate it if someone could point me to a right direction.
Thanks.
As Ben suggests, it would be a good idea to use an established library in your code.
FFmpeg is probably the most widely used at the moment - it can be used on the command line, through a 'wrapper' program, or the libraries it is built from can be used directly.
I think the last case sounds like the one you want - you can find documentation here:
https://trac.ffmpeg.org/wiki/Using%20libav*
Note the comment about disambiguation at the start - this is important to understand, as the Libav project and the libav* libraries (which are what you want) are different things.
and there are some notes in this answer on how to build it into a program:
FFMpeg sample program
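If going straight to the libav* APIs feels heavy, the 'wrapper' approach mentioned above is also an option: spawn an ffmpeg binary you ship alongside your program and pipe raw frames into it. Below is a rough sketch of the idea, in Python only to keep it short (the same stdin pipe works from C++ via popen); the frame size, pixel format, and output name are assumptions.

    # Sketch of the "wrapper" approach: feed raw 16-bit grayscale frames to a
    # bundled ffmpeg over stdin and let it encode H.264. Illustrative only.
    import subprocess
    import numpy as np

    width, height, fps = 640, 480, 25
    proc = subprocess.Popen(
        ["ffmpeg", "-y",
         "-f", "rawvideo", "-pix_fmt", "gray16le",    # raw 16-bit gray input
         "-s", f"{width}x{height}", "-r", str(fps),
         "-i", "-",                                   # read frames from stdin
         "-c:v", "libx264", "-pix_fmt", "yuv420p",    # encode to H.264
         "out.mp4"],
        stdin=subprocess.PIPE,
    )

    for _ in range(100):
        frame = np.zeros((height, width), dtype=np.uint16)  # placeholder frame
        proc.stdin.write(frame.tobytes())

    proc.stdin.close()
    proc.wait()

Shipping the ffmpeg binary (or a statically linked build) with the program is what keeps it independent of whatever codecs happen to be installed on the machine.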

Can I use just ffmpeg?

I have a video sharing site; it uses PHPmotion. I also tried ClipBucket, but I didn't like either of the scripts, so I decided to create my own, using Django and maybe Pinax. The other two scripts use ffmpeg and many other things like ffmpeg-PHP, MPlayer, MEncoder, flv2tool, LAME MP3 Encoder, and libogg.
I know that I won't need ffmpeg-PHP since I'm not going to be using PHP, but do I really need those other things? Can I just use ffmpeg to do all the work? I don't understand what the other tools are used for.
Yes, you can use ffmpeg to do all the work, for everything that falls within what ffmpeg supports out of the box. You only need the other tools if you want to optimise, or do something tricky to, the generated videos.
Also, NEVER run these scripts in the request/response cycle. Consider spawning a Celery task to do the encoding.
And never save uploads to the server with user-defined names or partial names.
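A minimal sketch of that advice, assuming a working Celery setup; the task name, media path, and encoding settings are made up for the example.

    # Celery task that shells out to ffmpeg and stores the result under a
    # generated name instead of the user-supplied filename. Illustrative only.
    import subprocess
    import uuid
    from pathlib import Path

    from celery import shared_task

    MEDIA_ROOT = Path("/srv/media")  # assumed location for encoded files

    @shared_task
    def encode_video(source_path):
        output_path = MEDIA_ROOT / f"{uuid.uuid4().hex}.mp4"
        subprocess.run(
            ["ffmpeg", "-i", source_path,
             "-c:v", "libx264", "-c:a", "aac", str(output_path)],
            check=True,
        )
        return str(output_path)

The upload view would then call encode_video.delay(saved_path) and return immediately, so the request/response cycle never waits on ffmpeg.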
I'd advise you to use Celery for scheduling encoding tasks.
You can find some code example here: http://code.google.com/p/365video/
It's a Django project for video that works with ffmpeg and Celery.
It can also be plugged into Pinax. Don't forget to use Celery for video encoding tasks.

Doing sophisticated image analysis with python / django

I am working on a Django project that analyzes images containing text and (1) infers whether the image needs to be rotated and (2) locates the text areas.
I am currently using PIL to do some simpler processing of these images, but I am not quite sure how I can use PIL or other libraries to perform both tasks. I was wondering if anyone has done this before and if there are libraries / APIs available to help with the development.
OpenCV is probably the most popular open source image processing library. It's C/C++, but there are Python bindings:
http://opencv.willowgarage.com/wiki/
and the python docs
http://opencv.willowgarage.com/documentation/python/index.html
I've never done OCR with it, but I'm sure it's capable.
I agree with @pastylegs that OpenCV is your best initial bet. If you need something specific to OCR, you could also look at OCRopus.
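As a concrete starting point for both sub-tasks, here is a rough sketch using the modern cv2 bindings (the links above refer to the older cv module, whose API differs); the structuring-element size and the angle handling are assumptions you will want to tune.

    # Estimate global skew and find coarse text-block candidates with OpenCV.
    # Sketch only; thresholds and kernel sizes need tuning for real images.
    import cv2
    import numpy as np

    img = cv2.imread("page.png")
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    thresh = cv2.threshold(gray, 0, 255,
                           cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]

    # (1) Skew estimate: fit a rotated rectangle around all "ink" pixels.
    # Note: the meaning of the returned angle differs between OpenCV versions.
    coords = np.column_stack(np.where(thresh > 0)).astype(np.float32)
    angle = cv2.minAreaRect(coords)[-1]
    if angle < -45:
        angle = 90 + angle

    # (2) Text areas: dilate so letters merge into blocks, then take the
    # bounding boxes of the remaining connected components (OpenCV 4.x API).
    dilated = cv2.dilate(thresh, np.ones((5, 25), np.uint8))
    contours, _ = cv2.findContours(dilated, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    boxes = [cv2.boundingRect(c) for c in contours]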

Combining separate audio and video files into one file in C++

I am working on a C++ project with OpenCV. It is a simple webcam application with basic features like capturing pictures and videos. I have already been able to save video (without audio). Since OpenCV does not support audio processing, I was wondering if there is any way I can record audio separately to a different file and later combine the two to get one video file.
While searching on the internet, I did hear something about using ffmpeg with OpenCV, but I just can't figure out how to do it exactly.
Can you guys help me? I would be very grateful. Thank you!
P.S. I have used OpenCV and Qt (for the GUI).
As you said, OpenCV doesn't deal with audio by itself. However, once you have a separate audio and video file, you can combine them using a technique called muxing. There are many, many ways to do this. I use VirtualDub for most of my muxing needs, although it is Windows only (not sure if that's a problem). I know ffmpeg is also capable of muxing (via its command-line interface), though I can't recall the exact command. There's also MPlayer and a multitude of other programs out there that can do this.
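For the ffmpeg route, the mux is a single command; here is a rough sketch, with the file names as assumptions, wrapped in Python's subprocess just for illustration (the same command runs fine from a shell or spawned from the C++ program).

    # Mux the OpenCV-written video with the separately recorded audio track.
    import subprocess

    subprocess.run(
        ["ffmpeg",
         "-i", "capture.avi",   # video saved by OpenCV (no audio)
         "-i", "capture.wav",   # audio recorded separately
         "-c:v", "copy",        # keep the video stream as-is, just remux
         "-c:a", "aac",         # encode the audio track
         "-shortest",           # stop at the shorter of the two inputs
         "combined.mp4"],
        check=True,
    )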
As far as I know, OpenCV is good for video/image processing. To support audio processing, you can use other libraries, e.g. PortAudio or Csound.