I need to use OpenCV with a uEye Ethernet camera. The problem is that I haven't been able to find any useful tips or example code.
The sample code provided with the installation is tightly tied to MFC, which is not what I want. It's really complicated to strip that out, and it was causing me a lot of problems (CWnd, Afx, dialogs...).
I would like to read some frames from the camera and record some snapshots.
You can find the whole SDK description here: https://en.ids-imaging.com/manuals-ueye-software.html
Just make an account and you can access it. The documentation is really good.
I found this document on the internet:
http://master-ivi.univ-lille1.fr/fichiers/Cours/uEye_SDK_manual_enu.pdf
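A minimal, MFC-free sketch of what I'm trying to do, based on the function names in the SDK manual (is_InitCamera, is_AllocImageMem, is_FreezeVideo) and on wrapping the driver buffer in a cv::Mat. I haven't verified this against the Ethernet model, so it may need adjusting:

    // Minimal sketch: grab one frame with the uEye API and save it via OpenCV.
    // Assumes an unpadded line pitch (width * 3 bytes); otherwise query the pitch
    // with is_GetImageMemPitch and build the cv::Mat with an explicit step.
    #include <uEye.h>
    #include <opencv2/opencv.hpp>

    int main()
    {
        HIDS hCam = 0;                                 // 0 = first available camera
        if (is_InitCamera(&hCam, NULL) != IS_SUCCESS)
            return 1;

        const int width = 1280, height = 1024;         // adjust to your sensor / AOI
        is_SetColorMode(hCam, IS_CM_BGR8_PACKED);      // 8-bit BGR matches CV_8UC3

        char* imgMem = NULL;
        int   memId  = 0;
        is_AllocImageMem(hCam, width, height, 24, &imgMem, &memId);
        is_SetImageMem(hCam, imgMem, memId);

        if (is_FreezeVideo(hCam, IS_WAIT) == IS_SUCCESS)   // capture a single frame
        {
            cv::Mat frame(height, width, CV_8UC3, imgMem); // wraps the buffer, no copy
            cv::imwrite("snapshot.png", frame);
        }

        is_FreeImageMem(hCam, imgMem, memId);
        is_ExitCamera(hCam);
        return 0;
    }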
Related
So I'm trying to build a basic tool to output video/audio to Twitch. I'm new to this side (AV) of programming, so I'm not even sure what to look for. I'm trying to use mainly Windows infrastructure, and third-party libraries where that's not available.
What are the steps of getting raw bitmap and wave data into a codec and then into an RTSP client and finally showing up on Twitch? I'm not looking for code; I'm looking for concepts I can search for, as I'm not absolutely sure what to search for. I'd rather not go through the OBS source code to figure it out, and will use that only as a last resort.
So I capture the monitor via Output Duplication, and also the system sound as one wave stream and the microphone as another. I'm trying to push this to Twitch. I know there's Media Foundation on Windows, but I don't know how far toward streaming it can get, as I assume there's no network code integrated into it? There's also the libav* collection in FFmpeg.
What are the basic steps of sending bitmap/wave data to Twitch via any of the above libraries, or even others, as long as they work on Windows? Please don't add code; I just need a not-very-long conceptual explanation and I'll take it from there. Try to also cover how bitrate and framerate get regulated (do I have to do it, or does the codec do it)?
Assume absolute noob level in this area (concept-wise not code-wise).
I'm trying to save a series of images (16 bit grayscale pgm) as video. The video has to be compressed. My program has to be independent of the codecs installed in the system.
My initial idea was to use OpenCV for this, unfortunately it depends on codecs installed in the system (unless I'm missing something).
I feel like there should be a way to compile an encoder (H264 or similar would be perfect) into the program, or redistribute it as a DLL with my program. I just can't find any good, up-to-date guidance or examples.
I've been swimming in the deep vast ocean of AV encoding for a couple of days and would really appreciate it if someone could point me to a right direction.
Thanks.
As Ben suggests, it would be a good idea to use an established library in your code.
FFmpeg is probably the most used at the moment - it can be used from the command line, through a 'wrapper' program, or by calling the libraries it is built from directly.
I think the last case sounds like the one you want - you can find documentation here:
https://trac.ffmpeg.org/wiki/Using%20libav*
Note the comment about disambiguation at the start - this is important to understand, as the Libav project and the libav* libraries (which are what you want) are different things.
There are also some notes in this answer on how to build it into a program:
FFMpeg sample program
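To give a feel for what the libav* route involves, here is a rough skeleton of the encode loop using libavcodec's send/receive API (FFmpeg 3.1 or newer) and libx264 for H.264. The pixel-format conversion from your 16-bit PGM input (libswscale) and the container muxing (libavformat) are left out, and the exact setup may need adjusting for your build:

    // Skeleton of the encode loop with libavcodec (send/receive API, FFmpeg 3.1+).
    // Pixel-format conversion (libswscale) and container muxing (libavformat) omitted.
    extern "C" {
    #include <libavcodec/avcodec.h>
    }
    #include <cstdio>

    int main()
    {
        const AVCodec* codec = avcodec_find_encoder(AV_CODEC_ID_H264);
        if (!codec) return 1;                           // libx264 not compiled in

        AVCodecContext* ctx = avcodec_alloc_context3(codec);
        ctx->width     = 640;
        ctx->height    = 480;
        ctx->time_base = AVRational{1, 25};             // 25 fps
        ctx->pix_fmt   = AV_PIX_FMT_YUV420P;            // convert the PGM frames to this
        ctx->bit_rate  = 2000000;                       // target bitrate; encoder regulates it
        if (avcodec_open2(ctx, codec, NULL) < 0) return 1;

        AVFrame* frame = av_frame_alloc();
        frame->format = ctx->pix_fmt;
        frame->width  = ctx->width;
        frame->height = ctx->height;
        av_frame_get_buffer(frame, 0);

        AVPacket* pkt = av_packet_alloc();
        FILE* out = std::fopen("out.h264", "wb");       // raw Annex-B stream, no container

        for (int i = 0; i < 100; ++i) {
            av_frame_make_writable(frame);
            // ... fill frame->data[0..2] with the converted image here ...
            frame->pts = i;

            avcodec_send_frame(ctx, frame);
            while (avcodec_receive_packet(ctx, pkt) == 0) {
                std::fwrite(pkt->data, 1, pkt->size, out);
                av_packet_unref(pkt);
            }
        }
        avcodec_send_frame(ctx, NULL);                  // flush delayed packets
        while (avcodec_receive_packet(ctx, pkt) == 0) {
            std::fwrite(pkt->data, 1, pkt->size, out);
            av_packet_unref(pkt);
        }

        std::fclose(out);
        av_packet_free(&pkt);
        av_frame_free(&frame);
        avcodec_free_context(&ctx);
        return 0;
    }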
I want to make a screen capture utility. So far I am able to capture the screen at regular intervals to get a numbered sequence of images, and now I want to encode them to a video format, preferably FLV (because of good compression and web support).
I tried ffmpeg.exe for that, but for some strange reason it didn't work
on my Vista Ultimate: only the first picture is encoded, and I don't know what happened to the rest.
Also, I would prefer doing the encoding programmatically (using a C/C++ library API, if there is one for this purpose) rather than using tools like ffmpeg.exe, and I am interested in encoding a picture sequence to video, not capturing continuous video directly.
I searched the internet; there are lots of libraries and tutorials for converting between video formats, but I didn't find anything useful for my problem.
I am not very proficient with video formats and SDK libraries; I just need a quick way to encode some pictures to video with some basic control (such as the time interval between two consecutive frames).
So can you help me with some pointers as to which library I should use and how (a code fragment and a short descriptive answer would greatly help)? Please don't recommend any .NET solution; I want to learn something from this and don't want to apply some brute-force approach to the problem.
Sorry for my English, and thanks in advance.
It appears that an .avi file can more or less directly be made of .jpg's:
An AVI file may carry audio/visual data inside the chunks in virtually any compression scheme, including Full Frame (Uncompressed), ..., Motion JPEG.
Also, something very similar has been discussed here before.
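If OpenCV is an acceptable dependency, a sketch of packing a numbered image sequence into a Motion-JPEG AVI with cv::VideoWriter could look like the following (the MJPG FOURCC still depends on the codecs your OpenCV build was compiled with, and the capture_%04d.jpg naming is just an assumption for illustration):

    // Sketch: pack a numbered image sequence into a Motion-JPEG AVI with OpenCV.
    #include <opencv2/opencv.hpp>
    #include <cstdio>

    int main()
    {
        const double fps = 10.0;                        // frame interval = 1/fps seconds
        cv::VideoWriter writer;

        for (int i = 0; ; ++i) {
            char name[64];
            std::snprintf(name, sizeof(name), "capture_%04d.jpg", i);  // hypothetical naming
            cv::Mat img = cv::imread(name);
            if (img.empty()) break;                     // stop at the first missing file

            if (!writer.isOpened())
                writer.open("capture.avi",
                            cv::VideoWriter::fourcc('M', 'J', 'P', 'G'),
                            fps, img.size());

            writer.write(img);
        }
        return 0;                                       // the writer closes the file on destruction
    }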
I need to find a video filter in order to mix multiple video streams (let's say, maximum 4).
I've found a video mixer filter from MediaLooks and it's OK, but the problem is that I'm trying to use it in a school project (for the entire semester), so the 30-day trial is kind of unacceptable.
So my question is: are you aware of a free DirectShow filter that could help? If there isn't one, it means I must write my own, and the problem there is that I don't know where to start.
If you need output to the display, you can use the VMR. If you need output to file, then I think you will need to write something. The standard solution to this is to write an allocator/presenter plugin for the VMR that allows you to get back the mixed video and then save it somewhere. This is more efficient than a fully software-only mixer filter.
I finally ended up implementing my own filter.
The VideoMixingRenderer9 (and 7) will do the trick for you. You can set the opacity and area of each video stream going into the VMR9. I suggest playing with it from within GraphEdit.
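For the opacity/area part, the relevant interfaces are IVMRFilterConfig9 (to switch the VMR9 into mixing mode) and IVMRMixerControl9 (per-stream alpha and output rectangle). A rough sketch with error handling omitted, assuming the VMR-9 filter has already been added to the graph and the streams are connected afterwards:

    // Sketch: put the VMR-9 into mixing mode and set per-stream alpha / output rect.
    // Assumes pVmr9 is the VMR-9 filter already added to the graph; SetNumberOfStreams
    // must be called before the video streams are connected to its input pins.
    #include <dshow.h>
    #include <vmr9.h>

    void ConfigureMixing(IBaseFilter* pVmr9)
    {
        IVMRFilterConfig9* pConfig = NULL;
        pVmr9->QueryInterface(IID_IVMRFilterConfig9, reinterpret_cast<void**>(&pConfig));
        pConfig->SetNumberOfStreams(4);                 // up to four inputs for this example
        pConfig->Release();

        IVMRMixerControl9* pMixer = NULL;
        pVmr9->QueryInterface(IID_IVMRMixerControl9, reinterpret_cast<void**>(&pMixer));

        // Stream 0: half transparent, placed in the top-left quadrant of the output.
        pMixer->SetAlpha(0, 0.5f);
        VMR9NormalizedRect rect = { 0.0f, 0.0f, 0.5f, 0.5f };  // coordinates in [0, 1]
        pMixer->SetOutputRect(0, &rect);

        pMixer->Release();
    }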
I would also like to suggest skipping that altogether. If you use WPF, you will get far more media capabilities, much more easily.
If you want low level DirectShow support, you can try my project, WPF Mediakit. I have a control called MediaUriElement that is similar to WPF's MediaElement.
I need to use the connected component labeling algorithm on an image in a C++ application. I can implement that myself, but I was trying to use Boost's union-find/disjoint sets implementation since it was mentioned in the union-find wiki article.
I can't figure out how to create the disjoint_sets object so that it'll work with the image data I have (unsigned shorts). What am I missing? The examples in the Boost documentation aren't making any sense to me. Do I need all the extra graph mumbo-jumbo in those examples when I have an image? Or is there already an OpenCV connected component labeling implementation? Currently we're using OpenCV 1.1pre1 and Boost 1.37.
Surprisingly, there is no CCL in OpenCV. However, there is a workaround that is described in the reference manual; see the example for cvDrawContours. When I tried to use it, I had some strange behaviour on the first and last rows and columns of the image, but I probably did something wrong.
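The workaround boils down to finding the external contours and then filling each one into a label image with its own value. Roughly, using the old C API (from memory, so check it against the OpenCV 1.1 headers):

    // Sketch: approximate CCL by filling each external contour with its own label.
    // img must be a single-channel binary IplImage; cvFindContours modifies it.
    // Labels go into an 8-bit image, so this handles at most 255 components, and
    // holes inside a component get filled along with it.
    #include <cv.h>

    IplImage* LabelComponents(IplImage* img)
    {
        IplImage* labels = cvCreateImage(cvGetSize(img), IPL_DEPTH_8U, 1);
        cvZero(labels);

        CvMemStorage* storage = cvCreateMemStorage(0);
        CvSeq* contours = 0;
        cvFindContours(img, storage, &contours, sizeof(CvContour),
                       CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);

        int label = 1;
        for (CvSeq* c = contours; c != 0; c = c->h_next, ++label)
            cvDrawContours(labels, c, cvScalarAll(label), cvScalarAll(label),
                           0, CV_FILLED);              // max_level 0, filled interior

        cvReleaseMemStorage(&storage);
        return labels;                                  // caller releases with cvReleaseImage
    }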
An alternative is to use the cvBlobs library.
We ended up writing the algorithms for CCL and Union-Find ourselves using the descriptions found on Wikipedia and elsewhere. It seemed easier and faster than adding another library to our application just for this purpose.
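For reference, the union-find part is small enough to hand-roll; a minimal sketch along the lines of the Wikipedia description (path compression plus union by size), which the second pass of the labeling can use to merge provisional labels:

    // Minimal union-find (disjoint set), as used by two-pass connected component
    // labeling to merge provisional labels that belong to the same component.
    #include <cstddef>
    #include <utility>
    #include <vector>

    class UnionFind {
    public:
        explicit UnionFind(std::size_t n) : parent(n), size(n, 1) {
            for (std::size_t i = 0; i < n; ++i) parent[i] = i;
        }
        std::size_t find(std::size_t x) {
            while (parent[x] != x) {
                parent[x] = parent[parent[x]];          // path compression (halving)
                x = parent[x];
            }
            return x;
        }
        void unite(std::size_t a, std::size_t b) {
            a = find(a);
            b = find(b);
            if (a == b) return;
            if (size[a] < size[b]) std::swap(a, b);     // union by size
            parent[b] = a;
            size[a] += size[b];
        }
    private:
        std::vector<std::size_t> parent, size;
    };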
Another possibility is to use the source code provided by Ali Rahimi; you can have a look at this.
I was able to use the Boost library's disjoint_sets for connected component labeling.
But to test it, I tried to create an image where each pixel's intensity has the same value as its label.
This led to a problem which I haven't been able to solve yet; have a look at the thread.
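The construction itself only needs two plain arrays (or vectors) used as property maps, one entry per pixel; none of the graph machinery from the Boost examples is required. A minimal sketch, with the actual neighbour scan over the image left out:

    // Sketch: boost::disjoint_sets over two plain vectors, one element per pixel.
    // Raw pointers act as the rank and parent property maps, keyed by pixel index.
    #include <boost/pending/disjoint_sets.hpp>
    #include <vector>

    int main()
    {
        const int width = 640, height = 480;
        const int n = width * height;

        std::vector<int> rank(n);
        std::vector<int> parent(n);
        boost::disjoint_sets<int*, int*> ds(&rank[0], &parent[0]);

        for (int i = 0; i < n; ++i)
            ds.make_set(i);              // every pixel starts in its own set

        // During the scan over the image you would call, for each pair of
        // touching foreground pixels p and q:
        //     ds.union_set(p, q);
        // and afterwards read the representative (provisional label) with:
        //     int label = ds.find_set(p);

        return 0;
    }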