I am new to DirectShow API.
I want to decode a media file and get uncompressed RGB video frames using DirectShow.
I noted that all such operations should be completed through a GraphBuilder. Also, every processing block is called a filter, and there are many different filters for different media files. For example, for decoding H264 we should use the "Microsoft MPEG-2 Video Decoder", for AVI files the "AVI Splitter" filter, etc.
I would like to know if there is a general way (decoder) that can handle all those file types?
I would really appreciate it if someone could point me to an example that goes from opening a local file to decoding it into uncompressed RGB frames. All the examples I found deal with window handles: they just configure the graph and call pGraph->Run(). I have also looked through the Windows SDK samples, but couldn't find anything useful.
Thanks very much in advance.
A universal DirectShow decoder is, in general, against the concept of the DirectShow API. The whole idea is that individual filters are responsible for individual tasks (esp. decoding a certain encoding or demultiplexing a certain container format). The registry of filters and Intelligent Connect let one have filters built into a chain to do certain requested processing, in particular decoding from a compressed format to 24-bit RGB for video.
From this standpoint you don't need a universal decoder, and it is not expected that such a decoder exists. However, such a decoder (or close) does exist, and it's ffdshow or one of its derivatives. Presently, you might want to look at LAVFilters, for example. They wrap FFmpeg, which itself can handle many formats, and connect it to the DirectShow API so that, as a filter, ffdshow can handle many formats/encodings.
There is no general rule to use or not use such a codec pack; in most cases you take various factors into consideration and decide what to do. If your application handles various scenarios, a good starting point into graph building would be Overview of Graph Building.
My goal is to accomplish the task using DirectShow in order to have no external dependencies. Do you know of a particular example that decompresses frames for some file type?
Your request is too broad and at the same time typical and, to some extent, fairly simple. If you spend some time playing with the GraphEdit SDK tool, or rather GraphStudioNext, which is a more powerful version of the former, you will be able to build filter graphs interactively, render media files of different types, and see what filters participate in rendering. You can accomplish the very same programmatically too, since the interactive actions basically all have matching API calls individually.
You will be able to see that specific formats are handled by different filters, and Intelligent Connect mentioned above builds chains of filters in combinations in order to satisfy the requests and get the pipeline together. A minimal programmatic equivalent is sketched below.
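For illustration, here is roughly the programmatic equivalent of GraphEdit's File > Render Media File (the file path is a placeholder; error handling omitted):

#include <dshow.h>
#pragma comment(lib, "strmiids.lib")

IGraphBuilder *pGraph = NULL;
CoInitialize(NULL);
CoCreateInstance(CLSID_FilterGraph, NULL, CLSCTX_INPROC_SERVER,
                 IID_IGraphBuilder, (void **)&pGraph);
// Intelligent Connect picks the source, splitter and decoder filters for you:
pGraph->RenderFile(L"C:\\media\\sample.avi", NULL);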
The default use case is playback, and if you want to get video rendered to 24/32-bit RGB, your course of action is pretty much similar: you are to build a graph, which just needs to terminate with something else. A more flexible, sophisticated approach, typical for advanced development, is to supply a custom video renderer filter and accept decompressed RGB frames on it.
A simple and very popular version of the solution is to use the Sample Grabber filter: initialize it to accept RGB, set up a callback on it so that your SampleCB callback method is called every time an RGB frame is decompressed, and use the Sample Grabber in the graph. (You will find really a lot of attempts to accomplish that if you search open source code and/or the web for the keywords ISampleGrabber, ISampleGrabberCB, SampleCB or BufferCB, MEDIASUBTYPE_RGB24.) A minimal sketch follows the links below.
Using the Sample Grabber
DirectShow: Examples for Using SampleGrabber for Grabbing a Frame and Building a VU Meter
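For reference, a minimal sketch of the callback part of that route (assuming the older qedit.h ISampleGrabber interfaces; graph building and error handling omitted):

#include <dshow.h>
#include <qedit.h>   // ISampleGrabber / ISampleGrabberCB (older SDKs)

class GrabberCB : public ISampleGrabberCB
{
public:
    // Called on the streaming thread for every decompressed frame.
    STDMETHODIMP SampleCB(double SampleTime, IMediaSample *pSample)
    {
        BYTE *pData = NULL;
        if (SUCCEEDED(pSample->GetPointer(&pData)))
        {
            long size = pSample->GetActualDataLength();
            // pData now holds one bottom-up RGB24 frame; copy/process it here.
        }
        return S_OK;
    }
    STDMETHODIMP BufferCB(double, BYTE *, long) { return E_NOTIMPL; }
    STDMETHODIMP QueryInterface(REFIID riid, void **ppv)
    {
        if (riid == IID_IUnknown || riid == IID_ISampleGrabberCB)
        {
            *ppv = static_cast<ISampleGrabberCB *>(this);
            return S_OK;
        }
        return E_NOINTERFACE;
    }
    // The object is assumed to outlive the graph, so ref counting is a no-op.
    STDMETHODIMP_(ULONG) AddRef()  { return 2; }
    STDMETHODIMP_(ULONG) Release() { return 1; }
};

// After adding the Sample Grabber filter to the graph:
//   AM_MEDIA_TYPE mt = {};
//   mt.majortype = MEDIATYPE_Video;
//   mt.subtype   = MEDIASUBTYPE_RGB24;
//   pGrabber->SetMediaType(&mt);          // force RGB24 on its input
//   pGrabber->SetCallback(&grabberCB, 0); // 0 = SampleCB, 1 = BufferCB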
Another more or less popular approach is to set up a playback pipeline, play a file, and read back frames from the video presenter. This is suggested in another answer to the question, is relatively easy to do, and does the job if you don't have performance requirements or a need to extract every single frame. That is, it is a good way to get a random RGB frame from the feed, but not every/all frames. See related on this:
Different approaches on getting captured video frames in DirectShow
You are looking for the VMR9 example in the DirectShow samples.
In your Windows SDK install, look for this example:
Microsoft SDKs\Windows\v7.0\Samples\multimedia\directshow\vmr9\windowless\windowless.sln
Then search for the function CaptureImage; in this method, IVMRWindowlessControl9::GetCurrentImage is exactly what you want.
This method captures a video frame in bitmap format (RGB).
Here is a copy of the CaptureImage code:
BOOL CaptureImage(LPCTSTR szFile)
{
    HRESULT hr;

    if (pWC && !g_bAudioOnly)
    {
        BYTE* lpCurrImage = NULL;

        // Read the current video frame into a byte buffer. The information
        // will be returned in a packed Windows DIB and will be allocated
        // by the VMR.
        if (SUCCEEDED(hr = pWC->GetCurrentImage(&lpCurrImage)))
        {
            BITMAPFILEHEADER hdr;
            DWORD dwSize, dwWritten;
            LPBITMAPINFOHEADER pdib = (LPBITMAPINFOHEADER) lpCurrImage;

            // Create a new file to store the bitmap data
            HANDLE hFile = CreateFile(szFile, GENERIC_WRITE, FILE_SHARE_READ, NULL,
                                      CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, 0);
            if (hFile == INVALID_HANDLE_VALUE)
            {
                CoTaskMemFree(lpCurrImage); // don't leak the VMR-allocated buffer
                return FALSE;
            }

            // Initialize the bitmap header
            dwSize = DibSize(pdib);
            hdr.bfType      = BFT_BITMAP;
            hdr.bfSize      = dwSize + sizeof(BITMAPFILEHEADER);
            hdr.bfReserved1 = 0;
            hdr.bfReserved2 = 0;
            hdr.bfOffBits   = (DWORD)sizeof(BITMAPFILEHEADER) + pdib->biSize +
                              DibPaletteSize(pdib);

            // Write the bitmap header and bitmap bits to the file
            WriteFile(hFile, (LPCVOID) &hdr, sizeof(BITMAPFILEHEADER), &dwWritten, 0);
            WriteFile(hFile, (LPCVOID) pdib, dwSize, &dwWritten, 0);

            // Close the file
            CloseHandle(hFile);

            // The app must free the image data returned from GetCurrentImage()
            CoTaskMemFree(lpCurrImage);

            // Give user feedback that the write has completed
            TCHAR szDir[MAX_PATH];
            GetCurrentDirectory(MAX_PATH, szDir);

            // Strip off the trailing slash, if it exists
            int nLength = (int) _tcslen(szDir);
            if (szDir[nLength-1] == TEXT('\\'))
                szDir[nLength-1] = TEXT('\0');

            Msg(TEXT("Captured current image to %s\\%s."), szDir, szFile);
            return TRUE;
        }
        else
        {
            Msg(TEXT("Failed to capture image! hr=0x%x"), hr);
            return FALSE;
        }
    }

    return FALSE;
}
I'm currently setting up my output context for creating an .avi like this:
avformat_alloc_output_context2(&outContext, NULL, NULL, "out.avi");
if (!outContext)
    die("Could not allocate output context");
However, the resulting video quality is very unpleasant. As such, I'd like to be able to fetch the codecs installed on the system and use one of them in avformat_alloc_output_context2.
So I guess my two questions are:
How do I create a list (array) containing the installed codecs (as above)?
How do I use one of them in the output container?
If possible, I'd also like to be able to modify output quality (0%-100%) and open the codec configuration window.
First, make a map from a string (or whatever) to AVCodecID, like this:
std::map<string, AVCodecID> _codecList;
_codecList["h264"] = AV_CODEC_ID_H264;
_codecList["mpeg4"] = AV_CODEC_ID_MPEG4;
....
Note that FFmpeg does not provide information about which codec is available in which container, so you should validate that yourself. You can reference the following link (at least it is official): https://en.wikipedia.org/wiki/Comparison_of_video_container_formats
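If you'd rather enumerate what your FFmpeg build actually provides instead of hardcoding the map, a sketch assuming FFmpeg 4.0 or newer (older versions use av_codec_next() instead of av_codec_iterate()):

#include <libavcodec/avcodec.h>
#include <stdio.h>

void listVideoEncoders(void)
{
    void *iter = NULL;
    const AVCodec *codec = NULL;
    while ((codec = av_codec_iterate(&iter)) != NULL)
    {
        // keep only encoders for video streams
        if (av_codec_is_encoder(codec) && codec->type == AVMEDIA_TYPE_VIDEO)
            printf("%s : %s\n", codec->name, codec->long_name);
    }
}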
The next thing to do is to find the encoder, by name or by AVCodecID, as in the following code:
avcodec_find_encoder_by_name("libx264");
avcodec_find_encoder(AV_CODEC_ID_H264);
Both return an AVCodec*, so you can use it when calling avformat_new_stream(), like this:
AVCodecID codec_id = (_codecList.find("h264") != _codecList.end()) ?
                     _codecList["h264"] : AV_CODEC_ID_NONE;
if (codec_id == AV_CODEC_ID_NONE)
{
    return -1;
}

AVCodec* encoder = avcodec_find_encoder(codec_id);
// or you can just get it from avcodec_find_encoder_by_name("libx264");
AVStream* newStream = avformat_new_stream(avFormatContext, encoder);
There are many things that determine video quality; x264, especially, has a lot of options. In this case, you can control it with a CRF value or a bitrate (you can't use both options at once). You set them on the AVCodecContext.
AVCodecContext* codec_ctx = newStream->codec;
codec_ctx->bit_rate = 1000000; // 1 Mbit/s
// codec_ctx->qmin = 18;
// codec_ctx->qmax = 31;
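If you go the CRF route with libx264, the x264-private options are set through av_opt_set on the codec context's priv_data (the values here are illustrative):

#include <libavutil/opt.h>

av_opt_set(codec_ctx->priv_data, "crf", "23", 0);        // 0 = lossless ... 51 = worst
av_opt_set(codec_ctx->priv_data, "preset", "medium", 0); // speed vs. compression trade-off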
Once you are done, open it with avcodec_open2:
avcodec_open2(codec_ctx, encoder, NULL);
And don't forget to close it when you release things:
avcodec_close(codec_ctx);
There is much to do when you create your own output stream. If you have deeper experience with it, I think this answer will be enough.
But if you don't have much experience with FFmpeg, you can find my full example here: https://github.com/sorrowhill/FFmpegTutorial
I have created a simple waveform generator which is connected to an AUGraph. I have reused some sample code from Apple to set the AudioStreamBasicDescription like this:
void SetCanonical(UInt32 nChannels, bool interleaved)
// note: leaves sample rate untouched
{
    mFormatID = kAudioFormatLinearPCM;
    int sampleSize = SizeOf32(AudioSampleType);
    mFormatFlags = kAudioFormatFlagsCanonical;
    mBitsPerChannel = 8 * sampleSize;
    mChannelsPerFrame = nChannels;
    mFramesPerPacket = 1;
    if (interleaved)
        mBytesPerPacket = mBytesPerFrame = nChannels * sampleSize;
    else {
        mBytesPerPacket = mBytesPerFrame = sampleSize;
        mFormatFlags |= kAudioFormatFlagIsNonInterleaved;
    }
}
In my class I call this function
mClientFormat.SetCanonical(2, true);
mClientFormat.mSampleRate = kSampleRate;
while the sample rate is
#define kSampleRate 44100.0f
The other settings are taken from the sample code as well:
// output unit
CAComponentDescription output_desc(kAudioUnitType_Output, kAudioUnitSubType_RemoteIO, kAudioUnitManufacturer_Apple);
// iPodEQ unit
CAComponentDescription eq_desc(kAudioUnitType_Effect, kAudioUnitSubType_AUiPodEQ, kAudioUnitManufacturer_Apple);
// multichannel mixer unit
CAComponentDescription mixer_desc(kAudioUnitType_Mixer, kAudioUnitSubType_MultiChannelMixer, kAudioUnitManufacturer_Apple);
Everything works fine, but the problem is that I am not getting stereo sound, and my callback function fails (bad access) when I try to reach the second buffer:
Float32 *bufferLeft  = (Float32 *)ioData->mBuffers[0].mData;
Float32 *bufferRight = (Float32 *)ioData->mBuffers[1].mData;

// Generate the samples
for (UInt32 frame = 0; frame < inNumberFrames; frame++)
{
    switch (generator.soundType) {
        case 0: //Sine
            bufferLeft[frame] = sinf(thetaLeft) * amplitude;
            bufferRight[frame] = sinf(thetaRight) * amplitude;
            break;
So it seems I am getting mono instead of stereo. The pointer bufferRight is empty, but I don't know why.
Any help will be appreciated.
I can see two possible errors. First, as @invalidname pointed out, recording in stereo probably isn't going to work on a mono device such as the iPhone. Well, it might work, but if it does, you're just going to get back dual-mono stereo streams anyway, so why bother? You might as well configure your stream to work in mono and spare yourself the CPU overhead.
The second problem is probably the source of your sound distortion. Your stream description format flags should be:
kAudioFormatFlagIsSignedInteger |
kAudioFormatFlagsNativeEndian |
kAudioFormatFlagIsPacked
Also, don't forget to set the mReserved field to 0. While the value of this field is probably ignored, it doesn't hurt to set it explicitly, just to make sure.
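For comparison, a sketch of a fully filled-out AudioStreamBasicDescription for 16-bit interleaved stereo PCM (the 44.1 kHz rate is just an example):

AudioStreamBasicDescription asbd = {0};
asbd.mSampleRate       = 44100.0;
asbd.mFormatID         = kAudioFormatLinearPCM;
asbd.mFormatFlags      = kAudioFormatFlagIsSignedInteger |
                         kAudioFormatFlagsNativeEndian |
                         kAudioFormatFlagIsPacked;
asbd.mBitsPerChannel   = 16;
asbd.mChannelsPerFrame = 2;
asbd.mFramesPerPacket  = 1;
asbd.mBytesPerFrame    = asbd.mChannelsPerFrame * (asbd.mBitsPerChannel / 8);
asbd.mBytesPerPacket   = asbd.mBytesPerFrame * asbd.mFramesPerPacket;
asbd.mReserved         = 0;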
Edit: Another more general tip for debugging audio on the iPhone -- if you are getting distortion, clipping, or other weird effects, grab the data payload from your phone and look at the recording in a wave editor. Being able to zoom down and look at the individual samples will give you a lot of clues about what's going wrong.
To do this, you need to open up the "Organizer" window, click on your phone, and then expand the little arrow next to your application (in the same place where you would normally uninstall it). Now you will see a little downward pointing arrow, and if you click it, Xcode will copy the data payload from your app to somewhere on your hard drive. If you are dumping your recordings to disk, you'll find the files extracted here.
I'm guessing the problem is that you're specifying an interleaved format but then accessing the buffers as if they were non-interleaved in your IO callback. ioData->mBuffers[1] is invalid because all the data, both left and right channels, is interleaved in ioData->mBuffers[0].mData. Check ioData->mNumberBuffers; my guess is it is set to 1. Also, verify that ioData->mBuffers[0].mNumberChannels is set to 2, which would indicate interleaved data.
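If you keep the interleaved format, the render callback writes both channels into the single buffer; a sketch (leftSample/rightSample stand in for your generator's output):

// Interleaved stereo: one buffer, samples alternating L, R, L, R, ...
// (AudioSampleType is the canonical sample type, SInt16 on iOS)
AudioSampleType *samples = (AudioSampleType *)ioData->mBuffers[0].mData;
for (UInt32 frame = 0; frame < inNumberFrames; frame++)
{
    samples[2 * frame]     = leftSample;   // channel 0
    samples[2 * frame + 1] = rightSample;  // channel 1
}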
Also check out the Core Audio Public Utility classes to help with things like setting up formats; they make it so much easier. Your code for setting up the format could be reduced to one line, and you'd be more confident it is right (though to me your format looks set up correctly, if what you want is interleaved 16-bit int):
CAStreamBasicDescription myFormat(44100.0, 2, CAStreamBasicDescription::kPCMFormatInt16, true);
Apple used to package these classes up in the SDK that was installed with Xcode, but now you need to download them here: https://developer.apple.com/library/mac/samplecode/CoreAudioUtilityClasses/Introduction/Intro.html
Anyway, it looks like the easiest fix for you is to just change the format to non-interleaved. So in your code: mClientFormat.SetCanonical(2, false);
I want to develop a virtual webcam driver that I'll pass images to from user mode, and it will display them as its webcam output.
I don't want to use a DirectShow filter with CSourceStream etc., because those don't work in some programs that don't use DirectShow for capturing the webcam image.
So I have to write a kernel-mode device driver.
Any ideas? I tried testcap from the DDK samples, but it doesn't process an image from user mode and doesn't take any input; it just displays 7 color bars as its webcam output...
Any help would be greatly appreciated.
Thanks
Thank you all!
I tried code from here:
http://tmhare.mvps.org/downloads.htm (find Capture Source Filter)
It worked well in Yahoo and MSN when I compiled it, but it crashed AIM, Internet Explorer Flash webcam, Firefox Flash webcam, and Skype... I got a crash in QueryInterface after the 8th call to it; I found that by tracing it with a lot of tricks.
Now I know it crashes on the 8th call to
HRESULT CVCamStream::QueryInterface(REFIID riid, void **ppv)
The 8th call is when it reaches the last if, I mean:
return CSourceStream::QueryInterface(riid, ppv);
It's on the 17th line of Filters.cpp.
Why do you think I'm getting this crash?
Thank you all for guiding me to the correct solution, which is DirectShow, not a driver.
There are several APIs from Microsoft which provide access to image data.
Twain: Used for single image capture from scanners, etc.
WIA: This seems to have degenerated into a single-image codec library.
VfW: A very old (Win16) API which really only covers video-file encoding/decoding, but has support for some video acquisition.
DirectShow: previously part of the DirectX SDK, currently in the Platform SDK. This is the place to go for current (general) streaming solutions.
Windows Media/Media Foundation: this seems to be geared more at video playback/re-encoding.
Manufacturer Specific Libraries: Pylon/Halcon/Imaging Control/...
DirectShow specific:
To create image acquisition devices under Windows, you have to provide either a device (driver) which implements the stream-classes interfaces (or the newer AVStream) or a user-mode COM object which is added to the VideoInputCategory enumerator.
The AVStream sample provides everything for a real image acquisition device; only the lower layer for the actual device is missing.
If you can design a device, you should make it either DCAM- or UVC-compatible. For both there are built-in drivers supplied by Windows.
How to write a software source device:
You have to create a DirectShow filter which provides at least one output pin and register it under the VideoInputCategory. There may be several interfaces certain applications require from a capture source, but these depend on the application itself. Simple applications for trying out filters are GraphEdit and AMCap, which are supplied in the Platform SDK.
Some code:
#include <InitGuid.h>
#include <streams.h>

const AMOVIESETUP_MEDIATYPE s_VideoPinType =
{
    &MEDIATYPE_Video,       // Major type
    &MEDIASUBTYPE_NULL      // Minor type
};

const AMOVIESETUP_PIN s_VideoOutputPin =
{
    L"Output",              // Pin string name
    FALSE,                  // Is it rendered
    TRUE,                   // Is it an output
    FALSE,                  // Can we have none
    FALSE,                  // Can we have many
    &CLSID_NULL,            // Connects to filter
    NULL,                   // Connects to pin
    1,                      // Number of types
    &s_VideoPinType         // Pin details
};

const AMOVIESETUP_FILTER s_Filter =
{
    &CLSID_MyFilter,        // Filter CLSID
    L"bla",                 // String name
    MERIT_DO_NOT_USE,       // Filter merit
    1,                      // Number pins
    &s_VideoOutputPin       // Pin details
};

REGFILTER2 rf2;
rf2.dwVersion = 1;
rf2.dwMerit   = MERIT_DO_NOT_USE;
rf2.cPins     = 1;
rf2.rgPins    = s_Filter.lpPin;

HRESULT hr = pFilterMapper->RegisterFilter( CLSID_MyFilter, _FriendlyName.c_str(), 0,
                                            &CLSID_VideoInputDeviceCategory,
                                            _InstanceID.c_str(), &rf2 );
if( FAILED( hr ) )
{
    return false;
}

std::wstring inputCat = GUIDToWString( CLSID_VideoInputDeviceCategory );
std::wstring regPath  = L"CLSID\\" + inputCat + L"\\Instance";

win32_utils::CRegKey hKeyInstancesDir;
LONG rval = openKey( HKEY_CLASSES_ROOT, regPath, KEY_WRITE, hKeyInstancesDir );
if( rval == ERROR_SUCCESS )
{
    win32_utils::CRegKey hKeyInstance;
    rval = createKey( hKeyInstancesDir, _InstanceID, KEY_WRITE, hKeyInstance );
    ....
_InstanceID is a GUID created for this 'virtual device' entry.
You cannot decide how other programs will call your driver. Most programs will use DirectShow. Some will use the Win 3.x-era technology VfW. Many newer programs, including Windows XP's Scanner and Camera Wizard, may call you via the WIA interface. If you do not want to implement all of that, you need to at least provide the DirectShow interface via WDM and let vfwwdm32.dll give you a VfW interface, or write your own VfW driver.
I would like to open a small video file and map every frame in memory (to apply some custom filter). I don't want to handle the video codec; I would rather let the library handle that for me.
I've tried to use DirectShow with the SampleGrabber filter (using this sample: http://msdn.microsoft.com/en-us/library/ms787867(VS.85).aspx), but I only managed to grab some frames (not every frame!). I'm quite new to video software programming; maybe I'm not using the best library, or I'm doing it wrong.
I've pasted a part of my code (mainly a modified copy/paste from the MSDN example); unfortunately it doesn't grab the first 25 frames as expected...
[...]
hr = pGrabber->SetOneShot(TRUE);
hr = pGrabber->SetBufferSamples(TRUE);

pControl->Run(); // Run the graph.
pEvent->WaitForCompletion(INFINITE, &evCode); // Wait till it's done.

// Find the required buffer size.
long cbBuffer = 0;
hr = pGrabber->GetCurrentBuffer(&cbBuffer, NULL);

for( int i = 0 ; i < 25 ; ++i )
{
    pControl->Run(); // Run the graph.
    pEvent->WaitForCompletion(INFINITE, &evCode); // Wait till it's done.

    char *pBuffer = new char[cbBuffer];
    hr = pGrabber->GetCurrentBuffer(&cbBuffer, (long*)pBuffer);

    AM_MEDIA_TYPE mt;
    hr = pGrabber->GetConnectedMediaType(&mt);
    VIDEOINFOHEADER *pVih;
    pVih = (VIDEOINFOHEADER*)mt.pbFormat;
    [...]
}
[...]
Is there somebody with video software experience who can advise me about the code, or about another, simpler library?
Thanks
Edit:
MSDN links seem not to work (see the bug)
Currently these are the most popular video frameworks available on Win32 platforms:
Video for Windows: old Windows framework coming from the age of Win95, but still widely used because it is very simple to use. Unfortunately it supports only AVI files for which the proper VfW codec has been installed.
DirectShow: standard WinXP framework; it can basically load all formats you can play with Windows Media Player. Rather difficult to use.
Ffmpeg: more precisely libavcodec and libavformat, which come with the FFmpeg open-source multimedia utility. It is extremely powerful and can read a lot of formats (almost everything you can play with VLC) even if you don't have the codec installed on the system. It's quite complicated to use, but you can always get inspired by the code of ffplay that comes shipped with it, or by other implementations in open-source software (see the decode sketch after this list). Anyway, I think it's still much easier to use than DS (and much faster). It needs to be compiled with MinGW on Windows, but all the steps are explained very well here (at the moment the link is down, hope not dead).
QuickTime: the Apple framework is not the best solution for the Windows platform, since it needs the QuickTime app to be installed and also the proper QuickTime codec for every format; it does not support many formats, but it's quite common in the professional field (so some codecs are actually available only for QuickTime). It shouldn't be too difficult to implement.
Gstreamer: the latest open-source framework. I don't know much about it; I guess it wraps over some of the other systems (but I'm not sure).
All of these frameworks have been implemented as backends in OpenCV's highgui, except for DirectShow. The default framework for Win32 OpenCV is VfW (and thus it is able only to open some AVI files); if you want to use the others you must download the CVS instead of the official release and still do some hacking on the code, and it's anyway not too complete; for example, the FFMPEG backend doesn't allow seeking in the stream.
If you want to use QuickTime with OpenCV this can help you.
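To give an idea of the libavcodec/libavformat route mentioned above, here is a sketch of the demux/decode loop with the current send/receive API (this answer's era used avcodec_decode_video2, but the structure is the same; error handling trimmed):

#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>

int decode_all_frames(const char *path)
{
    AVFormatContext *fmt = NULL;
    if (avformat_open_input(&fmt, path, NULL, NULL) < 0)
        return -1;
    avformat_find_stream_info(fmt, NULL);

    // Pick the video stream and open its decoder
    int vidx = av_find_best_stream(fmt, AVMEDIA_TYPE_VIDEO, -1, -1, NULL, 0);
    const AVCodec *dec = avcodec_find_decoder(fmt->streams[vidx]->codecpar->codec_id);
    AVCodecContext *ctx = avcodec_alloc_context3(dec);
    avcodec_parameters_to_context(ctx, fmt->streams[vidx]->codecpar);
    avcodec_open2(ctx, dec, NULL);

    AVPacket *pkt = av_packet_alloc();
    AVFrame *frame = av_frame_alloc();
    while (av_read_frame(fmt, pkt) >= 0)
    {
        if (pkt->stream_index == vidx && avcodec_send_packet(ctx, pkt) == 0)
            while (avcodec_receive_frame(ctx, frame) == 0)
            {
                // frame->data now holds one decoded picture (usually YUV);
                // run it through sws_scale if you need RGB
            }
        av_packet_unref(pkt);
    }

    av_frame_free(&frame);
    av_packet_free(&pkt);
    avcodec_free_context(&ctx);
    avformat_close_input(&fmt);
    return 0;
}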
I have used OpenCV to load video files and process them. It's also handy for many types of video processing including those useful for computer vision.
Using the "Callback" model of SampleGrabber may give you better results. See the example in Samples\C++\DirectShow\Editing\GrabBitmaps.
There's also a lot of info in Samples\C++\DirectShow\Filters\Grabber2\grabber_text.txt and readme.txt.
I know it is very tempting in C++ to get a proper breakdown of the video files and just do it yourself. But although the information is out there, it is such a long-winded process building classes to handle each file format, and making them easily alterable to take future structure changes into account, that frankly it just is not worth the effort.
Instead I recommend ffmpeg. It got a mention above that says it is difficult; it isn't. There are a lot more options than most people would need, which makes it look more difficult than it is. For the majority of operations you can just let ffmpeg work it out for itself.
For example, a file conversion:
ffmpeg -i inputFile.mp4 outputFile.avi
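And, closer to this question, dumping every frame of a video to numbered image files (the output pattern is just an example):

ffmpeg -i inputFile.mp4 frame%05d.png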
Decide right from the start that you will have ffmpeg operations run in a thread, or more precisely a thread library. But have your own thread class wrap it so that you can have your own EventArgs and methods of checking that the thread is finished. Something like:
ThreadLibManager()
{
    List<MyThreads> listOfActiveThreads;
    public AddThread(MyThreads);
}
Your thread class is something like:
class MyThread
{
    public Thread threadForThisInstance { get; set; }
    public MyFFMpegTools mpegTools { get; set; }
}
MyFFMpegTools performs many different video operations, so you want your own event args to tell your parent code precisely what type of operation has just raised an event:
class MyFfmpegArgs : EventArgs
{
    public int thisThreadID { get; set; } // Set as a new MyThread is added to the List<>
    public MyFfmpegType operationType { get; set; }
    // output paths etc. that the parent handler will need to find output files
}
enum MyFfmpegType
{
    FF_CONVERTFILE = 0, FF_CREATETHUMBNAIL, FF_EXTRACTFRAMES ...
}
Here is a small snippet of my ffmpeg tool class, this part collecting information about a video.
I put FFmpeg in a particular location, and at the start of the software running, it makes sure that it is there. For this version I have moved it to the Desktop; I am fairly sure I have written the path correctly for you (I really hate MS's special folders system, so I ignore it as much as I can).
Anyway, it is an example of using windowless ffmpeg.
// outputBuilder is a class field (assumed List<string> here) that collects
// ffmpeg's console output; ffmpeg writes its info to stderr.
private List<string> outputBuilder = new List<string>();

public string GetVideoInfo(FileInfo fi)
{
    outputBuilder.Clear();
    string strCommand = string.Concat(" -i \"", fi.FullName, "\"");
    string ffPath =
        System.Environment.GetFolderPath(Environment.SpecialFolder.Desktop) + "\\ffmpeg.exe";
    string oStr = "";
    try
    {
        Process build = new Process();
        //build.StartInfo.WorkingDirectory = @"dir";
        build.StartInfo.Arguments = strCommand;
        build.StartInfo.FileName = ffPath;
        build.StartInfo.UseShellExecute = false;
        build.StartInfo.RedirectStandardOutput = true;
        build.StartInfo.RedirectStandardError = true;
        build.StartInfo.CreateNoWindow = true;
        build.ErrorDataReceived += build_ErrorDataReceived;
        build.OutputDataReceived += build_ErrorDataReceived;
        build.EnableRaisingEvents = true;
        build.Start();
        build.BeginOutputReadLine();
        build.BeginErrorReadLine();
        build.WaitForExit();

        // Pull the text before "start" out of the "Duration: ..." line
        string findThis = "start";
        int offset = 0;
        foreach (string str in outputBuilder)
        {
            if (str.Contains("Duration"))
            {
                offset = str.IndexOf(findThis);
                oStr = str.Substring(0, offset);
            }
        }
    }
    catch
    {
        oStr = "Error collecting file information";
    }
    return oStr;
}

private void build_ErrorDataReceived(object sender, DataReceivedEventArgs e)
{
    string strMessage = e.Data;
    if (outputBuilder != null && strMessage != null)
    {
        outputBuilder.Add(string.Concat(strMessage, "\n"));
    }
}
Try using the OpenCV library. It definitely has the capabilities you require.
This guide has a section about accessing frames from a video file.
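A minimal sketch of that approach with OpenCV's C++ API (the file name is a placeholder; older versions expose the same loop through cvCaptureFromFile/cvQueryFrame):

#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture cap("video.avi");  // placeholder path
    if (!cap.isOpened())
        return 1;

    cv::Mat frame;
    while (cap.read(frame))             // returns false at end of stream
    {
        // frame is an 8-bit BGR image; apply your custom filter here
    }
    return 0;
}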
If it's for AVI files, I'd read the data from the AVI file myself and extract the frames, then use the Video Compression Manager to decompress them.
The AVI file format is very simple; see: http://msdn.microsoft.com/en-us/library/dd318187(VS.85).aspx (and use Google).
Once you have the file open, you just extract each frame and pass it to ICDecompress() to decompress it.
It seems like a lot of work but it's the most reliable way.
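If you'd rather not parse the RIFF chunks and drive ICDecompress() by hand, the AVIFile API (vfw.h / vfw32.lib) wraps both steps and hands you one packed DIB per frame; a sketch with reduced error handling:

#include <windows.h>
#include <vfw.h>
#pragma comment(lib, "vfw32.lib")

bool ProcessAviFrames(LPCTSTR szFile)
{
    AVIFileInit();

    PAVIFILE pFile = NULL;
    if (FAILED(AVIFileOpen(&pFile, szFile, OF_READ, NULL)))
    {
        AVIFileExit();
        return false;
    }

    PAVISTREAM pStream = NULL;
    if (FAILED(AVIFileGetStream(pFile, &pStream, streamtypeVIDEO, 0)))
    {
        AVIFileRelease(pFile);
        AVIFileExit();
        return false;
    }

    // NULL = decompress to a default displayable DIB format
    PGETFRAME pFrame = AVIStreamGetFrameOpen(pStream, NULL);
    if (pFrame)
    {
        LONG start = AVIStreamStart(pStream);
        LONG len   = AVIStreamLength(pStream);
        for (LONG i = start; i < start + len; ++i)
        {
            // Returns a packed DIB (BITMAPINFOHEADER followed by the bits);
            // the buffer is owned by the library and valid until the next call.
            LPBITMAPINFOHEADER lpbi = (LPBITMAPINFOHEADER)AVIStreamGetFrame(pFrame, i);
            if (lpbi)
            {
                // process the frame here
            }
        }
        AVIStreamGetFrameClose(pFrame);
    }

    AVIStreamRelease(pStream);
    AVIFileRelease(pFile);
    AVIFileExit();
    return true;
}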
If that's too much work, or if you want more than AVI files, then use ffmpeg.
OpenCV is the best solution if video in your case only needs to lead to a sequence of pictures. If you're going to do real video processing, where "ViDeo" equals "Visual Audio" (i.e. pictures plus sound), you should keep to the frameworks listed by "martjno". New Windows solutions, also for Win7, additionally include 3 possibilities:
Windows Media Foundation: successor of DirectShow; cleaned-up interface.
Windows Media Encoder 9: it is not only a program; it also ships libraries for encoding.
Windows Expression 4: successor of 2 (Windows Media Encoder 9).
The last 2 are commercial-only solutions, but the first one is free. To code against WMF, you need to install the Windows SDK.
I would recommend FFMPEG or GStreamer. Try to stay away from OpenCV unless you plan to use some functionality other than just streaming video: the library is a beefy build, and it's a pain to install from source and configure the FFMPEG/GStreamer options.