I'm writing an audio player using Microsoft Media Foundation.
I wonder: is it possible to change the playback device without re-creating the session?
IMFActivate *m_p_sink_activate;
...
m_p_sink_activate->SetString(MF_AUDIO_RENDERER_ATTRIBUTE_ENDPOINT_ID, name_device);
This doesn't take effect if the audio is already being played.
Btw, the media player provided by Microsoft.Windows.SDK.Contracts (Windows.Media.Playback.MediaPlayer) does it perfectly.
When I change m_mediaPlayer.AudioDevice, the audio stream is redirected to the assigned device immediately. So I wonder if this is also possible for MSMF.
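For reference, here is a minimal C++/WinRT sketch of that behaviour (the device ID is assumed to come from a DeviceInformation enumeration of audio render endpoints; the blocking get() is for illustration only):

#include <winrt/Windows.Media.Playback.h>
#include <winrt/Windows.Devices.Enumeration.h>

using namespace winrt;
using namespace Windows::Media::Playback;
using namespace Windows::Devices::Enumeration;

void RouteToDevice(MediaPlayer const &player, hstring const &deviceId)
{
    // Look up the render endpoint by its device-interface ID.
    DeviceInformation device =
        DeviceInformation::CreateFromIdAsync(deviceId).get();

    // Assigning AudioDevice takes effect immediately, mid-playback.
    player.AudioDevice(device);
}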
So far I have a way that can do this job:
1. Create a new topology cloned from the previous one.
2. Create a new audio renderer using MFCreateAudioRendererActivate, call SetString with the new audio endpoint ID, and add it to a topology node.
3. Add the new node to the new topology.
4. Call IMFMediaSession::SetTopology() to set the new topology and resume playback.
You can refer to the MS sample TopoEdit for details.
One side effect is that each SetTopology call causes significant memory growth.
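A minimal sketch of those steps, assuming the session and current topology are at hand (connecting the new output node to the cloned source branch, and most error handling, are elided):

#include <mfapi.h>
#include <mfidl.h>

HRESULT SwitchAudioEndpoint(IMFMediaSession *pSession,
                            IMFTopology *pCurrentTopology,
                            LPCWSTR pwszEndpointId)
{
    // Step 1: clone the existing topology.
    IMFTopology *pNewTopology = NULL;
    HRESULT hr = MFCreateTopology(&pNewTopology);
    if (SUCCEEDED(hr))
        hr = pNewTopology->CloneFrom(pCurrentTopology);

    // Step 2: create a new audio renderer bound to the requested endpoint.
    IMFActivate *pSinkActivate = NULL;
    if (SUCCEEDED(hr))
        hr = MFCreateAudioRendererActivate(&pSinkActivate);
    if (SUCCEEDED(hr))
        hr = pSinkActivate->SetString(
            MF_AUDIO_RENDERER_ATTRIBUTE_ENDPOINT_ID, pwszEndpointId);

    // Steps 2-3: put the activate into an output node and add it.
    IMFTopologyNode *pOutputNode = NULL;
    if (SUCCEEDED(hr))
        hr = MFCreateTopologyNode(MF_TOPOLOGY_OUTPUT_NODE, &pOutputNode);
    if (SUCCEEDED(hr))
        hr = pOutputNode->SetObject(pSinkActivate);
    if (SUCCEEDED(hr))
        hr = pNewTopology->AddNode(pOutputNode);

    // The new output node still has to be connected to the audio branch
    // cloned from the old topology (omitted here).

    // Step 4: set the new topology on the running session.
    if (SUCCEEDED(hr))
        hr = pSession->SetTopology(MFSESSION_SETTOPOLOGY_IMMEDIATE,
                                   pNewTopology);

    if (pOutputNode) pOutputNode->Release();
    if (pSinkActivate) pSinkActivate->Release();
    if (pNewTopology) pNewTopology->Release();
    return hr;
}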
I'm working on a VR game at the moment in UE and FMOD. We're trying to implement the Room Effects, as has been done so neatly with the Unity plugin.
We've successfully managed to create a room within a collider which seems to work with the room effects; however, we're having trouble getting more than one room in the same map/level so we can change the room effect as we walk through the level.
Has anyone managed to get room effects working in UE before, as in Unity?
At present, with the Resonance Audio FMOD plugin combined with UE, you can pass new 'RoomProperties' to the Listener plugin from C++ via a call to
setParameterData(int index, void *data, unsigned int length);
You must, however, handle detection of movement between different "rooms" yourself.
As I believe you most likely already know, you should pass a pointer to an instance of the RoomProperties struct, found here: https://github.com/resonance-audio/resonance-audio-fmod-sdk/blob/master/Plugins/include/RoomProperties.h, cast to a void pointer, with the index parameter set to 1 and the length parameter set to sizeof(RoomProperties).
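For illustration, a minimal sketch of that call, assuming listenerDsp is the FMOD::DSP instance hosting the Resonance Audio Listener plugin:

#include "fmod.hpp"
#include "RoomProperties.h"  // from the resonance-audio-fmod-sdk

void UpdateRoom(FMOD::DSP *listenerDsp, const RoomProperties &room)
{
    // Index 1 is the RoomProperties data parameter on the Listener plugin.
    listenerDsp->setParameterData(
        1, const_cast<RoomProperties *>(&room), sizeof(RoomProperties));
}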
You can create multiple Room Effect 'zones' using Unreal's Audio Volumes.
Add a new Audio Volume, go to its Details panel and open the Reverb tab. You should see a Reverb Plugin Effect drop-down list. Locate the Create New Asset section and select Resonance Audio Reverb Plugin Preset to create a new reverb preset.
Then, in your new Resonance Audio Reverb Plugin Preset, you can select unique room effect settings for the volume you've just created.
You then repeat the process for additional 'rooms'.
You can also add a Global Reverb Preset if you would like to use some 'default' room effects settings (for example, when the player is no longer in any of the Audio Volumes).
Please see: https://developers.google.com/resonance-audio/develop/unreal/developer-guide#using_the_resonance_audio_reverb_plugin for more info!
I have been unable to get the Audio Volume Resonance reverbs to work with stock UE4 (4.19.1, with Resonance 1.0, which doesn't require a custom build). I can get the master reverb to work, but not the per-Audio-Volume reverbs. Any advice on this?
This issue is also posted here:
https://forums.unrealengine.com/development-discussion/audio/1472284-reverb-plugin-of-google-resonance-for-ue4-doesn-t-work
First of all, here is what I'm trying to accomplish:
We'd like to add the capability to our commercial application to generate a video file to visualize data. It should be saved in a reasonably compressed format. It is important that the encoding library/codecs are licensed such that we're allowed to use and sell our software without restriction. It's also important that the media format can easily be played by a customer, i.e. can be played by Windows Media Player without requiring a codec pack to be installed.
I'm trying to use DirectShow in C++ by creating a source filter with one output pin that generates the video. I'm closely following the DirectShow samples called Bouncing Ball and Push Source. In GraphEdit I can successfully connect to a video renderer and see the video play. I have also managed to connect to AVI Mux and then File Writer to write an uncompressed AVI file. The only issue with this is the huge file size. However, I have not been able to save the video in a compressed format. This problem also happens with the Ball and Push Source samples.
I can connect the output pin to a WM ASF Writer, but when I click play I get "This graph can't play. Unspecified error (Return code: 0x80004005)."
I can't even figure out how to connect to the WMVideo9 Encoder DMO ("These filters cannot agree on a connection"). I could successfully save to MJPEG, but the compression was not very substantial.
Please let me know if I'm doing something wrong in GraphEdit or if my source filter code needs to be modified. Alternatively, if there is another (non-DirectShow) option that would work for me I'm open to suggestions. Thanks.
You don't give enough details to help you with your modification of the filters; however, the Ball sample generates output which can be written to a file.
Your choice of the WM ASF Writer filter is okay - it is a stock filter and it is more or less easy to deal with. There is however a caveat: you need to select a video-only profile on the filter first, and then connect the video input. WM ASF Writer won't run with an unconnected input pin, and in its default state it also has an audio input. Of course this can also be done programmatically.
The graph can be as simple as this, and it can be run and it generates a playable file.
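To select the video-only profile programmatically, here is a hedged sketch using the filter's IConfigAsfWriter interface (the particular profile GUID is an assumption; pick whichever predefined system profile matches your target bitrate):

#include <dshow.h>
#include <dshowasf.h>   // IConfigAsfWriter
#include <wmsysprf.h>   // predefined WMProfile_* GUIDs

HRESULT ConfigureAsfWriter(IBaseFilter *pAsfWriter)
{
    IConfigAsfWriter *pConfig = NULL;
    HRESULT hr = pAsfWriter->QueryInterface(IID_IConfigAsfWriter,
                                            (void **)&pConfig);
    if (SUCCEEDED(hr))
    {
        // A video-only system profile; set it before connecting pins.
        hr = pConfig->ConfigureFilterUsingProfileGuid(
            WMProfile_V80_256Video);
        pConfig->Release();
    }
    return hr;
}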
I'm creating an application for video conferencing using media foundation and I'm having an issue decoding the H264 video frames I receive over the network.
The Design
Currently my network source queues a token on every RequestSample call, unless there is a stored sample available. If a sample arrives over the network and no token is available, the sample is stored in a linked list. Otherwise it is queued with the MEMediaSample event (a sketch of this scheme follows below). I also have the decoder set to low latency.
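Here is a hedged sketch of that token/sample pairing, with names invented for illustration (m_pEventQueue is the stream's IMFMediaEventQueue; locking is omitted):

#include <mfapi.h>
#include <mfidl.h>
#include <list>

class NetworkStreamQueue
{
    std::list<IUnknown *>  m_tokens;    // outstanding RequestSample tokens
    std::list<IMFSample *> m_samples;   // samples waiting for a request
    IMFMediaEventQueue    *m_pEventQueue = NULL;

    // MEMediaSample hands the sample to the pipeline.
    HRESULT Deliver(IMFSample *pSample)
    {
        return m_pEventQueue->QueueEventParamUnk(
            MEMediaSample, GUID_NULL, S_OK, pSample);
    }

public:
    HRESULT OnRequestSample(IUnknown *pToken)
    {
        if (!m_samples.empty())             // a stored sample is available
        {
            IMFSample *pSample = m_samples.front();
            m_samples.pop_front();
            HRESULT hr = Deliver(pSample);
            pSample->Release();
            return hr;
        }
        if (pToken) pToken->AddRef();
        m_tokens.push_back(pToken);         // remember the request
        return S_OK;
    }

    HRESULT OnNetworkSample(IMFSample *pSample)
    {
        if (m_tokens.empty())               // no request outstanding yet
        {
            pSample->AddRef();
            m_samples.push_back(pSample);
            return S_OK;
        }
        IUnknown *pToken = m_tokens.front();
        m_tokens.pop_front();
        if (pToken)
        {
            // Attach the token so the pipeline can match it to the request.
            pSample->SetUnknown(MFSampleExtension_Token, pToken);
            pToken->Release();
        }
        return Deliver(pSample);
    }
};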
My Issue
When running the topology using my network source, I immediately see the first frame rendered to the screen. I then experience a long pause until the live stream begins to play perfectly. After a few seconds the stream appears to pause, but then you notice that it's just looping through the same frames over and over, mixing in a live frame every couple of seconds that then disappears immediately as it goes back to displaying the old loop.
Why is this happening? I'm by no means an expert in H264, or Media Foundation for that matter, but I've been trying to fix this issue for weeks with no success. I have no idea where the problem might be. Please help me!
The time stamp is created by starting at 0 and adding the duration to it for every new sample. The other data is retrieved from an IMFSampleGrabberSinkCallback.
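For what it's worth, a sketch of that timestamping scheme (the 33 ms duration is an assumption for ~30 fps; times are in 100-ns units):

LONGLONG m_rtNextTimestamp = 0;           // running timestamp, 100-ns units
const LONGLONG kFrameDuration = 333333;   // ~33.3 ms per frame at 30 fps

HRESULT StampSample(IMFSample *pSample)
{
    HRESULT hr = pSample->SetSampleTime(m_rtNextTimestamp);
    if (SUCCEEDED(hr))
        hr = pSample->SetSampleDuration(kFrameDuration);
    if (SUCCEEDED(hr))
        m_rtNextTimestamp += kFrameDuration;   // advance for the next sample
    return hr;
}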
I've also posted some of my MFTrace output on the MSDN Media Foundation forums (Link).
I mentioned there that the presentation clock doesn't seem to change in the trace, but I'm unsure if that's the cause or how to fix it.
Could you share the video and a full mftrace log for this issue? It's not clear to me what really happens: do you see the live video after a while?
The current log does not contain enough information to trace sample processing. From your description it looks like only keyframes are rendered. Plus, the duration is weird for the rendered keyframe:
Sample #00A74970, Time 6733ms, Duration 499ms. <- Duration is not 33ms.
I would like to see what happened to that sample.
In any case, if you are using a standard encoder and decoder, the issue should be with your media source and how it buffers frames. An incorrect circular buffer implementation? You may want to try caching a second or two of samples before starting to give them to the decoder, as sketched below.
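A minimal sketch of that pre-buffering idea, with invented names (hold roughly a second of samples before letting the pipeline drain the queue):

#include <mfidl.h>
#include <deque>

class PrimingBuffer
{
    std::deque<IMFSample *> m_queue;
    LONGLONG m_buffered = 0;                      // accumulated, 100-ns units
    static constexpr LONGLONG kPrime = 10000000;  // ~1 second
    bool m_primed = false;

public:
    void Push(IMFSample *pSample)
    {
        LONGLONG duration = 0;
        pSample->GetSampleDuration(&duration);
        pSample->AddRef();
        m_queue.push_back(pSample);
        m_buffered += duration;
        if (m_buffered >= kPrime)
            m_primed = true;                      // enough cached, start draining
    }

    // Returns NULL until the buffer is primed; the caller releases the sample.
    IMFSample *Pop()
    {
        if (!m_primed || m_queue.empty())
            return NULL;
        IMFSample *pSample = m_queue.front();
        m_queue.pop_front();
        return pSample;
    }
};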
I have a DirectShow video renderer derived from CBaseVideoRenderer. The renderer is used in a graph that receives data from a live source (BDA). It looks like the connections are established properly, but the video rendering immediately ends because there is no sample. However, audio rendering works, i.e. I can hear the sound, while DoRenderSample of my renderer is never called.
Stepping through the code in the debugger, I found out that in CBaseRenderer::StartStreaming, the stream ends immediately, because the member m_pMediaSample is NULL. If I replace my renderer with the EVR renderer, it shows frames, i.e. the stream does not end before the first frame for the EVR renderer, but only for my renderer.
Why is that and how can I fix it? I implemented (following the sample from http://www.codeproject.com/Articles/152317/DirectShow-Filters-Development-Part-Video-Render) what I understand as the basic interface (CheckMediaType, SetMediaType and DoRenderSample), so I do not see any possibility to influence what is happening here...
Edit: This is the graph as seen from the ROT:
What I am basically trying to do is capture a DVB stream that uses VIDEOINFOHEADER2, which is not supported by the standard Sample Grabber. Although the channel is a public German TV channel without encryption, could it be that this is a DRM issue?
Edit 2: I have attached my renderer to another source (a Blackmagic Intensity Shuttle). It seems that the source causes the issue, because I get samples in the other graph.
Edit 3: Following Roman's suggestion, I have created a transform filter. The graph looks like this:
and it unfortunately has the same problem, i.e. I do not get any samples (Transform is not called).
You have supposedly chosen the wrong path for fetching video frames out of the media pipeline. So you are implementing a "network renderer": something that terminates the pipeline in order to further send data to the network.
A renderer which accepts the feed sounds appropriate. Implementing a custom renderer, however, is an atypical task, and there is not much information around on it. Additionally, a fully featured renderer is typically equipped with a sample scheduling part and end-of-stream delivery - things that are relatively easy to break when you customize it by inheriting from the base classes. That is, while the approach sounds good, you might want to compare it to another option you have, which is...
A combination of Sample Grabber + Null Renderer, two standard filters, which you can attach your callback to and get frames from, with the pipeline properly terminated. The problem here is that the standard Sample Grabber does not support VIDEOINFOHEADER2. With another video decoder you could possibly have the feed decoded into VIDEOINFOHEADER, which is one option. Improving the Sample Grabber itself is another solution: the DirectX SDK Extras February 2005 release (dxsdk_feb2005_extras.exe) included a filter similar to the standard Sample Grabber, called Grabber (\DirectShow\Samples\C++\DirectShow\Filters\Grabber). It is/was available in source code and came with a good description text file. It is relatively easy to extend it to accept VIDEOINFOHEADER2 and make the payload data available to your application this way.
The easiest way to get data out of a DirectShow graph, if you're not going to use MultiMedia Streaming, is probably to write your own TransInPlace filter, a sub-variety of a Transform filter. Then connect this filter to the desired stream of data you wish to monitor, and then run, pause, seek, or otherwise control the graph. The data, as it passes through the transform filter, can be manipulated however you want. We call this kind of filter a "sample grabber". Microsoft released a limited-functionality sample grabber with DX8.0. This filter is limited because it doesn't deal with DV data or media types with a format of VideoInfo2. It doesn't allow the user to receive prerolled samples. (What's a preroll sample? See the DX8.1 docs.) Its "OneShot" mode also has some problems.
To add to this, the Grabber sample is pretty simple itself - perhaps 1000 lines of code all together, including comments.
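To make that concrete, here is a hedged sketch of such a TransInPlace "sample grabber" that also accepts VIDEOINFOHEADER2 (class and filter names are invented; built against the DirectShow base classes):

#include <streams.h>   // DirectShow base classes

class CFrameTap : public CTransInPlaceFilter
{
public:
    CFrameTap(IUnknown *pUnk, HRESULT *phr)
        : CTransInPlaceFilter(NAME("Frame Tap"), pUnk, CLSID_NULL, phr) {}

    // Accept video whether the format block is VIDEOINFOHEADER or
    // VIDEOINFOHEADER2 (the latter is what the stock grabber rejects).
    HRESULT CheckInputType(const CMediaType *pmt)
    {
        if (*pmt->Type() != MEDIATYPE_Video)
            return VFW_E_TYPE_NOT_ACCEPTED;
        if (*pmt->FormatType() != FORMAT_VideoInfo &&
            *pmt->FormatType() != FORMAT_VideoInfo2)
            return VFW_E_TYPE_NOT_ACCEPTED;
        return S_OK;
    }

    // Called once per media sample; inspect or copy the bytes here.
    HRESULT Transform(IMediaSample *pSample)
    {
        BYTE *pData = NULL;
        if (SUCCEEDED(pSample->GetPointer(&pData)))
        {
            long cb = pSample->GetActualDataLength();
            // ... hand pData/cb to your application here ...
        }
        return S_OK;   // pass the sample downstream unchanged
    }
};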
Looks like your decoder or splitter isn't demuxing the video frames. Look further up the chain to see which filters are supplying your renderer pin with data; chances are it's only recognising audio.
Try dropping the file into GraphEdit (there's a better one on the web, BTW) and see what filters it creates.
Then look at the samples in the DirectShow SDK.
Background:
I have a Google Glass, and I am thinking about an app that can grab any/all images a user takes with the native camera and pass those images to an online service (e.g. Twitter or Google+). Kind of like a life-blogging style application.
In my first prototype, I implemented a FileObserver service that watches for new files in the directory where Glass stores its camera preview thumbnails (sdcard/google_cached_files/). The preview files always started with t_, so once I saw a new file there, I uploaded it to my web service. This was working very well, but in Glass XE11 this cache file was moved out of my reach (/data/private-cache).
So now I am watching the folder sdcard/DCIM/Camera/ for new .jpg files. This works OK, but the camera stores the full-size image there, so I have to wait 5-8 seconds before the image is available for upload.
The Question:
Is it possible to have a background service running on Glass that can intercept the camera event and grab the thumbnail or the full image as a byte array from the Bundle, so that I don't have to wait for it to be written to disk before accessing it?
I have been reading up more on Android development, and I suspect the answer involves implementing a BroadcastReceiver in my service, but I wanted to check with the experts before going down the wrong path.
Many thanks in advance
Richie
Yes. Implement a PreviewCallback, the same way it works for phones; there's an example here: http://www.dynamsoft.com/blog/webcam/how-to-implement-a-simple-barcode-scan-application-on-android/
I tested it on Google Glass and it works. In this post (http://datawillconfess.blogspot.com.es/2013/11/google-glass-gdk.html) I list the parameters (below the video) that the camera returns after doing:
Camera m_camera = Camera.open(0);

// Set the preview format first, then apply and re-fetch the parameters.
Camera.Parameters m_params = m_camera.getParameters();
m_params.setPreviewFormat(ImageFormat.NV21);
m_camera.setParameters(m_params);

// Configure preview size, focus and white balance.
m_params = m_camera.getParameters();
m_params.setPreviewSize(320, 240);
m_params.set("focus-mode", "infinity");
m_params.set("autofocus", "false");
m_params.set("whitebalance", "daylight");
m_params.set("auto-whitebalance-lock", "true");
m_camera.setParameters(m_params);

// Dump the resulting parameters to the log.
String s = m_params.flatten();
Log.i("CAMERA PARAMETERS", "=" + s);