I've done a bunch of research, but I can't figure out how to load a .wav file into memory and then play it. Also, I'd like to have the capability to simultaneously play multiple loaded .wav files.
Currently, I load the .wav file into a string buffer, then call:
PlaySound(stows(buffer).c_str(), NULL, SND_MEMORY);
stows() is a function I created which converts a string into a wide string.
What am I doing wrong/what could I be doing better?
EDIT: I'm trying to use SFML now. I followed the walkthrough here to a T, but I'm getting 9 linker errors when I attempt to compile. It doesn't matter whether I use the static or the dynamic libraries.
EDIT2: It turns out I have the 32-bit version of VS2010. I downloaded SFML 2.0 32-bit for VS2010 and now I'm down to 5 linker errors, all of which are reported in sfml-audio-s.lib.
EDIT3: I got it to compile! However, it simply isn't playing anything. I used VS to debug and confirmed that I am opening the WAV file correctly and storing it in a Sound object, but when I call the play() method, nothing happens. Furthermore, I'm getting an unhandled exception every time I close the program.
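For reference, the minimal SFML pattern I'm aiming for is roughly this (a sketch; note that play() returns immediately, so the sf::Sound and sf::SoundBuffer have to stay in scope and the program has to keep running while the sound plays):

#include <SFML/Audio.hpp>
#include <SFML/System.hpp>

int main()
{
    sf::SoundBuffer buffer;
    if (!buffer.loadFromFile("sound.wav"))      // any WAV file
        return 1;

    sf::Sound sound;
    sound.setBuffer(buffer);                    // buffer must outlive the sound
    sound.play();                               // non-blocking

    while (sound.getStatus() == sf::Sound::Playing)
        sf::sleep(sf::milliseconds(50));        // keep the program alive while playing
    return 0;
}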
I'm reasonably convinced that PlaySound isn't capable of mixing two sounds playing simultaneously. You can have one sound interrupt the other, and you can make it play "in the background" while your code is doing other things (using SND_ASYNC), but the documentation pretty clearly states that "if another sound is played, the current one stops" - in the section on SND_NOSTOP:
If this flag is not specified, PlaySound attempts to stop any sound
that is currently playing in the same process
(If you specify SND_NOSTOP, it doesn't play this sound if another one is already playing).
Link to full documentation: PlaySound.
I'm afraid I don't know exactly how you play multiple sounds simultaneously, as I've never done that on a PC (I have used hardware mixers, and I've also written code to merge two sounds from a file, but that's almost certainly not what you want).
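For completeness: with SND_MEMORY the first parameter is treated as a pointer to a raw WAV image in memory rather than as a file name, so the wide-string conversion in the question shouldn't be needed. Roughly (a sketch - the loading itself is left out):

#include <windows.h>
#include <mmsystem.h>   // link winmm.lib
#include <string>

void playFromMemory(const std::string &wavData)  // complete RIFF/WAVE file contents
{
    // The buffer must stay alive until playback finishes (or until the next call).
    PlaySound(reinterpret_cast<LPCTSTR>(wavData.data()), NULL,
              SND_MEMORY | SND_ASYNC);           // returns immediately
    // A second PlaySound call stops this one - PlaySound cannot mix sounds.
}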
I got it working! I ended up using direct sound. I made a class to load and play .WAV files based on a tutorial I found here, which was immensely helpful.
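The core of the class is just the usual DirectSound sequence; roughly like this (a sketch rather than the actual class - WAV parsing and error handling are omitted, and hwnd/wfx/pcmData/dataSize are placeholders for what the loader fills in):

#include <dsound.h>   // link dsound.lib
#include <cstring>

// One secondary buffer per loaded WAV; several buffers can play at the same time.
void playLoadedWav(HWND hwnd, WAVEFORMATEX &wfx, const BYTE *pcmData, DWORD dataSize)
{
    LPDIRECTSOUND8 ds = NULL;
    DirectSoundCreate8(NULL, &ds, NULL);
    ds->SetCooperativeLevel(hwnd, DSSCL_PRIORITY);   // hwnd = application window

    DSBUFFERDESC desc = {0};
    desc.dwSize = sizeof(desc);
    desc.dwFlags = DSBCAPS_GLOBALFOCUS;
    desc.dwBufferBytes = dataSize;                   // size of the PCM data
    desc.lpwfxFormat = &wfx;                         // WAVEFORMATEX parsed from the file
    LPDIRECTSOUNDBUFFER buf = NULL;
    ds->CreateSoundBuffer(&desc, &buf, NULL);

    void *p1, *p2; DWORD n1, n2;
    buf->Lock(0, dataSize, &p1, &n1, &p2, &n2, 0);
    memcpy(p1, pcmData, n1);                         // copy the samples into the buffer
    if (p2) memcpy(p2, pcmData + n1, n2);
    buf->Unlock(p1, n1, p2, n2);

    buf->Play(0, 0, 0);                              // plays once, non-blocking
}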
I'm trying to access frames from RTCVideoRenderer without success. Can you help me, please?
I noticed that there is a "didCaptureVideoFrame" method in RTCVideoCapturerDelegate, but not in RTCVideoViewDelegate.
I have never done Objective-C before. I added a method in RTCVideoViewDelegate to get frames (below "didChangeVideoSize"), but it does not get fired; I guess it does not work like that.
I am able to access frames from the remote stream on Android using the "onFrame" method of VideoSink, so I thought it would be just as easy on iOS.
PS: To add the method, I took the framework from the pod and put it directly in the project, because I noticed that when you modify a pod, the changes do not apply.
Here is the line I added:
- (void)videoView:(id<RTCVideoRenderer>)videoView didRenderVideoFrame:(RTCVideoFrame *)frame;
I will now try to compile the library with the changes I want.
EDIT:
I am now compiling the library. I noticed that several files need to be changed to be able to access frames; it will not be done just by adding 10 lines.
Solved thanks to this: How to get frame data in AppRTC iOS app for video modifications?
I used this line instead (because the names have changed since then):
@property(atomic, strong) RTCVideoFrame *videoFrame;
I wanted an "onFrame" like VideoSink on Android, but this will be OK for now.
Question Intro
I'm running an OpenCV project in Visual Studio 2010 and have implemented CUDA support (refer to my previous question for precise info on my set-up). All CUDA functionality is working fine - to the best of my knowledge - and is indeed improving the speed of the image processing.
However, I now also want to attempt to speed up the video-writing function in this project by replacing the current cv::VideoWriter with the gpu::VideoWriter_GPU function. The reason for this is that cv::VideoWriter seems to somehow slow down processes running outside the scope in which the VideoWriter is called, resulting in images available at the DirectShow driver being dropped by the VideoCapture function, which messes up an algorithm I've implemented.
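Roughly, the replacement I made looks like this (a sketch with a placeholder file name, size and fps, not my actual code):

#include <vector>
#include <opencv2/gpu/gpu.hpp>

void writeWithGpu(const std::vector<cv::Mat> &frames)
{
    cv::gpu::VideoWriter_GPU writer("out.avi", cv::Size(640, 480), 25.0);  // placeholder values
    cv::gpu::GpuMat d_frame;
    for (size_t i = 0; i < frames.size(); ++i)
    {
        d_frame.upload(frames[i]);   // upload the captured BGR frame to the GPU
        writer.write(d_frame);       // encode on the GPU
    }
}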
Problem
To attempt to solve this issue, I've now replaced the VideoWriter calls with VideoWriter_GPU functionality (and corresponding syntax), but when I run my project (Compile & Run in Debug mode), I get the following error message (directly originating at the call to gpu::VideoWriter_GPU):
OpenCV Error: The function/feature is not implemented (The called functionality
is disabled for current build or platform) in unknown function, file
c:\slave\builds\wininstallermegapack\opencv\modules\gpu\src\precomp.hpp, line 131.
and the program then ends with
code -529697949 (0xe06d7363)
I've purposely not included my actual code here because the error message so clearly originates from the call to gpu::VideoWriter_GPU, which makes me think it's not a coding or syntax problem. (Please comment if you feel my code is necessary for answering this question.)
My steps so far
I honestly don't know what precisely this message means or how to interpret it. Does my OpenCV v2.4.4 simply not support what I want? Does this function simply not work on my Windows 7, 64-bit system?
I've checked out as many Google hits as I could find (relating to this error message and combinations of search terms like "opencv, gpu, VideoWriter_GPU, disabled for current build") but have not understood what the problem is or how to solve it.
Corresponding header-file and error message can also be found here.
This post and this post suggest the error message is trying to tell me that OpenCV simply does not provide the function or functionality I am aiming to use, or maybe even that CUDA is not supported at all. But that contradicts my experience, as every single OpenCV GPU function I've tried to use has seemed to work fine.
Question
Could someone please explain to me why this is not working for me, and more importantly share with me what I should do to make the VideoWriter_GPU work?
Many thanks!
Maybe this link can give you a little idea of what the problem is: VideoReader_GPU not available, but built with NVCUVID?.
It seems that the problem is the CUDA_DISABLER variable.
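A quick way to see what the build you are running actually supports is to query it at runtime; something along these lines (a sketch):

#include <iostream>
#include <opencv2/core/core.hpp>
#include <opencv2/gpu/gpu.hpp>

int main()
{
    // 0 here means the build (or the machine) has no usable CUDA support.
    std::cout << cv::gpu::getCudaEnabledDeviceCount() << std::endl;

    // Full build configuration; check the CUDA-related entries here.
    std::cout << cv::getBuildInformation() << std::endl;
    return 0;
}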
I was happily improving my C++ program in which I read videos via DirectShow. Then I tried to also write videos, which was working nicely as well.
Then came the search for an appropriate codec (I thought about VOB/Ogg)...
However, suddenly today the video was displayed really slowly.
And now that I have uninstalled the additional codecs I installed before, the video won't play at all.
The reason seems to be that CComQIPtr< IMediaSeeking, &IID_IMediaSeeking > pSeeking( pGraph ); followed by hr = pSeeking->SetPositions( &Startzeit, AM_SEEKING_AbsolutePositioning, NULL, AM_SEEKING_NoPositioning ); gives an error: SetPositions is not supported at that time... actually at any time.
Also, hr = pSeeking->GetDuration(&duration) returns 0, and the corresponding AM_MEDIA_TYPE mt; that I use to get the frames per second has an empty format type (pbFormat is NULL).
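For reference, one way to see whether the graph even claims to support seeking and duration is to ask it for its capabilities - a small sketch reusing pSeeking from above:

DWORD caps = 0;
if (SUCCEEDED(pSeeking->GetCapabilities(&caps)))
{
    bool canSeek     = (caps & AM_SEEKING_CanSeekAbsolute) != 0;
    bool hasDuration = (caps & AM_SEEKING_CanGetDuration) != 0;
    // If these are false, the source/splitter filter in the graph does not
    // expose seeking - which usually points to a missing splitter/codec.
}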
Did I unintentionally install/uninstall something important?
Have you heard of similar problems?
As I said, a few days ago the same video and source code were working fine (I have commented my changes out by now).
I would like to give you more source code, but it is kind of long; if you think it would be helpful, I will of course add it.
Regards,
Julian
Here is the source-code: http://pastebin.com/jMdWejH9
It's of course only a part of the whole code, but I think this is the main part, as this is where all the filters are inserted.
Keep in mind that this actually worked until a few days ago!^^
The first part is the variable declaration (all important variables, as far as I can tell); the second is the function that is called.
If you render a file in DirectShow, the framework uses the codecs/filters installed in the system. If you remove some codecs, it takes another one or breaks because it can't render at all. To find out which filters the framework uses, you can try to render the file in GraphEdit or GraphStudioNext (just drop the file onto one of these programs and look at the filter graph). We got the best results with the ffdshow-tryouts codec pack and the Haali Media Splitter for our player.
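If you would rather check from code than from GraphEdit, you can also enumerate the filters the graph actually picked, roughly like this (a sketch, assuming pGraph is your built IGraphBuilder):

#include <dshow.h>   // link strmiids.lib
#include <cwchar>

void printFilters(IGraphBuilder *pGraph)
{
    IEnumFilters *pEnum = NULL;
    if (FAILED(pGraph->EnumFilters(&pEnum)))
        return;
    IBaseFilter *pFilter = NULL;
    while (pEnum->Next(1, &pFilter, NULL) == S_OK)
    {
        FILTER_INFO info;
        if (SUCCEEDED(pFilter->QueryFilterInfo(&info)))
        {
            wprintf(L"%s\n", info.achName);          // name of the filter in the graph
            if (info.pGraph) info.pGraph->Release(); // QueryFilterInfo AddRefs the graph
        }
        pFilter->Release();
    }
    pEnum->Release();
}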
Is there a way to get all opened file handles for a process and arrange them by the time the files were opened? We have a project which requires exactly this - we need to determine which files are opened by DJ software such as Traktor or Serato. The reason we need to know the order is to determine which file is in the first deck and which is in the second one.
Currently we are using Windows internal APIs from Ntdll.dll (Winternl.h) to determine the list of all opened files for a process. Maybe that's not the best way to do it. Any suggestions are highly appreciated.
We relied on an observed behavior of those APIs on a certain OS version and certain DJ software versions, which was that the list of all opened files for a process never gets rearranged, i.e. it keeps a stable order. I know that's bad practice, but it was a "should be" feature from the customer right before the release, so we had to. The problem is that we now have a bug where those handles are sometimes randomly rearranged without any particular cause. That breaks everything. I thought maybe there would be a field in those Win structures to obtain the time a file was opened, but seemingly there is no such thing. The docs on those APIs are quite bad.
I thought about pasting some code, but it's a function about 200 lines long that uses indirect calls into the DLL via function pointers, and all the structures for the Win APIs are redefined manually, so it's really hard to read. Actually, the Winternl.h header isn't even included - everything is loaded manually too, like this:
GetProcAddress( GetModuleHandleA("ntdll.dll"), "NtQuerySystemInformation" );
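To give an idea of the pattern (heavily simplified - the typedef and the undocumented SystemHandleInformation value are among the things we redefine by hand):

#include <windows.h>
#include <cstdlib>

// Redefined by hand because winternl.h is not included.
typedef LONG (WINAPI *NtQuerySystemInformation_t)(ULONG, PVOID, ULONG, PULONG);

void dumpOpenHandles()
{
    NtQuerySystemInformation_t query =
        (NtQuerySystemInformation_t)GetProcAddress(
            GetModuleHandleA("ntdll.dll"), "NtQuerySystemInformation");
    if (!query)
        return;

    ULONG size = 0x10000, needed = 0;
    void *buffer = std::malloc(size);
    // 16 = SystemHandleInformation (undocumented). On STATUS_INFO_LENGTH_MISMATCH
    // the buffer has to be grown and the call repeated; afterwards the returned
    // SYSTEM_HANDLE entries are walked and filtered by the target process ID.
    LONG status = query(16, buffer, size, &needed);
    (void)status;
    std::free(buffer);
}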
It's really a headache for a cross platform application...
P.S. I have posted a related question here about any cross-platform or Qt way to get opened file handles, maybe that stuff will be useful or related.
If it's just to check the behavior on another OS for debugging purposes, you can use the technique of creating the process in debug mode and intercepting, in order, all DLL-loading events; here's a good article talking about that.
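The core of that technique is just the standard Win32 debug-event loop, roughly like this (a sketch; the target path is a placeholder):

#include <windows.h>
#include <cstdio>

int main()
{
    STARTUPINFOA si = { sizeof(si) };
    PROCESS_INFORMATION pi = {0};
    // Start the target as our debuggee so we receive its debug events.
    if (!CreateProcessA("C:\\path\\to\\target.exe", NULL, NULL, NULL, FALSE,
                        DEBUG_ONLY_THIS_PROCESS, NULL, NULL, &si, &pi))
        return 1;

    DEBUG_EVENT ev = {0};
    bool running = true;
    while (running && WaitForDebugEvent(&ev, INFINITE))
    {
        if (ev.dwDebugEventCode == LOAD_DLL_DEBUG_EVENT)
        {
            // DLLs are reported here in the order they are loaded.
            char path[MAX_PATH] = {0};
            GetFinalPathNameByHandleA(ev.u.LoadDll.hFile, path, MAX_PATH, 0);
            std::printf("loaded: %s\n", path);
            CloseHandle(ev.u.LoadDll.hFile);
        }
        else if (ev.dwDebugEventCode == EXIT_PROCESS_DEBUG_EVENT)
        {
            running = false;
        }
        ContinueDebugEvent(ev.dwProcessId, ev.dwThreadId, DBG_CONTINUE);
    }
    return 0;
}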
I have got the following situation. On a machine there is a Fritz ISDN card. There is a process that is responsible for playing a certain wave file on this device's wave out (the ISDN connection is made at startup and kept persistent). The scenario is simple: whenever needed, the process calls waveOutWrite() on the previously opened wave device (everything is initialized without any problems, of course), and a callback function waits for the MM_WOM_DONE message to know that the playback has finished.
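For reference, the flow is the standard waveOut callback pattern, roughly like this (simplified names, not the actual code):

#include <windows.h>
#include <mmsystem.h>   // link winmm.lib

void CALLBACK waveProc(HWAVEOUT, UINT msg, DWORD_PTR, DWORD_PTR, DWORD_PTR)
{
    if (msg == WOM_DONE)   // same value as MM_WOM_DONE
    {
        // playback of this buffer has finished
    }
}

void playOnce(UINT deviceId, WAVEFORMATEX *wfx, char *pcmData, DWORD pcmSize)
{
    HWAVEOUT hwo = NULL;
    waveOutOpen(&hwo, deviceId, wfx, (DWORD_PTR)waveProc, 0, CALLBACK_FUNCTION);

    static WAVEHDR hdr;            // must stay valid until WOM_DONE arrives
    ZeroMemory(&hdr, sizeof(hdr));
    hdr.lpData = pcmData;          // samples of the wave file
    hdr.dwBufferLength = pcmSize;
    waveOutPrepareHeader(hwo, &hdr, sizeof(hdr));
    waveOutWrite(hwo, &hdr, sizeof(hdr));   // returns immediately; WOM_DONE follows later
}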
For a few days now, however (nothing changed in either the process or the machine), the MM_WOM_DONE message has been coming immediately after calling waveOutWrite(), even though the wave lasts a couple of seconds. Again no error is reported; it looks like the file was played but had zero length (which is not the case). I am also sure that waveOutReset() was not called by my process (that would also trigger the mentioned message). I have had some strange problems in the past that were solved simply by reinstalling the TAPI drivers. This time, for some reason, it is problematic for me to do that again, so I am trying a more analytical approach :). Any suggestions what might cause such behavior? Maybe something on the other end of the ISDN line?
Based on your description, you are doing the playing asynchronously. Are you sure that the backing memory for the WAV file is not being cleaned up in that time?
I don't have the time to Google too much for this, but I know that either Larry Osterman or Raymond Chen blogged about a similar situation.
I'll check back later when I have more time to see if this question is still open.
What is the return value when the sound does not play? If you get MMSYSERR_NOERROR, that points to the driver incorrectly reporting to the OS that the buffer was processed.
Has the WAV file itself changed? This blog entry indicates that some pretty in-depth validation is done on the metadata.