I am creating a basic signal generator and decided to use my audio card as the analogue output. I chose to use DirectSound because... it seemed like a good option.
I have it up and running quite nicely, but I now realize that because my code uses secondary buffers, any other sound on the computer gets mixed in with my generated signal. This is something of an issue: when I'm driving a motor, I don't want an MSN poke noise sent to it as a command.
In order to gain total control, I've attempted to take over the device by setting my cooperative level to DSSCL_WRITEPRIMARY. This strategy is really giving me a headache, as I'm running into error after error trying to get it set up. The documentation on using the primary buffer isn't great, and I can't find any really good examples.
So my question is:
Does anyone have a good, working example of taking over and writing to the primary buffer?
Is there a simpler way of outputting a waveform to the audio card while ensuring that my application has full and sole control?
Thank you
The only related thing I've seen is:
http://blogs.msdn.com/b/matthew_van_eerde/archive/2009/04/03/sample-wasapi-exclusive-mode-event-driven-playback-app-including-the-hd-audio-alignment-dance.aspx
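For the first question, the DSSCL_WRITEPRIMARY path looks roughly like the sketch below (untested, error handling trimmed; assumes a valid top-level window handle hwnd and a 44.1 kHz 16-bit mono format). Note that on Vista and later DirectSound is emulated on top of the shared-mode mixer, so for truly exclusive output the WASAPI exclusive-mode sample above is probably the more reliable route.

    #include <windows.h>
    #include <dsound.h>
    #include <math.h>
    #pragma comment(lib, "dsound.lib")

    // Sketch: take exclusive control of the primary buffer and fill it with a 1 kHz sine.
    bool WritePrimarySine(HWND hwnd)
    {
        IDirectSound8* ds = nullptr;
        if (FAILED(DirectSoundCreate8(nullptr, &ds, nullptr)))
            return false;

        // WRITEPRIMARY requires the highest cooperative level.
        if (FAILED(ds->SetCooperativeLevel(hwnd, DSSCL_WRITEPRIMARY)))
            return false;

        // Ask for the primary buffer itself: no size and no format in the description.
        DSBUFFERDESC desc = {};
        desc.dwSize  = sizeof(desc);
        desc.dwFlags = DSBCAPS_PRIMARYBUFFER;
        IDirectSoundBuffer* primary = nullptr;
        if (FAILED(ds->CreateSoundBuffer(&desc, &primary, nullptr)))
            return false;

        // Set the output format on the primary buffer.
        WAVEFORMATEX wfx = {};
        wfx.wFormatTag      = WAVE_FORMAT_PCM;
        wfx.nChannels       = 1;
        wfx.nSamplesPerSec  = 44100;
        wfx.wBitsPerSample  = 16;
        wfx.nBlockAlign     = wfx.nChannels * wfx.wBitsPerSample / 8;
        wfx.nAvgBytesPerSec = wfx.nSamplesPerSec * wfx.nBlockAlign;
        if (FAILED(primary->SetFormat(&wfx)))
            return false;

        // Lock the whole buffer and write samples into it.
        void* p1; DWORD n1; void* p2; DWORD n2;
        if (FAILED(primary->Lock(0, 0, &p1, &n1, &p2, &n2, DSBLOCK_ENTIREBUFFER)))
            return false;
        short* s = static_cast<short*>(p1);
        for (DWORD i = 0; i < n1 / sizeof(short); ++i)
            s[i] = (short)(32767.0 * sin(2.0 * 3.14159265 * 1000.0 * i / 44100.0));
        primary->Unlock(p1, n1, p2, n2);

        // The primary buffer loops continuously; real code would keep refilling it
        // rather than writing one static block.
        return SUCCEEDED(primary->Play(0, 0, DSBPLAY_LOOPING));
    }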
I'm looking for some help with an implementation problem I'm facing. I'm an experienced C/C++ programmer in embedded environments and RTOSes, but when it comes to Linux I'm a newbie.
I have a BeagleBone Black running Debian. I need to log and process data from sensors connected to the I2C bus and the ADC. I have already written the handler functions for collecting the data from the connected sensors; no problem there, they work fine. What I want is something like an RTOS timer interrupt that periodically hands control to my handler functions so they can do their thing, and I want it to run in the background, i.e. without tying up the shell, so the user can do other things. I've read that timer_create is one way to do this on Debian, or alternatively fork()/exec(), but I thought I'd ask people experienced in Linux before going down any particular path. I'm also not 100% sure how to use either of these functions.
Side note: I know that timers are not highly accurate on Linux unless you're running a preemptive (real-time) kernel, which is a whole other problem in itself, but the timing constraints here are somewhere around 10-50 ms, which is not extremely tight.
Thanks
To make a daemon process, take this as a reference:
https://github.com/memcached/memcached/blob/master/daemon.c
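For the other half of the question, here's a minimal, untested sketch combining daemon(3) with a POSIX timer that fires every 10 ms; read_sensors() is a placeholder for your existing I2C/ADC handlers, and on older glibc you'll need to link with -lrt.

    #include <csignal>
    #include <ctime>
    #include <unistd.h>

    // Placeholder for the existing I2C/ADC handler code.
    static void read_sensors(union sigval)
    {
        // ... read the sensors and log/process the data ...
    }

    int main()
    {
        // Detach from the controlling terminal so the shell stays free.
        if (daemon(0, 0) != 0)
            return 1;

        // Create a POSIX timer that calls read_sensors() on its own thread.
        struct sigevent sev = {};
        sev.sigev_notify          = SIGEV_THREAD;
        sev.sigev_notify_function = read_sensors;

        timer_t timerid;
        if (timer_create(CLOCK_MONOTONIC, &sev, &timerid) != 0)
            return 1;

        // Fire every 10 ms, starting 10 ms from now.
        struct itimerspec its = {};
        its.it_value.tv_nsec    = 10 * 1000 * 1000;
        its.it_interval.tv_nsec = 10 * 1000 * 1000;
        if (timer_settime(timerid, 0, &its, nullptr) != 0)
            return 1;

        // The timer notifications do the work; the main thread just sleeps.
        for (;;)
            pause();
    }

One caveat: with SIGEV_THREAD, notifications may run concurrently if a handler takes longer than the period, so guard shared state accordingly.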
I'm not actually well versed in C++ or SDL_mixer, but I'm asking this question anyway on behalf of the Doom community. Put simply, nobody writing Doom source ports can seem to figure out how to control normal sound volume and MIDI music volume independently using SDL_mixer on Windows Vista or 7. I'll let James Haley, author of the Eternity Engine, put it in his own words:
Seems the concept of independent volume for native MIDI doesn't exist under Windows Vista or 7, as using MIDI volume sliders in any application that has them (including most games that use SDL_mixer) also affects the volume of digital sound output. This makes attempting to adjust the relative volume of music for comfort impossible.
Has anybody found any workarounds for this? I'm guessing it's unlikely given how Microsoft seems to have skimped throughout the OS on any way to control the volume of individual sound devices separately.
I've heard of various workarounds, all involving a Timidity driver, but that requires the user to go above and beyond simply installing the game on his system. The only port I know of that definitively fixes this issue is ZDoom, but it uses the GPL-incompatible FModEx and is thus not a suitable solution.
If you want some code to look at, Chocolate Doom is perhaps the easiest Doom source port to grok and you can grab its source here.
Any suggestions on other open-source sound and music libraries would be welcome as well.
A solution would be to ship with a FluidSynth-enabled SDL_mixer. You would also need to ship a SoundFont2 file to go with it. Fortunately, there are free SF2s out there, and some are even optimized for Doom's MIDI files. Licensing shouldn't be a problem, since SoundFonts are assets, not code.
You then load the SF2 using Mix_SetSoundFonts().
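Assuming SDL_mixer was built with FluidSynth support, the setup looks roughly like this (doom.sf2 and D_E1M1.mid are placeholder file names):

    #include <SDL.h>
    #include <SDL_mixer.h>

    // Sketch: route MIDI through FluidSynth so music volume is independent of SFX volume.
    int main(int, char**)
    {
        SDL_Init(SDL_INIT_AUDIO);
        Mix_OpenAudio(44100, MIX_DEFAULT_FORMAT, 2, 1024);

        // Point SDL_mixer's FluidSynth backend at the bundled SoundFont
        // before any MIDI music is loaded.
        Mix_SetSoundFonts("doom.sf2");

        Mix_Music* music = Mix_LoadMUS("D_E1M1.mid");
        Mix_PlayMusic(music, -1);

        // Music and sound-effect volumes are now separate software controls.
        Mix_VolumeMusic(MIX_MAX_VOLUME / 4);   // quiet music...
        Mix_Volume(-1, MIX_MAX_VOLUME);        // ...full-volume SFX channels

        SDL_Delay(10000);

        Mix_FreeMusic(music);
        Mix_CloseAudio();
        SDL_Quit();
        return 0;
    }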
You may want to look at different MIDI libraries outside of SDL.
http://wildmidi.sourceforge.net/
http://sourceforge.net/apps/trac/fluidsynth/
http://timidity.sourceforge.net
I am maintaining a similar game port (Descent 2), and I have come across the same problem. AFAIK there is no solution for it when using SDL_mixer. One workaround I have found, to avoid sound being left muted when turning off MIDI music, is to retrieve a handle to a temporary MIDI device, set the MIDI volume to maximum, and then close the temporary device again.
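My reading of that workaround in WinMM terms looks roughly like this (a sketch, not the port's actual source):

    #include <windows.h>
    #include <mmsystem.h>
    #pragma comment(lib, "winmm.lib")

    // Open a throwaway MIDI device, push its volume back to maximum,
    // then close it again so digital sound isn't left muted.
    void RestoreMidiVolume()
    {
        HMIDIOUT hmo = nullptr;
        if (midiOutOpen(&hmo, MIDI_MAPPER, 0, 0, CALLBACK_NULL) == MMSYSERR_NOERROR)
        {
            // Low word = left channel, high word = right channel, 0xFFFF = max.
            midiOutSetVolume(hmo, 0xFFFFFFFF);
            midiOutClose(hmo);
        }
    }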
For the longest time, the only solution we found was to use something like PortMIDI. However, Quasar of Eternity Engine fame has come across a neat solution:
http://www.doomworld.com/vb/showthread.php?s=&postid=1124981#post1124981
He essentially puts SDL_Mixer into its own process and controls it with RPC. Very clever.
So one problem with the previous answer I gave was that sometimes the MIDI subprocess did not behave itself, and would break or stop working in strange ways. Eternity's specific implementation used IDL, and I personally re-implemented it using pipes, but the subprocess itself was a bug magnet.
Thankfully, another answer was figured out rather recently. You can simply bypass SDL_Mixer entirely and deal with Windows' native MIDI support directly, which turns out to not require a ton of code once you know what you're doing.
https://github.com/chocolate-doom/chocolate-doom/blob/master/src/i_winmusic.c
You can also implement this sort of idea with PortMIDI and get the benefit of being able to communicate with external MIDI devices as well.
https://github.com/odamex/odamex/blob/stable/client/sdl/i_musicsystem_portmidi.cpp
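For a sense of scale, a bare-bones PortMIDI sketch (untested) that opens the default output and plays one note looks like this; a separate music volume would be implemented by scaling the velocities and volume controllers you send:

    #include <portmidi.h>
    #include <porttime.h>

    // Sketch: send a note to the default MIDI output via PortMIDI.
    int main()
    {
        Pm_Initialize();

        PmDeviceID dev = Pm_GetDefaultOutputDeviceID();
        PortMidiStream* out = nullptr;
        if (Pm_OpenOutput(&out, dev, nullptr, 256, nullptr, nullptr, 0) != pmNoError)
            return 1;

        // Note-on, middle C, velocity 100 on channel 1 (status 0x90).
        Pm_WriteShort(out, 0, Pm_Message(0x90, 60, 100));
        Pt_Sleep(500);                                      // let the note ring
        Pm_WriteShort(out, 0, Pm_Message(0x80, 60, 0));     // note-off

        Pm_Close(out);
        Pm_Terminate();
        return 0;
    }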
I am really puzzled here. I want to create an application that triggers different events depending on the temperature of my graphics card, which is an AMD one.
The reasons I want to make such an application are, first, that I haven't found one for a GPU, and second, to ensure I never fry my card by letting it reach enormous temperatures.
However, I have no idea how people (not connected to AMD/Intel/NVIDIA) write applications that monitor temperatures of any kind.
So how is it done? Some APIs I don't know about, or something?
After a little bit of googling, I found this:
I think this is really vendor specific; it will probably involve interfacing directly with the motherboard or video driver and knowing which IOCTL represents the code for requesting the temperature. I reverse engineered a motherboard driver once for this purpose. It's not as hard as it sounds: download the manufacturer's motherboard/BIOS utility and try to hook the function that gets called when that app needs to display the temperature to the user. Then watch for a call to DeviceIoControl() on Windows, or ioctl() on Linux, and see what the inputs/outputs are.
This may be your best bet. I found this information here:
http://www.gamedev.net/topic/557599-get-gpucpu-temperature/
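To make the shape of that concrete, the call you end up replaying looks something like this; the device path and IOCTL code here are made-up placeholders for whatever you recover from hooking the vendor utility:

    #include <windows.h>
    #include <cstdio>

    int main()
    {
        // Placeholder device path -- the real one comes from watching the
        // vendor utility's CreateFile()/DeviceIoControl() calls.
        HANDLE dev = CreateFileA("\\\\.\\VendorSensorDriver",
                                 GENERIC_READ | GENERIC_WRITE, 0, nullptr,
                                 OPEN_EXISTING, 0, nullptr);
        if (dev == INVALID_HANDLE_VALUE)
            return 1;

        const DWORD IOCTL_GET_GPU_TEMP = 0x222004;   // hypothetical control code
        DWORD tempRaw = 0, bytesReturned = 0;
        if (DeviceIoControl(dev, IOCTL_GET_GPU_TEMP, nullptr, 0,
                            &tempRaw, sizeof(tempRaw), &bytesReturned, nullptr))
            printf("raw temperature value: %lu\n", tempRaw);

        CloseHandle(dev);
        return 0;
    }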
Edit:
Also found this:
http://msdn.microsoft.com/en-us/library/aa389762%28v=VS.85%29.aspx
http://msdn.microsoft.com/en-us/library/aa394493%28VS.85%29.aspx
Hope it helps.
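If you go the WMI route, a rough C++ sketch (error handling omitted; may need to run elevated) that queries MSAcpi_ThermalZoneTemperature in root\WMI looks like this. Note that it reports ACPI thermal zones rather than the GPU die specifically, and not every board exposes the class:

    #define _WIN32_DCOM
    #include <windows.h>
    #include <comdef.h>
    #include <Wbemidl.h>
    #include <cstdio>
    #pragma comment(lib, "wbemuuid.lib")

    // Sketch: read ACPI thermal-zone temperatures through WMI.
    // CurrentTemperature is reported in tenths of a Kelvin.
    int main()
    {
        CoInitializeEx(nullptr, COINIT_MULTITHREADED);
        CoInitializeSecurity(nullptr, -1, nullptr, nullptr,
                             RPC_C_AUTHN_LEVEL_DEFAULT, RPC_C_IMP_LEVEL_IMPERSONATE,
                             nullptr, EOAC_NONE, nullptr);

        IWbemLocator* loc = nullptr;
        CoCreateInstance(CLSID_WbemLocator, nullptr, CLSCTX_INPROC_SERVER,
                         IID_IWbemLocator, (LPVOID*)&loc);

        IWbemServices* svc = nullptr;
        loc->ConnectServer(_bstr_t(L"ROOT\\WMI"), nullptr, nullptr, nullptr,
                           0, nullptr, nullptr, &svc);
        CoSetProxyBlanket(svc, RPC_C_AUTHN_WINNT, RPC_C_AUTHZ_NONE, nullptr,
                          RPC_C_AUTHN_LEVEL_CALL, RPC_C_IMP_LEVEL_IMPERSONATE,
                          nullptr, EOAC_NONE);

        IEnumWbemClassObject* en = nullptr;
        svc->ExecQuery(_bstr_t(L"WQL"),
                       _bstr_t(L"SELECT CurrentTemperature FROM MSAcpi_ThermalZoneTemperature"),
                       WBEM_FLAG_FORWARD_ONLY | WBEM_FLAG_RETURN_IMMEDIATELY,
                       nullptr, &en);

        IWbemClassObject* obj = nullptr;
        ULONG returned = 0;
        while (en && en->Next(WBEM_INFINITE, 1, &obj, &returned) == S_OK && returned)
        {
            VARIANT v;
            obj->Get(L"CurrentTemperature", 0, &v, nullptr, nullptr);
            // Convert tenths of Kelvin to degrees Celsius.
            printf("thermal zone: %.1f C\n", v.intVal / 10.0 - 273.15);
            VariantClear(&v);
            obj->Release();
        }

        if (en)  en->Release();
        if (svc) svc->Release();
        if (loc) loc->Release();
        CoUninitialize();
        return 0;
    }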
You could use one of the existing GPU temperature monitoring programs, such as GPU-Z, configure it for continuous monitoring, and read the log entries.
RivaTuner is another GPU monitoring program; it has a shared-memory interface that lets other programs read the data in real time, but it is nVidia-focused. As long as your action isn't "reduce the GPU clock speed", it'll probably work well enough with ATI cards.
I am making a program in C++ for Windows XP that needs to play a sound so that any program currently recording from the microphone can hear it, but without it coming out of the speakers. There seems to be no "real" way of doing it, but it is possible to go into "sndvol32 -R" and set the Wave Out Mix (or similar) as the current input device. Then you can turn the master volume to 0, play the sound, turn it back up, and reset the input device to the microphone. Is there a way of doing this transparently, or of setting the current input device programmatically, so that you don't have to see sndvol32 pop up?
Thanks
Doing this would require a complicated kernel-level driver.
Fortunately for you, someone has already done this (it's not free, but it's a fantastic program).
Basically, I'm currently using the wiiuse library to get the Wiimote working on Linux. I now want to be able to control the mouse through the IR readings.
Can somebody point me in the right direction as to how to approach this? I know of uinput, but there don't seem to be many tutorials/guides on the web.
I'm working with C/C++, so a library in C/C++ would be helpful.
Cheers.
I think you should look into "becoming" a new mouse device. This would require developing a device driver that knows how to read the Wii device, and present that data to the input system as if it came from a mouse. The Linux kernel supports multiple mice connected at the same time, and merges the inputs from all of them, so this will work fine.
This book might be a handy help along the way. Not sure if it's possible to do this totally in userland, but that is of course worth investigating too.
I'm not sure if I understood your question correctly. If you're looking to control the mouse pointer from userspace, look at the XTest extension. Useful link
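A tiny sketch of that (assumes an X session; link with -lX11 -lXtst):

    #include <X11/Xlib.h>
    #include <X11/extensions/XTest.h>

    // Sketch: move the pointer and click using the XTest extension.
    int main()
    {
        Display* dpy = XOpenDisplay(nullptr);
        if (!dpy)
            return 1;

        // Warp the pointer to (100, 200); screen number -1 means "current screen".
        XTestFakeMotionEvent(dpy, -1, 100, 200, CurrentTime);

        // Press and release the left mouse button.
        XTestFakeButtonEvent(dpy, 1, True, CurrentTime);
        XTestFakeButtonEvent(dpy, 1, False, CurrentTime);

        XFlush(dpy);
        XCloseDisplay(dpy);
        return 0;
    }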
Edit:
From the kernel's point of view, uinput looks like a good starting point.
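A minimal, untested uinput sketch (needs write access to /dev/uinput) that registers a virtual mouse and emits one relative movement; you would feed it deltas computed from the Wiimote IR data:

    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <cstring>
    #include <linux/uinput.h>

    // Sketch: create a virtual mouse with uinput and move it 10 px right, 5 px down.
    int main()
    {
        int fd = open("/dev/uinput", O_WRONLY | O_NONBLOCK);
        if (fd < 0)
            return 1;

        // Declare the event types and codes this virtual device will emit.
        ioctl(fd, UI_SET_EVBIT, EV_KEY);
        ioctl(fd, UI_SET_KEYBIT, BTN_LEFT);
        ioctl(fd, UI_SET_EVBIT, EV_REL);
        ioctl(fd, UI_SET_RELBIT, REL_X);
        ioctl(fd, UI_SET_RELBIT, REL_Y);

        struct uinput_user_dev dev;
        memset(&dev, 0, sizeof(dev));
        strncpy(dev.name, "wiimote-mouse", UINPUT_MAX_NAME_SIZE - 1);
        dev.id.bustype = BUS_VIRTUAL;
        write(fd, &dev, sizeof(dev));
        ioctl(fd, UI_DEV_CREATE);

        // Emit one relative move followed by a SYN_REPORT to flush it.
        struct input_event ev;
        memset(&ev, 0, sizeof(ev));
        ev.type = EV_REL; ev.code = REL_X; ev.value = 10; write(fd, &ev, sizeof(ev));
        ev.type = EV_REL; ev.code = REL_Y; ev.value = 5;  write(fd, &ev, sizeof(ev));
        ev.type = EV_SYN; ev.code = SYN_REPORT; ev.value = 0; write(fd, &ev, sizeof(ev));

        ioctl(fd, UI_DEV_DESTROY);
        close(fd);
        return 0;
    }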
In the end I decided to just draw "cursor" objects on the screen and set up each input device to control a separate "cursor" object. This seemed like the best idea as we were short on time.