Our requirement is to develop a remote sound system that supports all Windows versions from Windows XP onwards.
It is something like http://www.elusiva.com/products/RemoteSound/
I would like to describe the functionalities more clearly here...
Microsoft has three different options over RDP, using a virtual channel:
Play on this computer
Don't play
Play in Remote System
For that, Microsoft uses RdpendP.dll and a virtual channel to control those three options.
Our case is the reverse of Microsoft's: when a user selects the "Play in Remote System" option, the sound should be played on the local system, not on the remote system, using our own virtual channel which we have registered. Secondly, when the user selects the "Play on this computer" option, the sound should be played on the remote system, not on the local system, with low latency.
Our approach:
We created a virtual channel client DLL and registered it under HKEY_CURRENT_USER\Software\Microsoft\Terminal Server Client\Default\AddIns. We are now able to open a virtual channel with the terminal server.
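Roughly, such an add-in boils down to the following sketch of the documented client-side virtual channel entry points from cchannel.h; the channel name "SNDCH" is only a placeholder and all audio handling is left out:

// Minimal sketch of an mstsc client-side virtual channel add-in (cchannel.h API).
// The channel name "SNDCH" and all processing logic are placeholders.
// The DLL must export VirtualChannelEntry (e.g. via a .def file).
#include <windows.h>
#include <cchannel.h>

static LPVOID g_initHandle = NULL;
static DWORD  g_openHandle = 0;
static CHANNEL_ENTRY_POINTS g_entry;

// Called when data written with WTSVirtualChannelWrite arrives from the server.
static VOID VCAPITYPE OpenEvent(DWORD openHandle, UINT event,
                                LPVOID pData, UINT32 dataLength,
                                UINT32 totalLength, UINT32 dataFlags)
{
    if (event == CHANNEL_EVENT_DATA_RECEIVED) {
        // pData/dataLength hold (a chunk of) the audio payload:
        // hand it to the local render path here.
    }
}

static VOID VCAPITYPE InitEvent(LPVOID initHandle, UINT event,
                                LPVOID pData, UINT dataLength)
{
    if (event == CHANNEL_EVENT_CONNECTED) {
        // Session is up: open the channel so data can flow.
        g_entry.pVirtualChannelOpen(g_initHandle, &g_openHandle,
                                    (PCHAR)"SNDCH", OpenEvent);
    }
}

// Exported entry point; mstsc calls this for every registered add-in DLL.
extern "C" BOOL VCAPITYPE VirtualChannelEntry(PCHANNEL_ENTRY_POINTS pEntryPoints)
{
    g_entry = *pEntryPoints;

    CHANNEL_DEF channel = {};
    lstrcpynA(channel.name, "SNDCH", CHANNEL_NAME_LEN + 1);

    return g_entry.pVirtualChannelInit(&g_initHandle, &channel, 1,
                                       VIRTUAL_CHANNEL_VERSION_WIN2000,
                                       InitEvent) == CHANNEL_RC_OK;
}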
Problems:
The first problem is how to redirect sound from the terminal server to the client (so that when a media player is playing on the terminal server, the audio is heard on the client system). We tried using WASAPI to capture sound from the terminal server's endpoint device (speaker), WTSVirtualChannelWrite to write the audio data to the virtual channel, and then rendering on the client side by writing into the client endpoint device's buffer (GetBuffer), but I think this is a very bad approach and the latency is huge.
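For illustration, the server side of this capture-and-forward approach boils down to something like the following sketch (WASAPI loopback requires Vista or later; "SNDCH" is the same placeholder channel name as above, and error handling and format negotiation are omitted):

// Server-side sketch: capture the default render endpoint in loopback mode
// and push the raw frames down a static virtual channel.
#include <windows.h>
#include <wtsapi32.h>
#include <mmdeviceapi.h>
#include <audioclient.h>
#pragma comment(lib, "wtsapi32.lib")

void CaptureAndForward()
{
    CoInitialize(NULL);

    IMMDeviceEnumerator *enumr = NULL;
    CoCreateInstance(__uuidof(MMDeviceEnumerator), NULL, CLSCTX_ALL,
                     __uuidof(IMMDeviceEnumerator), (void**)&enumr);

    IMMDevice *device = NULL;
    enumr->GetDefaultAudioEndpoint(eRender, eConsole, &device);

    IAudioClient *client = NULL;
    device->Activate(__uuidof(IAudioClient), CLSCTX_ALL, NULL, (void**)&client);

    WAVEFORMATEX *fmt = NULL;
    client->GetMixFormat(&fmt);
    // Loopback mode captures whatever is being rendered on this endpoint.
    client->Initialize(AUDCLNT_SHAREMODE_SHARED, AUDCLNT_STREAMFLAGS_LOOPBACK,
                       10000000 /* 1 s buffer, in 100-ns units */, 0, fmt, NULL);

    IAudioCaptureClient *capture = NULL;
    client->GetService(__uuidof(IAudioCaptureClient), (void**)&capture);
    client->Start();

    HANDLE chan = WTSVirtualChannelOpen(WTS_CURRENT_SERVER_HANDLE,
                                        WTS_CURRENT_SESSION, (LPSTR)"SNDCH");
    for (;;) {
        UINT32 frames = 0; BYTE *data = NULL; DWORD flags = 0;
        capture->GetNextPacketSize(&frames);
        if (frames == 0) { Sleep(5); continue; }
        capture->GetBuffer(&data, &frames, &flags, NULL, NULL);
        ULONG written = 0;
        WTSVirtualChannelWrite(chan, (PCHAR)data,
                               frames * fmt->nBlockAlign, &written);
        capture->ReleaseBuffer(frames);
    }
}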
So what we think is that, instead of capturing and re-rendering, it would be better to redirect the sound that is playing on the terminal server directly to the client audio device. But how can that be implemented using virtual channels and the Terminal Services APIs, or any other Windows APIs?
What registry changes would need to be made for that?
The second problem is how to add a dummy speaker to the Volume Mixer's device list (a combo box appears for the playback device; there we would like two different speaker entries with their properties: one the Realtek HD speaker as supported, and the other our own speaker with only generic speaker properties).
Note: how can this be achieved without writing a virtual audio driver or our own audiodrv?
The third problem is how to support our own codec (such as Vorbis).
(We created an .acm file, but how do we register and use it? We have gone through the various ACM functions such as acmDriverAdd, acmDriverClose and acmDriverOpen, but we have no idea how to implement this.)
The fourth problem is how to capture and render microphone audio to the client system (i.e., capture from the terminal server's microphone to the local capture device and also play it back there).
Again, the requirement is that this should be supported on all Windows versions from Windows XP onwards.
You could always use Icecast for this. You just need some scripting and Edcast and you should be set.
Other options include Mumble.
Related
Link to the bug report on 'Feedback Hub'
An audio endpoint device, from here on referred to as 'endpoint', is a physical or virtual audio output or input device.
With the Windows 10 April 2018 Update (version 1803), the long-overdue "App volume and device preferences" settings were introduced. These settings allow more control over audio stream management, as it is now possible to set different endpoints for different applications, regardless of whether a particular application comes with its own endpoint selection.
However, there is an issue where the audio of a program whose endpoint is non-default is streamed through the default endpoint (or not at all) after the program has been closed and launched again, although the endpoint is still displayed correctly in the settings:
As far as I know the issue can be recreated on a Windows 10 machine (version 1803 or higher) with any virtual or physical endpoint and an affected program. I used 'VLC Media Player' in this example (disregarding the fact that it comes with an endpoint selection) as it is well known and widely accessible, which should make it easier to recreate the issue.
What I'm searching for...
... is a programmatic solution to switch between endpoints, which ideally could be launched as a script to set the correct endpoint when an application is launched.
For my purpose it would be enough to have to adjust the device instance path manually, as the device would always be the same, but I'm not going to complain about a solution which retrieves the device instance path from the registry, either.
Defined endpoints and the device instance path of the device they are using can be retrieved from the subkeys of the key HKEY_USERS\# YOUR SID #\Software\Microsoft\Multimedia\Audio\DefaultEndpoint. I don't know how Windows generates the names of the subkeys or where they can be found. If I had to take a wild guess, I'd say these are application IDs (feel free to correct me if I'm wrong).
The device instance path itself can be found in the Device Manager (under 'Audio inputs and outputs' double click the desired device, navigate to the tab 'Details' and select 'Device instance path' from the 'Property' drop-down menu).
Additionally the entry about Audio Endpoint Devices and Stream Management in the Microsoft Docs might be helpful, but that is way above my head.
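For reference, enumerating the endpoints themselves is the documented part; below is a minimal C++ sketch (Vista or later) that only lists each active render endpoint's ID string and friendly name. The per-application assignment is the part I cannot find a documented API for:

// Minimal sketch: list active audio render endpoints with their IDs and
// friendly names using the documented MMDevice API.
#include <windows.h>
#include <mmdeviceapi.h>
#include <functiondiscoverykeys_devpkey.h>
#include <stdio.h>

int main()
{
    CoInitialize(NULL);

    IMMDeviceEnumerator *enumr = NULL;
    CoCreateInstance(__uuidof(MMDeviceEnumerator), NULL, CLSCTX_ALL,
                     __uuidof(IMMDeviceEnumerator), (void**)&enumr);

    IMMDeviceCollection *devices = NULL;
    enumr->EnumAudioEndpoints(eRender, DEVICE_STATE_ACTIVE, &devices);

    UINT count = 0;
    devices->GetCount(&count);
    for (UINT i = 0; i < count; ++i) {
        IMMDevice *dev = NULL;
        devices->Item(i, &dev);

        LPWSTR id = NULL;
        dev->GetId(&id);                       // endpoint ID string

        IPropertyStore *props = NULL;
        dev->OpenPropertyStore(STGM_READ, &props);
        PROPVARIANT name;
        PropVariantInit(&name);
        props->GetValue(PKEY_Device_FriendlyName, &name);

        wprintf(L"%s\n  %s\n", name.pwszVal, id);

        PropVariantClear(&name);
        CoTaskMemFree(id);
        props->Release();
        dev->Release();
    }
    devices->Release();
    enumr->Release();
    CoUninitialize();
    return 0;
}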
A possible but impractical workaround...
... would be, to manually set another endpoint for the application and switch back to desired endpoint at every launch of said application (as shown above).
But not only does this take at least 10 seconds at each and every launch, you might also forget to do it, as the audio might just get streamed through the default endpoint *¹.
Alternatively, no audio is streamed at all *², or in some cases it actually works *³.
*¹ e.g.: VLC Media Player, Tom Clancy's Rainbow Six Siege (although the audio will be streamed correctly during the splash screens)
*² e.g.: Call of Duty 4: Modern Warfare, Call of Duty: Modern Warfare 2, Call of Duty: Modern Warfare 3
*³ e.g.: Windows Media Player, Microsoft Edge, Firefox
Observations
VLC Media Player comes with an endpoint selection, but so does TeamSpeak 3 and, unlike VLC, it skips the Windows settings completely.
Call of Duty not streaming any audio is most likely connected to the engine, as I didn't encounter any other application doing something similar.
Windows Media Player, Microsoft Edge and Firefox are the only programs (I tested so far) which work fine. They have no endpoint selection (I'd know of) and will use the correct endpoint after closing and launching it again. It should be noted, however, that Firefox and Microsoft Edge will show multiple instances in the "App volume and device preferences" when adjusting the endpoint.
Disclaimer
I already tried two third-party programs: 'Audio Router', which didn't work at all, and 'CheVolume', which doesn't solve the issue and constantly crashes while trying to.
This question is based on one I asked over at Super User (here), where I didn't get an answer I was able to work with due to my lack of knowledge regarding actual programming (I'm only somewhat familiar with Batch and PowerShell). I'm well aware that neither Stack Overflow nor Super User are script-writing services; however, the issue has not been fixed with the Windows 10 October 2018 Update (1809), and I see this as a problem which affects not just me, so a solution would be helpful for multiple people after me. Feel free to write a comment or propose an edit if you see this differently.
I'm also not sure whether the tags 'audio-streaming' and 'endpoint' should be used in this context, please propose an edit if they shouldn't or you can think of any better.
Edit - 05/11/18
Using the 3rd party software 'EarTrumpet' I was able to overcome the issue with the 'Call of Duty' games (no audio at all after restarting), however, 'VLC Media Player' would not restart after I assigned a non-default endpoint with 'EarTrumpet' until I closed 'EarTrumpet' again and the issue with 'Tom Clancy's Rainbow Six Siege' remains the same.
Edit - 18/01/19
Added a link to a bug report I created on the 'Feedback Hub' two months ago.
Edit - 20/01/19
After doing some testing again it should be noted that having 'EarTrumpet' run in the background will keep a non-default endpoint for 'VLC Media Player' across restarts, however, 'VLC Media Player' will only (reliably) restart when the non-default endpoint was set in the 'App volume and device preferences'.
I do not have any solution regarding a programming language to handle such events.
But I can recommend the EarTrumpet app to handle this change more quickly: https://www.theverge.com/2018/6/13/17457778/eartrumpet-windows-10-audio-app
(Windows store: https://www.microsoft.com/en-us/p/eartrumpet/9nblggh516xp?ranMID=24542&ranEAID=nOD%2FrLJHOac&ranSiteID=nOD_rLJHOac-hUn6PgKuMKwQLdrzRqnPTA&epi=nOD_rLJHOac-hUn6PgKuMKwQLdrzRqnPTA&irgwc=1&OCID=AID681541_aff_7593_1243925&tduid=%28ir__qwqlg6jd0jba3y9hpnbvikaite2xk6kuyv9udtr100%29%287593%29%281243925%29%28nOD_rLJHOac-hUn6PgKuMKwQLdrzRqnPTA%29%28%29&irclickid=_qwqlg6jd0jba3y9hpnbvikaite2xk6kuyv9udtr100&activetab=pivot:overviewtab )
I will update the answer if I find an easy way to script/program a change of output for each app.
I connect to a remote Ubuntu 12.04 64-bit machine with the vncviewer application. But when I run an OpenGL application, it shows this exception:
Caught exception GLShader::GLShader: GL_ARB_shader_objects not supported while initializing rendering windows
But if I connect a monitor to the remote computer, it works well and the OpenGL application is displayed.
Is there any way to make the OpenGL application run in the remote window through vncviewer? Thanks!
UPDATED:
(Screenshots of the ~/.vnc/xstartup file on the remote Ubuntu 12.04 64-bit server and of the VNC Viewer client settings on a Windows 7 32-bit machine were attached here.)
Usually on Linux the VNC server is a dedicated variant of the Xorg X11 server (Xvnc) that uses a software-based renderer backend and has no GPU acceleration. I guess you're using an NVidia GPU with the NVidia proprietary drivers, or an AMD GPU with the AMD proprietary drivers, because otherwise the Mesa softpipe implementation would have kicked in.
If you really want to use the GPU you'll have to VNC into a running X11 session in which you start the x11vnc server.
Update
First things first: for the GPU to work, an X server must be running and have its output sent to the display connectors. Sorry, the current driver model doesn't allow for a purely off-screen, GPU-accelerated X11 server; this is not a limitation of the hardware, but of the Xorg X11 server implementation. This also means that whatever you're doing will be visible to whoever connects a monitor to the machine. At least we can make sure that nobody messes with the mouse and keyboard.
Now create a custom /etc/X11/xorg.vnc.conf consisting of this:
Section "ServerFlags"
Option "AllowEmptyInput" "true"
Option "AutoAddDevices" "off"
Option "DontZap" "false"
Option "DontVTSwitch" "true"
Option "HandleSpecialKeys" "Never"
EndSection
Section "Device"
Identifier "DeviceGPU"
Driver "nvidia"
EndSection
Next, implement a script that starts everything you want to run in that particular X11 session. Most of the time this would be something that launches the x11vnc server and then execs into the desktop environment, e.g.:
#!/bin/sh
x11vnc -display $DISPLAY &
exec startxfce4 # or whatever
I refer you to the x11vnc manpage for how to configure the authentication to use.
Lastly you should check that the Xorg server binary is SUID root; the NVidia driver is still not making full use of KMS and depends on the X server being started with full privileges.
Once these prerequisites are met you can start an X11 session that supports VNC using:
xinit $FULL_PATH_TO_YOUR_SESSION_SCRIPT -- $DISPLAY -config xorg.vnc.conf
where $DISPLAY is a free X11 display number.
I'm using C++ and libusb-win32 to try to communicate with a commercial USB device ... I don't know much about USB programming, but I want to send the device some commands that I learned from using a sniffer program. libusb-win32 seemed OK, but it looks like it can only be used with a device that uses the libusb driver.
I want to use it on a device with the "USB Composite Device" driver provided by Windows (usbccgp.sys) ... is that even possible? If not, how can I do this?
I just need to send some Control Transfers
This is currently not possible. libusb is designed around the Linux driver model where composite devices are treated as a single device by the system. Windows treats composite devices as multiple separate ones - a parent composite device and child devices for each interface.
As such libusb cannot access the child devices without first changing the parent driver to a libusb supported one. It can be done but then the device won't work with the vendor supplied software.
If you want to talk to a commercial device you need to contact the manufacturer and find out if there is an interface to that device that is published through their driver. Most manufacturers won't have a way to interface with generic control requests in a product. There may be an undocumented IOCTL, but again you'll need to work with them to get this information.
If you just want to hook the device and send it a control request then you need to replace the manufacturer's driver with the libusb driver. The problem here is that while you can get at the device it may not function the way you want unless you spoof what the manufacturer does (for example, the device might expect some vendor specific communication to get the device ready to interact with the host). If you do see problems then you can reverse engineer the vendor specific protocol by looking at the USB line through some hardware analyzer.
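Once an interface is bound to a libusb-compatible driver, sending the control request itself is simple. A rough sketch using the libusb-1.0 API (libusb-win32's older 0.1 API differs slightly); the VID/PID and the bmRequestType/bRequest/wValue/wIndex values are placeholders for whatever your sniffer captured:

// Sketch: send one vendor-specific control transfer with libusb-1.0.
#include <libusb.h>   // or <libusb-1.0/libusb.h> depending on your setup
#include <stdio.h>

int main(void)
{
    libusb_context *ctx = NULL;
    libusb_init(&ctx);

    // Placeholder VID/PID: replace with your device's values.
    libusb_device_handle *h =
        libusb_open_device_with_vid_pid(ctx, 0x1234, 0x5678);
    if (!h) { fprintf(stderr, "device not found / wrong driver\n"); return 1; }

    unsigned char data[8] = {0};
    // bmRequestType 0x40 = host-to-device, vendor request, device recipient.
    int r = libusb_control_transfer(h, 0x40, /*bRequest*/ 0x01,
                                    /*wValue*/ 0x0000, /*wIndex*/ 0x0000,
                                    data, sizeof(data), /*timeout ms*/ 1000);
    printf("control transfer returned %d\n", r);

    libusb_close(h);
    libusb_exit(ctx);
    return 0;
}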
Read USB Complete, it's a great introduction to the USB protocol and will help you understand more of what's going on between a USB device and your host PC.
Is it possible to control the Sony QX10/QX100 using Sony's Camera Remote API SDK from a C++ Windows program?
Thank you for your patience and time...
Yes, that should be possible. See https://developer.sony.com/develop/cameras/ for more details.
I answered this on another related question; you may want to check it out:
Windows compatibility with the Sony Camera Remote API
The API works with any device, OS and programming language; it's just HTTP calls, and camera discovery only needs some basic socket listening. Though to make it easy to connect to the camera, the device you're connecting from should support Wi-Fi Direct: http://en.wikipedia.org/wiki/Wi-Fi_Direct
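To give an idea of what such an HTTP call looks like from C++: the Camera Remote API is JSON-RPC over HTTP, so any HTTP client works. A rough sketch using libcurl; the URL below is a commonly used example address, and both it and the method name should be verified against the API reference and the device-description XML obtained via SSDP discovery:

// Rough sketch: trigger a still capture on a Camera Remote API device by
// POSTing a JSON-RPC request with libcurl. URL and method name are example
// values from the public API docs; verify them via SSDP discovery.
#include <curl/curl.h>
#include <iostream>

int main()
{
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *curl = curl_easy_init();

    const char *url  = "http://192.168.122.1:8080/sony/camera";
    const char *body =
        "{\"method\":\"actTakePicture\",\"params\":[],\"id\":1,\"version\":\"1.0\"}";

    struct curl_slist *headers = NULL;
    headers = curl_slist_append(headers, "Content-Type: application/json");

    curl_easy_setopt(curl, CURLOPT_URL, url);
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body);

    CURLcode rc = curl_easy_perform(curl);   // response JSON goes to stdout
    if (rc != CURLE_OK)
        std::cerr << "request failed: " << curl_easy_strerror(rc) << "\n";

    curl_slist_free_all(headers);
    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return 0;
}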
The problem is that these APIs are incredibly simplistic. There is no way to transfer/delete images off the memory card, you can't take pictures without a memory card, and you can't add a custom streaming service (only Ustream is supported). The device doesn't have an "always on" mode, so you need to physically walk up and turn it on or reset it (for cameras where this makes sense, like the AS100V). It's like Sony has one guy in the basement working on this.
How do I get full control of a remote computer, whose open port number and IP address I know, using C++?
I know that I need to establish a socket connection between the two computers, which I've already done. But how do I get the remote computer's screen image dynamically, so that I get a live view of what is happening?
I'm just looking for what I need to know to tackle this.
PS: I'm trying to implement this myself; I know there are many existing programs that do this.
OS: Windows 7 SP1.
If you just want to see the remote computer's screen, you can take screen captures following the instructions here or here.
Next you should send the bitmap data (pointed to by something like ULONG *pBitmap) over the network. You can put a header before each frame's data and a footer after it. On the receiving side you can detect each frame packet by its header and footer, ensuring that each frame's data has been received completely.
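A rough sketch of such framing over a connected Winsock socket; the magic values and the header layout are arbitrary, just something both sides agree on:

// Sketch: send one captured frame with a small header and footer so the
// receiver can find frame boundaries. Magic numbers and layout are arbitrary.
#include <winsock2.h>
#include <cstdint>
#pragma comment(lib, "ws2_32.lib")

#pragma pack(push, 1)
struct FrameHeader {
    uint32_t magic;      // e.g. 0xFEEDBEEF, marks the start of a frame
    uint32_t width;
    uint32_t height;
    uint32_t dataSize;   // number of pixel bytes that follow
};
#pragma pack(pop)

static bool SendAll(SOCKET s, const char *buf, int len)
{
    while (len > 0) {
        int n = send(s, buf, len, 0);
        if (n <= 0) return false;
        buf += n; len -= n;
    }
    return true;
}

bool SendFrame(SOCKET s, const void *pixels, uint32_t size,
               uint32_t width, uint32_t height)
{
    FrameHeader hdr = { 0xFEEDBEEF, width, height, size };
    uint32_t footer = 0xDEADF00D;                // end-of-frame marker

    return SendAll(s, (const char*)&hdr, sizeof(hdr)) &&
           SendAll(s, (const char*)pixels, (int)size) &&
           SendAll(s, (const char*)&footer, sizeof(footer));
}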
After receiving a frame, you display it with whatever GUI framework you are using.
This is quite complex. Things like Remote Desktop implement a virtual screen driver, a virtual keyboard driver and a virtual mouse driver. The virtual driver packages up what is going on (when enabled) and sends the data to the local computer, which has code to redraw the remote machine's graphics. The local machine, on the other hand (assuming you want to control the remote machine), sends keypresses and mouse movements to the remote machine to allow control. These are picked up by the remote machine's virtual keyboard and mouse drivers and injected into the system as if they were "real" keyboard and mouse input.
You could do a very simple version by simply doing a screen grab and sending that data to your local machine over the network. You may want to do some sort of "compare this image with the previous one and only send what changed" to avoid pushing too much data across the network.
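A minimal GDI screen-grab sketch of the kind described above; it captures the primary screen into a 32-bit DIB that could then be diffed against the previous frame and sent with something like the framing sketch shown earlier:

// Sketch: capture the primary screen into a 32-bit top-down DIB with GDI.
// Error handling omitted.
#include <windows.h>
#include <vector>

std::vector<BYTE> CaptureScreen(int &width, int &height)
{
    width  = GetSystemMetrics(SM_CXSCREEN);
    height = GetSystemMetrics(SM_CYSCREEN);

    HDC screenDC = GetDC(NULL);
    HDC memDC    = CreateCompatibleDC(screenDC);
    HBITMAP bmp  = CreateCompatibleBitmap(screenDC, width, height);
    HGDIOBJ old  = SelectObject(memDC, bmp);

    // Copy the visible screen into our memory bitmap.
    BitBlt(memDC, 0, 0, width, height, screenDC, 0, 0, SRCCOPY);
    SelectObject(memDC, old);                // deselect before GetDIBits

    BITMAPINFO bi = {};
    bi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
    bi.bmiHeader.biWidth       = width;
    bi.bmiHeader.biHeight      = -height;    // negative = top-down rows
    bi.bmiHeader.biPlanes      = 1;
    bi.bmiHeader.biBitCount    = 32;
    bi.bmiHeader.biCompression = BI_RGB;

    std::vector<BYTE> pixels((size_t)width * height * 4);
    GetDIBits(screenDC, bmp, 0, height, pixels.data(), &bi, DIB_RGB_COLORS);

    DeleteObject(bmp);
    DeleteDC(memDC);
    ReleaseDC(NULL, screenDC);
    return pixels;
}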
One of many answers about screen shots here on SO:
How can I take a screenshot and save it as JPEG on Windows?
And there are interfaces to send keyboard and mouse events to the system, such as what is described here:
http://msdn.microsoft.com/en-us/library/ms171548%28v=vs.110%29.aspx
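For reference, the native Win32 counterpart is SendInput; a small sketch that moves the cursor to absolute screen coordinates, clicks, and presses one key (the coordinates and the virtual-key code are placeholders):

// Sketch: inject a mouse move + left click and one keystroke with SendInput.
#include <windows.h>

void ClickAndType(int x, int y)
{
    // SendInput's absolute mouse coordinates are normalized to 0..65535.
    INPUT in[4] = {};

    in[0].type = INPUT_MOUSE;
    in[0].mi.dx = x * 65535 / GetSystemMetrics(SM_CXSCREEN);
    in[0].mi.dy = y * 65535 / GetSystemMetrics(SM_CYSCREEN);
    in[0].mi.dwFlags = MOUSEEVENTF_MOVE | MOUSEEVENTF_ABSOLUTE;

    in[1].type = INPUT_MOUSE;
    in[1].mi.dwFlags = MOUSEEVENTF_LEFTDOWN;
    in[2].type = INPUT_MOUSE;
    in[2].mi.dwFlags = MOUSEEVENTF_LEFTUP;

    in[3].type = INPUT_KEYBOARD;
    in[3].ki.wVk = 'A';                      // press the 'A' key ...

    SendInput(4, in, sizeof(INPUT));

    in[3].ki.dwFlags = KEYEVENTF_KEYUP;      // ... and release it
    SendInput(1, &in[3], sizeof(INPUT));
}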