I would like to get a PS4 controller to work on my PC and then relay the data to a microcontroller via UART.
The problem is that I have no experience in C++ programming for Linux; the uC side is more familiar territory for me.
Nevertheless, I would like to write a program which can set up a connection with a PS4 controller and read all buttons, sticks, motions and the track-pad. Also, it would be nice to be able to control rumble and the LED color.
I am using Ubuntu 16.04 and have read that the PS4 controller has been natively supported since version 14.xx. But everything I can find about connecting it covers setting the controller up for Steam or gaming in general, not how to access that status information and work with it from C++.
On the internet I found some projects, but they are all at least 3-4 years old and target an old version of Ubuntu. Since the controller is natively supported, it would be nice to use it without outdated plugins/drivers, which are obsolete anyway. I also started looking into HID devices, but that seems more like a workaround; I was hoping to find, e.g., a library to include and use...
If someone can give me a hint, it would be greatly appreciated.
I did most of this on a Raspberry Pi, but it still applies here because the underlying drivers are largely the same.
Connecting: see https://wiki.gentoo.org/wiki/Sony_DualShock, in particular the part about bluetoothctl, and either follow that or get a wireless dongle (which should set itself up automatically).
Controls:
Your best bet is reading /dev/input/jsX, where X is the number of the controller you are connected to (normally 0). This works with normal file reads, so it should really be no problem. The file contains everything from button presses to trackpad events and all the other sensor data. It is event based, so when you press a button you get an 8-byte burst of data. The structure looks like this:
1. Timestamp lowest byte
2. Timestamp second lowest byte
3. Timestamp second highest byte
4. Timestamp highest byte
5. Value low byte
6. Value high byte (little-endian, same as the timestamp, on a typical PC or Pi)
7. Type (1 for a button, 2 for an axis, i.e. a stick or another analog value)
8. ID byte (the id of the button or axis you touched, e.g. 1 for X, 2 for square, 5 for left stick X)
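That layout matches struct js_event from <linux/joystick.h>, so you can read whole structs instead of assembling the bytes yourself. A minimal sketch, assuming the pad shows up as /dev/input/js0:

#include <fcntl.h>
#include <unistd.h>
#include <cstdio>
#include <linux/joystick.h>

int main()
{
    int fd = open("/dev/input/js0", O_RDONLY);
    if (fd < 0) { perror("open /dev/input/js0"); return 1; }

    struct js_event e;
    while (read(fd, &e, sizeof(e)) == sizeof(e))
    {
        e.type &= ~JS_EVENT_INIT;   // strip the synthetic "initial state" flag
        if (e.type == JS_EVENT_BUTTON)
            printf("button %u %s\n", e.number, e.value ? "pressed" : "released");
        else if (e.type == JS_EVENT_AXIS)
            printf("axis %u value %d\n", e.number, e.value);
    }
    close(fd);
    return 0;
}

From there, relaying the interesting fields to your microcontroller is just a matter of writing them to the serial device (e.g. /dev/ttyUSB0) instead of calling printf.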
LEDs:
This one is a bit more complicated. The only way I've found so far is accessing /sys/class/leds.
That folder should contain subfolders named something like 0005:054C:05C4.0009:<blue/green/red/global>
Those are your R/G/B channels. In each of these folders there are files called max_brightness and brightness. To change the color to 0x00ff00, for instance, write 0 to red, 255 to green and 0 to blue.
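Writing those files from C++ is plain file I/O as well (you will typically need root or a udev rule for write access). A minimal sketch; the 0005:054C:05C4.0009 prefix is only an example and will differ on your machine, so check ls /sys/class/leds first:

#include <fstream>
#include <string>

// Writes one value per channel; base is the common prefix of the three LED folders.
void setLedColor(const std::string& base, int r, int g, int b)
{
    std::ofstream(base + "red/brightness")   << r;   // 0 .. max_brightness (usually 255)
    std::ofstream(base + "green/brightness") << g;
    std::ofstream(base + "blue/brightness")  << b;
}

int main()
{
    // 0x00ff00: red = 0, green = 255, blue = 0
    setLedColor("/sys/class/leds/0005:054C:05C4.0009:", 0, 255, 0);
    return 0;
}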
Related
I am running Arch Linux on a Raspberry Pi and need to get the positioning data for 4 USB mice from a C++ application; that is, for each individual mouse I need to know how many pixels it has moved whenever it moves. I do not have X server on my system and would prefer to leave it that way unless necessary, because this is an embedded project that does not require a GUI and I would rather not waste space or overhead on X server.
The most useful thing I have found is this link https://www.kernel.org/doc/Documentation/input/input.txt but I cannot really figure out how to make it work for my purpose. As you can obviously tell, I am NOT experienced in Linux development, so please don't be too hard on me.
You open the device node for reading (using open), then read the structure defined in the document you linked (at the bottom of the document); it also says which header file to include. Note that those struct input_event records come from the event interface nodes (/dev/input/eventX, one per device), whereas /dev/input/mouseX delivers the older PS/2-style byte protocol.
I'm guessing you will get an event of type EV_REL for mouse-movement, with a code of REL_X or REL_Y for the direction of the movement, and the value is the number of units the mouse moved. Compare the timestamp to the timestamp of the previous event to see how fast it moves.
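A minimal sketch of that read loop, assuming one of the mice is /dev/input/event0 (check /proc/bus/input/devices to see which event node belongs to which physical mouse); for four mice you would open four such nodes and poll() them:

#include <fcntl.h>
#include <unistd.h>
#include <cstdio>
#include <linux/input.h>

int main()
{
    int fd = open("/dev/input/event0", O_RDONLY);
    if (fd < 0) { perror("open /dev/input/event0"); return 1; }

    struct input_event ev;
    while (read(fd, &ev, sizeof(ev)) == sizeof(ev))
    {
        if (ev.type == EV_REL && ev.code == REL_X)
            printf("moved %d units in x\n", ev.value);
        else if (ev.type == EV_REL && ev.code == REL_Y)
            printf("moved %d units in y\n", ev.value);
    }
    close(fd);
    return 0;
}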
I have a problem. I'm implementing the RFB protocol in my software to communicate with a VNC server, and I want to know how to get the size of the server's desktop.
I have already tried the framebuffer_width inside the ServerInit message, but it does not seem to represent the real size of the desktop. How do I get it?
My second question is about sending a PointerEvent message to the server.
To move the mouse, my software currently sets the pointer position to {0, 0}; when I send this to the VNC server it works. But when I add 5 to the x position, the pointer does not move by 5 pixels, it moves much farther than I want. I don't understand why. Can you help me, please?
Thanks for your answers !
Sounds like both of your problems could be a scaling issue in your client.
Some questions that might help you answer your own question (since you really need to post more information if you want a definitive answer):
How are you determining that the real size of the desktop is not what is sent as the width in the serverInit message? Perhaps you are starting the VNC server and assuming that it is using the same size as the current desktop on the server and in fact it is starting with a different default size. With VNC servers on *nix systems, the VNC server generally runs as a separate desktop from the main desktop and the size isn't necessarily the same.
Are you certain that you are treating the serverInit width and the pointerEvent x and y positions as 16-bit big-endian values? (See the sketch after this list.)
Are you (advertising and) getting a DesktopSize pseudo-encoding after the ServerInit? It's possible the server may be changing the frame buffer size after you connect.
What language/framework/etc. are you using to implement the client? Are you certain the rendering functions aren't being scaled somehow?
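On the 16-bit point: everything in RFB is big-endian (network byte order), including the framebuffer-width/height at the start of ServerInit and the x/y in PointerEvent. Here is a sketch of what a PointerEvent should look like on the wire; if you write x and y in your host's little-endian order instead, x = 5 goes out as 0x0500 = 1280, which would explain the pointer jumping much farther than expected:

#include <cstdint>

// Builds the 6-byte PointerEvent message: type (5), button mask, then x and y
// as unsigned 16-bit values, most significant byte first.
void makePointerEvent(uint8_t buttonMask, uint16_t x, uint16_t y, uint8_t out[6])
{
    out[0] = 5;                              // message-type: PointerEvent
    out[1] = buttonMask;                     // bit 0 = left button, bit 1 = middle, ...
    out[2] = static_cast<uint8_t>(x >> 8);   // x, high byte first
    out[3] = static_cast<uint8_t>(x & 0xFF);
    out[4] = static_cast<uint8_t>(y >> 8);   // y, high byte first
    out[5] = static_cast<uint8_t>(y & 0xFF);
}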
BTW, I've found the official RFB documentation to be somewhat lacking and I think these links are better for RFB reference:
http://tigervnc.sourceforge.net/cgi-bin/rfbproto
https://datatracker.ietf.org/doc/draft-levine-rfb/
I need to split a PCM audio stream with up to 16 channels into several stereo streams.
As I haven't found anything capable of doing that, I'm trying to write my first DirectShow filter.
Anything capable of splitting the audio would be very welcome, but I'm assuming I have to do it myself, so here's what I've done:
At first I tried to create a filter based on ITransformFilter. However, it seems to be designed for filters with exactly one input pin and one output pin. As I need several output pins, I set it aside; perhaps it can be adapted more easily than I thought, though, so any advice is highly appreciated.
Then I started over based on IBaseFilter. I managed to get somewhere: I create the necessary output pins when the input pin gets connected and destroy them when the input gets disconnected. However, when I connect any output pin to an ACM Wrapper (just to test it), the input tries to reconnect, destroying all my output pins.
I tried simply not destroying them, but then I checked the media type of my input pin and it had changed to a stereo stream. I am not calling QueryAccept from my code.
How could I avoid the reconnection, or what's the right way to do a demuxer filter?
Edit 2010-07-09:
I've come back to ITransformFilter, but creating the necessary pins myself. However, I've run into the same problem as with IBaseFilter: when I connect my output pin to an ACM Wrapper, the input pin changes its media type to 2 channels.
Not sure how to proceed now...
You can take a look at the DMOSample in the Windows Server 2003 R2 Platform SDK. It is also included in older DirectX SDKs, but not in newer Windows SDKs. You can locate it in Samples\Multimedia\DirectShow\DMO\DMOSample. Here is the documentation of this sample.
I have seen someone create a filter based on this which had a stereo input and two mono outputs. Unfortunately I cannot post the source code.
I have some questions about Directsound and windows mixers.
My goal is to enumerate all microphones and be able to change the input volume of each one.
I think I'm not far from the solution, but I can't find what is wrong in my code.
Here is what I have done:
- I enumerate all input devices and get a GUID for each one
- I use a method found in another topic to get the mixer ID corresponding to a DirectSound GUID (but I'm not sure it works)
- Then I get the id corresponding to the control in the mixer
- Then I can modify the volume
Here is the code: a vs2008 project
To test, I have connected two USB microphones plus the line-in microphone, and I visually check which sliders move. But unfortunately it's not the right one...
Here is a screenshot (img177.imageshack.us/img177/5189/mixers.jpg) of all my mixers opened in Windows XP.
Do you have an idea of what I am doing wrong? Is there an easier solution?
Bonus question: do you know if there is a way to tell, using DirectSound, whether a microphone is actually plugged into Line-in? Because Line-in is always detected as connected even when no microphone is plugged in.
Check these questions:
How to adjust microphone gain from C# (needs to work on XP & W7)
or
http://social.msdn.microsoft.com/Forums/en/isvvba/thread/05dc2d35-1d45-4837-8e16-562ee919da85
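If you stay with the legacy mixer API, the usual sequence is: open the mixer, find the wave-in destination line, find its volume control, then set the control details. A rough sketch, assuming mixer device 0; in your case you would pass the mixer ID you resolved from the DirectSound capture GUID:

#include <windows.h>
#include <mmsystem.h>
#pragma comment(lib, "winmm.lib")

// Sets the recording volume (0..65535) on the wave-in line of the given mixer.
bool SetMicVolume(UINT mixerId, DWORD volume)
{
    HMIXER hMixer = NULL;
    if (mixerOpen(&hMixer, mixerId, 0, 0, MIXER_OBJECTF_MIXER) != MMSYSERR_NOERROR)
        return false;

    // Find the wave-in destination line.
    MIXERLINE line = { sizeof(MIXERLINE) };
    line.dwComponentType = MIXERLINE_COMPONENTTYPE_DST_WAVEIN;
    if (mixerGetLineInfo((HMIXEROBJ)hMixer, &line, MIXER_GETLINEINFOF_COMPONENTTYPE) != MMSYSERR_NOERROR)
    { mixerClose(hMixer); return false; }

    // Find the volume control on that line.
    MIXERCONTROL ctrl = { sizeof(MIXERCONTROL) };
    MIXERLINECONTROLS ctrls = { sizeof(MIXERLINECONTROLS) };
    ctrls.dwLineID = line.dwLineID;
    ctrls.dwControlType = MIXERCONTROL_CONTROLTYPE_VOLUME;
    ctrls.cControls = 1;
    ctrls.cbmxctrl = sizeof(MIXERCONTROL);
    ctrls.pamxctrl = &ctrl;
    if (mixerGetLineControls((HMIXEROBJ)hMixer, &ctrls, MIXER_GETLINECONTROLSF_ONEBYTYPE) != MMSYSERR_NOERROR)
    { mixerClose(hMixer); return false; }

    // Apply the same level to all channels.
    MIXERCONTROLDETAILS_UNSIGNED value = { volume };
    MIXERCONTROLDETAILS details = { sizeof(MIXERCONTROLDETAILS) };
    details.dwControlID = ctrl.dwControlID;
    details.cChannels = 1;                       // 1 = uniform value for every channel
    details.cbDetails = sizeof(value);
    details.paDetails = &value;
    bool ok = mixerSetControlDetails((HMIXEROBJ)hMixer, &details,
                                     MIXER_SETCONTROLDETAILSF_VALUE) == MMSYSERR_NOERROR;
    mixerClose(hMixer);
    return ok;
}

If the slider that moves is not the one you expect, the mixer ID you derived from the DirectSound GUID is the first thing to double-check.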
When I run this code:
MIXERLINE MixerLine;
memset( &MixerLine, 0, sizeof(MIXERLINE) );
MixerLine.cbStruct = sizeof(MIXERLINE);
MixerLine.dwComponentType = MIXERLINE_COMPONENTTYPE_SRC_WAVEOUT;
mmResult = mixerGetLineInfo( (HMIXEROBJ)m_dwMixerHandle, &MixerLine, MIXER_GETLINEINFOF_COMPONENTTYPE );
Under XP, MixerLine.cChannels comes back as the number of channels that the sound card supports: often 2, these days often many more.
Under Vista MixerLine.cChannels comes back as one.
I have then been getting a MIXERCONTROL_CONTROLTYPE_VOLUME control and setting the volume for each channel that is supported, setting the volume control to different levels on different channels so as to pan music back and forth between the speakers (left to right).
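(For reference, setting different levels per channel with mixerSetControlDetails looks roughly like this sketch; dwVolumeControlID and m_dwMixerHandle stand in for the real control id and mixer handle:)

MIXERCONTROLDETAILS_UNSIGNED levels[2];
levels[0].dwValue = 65535;   // left channel at full volume
levels[1].dwValue = 16384;   // right channel attenuated, so the music pans left

MIXERCONTROLDETAILS details;
memset(&details, 0, sizeof(details));
details.cbStruct    = sizeof(MIXERCONTROLDETAILS);
details.dwControlID = dwVolumeControlID;    // the CONTROLTYPE_VOLUME control's id
details.cChannels   = 2;                    // one entry per channel (works under XP)
details.cbDetails   = sizeof(MIXERCONTROLDETAILS_UNSIGNED);
details.paDetails   = levels;

mixerSetControlDetails((HMIXEROBJ)m_dwMixerHandle, &details, MIXER_SETCONTROLDETAILSF_VALUE);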
Obviously under Vista this approach isn't working since there is only one channel. I can set the volume and it is for both channels at the same time.
I tried to get a MIXERCONTROL_CONTROLTYPE_PAN for this device, but that was not a valid control.
So, the question for all you MMSystem experts is this: what type of control do I need to get to adjust the left/right balance? Alternately, is there a better way? I would like a solution that works with both XP and Vista.
Computer details: Running Vista Ultimate 32-bit SP1 and all the latest patches. Audio is provided by a Creative Audigy 2 ZS card with 4 speakers attached, which can all be properly addressed (controlled) through Vista's sound panel. The driver is the latest on Creative's site (SBAX_PCDRV_LB_2_18_0001). Vista's sound is not set to mono, and all channels are visible and controllable from the sound panel.
Running the program in "XP Compatibility Mode" does not change the behaviour of this problem.
If you run your application in "XP compatibility" mode, the mixer APIs should work much closer to the way they did in XP.
If you're not running in XP mode, then the mixer APIs reflect the mix format: if your PC's audio solution is configured for mono, you'll see only one channel, but if your machine is configured for multichannel output the mixer APIs should reflect that.
You can run the speaker tuning wizard to determine the # of channels configured for your audio solution.
Long time Microsoftie Larry Osterman has a blog where he discusses issues like this because he was on the team that redid all the audio stuff in Vista.
In the comments to this blog post he seems to indicate that application controlled balance is not something they see the need for:
CN, actually we're not aware of ANY situations in which it's appropriate for an application to control its balance. Having said that, we do support individual channel volumes for applications, but it is STRONGLY recommended that apps don't use it.
He also indicates that panning the sound from one side to the other can be done, but it is dependent on whether the hardware supports it:
Joku, we're exposing the volume controls that the audio solution implements. If it can do pan, we do pan (we actually expose separate sliders for the left and right channels).
So that explains why the MIXERCONTROL_CONTROLTYPE_PAN thing failed -- the audio hardware on your system does not support it.
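For completeness, the per-application channel volumes Larry mentions are exposed on Vista and later through the Core Audio API as IAudioStreamVolume on your own audio client, so you can pan your own stream even if the device exposes no pan control. A rough sketch (error handling and the actual playback omitted); it affects only your application's stream, not the device:

#include <windows.h>
#include <mmdeviceapi.h>
#include <audioclient.h>
#pragma comment(lib, "ole32.lib")

int main()
{
    CoInitialize(NULL);

    IMMDeviceEnumerator *pEnum = NULL;
    CoCreateInstance(__uuidof(MMDeviceEnumerator), NULL, CLSCTX_ALL,
                     __uuidof(IMMDeviceEnumerator), (void**)&pEnum);

    IMMDevice *pDevice = NULL;
    pEnum->GetDefaultAudioEndpoint(eRender, eConsole, &pDevice);

    IAudioClient *pClient = NULL;
    pDevice->Activate(__uuidof(IAudioClient), CLSCTX_ALL, NULL, (void**)&pClient);

    WAVEFORMATEX *pwfx = NULL;
    pClient->GetMixFormat(&pwfx);
    pClient->Initialize(AUDCLNT_SHAREMODE_SHARED, 0, 10000000, 0, pwfx, NULL);

    // Per-channel volume for this application's stream only.
    IAudioStreamVolume *pVol = NULL;
    pClient->GetService(__uuidof(IAudioStreamVolume), (void**)&pVol);

    UINT32 channels = 0;
    pVol->GetChannelCount(&channels);
    if (channels >= 2)
    {
        pVol->SetChannelVolume(0, 1.0f);    // left at full level
        pVol->SetChannelVolume(1, 0.25f);   // right attenuated, so the audio pans left
    }

    // ... render audio through pClient here ...

    pVol->Release();
    pClient->Release();
    pDevice->Release();
    pEnum->Release();
    CoTaskMemFree(pwfx);
    CoUninitialize();
    return 0;
}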