psmouse driver: inverting the mouse - C++

I just want to have fun with the mouse driver under Ubuntu Linux. I have got psmouse-base.c, and I can compile it and load it into the kernel as well. The only thing I want to do is invert the mouse. I have found this function, which receives the data from the mouse:
psmouse_interrupt(struct serio *serio, unsigned char data, unsigned int flags)
where the received byte is stored in the unsigned char data. I figured out that 6 bytes represent every possible mouse state, so it receives 6 bytes and then 6 again, but I cannot figure out what these bytes stand for. If somebody could tell me the answer, or point me to documentation which describes it, I would be happy.
I think I have found something. Since I use a touchpad, I keep receiving 6 bytes. I found the description of the data here: www.synaptics.com/sites/default/files/511-000024-01a.pdf (pages 2 and 3). According to this documentation, the direction of the movement can be found in the 4th byte's 4th and 5th bits. But the following code does nothing:
if (psmouse->pktcnt == 3)
{
    data |= 1 << 4;
    data |= 1 << 5;
}
I would have assumed that this would let me move the mouse in only one direction on each of the x and y axes.
I have found out that the driver responsible for my touchpad is elantech.c.
x1 = ((packet[1] & 0x0f) << 8) | packet[2];
y1 = etd->y_max - (((packet[4] & 0x0f) << 8) | packet[5]);
These lines calculate the movement. I could reverse my touchpad on the x axis, but it was just luck; I have no idea why it works. The following lines do it:
psmouse->packet[1] *= -1;
psmouse->packet[2] *= -1;
However, I would have assumed that the next lines do the same thing as the previous two, but they don't:
psmouse->packet[1] ^= 0x80;
psmouse->packet[2] ^= 0x80;
And I wasn't able to invert the mouse on the y axis. Any ideas?

You are probably better off modifying the code that handles the packet, psmouse_process_byte, rather than the interrupt handler itself.
It reports the X&Y movements here, and it shouldn't be very hard to make it move the other way around.
Basically, all you need to do is reverse the XNG/YNG bits (packet[0] bits 4 & 5 respectively).
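For the standard 3-byte PS/2 packets that psmouse_process_byte handles, that amounts to rebuilding the 9-bit two's-complement deltas with the sign reversed. A minimal, untested sketch of the idea (the helper name is made up; the delta expressions mirror the ones psmouse-base.c uses when reporting REL_X/REL_Y):

/* Hypothetical helper: negate the X/Y deltas of a standard 3-byte
 * PS/2 packet before the movement is reported. */
static void psmouse_invert_axes(unsigned char *packet)
{
    /* Rebuild the 9-bit two's-complement deltas: the sign bits XNG/YNG
     * live in packet[0] bits 4 and 5, the magnitudes in packet[1]/[2]. */
    int dx = packet[1] ? (int) packet[1] - (int) ((packet[0] << 4) & 0x100) : 0;
    int dy = packet[2] ? (int) packet[2] - (int) ((packet[0] << 3) & 0x100) : 0;

    dx = -dx;
    dy = -dy;

    /* Write the negated deltas back into the packet. */
    packet[1] = dx & 0xff;
    packet[2] = dy & 0xff;
    packet[0] = (packet[0] & ~0x30)
              | (dx < 0 ? 0x10 : 0)
              | (dy < 0 ? 0x20 : 0);
}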
Here's one page that describes the packet format:
http://www.computer-engineering.org/ps2mouse/
Another here:
http://wiki.osdev.org/Mouse_Input

Related

Understanding some sample code?

I'm trying to get my head around this Arduino sample for the MPU-9150 sensor. It is connected over the I2C bus, and it uses this function to sample from the sensor:
int MPU9150_readSensor(int addrL, int addrH){
    Wire.beginTransmission(MPU9150_I2C_ADDRESS);
    Wire.write(addrL);
    Wire.endTransmission(false);
    Wire.requestFrom(MPU9150_I2C_ADDRESS, 1, true);
    byte L = Wire.read();

    Wire.beginTransmission(MPU9150_I2C_ADDRESS);
    Wire.write(addrH);
    Wire.endTransmission(false);
    Wire.requestFrom(MPU9150_I2C_ADDRESS, 1, true);
    byte H = Wire.read();

    return (int16_t)((H<<8)+L);
}
addrL and addrH are two register addresses, for instance:
#define MPU9150_TEMP_OUT_H 0x41
#define MPU9150_TEMP_OUT_L 0x42
What I get back over the serial monitor when the values are printed is always -1. I think this is something to do with the (int16_t) cast, whose removal does nothing to the printed value, but I'm not sure. Also, I'm not sure why there are two addresses for getting a single value, H and L, and why they are bit-shifted and added together. Is it something to do with the I2C bus?
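For the bit-shifting part, here is a minimal stand-alone sketch (with made-up register values, in plain C++ rather than Arduino) of how two 8-bit registers are reassembled into one signed 16-bit reading:

#include <cstdint>
#include <cstdio>

int main() {
    uint8_t H = 0xFE;                        // high byte, e.g. from TEMP_OUT_H
    uint8_t L = 0x6C;                        // low byte,  e.g. from TEMP_OUT_L
    int16_t value = (int16_t)((H << 8) + L); // 0xFE6C reinterpreted as signed
    printf("%d\n", value);                   // prints -404
    return 0;
}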
I have a big gap in my knowledge here and am trying to understand this, so any help would be much appreciated.
Thanks
Alex

C++ ASIO, accessing buffers

I have no experience in audio programming, and C++ is quite a low-level language, so I'm having a little trouble with it. I work with the ASIO SDK 2.3, downloaded from http://www.steinberg.net/en/company/developers.html.
I am writing my own host based on the example inside the SDK.
For now I've managed to go through the whole sample and it looks like it's working. I have an external sound card connected to my PC. I've successfully loaded the driver for this device, configured it, handled callbacks, converted data from analog to digital, and other common stuff.
And the part where I am stuck now:
When I play a track through my device I can see bars moving in the mixer (the device's software), so the device is connected the right way. In my code I've picked the inputs and outputs with the names of the bars that are moving in the mixer. I've also used ASIOCreateBuffers() to create a buffer for each input/output.
Now correct me if I am wrong:
When ASIOStart() is called and the driver is in the running state, and I feed a sound signal into my external device, I believe the buffers get filled with data, right?
I am reading the documentation but I am a bit lost - how can I access the data sent by the device to the application, stored in the INPUT buffers? Or the signal? I need it for signal analysis, or maybe for recording in the future.
EDIT: If I have made it too complicated, then in a nutshell my question is: how can I access the input stream data from code? I don't see any objects/callbacks in the documentation that let me do so.
The hostsample in the ASIO SDK is pretty close to what you need. In the bufferSwitchTimeInfo callback there is some code like this:
for (int i = 0; i < asioDriverInfo.inputBuffers + asioDriverInfo.outputBuffers; i++)
{
    int ch = asioDriverInfo.bufferInfos[i].channelNum;
    if (asioDriverInfo.bufferInfos[i].isInput == ASIOTrue)
    {
        char* buf = asioDriverInfo.bufferInfos[i].buffers[index];
        ....
Inside of that if block, asioDriverInfo.bufferInfos[i].buffers[index] is a pointer to the raw audio data (index is a parameter to the method).
The format of the buffer depends on the driver, and it can be discovered by testing asioDriverInfo.channelInfos[i].type. The possible formats are 32-bit int LSB first, 32-bit int MSB first, and so on; you can find the list of values in the ASIOSampleType enum in asio.h. At this point you'll want to convert the samples to some common format for the downstream signal-processing code. If you're doing signal processing, you'll probably want to convert to double. The file host\asioconvertsample.cpp will give you some idea of what's involved in the conversion. The most common format you're going to encounter is probably INT32 LSB. Here is how you'd convert it to double:
for (int i = 0; i < asioDriverInfo.inputBuffers + asioDriverInfo.outputBuffers; i++)
{
    int ch = asioDriverInfo.bufferInfos[i].channelNum;
    if (asioDriverInfo.bufferInfos[i].isInput == ASIOTrue)
    {
        switch (asioDriverInfo.channelInfos[i].type)
        {
        case ASIOSTInt32LSB:
        {
            double* pDoubleBuf = new double[_bufferSize];
            int* pSrc = (int*)asioDriverInfo.bufferInfos[i].buffers[index];
            for (int j = 0; j < _bufferSize; ++j)
            {
                pDoubleBuf[j] = pSrc[j] / (double)0x7fffffff;
            }
            // now pDoubleBuf contains one channel's worth of samples in the range -1.0 to 1.0
            break;
        }
        // and so on...
Thank you very much. Your answer helped quite a lot, but as I am a bit inexperienced with C++ :P I find it a bit problematic.
In general I've written my own host based on hostsample. I didn't implement the asioDriverInfo structure and use plain variables for now.
My first problem was:
char* buf = asioDriverInfo.bufferInfos[i].buffers[index];
as I got an error that a (void*) can't be converted to a char*, but this probably solved the problem:
char* buf = static_cast<char*>(bufferInfos[i].buffers[doubleBufferIndex]);
My second problem is with the data conversion. I've checked the file you recommended, but I find it a bit like black magic. For now I am trying to follow your example:
for (int i = 0; i < inputBuffers + outputBuffers; i++)
{
    if (bufferInfos[i].isInput)
    {
        switch (channelInfos[i].type)
        {
        case ASIOSTInt32LSB:
        {
            double* pDoubleBuf = new double[buffSize];
            for (int j = 0; j < buffSize; ++j)
            {
                pDoubleBuf[j] = bufferInfos[i].buffers[doubleBufferIndex] / (double)0x7fffffff;
            }
            break;
        }
    }
}
I get an error there:
pDoubleBuf[j] = bufferInfos[i].buffers[doubleBufferIndex] / (double)0x7fffffff;
which is:
error C2296: '/' : illegal, left operand has type 'void *'
What I don't get is what type I should cast the buffers[doubleBufferIndex] pointer to in order to make the division work. :P
PS. I am sure the ASIOSTInt32LSB data type is the right one for my PC.
The ASIO input and output buffers are accessible through void pointers, and using memcpy or memmove to access an I/O buffer creates a memory copy, which is to be avoided if you are doing real-time processing. I would suggest casting the pointer to int* so you can access the samples directly.
It's also very slow in real-time processing to convert samples one at a time when you have 100+ audio channels, while AVX2 is supported on most CPUs.
_mm256_loadu_si256() and _mm256_cvtepi32_ps() will do the conversion much faster.
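A minimal sketch of that approach (assuming AVX2 is available, a buffer length that is a multiple of 8, and the ASIOSTInt32LSB sample format; the function name is made up):

#include <immintrin.h>
#include <cstdint>
#include <cstddef>

// Convert 32-bit int samples to floats in [-1, 1], eight at a time.
void int32_to_float_avx2(const int32_t* src, float* dst, size_t n)
{
    const __m256 scale = _mm256_set1_ps(1.0f / 2147483647.0f); // 1 / 0x7fffffff
    for (size_t i = 0; i < n; i += 8)
    {
        __m256i v = _mm256_loadu_si256((const __m256i*)(src + i)); // load 8 ints
        __m256  f = _mm256_cvtepi32_ps(v);                         // ints -> floats
        _mm256_storeu_ps(dst + i, _mm256_mul_ps(f, scale));        // scale down
    }
}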

C++ reading 16bit Wav file

I'm having trouble reading in a 16-bit .wav file. I have read in the header information; however, the conversion does not seem to work.
For example, in Matlab if I read in wave file I get the following type of data:
-0.0064, -0.0047, -0.0051, -0.0036, -0.0046, -0.0059, -0.0051
However, in my C++ program the following is returned:
0.960938, -0.00390625, -0.949219, -0.00390625, -0.996094, -0.00390625
I need the data to be represented the same way. Now, for 8 bit .wav files I did the following:
uint8_t c;
for (unsigned i = 0; i < size; i++)
{
    c = (unsigned)(unsigned char)(data[i]);
    double t = (c - 128)/128.0;
    rawSignal.push_back(t);
}
This worked. However, when I did the same for 16-bit:
uint16_t c;
for (unsigned i = 0; i < size; i++)
{
    c = (signed)(signed char)(data[i]);
    double t = (c - 256)/256.0;
    rawSignal.push_back(t);
}
it does not work and shows the output (above).
I'm following the standard found here.
data is a char array and rawSignal is a std::vector<double>. I'm probably just handling the conversion wrong, but I cannot seem to find out where. Does anyone have any suggestions?
Thanks
EDIT:
This is what it is now displaying (in a graph): [graph image not included]
This is what it should be displaying: [graph image not included]
There are a few problems here:
8-bit wavs are unsigned, but 16-bit wavs are signed. Therefore, the subtraction step given in the answers by Carl and Jay is unnecessary. I presume they just copied it from your code, but it is wrong.
16-bit wavs have a range from -32,768 to 32,767, not from -256 to 255, so the scaling you are using is incorrect anyway.
16-bit samples are 2 bytes, thus you must read two bytes to make one sample, not one. You appear to be reading one character at a time. When you read the bytes, you may also have to swap them if your native endianness is not little-endian.
Assuming a little-endian architecture, your code would look more like this (very close to Carl's answer):
for (int i = 0; i < size; i += 2)
{
    int16_t c = ((unsigned char)data[i + 1] << 8) | (unsigned char)data[i];
    double t = c / 32768.0;
    rawSignal.push_back(t);
}
and for a big-endian architecture:
for (int i = 0; i < size; i += 2)
{
    int16_t c = ((unsigned char)data[i] << 8) | (unsigned char)data[i + 1];
    double t = c / 32768.0;
    rawSignal.push_back(t);
}
That code is untested, so please LMK if it doesn't work.
(First of all, about little-endian/big-endian-ness: WAV is just a container format; the data encoded in it can be in countless formats, and most of the codecs are lossy (MPEG Layer-3, aka MP3 - yes, such a stream can be "packaged" into a WAV - plus various CCITT and other codecs). You are assuming that you are dealing with some kind of PCM format, where you see the actual wave in raw form, with no lossy transformation applied to it. The endianness depends on the codec which produced the stream. See:
Is the endianness of format params guaranteed in RIFF WAV files?)
It's also a question whether one PCM sample is a linearly sampled integer, or whether there is some scaling, log scale, or other transformation behind it. The regular PCM wav files I have encountered were simple linear-scale samples, but I'm not working in the audio recording or production industry.
So, a path to your solution:
Make sure that you are dealing with a regular 16-bit PCM-encoded RIFF WAV file (a check is sketched after this list).
While reading the stream, always read two bytes (chars) at a time and convert the two chars into a 16-bit short. People have shown this before me.
The waveform you show clearly suggests that you either did not estimate the frequency well, or you have one mono channel instead of stereo. The sampling rate (44.1 kHz, 22 kHz, 11 kHz, 8 kHz, etc.) is just as important as the resolution (8-bit, 16-bit, 24-bit, etc.). In the first case you may have had stereo data: if you read it in as mono, you may not notice it. In the second case, if you have mono data, you'll run out of samples halfway through reading. That's what seems to happen according to your graphs. As for the other possible cause: lower sampling resolutions (and 16-bit is also on the lower side) are often paired with lower sampling rates. So if your horizontal axis is recording time, and you think you have 22 kHz data but it's actually just 11 kHz, then again you'll run out of actual samples halfway through and read memory garbage. So it's one of these.
Make sure that you treat your loop iterator variable and size correctly. It seems that size tells you how many bytes you have, so you'll have exactly half as many short-integer samples. Notice that Bjorn's solution correctly increments i by 2 because of that.
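For the first point, here is a minimal sketch of such a check (it assumes the canonical 44-byte header with the "fmt " chunk immediately after the RIFF/WAVE tags; real files can carry extra chunks, so a robust reader should walk the chunk list instead):

#include <cstdint>
#include <cstdio>

// Returns true if the file looks like 16-bit linear PCM, and reports the
// channel count and sample rate read from the "fmt " chunk.
bool checkWavFormat(FILE* f, uint16_t& channels, uint32_t& sampleRate)
{
    uint8_t h[44];
    if (fread(h, 1, 44, f) != 44) return false;

    uint16_t audioFormat   = h[20] | (h[21] << 8); // 1 = linear PCM
    channels               = h[22] | (h[23] << 8);
    sampleRate             = h[24] | (h[25] << 8) | (h[26] << 16)
                           | ((uint32_t)h[27] << 24);
    uint16_t bitsPerSample = h[34] | (h[35] << 8);

    return audioFormat == 1 && bitsPerSample == 16;
}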
My working code is
int8_t* buffer = new int8_t[size];
/*
HERE buffer IS FILLED
*/
for (int i = 0; i < size; i += 2)
{
    int16_t c = ((unsigned char)buffer[i + 1] << 8) | (unsigned char)buffer[i];
    double t = c / 32768.0;
    rawSignal.push_back(t);
}
A 16-bit quantity gives you a range from -32,768 to 32,767, not from -256 to 255 (that's just 9 bits). Use:
for (int i = 0; i < size; i += 2)
{
    c = (data[i + 1] << 8) + data[i]; // WAV files are little-endian
    double t = (c - 32768)/32768.0;
    rawSignal.push_back(t);
}
You might want something more like this:
uint16_t c;
for (unsigned i = 0; i < size; i++)
{
    // get a 16-bit pointer to the array
    uint16_t* p = (uint16_t*)data;
    // get the i-th element
    c = *(p + i);
    // convert to signed? I'm guessing this is what you want
    int16_t cs = (int16_t)c;
    double t = (cs - 256)/256.0;
    rawSignal.push_back(t);
}
Your code converts the 8-bit value to a signed value and then writes it into an unsigned variable. You should look at that and see if it's what you want.
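A tiny illustration (with a made-up byte value) of the signed/unsigned mixup being described:

#include <cstdint>
#include <cstdio>

int main()
{
    char raw = '\x9C';              // a sample byte >= 0x80
    uint16_t c = (signed char)raw;  // sign-extends to -100, then wraps to 0xFF9C
    printf("%u\n", (unsigned)c);    // prints 65436, not 156
    return 0;
}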

My interrupt routine does not access an array correctly

Update to this: it seems there are some issues with the trig functions in math.h (using the MPIDE compiler). It is no wonder I couldn't see this with my debugger, which was using its own math.h and therefore giving me the expected (correct) results. I found this out by accident on the Microchip boards, and I implemented a 'fast sine/cosine' algorithm instead (see devmaster dot com for this). My ISR and ColourWheel array now work perfectly.
I must say that, as a relative newcomer to C/C++, I have spent a lot of hours reviewing and re-reviewing my own code for errors. The last possible thing on my mind was that some very basic functions, no doubt written decades ago, could give such problems.
I suppose I would have seen the problem earlier myself if I'd had access to a screen dump of the actual array, but as my chip is connected to my LED cube, I've no way to access the data in the chip directly.
Hey ho! When I get the chance I'll post a link to a YouTube video showing the wave function that I've now been able to program; it looks pretty good on my LED cube.
Russell
ps
Thank you all so very much for your help here. It stopped me giving up completely by giving me some avenues to chase down. I certainly did not know much about endianness before this, so I learned about that, and about some systematic ways to go about a robust debugging approach.
I have a problem when trying to access an array in an interrupt routine.
The following is a snippet of code from inside the interrupt service routine:
if (CubeStatusArray[x][y][Layer]){
    for(int8_t bitpos = 7; bitpos >= 0; bitpos--){
        if((ColourWheel[Colour]>>16)&(1<<bitpos)) { // This line seems to cause trouble
            setHigh(SINRED_PORT,SINRED_PIN);
        }
        else {
            setLow(SINRED_PORT,SINRED_PIN);
        }
    }
}
..........
ColourWheel[Colour] has been declared as follows at the start of my program (outside any function):
static volatile uint32_t ColourWheel[255]; // this is the array from which
                                           // the colours can be obtained -
                                           // all set as 3 eight-bit numbers
                                           // using up 24 bits of a 32-bit
                                           // unsigned int.
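(For reference, a hypothetical helper showing how an entry with this layout would be packed, matching the red >>16, green >>8, blue >>0 shifts used below; it is not code from the original program:)

uint32_t packColour(uint8_t r, uint8_t g, uint8_t b)
{
    // red in bits 23..16, green in 15..8, blue in 7..0
    return ((uint32_t)r << 16) | ((uint32_t)g << 8) | (uint32_t)b;
}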
What this snippet of code does is take each bit of an eight-bit segment of the colour value and set the port/pin high or low accordingly, MSB first (I then have some other code which clocks each high/low into a TLC5940 LED-driver IC, and the code goes on to handle the green and blue 8 bits in a similar way).
This does not work, and the colours output to my LEDs behave incorrectly.
However, if I change the code as follows, then the routine works:
if (CubeStatusArray[x][y][Layer]){
    for(int8_t bitpos = 7; bitpos >= 0; bitpos--){
        if(0b00000000111111111110101010111110>>16)&(1<<bitpos)) { // This line seems to cause trouble
            setHigh(SINRED_PORT,SINRED_PIN);}
        else {
            setLow(SINRED_PORT,SINRED_PIN);
        }
    }
}
..........
(The actual binary number in the line is irrelevant: the first 8 bits are always zero, the next 8 bits represent a red colour, the next 8 a green colour, and so on.)
So why does the ISR work with the fixed number, but not when I use a number held in the array?
Following is the actual code showing the full RGB update:
if (CubeStatusArray[x][y][Layer]){
    for(int8_t bitpos = 7; bitpos >= 0; bitpos--){
        if((ColourWheel[Colour]>>16)&(1<<bitpos))
            setHigh(SINRED_PORT,SINRED_PIN);
        else
            setLow(SINRED_PORT,SINRED_PIN);

        if((ColourWheel[Colour]>>8)&(1<<bitpos))
            setHigh(SINGREEN_PORT,SINGREEN_PIN);
        else
            setLow(SINGREEN_PORT,SINGREEN_PIN);

        if((ColourWheel[Colour])&(1<<bitpos))
            setHigh(SINBLUE_PORT,SINBLUE_PIN);
        else
            setLow(SINBLUE_PORT,SINBLUE_PIN);

        pulse(SCLK_PORT, SCLK_PIN);
        pulse(GSCLK_PORT, GSCLK_PIN);
        Data_Counter++;
        GSCLK_Counter++;
    }
}
I assume the missing ( after if is a typo.
The suggested research technique, in the absence of a debugger, is:
Confirm one more time that the test if( ( 0b00000000111111111110101010111110 >> 16 ) & ( 1 << bitpos ) ) works. Collect (print) the result for each bitpos.
Store 0b00000000111111111110101010111110 in element 0 of the array. Repeat with if( ( ColourWheel[0] >> 16 ) & ( 1 << bitpos ) ). Collect the results and compare with the base case.
Store 0b00000000111111111110101010111110 in all elements of the array. Repeat with if( ( ColourWheel[Colour] >> 16 ) & ( 1 << bitpos ) ) for several different values of Colour (assigned manually, though). Collect the results and compare with the base case.
Store 0b00000000111111111110101010111110 in all elements of the array. Repeat with if( ( ColourWheel[Colour] >> 16 ) & ( 1 << bitpos ) ) with Colour assigned normally. Collect the results and compare with the base case.
Revert to the original program and retest. Collect the results and compare with the base case.
I am fairly confident that the value in ColourWheel[Colour] is either not what you expect or is unstable. Validate the index range and access the array only once; a code speed enhancement is included below.
[Edit] If the receiving end does not like the slower signal changes caused by replacing a constant with ColourWheel[Colour]>>16, more efficient code may solve this:
if (CubeStatusArray[x][y][Layer]){
    uint32_t value = 0;
    uint32_t maskR = 0x800000UL; // bit 23: MSB of the red byte
    uint32_t maskG = 0x8000UL;   // bit 15: MSB of the green byte
    uint32_t maskB = 0x80UL;     // bit 7:  MSB of the blue byte
    if ((Colour >= 0) && (Colour < 255)) {
        value = ColourWheel[Colour]; // read the volatile array only once
    }
    // All you need to do now is shift 'value'
    for(int8_t bitpos = 7; bitpos >= 0; bitpos--){
        if (value & maskR)  // set red
            setHigh(SINRED_PORT,SINRED_PIN);
        else
            setLow(SINRED_PORT,SINRED_PIN);
        if (value & maskG)  // set green
            setHigh(SINGREEN_PORT,SINGREEN_PIN);
        else
            setLow(SINGREEN_PORT,SINGREEN_PIN);
        if (value & maskB)  // set blue
            setHigh(SINBLUE_PORT,SINBLUE_PIN);
        else
            setLow(SINBLUE_PORT,SINBLUE_PIN);
        value <<= 1;
    }
}

Want to translate/typecast parts of a char array into values

I'm playing around with networking, and I've hit a bit of a roadblock with translating a packet of lots of data into the values I want.
Basically, I've made a mock-up of what I'm expecting my packets to look like. Essentially a char (8-bit value) indicates what the message is; that is detected by a switch statement, which then populates values based on the data after that 8-bit value. I'm expecting my packet to have all sorts of messages in it, which may not be in order.
E.g., I may end up with the heartbeat at the end, or a string of text from a chat message, etc.
I just want to be able to tell my program: take the data from a certain point in the char array and typecast (if that's the term for it?) it into what I want it to be. What is a nice easy way to do that?
char bufferIncoming[15];
ZeroMemory(bufferIncoming, 15);

// Making a mock packet
bufferIncoming[0] = 0x01;  // Heartbeat value
bufferIncoming[1] = 0x01;  // Heartbeat again, just because I can
bufferIncoming[2] = 0x10;  // This should = 16 if it's just an 8-bit number
bufferIncoming[3] = 0x00;  // This
bufferIncoming[4] = 0x00;  // and this
bufferIncoming[5] = 0x00;  // and this
bufferIncoming[6] = 0x09;  // and this should equal 9 if it is a 32-bit number (int)
bufferIncoming[7] = 0x00;
bufferIncoming[8] = 0x00;
bufferIncoming[9] = 0x01;
bufferIncoming[10] = 0x00; // These 4 should be 256, I think, when combined into an unsigned int
// End of mock packet

int bufferSize = 15; // Just an arbitrary value for now
int i = 0;
while (i < bufferSize)
{
    switch (bufferIncoming[i])
    {
        case 0x01: // Heartbeat
        {
            cout << "Heartbeat ";
        }
        break;
        case 0x10: // Player data
        {
            // We've detected the byte that indicates the following 8 bytes
            // will be player data, in this case an X and Y position.
            playerPosition.X = ??????????; // How do I combine the 4 hex values for this?
            playerPosition.Y = ??????????;
        }
        break;
        default:
        {
            cout << ".";
        }
        break;
    }
    i++;
}
cout << " End of Packet\n";
UPDATE
Following Clairvoire's idea, I added the following:
playerPosition.X = long(bufferIncoming[3]) << 24 | long(bufferIncoming[4]) << 16 | long(bufferIncoming[5]) << 8 | long(bufferIncoming[6]);
Notice I changed around the shifting values.
Another important change was:
unsigned char bufferIncoming[15]
If I didn't do that, I was getting negative values mixed into the combining of the elements. (A plain char may be signed, so any byte of 0x80 or above sign-extends to a negative long, and ORing that in corrupts the upper bytes.) I didn't know what the compiler was doing under the hood, and it was bloody annoying.
As you can imagine, this is not my preferred solution, but I'll give it a go. "Chad" has a good example of how I could have structured it, and a fellow programmer from work also recommended his implementation. But...
I have this feeling that there must be a faster, cleaner way of doing what I want. I've tried things like...
playerPosition.X = *(bufferIncoming + 4); // Only gives me the value of one byte, not the combination >_<
playerPosition.X = reinterpret_cast<unsigned long>(&bufferIncoming); // Some random number that I couldn't make sense of
...and a few other things that I've deleted that didn't work either. What I was expecting to do was point somewhere in that char buffer and say "hey playerPosition, start reading from this position, and fill in your values based off the byte data there".
Such as maybe...
playerPosition = (playerPosition)bufferIncoming[5]; // Reads from this spot and fills in the 8 bytes' worth of data
// or
playerPosition.Y = (playerPosition)bufferIncoming[9]; // Reads in the 4 bytes of values
...Why doesn't it work like that, or something similar?
There is probably a prettier version of this, but personally I would combine the four char variables using left shifts and ORs, like so:
playerPosition.X = long(buffer[0]) | long(buffer[1])<<8 | long(buffer[2])<<16 | long(buffer[3])<<24;
Endianness shouldn't be a concern, since bitwise logic always executes the same way, with the lowest order on the right (like how the ones place is on the right for decimal numbers).
Edit: Endianness may become a factor depending on how the sending machine initially splits the integer up before sending it across the network. If it doesn't decompose the integer the same way it is recomposed using shifts, you may get a value where the first byte is last and the last byte is first. It's small ambiguities like these that prompt most people to use networking libraries, aha.
An example of splitting an integer using bitwise operations would look something like this:
buffer[0] = integer&0xFF;
buffer[1] = (integer>>8)&0xFF;
buffer[2] = (integer>>16)&0xFF;
buffer[3] = (integer>>24)&0xFF;
In a typical messaging protocol, the most straightforward way is to have a set of messages that you can easily cast to. Using inheritance (or composition) along with byte-aligned structures (important for casting from a raw data pointer, as in this case) can make this relatively easy:
struct Header
{
    unsigned char message_type_;
    unsigned long message_length_;
};

struct HeartBeat : public Header
{
    // no data, just a heartbeat
};

struct PlayerData : public Header
{
    unsigned long position_x_;
    unsigned long position_y_;
};

unsigned char* raw_message; // filled elsewhere

// reinterpret_cast is usually best avoided; however, in this particular
// case we are casting between two completely unrelated types, so it is
// necessary
Header* h = reinterpret_cast<Header*>(raw_message);
switch (h->message_type_)
{
    case HeartBeat_MessageType:
        break;
    case PlayerData_MessageType:
    {
        PlayerData* data = reinterpret_cast<PlayerData*>(h);
    }
    break;
}
I was talking to one of the programmers I know on Skype and he showed me the solution I was looking for:
playerPosition.X = *(int*)(bufferIncoming + 3);
I couldn't remember how to get it to work, or what it's called, but it seems all good now.
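(One caveat: *(int*)(bufferIncoming + 3) relies on unaligned access and breaks strict-aliasing rules; it works on x86 but is not guaranteed everywhere. A safer sketch of the same idea, with a made-up helper name:)

#include <cstdint>
#include <cstddef>
#include <cstring>

// Hypothetical helper: copy 4 bytes out of the buffer into an int32_t.
// memcpy avoids alignment and aliasing problems; the byte order is
// whatever the sender used.
int32_t readInt32(const unsigned char* buf, size_t offset)
{
    int32_t value;
    std::memcpy(&value, buf + offset, sizeof(value));
    return value;
}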
Thanks guys for helping out :)