How to transfer data over SPI (I'm getting a null buffer) - c++

I am new to SPI communication. I am trying to send 32-bit data. In the code below I have a 32-bit unsigned transmit buffer and a dummy 32-bit receive buffer. I understand that SPI sends and receives data at the same time. I have set the chip-select pin low, the MOSI pin high, and the clock to mode (0,1) in the Applilet code generator. My first idea was to send a master command to read a particular address from the slave, but I receive a null buffer. Can anyone offer some suggestions?
u32 CO2_SpiDriver::getData(const u32 Buffer) {
    //#[ operation getData(u32)
    u32 transmitBuffer = Buffer;             // contains the data to be transmitted
    u8 pBuffer_u8[4];
    memcpy(pBuffer_u8, &Buffer, 4);
    u32 receive = 0U;
    u8 pReceiverBuffer_u8[4] = {0};          // dummy buffer to receive data
    setCSpin(true);                          // chip-select pin is active low
    setMOSIpin(true);                        // set MOSI pin high
    R_CSI20_Create();
    R_CSI20_Start();                         // start the communication
    R_CSI20_Send_Receive(pBuffer_u8, 4, pReceiverBuffer_u8);
    memcpy(&receive, pReceiverBuffer_u8, 4); // copy the reply back, otherwise 0 is returned
    return receive;
    //#]
}
Thanks
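Since SPI is full duplex, the reply arrives in pReceiverBuffer_u8 and has to be copied back into receive before returning. That pack/unpack step can be checked in isolation; here is a minimal sketch in plain C++, with the Applilet-generated transfer call stubbed out as a loopback (the real code would call R_CSI20_Send_Receive at the marked line):

```cpp
#include <cstdint>
#include <cstring>

// Pack a 32-bit word into 4 bytes for the driver, then unpack the reply.
// The bus transfer itself is stubbed; a real build would call the
// Applilet-generated R_CSI20_Send_Receive() at the marked line.
uint32_t spiExchange(uint32_t txWord) {
    uint8_t txBytes[4];
    uint8_t rxBytes[4];
    std::memcpy(txBytes, &txWord, sizeof txBytes);   // serialize the command

    // --- stand-in for R_CSI20_Send_Receive(txBytes, 4, rxBytes) ---
    std::memcpy(rxBytes, txBytes, sizeof rxBytes);   // loopback for the demo

    uint32_t rxWord = 0;
    std::memcpy(&rxWord, rxBytes, sizeof rxWord);    // deserialize the reply
    return rxWord;
}
```

The essential step is the final memcpy back into rxWord; a function that only initializes the receive variable to zero and never copies the reply into it will always return 0, which matches the "null buffer" symptom.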

Related

Reading value with I2C protocol from magnetometer

I'm still pretty new to all this, so please excuse me if I'm missing something obvious.
I have been struggling with the included datasheet for a magnetometer. For some reason everything seems to be working, but when I wave a magnet at it, I don't really get any response in the serial output.
So here is some information.
#include <Wire.h>

void setup() {
    Wire.begin();        // join i2c bus (address optional for master)
    Serial.begin(9600);  // start serial communication at 9600bps
}

void loop() {
    int reading = 0;
    int Address = 30;
    Wire.beginTransmission(Address);
    Wire.write(byte(0x03));
    Wire.write(byte(0x04));
    Wire.endTransmission();
    Wire.requestFrom(Address, 2);
    delay(10);
    if (2 <= Wire.available()) {
        reading = Wire.read();
        reading = reading << 8;
        reading |= Wire.read();
        Serial.println(int(reading));
    }
    delay(250); // wait a bit since people have to read the output :)
}
With this code, I receive a number.
-5637
-5637
-5637
-5637
-5637
But then if I remove the following line Wire.write(byte(0x03));, my output does not change. The value from the device is supposed to be expressed as two's complement.
So at first I thought I didn't know how to send multiple bytes to the device, but after some research I found that I was doing it right (I think).
Then if I only put Wire.write(byte(0x03)); I receive "0" as response. Reading the datasheet I see that response 0 means that the command is invalid.
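The two's-complement combination the loop performs can be checked in isolation. Here is a small sketch (plain C++, no Wire dependency; the function name is mine) of turning a high byte and a low byte into a signed 16-bit reading:

```cpp
#include <cstdint>

// Combine the MSB and LSB read from the device into a signed 16-bit value.
// The sensor transmits two's complement, so reinterpreting the combined
// 16-bit pattern as int16_t recovers negative readings correctly.
int16_t combineReading(uint8_t msb, uint8_t lsb) {
    uint16_t raw = (static_cast<uint16_t>(msb) << 8) | lsb;
    return static_cast<int16_t>(raw);
}
```

For example, the bytes 0xE9 and 0xFB combine to 0xE9FB, which as a two's-complement 16-bit value is -5637, the number seen in the output above.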
I included the datasheet in this post. Can someone point me in the right direction? The IC I'm using is an LSM303DLHC and I'm using it on this "shield".
Here is the datasheet.
The following picture shows the communication on the bus.
I believe the following code does this, which is like Table 11 in the datasheet:
Wire.beginTransmission(Address); // START and write to device address 0x1E
Wire.write(byte(0x03));          // Set the register pointer to sub-address 0x03
Wire.write(byte(0x04));          // Write a value of 0x04 to sub-address 0x03
Wire.endTransmission();          // STOP
Then I suspect the device's register pointer is automatically incremented from register 0x03 to 0x04, and the rest of your code then reads two bytes from sub-addresses 0x04 and 0x05.
You didn't express your intention for your code but I suspect the above is NOT what you intended. My guess is that you intend to read two bytes from device address 0x1E, sub-address 0x03 and sub-address 0x04. Is that right?
You should be doing an operation like what is described in Table 13.
Wire.beginTransmission(Address); // START and write to device address 0x1E
Wire.write(byte(0x03));          // Set the register pointer to sub-address 0x03
Wire.endTransmission(false);     // send the pointer write, but keep the bus (no STOP)
Wire.requestFrom(Address, 2);    // REPEATED-START and read 2 bytes from sub-addresses 0x03 and 0x04

Convert uint16_t integer to UTF-8 string stored in uint8_t array

I'm working on a project involving an Arduino that's reading sensor data and transmitting it to an app via a BLE connection. The sensor library I'm using returns a uint16_t integer value and the BLE library is expecting a uint8_t[20] as a buffer. The App on the receiving end is interpreting the data as UTF-8 bytes. So I need to convert a uint16_t integer into a UTF-8 string and store it in an array.
I haven't used C++ in 15 years, so this is the best I've managed to come up with, but it fails in a number of ways.
// global vars for transmit buffer
static uint8_t transmit_buffer[20];
static uint8_t transmit_buffer_size;

// read sensor data using vendor lib
uint16_t mm = sensor.read();
char buf[6];
transmit_buffer_size = sprintf(buf, "%d", mm);

// copy data buffer to transmit buffer
for (uint8_t i = 0; i < transmit_buffer_size; i++)
    transmit_buffer[i] = buf[i];

// transmit data
transmitter.send(transmit_buffer, transmit_buffer_size);

// zero out the transmit buffer after use
memset(transmit_buffer, 0x00, 20);
What's a better way of doing this?
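One possible simplification (a sketch of my own, not from the original post): snprintf can write the decimal digits straight into the transmit buffer, skipping the intermediate char array, and unlike sprintf it also bounds the write to the buffer size. Decimal digits are single-byte code points, so the result is already valid UTF-8:

```cpp
#include <cstdint>
#include <cstdio>

// Format a uint16_t as decimal ASCII (a subset of UTF-8) directly into
// the 20-byte transmit buffer; returns the number of bytes written.
uint8_t formatReading(uint16_t mm, uint8_t (&buffer)[20]) {
    int n = std::snprintf(reinterpret_cast<char*>(buffer), sizeof buffer,
                          "%u", static_cast<unsigned>(mm));
    return n > 0 ? static_cast<uint8_t>(n) : 0;
}
```

The BLE send call would then take the buffer and the returned length, e.g. something along the lines of transmitter.send(transmit_buffer, formatReading(mm, transmit_buffer)) with the vendor library's actual API.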

Openframeworks, reading serial data from Arduino

I'm trying to read serial data from an Arduino UNO using an ofSerial object and assign it as an int.
I am able to read in individual bytes, however, the values I'm receiving in the openframeworks console are not the same as the values I'm reading in the Arduino serial monitor.
I have provided screenshots of the respective consoles:
My Arduino code is simply the basic "AnalogReadSerial" example available with the Arduino IDE.
// the setup routine runs once when you press reset:
void setup() {
    // initialize serial communication at 9600 bits per second:
    Serial.begin(9600);
}

// the loop routine runs over and over again forever:
void loop() {
    // read the input on analog pin 0:
    int sensorValue = analogRead(A0);
    // print out the value you read:
    Serial.println(sensorValue);
    delay(1); // delay in between reads for stability
}
Whereas my C++ code is mostly copied from the documentation for the ofSerial readByte function.
void serialReader::setup()
{
    serial.listDevices();
    vector<ofSerialDeviceInfo> deviceList = serial.getDeviceList();
    serial.setup("COM4", 9600); // open the device and talk to it at 9600 baud
}

void serialReader::printByteToConsole()
{
    int myByte = 0;
    myByte = serial.readByte();
    if (myByte == OF_SERIAL_NO_DATA)
        printf("\nno data was read");
    else if (myByte == OF_SERIAL_ERROR)
        printf("\nan error occurred");
    else
        printf("\nmyByte is %d ", myByte);
}
Any insight into what may be causing this disparity between the readings would be greatly appreciated. Thank you.
Arduino's Serial.println translates the raw value into its ASCII representation and then sends those bytes, followed by carriage return (13) and linefeed (10) bytes. So the raw value 12 is sent as 4 total bytes: two bytes representing the ASCII characters 1 (49) and 2 (50), and then (13) and (10) for the line ending. Since openFrameworks does not automatically translate the ASCII values back into raw numbers, you are seeing the ASCII version byte by byte. The Arduino serial monitor shows you the same ASCII bytes rendered as readable text.
You can see the translation between ASCII and raw bytes (Decimal / aka DEC) here:
http://www.asciitable.com/
If you want the two numbers to match on both sides, consider using Serial.write in Arduino to write the raw bytes without ASCII translation and new line characters.
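Alternatively, if the Serial.println format is kept, the openFrameworks side has to reassemble the digits into numbers. A sketch of that accumulation in plain C++ (the byte source here is a string standing in for the stream of serial.readByte() results):

```cpp
#include <cctype>
#include <string>
#include <vector>

// Accumulate incoming ASCII bytes into complete integer readings.
// Digit bytes are buffered; any non-digit byte ('\r' or '\n' in the
// Serial.println framing) terminates the current value.
std::vector<int> parseSerialStream(const std::string& bytes) {
    std::vector<int> readings;
    std::string current;
    for (unsigned char c : bytes) {
        if (std::isdigit(c)) {
            current += static_cast<char>(c);
        } else if (!current.empty()) {   // line ending closes a value
            readings.push_back(std::stoi(current));
            current.clear();
        }
    }
    if (!current.empty())                // flush a trailing value
        readings.push_back(std::stoi(current));
    return readings;
}
```

In the real app the same logic would run incrementally inside printByteToConsole, feeding each successful serial.readByte() result into the accumulator.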

Capturing with pcap can't keep up?

For a small tool that I am building for OS X, I want to capture the lengths of packets sent and received by a certain ethernet controller.
When I enumerate the ethernet cards I also get extra information like maximum packet sizes, link speeds, etc.
When I start the (what I call) 'trafficMonitor' I launch it like this:
static void initializeTrafficMonitor(const char* interfaceName, int packetSize) {
    char errbuf[PCAP_ERRBUF_SIZE];
    pcap_t* sessionHandle = pcap_open_live(interfaceName, packetSize, 1, 100, errbuf);
    if (sessionHandle == NULL)
    {
        printf("Error opening session for device %s: %s\n", interfaceName, errbuf);
        return;
    }
    pcap_loop(sessionHandle, -1, packetReceived, NULL);
}
The supplied interfaceName is the BSD name of the interface, for example en0. The packetSize variable is an integer where I supply the maximum packet size for that ethernet adapter (that seemed logical at the time). For example, the packet size for my WiFi adapter is 1538.
My callback method is called packetReceived and looks like this:
void packetReceived(u_char* args, const struct pcap_pkthdr* header, const u_char* packet) {
    struct pcap_work_item* item = malloc(sizeof(struct pcap_pkthdr) + header->caplen);
    item->header = *header;
    memcpy(item->data, packet, header->caplen);
    threadpool_add(threadPool, handlePacket, item, 0);
}
I stuff all the properties of the packet into a new struct and launch a worker thread to analyze the packet and process the results. This is so that pcap is not kept waiting, and it is an attempt to fix this problem, which already existed before the worker thread was added.
The handlePacket method is like this:
void handlePacket(void* args) {
    const struct pcap_work_item* workItem = args;
    const struct sniff_ethernet* ethernet = (struct sniff_ethernet*)(workItem->data);

    u_int size_ip;
    const struct sniff_ip* ip = (struct sniff_ip*)(workItem->data + SIZE_ETHERNET);
    size_ip = IP_HL(ip) * 4;
    if (size_ip < 20) {
        return;
    }

    const u_int16_t type = ether_packet(&workItem->header, workItem->data);
    switch (ntohs(type)) {
    case ETHERTYPE_IP: {
        char sourceIP[INET_ADDRSTRLEN];
        char destIP[INET_ADDRSTRLEN];
        inet_ntop(AF_INET, &ip->ip_src, sourceIP, sizeof(sourceIP));
        inet_ntop(AF_INET, &ip->ip_dst, destIP, sizeof(destIP));
        [refToSelf registerPacketTransferFromSource:sourceIP destinationIP:destIP packetLength:workItem->header.caplen packetType:ethernet->ether_type];
        break;
    }
    case ETHERTYPE_IPV6: {
        // handle v6
        char sourceIP[INET6_ADDRSTRLEN];
        char destIP[INET6_ADDRSTRLEN];
        inet_ntop(AF_INET6, &ip->ip_src, sourceIP, sizeof(sourceIP));
        inet_ntop(AF_INET6, &ip->ip_dst, destIP, sizeof(destIP));
        [refToSelf registerPacketTransferFromSource:sourceIP destinationIP:destIP packetLength:workItem->header.caplen packetType:ethernet->ether_type];
        break;
    }
    }
}
Based on the ethernet type field I try to figure out whether the packet was sent using an IPv4 or IPv6 address. Once that is determined, I pass some details to an Objective-C method (source IP address, destination IP address, and packet length).
I cast the packet to the structs explained on the website of tcpdump (http://www.tcpdump.org/pcap.html).
The problem is that pcap does not seem to keep up with the packets received/sent: either I am not sniffing all the packets, or the packet lengths are wrong.
Does anyone have any pointers on where I need to adjust my code so pcap catches them all, or where the problem might lie?
These methods are called from my Objective-C application, and refToSelf is a reference to an Objective-C class.
Edit: I am calling the initializeTrafficMonitor in a background thread, because the pcap_loop is blocking.
On which version of OS X is this? In releases prior to Lion, the default buffer size for libpcap on systems using BPF, such as OS X, was 32K bytes; 1992 called, they want their 4MB workstations and 10Mb Ethernets back. In Lion, Apple updated libpcap to version 1.1.1; in libpcap 1.1.0, the default BPF buffer size was increased to 512KB (the maximum value in most if not all systems that have BPF).
If this is Snow Leopard, try switching to the new pcap_create()/pcap_activate() API, and use pcap_set_buffer_size() to set the buffer size to 512KB. If this is Lion or later, that won't make a difference.
That won't help if your program can't keep up with the average packet rate, but it will, at least, mean fewer packet drops if there are temporary bursts that exceed the average.
If your program can't keep up with the average packet rate, then, if you only want the IP addresses of the packets, try setting the snapshot length (which you call "packetSize") to a value just large enough to capture the Ethernet header and the IP addresses for IPv4 and IPv6. For IPv4, 34 bytes would be sufficient (libpcap or BPF might round that up to a larger value), as that's 14 bytes of Ethernet header + 20 bytes of IPv4 header without options. For IPv6, it's 54 bytes: 14 bytes of Ethernet header and 40 bytes of IPv6 header. So use a packetSize value of 54.
Note that, in this case, you should use the len field, NOT the caplen field, of the struct pcap_pkthdr, to calculate the packet length. caplen is the amount of data that was captured, and will be no larger than the specified snapshot length; len is the length "on the wire".
Also, you might want to try running pcap_loop() and all the processing in the same thread, and avoid allocating a buffer for the packet data and copying it, to see if that speeds the processing up. If you have to do them in separate threads, make sure you free the packet data when you're done with it.

Byte Alignment when sending or receiving a struct

I am facing a bit of an issue writing network software. When I try to send or receive a struct that contains an 8-byte data type, the next struct sent or received is somehow affected. I have a few things in mind, but first I wanted to confirm one thing before I get into debugging.
I am using 32-bit Ubuntu 11.04 (silly me) on a 64-bit x86 system. Does this have anything to do with byte-alignment problems?
I am developing a controller to communicate with an OpenFlow switch. The OpenFlow protocol defines a set of specs based on which switches are built. The problem is that when I communicate with the switch everything goes fine until I send or receive a struct that contains a 64-bit data type (uint64_t). The specific structs used for sending and receiving features are:
struct ofp_header {
    uint8_t version;  /* OFP_VERSION. */
    uint8_t type;     /* One of the OFPT_ constants. */
    uint16_t length;  /* Length including this ofp_header. */
    uint32_t xid;     /* Transaction id associated with this packet.
                         Replies use the same id as was in the request
                         to facilitate pairing. */
};
assert(sizeof(struct ofp_header) == 8);
/* Switch features. */
struct ofp_switch_features {
    struct ofp_header header;
    uint64_t datapath_id;   /* Datapath unique ID. The lower 48 bits are for
                               a MAC address; the upper 16 bits are
                               implementer-defined. */
    uint32_t n_buffers;     /* Max packets buffered at once. */
    uint8_t n_tables;       /* Number of tables supported by datapath. */
    uint8_t pad[3];         /* Align to 64 bits. */
    /* Features. */
    uint32_t capabilities;  /* Bitmap of supported "ofp_capabilities". */
    uint32_t actions;       /* Bitmap of supported "ofp_action_type"s. */
    /* Port info. */
    struct ofp_phy_port ports[0]; /* Port definitions. The number of ports
                                     is inferred from the length field in
                                     the header. */
};
assert(sizeof(struct ofp_switch_features) == 32);
The problem is that when I communicate using any other structs, whose data types are all smaller than 64 bits, everything goes fine. When I receive the features reply it shows the right values, but after that any other struct I receive shows garbage values. Even if I receive the features reply again, I get garbage values. In short, at any point in the code where I receive a features reply, or any other struct from the specs that has a 64-bit data type, the structs received after it contain garbage. The code used for sending and receiving the features request is as follows:
////// features request and reply ////////////
ofp_header features_req;
features_req.version = OFP_VERSION;
features_req.type = OFPT_FEATURES_REQUEST;
features_req.length = htons(sizeof features_req);
features_req.xid = htonl(rcv_hello.xid);
if (send(connected, &features_req, sizeof(features_req), 0) == -1) {
    printf("Error in sending message\n");
    exit(-1);
}
printf("features req sent!\n");

ofp_switch_features features_rep = {0};
if (recv(connected, &features_rep, sizeof(features_rep), 0) == -1) {
    printf("Error in receiving message\n");
    exit(-1);
}
printf("message type : %d\n", features_rep.header.type);
printf("version : %d\n", features_rep.header.version);
printf("message length: %d\n", ntohs(features_rep.header.length));
printf("xid : %d\n", ntohl(features_rep.header.xid));
printf("buffers: %d\n", ntohl(features_rep.n_buffers));
printf("tables: %d\n", features_rep.n_tables);
Convert your struct into an array of characters before sending it – this is called serialisation.
Use the htons family of functions to ensure that integers are sent in network byte order. This saves hassle with the endianness of the various machines.
On the receiving end, read the bytes and reconstruct the struct.
This will ensure that you will not have any hassle at all.
I got help from Daniweb.com, and all credit goes to a guy with the nick NEZACHEM. His answer was, and I quote:
The problem has nothing to do with 64-bit types.
The values you read are not garbage, but perfectly valid port definitions:
struct ofp_phy_port ports[0]; /* Port definitions. The number of ports is inferred from the length field in the header. */
Which means, once you've
recv(connected, &features_rep, sizeof(features_rep), 0)
you need to inspect features_rep.header.length,
figure out how many struct ofp_phy_port follow,
allocate memory for them and read those data.
I did that, and thanks to him my problems were solved and all went well :)
Thanks to everyone who replied.
Cheers :)
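The check NEZACHEM describes boils down to arithmetic on the header's length field. A small sketch (my own, not from the thread); the 32-byte fixed size matches the sizeof assert quoted above, while the 48-byte ofp_phy_port size is an assumption taken from the OpenFlow 1.0 spec, not from this post:

```cpp
#include <cstddef>
#include <cstdint>

// Given the byte count announced in ofp_header.length, work out how many
// ofp_phy_port entries trail the fixed part of the features reply.
// kFixedSize corresponds to sizeof(struct ofp_switch_features) == 32;
// kPortSize = 48 is the OpenFlow 1.0 ofp_phy_port size (assumed).
std::size_t trailingPortCount(uint16_t headerLength) {
    const std::size_t kFixedSize = 32;
    const std::size_t kPortSize  = 48;
    if (headerLength < kFixedSize) return 0;
    return (headerLength - kFixedSize) / kPortSize;
}
```

So a reply whose header advertises ntohs(length) == 128 carries (128 - 32) / 48 = 2 port definitions, which the receiver must allocate for and read after the fixed struct.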
You could even consider using serialization techniques: JSON, XDR, or YAML could be relevant, as could libraries like s11n, jansson, etc.
Here is what you want:
features_req.version = OFP_VERSION;
features_req.type = OFPT_FEATURES_REQUEST;
features_req.length = htons(sizeof features_req);
features_req.xid = htonl(rcv_hello.xid);

char data[8];
data[0] = features_req.version;
data[1] = features_req.type;
memcpy(data + 2, &features_req.length, 2);
memcpy(data + 4, &features_req.xid, 4);
if (send(connected, data, 8, 0) ...
On the receiving end:
char data[8];
if (recv(connected, data, 8, 0) ...
features_req.version = data[0];
features_req.type = data[1];
memcpy(&features_req.length, data + 2, 2);
memcpy(&features_req.xid, data + 4, 4);
features_req.length = ntohs(features_req.length);
features_req.xid = ntohl(features_req.xid);
1. In case you stick to sending the structures, you should make sure they are byte-aligned.
To do so, use pragma pack like this:
#pragma pack(1)
struct mystruct {
    uint8_t myint8;
    uint16_t myint16;
};
#pragma pack()
Doing so makes sure this structure uses only 3 bytes.
2. For converting 64-bit values from host order to network order, this post is worth reading: Is there any "standard" htonl-like function for 64 bits integers in C++? (despite the title, it covers C as well as C++).
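For the 64-bit conversion itself, one portable sketch (my own, not from the linked post) builds the serialization out of shifts, so the result is correct regardless of the host's byte order:

```cpp
#include <cstdint>

// Serialize a uint64_t into 8 bytes in network (big-endian) order using
// shifts, so the output is independent of host endianness.
void store_be64(uint64_t value, uint8_t out[8]) {
    for (int i = 0; i < 8; ++i)
        out[i] = static_cast<uint8_t>(value >> (56 - 8 * i));
}

// Reconstruct the uint64_t from 8 big-endian bytes.
uint64_t load_be64(const uint8_t in[8]) {
    uint64_t value = 0;
    for (int i = 0; i < 8; ++i)
        value = (value << 8) | in[i];
    return value;
}
```

This pair would serialize datapath_id from the features reply without relying on a non-standard htonll being available.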