Arduino stops responding after writing to I2C - C++

I'm currently experimenting with writing to EEPROMs over I2C. Reading works fine and I get excellent throughput. However, when I try to write to the device, the Arduino stops responding and I have to reset it to get it working again.
The I2C write also doesn't show up in my I2C debugger.
void i2cWrite(unsigned char device, unsigned char memory, const char *wrBuf, unsigned short len) {
    int i = 0;
    ushort bytesWritten = 0;
    ushort _memstart = memory;
    ushort blockSize = 4;
#ifdef DEBUG_MSGS
    char serialBuf[255] = { '\0' };
    Serial.print("Writing To i2c: ");
    sprintf(serialBuf, "%p", wrBuf);
    Serial.println(serialBuf);
#endif //DEBUG_MSGS
    while (bytesWritten < len) {
        Wire.beginTransmission((int)device);
        Wire.write((unsigned char)_memstart);
        for (int j = 0; i < blockSize; j++) { // bug: the condition tests i, not j (see below)
            Wire.write(wrBuf[bytesWritten + j]);
        }
        Wire.endTransmission();
        bytesWritten += blockSize;
        _memstart += blockSize;
        delay(25);
    }
#ifdef DEBUG_MSGS
    Serial.println("\nDone writing.");
#endif //DEBUG_MSGS
}
I'm quite unsure what I'm doing wrong. I'm getting the following output over the serial connection:
Write Request Received: Andy
Writing To i2c: 0xa800fd98
"Writing to i2c" always gives the same value, and it always seems to crash straight after.

The error seemed to be located in the loop, as the output is:
Write Request Received: Andy
Writing To i2c: 0xa800fd98
I'm working here
I wrote the memory address
I wrote a byte of data
I wrote a byte of data
I wrote a byte of data
I wrote a byte of data
I wrote a byte of data
....
This seems to go on ad infinitum.
After adding a few more debug statements and fixing the points Some programmer dude noticed, the loop looks like this:
{
    Wire.beginTransmission((int)device);
    Serial.println("I'm working here");
    Wire.write((unsigned char)_memstart);
    Serial.println("I wrote the memory address");
    for (int j = 0; j < blockSize; j++) {
        Wire.write(wrBuf[bytesWritten + j]);
        Serial.println("I wrote a byte of data");
        //Serial.write(wrBuf[bytesWritten + j]);
    }
    Wire.endTransmission();
    Serial.println("I ended the transmission");
    bytesWritten += blockSize;
    _memstart += blockSize;
    delay(25);
}
I noticed that I was checking for i < blockSize (copied from the reading part). I'm now running into some other (small) issues, but this solved the problem I was having.
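For completeness, a hypothetical call site for the fixed routine (the 0x50 device address and zero memory offset are assumptions for illustration, not values from the original sketch):

    // Write the 4-byte payload "Andy" to an EEPROM at I2C address 0x50,
    // starting at memory address 0.
    const char payload[] = "Andy";
    i2cWrite(0x50, 0x00, payload, 4);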

Related

How is this Checksum calculated?

Good day, friends.
I'm using Qt C++ and I've run into an issue calculating a checksum for a serial communication protocol. I'm very new to serial programming, and this is a little above my knowledge at the moment.
Per their documentation.
Communication format:

Head   Address     CID    Data length   Data   Check      Tail
0x7E   0x01~0x0E   0x01   -             -      checksum   0x0D
The head never changes, the address can change, and the CID is a command.
I don't understand how the data length is to be calculated.
I'm also not clear on how they calculate the checksum. Are they taking the first 5 bytes, then calculating the checksum, then adding the tail? Or only the "command", which to me is the CID byte? But that doesn't make sense.
The function they use to calculate the checksum:
byte check(byte* buf, byte len)
{
    byte i, chk = 0;
    int sum = 0;
    for (i = 0; i < len; i++)
    {
        chk ^= buf[i];
        sum += buf[i];
    }
    return ((chk ^ sum) & 0xFF);
}
Which I wrote in Qt as:
unsigned char MainWindow::Checksum_Check(char* buf, char len)
{
    unsigned char i, chk = 0;
    int sum = 0;
    for (i = 0; i < len; i++)
    {
        chk ^= buf[i];
        sum += buf[i];
    }
    return ((chk ^ sum) & 0xFF);
}
So I send a hexadecimal command using QByteArray:
QByteArray Chunk("\x7E\x01\x01\x00");
char *TestStr = Chunk.data();
unsigned char Checksum = Checksum_Check(TestStr, Chunk.length()); // tried sizeof(char), even sizeof(Chunk.size())
const char cart[] = {'\x7E', '\x01', '\x01', '\x00', Checksum, '\x0D'};
QByteArray ba4(QByteArray::fromRawData(cart, 6));
SerialPort->write(ba4.toHex()); // Tried write(ba4); as well.
SerialPort->waitForBytesWritten(1000);
What I get in qDebug() is:
Checksum "FE"
Command "7E010100FE0D"
Which looks correct, but the device ignores the request.
My question is: is the checksum calculated correctly by me, or am I missing something crucial?
Any advice or help would be most welcome, as I am stuck.
I've checked it against one of their example commands:
7E 01 F2 02 FF FF FE 0D
which, if I do this:
QByteArray Chunk("\x7E\x01\xF2\x02\xFF\xFF");
char *TestStr = Chunk.data();
unsigned char Checksum = Checksum_Check(TestStr, Chunk.length());
const char cart[] = {'\x7E', '\x01', '\xF2', '\x02', '\xFF', '\xFF', Checksum, '\x0D'};
QByteArray ba4(QByteArray::fromRawData(cart, 8));
QString hexvalue;
hexvalue = QString("%1").arg(Checksum, 0, 16, QLatin1Char('0'));
qDebug() << "Checksum " << hexvalue.toUpper();
qDebug() << "Command " << ba4.toHex().toUpper();
SerialPort->write(ba4.toHex());
Gives me
Checksum "FE"
Command "7E01F202FFFFFE0D"
which is correct, by the looks of things. But the device still doesn't respond.
So I just want to verify that the checksum is in fact generated correctly.
The documentation also includes an example:
PC software Command:
buf[0] = 0x7E; //head
buf[1] = 0x01; //addr
buf[2] = 0x01; //CID
buf[3] = 0x00; //data length
buf[4] = CHK; //Check code
buf[5] = 0x0D; //tail
And here is another question: how can the checksum buf[4] be generated while the byte array is still being constructed? It doesn't make sense to me at this point. I don't know/understand which bytes they use for the checksum.
Anyone? Thanks.
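Judging by the documented example frame (7E 01 F2 02 FF FF FE 0D), the check byte appears to be computed over everything from the head through the last data byte, and Datalength appears to simply be the number of data bytes (0x02 for the two bytes FF FF). A minimal sketch (my addition) that reproduces both example checksums under that assumption:

    #include <cstdio>

    // The documented check(): XOR and 8-bit sum over the same bytes, combined.
    unsigned char check(const unsigned char* buf, int len)
    {
        unsigned char chk = 0;
        int sum = 0;
        for (int i = 0; i < len; i++) {
            chk ^= buf[i];
            sum += buf[i];
        }
        return (chk ^ sum) & 0xFF;
    }

    int main()
    {
        // Fill head..data first, leave a slot for the check byte, add the tail.
        unsigned char frame[8] = { 0x7E, 0x01, 0xF2, 0x02, 0xFF, 0xFF, 0x00, 0x0D };
        frame[6] = check(frame, 6);                  // checksum over the first 6 bytes
        std::printf("check byte: %02X\n", frame[6]); // prints FE, matching the example
    }

One thing to watch in the Qt code: QByteArray Chunk("\x7E\x01\x01\x00") is built with strlen(), so the trailing 0x00 byte is dropped (harmless here, since a zero byte changes neither the XOR nor the sum, but it would matter for frames with embedded zeros; QByteArray(ptr, size) avoids that). Also, SerialPort->write(ba4.toHex()) sends ASCII hex text rather than the raw frame bytes; write(ba4) sends what the device presumably expects.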

Cannot write bitsets fast enough

I need to be able to write 12-bit bitsets at a speed of around 1 millisecond per 10,000 bitsets. Basically, I'm provided with data in 12-bit packages (bitsets, in this case) and I need to store them (I've chosen to write them to a file; open to suggestions if other methods exist) within an incredibly small timespan.
Right now I've set up an example with a bitset array of size 10,000 (to simulate what I would actually get) and write them all to a file:
#include <bitset>
#include <fstream>

int main()
{
    std::bitset<12> map[10000];
    std::ofstream os("myfile.txt", std::ofstream::binary);
    // From here
    for (int i = 0; i < 10000; ++i)
    {
        os << map[i];
    }
    // to here takes slightly under 7 ms -- too slow
}
As the comments say, it takes 7 ms. I'm open to any and all speed improvements, and am hoping to get (optimally) 1 ms for that loop.
Edit info: this is for a Serial Peripheral Interface (SPI), and the data will be all available, as in the example, then dumped all at once rather than as a stream of bitsets. For more technical specs: I'm using an ATmega328P-based Arduino, an ADS7816, and an SD card reader.
Two recommendations:
- Minimize trips to the OS: write multiple bytes in one go.
- Pack the bits before writing. Your current solution writes the bits as characters, i.e. one byte for every bit. Write in binary mode, which is 8 times more compact (and also faster).
#include <bitset>
#include <cstdint>
#include <fstream>
#include <vector>

int main()
{
    std::bitset<12> map[10000];
    // Initialize with demo values
    for (int i = 0; i < 10000; ++i) {
        map[i] = i + 1;
    }
    // Pack bits into a binary buffer: two 12-bit values per 3 bytes
    std::vector<uint8_t> buffer((10000 * 12 + 7) / 8);
    for (int i = 0, j = 0, rem = 0; i < 10000; ++i) {
        unsigned long b = map[i].to_ulong();
        buffer[j++] |= static_cast<uint8_t>(b >> (4 + rem));
        buffer[j] |= static_cast<uint8_t>(b << (4 - rem));
        rem += 12 % 8; // 4 leftover bits per value
        if (rem >= 8) {
            rem -= 8;
            j++;
        }
    }
    // Write the buffer in one go
    std::ofstream os("myfile.bin", std::ofstream::binary);
    os.write(reinterpret_cast<const char*>(buffer.data()), buffer.size());
    os.close(); // don't forget to close() to flush the file
}
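To sanity-check the layout, here is a small helper (my addition, under the same two-values-per-three-bytes packing assumption) that recovers value i from the buffer; with the demo values above, unpack12(buffer, i) should give back i + 1:

    // The even-indexed value contributes 8 high bits then 4 low bits;
    // the odd-indexed value 4 high bits then 8 low bits.
    uint16_t unpack12(const std::vector<uint8_t>& buffer, size_t i)
    {
        size_t byte = (i * 12) / 8; // first byte touched by value i
        if (i % 2 == 0)
            return (uint16_t(buffer[byte]) << 4) | (buffer[byte + 1] >> 4);
        else
            return ((uint16_t(buffer[byte]) & 0x0F) << 8) | buffer[byte + 1];
    }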
If you prefer to keep your text file format, at least enable buffering:
#include <bitset>
#include <fstream>
#include <vector>

int main()
{
    std::bitset<12> map[10000];
    // Enable buffering; set the buffer before opening so it takes effect
    std::vector<char> buf(256 * 1024);
    std::ofstream os;
    os.rdbuf()->pubsetbuf(buf.data(), buf.size());
    os.open("myfile.txt", std::ofstream::binary);
    for (int i = 0; i < 10000; ++i)
    {
        os << map[i];
    }
    os.close(); // to flush the buffer
}
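If you want to verify the timing yourself, a simple std::chrono harness around the section under test (my addition) looks like this:

    #include <chrono>
    #include <iostream>

    int main()
    {
        auto t0 = std::chrono::steady_clock::now();
        // ... the write loop under test goes here ...
        auto t1 = std::chrono::steady_clock::now();
        std::cout << std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0).count()
                  << " us\n";
    }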

Arduino low I2C read speed

I'm currently working on a project using the Genuino 101 where I need to read large amounts of data through I2C to fill an arbitrarily sized buffer. From the following image I can see that the read requests themselves only take about 3 milliseconds and the write request about 200 nanoseconds.
However, there is a very large gap (750+ ms) between read transactions in the same block.
#define BLCK_SIZE 32

void i2cRead(unsigned char device, unsigned char memory, int len, unsigned char * rdBuf)
{
    ushort bytesRead = 0;
    ushort _memstart = memory;
    while (bytesRead < len)
    {
        Wire.beginTransmission((int)device);
        Wire.write(_memstart);
        Wire.endTransmission();
        Wire.requestFrom((int)device, BLCK_SIZE);
        int i = 0;
        while (Wire.available())
        {
            rdBuf[bytesRead + i] = Wire.read();
            i++;
        }
        bytesRead += BLCK_SIZE;
        _memstart += BLCK_SIZE;
    }
}
From my understanding this shouldn't take that long, unless adding to _memstart and bytesRead is taking extremely long. By my, arguably limited, understanding of time complexity, this function is O(n) and should in the best case take only about 12 ms for a 128-byte query.
Am I missing something?
Those 700 ms are not caused by the execution time of the few instructions in your function; those should be done in microseconds. You may have a buffer overflow, the other device might be delaying transfers, or there's another bug not related to buffer overflow.
This is about how I'd do it:
void i2cRead(unsigned char device, unsigned char memory, int len, unsigned char * rdBuf, int bufLen)
{
    ushort _memstart = memory;
    if (bufLen < len) {
        len = bufLen; // never read more than the buffer can hold
    }
    while (len > 0)
    {
        Wire.beginTransmission((int)device);
        Wire.write(_memstart);
        Wire.endTransmission();
        int reqSize = 32; // the Wire buffer holds at most 32 bytes
        if (len < reqSize) {
            reqSize = len;
        }
        Wire.requestFrom((int)device, reqSize);
        while (Wire.available() && (len != 0))
        {
            *(rdBuf++) = Wire.read();
            _memstart++;
            len--;
        }
    }
}
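A hypothetical call site (the 0x50 device address and 128-byte read are assumptions for illustration):

    unsigned char pageBuf[128];

    void setup() {
        Wire.begin();
        // Read 128 bytes starting at memory address 0 from device 0x50,
        // passing the buffer's real size so the function can clamp len.
        i2cRead(0x50, 0x00, sizeof(pageBuf), pageBuf, sizeof(pageBuf));
    }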

Buffer Overflow in C++ while reading virtual memory

I've got a program which reads a process's virtual memory and some registers, looking for some data, and then makes amendments to it.
Here I pass the contents of the EAX register to my function (this seems to work fine, but I thought it might demonstrate what types of data are involved):
case EXCEPTION_SINGLE_STEP: // EXCEPTION_SINGLE_STEP = 0x80000004
    bl_flag = TRUE;
    memset((void *)&context, 0, 0x2CC);  // 0x2CC == sizeof(CONTEXT) on x86
    context.ContextFlags = 0x10017;      // CONTEXT_FULL | CONTEXT_DEBUG_REGISTERS
    thread = OpenThread(0x1FFFFF, 0, debug_event.dwThreadId); // THREAD_ALL_ACCESS
    GetThreadContext(thread, &context);
    context.Eip = context.Eip + 1;
    // sub_FD4BF0((HANDLE)(*((DWORD *)(lpThreadParameter))), context.Eax);
    StringToHtml((HANDLE)(dwArray[0]), context.Eax);
    SetThreadContext(thread, &context);
    CloseHandle(thread);
    break;
void StringToHtml(HANDLE hProcess, DWORD address)
{
    WCHAR buff[0x100];
    WCHAR html[0x100];
    DWORD oldProt = 0, real = 0;
    int len = 0;
    VirtualProtectEx(hProcess, (LPVOID)address, 0x200, PAGE_READWRITE, &oldProt);
    ReadProcessMemory(hProcess, (LPCVOID)address, (LPVOID)buff, 0x200, &real);
    len = wcslen(buff);
    int k = 0, j = 0;
    wprintf(L"Found out chat string : \"%s\" \n", buff);
    for (int pp = 0; pp < 0x100; pp++)
        html[pp] = NULL;
    while (j < len)
    {
        if (buff[j] == L'&')
        {
            if (wcsncmp((const WCHAR *)(buff + j + 1), L"lt;", 3) == 0)
            {
                //html[k] = L'<';
                html[k] = L'<font color="#00FF10">'; // note: a multi-character literal, not a string copy
                k++;
                j = j + 4;
                continue;
            }
I am aware this is an incomplete function snippet; however, the issue arises at this for loop:
for (int pp = 0; pp < 0x100; pp++)
If I enter more than 256 characters (which I at first thought would be enough), it crashes. I have clearly missed something obvious: I tried pp < len, which I thought would use the buffer size, but I still get the same crash.
How can I read the total size of the string entered in the chat into the loop and make it iterate over the WHOLE thing? Or at the very least catch this error?
Did you change the sizes of html and buff according to the max of your for loop? Maybe that is already the solution.
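One way to avoid the fixed-size overflow altogether is to build the output in a std::wstring so it grows as needed. A sketch (my addition; note that the original html[k] = L'<font color="#00FF10">'; assigns a multi-character wide literal to a single WCHAR instead of copying a string):

    #include <cwchar>
    #include <string>

    // Replace "&lt;" with markup while growing the destination dynamically.
    std::wstring ToHtml(const wchar_t* buff, size_t len)
    {
        std::wstring html;
        size_t j = 0;
        while (j < len)
        {
            if (buff[j] == L'&' && j + 3 < len &&
                wcsncmp(buff + j + 1, L"lt;", 3) == 0)
            {
                html += L"<font color=\"#00FF10\">"; // a real string append
                j += 4;                              // skip "&lt;"
            }
            else
            {
                html += buff[j++];
            }
        }
        return html;
    }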

How to encrypt data using AES(openssl)?

I need to encrypt my data, so I encrypt it using AES. I can encrypt short data, but when I try to encrypt long data it doesn't work. What can I do to fix this problem? This is my code:
#include "cooloi_aes.h"
CooloiAES::CooloiAES()
: MSG_LEN(0)
{
for(int i = 0; i < AES_BLOCK_SIZE; i++)
{
key[i] = 32 + i;
}
}
CooloiAES::~CooloiAES()
{
}
std::string CooloiAES::aes_encrypt(std::string msg)
{
int i = msg.size() / 1024;
MSG_LEN = ( i + 1 ) * 1024;
char in[MSG_LEN];
char out[MSG_LEN];
memset((char*)in,0,MSG_LEN);
memset((char*)out,0,MSG_LEN);
strncpy((char*)in,msg.c_str(),msg.size());
unsigned char iv[AES_BLOCK_SIZE];
for(int j = 0; j < AES_BLOCK_SIZE; ++j)
{
iv[j] = 0;
}
AES_KEY aes;
if(AES_set_encrypt_key((unsigned char*)key, 128, &aes) < 0)
{
return NULL;
}
int len = msg.size();
AES_cbc_encrypt((unsigned char*)in,(unsigned char*)out,len,&aes,iv,AES_ENCRYPT);
std::string encrypt_msg(&out[0],&out[MSG_LEN+16]);
std::cout << std::endl;
return encrypt_msg;
}
std::string CooloiAES::aes_decrypt(std::string msg)
{
MSG_LEN = msg.size();
char in[MSG_LEN];
char out[MSG_LEN+16];
memset((char*)in,0,MSG_LEN);
memset((char*)out,0,MSG_LEN+16);
strncpy((char*)in,msg.c_str(),msg.size());
std::cout << std::endl;
unsigned char iv[AES_BLOCK_SIZE];
for(int j = 0; j < AES_BLOCK_SIZE; ++j)
{
iv[j] = 0;
}
AES_KEY aes;
if(AES_set_decrypt_key((unsigned char*)key, 128, &aes) < 0)
{
return NULL;
}
int len = msg.size();
AES_cbc_encrypt((unsigned char*)in,(unsigned char*)out,len,&aes,iv,AES_DECRYPT);
std::string decrypt_msg = out;
return decrypt_msg;
}
When I encrypt data which is 96 bytes long, it fails. I get this error:

terminate called after throwing an instance of 'std::length_error'
  what(): basic_string::_S_create

But I don't think this string is longer than the max length, and I don't know where the problem is.
You have nothing wrong in your encryption/decryption except for the padding issues and the use of strncpy and the (char *) string constructor when dealing with binary data. You shouldn't encrypt the last block of data if it doesn't fill the full 16 bytes, so you should either implement your own padding or not encrypt the last small block at all. Your code then simplifies to this:
string aes_encrypt_or_decrypt(string msg)
{
    unsigned char out[msg.size()];
    memcpy((char*)out, msg.data(), msg.size());
    AES_cbc_encrypt((unsigned char *)msg.data(), out, msg.size() / 16 * 16,
                    &aes, iv, AES_ENCRYPT /* or AES_DECRYPT */);
    return string((char *)out, msg.size());
}
To summarize:
- don't use strncpy() with binary data
- don't use the string s = binary_char_array; constructor
- don't encrypt the last portion of data if it doesn't fit the block size, or pad it yourself
- use the EVP_* OpenSSL API if there is a possibility of future algorithm changes (see the sketch below)
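A minimal sketch of that EVP_* route (my addition; it assumes AES-128-CBC and lets OpenSSL apply PKCS#7 padding, so arbitrary message lengths work):

    #include <openssl/evp.h>
    #include <string>
    #include <vector>

    std::string evp_aes_encrypt(const std::string& msg,
                                const unsigned char* key, // 16 bytes
                                const unsigned char* iv)  // 16 bytes
    {
        EVP_CIPHER_CTX* ctx = EVP_CIPHER_CTX_new();
        std::vector<unsigned char> out(msg.size() + 16); // room for padding
        int len1 = 0, len2 = 0;

        EVP_EncryptInit_ex(ctx, EVP_aes_128_cbc(), NULL, key, iv);
        EVP_EncryptUpdate(ctx, out.data(), &len1,
                          (const unsigned char*)msg.data(), (int)msg.size());
        EVP_EncryptFinal_ex(ctx, out.data() + len1, &len2);
        EVP_CIPHER_CTX_free(ctx);

        return std::string((char*)out.data(), len1 + len2);
    }

Decryption mirrors this with EVP_DecryptInit_ex / EVP_DecryptUpdate / EVP_DecryptFinal_ex.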
AES normally encrypts data by breaking it up into 16-byte blocks. If the last block is not 16 bytes long, it is padded to 16 bytes. Wiki articles:
http://en.wikipedia.org/wiki/Advanced_Encryption_Standard
http://en.wikipedia.org/wiki/AES_implementations