Parsing the data chunk of a .wav file in C++

I am using a SHA1 hash to verify the authenticity of a .wav file. The SHA1 function I am using takes in three parameters:
A pointer to the authentication file with the .auth extension
The data buffer read from the .wav file (which must be less than 42000 bytes in size)
The length of the buffer
for (int i = 0; i < size_buffer; i++) {
DataBuffer[i] = fgetc(WavResult);
}
util_sha1_calculate(&AuthContext, DataBuffer, size_buffer);
How can I set up a read function that reads 42000 bytes, passes the data to util_sha1_calculate(&AuthContext, DataBuffer, size_buffer), and, each time the loop repeats, resumes from the position where it left off and reads the next 42000 bytes?

You can put the for loop you've shown inside an outer loop that runs until EOF is reached, e.g.:
size_t size;
int ch;
while (!feof(WavResult))
{
    size = 0;
    for (int i = 0; i < size_buffer; i++) {
        ch = fgetc(WavResult);
        if (ch == EOF) break;
        DataBuffer[size++] = (char) ch;
    }
    if (size > 0)
        util_sha1_calculate(&AuthContext, DataBuffer, size);
}
However, you should consider replacing the inner for loop with a single call to fread() instead, e.g.:
size_t nRead;
while ((nRead = fread(DataBuffer, 1, size_buffer, WavResult)) > 0)
{
    util_sha1_calculate(&AuthContext, DataBuffer, nRead);
}
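Putting it together, here is a minimal sketch of the whole hashing pass. It assumes the 42000-byte limit from the question, that AuthContext has already been initialised, and a placeholder file name; util_sha1_calculate and AuthContext are the SHA1 utilities from the original code:
#include <cstdio>

const size_t size_buffer = 42000;            // upper limit given in the question
char DataBuffer[size_buffer];
FILE *WavResult = fopen("input.wav", "rb");  // placeholder name; binary mode matters
if (WavResult != NULL) {
    size_t nRead;
    while ((nRead = fread(DataBuffer, 1, size_buffer, WavResult)) > 0) {
        util_sha1_calculate(&AuthContext, DataBuffer, nRead); // hash each chunk as it is read
    }
    fclose(WavResult);
}
fread() advances the file position automatically, so each iteration continues exactly where the previous one stopped.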

Related

Write contents of text file to an array of file blocks (512 Bytes) in C++

I am trying to separate a 5 KB text file into a File array of 10 blocks, which are each 512 Bytes.
I have the file loading properly and writing to a char array but I don't understand what is happening at while(infile >> temp[i]) below. Does that mean "while test1.txt still has characters to write, write it to temp[]"?
Basically, I want characters 0 to 511 in input1.txt to load into temp[] then store temp in fileArray[0]. And then characters 512 to 1023 to load into temp[] and then be stored into fileArray[1] and so on. If the file is shorter than 5 KB, fill the rest of the items in fileArray[] with 0's.
Code:
FILE* fileArray[10];
//something like for(int a = 0; a < fileArray.length; a++)
ifstream infile;
int i = 0;
int k = 0;
char temp[512];
infile.open("inputFiles/test1.txt"); //open file in read mode.. IF FILE TOO BIG, CRASHES BECAUSE OF TEMP
while (infile >> temp[i])//WHAT DOES THIS MEAN?
i++;
k = i;
for (int i = 0; i < k; i++) {
cout << temp[i]; //prints each char in test1.txt
}
New Code:
FILE* input = fopen(filename, "r");
if (input == NULL) {
fprintf(stderr, "Failed to open %s for reading OR %s is a directory which is fine\n", filename, filename);
return;
}
FILE **fileArray = (FILE**) malloc(10 * 512); //allow files up to 5.12KB (10 sectors of 512 Bytes each)
//load file into array in blocks of 512B
//if file is less than 5.12KB fill rest with 0's
std::filebuf infile;
infile.open("inputFiles/test1.txt", std::ios::in | std::ios::binary);
for (int a = 0; a < 10; a++) {
outfile.open(fileArray[a], std::ios::out | std::ios::binary);
int block = 512 * a;
int currentBlockPosition = 0;
while (currentBlockPosition < 512) {
std::copy(std::istreambuf_iterator<char>(&infile[block + currentBlockPosition]), {},
std::ostreambuf_iterator<char>(&outfile));
//input[block * currentBlockPosition] >> fileArray[a];
//currentBlockPosition++;
}
}
while (infile >> temp[i])//WHAT DOES THIS MEAN?
i++;
This means: while there is still data in the file, read the next non-whitespace character into the temp array and advance i (operator>> on a char skips whitespace by default). I also think it is a good idea to read the whole file first and then split the data into blocks afterwards, as sketched below.
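As a rough sketch of that idea, assuming the ten blocks are meant to be plain in-memory char buffers (rather than FILE* handles), which is what the description of fileArray suggests:
#include <cstring>
#include <fstream>

char fileArray[10][512];                      // 10 blocks of 512 bytes each
std::memset(fileArray, 0, sizeof(fileArray)); // pre-fill with 0's for short files

std::ifstream infile("inputFiles/test1.txt", std::ios::in | std::ios::binary);
for (int a = 0; a < 10 && infile; a++) {
    infile.read(fileArray[a], 512);           // read() does not skip whitespace
}
Each read() call continues from where the previous one stopped, so block a holds characters 512*a to 512*a + 511 of the file.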

Filling and comparing char* inside a function

I wrote the function readChar(), which is designed to read the characters sent by my WiFi module one by one (the function works as advertised) and append them to a char buffer. The function should stop when char *endChar (multiple characters) has been detected or the number of characters returned by timedRead() has exceeded size_t length.
I have several issues:
1/. I don't understand the syntax (found inside the Arduino Stream library) :
*buffer++ = (char)c;
Can you explain how the array buffer gets filled?
And why buffer[index] = (char)c; doesn't work here?
2/. I would like to compare buffer and endChar in the loop, possibly by using strcmp(buffer, endChar) (maybe there is a better way), but that doesn't seem to work. In fact, when printing the ASCII values of my char *buffer, they seem to get filled in from the end of the buffer, e.g. (see the printed output below):
So what is the best way to do that comparison?
The code, inserted in the loop:
_dbgSerial->println("buffer");
for (int i = 0; i < 32; i++){
_dbgSerial->print(buffer[i], DEC);
_dbgSerial->print(",");
}
_dbgSerial->println("");
prints:
buffer
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,13,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,13,10,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,13,10,13,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,13,10,13,10,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,13,10,13,10,0,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,13,10,13,10,0,0,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,13,10,13,10,0,0,0,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,13,10,13,10,0,0,0,0,
Here is the function readChar():
size_t Debugwifi::readChar(char *endChar, char *buffer, size_t length) {
    if (length < 1) return 0;
    size_t index = 0;
    while (index < length) {
        int c = timedRead();
        if (c < 0) break;
        //buffer[index] = (char)c;
        *buffer++ = (char)c;
        _dbgSerial->println("buffer");
        for (int i = 0; i < 32; i++) {
            _dbgSerial->print(buffer[i], DEC);
            _dbgSerial->print(",");
        }
        _dbgSerial->println("");
        if (strcmp(buffer, endChar) == 0) {
            break;
            _dbgSerial->println("brk");
        }
        index++;
    }
    return index;
}
As Rickard has explained, *buffer++ = (char)c; is how you assign a character to the memory a pointer points at, and then increment the pointer.
However, your function has a lot of problems - you keep comparing unset memory with *endChar. I suggest:
size_t Debugwifi::readChar(const char * const endStr, // const pointer to const.
                           char * const buffer, const size_t length) {
    if (length < 1) return 0;
    const size_t endLen = strlen(endStr);
    size_t index = 0;                  // declared outside the loop so it can be returned
    for (; index < length; index++) {
        const int c = timedRead();
        if (c < 0) break;
        buffer[index] = (char)c;
        // Debug
        _dbgSerial->println("buffer");
        for (size_t i = 0; i < length; i++) { // Better to use size_t here,
                                              // and compare against length, not 32
            _dbgSerial->print(buffer[i], DEC);
            _dbgSerial->print(",");
        }
        _dbgSerial->println("");
        // Finished?
        if (index + 1 >= endLen) {
            // Compare the last endLen characters read (including the one just stored)
            // against the end marker.
            if (memcmp(&buffer[index + 1 - endLen], endStr, endLen) == 0) {
                _dbgSerial->println("brk"); // Must do this *before* "break"!
                break;
            }
        }
    }
    return index;
}
I have added a lot of consts. It's hard to have too many.
The important point is that once you have read enough characters, you start comparing the last characters you have read against the end marker.
Note that this function does not remove the end marker, and if you pass a 32-byte zero-filled array and it reads 32 characters, the result will NOT be zero terminated.
Finally, I changed the argument name to endStr because I had expected endChar to be a pointer to a single character - not a NUL-terminated string.
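For illustration, a minimal usage sketch of the const-corrected function above (wifi is a hypothetical Debugwifi instance, and the "\r\n\r\n" end marker matches the 13,10,13,10 sequence in the debug output from the question):
char buf[33] = {0};   // zero-filled, with one spare byte so the result stays
                      // NUL-terminated even if all 32 characters are read
size_t got = wifi.readChar("\r\n\r\n", buf, sizeof(buf) - 1);
Serial.print("read ");
Serial.print(got);
Serial.println(" characters:");
Serial.println(buf);  // safe to print as a string thanks to the spare zero byte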
*buffer++ = (char) c;
First writes the value of c to what buffer is currently pointing to.
Then it increments the buffer pointer.
This is also why your loop to print buffer doesn't work.
You start printing from the position after what was just filled.
This is also why your strcmp doesn't work. It doesn't actually compare what you have filled your buffer with; it compares the content beyond what has been filled.
If you want your printing code to work, you should save the initial value of buffer before the loop:
const char *buffer_start = buffer;
Then use that in your printing code instead of buffer.
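A minimal sketch of that change, keeping the rest of the loop as in the question:
const char *buffer_start = buffer;   // remember where the buffer begins
while (index < length) {
    int c = timedRead();
    if (c < 0) break;
    *buffer++ = (char)c;             // write through the moving pointer
    _dbgSerial->println("buffer");
    for (int i = 0; i < 32; i++) {
        _dbgSerial->print(buffer_start[i], DEC); // print from the saved start,
        _dbgSerial->print(",");                  // not from the advanced pointer
    }
    _dbgSerial->println("");
    index++;
}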

PGM File Reader Doesn't Read Asymmetric Files

I'm writing a simple PGM file reader for a basic CV idea, and I'm having a weird issue. My method seems to work alright for symmetric files (255 x 255, for example), but when I try to read an asymmetric file (300 x 246), I get some weird input. One file reads to a certain point and then dumps ESCAPE characters (ASCII 27) into the remainder of the image (see below), and others just won't read. I think this might be some flawed logic or a memory issue. Any help would be appreciated.
// Process files of binary type (P5)
else if (holdString[1] == '5') {
    // Assign fileType value
    fileType = 5;
    // Read in comments and discard
    getline(fileIN, holdString);
    // Read in image Width value
    fileIN >> width;
    // Read in image Height value
    fileIN >> height;
    // Read in Maximum Grayscale Value
    fileIN >> max;
    // Determine byte size if Maximum value is over 256 (1 byte)
    if (max < 256) {
        // Collection variable for bytes
        char readChar;
        // Assign image dynamic memory
        *image = new int*[height];
        for (int index = 0; index < height; index++) {
            (*image)[index] = new int[width];
        }
        // Read in 1 byte at a time
        for (int row = 0; row < height; row++) {
            for (int column = 0; column < width; column++) {
                fileIN.get(readChar);
                (*image)[row][column] = (int) readChar;
            }
        }
        // Close the file
        fileIN.close();
    } else {
        // Assign image dynamic memory
        // Read in 2 bytes at a time
        // Close the file
    }
}
Tinkered with it a bit, and came up with at least most of a solution. Using the .read() function, I was able to pull the whole file in and then copy it piece by piece into the int array. I kept the dynamic memory because I wanted to read files of different sizes, but I did pay more attention to how the data was read into the array, so thank you for the suggestion, Mark. The edits seem to work well on files up to 1000 pixels wide or tall, which is fine for what I'm using it for. Beyond that it distorts, but I'll still take that over not reading the file at all.
if (max < 256) {
    // Collection variable for bytes
    int size = height * width;
    unsigned char* data = new unsigned char[size];
    // Assign image dynamic memory
    *image = new int*[height];
    for (int index = 0; index < height; index++) {
        (*image)[index] = new int[width];
    }
    // Read in the whole raster with one call
    fileIN.read(reinterpret_cast<char*>(data), size * sizeof(unsigned char));
    // Close the file
    fileIN.close();
    // Set data to the image
    for (int row = 0; row < height; row++) {
        for (int column = 0; column < width; column++) {
            (*image)[row][column] = (int) data[row*width+column];
        }
    }
    // Delete temporary memory
    delete[] data;
}

Reading 32 bit hex data from file

What is the best way to go about reading signed multi-byte words from a buffer of bytes?
Is there a standard way to do this that I am not aware of, or am I on the right track reading in 4 chars, multiplying them by their respective powers of 16, and summing them together?
int ReadBuffer(int BuffPosition, int SequenceLength) {
    int val = 0;
    int limit = BuffPosition + SequenceLength;
    int place = 0;
    for (; BuffPosition < limit; BuffPosition++) {
        int current = Buff[BuffPosition];
        current *= pow(16, (2*place));
        val += current;
        place++;
    }
    return val;
}
Assuming you read/write your file on the same machine (same endianness), you can use a 32 bit type like int32_t (#include <cstdint>) and read directly. Small example below:
#include <iostream>
#include <fstream>
#include <cstdint>

int main()
{
    std::fstream file("file.bin", std::ios::in | std::ios::out | std::ios::binary);

    const std::size_t N = 256; // length of the buffer
    int32_t buf[N];            // our buffer

    for (std::size_t i = 0; i < N; ++i) // fill the buffer
        buf[i] = i;

    // write to file
    file.write((char*)buf, N * sizeof(int32_t));

    for (std::size_t i = 0; i < N; ++i) // zero-in the buffer
        buf[i] = 0;                     // to convince we're not cheating

    // read from file
    file.seekg(0); // rewind to beginning
    file.read((char*)buf, N * sizeof(int32_t));

    // display the buffer
    for (std::size_t i = 0; i < N; ++i)
        std::cout << buf[i] << " ";
}
I now realize that I can take a char* buffer and cast it to a data type with the correct size.
char buffer8Bit[4000];   // byte buffer (renamed: identifiers cannot start with a digit)
int* buffer32Bit;
if (sizeof(int) == 4) {
    buffer32Bit = (int*)buffer8Bit;
}
dostuffwith(buffer32Bit[index]);
I am trying to process a wav file, so in an attempt to maximize efficiency I was trying to avoid reading from the file 44100 times a second. Whether that is actually slower than reading from an array, I am not actually sure.
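As an aside, a cast like that relies on the buffer being suitably aligned and technically runs into strict-aliasing rules. A sketch of the same idea using memcpy avoids both; it still assumes, as above, that the bytes in the file are in the machine's native byte order, and ReadWord/bytePosition are just illustrative names:
#include <cstdint>
#include <cstring>

int32_t ReadWord(const char* buffer, std::size_t bytePosition)
{
    int32_t value;
    std::memcpy(&value, buffer + bytePosition, sizeof(value)); // copy 4 raw bytes into the int
    return value;
}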

deleting a structure from file in c++

So, here's the code of the procedure which reads every structure from the file and deletes the first-found structure whose agreementNo is equal to the given int query. It then shortens the array and rewrites the file.
The problem is, it just shortens the array and deletes the last element, as if the search criteria were not met, even though they should be.
(Before the procedure starts, the file is opened in a+b mode, so in the end, it is reopened that way.)
void deleteClient(int query, FILE *f){
    int filesize = ftell(f);
    int n = filesize/sizeof(Client);
    Client *c = new Client[n];
    Client *c2 = new Client[n-1];
    rewind(f);
    fread(c, sizeof(Client), n, f);
    for (int i=0; i<n; i++) {
        if (c[i].agreementNo == query) {
            c[i] = c[n];
            break;
        }
    }
    for (int i=0; i<n-1; i++) { c2[i] = c[i]; } // reduce the size of the array ( -1 extra element)
    fclose(f);
    remove("Client.dat");
    f = fopen("Client.dat", "w+b");
    for (int i=0; i<n-1; i++) {
        fwrite(&c2[i], sizeof(Client), 1, f);
    }
    fclose(f);
    f = fopen("Client.dat", "a+b");
}
What could be the cause of the described problem? Did I miss something in the code?
I'd do it this way:
struct MatchAgreementNo
{
    MatchAgreementNo(int agree) : _agree(agree) {}
    bool operator()(const Client& client) { return client.agreementNo == _agree; }
    int _agree;
};

void deleteClient(int query, FILE *f)
{
    int rc = fseek(f, 0, SEEK_END);
    assert(rc == 0);
    long filesize = ftell(f);
    int n = filesize / sizeof(Client);
    assert(filesize % sizeof(Client) == 0);
    // mmap()/ftruncate() need <sys/mman.h> and <unistd.h>; the cast is required in C++
    // because mmap() returns void*.
    Client *begin = (Client *) mmap(NULL, filesize, PROT_READ|PROT_WRITE,
                                    MAP_SHARED, fileno(f), 0);
    assert(begin != MAP_FAILED);
    Client *end = std::remove_if(begin, begin + n, MatchAgreementNo(query));
    rc = ftruncate(fileno(f), (end - begin) * sizeof(Client));
    assert(rc == 0);
    munmap(begin, filesize);
}
That is, define a predicate function which does the query you want. Memory-map the entire file, so that you can apply STL algorithms on what is effectively an array of Clients. remove_if() takes out the element(s) that match (not only the first one), and then we truncate the file (which may be a no-op if nothing was removed).
By writing it this way, the code is a bit higher-level, more idiomatic C++, and hopefully less error-prone. It's probably faster too.
One change needed in your code is to save the index of the first found "bad" entry somewhere, and then copy your original array around that entry. Obviously, if no "bad" entry is found, you aren't supposed to do anything.
One word of warning: the approach of reading the original file as a whole is only applicable to relatively small files. For larger files, a better approach would be to open another (temporary) file, read the original file in chunks, and copy it as you go (once the entry to be skipped has been found, just copy the rest of the contents); a sketch of that variant follows the code below. I guess there is even more room for optimization here, considering that except for that one entry, the rest of the file contents is left unchanged.
void deleteClient(int query, FILE *f){
    int filesize = ftell(f);
    int n = filesize/sizeof(Client);
    int found = -1;
    Client *c = new Client[n];
    Client *c2 = new Client[n-1];
    rewind(f);
    fread(c, sizeof(Client), n, f);
    for (int i=0; i<n; i++) {
        if (c[i].agreementNo == query) {
            printf("entry No.%d will be deleted\n", i);
            found = i;
            break;
        }
    }
    if (found == -1) return;
    if (found > 0) for (int i=0; i<found; i++) { c2[i] = c[i]; } // copy the entries before the deleted one
    for (int i=found+1; i<n; i++) { c2[i-1] = c[i]; }            // shift the rest down by one (array shrinks by 1)
    fclose(f);
    remove("Client.dat");
    f = fopen("Client.dat", "w+b");
    for (int i=0; i<n-1; i++) {
        fwrite(&c2[i], sizeof(Client), 1, f);
    }
    fclose(f);
    f = fopen("Client.dat", "a+b");
}
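For the larger-file approach mentioned above (copying to a temporary file in chunks), here is a minimal sketch. It assumes the same Client struct and Client.dat file as the rest of the thread, reads one record at a time for simplicity, and deleteClientStreaming is just an illustrative name:
void deleteClientStreaming(int query, FILE *f) {
    rewind(f);
    FILE *tmp = fopen("Client.tmp", "wb");
    if (tmp == NULL) return;
    Client c;
    bool skipped = false;
    while (fread(&c, sizeof(Client), 1, f) == 1) {
        if (!skipped && c.agreementNo == query) { // skip only the first match
            skipped = true;
            continue;
        }
        fwrite(&c, sizeof(Client), 1, tmp);       // copy everything else unchanged
    }
    fclose(tmp);
    fclose(f);
    remove("Client.dat");
    rename("Client.tmp", "Client.dat");
    // the caller would then reopen Client.dat in a+b mode, as in the original procedure
}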