Currently I'm working on an Arduino/Nanode project where we want to play a collection of WAV files stored on an SD card, using PWM on timer output OCR0.
- I'm able to play the PWM audio perfectly, starting from Michael Smith's sketch on the Arduino website: http://www.arduino.cc/playground/Code/PCMAudio
- I'm able to read the SD card correctly and convert the data to 8-bit integers that look correct when I print them to the serial window.
The problem I have is when I feed these integers into the PWM compare value of the timer.
As I said, when I use the original PCM Audio sketch with my own WAV file converted to a .h file (through wav2c), it works and it sounds good. When I read the SD card it shows me the correct values, both when I read the WAV files directly and (what I'm trying in my latest version, posted here) when I convert them to text files and read those. But when I feed the integers from the text file into the PWM, I hear a horn-like sound, as if the PWM is using the wrong values for output.
I'm guessing the problem is somewhere in the casting of the data into the byte type the ATmega uses, but I don't have any clue where to look or how to solve it. I noticed that the original file uses unsigned chars where I'm using uint8_t. I tried to cast them, but it's not working.
Does anyone have experience with this? Or any clue how I could possibly solve it?
Many thanks for your help and time!
Jeroen
PS: Below is the piece of my code where I read through the text files and convert them to integers. The values always consist of 3 characters; the value 21, for example, is printed as 021 in the file, and the values are separated with a comma, which the script skips with the 4th myFile.read().
myFile = SD.open(FileName);
char sampleTMP[4];
sampleTMP[0] = myFile.read();  // hundreds digit
sampleTMP[1] = myFile.read();  // tens digit
sampleTMP[2] = myFile.read();  // ones digit
sampleTMP[3] = 0;              // terminate the string for atoi()
myFile.read();                 // skip the comma separator
unsigned char ss;
ss = atoi(sampleTMP);
Serial.println(ss, DEC);
OCR0A = ss;
OCR0B = ss;
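As a side note on the signed/unsigned guess above: 8-bit WAV samples are unsigned (0 to 255, silence at 128), which is exactly the range OCR0A expects. A tiny sketch (hypothetical helper names, not from the project) of what goes wrong if the same byte passes through a signed 8-bit type along the way:

```cpp
#include <cstdint>

// 8-bit WAV data is unsigned: 0..255 with silence at 128, which is
// the range the OCR0A compare register expects. Pass it through
// unchanged as an unsigned byte.
uint8_t pwmFromSample(uint8_t raw) {
    return raw;  // safe to assign straight to OCR0A/OCR0B
}

// What the same raw byte looks like if misread through a signed
// 8-bit type: every sample above 127 wraps negative.
int asSigned(uint8_t raw) {
    return (int8_t)raw;  // e.g. 0xC8 (200) becomes -56
}
```

If a conversion like this happens anywhere between the file and the compare register, the waveform is badly distorted even though the printed decimal values can still look plausible.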
I have a 16-bit, 48 kHz, 1-channel (mono) PCM audio file (with no header, but it would be the same with a WAV header anyway). I can read that file correctly using software such as Audacity; however, when I try to read it programmatically (in C++), some samples seem to be out of place while most are correct when compared to the Audacity values.
My process of reading the PCM file is the following:
Convert the byte array of PCM to a short array to get readable values by bitshifting (the order of bytes is little-endian here).
for(int i = 0; i < bytesSize - 1; i += 2)
    shortValue[i] = bytes[i] | bytes[i + 1] << 8;
note: bytes is a char array of the binary contents of the PCM file. And shortValue is a short array.
Convert the short values to Amplitude levels in a float array by dividing by the max value of short (32767)
for(int i = 0; i < shortsSize ; i++)
    amplitude[i] = static_cast<float>(shortValue[i]) / 32767;
This is obviously not optimal code, and I could do it in one loop, but for the sole purpose of explaining I separated the two steps.
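The two steps can indeed be combined into one loop. One classic source of isolated wrong samples in this kind of code is sign extension: if `bytes` is a plain (signed) `char` array, `bytes[i] | bytes[i + 1] << 8` sign-extends any low byte >= 0x80. A one-loop sketch under that assumption, masking each byte to unsigned before shifting:

```cpp
#include <cstdint>
#include <cstddef>
#include <vector>

// Decode little-endian 16-bit PCM bytes into float amplitudes in
// one pass. Each byte is cast to uint8_t before shifting: without
// that, a signed char low byte >= 0x80 sign-extends and corrupts
// the assembled sample.
std::vector<float> pcmToFloat(const char* bytes, size_t byteCount) {
    std::vector<float> amplitude;
    amplitude.reserve(byteCount / 2);
    for (size_t i = 0; i + 1 < byteCount; i += 2) {
        int16_t sample = (int16_t)((uint8_t)bytes[i] |
                                   ((uint8_t)bytes[i + 1] << 8));
        amplitude.push_back((float)sample / 32767.0f);
    }
    return amplitude;
}
```

This is only a sketch of the masking idea, not a claim about where the bug in the code above actually is.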
So what happens is that when I try to find very big changes of amplitude level in my last array, it shows me samples that are not correct. In Audacity, notice how the wave is perfectly smooth and how the sample at 276,467 (pointed in green) goes just a bit lower to the next sample (pointed in red), which should be around -0.17.
However, when reading from my code, I get a totally wrong value for the red sample (-0.002), while still getting a good value for the green sample (around -0.17); the sample after the red one is also correct (around -0.17 as well).
I don't really understand what's happening or how Audacity is able to read those bytes correctly. I tried with multiple PCM/WAV files and I get the same results. Any help would really be appreciated!
I want to use TinyGPS++ on an Arduino to parse NMEA data and display information on an OLED display. But, instead of using software serial and the TX/RX pins, the NMEA data will be received by USB.
I followed the examples from TinyGPS++, but I encountered two problems:
1)
Only the first 64 characters are received by the Arduino when I send one NMEA sentence over the serial monitor (Windows, Arduino 1.6.9). How can I overcome this restriction? For now I help myself by deleting a couple of decimal places, but this is not the preferred way to go.
2)
In the TinyGPS++ BasicExample a sample NMEA string is defined in the read-only memory:
// A sample NMEA stream.
const char *gpsStream =
"$GPRMC,045103.0,A,3014.0,N,09748.0,W,36.88,65.02,030913,,,A*7C\r\n"
"$GPGGA,045104.0,3014.0,N,09749.0,W,1,09,1.2,211.6,M,-22.5,M,,*62\r\n"
"$GPRMC,045200.0,A,3014.0,N,09748.0,W,36.88,65.02,030913,,,A*77\r\n"
"$GPGGA,045201.0,3014.0,N,09749.0,W,1,09,1.2,211.6,M,-22.5,M,,*6C\r\n"
"$GPRMC,045251.0,A,3014.0,N,09748.0,W,36.88,65.02,030913,,,A*7D\r\n"
"$GPGGA,045252.0,3014.0,N,09749.0,W,1,09,1.2,211.6,M,-22.5,M,,*6F\r\n";
and parsed by
while (*gpsStream) {
    Serial.print(*gpsStream);
    gps.encode(*gpsStream++);
}
I receive my NMEA (unfortunately only one line) this way:
if (Serial.available()) {
    while (Serial.available() > 0) {
        if (index < 80) {
            inChar = Serial.read();
            inData[index] = inChar;
            index++;
            inData[index] = '\0';
        }
    }
}
and try to parse it by:
index = 0;
while (index < 80) {
    gps.encode(inData[index]);
    Serial.print(inData[index]);
    index++;
}
But this does not work as desired: checking the location with isValid() never returns true.
Unfortunately, I see several possible sources for this undesired behavior:
- The too-short sentences (unlikely).
- An incorrect way of reading the data over serial.
- The fact that I only submit one line.
- Something else.
I am not experienced with either NMEA or serial data communication, and I have only little experience with Arduino/C. Can you point me in a direction for how to solve this (or these) problems?
Basically, you do not need to accumulate NMEA characters; just feed them to the GPS library as you receive them. You don't show your entire loop, but it is very common to have a problem there, too.
After struggling with several GPS libraries and their examples, I eventually wrote NeoGPS. It is faster and smaller than all other libraries, it validates the checksum, and the examples are structured correctly. Unlike other libraries, NeoGPS does not store GPS values as floating-point values, so it is able to retain the full accuracy of your GPS device.
If you'd like to try it, be sure to follow the Installation instructions. The NMEA.ino example will emit one line of info (CSV format) for each batch of GPS sentences that you send, ending with the default RMC sentence. Be sure to modify it to use the Serial object instead of gps_port, or simply define it that way:
#define gps_port Serial
It will also show the number of characters that have been parsed, how many good sentences have been received, and how many sentences had checksum errors. That could help with debugging if you are not generating the checksum correctly. This site is useful, too.
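For reference, the NMEA checksum is just the XOR of every character between '$' and '*', written after the '*' as two uppercase hex digits. A minimal sketch (hypothetical helper name) for generating or verifying it:

```cpp
#include <string>

// NMEA checksum: XOR of every character between '$' and '*'.
// The sentence carries it as two uppercase hex digits after '*',
// e.g. "$GPGGA,...*62".
unsigned char nmeaChecksum(const std::string& body) {
    unsigned char sum = 0;
    for (char c : body)
        sum ^= (unsigned char)c;
    return sum;
}
```

Here `body` is everything between the '$' and the '*' (exclusive). Comparing this value against the two hex digits in each sentence quickly tells you whether your generated checksums are the problem.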
Those CSV lines will be sent back over the USB port (to the PC), but you can easily change it to send specific fields to the OLED (see NMEAloc.ino).
Although it is possible to develop something on a PC and then port it to an embedded environment like the Arduino, you have to be careful about (1) linear program structure and (2) ignoring resource limits (program size, MCU speed and RAM). There are a number of quirks with the Arduino environment that usually make it frustrating to port a "sketch" to/from a PC. :P
I have created a simple waveform generator which is connected to an AUGraph. I have reused some sample code from Apple to set AudioStreamBasicDescription like this
void SetCanonical(UInt32 nChannels, bool interleaved)
// note: leaves sample rate untouched
{
    mFormatID = kAudioFormatLinearPCM;
    int sampleSize = SizeOf32(AudioSampleType);
    mFormatFlags = kAudioFormatFlagsCanonical;
    mBitsPerChannel = 8 * sampleSize;
    mChannelsPerFrame = nChannels;
    mFramesPerPacket = 1;
    if (interleaved)
        mBytesPerPacket = mBytesPerFrame = nChannels * sampleSize;
    else {
        mBytesPerPacket = mBytesPerFrame = sampleSize;
        mFormatFlags |= kAudioFormatFlagIsNonInterleaved;
    }
}
In my class I call this function
mClientFormat.SetCanonical(2, true);
mClientFormat.mSampleRate = kSampleRate;
while sample rate is
#define kSampleRate 44100.0f
The other setting are taken from sample code as well
// output unit
CAComponentDescription output_desc(kAudioUnitType_Output, kAudioUnitSubType_RemoteIO, kAudioUnitManufacturer_Apple);
// iPodEQ unit
CAComponentDescription eq_desc(kAudioUnitType_Effect, kAudioUnitSubType_AUiPodEQ, kAudioUnitManufacturer_Apple);
// multichannel mixer unit
CAComponentDescription mixer_desc(kAudioUnitType_Mixer, kAudioUnitSubType_MultiChannelMixer, kAudioUnitManufacturer_Apple);
Everything works fine, but the problem is that I am not getting stereo sound, and my callback function fails (bad access) when I try to reach the second buffer:
Float32 *bufferLeft = (Float32 *)ioData->mBuffers[0].mData;
Float32 *bufferRight = (Float32 *)ioData->mBuffers[1].mData;
// Generate the samples
for (UInt32 frame = 0; frame < inNumberFrames; frame++)
{
    switch (generator.soundType) {
        case 0: //Sine
            bufferLeft[frame] = sinf(thetaLeft) * amplitude;
            bufferRight[frame] = sinf(thetaRight) * amplitude;
            break;
So it seems I am getting mono instead of stereo. The pointer bufferRight is empty, but I don't know why.
Any help will be appreciated.
I can see two possible errors. First, as @invalidname pointed out, recording in stereo probably isn't going to work on a mono device such as the iPhone. Well, it might work, but if it does, you're just going to get back dual-mono stereo streams anyway, so why bother? You might as well configure your stream to work in mono and spare yourself the CPU overhead.
The second problem is probably the source of your sound distortion. Your stream description format flags should be:
kAudioFormatFlagIsSignedInteger |
kAudioFormatFlagsNativeEndian |
kAudioFormatFlagIsPacked
Also, don't forget to set the mReserved field to 0. While the value of this field is probably ignored, it doesn't hurt to set it explicitly to 0 just to make sure.
Edit: Another more general tip for debugging audio on the iPhone -- if you are getting distortion, clipping, or other weird effects, grab the data payload from your phone and look at the recording in a wave editor. Being able to zoom down and look at the individual samples will give you a lot of clues about what's going wrong.
To do this, you need to open up the "Organizer" window, click on your phone, and then expand the little arrow next to your application (in the same place where you would normally uninstall it). Now you will see a little downward pointing arrow, and if you click it, Xcode will copy the data payload from your app to somewhere on your hard drive. If you are dumping your recordings to disk, you'll find the files extracted here.
I'm guessing the problem is that you're specifying an interleaved format, but then accessing the buffers as if they were non-interleaved in your IO callback. ioData->mBuffers[1] is invalid because all the data, both left and right channels, is interleaved in ioData->mBuffers[0].mData. Check ioData->mNumberBuffers. My guess is it is set to 1. Also, verify that ioData->mBuffers[0].mNumberChannels is set to 2, which would indicate interleaved data.
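If the stream really is interleaved, both channels live in the single buffer as L0 R0 L1 R1 and so on. A minimal sketch (plain C++, hypothetical names; `interleaved` stands in for `ioData->mBuffers[0].mData` in a real render callback) of pulling the two channels apart:

```cpp
#include <cstddef>

typedef float Float32;

// With interleaved stereo there is only one buffer: samples are
// laid out L0 R0 L1 R1 ... in mBuffers[0].mData. This splits that
// layout into separate left/right arrays.
void deinterleave(const Float32* interleaved, size_t frames,
                  Float32* left, Float32* right) {
    for (size_t f = 0; f < frames; f++) {
        left[f]  = interleaved[2 * f];      // channel 0
        right[f] = interleaved[2 * f + 1];  // channel 1
    }
}
```

In the interleaved case the callback should write `bufferLeft[2 * frame]` and `bufferLeft[2 * frame + 1]` into `mBuffers[0]` rather than touching a nonexistent `mBuffers[1]`.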
Also check out the Core Audio Public Utility classes to help with things like setting up formats. Makes it so much easier. Your code for setting up format could be reduced to one line, and you'd be more confident it is right (though to me your format looks set up correctly - if what you want is interleaved 16-bit int):
CAStreamBasicDescription myFormat(44100.0, 2, CAStreamBasicDescription::kPCMFormatInt16, true);
Apple used to package these classes up in the SDK that was installed with Xcode, but now you need to download them here: https://developer.apple.com/library/mac/samplecode/CoreAudioUtilityClasses/Introduction/Intro.html
Anyway, it looks like the easiest fix for you is to just change the format to non-interleaved. So in your code: mClientFormat.SetCanonical(2, false);
I'm writing two programs that communicate by reading files that the other one writes.
My problem is that when the second program reads a file created by the first, it outputs a weird character at the end of the last data item. This only happens seemingly at random, as adding data to the text file can result in normal output.
I'm utilizing C++ and Qt4. This is the part of program 1:
std::ofstream idxfile_new;
QString idxtext;
std::string fname2="some_textfile.txt"; //Imported from a file browser in the real code.
idxfile_new.open (fname2.c_str(), std::ios::out);
idxtext = ui->indexBrowser->toPlainText(); //Grabs data from a dialog of the GUI.
//See 'some_textfile.txt' below
idxfile_new<<idxtext.toStdString();
idxfile_new.clear();
idxfile_new.close();
some_textfile.txt:
3714.1 3715.1 3716.1 3717.1 3719.1 3739.1 3734.1 3738.1 3562.1 3563.1 3623.1
part of program 2:
std::string indexfile = "some_textfile.txt"; //Imported from file browser in the real code
std::ifstream file;
std::string sub;
file.open(indexfile.c_str(), std::ios::in);
while(file>>sub)
{
cerr<<sub<<"\n"; //Stores values in an array in the real code
}
This outputs:
3714.1
3715.1
3716.1
3717.1
3719.1
3739.1
3734.1
3738.1
3562.1
3563.1
3623.1�
If I add more data it works at times. Sometimes it can output data such as
3592.�
or
359�
at the end, so it is not consistent in reading the whole data either. At first I figured it wasn't detecting EOF properly, and I have read and tried many solutions to similar problems, but I can't get it to work correctly.
Thank you guys for the help!
I managed to solve the problem by myself this morning.
For anyone with the same problem I will post my solution.
The problem was the UTF-8 encoding when creating the file. Here's my solution:
Part of program 1:
std::ofstream idxfile_new;
QString idxtext;
std::string fname2="some_textfile.txt";
idxfile_new.open (fname2.c_str(), std::ios::out);
idxtext = ui->indexBrowser->toPlainText();
QByteArray qstr = idxtext.toUtf8(); //Enables Utf8 encoding
idxfile_new<<qstr.data();
idxfile_new.clear();
idxfile_new.close();
The other program is left unchanged.
A hex viewer displayed the extra character as 'ef bf bd', which is the UTF-8 encoding of the replacement character U+FFFD, the character that is substituted for invalid bytes when decoding as UTF-8.
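For anyone debugging the same symptom, a quick sketch (hypothetical helper) for spotting that three-byte sequence at the end of a string read from such a file:

```cpp
#include <string>

// The Unicode replacement character U+FFFD encodes to the three
// bytes EF BF BD in UTF-8, which is exactly the trailing "garbage"
// character described above. Check for it at the end of a token:
bool endsWithReplacementChar(const std::string& s) {
    static const std::string fffd = "\xEF\xBF\xBD";
    return s.size() >= 3 && s.compare(s.size() - 3, 3, fffd) == 0;
}
```

Seeing this sequence is a strong hint that the problem is an encoding mismatch rather than an EOF-handling bug.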
I am trying to read a BMP file in C++ (Turbo C++), but I am not able to print the binary stream.
I want to encode a txt file into it and later decode it.
How can I do this? I read that the BMP file header is 54 bytes, but how and where should I append the txt file in the BMP file?
I know only Turbo C++, so it would be helpful if you could provide a solution or suggestion related to this topic.
int main()
{
    ifstream fr; // reads
    ofstream fw; // writes to file
    char c;
    int random;
    clrscr();
    char file[2][100] = {"s.bmp", "s.txt"};
    fr.open(file[0], ios::binary); // file name, mode of open, here input mode i.e. read only
    if(!fr)
        cout<<"File can not be opened.";
    fw.open(file[1], ios::app); // file will be appended
    if(!fw)
        cout<<"File can not be opened";
    while(!fr)
        cout<<fr.get(); // error should be here. but not able to find out what error is it
    fr.close();
    fw.close();
    getch();
}
This code runs fine when I pass a txt file in binary mode.
EDIT :
while(!fr)
cout<<fr.get();
I am not able to see the binary data in the console.
This was working fine for text when I was passing a character parameter to fr.get(c).
I think your question is already answered:
Print an int in binary representation using C
Convert your char to an int and you are done (at least for the output part).
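That linked approach can be sketched as a small helper (hypothetical name) that renders one byte as eight binary digits:

```cpp
#include <string>

// Render a byte as 8 binary digits, MSB first: the output half of
// dumping a BMP's raw bytes to the console.
std::string toBits(unsigned char byte) {
    std::string bits;
    for (int i = 7; i >= 0; i--)
        bits += ((byte >> i) & 1) ? '1' : '0';
    return bits;
}
```

Calling this on each byte returned by `fr.get()` would print the binary stream the question asks about.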
With steganography, from what little I know about it, you're not "appending" text. You're making subtle changes to the pixels (shading, etc.) to hide something that's not visually obvious but can be reverse-decrypted by examining the pixels. It should not have anything to do with the header.
So anyway, the point of my otherwise non-helpful answer is to encourage you to go and learn about the topic on which you seek answers, so that you can design your solution and THEN come and ask for specifics about implementation.
You need to modify the bit pattern, not append any text to the file.
One simple example :
Read the bitmap content (after the header) and sacrifice a bit from each byte to hold your content.
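A minimal sketch of that idea (hypothetical helper names; one message byte spread across the least significant bits of eight carrier bytes):

```cpp
#include <cstdint>

// Hide one message byte in the LSBs of eight carrier bytes (e.g.
// eight bytes of pixel data after the 54-byte BMP header). Changing
// only the LSB shifts each pixel value by at most 1, which is
// visually imperceptible.
void embedByte(uint8_t* carrier, uint8_t msg) {
    for (int bit = 0; bit < 8; bit++)
        carrier[bit] = (carrier[bit] & 0xFE) | ((msg >> bit) & 1);
}

// Recover the hidden byte by collecting the LSBs back.
uint8_t extractByte(const uint8_t* carrier) {
    uint8_t msg = 0;
    for (int bit = 0; bit < 8; bit++)
        msg |= (carrier[bit] & 1) << bit;
    return msg;
}
```

Repeating this over consecutive 8-byte groups of pixel data hides the whole text file; the image grows no larger and the header is untouched.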
If on Windows, recode to use CreateFile and see what the real error is. If on Linux, ditto for open(2). Once you have debugged the problem you can probably shift back to iostreams.