I made a structure in C and read all the data into that structure using the fread function. I'm confused about what the "audio data" actually is: does it mean the original sample data?
And how can I extract frequencies from that audio data?
I can successfully read the data, but I don't understand what I have to do next.
Please explain.
You can easily read a WAV file; just follow this document:
https://ccrma.stanford.edu/courses/422/projects/WaveFormat/
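As a sketch of what that document describes, the canonical 44-byte PCM header can be read with a single fread. This assumes a minimal file with no extra chunks between "fmt " and "data"; real files can contain more chunks, so a robust reader should walk the chunk list instead.

```cpp
#include <cstdint>
#include <cstdio>
#include <cstring>

// Canonical 44-byte PCM WAV header (assumes no extra chunks before "data").
#pragma pack(push, 1)
struct WavHeader {
    char     riff[4];        // "RIFF"
    uint32_t chunkSize;
    char     wave[4];        // "WAVE"
    char     fmt[4];         // "fmt "
    uint32_t fmtSize;        // 16 for PCM
    uint16_t audioFormat;    // 1 = PCM
    uint16_t numChannels;
    uint32_t sampleRate;
    uint32_t byteRate;
    uint16_t blockAlign;
    uint16_t bitsPerSample;
    char     data[4];        // "data"
    uint32_t dataSize;       // number of bytes of sample data that follow
};
#pragma pack(pop)

// Reads the header and sanity-checks it. The "audio data" the question asks
// about is exactly the dataSize bytes of raw samples that follow the header.
bool readWavHeader(FILE* f, WavHeader& h) {
    if (fread(&h, sizeof h, 1, f) != 1) return false;
    return memcmp(h.riff, "RIFF", 4) == 0 &&
           memcmp(h.wave, "WAVE", 4) == 0 &&
           h.audioFormat == 1;
}
```

After this call succeeds, reading dataSize more bytes with fread gives you the raw samples, interpreted according to bitsPerSample and numChannels.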
As for extracting frequencies from the file, you would need to apply a Fourier transform to your data, which converts it from the time domain (amplitude over time) to the frequency domain.
http://en.wikipedia.org/wiki/Fast_Fourier_transform
An audio file typically consists of a header and "samples". The samples can be 8, 16 or 32 bit, and integer or floating point. Some audio files store the audio samples in a compressed form (MP3, for example), while others store the data as raw samples.
To analyse the frequencies, you need to perform a Fourier transform, which will give you an array of "how much at this frequency". The actual Fourier transform is quite complex to describe (it's certainly more than a few dozen lines).
If the samples are in integer form, you'll have to convert them to floating point by dividing each sample by the maximum value (255, 32767 or 2³¹ − 1, depending on the bit depth).
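A minimal sketch of both steps: normalizing 16-bit integer samples and measuring the energy at one frequency bin with a naive DFT. This is O(n) per bin, so it is only for illustration; a real FFT library (such as the one linked below) computes all bins in O(n log n).

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

// Convert 16-bit PCM samples to floats in [-1, 1] by dividing by 32767.
std::vector<double> normalize(const std::vector<int16_t>& raw) {
    std::vector<double> out;
    out.reserve(raw.size());
    for (int16_t s : raw) out.push_back(s / 32767.0);
    return out;
}

// Naive DFT magnitude at one frequency bin k. An FFT produces the same
// numbers for all bins at once, just much faster.
double magnitudeAtBin(const std::vector<double>& x, std::size_t k) {
    double re = 0.0, im = 0.0;
    const double n = static_cast<double>(x.size());
    for (std::size_t t = 0; t < x.size(); ++t) {
        double angle = 2.0 * M_PI * k * t / n;
        re += x[t] * std::cos(angle);
        im -= x[t] * std::sin(angle);
    }
    return std::sqrt(re * re + im * im);
}
```

Feeding it a pure sine wave that completes k cycles in the buffer makes bin k stand out while the other bins stay near zero.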
Here's a package of C++ code to do FFT. There are several others out there.
http://fftwpp.sourceforge.net/
Here is another example of performing the FFT. This one displays the results in a Windows GUI.
http://www.relisoft.com/Freeware/index.htm
I am building software to analyze log files from ArduPilot in C++.
The data in the files comes in the following form:
Sensor name (GYRO, barometer, etc.).
Each sensor has several fields of data; for example, the barometer has the following fields:
Altitude, Pressure, Temperature, Offset, and some more.
Every entry in the log file that records barometer data will have all of these fields.
Example of line in log file:
BARO, 843762779, 0, -1.443359, 94956.91, 43.06, -1.074093, 843762, 0, 28.38455, 1
Here is the general idea:
list of Sensors: BARO, GYRO, BAT ...
Every Sensor has some fields
Every field should have either a float array or a float vector.
This way I can feed the Graph module with the address of the vector to display the data of the field.
I would love some help with how to build the data structure.
So I can easily add data every time I read a line with more sensor data.
Easily access an array/vector of a single field for graph display.
Any ideas?
Edit:
To clear things up:
I can have 100,000 readings per field X many fields per sensor X many sensors...
I can't make up my mind whether to use vectors on the heap or pointers to vectors on the stack.
Should I use something like unordered_map for quick access?
unordered_map<int, something>
where int is the sensor's ID.
Maybe you can bundle the individual values in a struct? Something like:
struct Sensor {
    std::string name;
    double pressure;
    double temperature;
    // ...
};
and then collect all sensors in a std::vector<Sensor>?
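Since the question mentions many sensors with many fields each, one way to extend the struct idea is to nest maps, keyed by sensor name and field name. A sketch; the names addReading and column are invented for illustration, not part of any ArduPilot API:

```cpp
#include <string>
#include <unordered_map>
#include <vector>

// One growable column of readings per field; each sensor owns its fields.
struct SensorLog {
    // field name ("Alt", "Press", ...) -> all readings of that field, in file order
    std::unordered_map<std::string, std::vector<float>> fields;
};

struct LogDatabase {
    // sensor name ("BARO", "GYRO", ...) -> that sensor's columns
    std::unordered_map<std::string, SensorLog> sensors;

    // Called once per value while parsing a log line.
    void addReading(const std::string& sensor,
                    const std::string& field, float value) {
        sensors[sensor].fields[field].push_back(value);
    }

    // The graph module can hold this pointer to display one field's data.
    const std::vector<float>* column(const std::string& sensor,
                                     const std::string& field) const {
        auto s = sensors.find(sensor);
        if (s == sensors.end()) return nullptr;
        auto f = s->second.fields.find(field);
        return f == s->second.fields.end() ? nullptr : &f->second;
    }
};
```

A useful property here: std::unordered_map never invalidates references to its mapped values on insertion, so the graph module can safely keep the vector's address while new readings are appended to other sensors and fields.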
I am trying to read the data from some .OFF files (Object File Format) and store it in a data structure. They contain a description of the polygons composing a geometric object (from Wikipedia), and look like this:
OFF
2903 5804 120
3.71636 2.34339 0
4.12656 0.642027 0
...
3 1955 2193 2185
3 2193 1965 2192
My understanding of the .OFF file structure is: some header data at the very beginning. Data like '3.71636 2.34339 0' should be the coordinates of vertices. Data like '3 1955 2193 2185' should be 'the number of vertices of one face, followed by the indexes of those vertices'. Is that correct?
I found some methods to read data with C++. But I didn't find a way to read different types of data in one file. Is there a good way to read different data from one file?
Is there a way to read the data row by row?
How can I calculate the normals based on the data in such .OFF file?
First, see the OFF file format description.
Yes, the starting 3 numbers are the header:
they are the number of points, faces, and edges (the last is not very useful). So you know how big the tables you have to allocate are before reading.
Beware: the first OFF line is optional...
Yes, you can read more than one type of data from a file.
Just use fstream/cin or fscanf or whatever you have at your disposal. I usually use direct binary file access instead of text file functions (as I have my own); for more info see
Convert the Linux open, read, write, close functions to work on Windows
However, file access functions also depend on the OS and programming environment used, so yours might be called differently.
Yes, there is a way to read a text file row by row (line by line).
You have to parse the text line by line. I read the whole file into memory and scan byte by byte for the line separators 13,10.
Then I parse each line word by word by scanning for space/tab (32,9) or separators like ,;+- and if I know ahead of time that I am reading a number, I treat any ASCII code not valid in numbers as a separator too.
Then convert the string to a number (atof()) and append it to the target table. Beware: the national decimal-point setting in the OS might affect the conversion, as the file itself might use a different separator, so you should handle that either by converting the string or by changing the separator used for the conversion.
Here is an example of using std::ifstream to read line by line and parse a Wavefront file (similar to OFF but slightly more complex). The other answer there is mine, using my own functions for parsing instead...
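For OFF itself, a minimal reader following the layout above might look like this. It is a sketch that assumes a well-formed file with no comment lines, and it handles the optional leading OFF keyword:

```cpp
#include <fstream>
#include <sstream>
#include <string>
#include <vector>

struct Vec3 { double x, y, z; };

// Minimal OFF reader: optional "OFF" line, then
// "numVertices numFaces numEdges", then vertex lines, then face lines.
bool readOFF(std::istream& in,
             std::vector<Vec3>& verts,
             std::vector<std::vector<int>>& faces) {
    std::string first;
    if (!(in >> first)) return false;
    std::size_t nv, nf, ne;
    if (first == "OFF") {
        if (!(in >> nv >> nf >> ne)) return false;
    } else {
        // The OFF keyword is optional; the first token is then the vertex count.
        nv = std::stoul(first);
        if (!(in >> nf >> ne)) return false;
    }
    verts.resize(nv);
    for (auto& v : verts)
        if (!(in >> v.x >> v.y >> v.z)) return false;
    faces.resize(nf);
    for (auto& f : faces) {
        int count;                       // vertices per face (3 for triangles)
        if (!(in >> count)) return false;
        f.resize(count);
        for (int& idx : f)
            if (!(in >> idx)) return false;
    }
    return true;
}
```

Because operator>> skips whitespace including newlines, the same loop works whether values are separated by spaces or line breaks.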
Compute the face normal using the cross product of 2 of its edges.
This is a very common way, so if you have a triangle:
3 1955 2193 2185
then:
normal = cross( pnt[1955]-pnt[2193] , pnt[2193]-pnt[2185] );
If you compute the normal consistently from the same edges across all faces and your mesh has a strict winding rule, then all normals will point toward the outside or the inside of your mesh too...
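A sketch of that computation. Here the two edges are taken from a shared vertex (p1 - p0 and p2 - p0), which yields the same normal direction, up to sign, as the formula above; the result is normalized so it can be used directly for lighting:

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

Vec3 sub(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }

Vec3 cross(const Vec3& a, const Vec3& b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

// Unit face normal of triangle (p0, p1, p2). With consistent winding across
// the mesh, all normals point to the same side.
Vec3 faceNormal(const Vec3& p0, const Vec3& p1, const Vec3& p2) {
    Vec3 n = cross(sub(p1, p0), sub(p2, p0));
    double len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    if (len > 0.0) { n.x /= len; n.y /= len; n.z /= len; }
    return n;
}
```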
I'm working on a research project and am assigned to do a bit of data scraping and write code in R that can extract the current temperature for a particular zip code from a site such as wunderground.com. Now, this may be a bit of an abstract question, but does anyone know how to do the following?
I can extract the current temperature of a particular zip code by doing this:
temps <- readLines("http://www.wunderground.com/q/zmw:20904.1.99999")
edit(temps)
temps  # gives me the source code for the website, where I can look at the line that contains the temperature
ldata <- temps[lnumber]
ldata
# then have a few gsub functions that basically extract
# just the numerical data (57.8, for example) from that line of code
I have a CSV file that contains the zip code of every city in the country, and I have it imported into R, arranged in a table by zip, city and state. My challenge now is to write a method (using Java terminology here, because I'm new to R) that extracts 6-7 consecutive zip codes (after a particular one specified), substitutes each into the readLines URL after the segment zmw:XXXXX, and runs the code above on each resulting link. I don't quite know how to extract the data from the table (maybe with a for loop?), and then I don't know how to use that to modify the link, which is where I'm really getting stuck. I have a bit of a Java background, so I understand HOW to approach this problem, just not the syntax. I understand this is quite an abstract question since I didn't provide a lot of code, but I just want to know the functions/syntax that will help me extract the data from the table and use it to modify the link through a function rather than doing it manually.
So this is about the Weather Underground data.
You can download CSV files from individual weather stations on Wunderground; however, you need to know the weather station identifier. Here is an example URL for a weather station in Kirkland, WA (KWAKIRKL8):
http://www.wunderground.com/weatherstation/WXDailyHistory.asp?ID=KWAKIRKL8&day=31&month=1&year=2014&graphspan=day&format=1
Here is some R code:
library(RCurl)  # getURL() comes from the RCurl package
url <- 'http://www.wunderground.com/weatherstation/WXDailyHistory.asp?ID=KWAKIRKL8&day=31&month=1&year=2014&graphspan=day&format=1'
s <- getURL(url)
s <- gsub("<br>\n", "", s)
wdf <- read.csv(textConnection(s))
And here is a page with which you can manually find stations and their codes.
http://www.wunderground.com/wundermap/
Since you only need a few you can pick them out manually.
So I was thinking about how a .zip archive is structured, and then I thought: how could I create my own archive format?
You would want to know what you want to compress. E.g., zip works great for many things, but not so well for audio files; FLAC works well for audio, but poorly on text files (provided you could find a way to apply it at all).
Once you had a compression scheme, you would lay out the appropriate metadata needed to later decompress the information, followed by the compressed data.
Perhaps you would research a lossless compression method such as entropy encoding. You might decide that arithmetic coding is more optimal than Huffman coding and implement an arithmetic codec. You might also look at dictionary encoding if you are more interested in compressing text.
Edit in response to comment
One would have to include the entropy tables decided upon when encoding the data, so that it could later be decoded.
Take JPEG, for example. JPEG uses a colorspace transformation to YCrCb, quantization, a discrete cosine transformation (DCT), and then Huffman coding on the data. The colorspace transformation metadata is included in the headers (how many bits per color and how many samples per channel, along with the size of the image). The quantization tables are included, with an index of which table matches which channel, as are the Huffman tables used to encode the DC and AC coefficients. The DCT and the zigzag coefficient pattern are part of the standard, so after dequantization you must IDCT the information and de-zigzag the coefficients.
Basically, for JPEG:
Read the tables given in the header.
Figure out the entropy-encoded format from the header info about size and color.
Use the Huffman tables to expand the data segment.
Dequantize appropriately.
IDCT and de-zigzag.
You would have to make your own standard, figure out the minimum information needed to recover the data, and store it in a way that is readable without knowing the details of what's inside.
I don't know the details of .zip, but I would imagine it has a couple of dictionary tables and a couple of entropy tables. You would entropy-decode the data segment (whose extent must somehow be determined by the standard or a marker), then apply a reverse dictionary substitution.
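To make the "metadata plus compressed data" idea concrete, here is a toy archive format: a 4-byte original-length header followed by run-length-encoded (count, byte) pairs. The format and function names are invented for illustration; real formats like zip store far richer metadata (file names, CRCs, compression method, entropy tables), but the principle is the same: the header carries exactly enough to decode the rest.

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Pack: 4-byte little-endian original length, then (runLength, byte) pairs.
std::vector<uint8_t> pack(const std::string& data) {
    std::vector<uint8_t> out;
    uint32_t n = static_cast<uint32_t>(data.size());
    for (int i = 0; i < 4; ++i) out.push_back((n >> (8 * i)) & 0xFF);
    for (std::size_t i = 0; i < data.size();) {
        std::size_t run = 1;
        while (i + run < data.size() && data[i + run] == data[i] && run < 255)
            ++run;
        out.push_back(static_cast<uint8_t>(run));
        out.push_back(static_cast<uint8_t>(data[i]));
        i += run;
    }
    return out;
}

// Unpack: read the length from the header, then expand each pair.
std::string unpack(const std::vector<uint8_t>& arc) {
    uint32_t n = 0;
    for (int i = 0; i < 4; ++i) n |= static_cast<uint32_t>(arc[i]) << (8 * i);
    std::string out;
    for (std::size_t i = 4; i + 1 < arc.size(); i += 2)
        out.append(arc[i], static_cast<char>(arc[i + 1]));
    return out.substr(0, n);
}
```

RLE only wins on data with long runs; that is exactly the point made above about matching the compression scheme to the data.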
Download the sources of bzip2 and compile them. And then go from there.
Where can I find some GPS unit test data to validate my code?
For example:
Distance between two coordinates (miles / kilometers)
Heading/bearing from Point A to Point B
Speed from Point A to Point B, given a duration
Right now I'm using Google Earth to fumble around with this, but it would be nice to know I'm validating my calculations against something, well, valid.
"GPS unit test data" is quite vague. You could easily have a pile of data, but if you don't know what they represent, what value are the tests?
If you're looking for a math sample of latitude/longitude calculations, check out the example on Wikipedia's Great Circle distances article: http://en.wikipedia.org/wiki/Great-circle_distance#Worked_example It has two points and works the math to compute the distance between them.
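If you want a second check against that worked example, the haversine form of the great-circle distance is short enough to sketch. This assumes a spherical Earth with mean radius 6371 km, so expect small deviations from ellipsoidal results:

```cpp
#include <cmath>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

// Great-circle distance in km via the haversine formula.
// Inputs are in decimal degrees; south and west are negative.
double haversineKm(double lat1, double lon1, double lat2, double lon2) {
    const double R = 6371.0;           // mean Earth radius, km
    const double rad = M_PI / 180.0;
    double dLat = (lat2 - lat1) * rad;
    double dLon = (lon2 - lon1) * rad;
    double a = std::sin(dLat / 2) * std::sin(dLat / 2) +
               std::cos(lat1 * rad) * std::cos(lat2 * rad) *
               std::sin(dLon / 2) * std::sin(dLon / 2);
    return 2.0 * R * std::asin(std::sqrt(a));
}
```

For the Wikipedia article's airport pair, Nashville (36.12, -86.67) to LAX (33.94, -118.40), this lands near 2886 km, within a few km of the article's worked result given the spherical approximation.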
Or are you looking for the data that comes directly from a GPS unit? These are called NMEA sentences. An NMEA sentence begins with $GP and the next 3 characters are the sentence code, followed by the sentence data. http://aprs.gids.nl/nmea/ has a list.
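When working with raw sentences, it helps to validate them first. Every NMEA sentence carries a checksum: the XOR of all characters between '$' and '*', written as two hex digits at the end. A sketch (note that talkers other than GPS use prefixes like $GL or $GN, but the checksum rule is the same):

```cpp
#include <cstdio>
#include <string>

// Returns true if the two hex digits after '*' match the XOR of all
// characters between '$' and '*'. Assumes a single well-formed sentence.
bool nmeaChecksumOk(const std::string& sentence) {
    if (sentence.empty() || sentence[0] != '$') return false;
    std::size_t star = sentence.find('*');
    if (star == std::string::npos || star + 2 >= sentence.size()) return false;
    unsigned char sum = 0;
    for (std::size_t i = 1; i < star; ++i) sum ^= sentence[i];
    unsigned given = 0;
    std::sscanf(sentence.c_str() + star + 1, "%2x", &given);
    return sum == given;
}
```

For example, the payload "GPRMC,TEST" XORs to 0x71, so "$GPRMC,TEST*71" passes and any other trailing digits fail. Running captured sentences through a check like this quickly flags serial-line corruption.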
You could certainly Google for "sample nmea data". The magnalox site appears to have some downloadable sample files, but I didn't check them to see if they'd be useful to you.
A better option would probably be to record your own data. Connect your laptop to your GPS unit, set it up to capture the serial data being emitted from the GPS, set the GPS to record your track, and take it for a short test drive. You can then compare how you processed the captured data based on what you know from the stored track (and from your little drive.) You could even have a web cam record the screen of the GPS to record heading/bearing information that doesn't arrive in the sentences.
Use caution if screen-scraping NMEA sentences from a web site; all valid GPS NMEA sentences begin with "$GP".
RandomProfile offers randomly generated valid NMEA sentences.