Asking for advice on designing data storage in C++ [closed] - c++

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 7 months ago.
I am building software in C++ to analyze log files from ArduPilot.
The data in the files comes in the following form:
Sensor name (GYRO, Barometer, etc.).
Each sensor has several fields of data; for example, the barometer has the following fields:
Altitude, Pressure, Temperature, Offset and some more.
Every entry in the log file that records barometer data will contain all of these fields.
Example of a line in the log file:
BARO, 843762779, 0, -1.443359, 94956.91, 43.06, -1.074093, 843762, 0, 28.38455, 1
Here is the general idea:
List of sensors: BARO, GYRO, BAT ...
Every sensor has some fields.
Every field should have either a float array or a float vector.
This way I can feed the Graph module the address of the vector to display the data of that field.
I would love some help with how to build the data structure, so that I can:
Easily add data every time I read a line with more sensor data.
Easily access an array/vector of a single field for graph display.
Any ideas?
Edit:
To clear things up:
I can have 100,000 readings per field X many fields per sensor X many sensors...
I can't make up my mind whether to use vectors on the heap or pointers to vectors on the stack.
Should I use something like unordered_map for quick access?
unordered_map<int, something>
where int is the sensor's ID.

Maybe you can bundle the individual values in a struct? Something like:
struct Sensor {
    std::string name;
    double pressure;
    double temperature;
    // ...
};
and then collect all sensors in a std::vector<Sensor>?
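
Since you want a contiguous series of values per field (so the Graph module can be handed the address of a whole vector), another option is a column-oriented layout. Here is a minimal sketch, assuming lookup by sensor name and field name; SensorLog, LogStore and addReading are made-up names for illustration:

#include <string>
#include <unordered_map>
#include <vector>

// One growing column of values per field, e.g. "Alt", "Press", "Temp".
struct SensorLog {
    std::unordered_map<std::string, std::vector<float>> fields;
};

// All sensors, keyed by sensor name ("BARO", "GYRO", "BAT", ...).
using LogStore = std::unordered_map<std::string, SensorLog>;

// Append one parsed value; call this once per field while reading a log line.
void addReading(LogStore& store, const std::string& sensor,
                const std::string& field, float value) {
    store[sensor].fields[field].push_back(value);
}

// The Graph module can then take a reference to a whole column:
// const std::vector<float>& alt = store["BARO"].fields["Alt"];

With around 100,000 readings per field, calling reserve() on each vector once the expected size is known avoids repeated reallocation; swapping the string keys for integer sensor IDs (as in the unordered_map<int, ...> idea from the question) works the same way.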

Related

How can I determine a state with a given zip code in Stata? [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 2 years ago.
Currently, my line of code is really long and I was curious to know if there was a more efficient way of doing this.
As Nick has pointed out, your question is missing most of the information that would make it answerable. Please read more here, and add more information to your question.
In the meantime, a useful approach is to merge your zipcode data with a dataframe (or dataset) with the state-zipcode link in it.
* first you need to get the zipcode data from somewhere.
* Here is one way:
!wget "https://www2.census.gov/geo/docs/maps-data/data/rel/zcta_county_rel_10.txt"
* now put this data in a frame
frame create zctaFrame
frame zctaFrame {
    import delimited "zcta_county_rel_10.txt"
}
* now I'm making up a dataset (share some of yours with dataex from SSC)
input str10 name zip
"sam" 55901
"sasha" 84101
"saul" 84111
end
frlink 1:1 zip, frame(zctaFrame zcta5)
frget state, from(zctaFrame)
If this doesn't match what you're trying to do, please add more detail to the question.

Best Approach to read data from multiple tables [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
I have to create an application that reads a live data feed from more than 200 tables simultaneously and processes this data. I want to discuss what the best approach would be to solve this problem with optimum speed, as each table receives 20+ records every minute. So far I can think of the following solutions:
1) I can create multiple threads, each handling some 20-odd symbols independently.
2) I can create two threads, one for reading the data and the other for processing it, but the reader thread will take more time as it has to read all tables sequentially.
My database is MySQL and I am not looking to shift to a NoSQL DB right now. I am using C++ to solve this problem. I feel that if I could get the live data feed in a single table instead of 200+ tables, my second approach would become much more appropriate and faster.
Is the use of MySQL required? If not, you might get a speed increase from a NoSQL "database". Furthermore, retrieving data from a database is always a bottleneck; generally, with that much data volume, you want to load as much as you can into RAM and read it from there, as it is much faster.
You could make a query that only retrieves data newer than a certain timestamp (the timestamp at which your last query was executed), then load that into memory, do all the speed-critical operations there, and clean up old entries that are no longer required.
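
A rough sketch of that polling idea in C++: keep a per-table "last seen" timestamp so each round only asks for rows newer than the previous one. Record, fetchRowsSince and the column names are placeholders, and the actual MySQL call (e.g. SELECT * FROM <table> WHERE ts > ? ORDER BY ts) is left as a stub to be filled in with whatever client library you use:

#include <algorithm>
#include <chrono>
#include <string>
#include <thread>
#include <unordered_map>
#include <vector>

// One parsed feed row; the field names are made up for the sketch.
struct Record {
    std::string table;
    long long   ts    = 0;   // server-side timestamp column
    double      value = 0;
};

// Placeholder for the real database call, e.g. running
//   SELECT * FROM <table> WHERE ts > ? ORDER BY ts
// through your MySQL client library and converting rows to Record.
std::vector<Record> fetchRowsSince(const std::string& table, long long sinceTs) {
    (void)table; (void)sinceTs;
    return {};
}

int main() {
    std::vector<std::string> tables = {"feed_001", "feed_002"};  // ...200+ names
    std::unordered_map<std::string, long long> lastSeen;         // per-table watermark

    for (int round = 0; round < 3; ++round) {                    // demo: 3 polling rounds
        for (const auto& t : tables) {
            for (const Record& r : fetchRowsSince(t, lastSeen[t])) {
                lastSeen[t] = std::max(lastSeen[t], r.ts);       // advance the watermark
                // process r in memory here
            }
        }
        std::this_thread::sleep_for(std::chrono::seconds(1));
    }
}

The per-table loop is also a natural unit to hand to a small pool of worker threads if the sequential read turns out to be the bottleneck.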

C++: How many classes should I make? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 7 years ago.
I'm practicing with classes and I'm given the task of creating an employee management system. I'm given two .txt files. One (details.txt) has details of each employee with the following info: ID, name, DOB, SSN, department, and position. A sample of the file looks like this:
5 ali 6/24/1988 126-42-6989 support assistant
13 tim 2/10/1981 131-12-1034 logistics manager
The other .txt (timelog.txt) will contain a daily log of when employees clock in and clock out. The following format for this file is: ID, date, clock in time, and clock out time. Sample:
5 3/11 0800 1800
13 3/11 0830 1830
Firstly, I am to allow users to search for an employee by ID, name, department or position. Doing so will display all of the employee's info (multiple employees if they have the same name or position, or are from the same department) as well as the total number of hours they have worked at the company.
Secondly, users are to be given another option to look up employee time logs by ID number. This will display the entire clock in/ clock out history of that employee as well as total hours worked each day.
I'm planning to read in the info from the .txt files via ifstream and store it as an array of objects. I'm just wondering how many classes I should create. I'm thinking 2 classes: one for employee info (from details.txt) and one for time logs (from timelog.txt). Is there any other class I should create, or should those 2 suffice?
Short answer: At least two.
Long answer: It depends on many things. Especially what part of code you can identify as potentially reusable.
If you asked for the highest possible number of classes that could accomplish your task, I would think about a separate class for each of:
Employee
EmployeeManager (Factory, Holder etc.) – creates, holds and deletes the Employee objects, provides search feature
DayWork – a row from timelog.txt, can calculate the amount of hours/minutes spent in work that day
WorkLog – a list of DayWork objects for one employee, can calculate the whole spent time
TextLineParser – encapsulation of std::ifstream
The right answer is most likely somewhere in between. Keep in mind that C++ is a multi-paradigm language and you can perform some operations without having a class for them. Instead, they can be performed in a function or a set of functions in a C-like unit. That's especially useful for one-time operations where the functions don't share common data (potential properties).
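
For the two-class split the question proposes, a minimal sketch might look like this (the class and member names are just illustrative):

#include <string>
#include <vector>

// One row of details.txt: ID, name, DOB, SSN, department, position.
struct Employee {
    int         id = 0;
    std::string name, dob, ssn, department, position;
};

// One row of timelog.txt: ID, date, clock-in, clock-out (times as HHMM).
struct TimeEntry {
    int         id = 0;
    std::string date;
    int         clockIn = 0, clockOut = 0;

    // Hours worked that day, e.g. 0800 to 1800 gives 10.0.
    double hours() const {
        int in  = clockIn  / 100 * 60 + clockIn  % 100;
        int out = clockOut / 100 * 60 + clockOut % 100;
        return (out - in) / 60.0;
    }
};

The program can then hold a std::vector<Employee> and a std::vector<TimeEntry> filled from the two files with std::ifstream, and the search and total-hours features can live in free functions or in a small manager class like the EmployeeManager listed above.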

Weather data scraping and extraction in R [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 7 years ago.
I'm working on a research project and am assigned to do a bit of data scraping and to write code in R that can help extract the current temperature for a particular zip code from a site such as wunderground.com. Now this may be a bit of an abstract question, but does anyone know how to do the following?
I can extract the current temperature of a particular zip code by doing this:
temps <- readLines("http://www.wunderground.com/q/zmw:20904.1.99999")
edit(temps)
temps  # gives me the source code for the website, where I can look at the line that contains the temperature
ldata <- temps[lnumber]
ldata
# then have a few gsub functions that basically extracts
# just the numerical data (57.8 for example) from that line of code
I have a csv file that contains the zip code of every city in the country, and I have that imported in R. It is arranged in a table according to zip, city and state. My challenge now is to write a method (using a Java analogy here because I'm new to R) that extracts 6-7 consecutive zip codes (after a particular one specified) and runs the above code, modifying the link within the readLines function by putting the respective zip code after the link segment zmw:XXXXX and running everything after that based on that link. Now I don't quite know how to extract the data from the table. Maybe with a for loop? But then I don't know how to use that to modify the link. I think that's where I'm really getting stuck. I understand this is quite an abstract question as I didn't provide a lot of code, but I just want to know the functions/syntax that will help me extract the data from the table and somehow use it to modify the link through a function rather than doing it manually.
So this is about the Weather Underground data.
You can download CSV files from individual weather stations on Wunderground; however, you need to know the weather station identifier. Here is an example URL for a weather station in Kirkland, WA (KWAKIRKL8):
http://www.wunderground.com/weatherstation/WXDailyHistory.asp?ID=KWAKIRKL8&day=31&month=1&year=2014&graphspan=day&format=1
Here is some R code:
library(RCurl)  # provides getURL()
url <- 'http://www.wunderground.com/weatherstation/WXDailyHistory.asp?ID=KWAKIRKL8&day=31&month=1&year=2014&graphspan=day&format=1'
s <- getURL(url)
s <- gsub("<br>\n", "", s)   # strip the HTML line breaks in the response
wdf <- read.csv(textConnection(s))
And here is a page with which you can manually find stations and their codes.
http://www.wunderground.com/wundermap/
Since you only need a few, you can pick them out manually.

Reading a .WAV file in C and extracting frequencies from it [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 7 years ago.
I made a structure in C and read all the data into that structure using the fread function. Actually, I am confused about one thing: does the "audio data" mean the original sample data?
And how can we extract frequencies from that audio data?
I can successfully read the data, but I can't understand what I have to do next.
Please explain.
You can easily read a WAV file; just follow this document:
https://ccrma.stanford.edu/courses/422/projects/WaveFormat/
As for extracting frequencies from the file, you would need to apply a Fourier transform to your data, which converts it from the time (amplitude) domain to the frequency domain.
http://en.wikipedia.org/wiki/Fast_Fourier_transform
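
For the reading part, here is a minimal sketch of the canonical 44-byte PCM header described in that document, read with fread as in the question. It assumes a plain PCM file with no extra chunks; real files often contain additional chunks, so a robust reader should walk the chunk list instead:

#include <stdint.h>
#include <stdio.h>

/* Canonical PCM WAV header (see the CCRMA WaveFormat page). */
#pragma pack(push, 1)
typedef struct {
    char     chunkId[4];      /* "RIFF" */
    uint32_t chunkSize;
    char     format[4];       /* "WAVE" */
    char     subchunk1Id[4];  /* "fmt " */
    uint32_t subchunk1Size;
    uint16_t audioFormat;     /* 1 = PCM */
    uint16_t numChannels;
    uint32_t sampleRate;
    uint32_t byteRate;
    uint16_t blockAlign;
    uint16_t bitsPerSample;
    char     subchunk2Id[4];  /* "data" */
    uint32_t subchunk2Size;   /* size of the sample data in bytes */
} WavHeader;
#pragma pack(pop)

int main(void) {
    FILE *f = fopen("test.wav", "rb");
    if (!f) return 1;

    WavHeader h;
    if (fread(&h, sizeof h, 1, f) != 1) { fclose(f); return 1; }

    /* The "audio data" is what follows the header: subchunk2Size bytes of
       raw samples, e.g. 16-bit signed integers for 16-bit PCM audio. */
    printf("channels=%u rate=%lu bits=%u data bytes=%lu\n",
           (unsigned)h.numChannels, (unsigned long)h.sampleRate,
           (unsigned)h.bitsPerSample, (unsigned long)h.subchunk2Size);

    fclose(f);
    return 0;
}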
An audio file typically consists of a header and "samples". The samples can be 8, 16 or 32 bit, and integer or floating point. Some audio files store the audio samples in a compressed form (MP3, for example), whereas others store the data as "raw samples".
To analyse the frequencies, you need to perform a "Fourier transform", which will give you an array of "how much is at this frequency". The actual Fourier transform is quite complex to describe (it's certainly more than a few dozen lines of code).
If the samples are in integer form, you'll have to convert them to floating point by dividing each sample by the maximum value (255, 32767 or 2^31 - 1).
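
For example, a minimal sketch of that normalization for 16-bit signed samples (the function name is made up; plug in the samples you read from the data chunk):

#include <cstdint>
#include <vector>

// Convert 16-bit signed PCM samples to floats in roughly [-1.0, 1.0],
// ready to be fed into an FFT routine.
std::vector<float> normalize16(const std::vector<int16_t>& samples) {
    std::vector<float> out;
    out.reserve(samples.size());
    for (int16_t s : samples)
        out.push_back(static_cast<float>(s) / 32767.0f);
    return out;
}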
Here's a package of C++ code to do FFT. There are several others out there.
http://fftwpp.sourceforge.net/
Here is another example of performing the FFT. This one displays the results in a Windows GUI.
http://www.relisoft.com/Freeware/index.htm