I have a text file that holds values like this:
30 Text
21 Text
12 Text
1 Text
3 Text
I want to read this into a 2D array to keep the number and the text identifier together. Once I've done this, I want to sort it into ascending order, as the text file will be unsorted.
What is the best way to go about this in C++? Should I put it in an array? My objective is just to get the top 3 highest values from the text file. Is there a data structure that would be better suited to this, or a better way to go about it? I can structure the text file any way; it's not a concrete format, so it can be changed if that would help.
TIA
If you only want the top three values, the most efficient way may be to define three variables (or a three-element array), read the file line-by-line, and if a newly read line belongs in the top three, put it there.
But if you want to use containers, I'd go with a std::vector and use std::sort, assuming that the file is small enough that all the data fits in memory.
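For example, a minimal sketch of the vector-and-sort approach (the file name and record layout are assumptions based on the sample above):

#include <algorithm>
#include <fstream>
#include <iostream>
#include <string>
#include <vector>

struct Record {
    int value;
    std::string text;
};

// Sort highest value first.
bool byValueDesc(const Record& a, const Record& b) { return a.value > b.value; }

int main() {
    std::ifstream in("data.txt");          // hypothetical file name
    std::vector<Record> records;

    Record r;
    while (in >> r.value >> r.text)        // "30 Text" -> value=30, text="Text"
        records.push_back(r);

    std::sort(records.begin(), records.end(), byValueDesc);
    // (std::partial_sort would do if you only ever need the top three.)

    for (std::size_t i = 0; i < records.size() && i < 3; ++i)
        std::cout << records[i].value << ' ' << records[i].text << '\n';
}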
I would prefer to put them into a std::map (if you have unique keys; if not, use a std::multimap instead). As you insert data into the map, it is always kept sorted. And since a map is ordered ascending by key, if you want the 3 highest values, just take the last 3 items of the map (or walk it with a reverse iterator).
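A sketch of that multimap idea (same assumed "number text" layout as above):

#include <fstream>
#include <iostream>
#include <map>
#include <string>

int main() {
    std::ifstream in("data.txt");                  // hypothetical file name
    std::multimap<int, std::string> records;       // entries stay sorted by key

    int value;
    std::string text;
    while (in >> value >> text)
        records.insert(std::make_pair(value, text));

    // The three highest keys are at the end; walk backwards.
    int printed = 0;
    for (std::multimap<int, std::string>::reverse_iterator it = records.rbegin();
         it != records.rend() && printed < 3; ++it, ++printed)
        std::cout << it->first << ' ' << it->second << '\n';
}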
Related
I am currently pulling data from a CSV file. The CSV file has ~89 columns and 2000 rows worth of data. I am extracting several specific columns, such as columns 1, 2, 21, 22, 66, and 67, using a variety of getlines and loops, and storing that data into vectors inside the loops. Once I have read through the entire file I have 6 vectors full of the data I want. I make some adjustments to that data and store it back into a vector. I now want to place that new data back into the columns I took it out of, without actually touching the other data that I don't need. What would be the best approach for this? I don't want to make 89 variables to hold all that other data; I would much rather write over those particular columns, or something similar.
Instead of using 6 vectors to store column data, you can use one vector of strings to hold the data from one row. Then you update the elements at 1,2,21,22,66,67 in that vector and write it to another file.
std::vector<std::string> row; // 89 elements after reading and parsing a row.
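A rough sketch of that row-at-a-time approach (the file names, the column indices, and the adjust() step are placeholders for whatever you actually do):

#include <fstream>
#include <sstream>
#include <string>
#include <vector>

// Placeholder for whatever adjustment you apply to a field.
std::string adjust(const std::string& field) { return field; }

int main() {
    std::ifstream in("input.csv");                // hypothetical file names
    std::ofstream out("output.csv");

    const int cols[] = {1, 2, 21, 22, 66, 67};    // the columns you care about
    std::string line;
    while (std::getline(in, line)) {
        std::vector<std::string> row;             // the 89 fields of one row
        std::stringstream ss(line);
        std::string field;
        while (std::getline(ss, field, ','))      // split the row on commas
            row.push_back(field);

        for (int i = 0; i < 6; ++i)
            if (cols[i] < (int)row.size())
                row[cols[i]] = adjust(row[cols[i]]);

        for (std::size_t i = 0; i < row.size(); ++i)
            out << row[i] << (i + 1 < row.size() ? "," : "");
        out << '\n';
    }
}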
Processing 500,000 rows this way should be fast enough, let alone the ~2000 here. If it is not, try a column-oriented database, e.g. OpenTSDB.
I have a .CSV file that's storing data from a laser. It records the height of the laser beam every second.
The .CSV file ends up having rows for each measurement that are all in this format:
DR,04,#
where the # is the height reading.
For example, if the beam is at a height of 10, the reading would say:
DR,04,10.
I want my program in C++ to read only the height (third column of the .CSV) from each row and put it into an array. I do not want the first two columns at all. That way I end up with an array with just a bunch of height values from each measurement.
How do I do that?
You can use strtok() to separate out the three columns and then just take the last value.
You could also just take the string and scan for the first comma, and then scan from there for the second comma. What follows is the value you are after.
You could also use sscanf() to parse out the individual values.
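For instance, a minimal sketch of the "scan for the commas" approach, assuming every line really looks like DR,04,# (the file name is made up):

#include <cstdlib>
#include <fstream>
#include <iostream>
#include <string>
#include <vector>

int main() {
    std::ifstream in("laser.csv");                 // hypothetical file name
    std::vector<double> heights;

    std::string line;
    while (std::getline(in, line)) {
        std::string::size_type first = line.find(',');
        if (first == std::string::npos) continue;
        std::string::size_type second = line.find(',', first + 1);
        if (second == std::string::npos) continue;
        heights.push_back(std::atof(line.c_str() + second + 1));  // value after the 2nd comma
    }

    for (std::size_t i = 0; i < heights.size(); ++i)
        std::cout << heights[i] << '\n';
}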
This really isn't a difficult problem, and there are many ways to approach it. That is why people are complaining: you probably should have tried something first and then asked a question here once you got stuck on something specific.
I have a CSV file that has about 10 different columns. I'm trying to figure out what's the best method to use here.
Data looks like this:
"20070906 1 0 0 NO"
There are about 40,000 records like this to be analyzed. I'm not sure what's best here: split each column into its own vector, or put each whole row into a vector?
Thanks!
I think this is a somewhat subjective question, but IMHO having a single vector that contains the split-up rows will likely be easier to manage than separate vectors for each column. You could even create a row object for the vector to store, to make accessing and processing the data in the rows/columns friendlier.
Although if you are only doing processing at the column level, and not at the row or entry level, having individual column vectors would be easier.
Since the data set is fairly small (assuming you are using a PC and not some other device, like a smartphone), you can read the file line by line into a vector of strings, then parse the elements one by one and populate a vector of structures holding each record's data.
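A sketch of the row-object idea, assuming the five whitespace-separated fields from the sample line (the struct's field names are made up; extend it for the full ten columns):

#include <fstream>
#include <sstream>
#include <string>
#include <vector>

// Hypothetical field names; adjust to whatever the columns really mean.
struct Row {
    std::string date;
    int a, b, c;
    std::string flag;
};

int main() {
    std::ifstream in("data.csv");          // hypothetical file name
    std::vector<Row> rows;

    std::string line;
    while (std::getline(in, line)) {
        std::istringstream ss(line);
        Row r;
        if (ss >> r.date >> r.a >> r.b >> r.c >> r.flag)
            rows.push_back(r);             // ~40,000 of these fit easily in memory
    }
    // rows[i].a gives per-row access; a "column" is just rows[i].a over all i.
}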
Suppose I want to store 3 lines in a file, both in Python and in C++.
I want to store it like this
aaa
bbb
ccc
But I am giving ccc as input first, then bbb, then aaa. How will I traverse the file from bottom to top, and also store it from bottom to top?
It isn't obvious from the title and question whether you want to store to a file, load from a file, or both, so I'll cover both cases:
Reading
If it's OK to load it all into memory at once (in Python):
list(reversed(list(open('foo.txt'))))
Otherwise, it gets a lot more difficult. Processing a file backwards requires that you read blocks of data at a time from the end, scanning backwards through each block for newline markers, and stitching things back together at block boundaries.
Writing
If the data all fits in memory at once, put the lines into a list (in Python):
open('foo.txt', 'w').writelines(reversed(data))
If data is an iterable, replace it with list(data).
If the data doesn't fit in memory (e.g., you have some generator that spits out a ton of data), the problem will be much harder. The simplest solution that comes to mind is to just push the data into a sqlite database and then copy it into the file. Or you might just find it easier to use the data directly from sqlite.
You might want to use a collections.deque. AFAIK those are optimised for insertion at either endpoint, so you could read your file as it is and push the lines into a deque object with its appendleft method ... just a thought. No idea how efficient that would be. :)
Insert each line at the beginning of your linear structure (list, vector<string>) as you generate it, then iterate the structure from beginning to end.
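On the C++ side, a std::deque makes the insert-at-the-front part cheap. A minimal sketch (the file names are just examples):

#include <deque>
#include <fstream>
#include <iostream>
#include <string>

int main() {
    std::ifstream in("foo.txt");             // input, written ccc / bbb / aaa
    std::deque<std::string> lines;

    std::string line;
    while (std::getline(in, line))
        lines.push_front(line);              // each new line goes to the front

    std::ofstream out("reversed.txt");       // hypothetical output name
    for (std::deque<std::string>::const_iterator it = lines.begin(); it != lines.end(); ++it)
        out << *it << '\n';                  // front-to-back is now aaa / bbb / ccc
}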
I want to be able to read from an unsorted source text file (one record in each line), and insert the line/record into a destination text file by specifying the line number where it should be inserted.
Where to insert the line/record into the destination file will be determined by comparing the incoming line from the incoming file to the already ordered list in the destination file. (The destination file will start as an empty file and the data will be sorted and inserted into it one line at a time as the program iterates over the incoming file lines.)
Incoming File Example:
1 10/01/2008 line1data
2 11/01/2008 line2data
3 10/15/2008 line3data
Desired Destination File Example:
2 11/01/2008 line2data
3 10/15/2008 line3data
1 10/01/2008 line1data
I could do this by performing the sort in memory via a linked list or similar, but I want to allow this to scale to very large files. (And I am having fun trying to solve this problem as I am a C++ newbie :).)
One of the ways to do this may be to open 2 file streams with fstream (1 in and 1 out, or just 1 in/out stream), but then I run into the difficulty that seeking within a file works on absolute byte offsets from the start of the file rather than line numbers :).
I'm sure problems like this have been tackled before, and I would appreciate advice on how to proceed in a manner that is good practice.
I'm using Visual Studio 2008 Pro C++, and I'm just learning C++.
The basic problem is that under common OSs, files are just streams of bytes. There is no concept of lines at the filesystem level; those semantics have to be added as an additional layer on top of the OS-provided facilities. Although I have never used it, I believe that VMS has a record-oriented filesystem that would make what you want to do easier. But under Linux or Windows, you can't insert into the middle of a file without rewriting the rest of it. It is similar to memory: at the lowest level it's just a sequence of bytes, and if you want something more complex, like a linked list, it has to be built on top.
If the file is just a plain text file, then I'm afraid the only way to find a particular numbered line is to walk the file counting lines as you go.
The usual 'non-memory' way of doing what you're trying to do is to copy the file from the original to a temporary file, inserting the data at the right point, and then do a rename/replace of the original file.
Obviously, once you've done your insertion, you can copy the rest of the file in one big lump, because you don't care about counting lines any more.
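A rough sketch of that copy-and-insert idea (the .tmp suffix and the insert-by-line-number interface are just for illustration):

#include <cstdio>
#include <fstream>
#include <string>

// Copy 'path' to a temporary file, inserting 'newLine' so that it becomes line
// number 'lineNo', then replace the original. Returns false on I/O failure.
bool insertLine(const std::string& path, int lineNo, const std::string& newLine) {
    std::ifstream in(path.c_str());
    std::string tmpPath = path + ".tmp";
    std::ofstream out(tmpPath.c_str());
    if (!in || !out) return false;

    std::string line;
    int current = 1;
    while (std::getline(in, line)) {
        if (current == lineNo)
            out << newLine << '\n';
        out << line << '\n';                // past the insertion point: copy in one big lump
        ++current;
    }
    if (current <= lineNo)                  // insertion point at or past the end: append
        out << newLine << '\n';

    in.close();
    out.close();
    std::remove(path.c_str());              // rename/replace the original
    return std::rename(tmpPath.c_str(), path.c_str()) == 0;
}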
A [distinctly-no-c++] solution would be to use the *nix sort tool, sorting on the second column of data. It might look something like this:
cat <file> | sort -k 2,2 > <file2> ; mv <file2> <file>
It's not exactly in-place, and it fails the request of using C++, but it does work :)
You might be tempted to shorten that to:
cat <file> | sort -k 2,2 > <file>
but don't: the shell truncates <file> to set up the redirection before cat gets to read it, so you would almost certainly lose the data.
* http://www.ss64.com/bash/sort.html - sort man page
One way to do this is not to keep the file sorted, but to use a separate index, for example with Berkeley DB. Each record in the db holds the sort key and the offset into the main file. The advantage of this is that you can have multiple sort orders without duplicating the text file. You can also change lines without rewriting the whole file, by appending the changed line at the end and updating the index to ignore the old line and point to the new one. We used this successfully for multi-GB text files that we had to make many small changes to.
Edit: The code I developed to do this is part of a larger package that can be downloaded here. The specific code is in the btree* files under source/IO.
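The same idea can be sketched without Berkeley DB, using an in-memory std::multimap from sort key to byte offset (Berkeley DB essentially makes this index persistent and lets it grow beyond RAM). The file name and the choice of the second field as the key are assumptions based on the example above:

#include <fstream>
#include <iostream>
#include <map>
#include <string>

int main() {
    std::ifstream in("records.txt");                    // hypothetical file name
    std::multimap<std::string, std::streampos> index;   // sort key -> offset of the line

    std::streampos pos = in.tellg();
    std::string line;
    while (std::getline(in, line)) {
        // Use the second whitespace-separated field (the date in the example) as the key;
        // note that MM/DD/YYYY strings don't sort chronologically, so adapt as needed.
        std::string::size_type sp1 = line.find(' ');
        std::string::size_type sp2 = (sp1 == std::string::npos)
                                         ? std::string::npos : line.find(' ', sp1 + 1);
        if (sp2 != std::string::npos)
            index.insert(std::make_pair(line.substr(sp1 + 1, sp2 - sp1 - 1), pos));
        pos = in.tellg();
    }

    // Read the lines back in key order by seeking to each stored offset.
    in.clear();                                          // clear EOF so seekg works
    for (std::multimap<std::string, std::streampos>::iterator it = index.begin();
         it != index.end(); ++it) {
        in.seekg(it->second);
        std::getline(in, line);
        std::cout << line << '\n';
    }
}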
Try a modified bucket sort. Assuming the id values lend themselves well to it, you'll get a much more efficient sorting algorithm. You may be able to improve I/O efficiency by actually writing out the buckets (use small ones) as you scan, thus potentially reducing the amount of randomized file I/O you need. Or not.
Hopefully, there are some good code examples on how to insert a record based on line number into the destination file.
You can't insert content into the middle of a file (that is, without overwriting what was previously there); I'm not aware of any production-level filesystems that support it.
I think the question is more about implementation than about specific algorithms; specifically, about handling very large datasets.
Suppose the source file has 2^32 lines of data. What would be an efficient way to sort the data?
Here's how I'd do it:
Parse the source file and extract the following information: sort key, offset of the line in the file, length of the line. This information is written to another file, producing a dataset of fixed-size elements that is easy to index; call it the index file.
Use a modified merge sort. Recursively divide the index file until the number of elements to sort reaches some minimum amount (a true merge sort recurses down to 1 or 0 elements; I suggest stopping at 1024 or so, and this will need fine-tuning). Load each block of data from the index file into memory, quicksort it, and write the data back to disk.
Perform the merge on the index file. This is tricky, but can be done like this: load a block of data from each source (1024 entries, say). Merge into a temporary output file and write. When a block is emptied, refill it. When no more source data is found, read the temporary file from the start and overwrite the two parts being merged - they should be adjacent. Obviously, the final merge doesn't need to copy the data (or even create a temporary file). Thinking about this step, it is probably possible to set up a naming convention for the merged index files so that the data doesn't need to overwrite the unmerged data (if you see what I mean).
Read the sorted index file, pull each line of data out of the source file, and write it to the result file.
It certainly won't be quick with all that file reading and writing, but it should be quite efficient; the real killer is the random seeking into the source file in the final step. Up to that point, the disk access is mostly linear and should therefore be reasonably efficient.
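As a concrete starting point, here is a minimal sketch of step one only: building the fixed-size index file of (key, offset, length) records. The file names are made up, and it assumes the sort key parses as a leading integer on each line; for the date key in the example above you would parse the second field instead:

#include <cstdlib>
#include <fstream>
#include <string>

// Fixed-size index record: easy to read and write in blocks, and to sort on disk.
struct IndexRecord {
    long key;        // whatever the sort key parses to
    long offset;     // byte offset of the line in the source file
    long length;     // length of the line in bytes
};

int main() {
    std::ifstream src("source.txt");                       // hypothetical file names
    std::ofstream idx("source.idx", std::ios::binary);

    std::string line;
    long offset = 0;
    while (std::getline(src, line)) {
        IndexRecord rec;
        rec.key = std::atol(line.c_str());   // assumes a leading integer key
        rec.offset = offset;
        rec.length = (long)line.size();
        idx.write(reinterpret_cast<const char*>(&rec), sizeof rec);
        offset += (long)line.size() + 1;     // +1 for the '\n'; adjust for CRLF files
    }
    // The merge-sort pass and the final reordering pass then work purely on source.idx.
}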