I was making my high school project and decided to use something like nested linked lists for some bonus marks. The aim of my project was to create a digital diary containing an unlimited number of pages and an unlimited number of lines per page. My program uses a linked list as a queue, and each element in the queue has its own linked list as a queue. I am using character arrays for the headings and for each sub-unit (line) of the nested queue, and gets and puts for input/output. My program displays the input data, but not all of it correctly: the last elements of the array are sometimes smileys and arrows instead of what I typed. I am using a structure for a line, a class to manage that queue, and a derived class for the page which contains the heading, the page number, and the class containing the lines. The derived-class objects are then used as the bigger linked list in another class. Also, I wish to save the data to a binary file; please tell me whether I should store it line by line or page by page. I am using C++.
The only thing that comes to my mind is an error in pointer dereferencing: your linked lists are pulling data from the wrong place in memory. The smileys and arrows are a telltale sign, since those are low ASCII control characters; printing them usually means you are reading uninitialized bytes or a string that lost its null terminator (gets makes this easy to do, since it performs no bounds checking). Go through the code again and check that everything is referenced properly and that input data is going where it should. Try saving the data line by line first to avoid overflows or errors; if that is successful, then try page by page.
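For the binary file itself, here is a minimal sketch of one workable record layout (the function names and the fixed `uint32_t` length prefix are my own illustrative assumptions, not taken from your classes): store each line as a length-prefixed record, so variable-length strings can be read back without relying on a delimiter byte.

```cpp
#include <cassert>
#include <cstdint>
#include <fstream>
#include <string>
#include <vector>

// Write each line as a length-prefixed record: 4-byte length, then the bytes.
void saveLines(const std::string& path, const std::vector<std::string>& lines) {
    std::ofstream out(path, std::ios::binary);
    for (const std::string& line : lines) {
        std::uint32_t len = static_cast<std::uint32_t>(line.size());
        out.write(reinterpret_cast<const char*>(&len), sizeof(len));
        out.write(line.data(), len);
    }
}

// Read records back until the stream runs out.
std::vector<std::string> loadLines(const std::string& path) {
    std::ifstream in(path, std::ios::binary);
    std::vector<std::string> lines;
    std::uint32_t len = 0;
    while (in.read(reinterpret_cast<char*>(&len), sizeof(len))) {
        std::string line(len, '\0');
        in.read(&line[0], len);
        lines.push_back(line);
    }
    return lines;
}
```

With a format like this, "page by page" is just "line by line" with a heading record and a per-page line count written first, so either granularity works as long as reader and writer agree on the layout.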
I have what I hope is an easy question. I am using the Google Storage client library to loop over blobs in a bucket. After I get the list of blobs in the bucket, I am unable to loop over it again unless I re-run the command that lists the bucket.
I read the documentation on page iterators, but I still don't quite understand why this sort of thing couldn't just be stored in memory like a normal variable in Python. Why is this ValueError being thrown when I try to loop over the object again? Does anyone have any suggestions on how to interact with this data better?
For many sources of data, the potential returned items could be huge. While you may only have dozens or hundreds of objects in your bucket, there is absolutely nothing to prevent you from having millions (billions?) of objects. If you list a bucket, it would make no sense to return a million entries and have any hope of maintaining their state in memory. Instead, Google says you should "page" or "iterate" through them. Each time you ask for a new page, you get the next set of data and are presumed to have lost reference to the previous set of data ... and hence maintain only one set of data at a time at your client.
It is the back-end server that maintains your "window" into the data being returned. All you need to do is say "give me more data, starting from this context", and the next chunk of data is returned.
If you want to walk through your data twice then I would suggest asking for a second iteration. Be careful though, the result of the first iteration may not be the same as the second. If new files are added or old ones removed, the results will be different between one iteration and another.
If you really believe that you can hold the results in memory then as you execute your first iteration, save the results and keep appending new values as you page through them. This may work for specific use cases but realize that you are likely setting yourself up for trouble if the number of items gets too large.
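To make the paging contract concrete, here is a toy, framework-free model of it (the `Page`/cursor names are invented for illustration; this is not the Google client API): the server hands back one page plus an opaque cursor, the client holds only that page, and a second full pass simply starts again from the beginning.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <string>
#include <vector>

// Toy model of server-side paging: the "server" holds the full list,
// the client only ever holds one page plus an opaque cursor.
struct Page {
    std::vector<std::string> items;
    std::size_t nextCursor;  // where the next request should resume
    bool hasMore;
};

Page listPage(const std::vector<std::string>& server,
              std::size_t cursor, std::size_t pageSize) {
    Page p;
    std::size_t end = std::min(cursor + pageSize, server.size());
    p.items.assign(server.begin() + cursor, server.begin() + end);
    p.nextCursor = end;
    p.hasMore = end < server.size();
    return p;
}

// A second full pass just starts again from cursor 0 -- which is why
// two passes can disagree if the bucket changed in between.
std::vector<std::string> listAll(const std::vector<std::string>& server,
                                 std::size_t pageSize) {
    std::vector<std::string> all;
    std::size_t cursor = 0;
    bool more = true;
    while (more) {
        Page p = listPage(server, cursor, pageSize);
        all.insert(all.end(), p.items.begin(), p.items.end());
        cursor = p.nextCursor;
        more = p.hasMore;
    }
    return all;
}
```

The exhausted-iterator ValueError corresponds to trying to resume from a cursor that is already past the end instead of asking for a fresh listing from cursor zero.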
I need to read some text files that contain a huge amount of data, say 4 files each of about 500MB.
Each file contains several lines and each line has about this format:
id timestamp field1 field2 field3 field4
My strategy so far has been to parse each file and, for every line, create a QTreeWidgetItem with a suitable number of fields to store that line (because during execution I want to show some of these data in a QTreeWidget), appending all these items to a QList.
This QList is kept for the whole execution of the program; this way the data are always available and I don't need to parse the files again.
I need all the data available because at any moment I may need to access the data relative to a particular timestamp interval.
However, this strategy seems too expensive in terms of resources: I have seen the program consume several GB of memory and eventually crash.
How can I handle such data in a better way?
What you want is called 'lazy loading'.
There is an example in the Qt documentation that shows how to use QAbstractItemModel, canFetchMore() and fetchMore().
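Stripped of the Qt types, the pattern looks roughly like this. `LazyModel` and its batch size are invented names for illustration, standing in for a model that reimplements canFetchMore()/fetchMore(); a real implementation would parse the next chunk of the file inside fetchMore() instead of fabricating rows.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Framework-free sketch of the canFetchMore()/fetchMore() pattern:
// rows are materialized in small batches, on demand, instead of
// loading everything up front.
class LazyModel {
public:
    LazyModel(std::size_t totalRows, std::size_t batchSize)
        : total_(totalRows), batch_(batchSize) {}

    bool canFetchMore() const { return rows_.size() < total_; }

    void fetchMore() {
        // A real model would parse the next chunk of the file here;
        // this sketch just fabricates row ids.
        std::size_t n = std::min(batch_, total_ - rows_.size());
        for (std::size_t i = 0; i < n; ++i)
            rows_.push_back(static_cast<int>(rows_.size()));
    }

    std::size_t rowCount() const { return rows_.size(); }

private:
    std::size_t total_;
    std::size_t batch_;
    std::vector<int> rows_;
};
```

The view calls canFetchMore()/fetchMore() as the user scrolls, so memory use is bounded by how far the user has actually scrolled, not by the file size.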
Basically, my whole career is based on reading questions here, but now I'm stuck, since I don't even know how to ask this correctly.
I'm designing an SQLite database which is meant for the construction of data sheets out of existing data sheets. People like reusing stuff, and I want to manage this with a DB and an interface. A data sheet has reusable elements like pictures, text, formulas, sections, lists, front pages and variables. Sections can contain elements; this can be handled with recursive CTEs (thanks, "mu is too short", for that hint). Texts, formulas, lists etc. can contain variables. In the end I want to be able to manage variables, which must be unique per data sheet, and manage elements, which form an ordered list making up the data sheet. So, selecting a data sheet, I must know which elements it contains and which variables are used within those elements. I must also be able to create a new data sheet by reusing elements and/or creating new ones as desired.
So far I have come up with (see also the link to the screenshot at the bottom):
a list of variables, several of which can be contained in elements
a list of elements, which make up
a list of data sheets
Reading examples like
Store array in SQLite that is referenced in another table
How to store a list in a column of a database table
already give me helpful hints, such as that I need to create, for each data sheet, a new atomic list containing its elements and their positions, and the same for the variables referenced by each element. But the trouble starts when I want to keep it consistent, and with how to actually query it.
How do I connect the variables that are contained within elements with the elements that are contained within the data sheets? And how do I check, when one element or variable is modified, which data sheets need to be recompiled, since they use the same variables and/or elements?
The more I think about this, the more it sounds like I need to write my own search tree based on an object-oriented inheritance class structure and not use a database at all. Can somebody convince me that a database is the right tool for my problem?
I learned databases once, but that was quite some time ago, and to be honest the university lectures were not very good, since we never created a database of our own but only worked on existing ones.
To be more specific, my knowledge leads to the solution below, but I don't know how to correctly query for the list of data sheets affected when the content of one value changes, since the reference is a text field containing the name of a table:
screen shot since I'm a greenhorn
Update:
I think I have to search for unique connections, so it will end up as many-to-many tables. I'm not perfectly happy with that, but I think I can go on with it.
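As a sanity check of the many-to-many idea, here is a small in-memory C++ sketch (all the names are made up for illustration; in the database these maps would be the two link tables) of the "which data sheets are dirty when a variable changes" query: follow element-to-variable links, then sheet-to-element links.

```cpp
#include <cassert>
#include <map>
#include <set>
#include <string>
#include <vector>

// In-memory picture of the two many-to-many link tables:
// data sheet -> ordered elements, element -> variables.
using Links = std::map<std::string, std::vector<std::string>>;

// Which data sheets must be recompiled when `variable` changes?
std::set<std::string> sheetsUsingVariable(const Links& sheetElements,
                                          const Links& elementVariables,
                                          const std::string& variable) {
    std::set<std::string> dirty;
    for (const auto& sheet : sheetElements) {
        for (const std::string& element : sheet.second) {
            auto it = elementVariables.find(element);
            if (it == elementVariables.end()) continue;
            for (const std::string& v : it->second)
                if (v == variable) dirty.insert(sheet.first);
        }
    }
    return dirty;
}
```

In SQL this becomes a join across the two link tables, which is exactly the query a relational database is built to answer, so a database is a good fit here.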
Still a greenhorn: how do you guys get correct syntax highlighting for SQL?
I have had the displeasure of being saddled with a textbook that isn't written very well. As it stands, I went from enjoying C++ to feeling physically ill just thinking about it. However, I refuse to quit the class. So, the long and short of it is that I have a lab that asks the following:
Write a program that contains two arrays called actors and roles, each of size N. For each i, actors[i] is the name of an actor and roles[i] is a multiset of strings that contains the names of the movies that the actor has appeared in. The program reads the initial information for these arrays from files in a format that you design. Once the program is running, the user can type in the name of an actor and receive a list of all the movies for that actor. Or the user may type the name of a movie and receive a list of all the actors in that movie.
Now, I don't want the answer. I just need to know what direction to start heading to. I feel pretty comfortable with standard arrays, but the way multisets are described in this textbook confuses me to no end. Any assistance (without just giving me the answer) would be appreciated.
The way it's done is to have a third, auxiliary multiset linking actors to films.
This third set only needs to contain pairs of unique integers. Say the user picks the actor 'wayne': the first step is to form an auxiliary subset of (actor_id, movie_id) pairs, where each actor has a unique integer id and each movie has a unique integer id; then iterate through this subset to obtain all the movies as values for those keys.
Going the other way, if the user picks the film 'rawhide', again form a subset of integer pairs and iterate through it to find all the actors as values for those keys.
Look up 'many-to-many relation' for further info.
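A minimal sketch of that auxiliary set, assuming integer ids have already been assigned to actors and movies (the function names are illustrative, not part of the assignment):

```cpp
#include <cassert>
#include <set>
#include <utility>
#include <vector>

// The auxiliary set holds (actor_id, movie_id) pairs; querying in
// either direction is a scan over the pairs.
using Link = std::pair<int, int>;

// All movie ids for a given actor id.
std::vector<int> moviesOf(const std::set<Link>& links, int actorId) {
    std::vector<int> result;
    for (const Link& l : links)
        if (l.first == actorId) result.push_back(l.second);
    return result;
}

// All actor ids for a given movie id.
std::vector<int> actorsIn(const std::set<Link>& links, int movieId) {
    std::vector<int> result;
    for (const Link& l : links)
        if (l.second == movieId) result.push_back(l.first);
    return result;
}
```

Mapping user-typed names to ids (and back) is then a separate lookup against the actors and roles arrays.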
I have a problem I'm trying to solve, but I'm at a standstill because I'm still in the process of learning Qt, which in turn causes doubts about what the 'Qt' way of solving the problem is, while remaining efficient in terms of time complexity. I read a file line by line (line counts ranging from 10 to 2,000,000). At the moment my approach is to dump every line into a QVector.
QVector<QString> lines;
lines.append("id,name,type");
lines.append("1,James,A");
lines.append("2,Mark,B");
lines.append("3,Ryan,A");
Assuming the above structure, I would like to provide the user with three views that present the data based on the type field. The data is comma-delimited in its original form. My question is: what's the most elegant, and possibly most efficient, way to achieve this?
Note: for visual aid, the end result roughly emulates Microsoft Access, so there will be a list of tables on the left side. In my case these table names will be the values of the grouping field (A, B), and when I switch between those two list items the central view (a table) will be refilled with that group's data.
Should I split the data into some number of separate structures? Or would that cause unnecessary overhead?
Would really appreciate any help
In the end, you'll want to have some sort of a data model that implements QAbstractItemModel that exposes the data, and one or more views connected to it to display it.
If the data doesn't have to be editable, you could implement a custom table model derived from QAbstractTableModel that maps the file in memory (using QFile::map), and incrementally parses it on the fly (implement canFetchMore and fetchMore).
If the data is to be editable, you might be best off throwing it all into a temporary sqlite table as you parse the file, attaching a QSqlTableModel to it, and attaching some views to it.
When the user wants to save the changes, you simply iterate over the model and dump it out to a text file.
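Independent of the model choice, the grouping step itself can be sketched in plain C++ like this, assuming the header row has been skipped and the third comma-separated column is the grouping field (the column layout is taken from the example in the question):

```cpp
#include <cassert>
#include <map>
#include <sstream>
#include <string>
#include <vector>

// Group comma-delimited rows by their third ("type") column, so each
// map entry can back one table view (A, B, ...).
std::map<std::string, std::vector<std::string>>
groupByType(const std::vector<std::string>& lines) {
    std::map<std::string, std::vector<std::string>> groups;
    for (const std::string& line : lines) {
        std::istringstream fields(line);
        std::string id, name, type;
        std::getline(fields, id, ',');
        std::getline(fields, name, ',');
        std::getline(fields, type, ',');
        groups[type].push_back(line);
    }
    return groups;
}
```

The map keys give you the left-hand table list for free, and switching the central view is just pointing it at a different vector; with a QSortFilterProxyModel you would instead keep one flat model and filter on the type column.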