I am struggling to understand what is missing from my application. Sorry if this seems like a silly question; I am sure it is something quite simple that I am just not seeing.
I created an API using Django REST Framework on my machine and uploaded it to production, but the contents of my database didn't come through.
As you can see in the picture, the product list appears empty,
but on my machine it actually contains data.
It would be redundant for both environments, local and production, to share the same database. That is why, by default, they have separate database files/services that have to be populated independently.
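If you do want to seed production with your local data, Django's dumpdata and loaddata management commands will serialize it to a fixture file you can load on the other side. A minimal sketch (the app and file names are placeholders):

    # on your local machine: dump your app's tables to a JSON fixture
    python manage.py dumpdata myapp --indent 2 > myapp_fixture.json

    # copy the file to the production server, then load it there
    python manage.py loaddata myapp_fixture.json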
I am learning Django, so please correct any issues with my way of thinking here.
To start, I am using the Django REST framework, and I'm not interested in just creating a list of items, which is all I can find examples of anywhere on the Internet...
I will have some fairly large functional classes that POST and GET data to and from a pipeline server (TFS, to be specific), and obviously I'll need to put this logic somewhere within my app; I'm not just displaying things from my database here.
So, my question is: where does this go within my app? My understanding is that the "Django way" is to keep models thick... does that mean that all the logic for making requests to and interacting with the pipeline server should end up in my models.py?
It's kind of frustrating that everyone else seems to just be listing furniture or some other sales item from their database, which doesn't really help me very much... I need to know what a very large web app looks like (one that has a lot of logic, not just a stupid list of red and blue chairs).
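For concreteness, here is the kind of logic I mean; I'm not sure whether it belongs in models.py or in its own module (all names and endpoints below are made up, not the real TFS API):

    # services.py -- hypothetical home for the pipeline logic
    import requests

    PIPELINE_BASE_URL = "https://tfs.example.com/pipeline"  # made-up URL

    class PipelineClient:
        """Wraps the HTTP calls to the pipeline server."""

        def __init__(self, session=None):
            self.session = session or requests.Session()

        def get_job_status(self, job_id):
            # GET the status of one job from the pipeline server
            response = self.session.get(f"{PIPELINE_BASE_URL}/jobs/{job_id}")
            response.raise_for_status()
            return response.json()

        def submit_job(self, payload):
            # POST a new job to the pipeline server
            response = self.session.post(f"{PIPELINE_BASE_URL}/jobs", json=payload)
            response.raise_for_status()
            return response.json()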
I'll provide more background information first. The question is asked again in the last bullet of the "My Thoughts & Questions" section.
Background
I am working on a legacy system that looks kind of like a batch processing system: a job is passed along a series of worker programs, each of which works on part of the whole job until it is finished. The job status/management information is generated by each worker program along the way and written into local text files with names like "status_info.txt" and "mgmt_info.txt".
This processing system does NOT use any databases, just plain text files on the local file system. Changing the code to use a database is also fairly expensive, which I want to avoid.
I am trying to add a GUI to this system for primarily two purposes. First, let the users view (a read operation) the job status and management information so they can get a big picture of what's going on and whether there are errors in any of the steps. Second, allow the users to redo one or more steps by changing (a write operation) the job status and management information.
My Current Solution
I am thinking of using Django to develop the GUI because:
Django is fast to develop with, and it is web-based, so almost no installation is required;
I want to enable remote monitoring of the system, so a web-based, in-browser GUI makes more sense;
I have used Django before, so I have some experience with it.
However, I see that Django mostly works with a real database: SQLite, MySQL, PostgreSQL, etc. The user-defined models are mapped to tables in these databases by Django automatically. The legacy system, however, only produces text files.
My Thoughts & Questions
Fortunately, I noticed that the text files are all in one of two formats:
Multiple lines of strings;
Multiple lines of key-value pairs.
Both formats look easy to map to a database table design. For example, "multiple lines of strings" can be modeled as a table with a single text column, while "multiple lines of key-value pairs" becomes a two-column table.
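For concreteness, a sketch of how I imagine the two formats mapping onto Django models (model and field names are just placeholders):

    # models.py -- placeholder models for the two file formats
    from django.db import models

    class TextLine(models.Model):
        # one row per line of a "multiple lines of strings" file
        content = models.TextField()

    class KeyValuePair(models.Model):
        # one row per line of a "multiple lines of key-value pairs" file
        key = models.CharField(max_length=255)
        value = models.TextField()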
Therefore, my question is: can I build my models on top of local text files instead of a real database, by overriding whatever code in Django acts as the interface between the core framework and the database, so that these text files play the role of a "database" to Django and the read/write operations happen against these files? I've searched the Internet and Stack Overflow but wasn't lucky enough to find anything. I would appreciate any helpful links.
What Not to Do
If you are going to reproduce an RDBMS using files, you are in for a lot, and I mean a lot, of grief and hard work. Even the simplest RDBMS, like SQLite, has thousands of man-hours of work invested in it. If you were to bring your files into Django or any other framework, you would need to write a custom database backend for them.
What To Do
Create Django models backed by an RDBMS and import the files into it. Alternatively, since this data appears to be mostly key-value pairs, you might be able to use MongoDB or Redis.
You can use inotify to monitor the file system and detect when a new file has been created by the batch processing system. When that happens, you can invoke a Django management command to process that file and import its data into the database.
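A rough sketch of that wiring, using the watchdog library (a cross-platform wrapper around inotify and its cousins); the watch directory, the key=value file format, and the KeyValuePair model from the question are all assumptions:

    # watch_jobs.py -- run alongside the batch system (pip install watchdog)
    import subprocess
    from watchdog.events import FileSystemEventHandler
    from watchdog.observers import Observer

    class NewFileHandler(FileSystemEventHandler):
        def on_created(self, event):
            if not event.is_directory:
                # hand the new file off to a Django management command
                subprocess.run(["python", "manage.py", "import_jobfile", event.src_path])

    observer = Observer()
    observer.schedule(NewFileHandler(), "/var/batch/output", recursive=False)
    observer.start()
    observer.join()

And the management command it calls could look like this:

    # myapp/management/commands/import_jobfile.py
    from django.core.management.base import BaseCommand
    from myapp.models import KeyValuePair

    class Command(BaseCommand):
        help = "Import one key-value status file into the database"

        def add_arguments(self, parser):
            parser.add_argument("path")

        def handle(self, *args, **options):
            with open(options["path"]) as f:
                for line in f:
                    key, _, value = line.partition("=")  # assumes key=value lines
                    KeyValuePair.objects.create(key=key.strip(), value=value.strip())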
The rest of it is a straightforward Django app.
I am currently trying to figure out the best practice for designing my web services between a Django-administered database (+ images) and a mobile app. My main concern is how to separate a bulk update (sending all the data in the database and all the files on the server) from a lighter, smaller update with only the new and/or modified objects (images or data).
I have had access to a working code base that uses a cron job and a state flag on each data field (new, modified, up to date) to generate either a reference data file or an update file. I find it very redundant and somewhat inelegant, in contradiction with the DRY spirit of Django (there are tons of lines of code, making it nearly unmaintainable).
I find it very surprising that this aspect is almost undocumented, since web traffic is a crucial matter in mobile development. Fetching all the served data every time quickly becomes unsustainable as the database grows.
I would be very grateful for any lead or advice you could give me :-) Thanks in advance!
Just have a last_modified DateTimeField in your table, and in your user's profile a last_synchronized DateTimeField. When the mobile app wants to synchronize, send the data which was modified after the last synchronization run, and update the last_synchronized field in the user's profile.
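A minimal sketch of that scheme (model and field names assumed):

    # models.py
    from django.conf import settings
    from django.db import models

    class Item(models.Model):
        data = models.TextField()
        last_modified = models.DateTimeField(auto_now=True)  # touched on every save

    class Profile(models.Model):
        user = models.OneToOneField(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
        last_synchronized = models.DateTimeField(null=True)

    # views.py -- the sync endpoint, in outline
    from django.http import JsonResponse
    from django.utils import timezone

    def synchronize(request):
        profile = request.user.profile
        now = timezone.now()  # capture before querying so nothing slips in between
        qs = Item.objects.all()
        if profile.last_synchronized is not None:
            qs = qs.filter(last_modified__gt=profile.last_synchronized)
        profile.last_synchronized = now
        profile.save()
        return JsonResponse({"items": list(qs.values())})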
I am relatively new to Django and this is a more general 'concept' question.
For a client I need to construct an expansive database holding data returned from a series of questionnaires as well as some basic biological data. The idea is to move away from the traditional tools (i.e. Microsoft Access) and manage the data in a MySQL database using a basic CRUD interface. Initially the project doesn't need to live on the web, but the next phase will be to have a centralized db with a login and admin page.
I have started building the db with Django models, which is great, and I want to use the Django admin for the management of the data.
My question is: is this a good use of Django? Is there anything I should consider before relying on Django for the whole process? And is it advisable to use the Django runserver for db admin on a client's local machine (before we get to the web phase)?
Any advice would be much appreciated.
Actually, your description sounds exactly like the sort of thing for which Django is an ideal solution. It sounds more complex and customized than a CMS, and if it's as straightforward as your description, then the ORM is definitely a good tool for this. Then again, this sounds exactly like an appserver-ready problem, so Rails, Express for Node.js, or even ChicagoBoss (if you're brave) would be good platforms for this kind of application.
And sure, Django is solid enough that you can run it with the development server for local clients before you go whole-hog and run the thing on the web. For the web phase, though, I recommend Apache/mod_wsgi, and if you're going to be fault-tolerant there are diamond architectures (one front-end proxy with monitoring failover, two or more appserver machines, one database with a hot spare) and more complex architectural layouts (see: sharding) you can approach later.
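A minimal mod_wsgi vhost along those lines might look like this (paths and names are placeholders, not a definitive config):

    # Apache vhost sketch for mod_wsgi; adjust paths to your project
    <VirtualHost *:80>
        ServerName example.com
        WSGIDaemonProcess myproject python-path=/srv/myproject
        WSGIProcessGroup myproject
        WSGIScriptAlias / /srv/myproject/myproject/wsgi.py

        <Directory /srv/myproject/myproject>
            <Files wsgi.py>
                Require all granted
            </Files>
        </Directory>
    </VirtualHost>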
If you're going to run it in a client's local setting and you're not running Windows, I recommend looking into the screen program. It will allow you to detach the running job into the background while keeping its diagnostics accessible on an ongoing basis.
I'm working on a Django website that needs to track the popularity of items within a given date/time range. I'll need the ability to have a most-viewed today, this week, all time, etc...
There is a "django-popularity" app on GitHub that looks promising, but it only works with MySQL (I'm using PostgreSQL).
My initial thought is to create a generic ViewCounter model that logs views for all tracked objects, and then run a cron job that crunches those numbers into the relevant time-based statistics for each item.
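Something like this is what I have in mind for the logging side (a sketch; the field names are placeholders):

    # models.py -- generic view logging via the contenttypes framework
    from django.contrib.contenttypes.fields import GenericForeignKey
    from django.contrib.contenttypes.models import ContentType
    from django.db import models

    class ViewCounter(models.Model):
        content_type = models.ForeignKey(ContentType, on_delete=models.CASCADE)
        object_id = models.PositiveIntegerField()
        content_object = GenericForeignKey("content_type", "object_id")
        viewed_at = models.DateTimeField(auto_now_add=True)

        # the cron would then aggregate per window, e.g.:
        # ViewCounter.objects.filter(viewed_at__gte=week_ago).count()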
Looking forward to hearing your ideas.
Did you try django-popularity with Postgres? The GitHub page just says that the developer has not tested it with anything other than MySQL.
The app has only been tested with MySQL, but it should fully work on Postgres with a few adjustments. If you do manage to get it to work, please inform me. (I'm the developer.) I would love to be able to tell people this product is usable with Postgres as well.
Moreover, all the functionality relying on raw SQL checks whether there is actually a MySQL database in use. If not, it should throw an assertion error.
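The guard is roughly of this shape (a sketch, not the exact code in the package):

    # sketch of the engine check around the raw-SQL paths
    from django.db import connection

    def assert_mysql():
        # the raw-SQL statistics queries are MySQL-specific
        assert connection.vendor == "mysql", "raw SQL paths require MySQL"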
Also, the generic view counter is already in my package (it's called ViewTracker, but hell). The cron job seems like too much of a hassle to me when we could use either SQL aggregation or Django's caching instead.