I have around 1M string elements against which I have to provide autocomplete. I certainly can't transfer the list to the client, so I have to do remote-source autocomplete.
Now, what is the standard way (data structure/algorithm) of implementing the server side of remote-source autocomplete? How should I handle the AJAX request sent by the client? Should I store the list in a database, or keep it in RAM in some specific data structure, etc.?
I feel that keeping it in a database would slow things down too much, but keeping it in RAM runs into the limited-RAM issue.
It is possible to get very acceptable response times for Autocomplete from a remote database. For example, geonames.org has about 6 million location names. Most people, me included, get sub-second response times to display the Autocomplete dropdown selection.
Here is a link to a tutorial on how to build the database, the server-side handler (in several languages), and the Autocomplete code.
http://www.jensbits.com/2010/03/29/jquery-ui-autocomplete-widget-with-php-and-mysql/
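For the server side of the AJAX request, a minimal sketch looks something like this, assuming Flask and a SQLite table names(name TEXT) with an index on name; both are hypothetical stand-ins for whatever stack you use:

```python
# Minimal sketch of a remote-source autocomplete endpoint.
# Flask and the SQLite table names(name TEXT) are hypothetical
# stand-ins for whatever server stack and storage you use.
import sqlite3

from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/autocomplete")
def autocomplete():
    term = request.args.get("term", "")
    conn = sqlite3.connect("names.db")
    # Left-anchored prefix match: the database can typically serve
    # this from an index on the name column, even with 1M+ rows.
    rows = conn.execute(
        "SELECT name FROM names WHERE name LIKE ? ORDER BY name LIMIT 10",
        (term + "%",),
    ).fetchall()
    conn.close()
    return jsonify([r[0] for r in rows])
```

The left-anchored pattern (term%) is what typically lets the database use the index, which keeps lookups fast at this scale.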
The project I'm working on logs data on distributed devices that needs to be joined in a single database on a remote server.
The logs cannot be streamed as they are recorded (the network may not be available, etc.), so they must be sent occasionally as bulky 0.5-1 GB text-based CSV files.
As far as I understand, this means having a web service receive the data in the form of POST requests is out of the question because of the file sizes.
So far I've come up with this approach: use some file transfer protocol (FTP or similar) to upload files from device to server. Devices would have to figure out a unique filename to do this with. Have the server periodically check for new files, process them by committing them to the database, and delete them afterwards, roughly like the sketch below.
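Something like this (just a sketch; the inbox path and the commit function are hypothetical placeholders):

```python
# Sketch of the server-side half: poll an inbox directory, commit
# each file to the database, then delete it. INBOX and commit_to_db
# are hypothetical placeholders.
import os
import time

INBOX = "/srv/uploads"

def commit_to_db(path):
    """Parse the CSV at `path` and insert its rows into the database."""
    ...  # stub: real parsing/inserting goes here

def ingest_loop(poll_seconds=60):
    while True:
        for name in os.listdir(INBOX):
            path = os.path.join(INBOX, name)
            commit_to_db(path)  # commit the rows first...
            os.remove(path)     # ...then delete the processed file
        time.sleep(poll_seconds)
```

A real version would also have to make sure it never processes a file that is still being uploaded (e.g. upload under a temporary name and rename when complete).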
It seems like a very naive way to go about it, but it is simple to implement.
However, I want to avoid any pitfalls before I implement any specifics. Is this approach scalable (more devices, larger files)? Implementation will either be done using a private/company-owned server or a cloud service (Azure, for instance) - will it work on different platforms?
You could actually do this through web/HTTP as well, after raising the POST limits in the web server (post_max_size and upload_max_filesize for PHP). This will allow devices to interact regardless of platform. Shouldn't be too hard to make a POST request from any device. A simple cURL request could get this job done.
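For instance, a device-side upload in Python might look like this (the URL and field name are hypothetical placeholders; a cURL call would do the same job):

```python
# Sketch: one device uploading a log file over HTTP POST.
# The URL and field name are hypothetical placeholders.
import requests

with open("log-2024-01-01.csv", "rb") as f:
    resp = requests.post(
        "https://example.com/upload",
        files={"logfile": f},
        timeout=300,  # large files need a generous timeout
    )
resp.raise_for_status()
```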
FTP is also possible. Or SCP, to make it safer.
Either way, I think this does need some application on the server to be able to fetch and manage these files using a database. Perhaps a small web application? ;)
As for the unique name, you could use a combination of the device's unique ID/name along with the current Unix time. You could even hash this (md5/sha1) afterwards if you like.
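A sketch of that naming scheme (the .csv suffix is just an assumption):

```python
# Sketch: derive a unique upload name from the device ID and the
# current Unix time, hashed as suggested above (the .csv suffix is
# an assumption).
import hashlib
import time

def unique_log_filename(device_id: str) -> str:
    stamp = f"{device_id}-{int(time.time())}"
    return hashlib.md5(stamp.encode()).hexdigest() + ".csv"
```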
I would like to download a course and work on it offline. How can I track my results?
I would like to record all my progress (slides that I viewed, quiz results, time spent on each piece of content, ...), for example by saving it to a file or a database, and then generate statements to send to an LRS when I'm online.
Could someone explain how I can do that?
With TinCan, statements (commonly including information about the student (actor), what they did, objectives, status, etc.) are posted to an endpoint. Depending on how the content is written, it may or may not fail over to some alternative. If it's a native application, I suspect you'll have limited ability to intercept these statements. If it's an HTML course, you may be able to locate where the content attempts to post these statements and redirect those to local storage or some other SQL/NoSQL option. Ultimately, it will depend on what content you're attempting to run and what kind of control you have over it. Based on what I know, the content itself would have to detect that it's offline and store the statements until it is back online. Similar to this post - How tin-can-api works offline?
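If you do get control over where the statements go, one way to handle the offline case is to queue them locally and flush them to the LRS later. A rough sketch, where the endpoint URL, credentials and table name are all hypothetical:

```python
# Sketch: queue TinCan/xAPI statements in a local SQLite table while
# offline, then flush them to the LRS. The endpoint URL, credentials
# and table name are hypothetical.
import json
import sqlite3

import requests

db = sqlite3.connect("statements.db")
db.execute("CREATE TABLE IF NOT EXISTS queue (body TEXT)")

def record(statement):
    """Store a statement locally instead of posting it right away."""
    db.execute("INSERT INTO queue (body) VALUES (?)", (json.dumps(statement),))
    db.commit()

def flush(lrs_url, auth):
    """Send every queued statement to the LRS; keep failures queued."""
    for rowid, body in db.execute("SELECT rowid, body FROM queue").fetchall():
        resp = requests.post(
            lrs_url + "/statements",
            data=body,
            auth=auth,
            headers={
                "Content-Type": "application/json",
                "X-Experience-API-Version": "1.0.3",
            },
        )
        if resp.ok:
            db.execute("DELETE FROM queue WHERE rowid = ?", (rowid,))
            db.commit()
```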
SCORM ultimately doesn't work like TinCan. The LMS exposes a JavaScript API, and the HTML-based content locates it in the DOM using JavaScript. The content then makes get and set calls against it. The LMS is responsible for committing this information to a server, or persisting the data in some other fashion. This doesn't stop content developers from creating new and alternative ways to persist data if the LMS is not present. This type of content is probably easier to intercept, since you can be the LMS in this situation and expose that API for the content to use. In an offline situation you'd just have to manage the student attempts and then, once online, sync them with your server.
I am currently trying to figure out the best practice for designing the web services between a Django-administered database (plus images) and a mobile app. My main concern is how to separate a bulk update (sending every record in the database and all the files on the server) from a lighter, smaller update with only the new and/or modified objects (images or data).
I have had access to a working code base using a cron job and a state for each data field (new, modified, up to date) to generate either a reference data file or an update file. I find it very redundant and somewhat inelegant, in contradiction with the DRY spirit of Django (there are tons of lines of code, making it nearly unmaintainable).
I find it very surprising that this aspect is almost undocumented, since web traffic is a crucial matter in mobile development. Fetching all the data every time quickly becomes unsustainable as the database grows.
I would be very grateful for any lead or advice you could give me :-) Thanks in advance!
Just have a last_modified DateTimeField in your table, and a last_synchronized DateTimeField in your user's profile. When the mobile app wants to synchronize, send the data which was modified after the last synchronization run, and update the last_synchronized field in the user's profile.
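A sketch of what that looks like (Item, Profile and the payload shape are hypothetical; only the two DateTimeFields come from the suggestion above):

```python
# Sketch of the two fields plus a sync view. Item, Profile and the
# payload shape are hypothetical; only the two DateTimeFields come
# from the suggestion above.
from django.conf import settings
from django.db import models
from django.http import JsonResponse
from django.utils import timezone

class Item(models.Model):
    name = models.CharField(max_length=100)
    last_modified = models.DateTimeField(auto_now=True)  # updated on every save

class Profile(models.Model):
    user = models.OneToOneField(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
    last_synchronized = models.DateTimeField(null=True, blank=True)

def sync(request):
    profile = request.user.profile
    items = Item.objects.all()
    if profile.last_synchronized is not None:
        # Light update: only objects modified since the last sync.
        items = items.filter(last_modified__gt=profile.last_synchronized)
    payload = list(items.values("id", "name", "last_modified"))
    profile.last_synchronized = timezone.now()
    profile.save(update_fields=["last_synchronized"])
    return JsonResponse(payload, safe=False)
```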
I want to trace users' actions on my website by logging their requests to the database as plain text in Django.
I am considering writing a custom decorator and applying it to every view that I want to trace.
However, I have some trouble with my design.
First of all, is such a logging mechanism reasonable, or will it cause performance problems because the log table will grow rapidly?
Secondly, how should my log table be designed?
I want to keep the search keywords if the user calls the search view, or the item's ID if the user calls the item detail view.
Besides, users' IP addresses should be kept, but how can I separate users who connect via a single IP address, as in many companies?
I am glad to explain in detail if you think my question is unclear.
Thanks
I wouldn't do that. If this is a production service then you've got a proper web server running in front of it, right? Apache, or nginx, or something. That can do logging, and can do it well, and can write in a format that won't bloat your database, and there's a wealth of analytical tools for log analysis.
You are going to have to duplicate a lot of that functionality in your decorator, such as when you want to switch it on or off, or change the log level. The only thing you'll get by doing it all in Django is the possibility of ultra-fine control, such as only logging views of blog posts with ID numbers greater than X or something. But generally you'd not want that level of detail; you'd log everything and do any stripping at the analysis phase. You've not given any reason so far why you need to do it from Django.
If you really want it in an RDBMS, reading an Apache log file into Postgres or MySQL or one of those expensive ones is fairly trivial.
One thing you should keep in mind is that SQL databases don't offer very good write performance (compared with reads), so if you are experiencing heavy load you should probably look at an in-memory solution (e.g. a key-value store like Redis).
But keep in mind, especially if you use a non-SQL solution, that you should know what you want to do with the collected data (just display something like a 'log', or do more in-depth searching/querying on it).
If you want to identify different users behind the same IP address, you should probably look at a cookie-based solution (if you are using Django's session framework, sessions are by default identified through a cookie, so you could simply use sessions). Another option is doing the logging 'asynchronously' via JavaScript after the page has loaded in the browser (which could give you more ways of identifying the user and avoids extra load when generating the page).
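Putting those pieces together, a decorator along these lines would log to Redis keyed by the Django session, so users behind one IP stay distinguishable (a sketch; the list name "action_log" is an arbitrary choice):

```python
# Sketch of a view decorator that logs to Redis, keyed by the Django
# session so users behind one IP stay distinguishable. The list name
# "action_log" is an arbitrary choice.
import json
import time
from functools import wraps

import redis

r = redis.Redis()

def log_action(view):
    @wraps(view)
    def wrapped(request, *args, **kwargs):
        if not request.session.session_key:
            request.session.save()  # make sure a session cookie exists
        r.rpush("action_log", json.dumps({
            "session": request.session.session_key,
            "ip": request.META.get("REMOTE_ADDR"),
            "path": request.get_full_path(),
            "time": time.time(),
        }))
        return view(request, *args, **kwargs)
    return wrapped
```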
This'll be my first question on this platform. I've done lots of development using Flex, WebORB and ASP.NET. We have solved concurrency problems with messaging (pessimistic concurrency control). This works pretty well, but it also makes the whole application dependent on the messaging. No messaging, no concurrency control.
I know that ASP.NET has version control in DataSets, but how would you use that in an RIA? It seems hard to store each dataset in the client's session... So, if the client needed all products, I would have to store the dataset in the client's session. When the client changed something on a product and saved it, I could then update the dataset (stored in the session) and try to save it...
That seems like a lot of work and a lot of memory (because those products will be kept in the memory of the client, the dataset also needs to be kept in the server-side session).
I think the easiest way would be to give every DTO a version number. When the client tries to save a DTO, I could compare its version number with the one in the database.
Lieven Cardoen
This is something I've done before - as the original data was coming from a SQL Server database, we just used a rowversion-typed column in each DTO to determine whether it had changed while the user was working on it.
At that point you can either barf on the error or try to figure out a way to merge the changes, but at least you can tell that it's changed underneath you :)
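The same idea works with a plain version-number column if you're not on SQL Server. A minimal sketch (the products table and the exception are hypothetical); the UPDATE only succeeds if the row still carries the version the client originally read:

```python
# Sketch of the version-number check on save; the products table and
# ConcurrencyError are hypothetical. The UPDATE only succeeds if the
# row still carries the version the client originally read.
import sqlite3

class ConcurrencyError(Exception):
    pass

def save_product(conn, product_id, name, version):
    cur = conn.execute(
        "UPDATE products SET name = ?, version = version + 1 "
        "WHERE id = ? AND version = ?",
        (name, product_id, version),
    )
    conn.commit()
    if cur.rowcount == 0:
        # Someone else saved a newer version first: barf or merge.
        raise ConcurrencyError("product changed underneath you")
```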