I am trying to write a web application that displays the content of a database as a website, e.g. in the form of a table, and lets the user update the table entries, which should automatically be reflected in the database, so that on page reload, the table looks exactly as the user left it.
Since my web development skills are fairly outdated, I wanted to take this as an excuse to try some new stuff. I know my way around SQL and Python quite well, so I thought Django would be a good choice. I don't have a lot of experience with JavaScript, however. I have already worked through the Django tutorial, which covers classic HTML forms where you enter a bunch of data and then hit "submit" to push to the database.
What I would really prefer, though, is to have my whole table freely editable, with any change saved to the database immediately (e.g. whenever I click a checkbox or "focus out" of a text box). As a second option, I thought about having a single "save" button for the whole page (which may easily be several screens in size).
Now, for the first option, I assume I will likely have to use JavaScript and Ajax techniques, which I am not comfortable with yet, so writing larger pieces of JavaScript code is something I am not very keen on at the moment.
For the second option, I would probably have my whole table be a huge, single form with a single submit button. I am a bit wary about this as it does not seem very robust to me.
So here is what my question boils down to: are there ways to accomplish what I want in a robust and easy way, without having to reinvent the wheel? From my understanding, Django does not cover the final rendering in HTML; it only provides the data, so I assume I need some third-party technology to handle that part?
Yes, for your second idea, submitting the whole table at once, Django has a thing called a ModelFormSet, where you define a web form which is repeated for each row in the table (or for the set of records you select). There are a good number of basic things you'll need to understand to do it, e.g. how to create a Django view, how to set up a URL, how to write templates... but you say you want to learn Django, so it's a good exercise. The Django documentation has a good tutorial that leads you through development of a basic working app, and from there it's not much further to do what you're seeking.
Here's the part of the Django documentation that discusses ModelFormSets:
https://docs.djangoproject.com/en/3.0/topics/forms/modelforms/#model-formsets
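As a rough illustration, a minimal view using one might look like the sketch below; the Task model with "title" and "done" fields and the "task_list" URL name are made up for the example, not something from your project:

    # views.py - a minimal sketch; "Task", "title" and "done" are
    # hypothetical names used only for illustration.
    from django.forms import modelformset_factory
    from django.shortcuts import redirect, render

    from .models import Task

    TaskFormSet = modelformset_factory(Task, fields=("title", "done"), extra=0)

    def task_list(request):
        if request.method == "POST":
            formset = TaskFormSet(request.POST)
            if formset.is_valid():
                formset.save()  # only rows whose forms changed are written
                return redirect("task_list")
        else:
            formset = TaskFormSet()
        return render(request, "task_list.html", {"formset": formset})

In the template you would render {{ formset.management_form }} plus each form inside a single <form> element with one submit button.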
BTW, Django detects which rows have changed so it won't write every row every time, even though you've submitted them all.
Trying to build a simple like system in MODX (which uses PHP snippets of code), I just need a button that logged-in users can press which adds a 'like' to a resource.
Would it be best to update a custom table or a TV? My thought is that if it is a template variable, I can use getResources to sort by the number of likes.
Any thoughts on the best way to approach or build this would help. My PHP knowledge is limited.
It depends how you are going to use it afterwards, and whether you are storing more data than just a 'like' count. TVs are expensive on resources (even more so if you are going to whip through the entire resource set with getResources), so if you are going to do a lot of processing after the fact, I would either look at a custom table or explore using property sets on your pages (I think it should be pretty easy to write a plugin that will update a page property).
I'd definitely go for a custom table.
While you could simply increment a numeric TV to count the number of likes, you will run into the situation where anyone can keep on liking a resource without limit; while you didn't specify the exact concept, that can hardly be desired. Using a custom table, you could throw in a relational alias to the ID of the user who liked the resource, add a timestamp so you know when it happened, and let your fantasy run wild on the additional features that are now open to you.
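To make that concrete, here is a minimal sketch of such a table, shown with SQLite via Python purely for illustration (a MODX site would define this through xPDO against MySQL, and all the names here are made up):

    import sqlite3

    conn = sqlite3.connect("likes.db")
    conn.execute("""
        CREATE TABLE IF NOT EXISTS resource_likes (
            id          INTEGER PRIMARY KEY,
            resource_id INTEGER NOT NULL,
            user_id     INTEGER NOT NULL,
            created_at  TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
            UNIQUE (resource_id, user_id)  -- one like per user per resource
        )
    """)

    def add_like(resource_id, user_id):
        # The UNIQUE constraint is what prevents unlimited re-liking.
        try:
            conn.execute(
                "INSERT INTO resource_likes (resource_id, user_id) VALUES (?, ?)",
                (resource_id, user_id),
            )
            conn.commit()
            return True
        except sqlite3.IntegrityError:
            return False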
While not a hard requirement for custom tables, you will probably want to take the time to learn xPDO, which is the database abstraction layer MODX is based on. There's a great tutorial on the RTFM which walks you through it.
I have a request to return a list of the most popular search terms used when searching a Sitecore site.
I have no idea how to implement this sort of function using Sitecore, or whether Sitecore already has this kind of functionality. I can't find any documentation detailing this.
I am currently using search based on the LuceneSearch module (http://trac.sitecore.net/LuceneSearch), but altered to bind to a ListView for easy pagination.
At the moment I am probably just going to build a standalone function/class to update an XML file or something, unless someone is able to point me in the correct direction...?
I would frankly use OMS for that - this is what it is designed to do. No need for a separate database. Just register the search events via the API with OMS. There is an out-of-the-box Search report. It may require some tweaking, but this seems to be the most out-of-the-box solution.
Take a look here for more details.
I don't know of any standard functionality in Sitecore that would help you achieve this, so you will probably have to approach this from the ground up - unless someone else in here is able to point to a package deal somewhere :-)
Solving this really breaks down into two tasks (see the sketch after this list):
1) Collecting search term information. Whenever a user enters a search term in the search box (which I assume you have), normalise it and store it in a SQL table (essentially a [term] [count] type table). Update the counter on terms you already store.
By normalising, I mean lowercasing it and so on - possibly breaking each search term (word) down and storing the words one by one, if that is what your solution calls for (probably not the route I would go).
2) Retrieving information from the table in real time, based on what the user is typing in the search box. I assume you want some sort of Amazon-like autocompletion, also found on almost all major search engines nowadays. I normally implement these in a web service that then gets called via Ajax, jQuery or whatever rich client implementation you prefer.
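A minimal sketch of task 1 (plus the query a report or autocomplete service would run), using SQLite via Python purely for illustration; a Sitecore site would normally do this in C# against SQL Server, and the table name here is made up:

    import sqlite3

    conn = sqlite3.connect("search_terms.db")
    conn.execute("""
        CREATE TABLE IF NOT EXISTS search_terms (
            term  TEXT PRIMARY KEY,
            count INTEGER NOT NULL DEFAULT 0
        )
    """)

    def record_search(raw_term):
        # Normalise: lowercase and collapse whitespace.
        term = " ".join(raw_term.lower().split())
        conn.execute(
            "INSERT INTO search_terms (term, count) VALUES (?, 1) "
            "ON CONFLICT(term) DO UPDATE SET count = count + 1",
            (term,),
        )
        conn.commit()

    def top_terms(limit=10):
        # Most popular terms first - this feeds the report or autocomplete.
        return conn.execute(
            "SELECT term, count FROM search_terms ORDER BY count DESC LIMIT ?",
            (limit,),
        ).fetchall()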
As for updating an XML file, I think locking issues and performance would kill that solution; though it could perhaps be made to work on a very small scale.
Sorry that I can't be more specific in my response, but your question is very open-ended.
Very interesting question. One thing you could do is have another database to store these search queries. An insert into this DB would not be very difficult and would get around the issue of locking on an XML file. Maybe insert the search query into a DB table, then to get the top results just pull the top x rows ordered by that query field. As Mark Cassidy said before, maybe normalize the data before inserting it.
You could isolate this work on your search layout (or sublayout) so it runs on a specific part of the site, not on every page.
Sitecore has an out-of-the-box "site search" report in the Executive Insight Dashboard; this will give you an indication of which search terms are driving the most visits and, of course, engagement value.
You just need to configure it by registering a page event on the search page and passing the query; otherwise Sitecore wouldn't know which form field constitutes a search. See this post; it explains it in more detail. For more information you can download the analytics configuration reference document from SDN: http://sdn.sitecore.net/upload/sitecore6/65/engagement_analytics_configuration_reference_sc65-usletter.pdf
And don't forget that, for performance, Sitecore caches the reports at various levels, so during development it may be handy to know how to force a cache update; I talk about this in the following blog post:
http://andytsitecore.blogspot.co.uk/2013/10/sitecore-dms-and-analytics.html
I work for a university, and in the past year we finally broke away from our static HTML site of several thousand pages and moved to a Drupal site. This obviously entails massive amounts of data entry.
What if you're already using a CMS and are switching to another one that better suits your needs? How do you minimize the mountain of data entry during such a huge change? Are there tools built for this, or some best practices one should follow?
The Migrate module for Drupal would be a big help. The Economist.com data migration to Drupal will give you an overview of the process.
The video from the Migration: not just for the birds presentation at Drupalcon DC 2009 is probably somewhat out-of-date, but also gives a good introduction.
Expect to have to both pre-process and post-process your data manually, whatever happens. Accept early on that your data is likely to be in a worse state than you think it is: fields will be misused; record-to-record references (foreign keys) might not be implemented properly, or at all; content is likely to need weeding and occasionally to be just bad or incorrect.
Check your database encoding. Older databases won't be in Unicode encodings, and get grumpy if you have to export data dumps and import them elsewhere. Even then, assume that there'll be some wacky nonprintable characters in your data: programs like Word seem to somehow inject them everywhere, and I've seen... codepoints... you people wouldn't believe. Consider sweeping your data before you even start (or even sweeping a database dump) for these characters. Decide whether or not to junk them or try to convert them in the case of e.g. Word "smart" punctuation characters.
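As a rough sketch of that kind of sweep (the replacement mapping here is illustrative, not exhaustive):

    import unicodedata

    # Illustrative mapping for Word "smart" punctuation; extend as needed.
    SMART = {
        "\u2018": "'", "\u2019": "'",   # curly single quotes
        "\u201c": '"', "\u201d": '"',   # curly double quotes
        "\u2013": "-", "\u2014": "-",   # en and em dashes
    }

    def sweep(text):
        text = "".join(SMART.get(ch, ch) for ch in text)
        # Drop control/format characters but keep ordinary whitespace.
        return "".join(
            ch for ch in text
            if ch in "\t\n\r" or not unicodedata.category(ch).startswith("C")
        )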
It's very difficult to create explicit data structures from implied ones. If your incoming data has a separate date field, you can map that to a date field; if it has a date as part of a big lump of HTML, even if that date is in a tag with an id attribute, simple scripting won't work. You could use offline scripting with BeautifulSoup or (if your HTML's a bit nicer) the faster lxml to pre-process your data set, extract those implicit fields, and save them in an explicit format. Consider creating an intermediate database where these revisions are going to go.
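For instance, a minimal sketch of that kind of pre-processing with BeautifulSoup, assuming a hypothetical page where the date sits in a span with an id inside the body HTML:

    # Hypothetical markup: the publication date is buried in the body HTML.
    from bs4 import BeautifulSoup

    html = '<div class="body"><span id="pubdate">2009-03-15</span><p>Article text.</p></div>'
    soup = BeautifulSoup(html, "html.parser")

    date_tag = soup.find("span", id="pubdate")
    pub_date = date_tag.get_text(strip=True) if date_tag else None
    if date_tag:
        date_tag.decompose()  # strip the date out of the body copy

    print(pub_date)         # "2009-03-15" -> store in an explicit date field
    print(soup.get_text())  # remaining body text, date removed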
The Migrate module is excellent, but to get really good data fidelity and play more clever tricks you might need to learn about its hook system (Drupal's terminology for functions following a particular naming scheme) and the basics of writing a module to put these hooks in (a module is broadly just a PHP file where all the functions begin with the same text, the name of the module file.)
All imported content should be flagged for at least a cursory check. You can do this by importing it with status=0 i.e. unpublished, and then create a view with the Views module to go through the content and open it in other tabs for checking. Views Bulk Operations lets you have a set of checkboxes alongside your view items, so you could approve many nodes at once.
Expect to run and re-run and re-run the import, fixing new things every time. Check ten, or twenty items, as early as possible. If there are any problems, check ten or twenty more. Fix and repeat the import.
Gauge how long a single import run is likely to take. Be pessimistic: we had an import we expected to take ten hours encounter exponential slowdown when we introduced the full data set; until we finally fixed some slow queries, it was projected to take two weeks.
If in doubt, or if you think the technical aspects of the above are just going to take more time than the work itself, then just hire temps to do the data. But you still need decent quality controls, as early as possible during their work. Drupal developers are also for hire: try your country's relevant IRC channel, or post a note in a relevant groups.drupal.org group. They're more expensive than temps but they usually write better PHP...! Consider hiring an agency too: that's a shameless plug, as I work for one, but sometimes it's best to get experts in for these specific jobs.
Really good imports are always hard, harder than you expect. Don't let it get you down!
Migrate + Table Wizard (and Schema + Views) is the way to go. With Table Wizard you can expose any table to Drupal and map fields accordingly using Migrate.
Look here for a detailed walkthrough:
http://www.lullabot.com/articles/drupal-data-imports-migrate-and-table-wizard
You'll want to have access to the existing data from Django. This helped me a lot when migrating: http://docs.djangoproject.com/en/1.2/howto/legacy-databases/ . With correct model definitions you'll have the full power of Django, including the admin. In fact, I'm using Django just as an admin backend for several legacy PHP projects - Django's admin can easily outdo a lot of custom hand-written admin scripts.
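The workflow in that howto boils down to letting Django introspect the existing schema with manage.py inspectdb, which emits unmanaged model stubs roughly like the sketch below (the table and field names mirror a typical Drupal node table, but are illustrative):

    # python manage.py inspectdb > myapp/models.py  produces stubs like:
    from django.db import models

    class Node(models.Model):
        nid = models.IntegerField(primary_key=True)
        title = models.CharField(max_length=255)
        created = models.IntegerField()  # Drupal stores a Unix timestamp

        class Meta:
            managed = False      # Django won't create or alter this table
            db_table = 'node'    # points at the existing legacy table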
Authorization should remain the same: users should be able to log in with their existing credentials. But it is hard to write a migration script for auth data, because the password hashing schemes may be different and there is no way to convert between them without knowing the plain passwords. Django provides a way to support different sources of auth, so you can write a Drupal auth backend: http://docs.djangoproject.com/en/1.2/topics/auth/#writing-an-authentication-backend
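A bare-bones custom backend following those docs might look like this sketch; check_drupal_password is a hypothetical stub (Drupal's hashing scheme differs by version), and note that recent Django passes request to authenticate, while 1.2 did not:

    from django.contrib.auth.models import User

    def check_drupal_password(username, password):
        # Hypothetical stub: verify against the legacy Drupal users table.
        raise NotImplementedError

    class DrupalBackend:
        def authenticate(self, request, username=None, password=None):
            if not check_drupal_password(username, password):
                return None
            # Mirror the Drupal account into Django on first login.
            user, _ = User.objects.get_or_create(username=username)
            return user

        def get_user(self, user_id):
            try:
                return User.objects.get(pk=user_id)
            except User.DoesNotExist:
                return None

You would then list the backend in AUTHENTICATION_BACKENDS in settings.py.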
There is no need to do a full rewrite. If some parts are working fine, they can still be powered by Drupal. New code can be written using Django with the same UI. Routing between the old and new parts can be performed by web server URL rewriting, and both the Django and Drupal parts can be powered by the same DB.
I am writing a website using Django. I need to push the website out as soon as possible. I don't need a lot of amazing things right now.
I am concerned about future development.
If I enable registration, I allow more content to be written by users; if I don't, then only the admins can publish content. The website isn't exactly a CMS.
This is a big problem, as I will continue to add new features and rewrite code (either by adapting third-party apps or by rewriting the app itself). So how would either path affect my database contents?
So the bottom line is: how do I ensure the safety of my data as development continues?
I hope someone can offer a little insights on this matter.
Thank you very much. It's hard to describe my concern, really.
Whatever functionality you add afterwards - new fields, etc. - you can still migrate your data to the "new" database.
It becomes more complicated with relationships, because you might have integrity problems. Say you have a Comment model, and say you don't enable registration, so all users can comment on certain posts. If you later decide to enable registration, and you decide that ALL comments have to be associated with a user, then you will have problems migrating your data, because you'll have lots of comments for which you'll have to make up a user, or which you'll just have to drop. Of course, in that case there would be work-arounds, but it is just to illustrate some of the problems you might encounter later.
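As a sketch of that Comment example (the model names are made up): making the user reference nullable from the start sidesteps the integrity problem, because pre-registration comments can simply keep a NULL user:

    from django.conf import settings
    from django.db import models

    class Post(models.Model):
        title = models.CharField(max_length=255)

    class Comment(models.Model):
        post = models.ForeignKey(Post, on_delete=models.CASCADE)
        # null=True lets anonymous (pre-registration) comments coexist
        # with user-owned comments added once registration is enabled.
        user = models.ForeignKey(
            settings.AUTH_USER_MODEL, null=True, blank=True,
            on_delete=models.SET_NULL,
        )
        body = models.TextField()
        created = models.DateTimeField(auto_now_add=True)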
Personally, I try to have a good data model with only the minimum necessary fields (more fields will come later, with new functionality). I especially try to avoid having to add new foreign keys to already-existing models. For example, it is fine to add a new model later with a foreign key to an existing model, but the opposite is more complicated.
Finally, I am not sure why you hesitate to enable registration. It is actually very simple to do (you can, for example, use django-registration; you would just have to write some urlconf and some templates, and that's all).
Hope this helps!
If you are afraid of data migration, just use South...
I am doing a web data classification task and was wondering whether I could get the coordinates of HTML elements as they would appear in a web browser, without taking into account any CSS or JavaScript referenced in the page.
My programming language is C++ and I need results for a couple million pages, so it has to be fast. I know there is a Microsoft COM component which renders the page in a web browser control and can then be queried for the positions of different HTML tags, but this is not suitable in my case, as it first renders the whole page, which takes up a lot of time.
As I found out, there are open-source layout engines (WebKit, Gecko) that could probably be used for this. But those are huge bodies of code, and I need someone to direct me to the right classes or modules to look into, or to any similar work someone has done before. Also, please let me know what you think is a good choice if I want to customize the existing code to use multiple threads to make it faster.
Thanks
Generally, you will find that different page rendering engines render the HTML in their own way, and the results will differ.
The thing is, if you commit to a concrete browser engine, you will have to somehow bring this engine into your project and use the engine's interface to retrieve those coordinates. A tough task, though, simply because you'll have to read a lot of documentation and crawl through thousands of files.
I think the right approach would be to post this task in a place specific to the rendering engine you've chosen (Gecko/WebKit/...).
If you prefer sticking to something MS-specific, I guess it's going to be easier, but I can't help you with the class names or code chunks you want to see. Probably somebody else could guide you in that case.