How to prevent Django pages from refreshing after submit?

I am using the Django template system. What I want is: when I submit a form or click a link, the page does not refresh but instead loads the data returned from the server. Is this possible?

I recommend a combination of jQuery (an easy, powerful, popular JavaScript library) and Dajax/Dajaxice (http://www.dajaxproject.com/). Dajax is very easy to set up and use, and so is jQuery. Dajax is strictly for AJAX communication through Django; jQuery is perfect for taking a simple site and making it more fluid, intuitive, and user-friendly.

You need JavaScript to do that. What you are looking for is called AJAX (Asynchronous JavaScript and XML). Essentially, it means you use JavaScript to send a request to the server as soon as the link/button is clicked. The server returns some data to your script, which can then be used to manipulate the HTML page, e.g. by inserting the response data into the DOM. Since you do everything with JavaScript, no reload of the whole page is required.
To start, read an AJAX tutorial. There are JavaScript libraries that make these things simpler for you (e.g. jQuery), but you really should understand how this stuff works first, since otherwise you might get into trouble while trying to debug it.
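To make the Django side concrete, here is a minimal sketch of a view that answers such a request with JSON instead of a full page (the view and field names are hypothetical); the client-side script then inserts the returned data into the DOM:

```python
# Hypothetical Django view: responds to an AJAX form submission with JSON
# rather than rendering a whole template.
from django.http import JsonResponse

def submit_comment(request):
    if request.method == "POST":
        text = request.POST.get("text", "")
        # ... validate and save the submitted data here ...
        return JsonResponse({"ok": True, "text": text})
    return JsonResponse({"ok": False, "error": "POST required"}, status=405)
```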

Related

How to load a Django template after enter, similar to Facebook?

What I have been trying to find, with no answer yet, is how I could have a user click a link to a template and then, instead of waiting for the whole page to load, let them view what has already loaded while the heavier content arrives, similar to what Facebook does when you first get to a page and see things loading.
There is not much more I can say, as the question is pretty self-explanatory. I have checked Google and Stack Overflow.
To let the user view the loaded page while the rest of it loads dynamically, you use a technology known as AJAX. It allows you to make asynchronous calls to the server, triggered by some JS event (like onscroll), and load the queried data without reloading the entire page.
AJAX in Django is pretty straightforward, though some knowledge of jQuery (or even plain JavaScript) will be required. You may also use the Python package django-dajax, which will make things easier. A server-side sketch follows below, and I think you will find the following links useful:
Tango with Django ajax guide (one of the best, but a bit tough)
django-dajax docs
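Here is that server-side sketch: a minimal Django view that an onscroll handler could call to page through the data, returning each chunk as JSON (the model, field, and view names are hypothetical):

```python
# Hypothetical Django view serving one "page" of items as JSON, so the
# client can fetch and append more content without a full page reload.
from django.core.paginator import Paginator
from django.http import JsonResponse

from myapp.models import Post  # assumed model

def more_posts(request):
    page_number = int(request.GET.get("page", 1))
    paginator = Paginator(Post.objects.order_by("-created"), 10)
    page = paginator.get_page(page_number)
    data = [{"title": p.title, "body": p.body} for p in page]
    return JsonResponse({"posts": data, "has_next": page.has_next()})
```

The client's onscroll handler requests page 2, 3, and so on, appending each response to the DOM.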
Hope this helps!

When to use Angular with Django?

I have a pretty basic question.
Consider a CRUD web application built on Django. You have templates that render data. Those templates might have forms where you submit data to the backend, and that might reload the page to display changes. Sometimes you can make those requests over AJAX, for example when you need to update data on the UI. You can also submit forms with AJAX and update the HTML with the response.
On the other hand you have single page applications. You serve a static file, and there is no reload of pages. You have data that comes from an API and populates some front-end template.
What are some guidelines for when to use what? Not in a mutually exclusive way, but within one Django project, what are some reasons/considerations for using the Django template/forms/AJAX approach versus using Angular?
Thank you.
Something to consider is how "interactive" you want the client-side to be.
I am in the process of converting an existing Django app to use Angular (and django-rest-framework). The app was highly interactive and relied on a lot of custom jQuery to get various widgets working just right. jQuery's constant looping through the DOM made it pretty slow. I am finding that using Angular instead of jQuery is much faster.
So if you have a lot of complexity in the front-end, I would recommend Angular.
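For a rough idea of what the API half of such a conversion looks like, here is a minimal django-rest-framework sketch (the model and field names are hypothetical); Angular then consumes these endpoints instead of server-rendered templates:

```python
# Hypothetical DRF serializer + viewset exposing CRUD endpoints for a model.
from rest_framework import serializers, viewsets

from myapp.models import Widget  # assumed model

class WidgetSerializer(serializers.ModelSerializer):
    class Meta:
        model = Widget
        fields = ["id", "name", "state"]

class WidgetViewSet(viewsets.ModelViewSet):
    # Provides list/retrieve/create/update/delete for the Angular front-end.
    queryset = Widget.objects.all()
    serializer_class = WidgetSerializer
```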

Places where a website stores its data

I have just started with Python Web scraping through Requests. This could be a broad question, I will try to make it as brief as possible.
I have come across situations where an entire page's source can be downloaded with r.content (where r is the response object of a Requests get call).
Sometimes part of the data is stored in JSON format, in files that can be found by closely observing the GET and POST calls made.
However, I have even found websites where the entire content is in the DOM, yet part of it is neither in the page source nor in the JSON files.
I am wondering: in how many such places can a website store its data?
(Just the names, I am not looking for how to get there)
For this last type of website, I have observed almost every request made, but could not find where the data is.
So are there any other places besides the two mentioned above? Or are those the only two, indicating that I am not observing the requests properly?
You may answer it in brief bullet points and I can take my study from there.
Thanks in advance.
Let's assume we are talking only about HTML data. A web server could serve you data in many other formats (JSON, XML, etc.).
Please note that what I describe here is a generalisation, and like most generalisations, you can find exceptions that do not fit it.
Broadly, we can divide the type of data displayed (to the end user) into two categories:
Pre render
Post render
Pre render
The entire HTML page is constructed server-side and sent to the client. Here, the JS side is concerned with user interaction, not with the structure of the data.
We are slowly moving away from this type of structure, but currently the large majority of web pages use it.
Web scraping is relatively easy here, as we can programmatically pull the HTML page without bothering about the JavaScript code that accompanies it.
A combination of requests and BeautifulSoup should work in almost all cases (assuming you can identify the general structure of the document).
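A minimal sketch of that pre-render case (the URL and CSS selector are placeholders for whatever your target page uses):

```python
# The data is already in the HTML the server returns, so fetching and
# parsing it is enough; no JavaScript needs to run.
import requests
from bs4 import BeautifulSoup

response = requests.get("https://example.com/listing")  # placeholder URL
soup = BeautifulSoup(response.content, "html.parser")

for row in soup.select("table.results tr"):  # assumed page structure
    cells = [cell.get_text(strip=True) for cell in row.find_all("td")]
    print(cells)
```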
Post render
Here, the HTML page returned from the server is just a "skeleton", with placeholders for the actual data. The data is rendered by the accompanying JS code.
In such cases, if you fetch the source file via e.g. requests, you will get an empty shell with no data in it.
If you inspect the calls made by the browser while rendering (Chrome's Network tab, Firefox's inspect tool, or the popular Firebug), you will most likely see AJAX requests that bring back the actual data from the server.
Depending on how the requests are made, you could hit that AJAX endpoint directly and get the data as JSON.
You can then use the response.json() function to extract it into Python dicts.
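For example, once you have spotted the endpoint in the browser's network tab, a sketch like this gets you the data directly (the URL, parameters, and headers are placeholders for whatever the observed request uses):

```python
# Hit the AJAX endpoint the page itself calls and parse the JSON response.
import requests

response = requests.get(
    "https://example.com/api/items",  # the endpoint seen in the network tab
    params={"page": 1},               # copied from the observed request
    headers={"X-Requested-With": "XMLHttpRequest"},  # some servers check this
)
data = response.json()  # parses the JSON body into Python dicts/lists
print(data)
```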
In certain (rare) cases, there is no AJAX call, but the HTML served from the server is still a shell. The actual data is part of the file served, but stored within the JS code itself. This can be done for a variety of reasons, for example to pass dynamic data to static JS files, or just to deter simple attempts at scraping the page.
One approach to scraping such pages is to 'render' the page in a headless browser, which executes the JS code and returns HTML that can be parsed with parsers like BeautifulSoup.
BeautifulSoup can work with many parsers, one of which is html5lib, which parses markup the way a browser does; note, though, that a parser alone does not execute JavaScript, so you still need something to render the page first.
You could also look at Selenium or mechanize.
Or you could try parsing the JS code yourself, which might be faster.
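A sketch of the headless-browser route with Selenium (assuming chromedriver is installed and on your PATH; the URL is a placeholder):

```python
# Let a real browser engine execute the page's JS, then parse the
# rendered HTML with BeautifulSoup.
from bs4 import BeautifulSoup
from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_argument("--headless")
driver = webdriver.Chrome(options=options)
try:
    driver.get("https://example.com/js-rendered-page")  # placeholder URL
    soup = BeautifulSoup(driver.page_source, "html.parser")
    print(soup.get_text())
finally:
    driver.quit()  # always shut the browser down
```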
Deciding what to use requires careful inspection of how the page is rendered in a browser. Even if you don't see an AJAX request, the HTML served by the server need not be what the browser displays.
A good way to start is by looking at the bare HTML being served, either by downloading the page via curl or requests.get, or simply by rendering it in your browser with JavaScript disabled.
Good luck.

Emulating a web browser to wrap the functionality of several similar web sites

I'm interested in emulating the functionality of a web browser in C++ so that I can create a wrapper for several web sites. Right now, the biggest issue with these sites is that they make heavy use of JavaScript that interacts with the HTML DOM. Thus, the simple solution of using curl to download the page and something like RapidXML to parse its contents is out.
Next, I considered using something like v8 with curl, and that solves the issue of interpreting the JavaScript on the page nicely. However, it doesn't solve the issue of connecting the HTML DOM methods with the JavaScript; in other words, document.getElementById() would fail in v8.
Next, I considered WebKit, which seems like it's perfectly suited to emulate a web browser--after all, Chromium and Safari both utilize it in their web browsers. However, it's a little too complete. I don't need all of the rendering aspects it includes.
So, I'd be looking for some way to:
1. Make an SSL connection to a web site
2. Interpret the JavaScript on that web site in connection with the HTML DOM
3. Set the value of the username/password <input> fields with my username and password
4. Simulate clicking the "Submit" button by calling the formSubmit() function, from <input type="button" onClick="formSubmit()">
5. Handle the HTTP POST form action and the subsequent HTTP 301 and JavaScript redirects (accomplished using window.location)
6. Repeat steps 2-5 as needed
Besides what I've already considered, what other options do I have? Ideally, I'd want this to be extremely lightweight, without requiring linking to many libraries.
I'm primarily concerned with developing for Windows 7 64-bit.
Well, this sounds all too much like a brute-force program. Disregarding that, and since you don't seem to need to render any website, I think you should just fetch the page through cURL or something, parse it, find the form using a regex, retrieve the form action, and then make a request using the method taken from the <form> tag and whichever inputs you want.
The problem is that there is no reliable way to know whether you have logged in successfully unless you do some kind of per-site check. This is mainly because many sites use sessions rather than plain cookies or HTTP auth, and since you can't read sessions directly, it is impossible to tell when the session has changed.
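To sketch that flow, here it is in Python with Requests for brevity (a C++ version would do the same with libcurl; the URL, form field names, and the "Logout" check are all placeholders you would adapt per site):

```python
# Fetch the login page, pull the form action out of the HTML with a regex,
# POST the credentials, and keep the session cookies for later requests.
import re
import requests

session = requests.Session()  # carries cookies across the redirects

page = session.get("https://example.com/login")  # placeholder URL
match = re.search(r'<form[^>]+action="([^"]+)"', page.text)
action = match.group(1) if match else "/login"

response = session.post(
    "https://example.com" + action,
    data={"username": "me", "password": "secret"},  # names taken from the form
)
# The per-site check mentioned above: guess success from the response body.
logged_in = "Logout" in response.text
```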
That's the most lightweight solution I can come up with right now.

Pass array from HTML to Django application

I developed an application in JSPs and Servlets involving drop-down menus that kept growing with the number of authors a publication had.
This was done in JavaScript, and my application then iterated through them in a loop. Is this possible using Django? It would be useful in my application.
This link might help you out if you don't want to dive into JavaScript (too much):
http://www.dajaxproject.com/
Or have a look at this Stack Overflow question/answer:
What is the best AJAX library for Django?
In any case, you need to serialize your array to a JSON string.
Then pass the JSON to the server with an XMLHttpRequest (AJAX).
Add the javascript tag to your question if you don't mind more JS solutions.
Otherwise, look for a Django AJAX framework to do the heavy lifting for you.
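As a minimal sketch of the receiving end (the view name and payload are illustrative), the Django view deserializes the JSON array and loops over it:

```python
# Hypothetical Django view that reads a JSON-serialized array sent via
# XMLHttpRequest and iterates over it.
import json

from django.http import JsonResponse

def save_authors(request):
    if request.method == "POST":
        authors = json.loads(request.body)  # e.g. ["Knuth", "Lamport"]
        for name in authors:
            ...  # save each author here
        return JsonResponse({"saved": len(authors)})
    return JsonResponse({"error": "POST required"}, status=405)
```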