Caching python web requests for offline use - python-2.7

Is there a way to "cache" the requests I make with the Python "requests" module so that even if I go offline, the module still returns the webpage as if I were online? How can I achieve something like this? So far I have only found caching libraries that cache the webpage but still require you to be online...

I think you can use the requests-cache module.
Please check http://requests-cache.readthedocs.org/en/latest/user_guide.html
Once you cache with requests-cache, the data is available even if you are disconnected.
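A minimal sketch of what that looks like (the cache name 'demo_cache' is an arbitrary choice; it just becomes the name of the SQLite file on disk):

    import requests
    import requests_cache

    # Patches requests so responses are stored in a local SQLite file
    # (demo_cache.sqlite); identical requests made later are answered
    # from that file, even with no network connection.
    requests_cache.install_cache('demo_cache')

    response = requests.get('http://example.com')  # fetched online, then cached
    response = requests.get('http://example.com')  # served from the cache
    print(response.from_cache)                     # True once it comes from the cache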

For anyone else searching, the module that does the job is called "vcrpy".
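For reference, a small vcrpy sketch (the cassette path is an arbitrary choice here): the first run records the HTTP interaction to a YAML file, and later runs replay it from that file without touching the network.

    import vcr
    import requests

    # Records the interaction on the first run, replays it on every later run.
    with vcr.use_cassette('fixtures/example.yaml'):
        response = requests.get('http://example.com')
        print(response.status_code)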

Related

Django communicating with another python application?

Is it possible to have Django running on the server and have it communicate with another Python process, say one that I developed, fetching a response from it or even just making it perform a particular action?
It can be synchronous or asynchronous; for the asynchronous case I have some idea that a package like hendrix, crossbar.io or even celery could be used. But I don't understand what this kind of inter-communication would be called or how I should plan the architecture for it.
These are the two situations going around my head that I'm trying to plan for:
1.
Say I have Django and an e-mail sender built with Python's smtplib package. A user making a request to a view would make Django execute the module I developed, sending an email to a particular user (through an SMTP server from Google/Gmail). It could be synchronous or asynchronous.
OR
2.
I have Django (some application) and I want it to communicate with some server I maintain; say, to make that server execute some code or just fetch a file (if it is an FTP server). Is this the kind of situation the term 'microservices' refers to? Or is there another term or workaround for it?
Your first solution would be called an installable Python module, just like any package you install with pip. You can keep it as a separate module if you need your code to be reusable across multiple (or just future) projects.
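As a rough sketch of that first case (the Gmail account, password and addresses below are placeholders), the e-mail sender is just an ordinary module that a Django view can import and call, either directly or through a task queue such as celery:

    # mailer.py - plain reusable module, importable from any Django view
    import smtplib
    from email.mime.text import MIMEText

    def send_mail(to_addr, subject, body,
                  user='me@gmail.com', password='app-password'):
        msg = MIMEText(body)
        msg['From'] = user
        msg['To'] = to_addr
        msg['Subject'] = subject

        server = smtplib.SMTP('smtp.gmail.com', 587)  # Gmail SMTP, STARTTLS port
        server.starttls()
        server.login(user, password)
        server.sendmail(user, [to_addr], msg.as_string())
        server.quit()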
Your second solution would be a microservice. This will require setting your small module as a service that could have a REST API to communicate with and make it do whatever you intend it to do.
If your question is "what is the right approach?", then I would say it depends on your use case. If this is just some reusable code that you don't want to repeat over and over throughout your project, then make it into a separate module. If, on the other hand, it is a service that you expect other services to use and rely on, then make it into a microservice. You can use a microframework such as Flask for an easier and faster setup of your service. Otherwise, if it's just some code that you will use once and that serves a single purpose in your application, just write it and keep it there.
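A minimal sketch of such a Flask microservice (the endpoint name and port are arbitrary choices); Django, or any other client, would then reach it over HTTP, e.g. with requests.get('http://localhost:5001/do-something'):

    # service.py - tiny REST-style microservice
    from flask import Flask, jsonify

    app = Flask(__name__)

    @app.route('/do-something')
    def do_something():
        # ... run whatever action this service is responsible for ...
        return jsonify(status='ok')

    if __name__ == '__main__':
        app.run(port=5001)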
There are no rules or standards on which approach should be taken. I personally judge things depending on the use-case.
Hope this helps!

offline use of the google earth plugin

I have a use case that requires offline access to google earth. I know that google earth enterprise offers a disconnected product, however we may not have access to that product and/or google earth enterprise is prohibitively expensive at $25K for a dev license.
I would prefer to use the google earth plugin since I am building an application and would like to use the JS api. Is it possible to host the google earth plugin on my own disconnected server? We would use google earth connected to a standalone offline WMS server for access to imagery.
Said another way, can I host the plugin and the corresponding JavaScript on my own server?
I do not know if I understand your problem well, but I can explain what I'm currently working on.
In my current application with the Google Earth plugin JS API, I'm able to start the plugin even when offline. But one requirement is to have cached data.
If you have cached data and you start the plugin offline, then zooming to a level with a higher resolution than the one in your cached data will have no effect (the imagery will not be updated to a higher resolution).
But depending on what you really need, yes, you can start the plugin offline.
This is not really answering your original question but if you are interested, just tell me :-)
I tried to cache Google Earth with a proxy server but I couldn't.
Furthermore, I think the API is validated against Google's servers every time it loads, so it doesn't allow offline use.
It's been some months now since I worked with this.
I'll try to explain what I can remember :-)
In the HTML where I have my plug-in, I removed:
<script type="text/javascript" src="https://www.google.com/jsapi"></script>
but I saved this jsapi.js file locally. I also saved loader_1-008.js locally.
Then, in my code (C++, Qt), I use evaluateJavaScript(QString source) twice,
where source is the text read from my two .js files.
These two evaluateJavaScript calls need to be made before loading my HTML (the one with the plugin)
in my QWebView.
I can't remember much more, but I hope this can start to help you.

Run the orbited server?

Odd question, but I'm trying to follow this tutorial, which explains how to set up Comet with Django. I'm just confused about a few things while working through it.
Firstly, where does the orbited.cfg file go? I just placed it at the root of my application (where the settings.py file etc. is). Also, the cfg says to use the localhost address for http, but I'm not running a development server, so can I just put the URL I'm using there? What about the port?
Secondly, at the end of the tutorial, it says to run the orbited server. How do I do this? Do I need to install orbited beforehand? I ask this also because the HTML file requires an orbited.js file, and I have no clue where to find that. orbited.org doesn't seem to work. Thank you.
I hope this is no longer relevant for OP, but googlers may find this helpful:

    orbited --config example.cfg

where example.cfg is the config file for orbited.
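To cover the installation part of the question as well: orbited is a regular package on PyPI, so a minimal sketch of installing it and starting it against your own config file (whatever you named it) looks like this:

    pip install orbited
    orbited --config orbited.cfg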

which django cache backend to use?

I have been using cmemcache + memcached for a while with positive results.
However, cmemcache hasn't been keeping up lately, and I also found that it's no longer recommended. I have now installed python-memcached and it's working well. Since I have decided to change, I would like to try some other cache backend; any recommendations?
I have also come across pylibmc and python-libmemcached; are there any others?
Has anyone tried the nginx memcached module?
Thanks
Only cmemcache and python-memcached are supported by the Django memcached backend. I don't have experience with either of the two libraries you mentioned, but if you want to use them you will need to write a new cache backend.
The nginx memcached module is something completely different. It doesn't really seem like it would work well with Django's caching, but it might with enough hacking. At any rate, I wouldn't use it, since it is much better if you control what gets cached and retrieved from the cache on the Python side.
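For reference, wiring python-memcached into Django is just a settings change; a sketch for Django 1.3 or later, assuming memcached is running locally on its default port:

    # settings.py
    CACHES = {
        'default': {
            'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
            'LOCATION': '127.0.0.1:11211',
        }
    }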

How can I encrypt my django code?

I have to upload my django project to a shared hosting provider.
How can I encrypt my code?
I want to hide my code on the server.
Thanks :)
You can't. You could upload .pyc files I suppose, but they are completely and utterly trivial to decompile.
Who are you trying to conceal it from? If it's other users on the shared system, then make sure you have directory permissions properly restricted to your user. If it's the shared hosting provider itself, then there's not much you can do since obfuscation won't buy you all that much; spend some time to find a reputable hosting provider you can trust.
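If you do go the .pyc route anyway, byte-compiling the whole tree is a single command (a sketch; note that decompilers such as uncompyle recover readable source from .pyc files, which is why this only deters casual readers):

    python -m compileall /path/to/project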
If you really want to hide your code, you would have to build a custom Python interpreter that uses different opcodes (in the Python bytecode). Then the server only has your hacked binary and .pyc files that are not trivial to decode. You can add encryption on top of that, or at least sign your code so that your binary is not that easy to investigate.
Another possibility is to never have your code on disk, only keep it in RAM. You could start your server process via e.g. execnet.