Installing django-cms on a shared host

Can I install django-cms (from www.django-cms.org) on shared hosting without shell access? Thanks very much!

The real question is: "Can I run Django on a shared host?" django-cms is just a package; it needs Django to run, and Django is likely going to be your limiting factor there, not django-cms.
So, the answer to that is a resounding "maybe". Technically, if you run all your Python packages in a virtualenv, including Django and django-cms, you shouldn't need root/sudo privileges. But, and it's a big but, that assumes your shared hosting provider makes virtualenv available to you. If they support Python at all you should be good, but even then, who knows.
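If your host does give you some way to run commands (SSH, a web console, even a scheduled job), the whole setup is only a couple of steps. A minimal sketch, with placeholder paths:

    # create an isolated environment in your home directory -- no root needed
    virtualenv ~/envs/mysite
    # install Django and django-cms into it
    ~/envs/mysite/bin/pip install django django-cms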
However, shared hosting is generally not appropriate for anything like Django. Even if you can get it to run, performance will surely be an issue. Nowadays, there's really zero reason not to just jump to a VPS. You can get your own little VPS over at Rackspace for ~$11/mo. That's cheap -- possibly even cheaper than what you're paying for shared hosting.
Just make the leap and stop living in a limited shared environment. It's worth every penny.

Related

Selecting a git workflow for my situation

I'm new to git. I've read the well-written intro book. But gee, it's still not a trivial topic. I've been bumbling around, experiencing various problems. I realized it might be because I'm unaware of workflows, and specifically, "what are the best practices for doing what I'm trying to do?"
I started out developing a Django project on my Win7 machine with PyCharm. Great way to get the initial 95% written.
But then I needed to deploy it to my production machine at PythonAnywhere.
So I created a private GitHub repository and pushed my Win7 codebase to it.
Then on PythonAnywhere, I cloned the GitHub repository.
For now, no others work on this project. It will not be released to the public.
Now that the server is running on PythonAnywhere, I still need to tweak settings, which is best done on the PythonAnywhere side. But there are other improvements (new pages or views) that I'd rather do inside the PyCharm IDE on my Win7 machine than in vim on PythonAnywhere.
So I've been kind of clumsily pushing and fetching these changes. It's been kind of ham-handed, and I've managed to lose some minor changes through ignorance.
So I'm wondering if anyone can point me to a relatively simple workflow that would handle the various tasks I mentioned:
1) improving functionality of the site (best done in Pycharm IDE)
2) production server issues and tweaks (best done on PythonAnywhere)
3) keeping everything safely backed up on GitHub
The other issue is that I have another Django app that I want to build. It's easiest to temporarily hang it off the Django project I've already built, but I'd prefer to keep it in its own repository.
So I have Original_Project, Original_App stored in Original_Repository
I want to make new_app, and have it, for the time being, run in Original_Project, but I want to version control it in New_Repository.
I think/hope that I could put a .gitignore in Original_Repository telling it to ignore new_app/, and then git init new_app/ as its own repository. Is that sound or mad?
You should avoid editing your code on the production server as much as possible, and never commit from the production server. If you end up having to tweak things on the server (you shouldn't, but well, shit happens, and sometimes it's indeed easier to first get the code back to work on the server), then once it's working, manually port your edits back to your local repo, clear the changes on the server, and deploy the fixed code again. Here the GitHub repo should be considered the "master" repository for deployments, i.e. you work on your local repo, push to GitHub, and on the server pull from GitHub. This makes sure you keep the GitHub repo in sync.
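The cycle looks roughly like this, assuming origin points at the GitHub repo and you deploy from master:

    # on your local machine (Win7/PyCharm)
    git add -A
    git commit -m "add the new view"
    git push origin master

    # on the PythonAnywhere server: pull only, never commit here
    git pull origin master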
Regarding the "improving functionality" (aka "features") vs "server issues and tweaks" (aka "hotfixes") question, git flow is a (mostly) sane workflow IMHO, but that's a bit opinion-based (some dislike it and have sensible arguments too).
Finally, if you want to factor out one of your apps, the best option is to have it in its own (GitHub) repo with all the proper Python packaging stuff and make it a requirement of your main project. In your local dev environment you install it as an editable package, and for the production setup you install it as a normal package pinned to the last stable version. Note that in both cases I assume you're using virtualenvs (and if you don't, well, that's the very first issue you should address).
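Concretely, that might look like this; the repo URL and package name are made up for the example:

    # local dev: editable install, so edits to the checkout are picked up immediately
    pip install -e git+https://github.com/you/new_app.git#egg=new_app

    # production requirements.txt: pin to a stable tag
    git+https://github.com/you/new_app.git@v1.0#egg=new_app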
Update:
What are the downsides of editing directly on the production server and committing from the production server?
Well, quite simply, a production server is not the place for coding. "Production" means you have users trying to do something with your website, and they don't want the site breaking on them or their data lost because you are "tweaking" things. You should only deploy stable, well-tested code to production, and the one and only case where editing anything on the server might be a last-resort option is when it's already broken and you want to get it back online ASAP, whatever it takes (a case of "first make it work, then make it clean").
Point is, I'm a professional developer working on projects that are business-critical, and a broken site is not an option, so I'm very strict on this. But even if it's a hobby project, your users deserve some respect (at least if you expect to see them back).
A proper production chain actually involves at least three environments: your local dev environment; a staging server (which should closely mirror the production server: system, system package versions, configuration, etc.) to test, showcase, and occasionally make minor config tweaks; and the production server, which should only ever see stable, tested code.
I have always struggled with git, knowing it well enough to get things working, but never being sure I am doing things well.
I would suggest installing git flow (it is probably available in your package manager if you are on Linux). It's a set of extensions that simplify a standard git workflow. Since using it, this has pretty much been all the documentation I have needed:
https://danielkummer.github.io/git-flow-cheatsheet/
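Day to day it boils down to a handful of commands, e.g.:

    git flow init                      # set up the branch conventions, once per repo
    git flow feature start new-page    # branch off develop for new work
    git flow feature finish new-page   # merge the feature back into develop
    git flow hotfix start 0.1.1        # branch off master for an urgent production fix
    git flow hotfix finish 0.1.1       # merge into master and develop, and tag it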

Is it good practice to use a VM for Django projects?

So I was looking at the Getting Started with Django tutorial (http://gettingstartedwithdjango.com/), and everything was done in a VM. The author set up a VM, and then created a virtualenv inside it. Is this good practice for getting started on a Django project, or software projects in general? Why the need for a VM? What happens if I have more than one project -- should I use two VMs? Or just create additional virtualenvs in the original VM?
I'm still a student in school, and I'm working on my own personal side projects, so it'd be useful to get some input on how things are really done in the real world.
Thanks!
You do not need VMs. You can get by just fine using virtualenv with an environment for each project, especially when just starting out with Django.
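For example (the project name is just a placeholder):

    # one isolated environment per project
    virtualenv ~/envs/project_a
    source ~/envs/project_a/bin/activate
    pip install django
    # ... work on project_a, then leave the environment ...
    deactivate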
In the future, one of the times you may need a separate VM environment for your project is if it has a lot of unique infrastructure needs. It's much easier to set up a VM, set up the unique environment, and not have to alter it when you want to work on other projects.
Another common reason I see people using VMs is when they have a Windows machine but want to develop in Linux. It's easy to spin up a Linux VM and work there since Linux is more programmer friendly.
It's subjective. I leverage virtualenv and virtualenvwrapper for my development, which I do on Linux. There are instances where you might need two separate VMs... it just depends, although I haven't encountered one.
There's no unwritten rule that says you have to use a VM. Python (and many other languages/frameworks) simply work better on Linux, so many people will leverage VMs to run Linux on Windows or Mac to do their development in that environment.

Search backend for Django App on Heroku

On my development app I'm using a combo of Haystack for search with Whoosh as the backend.
However, when I deploy to Heroku my search is no longer working, even after running python manage.py update_index.
Upon some research, I discovered that it's because of Heroku's read-only file system.
Is there any free solution to get around this on Heroku so that I can get search working? The addons I have looked at are ~$20/month and I'd prefer to get started with a free solution if possible.
It's not really practical to do this without a separate search server. Storage on Heroku's dynos is not read-only, but it is ephemeral and individual to each dyno, and any production application will have at least two dynos. You might be able to set something up to run on a dyno, but it's certain to be complex and fragile, whereas a third-party service is turnkey. Most third-party search add-ons scale with usage and many are free at the cheapest level, and if none of them fit the bill, you can always use non-Heroku search services, of which there are many.
Note that the dyno filesystem is writeable. Can you post the error you are getting?
You might want to take another look at Heroku add-ons. There are several Elasticsearch add-ons that are in free beta or have free plans, and Haystack supports Elasticsearch:
https://addons.heroku.com/searchbox
https://addons.heroku.com/bonsai
https://addons.heroku.com/indexden
https://addons.heroku.com/foundelasticsearch
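For what it's worth, pointing Haystack at one of these usually comes down to a settings block along these lines; the exact environment variable (BONSAI_URL here) depends on which add-on you pick:

    import os

    HAYSTACK_CONNECTIONS = {
        'default': {
            'ENGINE': 'haystack.backends.elasticsearch_backend.ElasticsearchSearchEngine',
            'URL': os.environ.get('BONSAI_URL', 'http://127.0.0.1:9200/'),
            'INDEX_NAME': 'haystack',
        },
    }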

Django Compressor on a multi-server deployment

I've been fortunate enough to discover django_compressor and have implemented it within our stack, which deploys to many servers (currently six, but growing as we deploy smaller virtual machines).
Now this is all fine and dandy if you're using django_compressor at its finest: compressing raw CSS/JS code.
However, say now I want to introduce some type of pre-compiler. Let's say for this example it is LESS (CSS). The thought process is fairly simple:
Install node, npm, and the less package onto the server.
Add less to your precompilers!
COMPRESS_PRECOMPILERS = ( ('text/less', 'lessc {infile} {outfile}'), )
Now you deploy, and your server compiles the less file. Everything is fantastic!
Now let's add 8 more servers to that -- do you really have to install node, npm, and less on each one?
This is where something doesn't seem right, and I feel like I'm missing something. I believe the Django community has run into this problem before.
My thoughts thus far have been:
Use a post-commit hook to compile the CSS on the developer's machine. This means that, via django_compressor, we link to the compiled static file in the HTML, and our repository contains both the compiled and non-compiled versions. The downside is that this forgoes half the benefits of django_compressor and may be tedious for developers.
Suck it up and make node, npm, and less part of the server stack.
Update
I did some additional looking around and it seems that using the COMPRESS_OFFLINE flag (or just --force) with the management command will produce an offline manifest file that does what I need (only tested locally). So setting this up with a pre-deploy hook looks to be the answer.
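For reference, the setup boils down to roughly this (the pre-deploy hook itself is whatever your deploy process supports):

    # settings.py
    COMPRESS_OFFLINE = True

    # run in the pre-deploy hook, on a box that has node/npm/less installed
    python manage.py compress --force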
Of course, still open to other ideas :-)
Coupled with the tips in the comments about COMPRESS_OFFLINE, you could look at django-staticfiles' storage backends. You can host the static files on Amazon S3, for instance, so hosting everything on one static-hosting service and using that from all your servers could also be a nice solution. You wouldn't need to do anything with the static (and compressed) files on the individual servers.
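A sketch of what that can look like with django-storages' S3 backend (one possible choice; the bucket name is hypothetical):

    # settings.py
    STATICFILES_STORAGE = 'storages.backends.s3boto.S3BotoStorage'
    AWS_STORAGE_BUCKET_NAME = 'my-static-bucket'  # hypothetical bucket
    STATIC_URL = 'https://my-static-bucket.s3.amazonaws.com/'
    COMPRESS_URL = STATIC_URL
    COMPRESS_STORAGE = STATICFILES_STORAGE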
Alternative solution regarding the multiple servers: I've made a custom Fabric (docs.fabfile.org) script that installs/configures stuff on our servers. I've only recently started using CoffeeScript and LESS, but those two are definitely ending up in my fabfile. That solves the installation problem for me.
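A minimal sketch of such a fabfile, assuming Fabric 1.x and Debian-ish servers:

    # fabfile.py
    from fabric.api import sudo

    def install_less():
        """Make lessc available to django_compressor on a server."""
        sudo('apt-get install -y nodejs npm')
        sudo('npm install -g less')

    # usage: fab -H server1,server2,server3 install_less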
(Alternatives to a fabfile are things like a custom Debian package with standard dependencies, or Chef or Puppet or something similar.)
You can use Puppet for the task.

Which Django cache backend to use?

I've been using cmemcache + memcached for a while with positive results.
However, cmemcache hasn't been keeping up lately, and I also found that it's no longer recommended. I have now installed python-memcached and it's working well. Since I have decided to change, I would like to try some other cache backend. Any recommendations?
I have also come across pylibmc and python-libmemcached. Any others?
Has anyone tried the nginx memcached module?
Thanks
Only cmemcache and python-memcached are supported by Django's memcached backend. I don't have experience with either of the two libraries you mentioned, but if you want to use them you will need to write a new cache backend.
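With python-memcached installed, the configuration is just (Django 1.3+ shown; older versions use the one-line CACHE_BACKEND = 'memcached://127.0.0.1:11211/' form instead):

    CACHES = {
        'default': {
            'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
            'LOCATION': '127.0.0.1:11211',
        }
    }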
The nginx memcached module is something completely different. It doesn't really seem like it would work well with Django's caching, though it might with enough hacking. At any rate, I wouldn't use it, since it's much better if you control what gets cached and retrieved from the cache on the Python side.