Rebuild files in Ember CLI without running the server

I am planning to move from EmberJS to Ember CLI, but I have a small problem. Is it possible to run only the file watcher instead of ember serve, which starts a local server? Since my PHP backend runs on Google App Script, I already have a local Python HTTP server running on localhost:8080, and I don't need another one on localhost:4200.
If I don't run ember serve, my local changes in the development environment won't get rebuilt. Is there a better way of doing this? Is it possible to use assets from the app folder when running in the development environment, and the dist folder for staging/live environments?

As mentioned in the guide, you can use the build command with the --watch flag.
ember build --watch
That will keep rebuilding your changes but not actually run the server.
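If your existing Python server can simply serve static files, you can point it at the build output; a sketch, assuming it is Python 2's SimpleHTTPServer (the question doesn't say which server it is):
ember build --watch                          # keep rebuilding into dist/ on every change
cd dist && python -m SimpleHTTPServer 8080   # serve the built app with the existing Python server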
As for your second question:
Is it possible to use assets from the app folder when running in the development environment, and the dist folder for staging/live environments?
I don't believe so. You can change the output-path property in your .ember-cli config file, but you can't have one that's specific to a certain environment. You could always write a quick script to move the files though. :)
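A minimal sketch of the kind of move script suggested above (the --environment flag is real ember-cli; the target path is made up):
ember build --environment=production          # write an optimized build into dist/
rsync -a --delete dist/ /var/www/staging/     # sync the build to wherever staging serves from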

Related

Deploy Django app with Docker

I'm attempting to deploy a Django app via Docker, first locally and then to a cloud server. I could not find an answer to my initial question before attempting this: if I run docker-machine create, I'm guessing this should be run from within my virtualenv, right?
This would then grab all of my specific app dependencies and begin to build certificates to throw in the container? If not, please explain otherwise.
Yes, you are correct.
I will try to help you based on my experience, if you want to deploy Django apps via Docker.
First, you need to set up Docker Machine on your local machine; please see the instructions. The driver used by default is VirtualBox (--driver virtualbox).
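A sketch of that first step ("default" is the conventional machine name):
docker-machine create --driver virtualbox default   # create a local VM running the Docker daemon
eval "$(docker-machine env default)"                # point the local docker client at that VM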
List the specific dependency images your app needs, e.g. nginx, postgres, uwsgi. If you need to fetch an image and then modify it, you can use a Dockerfile (that is the best practice).
I suggest you use docker-compose; it really makes a project easy to manage. You have to define all the images your app needs in a docker-compose file; please read the reference.
After you have finished developing your app and want to deploy it to a production (cloud) server, you just need to copy your project over and run docker-compose. All image dependencies will be pulled automatically in the cloud.
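A sketch of that deployment step (assuming a docker-compose.yml in the project root):
docker-compose build    # build the images defined via a Dockerfile
docker-compose up -d    # pull remaining images and start every service in the background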
As a reference, you can look at this project (an open-source project I developed). In that project I use a Makefile to manage the docker-compose commands, which makes things easy to manage:
An example of dockerfile
An example of docker-compose.yml
An example of Makefile
Hope this will help you.

Django on production server (No module named urls)

I'm setting up Django on a production server and get this strange error (in the picture below).
As you can see, the PYTHONPATH seems to be OK (the first row is my project folder), I definitely have a urls.py module inside my project/project folder, I have an __init__.py file there, and ROOT_URLCONF = 'project.urls' (I also tried it without the project name, but that didn't help either).
So it is strange that it can't find it :(
I have to say that when I create a new project on the server, everything seems to be OK; but this project, copied over from my local machine, behaves like this.
Screenshot of the error:
The only problem I can think of is the process of package creation. What process have you followed to deploy your Django application?
If you compiled the Django application on your local machine or CI server and then deployed the compiled package, you will run into import issues, because the .pyc files contain hard-coded paths from your local machine or CI server. To fix this, create the same directory hierarchy on your local/CI server before compiling, then compile and deploy.
Hope this helps.
[Edit]
I agree that hard-coded paths in .pyc files are a pain, and we have been dealing with this in our production environment ever since we discovered it.
However, I do not agree with regenerating the .pyc files on the server, because as your application grows into a large application this becomes very slow.
You don't have to make your development directory follow the production directory structure. Instead, you can use any directory path on your development machine and create a separate bash script that builds a package for you by recreating the directory structure you follow in production. The bash script's logic (sketched after this list) would be:
Creating a directory structure similar to production
Checking out the code from source control
Compile the code using python -m compileall .
Create a tarball
You can untar this tarball on the production server and your application should run fine.
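A rough sketch of such a script (the paths and repository URL are placeholders):
#!/bin/bash
set -e
STAGE=/tmp/build
mkdir -p "$STAGE/opt/myproject"                          # mirror the production layout
git clone git@example.com:myproject.git "$STAGE/opt/myproject"   # check out the code
(cd "$STAGE/opt/myproject" && python -m compileall .)    # compile so the .pyc paths match production
tar -czf myproject.tar.gz -C "$STAGE" .                  # create the tarball to ship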
For more information about package creation in Python and best practices, check out this video.
It doesn't look like your project is in your path, actually. The traceback is only showing Django packages.

Moving from runserver to a production server

I am quite new to programming, and all of my development has been on my local runserver using TextMate and Terminal. I have written a small app of a few hundred lines and I'd like to push it to an EC2 server. My only knowledge in terms of 'development tools' is Django's local runserver, TextMate, and Terminal.
What tools or programs should I look into learning to have an effective workflow? Should I be using some sort of IDE instead of TextMate for this? My main concern is being able to develop on my local runserver and then painlessly push that to my production server.
As @isbadawi said, use Fabric. It's better than just using the terminal because you can automate things. As far as deployments go, you can simplify it down to: fab -H your.host.com deploy. Inside the fabfile you write commands; a simple one might go (sketched after this list):
Cause the server to download the most recent code from SCM
Update the database (syncdb / migrations / what have you)
Cause Apache (or whatever you're using) to reload the configuration
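A sketch of those steps as plain shell over SSH (the host, paths, and exact commands are made up); a fabfile essentially wraps commands like these:
ssh you@your.host.com 'cd /srv/myapp && git pull'                 # grab the most recent code from SCM
ssh you@your.host.com 'cd /srv/myapp && python manage.py syncdb'  # update the database
ssh you@your.host.com 'touch /srv/myapp/app.wsgi'                 # touch the WSGI file so mod_wsgi reloads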
As far as some more general tips go:
If you're using WSGI, put it under source control
The same goes for local settings files; have your deploy script rename them to local_settings.py as part of the build
If you're really going for painless, look into Django hosting services like Gondor or Ep.io. They have clients you can deploy with pseudo-painlessly, although you will have to change some settings on your side to match theirs, as there are many, many ways to deploy a Django app.
Update: Ep.io is no longer in business as a hosting service. My new go-to is Heroku.
Update 2: I used to link local_settings.py in deployments, but now I'm leaning towards using the DJANGO_SETTINGS_MODULE config variable. See rdegge's "django-skel" settings for a good way to do this.
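A sketch of the DJANGO_SETTINGS_MODULE approach (the settings module name is made up):
export DJANGO_SETTINGS_MODULE=myproject.settings.production   # select settings per environment, no renaming needed
django-admin.py syncdb                                        # management commands now read the production settings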
A DVCS such as git or Mercurial will allow you to develop and test locally, and then push the changes to a remote system for staging and production.

Pydev + Django workflow. Local(test) + remote synchronization. Using git with django

I'm new to Django and my very first project is my blog. I wonder how Django developers who use PyDev normally synchronize with their remote hosting server when updating their sites.
I would also like to know how you combine the use of git with a Django project. Should I just make a repository for the entire project?
At my company we've got an entire git repository for each project, including the Django sources, which are put on the PYTHONPATH for each project, making Django versions project-dependent. The folder structure is something like:
/.git
/projectname/app1
/projectname/app2
/projectname/manage.py
/django-lib/django/...
As django-lib is not a Python module, we include both / and /django-lib in the PYTHONPATH. If your project is becoming large, you might want to consider using git submodules for your apps.
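A sketch of what that PYTHONPATH looks like in practice (the checkout location is made up):
export PYTHONPATH=/home/dev/repo:/home/dev/repo/django-lib   # repo root plus the bundled Django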
We've also set up several servers to support the developers. There's a testing server running a central testing database and a setup including Apache with WSGI, to make testing on a real server possible, which is sometimes a bit different from the local manage.py runserver the developers use before committing their changes.
The testing server is updated from the master branch of our git repository. We've written several scripts that let all developers do this without logging in to the server via SSH, but that is just during pre-release. After release, that server will become our staging server, and we'll remove all the scripts from it to make it just like our production server.
Every developer has set up their local project to make sure it communicates with the central testing database, which contains some test data. I push my changes from the command line myself, but you could also use EGit for this.
When we have a release, we put it in a separate branch called 'release' (obviously), and the production server pulls only from that branch. This is done via SSH, but I don't really know what your server setup looks like, so I guess that last step is entirely up to you.
I hope that this has helped you a bit. I won't say that this is the best workflow possible, but it works for us and you should figure out what works for you.
Most experienced Django developers use pip (or distribute) and virtualenv to deal with all the Python packages you might need for your Django projects (including Django itself).
Personally, all I keep in my project's git repository is a bunch of segregated requirements lists generated by pip:
. ~/Dev/environs/$PROJECT_NAME/bin/activate
pip freeze > ./docs/requirements/main.list
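On a fresh machine or virtualenv, the same list can then be replayed; a sketch assuming the layout above:
pip install -r ./docs/requirements/main.list   # recreate the environment from the frozen list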
I'm fairly sure most Django developers would be familiar with Fabric, which I use for:
streamlining local interaction with git,
pushing to our central repository,
pulling from our production or test server,
touching the wsgi file on the relevant server,
and pretty much any other kind of task you might find yourself using an ssh terminal session for.
For those cases where I need to make changes to someone else's Django application in order to make it work or suit our purposes, I:
fork it on GitHub,
clone from my forked repo,
make the changes,
push them up to my own repo,
and send merge requests to the original repo owner.
This way, I have a repo I can keep pulling from via pip requirements lists until the original application owner updates their own repo.
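A sketch of that loop in git commands (the user, repo, and package names are made up):
git clone git@github.com:me/some-django-app.git                        # clone my fork
cd some-django-app
git remote add upstream git://github.com/author/some-django-app.git   # keep the original handy for updates
git push origin master                                                 # publish my changes after committing
pip install -e git+git://github.com/me/some-django-app.git#egg=some-django-app   # install from the fork until upstream merges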

Django Deployment Advice

I have a multi-step deployment system set up, where I develop locally, have a staging app with a copy of the production db, and then the production app. I use SVN for version control.
When deploying my production app, I have just been moving the urls.py and settings.py files up a directory, deleting my Django app directory with rm -rf, and then doing an svn export from the repository, which creates a new Django app directory with my updated code. I then move the urls.py and settings.py files back into place and everything works great.
My new problem is that I am now storing user uploads in a folder inside my Django app directory, so I can't just remove the whole app dir anymore or I would lose all of my users' files.
What do you think my best approach is now? Would svn export --force work, since it should just overwrite all of my changed files? Should I take an entirely new approach? I am open to advice.
You may want to watch this presentation by Jacob. It can help you improve your deployment process.
I use Bitbucket as my repo host, and I can simply push from my dev box and pull/update on the stage/prod boxes. Actually, I don't run these manually; I use Fabric to do them for me :).
You could use rsync or something similar to back up your uploaded files, and restore from that backup when you deploy your project.
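A sketch of that backup/restore step around the existing svn export deploy (the paths are made up):
rsync -a /srv/app/uploads/ /srv/backup/uploads/   # copy user uploads out of the app dir first
rsync -a /srv/backup/uploads/ /srv/app/uploads/   # put the uploads back after the export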
For deployment you could try to use buildout:
http://www.buildout.org/
http://pypi.python.org/pypi/djangorecipe
http://jacobian.org/writing/django-apps-with-buildout/
For other deployment methods see this question:
Django deployment tools
You can move your files to S3 (http://aws.amazon.com/s3/), so you will never have to worry about moving them along with your project.