I've been working with the PaaS service AppFog and I was able to get my Django application up and running, but my static files are not working because I need to execute the collectstatic command in a shell.
I've been reading about it online but I wasn't able to find a proper solution. Should I make a shell script and execute it? How?
I appreciate your time.
You can execute collectstatic with the following command:
python manage.py collectstatic
This is called an administrative management command. You can find out more about these and implementing them yourself by reading https://docs.djangoproject.com/en/1.4/howto/custom-management-commands/.
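If the platform does not give you an interactive shell, the usual approach is to run collectstatic non-interactively as part of your deploy or start-up step. A minimal sketch, assuming STATIC_ROOT is already configured in your settings.py:
python manage.py collectstatic --noinput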
Inside cPanel -> Python App I have tried several times to create a superuser. When I try to execute this command inside "Execute python script":
manage.py createsuperuser
it returns this error:
Superuser creation skipped due to not running in a TTY. You can run `manage.py createsuperuser` in your project to create one manually.
How can I solve this problem, or is there any manual workaround? I found several solutions, but they were all for a local server.
There is no difference between creating a superuser on a local server and on a production server. You have to do the following:
Enter your server via ssh.
Go to your project root folder (with manage.py file)
Type python manage.py createsuperuser (use your virtual environment or the system interpreter, depending on your setup). If you cannot get an interactive terminal at all, see the non-interactive sketch below.
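If you genuinely cannot get a TTY (as with cPanel's "Execute python script"), a non-interactive alternative is to create the superuser through manage.py shell. A minimal sketch; the username, email, and password are placeholders you must replace:
python manage.py shell -c "from django.contrib.auth import get_user_model; get_user_model().objects.create_superuser('admin', 'admin@example.com', 'change-me')"
On Django 3.0 or newer you can instead set the DJANGO_SUPERUSER_USERNAME, DJANGO_SUPERUSER_EMAIL, and DJANGO_SUPERUSER_PASSWORD environment variables and run python manage.py createsuperuser --noinput.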
I have a django app that I put inside of a docker container for deployment. I have some initial data that I want to load into the database via the dumpdata and loaddata commands. The initial data lives on my local hard drive. I chose a very naive approach and simply copied the data_backup.json file to the server via scp.
Now, I want to load the data_backup.json file (the file sits on the server not in the docker container) by executing:
sudo docker-compose exec restapi python manage.py loaddata --settings=rest.settings.production ./data_backup_20191004.json
But Django only searches the internal directories for fixtures.
I am looking for a way to populate the database with the data_backup.json file inside the docker container. Can someone help?
Ultimately, I am looking for a way to dump data directly to S3 and load it from there if needed (for db backups). If you have any tips on how to achieve that, this would also be super helpful - I don't seem to be able to find material on that.
Just in case someone has this question in the future: it is possible to run loaddata from stdin, so you can take the backup file and pipe it into the database inside the container with a command like this:
cat fixture_name.json | sudo docker exec -i <container_name_or_id> python manage.py loaddata --format=json -
The trailing dash tells Django that you want to load the data from stdin.
DOCS
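Applied to the names from the question (the restapi service and settings module come from there; -T tells docker-compose not to allocate a pseudo-TTY so the pipe works), it would look something like:
cat data_backup_20191004.json | sudo docker-compose exec -T restapi python manage.py loaddata --settings=rest.settings.production --format=json -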
You could copy the file into the docker container before running the command with docker cp:
docker cp ./data_backup_20191004.json <container_id>:django_dir/data_backup_20191004.json
Or, if the file is located on an S3 server, you could run curl inside the docker container and load the data from there:
sudo docker-compose exec restapi curl -o data.json http://s3.example.com/path/to/data.json
sudo docker-compose exec restapi python manage.py loaddata --settings=rest.settings.production ./data.json
I am looking for a way to populate the database with the data_backup.json file inside the docker container. Can someone help?
See the answer by Xen_mar, which I think is perfect.
Ultimately, I am looking for a way to dump data directly to S3 and load it from there if needed (for db backups). If you have any tips on how to achieve that, this would also be super helpful - I don't seem to be able to find material on that.
This seems to be a completely different question. I would consider using a Django package like Django Smuggler, which allows you to load and dump fixtures from the admin, and I assume it may be possible to configure Django Smuggler's upload directory to be handled by django-storages. I'm not sure that is possible, so if you try it, let me know.
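Alternatively, without any extra Django packages, you can pipe dumpdata/loaddata through the AWS CLI from the host. A rough sketch, assuming the AWS CLI is installed on the host and a bucket named my-backups exists (both are assumptions; the service and settings names are taken from the question, and -T stops docker-compose from allocating a pseudo-TTY so the pipes work):
# dump straight from the container to S3
sudo docker-compose exec -T restapi python manage.py dumpdata --settings=rest.settings.production --format=json | aws s3 cp - s3://my-backups/data_backup.json
# restore from S3 back into the container
aws s3 cp s3://my-backups/data_backup.json - | sudo docker-compose exec -T restapi python manage.py loaddata --settings=rest.settings.production --format=json -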
I can't understand how to run a django command in a docker application.
I am trying to run a command which would normally work:
source project/bin/activate
Which results in:
-bash: project/bin/activate: No such file or directory
The command would definitely work in a non-docker Django app. I also tried:
docker-compose run web source project/bin/activate
docker-compose up source project/bin/activate
What is the right command then?
Have you tried giving the absolute path to the activate file? Something like this:
~/workspace/project/bin/activate
The above might actually work.
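Note that source is a shell builtin, so it has to run inside a shell in the container rather than as the docker-compose command itself. A rough sketch of what that could look like, assuming bash and the virtualenv actually exist at that path inside the image (which is exactly what the error suggests checking):
docker-compose run web bash -c "source ~/workspace/project/bin/activate && python manage.py migrate"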
In my project I have an app my_app with a management command my_command.py.
Over SSH, if I cd into /my/folder/project/and/app/ and run python2.4 manage.py my_command, all is OK,
but if I try python2.4 /my/folder/project/and/app/manage.py my_command, manage.py doesn't recognize my command...
I'm trying to run my command from a crontab.
Thx
laurent
In my experience, this kind of issue can happen for several reasons.
First, I'd check which Python interpreter is being used. If you are using virtualenv or something similar, you should make sure you are using the correct python executable.
If your server has SELinux, you should ensure it's not denying cron access to some files.
I also had an issue like this because the settings file (I used a separate settings file to make it less verbose) didn't exist.
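For a cron job, the usual fix is to cd into the project directory and use absolute paths for the interpreter. A hypothetical crontab entry along those lines (the schedule, interpreter path, and log file are placeholders to adjust; the project path is taken from the question):
*/30 * * * * cd /my/folder/project/and/app && /usr/bin/python2.4 manage.py my_command >> /tmp/my_command.log 2>&1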
I've deployed a Django application on Heroku. The application by itself works fine. I can run commands such as heroku run python project/manage.py syncdb and heroku run python project/manage.py shell and this works well.
My Django project makes use of the Python web scraping library called Scrapy. Scrapy comes with a command called scrapy crawl abc which helps me scrape websites I have defined in the scrapy application. When I run a scrapy command such as scrapy crawl spidername on my local machine, the application is able to scrape data and copy it to my database. However, when I run the same command on Heroku under a sub-directory of my project directory, heroku run scrapy crawl spidername, nothing happens.
I don't see anything in the Heroku logs which can point to where I'm going wrong:
2012-01-26T15:45:38+00:00 heroku[run.1]: State changed from created to starting
2012-01-26T15:45:43+00:00 app[run.1]: Awaiting client
2012-01-26T15:45:43+00:00 app[run.1]: Starting process with command `project/spiderMainDir scrapy crawl spidername`
2012-01-26T15:45:44+00:00 heroku[run.1]: State changed from starting to up
2012-01-26T15:45:46+00:00 heroku[run.1]: State changed from up to complete
2012-01-26T15:45:46+00:00 heroku[run.1]: Process exited
Some additional information:
My scrapy app calls pipelines.py to save the scraped items to the database. In the pipelines.py file, this is what I've written to invoke the Django settings so that I can import my models and save data to the database from the scrapy application.
import os, sys
# add the Django project root (three directories above this file) to sys.path
PROJECT_PATH = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
sys.path.append(PROJECT_PATH)
# point Django at the project's settings module so Scrapy can use the ORM
os.environ['DJANGO_SETTINGS_MODULE'] = 'settings'
Any pointers on where exactly I am going wrong? How do I execute the scrapy command on Heroku so that my application can scrape an external website and save that data to the database? Isn't the way external commands are run on Heroku something like heroku run command?
I'm answering my own question because I discovered what the problem was. Heroku for some reason was not able to find scrapy when I executed the command from a sub-directory and not the top-level directory.
The command heroku run ... is generally run from the top-level directory. For my project which uses scrapy, I was required to go to a sub-directory and run the scrapy command from the sub-directory (this is how scrapy is designed). This wasn't working in Heroku. So I went to the Heroku bash by typing heroku run bash to see what was going on. When I ran the scrapy command from the top-level directory, Heroku recognized the command but when I went to a sub-directory, it failed to recognize the scrapy command. I suppose there is some problem related to the path. From the sub-directory, I had to specify the complete path to scrapy (~/bin/scrapy crawl spidername) to be able to execute it.
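In other words, a session inside heroku run bash goes roughly like this (directory names follow the example above; exact prompts and output will differ):
$ heroku run bash
~ $ which scrapy                                       # found from the top-level directory
~ $ cd spiderSubDirectory
~/spiderSubDirectory $ scrapy crawl spidername         # not recognized here
~/spiderSubDirectory $ ~/bin/scrapy crawl spidername   # works with the full path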
To run the scrapy command without going into the Heroku bash manually each time, my workaround was to create a shell script containing the following code, put it under the bin directory of my top-level directory, and push the changes to Heroku.
bin/scrapy.sh :
#!/usr/bin/env bash
cd ~/project/spiderSubDirectory
# forward all script arguments to scrapy ($# would only expand to the number of arguments)
~/bin/scrapy "$@"
After this was done, I could execute $ heroku run scrapy.sh crawl spidername from my local bash. I suppose it's not the best solution, but it works.
Isn't the way external commands are run in Heroku like - heroku run appdir command?
It's actually heroku run command. By including your appdir in there, it resulted in an invalid command. Heroku's output doesn't give useful error messages when these commands fail; it just tells you that the command finished, which is what you're seeing. So just change the command to something like:
heroku run scrapy crawl spidername