I'm using Django with SQLite3 on OpenShift, and I need to reset my database (clear all the tables). How do I do that?
You can run the flush command to clear the data from all tables:
python manage.py flush
Note that this command will IRREVERSIBLY DESTROY all data currently in the database.
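If you need to run it non-interactively (for example over SSH in a script), the --noinput flag skips the confirmation prompt:
python manage.py flush --noinput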
To run a manage.py command on OpenShift, make sure you have SSH access to your application.
Step 1
Method 1: With RedHat Client
The easiest way is to install rhc; you can install it by following the official guide.
After installation and configuration, run
rhc ssh <app name>
If everything goes correctly, this will log you into your app repo.
(or) Method 2: Without RedHat Client
Add your public key in the console settings.
Copy the SSH command from the Remote Access section of the console.
The command looks like:
ssh <some random string>@your-domain.rhcloud.com
Paste the command into a terminal window and press Enter.
Step 2
Now navigate to your source directory by running:
cd app-root/repo/
Step 3
Now you are in the repo, where you can run your manage.py tasks:
python manage.py makemigrations
or
python3 manage.py migrate
This is how you run a manage.py command in a repo.
Make sure you don't share your keys.
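Putting it together for the original question, the whole reset can be done in one pass (a sketch, assuming the standard OpenShift gear layout used above):
rhc ssh <app name>
cd app-root/repo/
python manage.py flush --noinput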
I have deployed my Django app on Heroku via GitHub.
It is a test server, so I am using SQLite.
But because of the dyno manager, my SQLite db resets every day.
So I want to download just the db from Heroku.
I tried this command.
heroku git:clone -a APP-NAME
But this clones an empty repository.
And when I run the heroku run bash -a APP-NAME command, I get an ETIMEOUT error.
Is there any other way to download the source code from Heroku?
What you want to do with git is not possible, because changes to the database are not versioned.
The command to run bash on Heroku is heroku run bash, not heroku bash run. You may have to specify the app using the -a flag: https://devcenter.heroku.com/articles/heroku-cli-commands#heroku-run
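Concretely, with the placeholder app name from the question:
heroku run bash -a APP-NAME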
I solved this by downloading the application slug.
If you have not used git to deploy your application, or if using heroku git:clone has only created an empty repository, you can download the slug that was built when your application was last deployed.
First, install the heroku-slugs CLI plugin with heroku plugins:install heroku-slugs,
then run:
heroku slugs:download -a APP_NAME
This will download and decompress your slug into a directory with the same name as your application.
I have a Django app that is deployed on AWS Elastic Beanstalk. When I deploy, I need to run the migrate and collectstatic scripts.
I have created 01_build.config in the .ebextensions directory, and this is its content:
commands:
  migrate:
    command: "python manage.py migrate"
    ignoreErrors: true
  collectstatic:
    command: "python manage.py collectstatic --no-input"
    ignoreErrors: true
but still, it is not running these scripts.
Sounds like you want to run these scripts after the app has been set up, in which case you need to use the key container_commands rather than commands. From the docs:
The commands run before the application and web server are set up and the application version file is extracted.
and
Container commands run after the application and web server have been set up and the application version archive has been extracted, but before the application version is deployed. Non-container commands and other customization operations are performed prior to the application source code being extracted.
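Applied to the config above, the fixed 01_build.config would look something like this (a sketch; the 01_/02_ prefixes control run order, and leader_only is an optional Elastic Beanstalk key that runs a command on a single instance only, so migrations don't race across instances; ignoreErrors is dropped here so a failed migration actually fails the deploy):
container_commands:
  01_migrate:
    command: "python manage.py migrate"
    leader_only: true
  02_collectstatic:
    command: "python manage.py collectstatic --no-input"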
I successfully installed and set up Django (2.0.2) using Python 3.6.4 on IIS on a remote Windows Server 2012 R2 (in a VPN environment) according to this instruction:
http://blog.mattwoodward.com/2016/07/running-django-application-on-windows.html
I'm using SQL Server as the backend, and I stored the sensitive data such as the DJANGO_SECRET_KEY, DATABASE_PASSWORD, etc. as environment variables in the FastCGI settings in IIS (see the above link, section Configure FastCGI in IIS, step 14).
At this point I asked myself how to deploy my apps and do all the necessary steps like installing packages from requirements.txt, collectstatic, makemigrations, etc.
While googling I found a possible solution using git hooks, specifically a post-receive hook, following this post:
https://dylanwooters.wordpress.com/2015/02/21/one-command-deployments-to-iis-using-git/
So I successfully set up Bonobo Git Server on the same IIS according to this instruction:
https://bonobogitserver.com/install/
My thought was to put all the required commands into this post-receive hook file, as I would do in the development environment. I ended up with this file at \inetpub\wwwroot\Bonobo.Git.Server\App_Data\Repositories\myapp-master\hooks\post-receive:
#!/bin/sh
DEPLOYDIR=/c/apps/myapp
VIRTENVDIR=/c/virtualenvs/myapp
GIT_WORK_TREE="$DEPLOYDIR" git checkout -f
cd $VIRTENVDIR
source Scripts/activate
cd $DEPLOYDIR
pip install -r requirements.txt
python manage.py collectstatic --noinput
python manage.py makemigrations
python manage.py migrate
I granted permissions on the Scripts/activate file, and I receive no errors during the push. But the current problem is that the command at line 6 doesn't activate the virtual environment, and therefore no migrations etc. happen. Normally I would activate the virtual environment at the command prompt using these commands:
cd C:\virtualenvs\myapp
Scripts\activate.bat
but I can't use these in the post-receive file because it uses Unix-based commands.
Can anyone help me, or does anyone have a better idea of how to deploy more cleanly in a Windows environment without going into the cloud?
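A minimal sketch of one possible fix (not from the original thread): skip activation entirely and call the virtualenv's executables by full path, which works the same under the sh that Git hooks run in on Windows:
#!/bin/sh
DEPLOYDIR=/c/apps/myapp
VIRTENVDIR=/c/virtualenvs/myapp
GIT_WORK_TREE="$DEPLOYDIR" git checkout -f
cd "$DEPLOYDIR"
# No activation needed: the venv's own pip/python resolve packages inside the venv
"$VIRTENVDIR/Scripts/pip" install -r requirements.txt
"$VIRTENVDIR/Scripts/python" manage.py collectstatic --noinput
"$VIRTENVDIR/Scripts/python" manage.py makemigrations
"$VIRTENVDIR/Scripts/python" manage.py migrate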
I'm running a Django application on my Amazon Linux instance using the below command:
python manage.py runserver ec2-instance-ip.us-east-2.compute.amazonaws.com:8000
I want the application to keep running even after I quit the shell. How do I achieve that on Amazon Linux?
I tried using the & as shown below, but it didn't work.
python manage.py runserver ec2-instance-ip.us-east-2.compute.amazonaws.com:8000 &
Running python manage.py ... is how you run in development, but it's not how you run on a web server. You need to deploy your application.
Take a look at Apache and mod_wsgi.
Screen is basically a tool that keeps a process running even after you exit the shell.
Install it with the commands below (these are for Ubuntu; on Amazon Linux, use sudo yum install screen instead):
sudo apt-get update
sudo apt-get install screen
For details you can see: https://www.digitalocean.com/community/tutorials/how-to-install-and-use-screen-on-an-ubuntu-cloud-server
Create a screen for your command, so it can run as a daemon:
screen -S <processName>
(the process name can be any name you choose for your process)
To attach to the screen (or re-attach to it later):
screen -r <processName>
Now you are inside the screen and can run your command here:
python manage.py runserver ec2-instance-ip.us-east-2.compute.amazonaws.com:8000
Now detach from the screen: press Ctrl+A and then D.
You can create as many screens as you want, and you can list them at any time with:
screen -ls
Important: this is not recommended for a production server.
See this for running a Django app on a production server: https://docs.djangoproject.com/en/2.0/howto/deployment/
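As an aside on why the plain & in the question didn't work: the process still receives SIGHUP when the shell exits. A quick alternative to screen, equally unsuitable for production, is nohup:
nohup python manage.py runserver ec2-instance-ip.us-east-2.compute.amazonaws.com:8000 &
(output goes to nohup.out, and the process survives the shell closing)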
I'm working on a simple implementation of Django hosted on Google's Managed VM service, backed by Google Cloud SQL. I'm able to deploy my application just fine, but when I try to issue some Django manage.py commands within the Dockerfile, I get errors.
Here's my Dockerfile:
FROM gcr.io/google_appengine/python
RUN virtualenv /venv -p python3.4
ENV VIRTUAL_ENV /venv
ENV PATH /venv/bin:$PATH
# Install dependencies.
ADD requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt
# Add application code.
ADD . /app
# Overwrite the settings file with the PROD variant.
ADD my_app/settings_prod.py /app/my_app/settings.py
WORKDIR /app
RUN python manage.py migrate --noinput
# Use Gunicorn to serve the application.
CMD gunicorn --pythonpath ./my_app -b :$PORT --env DJANGO_SETTINGS_MODULE=my_app.settings my_app.wsgi
# [END docker]
Pretty basic. If I exclude the RUN python manage.py migrate --noinput line and deploy using the gcloud tool, everything works fine. If I then log onto the VM, I can issue the manage.py migrate command without issue.
However, in the interest of simplifying deployment, I'd really like to be able to issue Django manage.py commands from the Dockerfile. At present, I get the following error if the manage.py statement is included:
django.db.utils.OperationalError: (2002, "Can't connect to local MySQL server through socket '/cloudsql/my_app:us-central1:my_app_prod_00' (2)")
Seems like a simple enough error, but it has me stumped, because the connection is certainly valid. As I said, if I deploy without issuing the manage.py command, everything works fine. Django can connect to the database, and I can issue the command manually on the VM.
I'm wondering if the reason for my problem is that the SQL proxy (cloudsql/) doesn't exist when the Dockerfile is being deployed. If so, how do I get around this?
I'm new to Docker (this being my first attempt) and newish to Django, so I'm unsure of what the correct approach is for handling a deployment of this nature. Should I instead be positioning this command elsewhere?
There are two steps involved in deploying the application.
In the first step, the Dockerfile is used to build the image, which can happen on your machine or on another machine.
In the second step, the created Docker image is executed on the Managed VM.
The RUN instruction is executed when the image is being built, not when it's being run.
You should move the manage.py call into the CMD instruction, which is run when a container is started from the image:
CMD python manage.py migrate --noinput && gunicorn --pythonpath ./my_app -b :$PORT --env DJANGO_SETTINGS_MODULE=my_app.settings my_app.wsgi
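An equivalent variant (an assumption, not part of the original answer) is to move the two startup steps into a small entrypoint script, COPY it into the image, and point CMD at it; this keeps the Dockerfile tidy as startup steps accumulate:
#!/bin/sh
# entrypoint.sh -- hypothetical helper, executed when the container starts
set -e
python manage.py migrate --noinput
exec gunicorn --pythonpath ./my_app -b :$PORT --env DJANGO_SETTINGS_MODULE=my_app.settings my_app.wsgi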