I deployed my Django REST Framework API on an AWS EC2 instance. However, whenever I want to use it, I have to manually connect to the instance with PuTTY and start the API by running python manage.py runserver 0.0.0.0:8000.
When I turn off my PC, PuTTY closes and the API can no longer be reached at the IP address.
How do I keep my API running permanently? Would switching to HTTPS help? What else can be done?
You can keep it running permanently by the following means:
Connect to your EC2 instance using SSH.
Then deploy your Django backend on that instance and run it on any port.
Once it is running on your desired port, you can close the terminal window. Just don't press Ctrl+C, so that your Django server is not stopped; simply close the window and the server will keep running.
You can also run the Django server inside tmux (a terminal multiplexer, i.e. a terminal inside your terminal). Here is a tutorial on tmux:
https://linuxize.com/post/getting-started-with-tmux/
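For example, a minimal tmux workflow could look like this (standard tmux commands; the session name django is arbitrary):
tmux new -s django                         # start a new session named "django"
python manage.py runserver 0.0.0.0:8000    # run the server inside the session
# detach with Ctrl+B then D; the server keeps running
tmux attach -t django                      # reattach later to check on it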
Another approach is to deploy Django in a Docker container.
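A minimal sketch of that approach (a hypothetical Dockerfile, not from the original setup; adjust the base image and requirements file to your project):
# Dockerfile (sketch): run the Django dev server in a container
FROM python:3
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
You would then build and run it with docker build -t myapi . and docker run -d -p 8000:8000 myapi; the -d flag keeps the container running in the background, independent of your SSH session.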
I hope this helps you overcome your problem.
Thanks.
OK, I finally solved this. When you close PuTTY or any SSH client, the session ends and the processes it started are stopped. However, if you run the process detached, daemon-style, it continues in the background even after you close the client. The command is:
$ nohup python ./manage.py runserver 0.0.0.0:8000 &
Of course you can use tmux or Docker, as suggested by madi, but I think running this one command is much simpler.
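One practical note (my addition, not part of the original answer): nohup writes the server's output to nohup.out by default, and you can stop the server later by finding its PID:
$ tail -f nohup.out                   # follow the server logs
$ pgrep -f "manage.py runserver"      # find the PID of the background server
$ kill <PID>                          # stop it, substituting the PID from pgrep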
You can use pm2.
Install pm2, and make a server.json file in the root directory of your Django app to run your app:
{
  "apps": [{
    "name": "appname",
    "script": "manage.py",
    "args": ["runserver", "0.0.0.0:8888"],
    "exec_mode": "fork",
    "instances": "1",
    "wait_ready": true,
    "autorestart": false,
    "max_restarts": 5,
    "interpreter": "python3"
  }]
}
Then you can start the app with pm2 start server.json.
Your app will run on port 8888.
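Optionally (this goes beyond the original answer), pm2 can also bring the app back up after a reboot: pm2 startup prints a command that registers pm2 with your init system, and pm2 save stores the current process list so it is restored on boot:
pm2 start server.json
pm2 startup     # run the command it prints to register pm2 as a service
pm2 save        # save the process list so pm2 resurrects it after reboot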
The following is my EC2 User Data:
#!/bin/bash
sudo yum update -y
sudo yum install -y httpd
sudo systemctl start httpd
sudo systemctl enable httpd
In the Security Group, SSH port 22 and HTTP port 80 are open.
Yet when I try accessing http://public_ip_of_instance, the Apache test page doesn't load.
Also, Apache is not installed on the instance when I check with sudo systemctl status httpd.
I then tried it manually on the EC2 server and it worked. Then I removed it through yum remove, because I wanted to see whether the User Data works.
I stopped the instance and started it again, but I observed that the User Data script doesn't run: I am unable to access the HTTP page through the browser, and httpd is not installed on the instance.
Where is the actual issue? I remember this same thing working on another instance some months back.
Your user data is correct. Whatever is happening with your website is not due to the user data code that you provided.
There could be many reasons it does not work. The public IP of the instance changes whenever you stop and start the instance. The instance may also have pre-existing software that clashes with httpd.
Here's some general advice on running User Data once versus on each startup.
Short answer: as John mentioned in the comments, EC2 instances only run the User Data (aka bootstrap) script once, on initialization.
The User Data Bash/PowerShell script is infrastructure-as-code: you deploy the script, and it installs and configures the machine.
This confuses everyone starting out with AWS, but when you think about it, it doesn't make sense to rerun the User Data script on every boot once the machine has already been configured.
What people often do instead is make "golden images" (aka Amazon Machine Images, AMIs) of pre-configured EC2 instances, typically for machines that take a long time to install and configure. The beauty of this is that you can set up Auto Scaling groups to use those images, which avoids any long installation during a scale-up event.
Pro tip: when developing a User Data script, run through and test it manually on the EC2 instance. Trust me, it's far quicker than troubleshooting unattended EC2 User Data errors.
Long answer: you can run the User Data on each boot of the machine using a MIME multi-part file. A MIME multi-part file allows your script to override how frequently user data is run by the cloud-init package.
https://aws.amazon.com/premiumsupport/knowledge-center/execute-user-data-ec2/
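For reference, the approach looks roughly like this (a sketch following the format in the AWS knowledge-center article linked above, with this question's httpd commands substituted in; verify the exact headers against the article):
Content-Type: multipart/mixed; boundary="//"
MIME-Version: 1.0

--//
Content-Type: text/cloud-config; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="cloud-config.txt"

#cloud-config
cloud_final_modules:
- [scripts-user, always]

--//
Content-Type: text/x-shellscript; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="userdata.txt"

#!/bin/bash
sudo yum update -y
sudo yum install -y httpd
sudo systemctl start httpd
sudo systemctl enable httpd
--//--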
For all those who run into this problem: first of all, check the log with the command:
sudo cat /var/log/cloud-init-output.log
If you notice connection errors to the various repositories, the reason is that you don't have an internet connection yet. If, once inside your EC2 instance, you can successfully run the update and install commands, then the reason they fail in the User Data is that your EC2 instance takes a few seconds to get its internet connection, and it executes the commands before having one. To solve this problem, just add this command after #!/bin/bash:
#!/bin/bash
until ping -c1 8.8.8.8 &>/dev/null; do :; done
sudo yum update -y
...
This will prevent your EC2 instance from executing commands before an internet connection is established.
I have never run into this before, because I could always just run the dev server, open a new tab in the terminal, and curl from there. I can't do this now because I am running the Django development server from a Docker container, so if I open a new tab I will be in the local shell and not in the Docker container.
How can I leave the development server running and still be able to curl or run other commands?
When I run the development server, I'm left with this message:
Django version 1.10.3, using settings 'test.settings'
Starting development server at http://127.0.0.1:8000/
Quit the server with CONTROL-C.
and so I am unable to type any commands.
You can use & to run the server as a background job in the current shell:
$ python manage.py runserver &
[1] <pid>
$
You can use the fg command to regain direct control over the runserver process; then you can stop it as usual using Ctrl+C.
To send a foreground process to the background, pause it using Ctrl+Z and run the bg command. You can see the list of background jobs running in the current shell using the jobs command.
The difference from screen is that this runs the server in the current shell. If you exit the shell, the server will stop as well, whereas screen uses a separate process that continues after you exit the current shell.
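A quick illustration of that workflow (an example session; the job number and output will vary):
$ python manage.py runserver &      # start the server as a background job
$ curl http://127.0.0.1:8000/       # the shell is free for other commands
$ jobs                              # list background jobs in this shell
[1]+  Running    python manage.py runserver &
$ fg %1                             # back to the foreground; Ctrl+C to stop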
In a development environment you can also do the following.
Let the server run in one terminal window.
Open a new terminal window/tab and run
docker exec -it <Container ID/Name> /bin/bash
It will give you interactive access to your container, i.e. you can execute any command inside your container rather than in your local shell.
Type exit to leave the container shell and return to your local shell.
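If you only need to run a single command rather than an interactive shell, docker exec can also run it directly (assuming curl is installed in the image):
docker exec <Container ID/Name> curl -s http://127.0.0.1:8000/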
I cloned this repo (it's pretty much based on the Docker docs here) and ran docker-compose up. Docker builds the two containers, and I see the output from db_1 (psql looks to be completely ready), but nothing at all from web_1, no output whatsoever.
I go to my host IP on port 8000 and nothing is running there. I am using Docker Toolbox for Mac. It's pretty much the simplest possible example of using Docker. Any idea why I'm not seeing anything from my Django container?
Thanks in advance,
It might be that STDOUT of the web_1 container is mapped to display only WARN and ERROR levels. You say you're using Docker Toolbox for Mac? Have you tried to reach the website via the IP of the Docker Toolbox VM rather than the host IP? I'm not that familiar with Docker Toolbox, since there is a native Mac client (https://docs.docker.com/engine/installation/mac/). Maybe try to reach the Docker Toolbox IP, not the host IP. I would also recommend using the native Docker for Mac, since I had problems with the Toolbox but none with the "native" client.
Hope I could help.
After taking a better look at the documentation I was able to start your containers.
After the git clone:
cd sane-django-docker
docker-compose up -d
This is the output
Starting sanedjangodocker_db_1
Starting sanedjangodocker_web_1
[root@localhost sane-django-docker]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
cde9e93c1a70 sanedjangodocker_web "python3 manage.py ru" 19 seconds ago Up 1 seconds 0.0.0.0:8000->8000/tcp sanedjangodocker_web_1
73ad8cafe798 postgres:9.4 "/docker-entrypoint.s" 20 seconds ago Up 1 seconds 5432/tcp sanedjangodocker_db_1
When I just performed docker-compose up (running in the foreground), I saw this issue:
LOG: shutting down
LOG: database system is shut down
After taking a better look at the documentation I saw the problem:
Django will complain about the postgres database not existing, so we'll create one:
docker exec sanedjangodocker_db_1 createdb -Upostgres webapp
Now postgres is fine, but I had to restart the web app so it could find the db:
docker restart sanedjangodocker_web_1
Now I'm able to access it at IP:8000:
It worked!
Congratulations on your first Django-powered page.
I don't know how the Django app really works, but the setup is pretty strange.
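For reference, the complete working sequence from this answer, collected in one place:
cd sane-django-docker
docker-compose up -d                                            # start both containers detached
docker exec sanedjangodocker_db_1 createdb -Upostgres webapp    # create the missing database
docker restart sanedjangodocker_web_1                           # restart the web app so it finds the db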
My setup is the following:
- a Django server running in a Docker container, with port mapping 8090:8090
- Eclipse with PyDev
I want to be able to set breakpoints in PyDev (click on a line, step through the code).
I found several articles like:
http://www.pydev.org/manual_adv_remote_debugger.html
but it's still not working.
1) Should I update manage.py to import pydevd? Which lines do I add, and do I have to copy the pysrc directory of the PyDev plugin into the Docker container to be able to import the module?
2) Is port forwarding needed? Does the Python instance running inside Docker need access to the remote debug server on the host machine?
3) I found an article about PyCharm and remote debugging using SSH; is it possible to do something similar with PyDev?
4) How do I "link" my local directory and the Docker "directory"?
[EDIT] I found the solution:
Copy the eclipse/path_to\pydev\plugins\org.python.pydev\pysrc directory to a place your Docker container can access.
Edit pysrc/pydevd_file_utils.py and add a directory mapping between your host and the Docker container, like:
PATHS_FROM_ECLIPSE_TO_PYTHON = [(r'C:/django', r'/.../lib/django'),
                                (r'C:/workspace/myapp', r'/var/www/myapp')]
You can set several tuples if you have several paths containing Python code.
Edit manage.py and add the following:
sys.path.append('/my_path/to_pysrc_/under_docker/pysrc')
import pydevd
pydevd.settrace(host='172.17.42.1')  # IP of your host
In PyDev, go to Preferences > PyDev > Run/Debug > Port for remote debugger and set it to 5678.
In the Debug perspective, click "Start the Pydev server".
In your Docker container, run: python manage.py runserver 0.0.0.0:8090 --noreload
(replace 8090 with your HTTP port)
In PyDev you will see that the code breaks right after settrace!
Now you can add breakpoints and use the PyDev debug UI :) Enjoy!
I had a similar issue: a Django project in Docker, connected to from PyCharm 145.1504 and 162.1120 via the Docker interpreter. Running the server works OK, but debugging gets stuck after PyCharm runs
/usr/bin/python2.7 -u /root/.pycharm_helpers/pydev/pydevd.py --multiproc --qt-support --client '0.0.0.0' --port 38324 --file /opt/project/manage.py runserver 0.0.0.0:8000.
I tried to find out why for a few days, then connected PyCharm to Docker via an SSH connection, and everything works fine, both run and debug.
OK, from what you wrote I will assume you have a Django Docker container running on your local machine.
From inside your container (e.g. docker-compose exec <container name> bash to get into it):
pip install pydevd
In Eclipse, put a breakpoint like this:
import pydevd; pydevd.settrace('docker.for.mac.localhost')
If you're not using Docker for Mac, you have to do a bit of work to get the IP of your machine from inside the container; see, e.g., this.
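One common way to do that (my suggestion, not part of the original answer) is to read the container's default gateway, which on Docker's default bridge network is the host's side of the bridge:
# inside a Linux container: print the default gateway (e.g. 172.17.0.1)
ip route | awk '/^default/ { print $3 }'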
Go to the Debug perspective and start the PyDev debug server.
Start your application or test,
...and you should see your views for the stack, variables, etc. populate as the code stops at the breakpoint.
In Python 3.7 there is now a built-in breakpoint() function, which you can configure to point to your favorite debugger using an environment variable (the default is pdb):
breakpoint()
It also takes arguments, so you can do:
breakpoint(host='docker.for.mac.localhost')
I found that a bit annoying to type, so I ended up putting a module inside an app that looks like this:
# my_app/pydevd.py
import pydevd

def set_trace():
    pydevd.settrace('docker.for.mac.localhost')
I then set the environment variable for the built-in breakpoint (say, in your docker-compose.yml):
PYTHONBREAKPOINT=my_app.pydevd.set_trace
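As a sketch, the relevant docker-compose.yml fragment might look like this (the service name web is hypothetical):
services:
  web:
    environment:
      - PYTHONBREAKPOINT=my_app.pydevd.set_trace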
I'm working through this tutorial on setting up ember-cli in a Docker container:
http://www.rkblog.rk.edu.pl/w/p/setting-ember-cli-development-environment-ember-21/
Here are my steps:
Created docker-compose.yml in an empty folder on the host machine
Launched Docker Quickstart to get a terminal
Changed to the folder with the .yml file
Ran the two docker-compose commands below from the terminal (I added -d because without it you get a message that interactive mode is not supported)
Ran docker ps -a to verify that the container was running
Ran docker inspect CONTAINER_ID to find the IP address of the running container
Found the IP address at an odd location (172.17.0.2)
Attempted to access port 4200 at that IP from the browser on the Windows host, and also from the Docker CLI via curl, but without success
Ran docker ps -a again and found that both containers that had been instantiated had exited
Now if I try to start the container again, it just exits immediately:
docker-compose run -d --rm ember init
docker-compose run -d --rm ember server
What am I missing to get up and running? Do I need to open ports on the default VM running in VirtualBox? How do I diagnose why the container keeps exiting?
First, I would suggest using docker-compose up; that is most likely what you want.
To see the logs of a detached container, you can run docker logs <container name>. If there are any errors, you'll see them there.
A likely cause of the container exiting is that the process goes into the background. Docker requires a process to stay in the foreground, but many serve commands daemonize by default. To keep the process in the foreground you can sometimes add a flag like --foreground or --no-daemon, but I'm not sure whether one exists for ember.
If that flag doesn't exist, it's likely that ember server is just checking whether stdin/stdout are connected to a TTY. By default they are not. You can add these lines to your docker-compose.yml to fix it:
stdin_open: true
tty: true
OK, I finally resolved it. The issue with the module resolution may have been long-file-name resolution on Windows, because after I moved the source folder to the root of the host I was able to get ember serve running under Windows.
Then, from the terminal window, I ran the commands to init and launch the ember server:
docker-compose run -d --rm ember init
docker-compose run -d --rm ember server
Then did:
docker-compose up -d
which launched the containers successfully, and then I was able to access the Ember page served at the IP:port specified earlier in the comments:
http://192.168.99.100:4200/