Ajax not working with Django after some time

I have a Django 2.0 application hosted on a VPS, and I start it by running this command in an SSH terminal:
/root/.local/bin/pipenv run
/home/user/.local/share/virtualenvs/example.com-IuTkL8w_/bin/gunicorn
myapp.wsgi:application --timeout 300 --workers 1
--log-level=DEBUG &
For the first few hours after starting the application server with the command above, Ajax requests work fine, but after that they consistently fail.
There is a lot of processing behind that Ajax request, but I could not figure out where it breaks, because the logs are only displayed in the console while the SSH session that started the command is still alive; there is no way to check the console log after terminating SSH and logging in again.
Killing all the running processes and restarting the server with the same command makes everything work fine again for a few hours.
1. What could be the reason for this ghost issue?
2. Is there some way to view the console log at any time after logging in over SSH again?
3. If not, how can I make the server restart automatically and periodically (since it works again after a restart)?
Edit 2
I can see in the browser's network console that the request is failing with
[Errno 5] Input/output error
on a print() statement.
I have a bunch of print() statements to see output in the console, like:
print('----check_url')
print(check_url)
print('----product_id')
print(product_id)
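For reference, one way to keep seeing those values after the SSH session is gone would be to write them to a log file with Python's logging module instead of print(); a minimal sketch (the log path is just an example) could look like:
import logging

logging.basicConfig(filename='/var/log/myapp/debug.log', level=logging.DEBUG)
logger = logging.getLogger(__name__)

# same values as the print() calls above, but persisted to a file
logger.debug('----check_url: %s', check_url)
logger.debug('----product_id: %s', product_id)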
Edit 3
I have the following lines in the code:
with open(joined_path_with_file, 'wb') as f:
f.write(r.content)
Is the issue with this?
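For reference, one common way to keep both the process and its output alive after logging out is to detach it from the terminal with nohup and send the logs to files; a sketch with the same gunicorn options (log paths are just examples) would be:
nohup /home/user/.local/share/virtualenvs/example.com-IuTkL8w_/bin/gunicorn myapp.wsgi:application \
    --timeout 300 --workers 1 --log-level=DEBUG \
    --access-logfile /var/log/myapp/access.log \
    --error-logfile /var/log/myapp/error.log \
    > /var/log/myapp/console.log 2>&1 &
The print() output then lands in console.log, which can be tailed from any later SSH session.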

Related

How to keep django rest API server running automatically on AWS?

So I uploaded my Django REST framework API to an AWS EC2 instance. However, whenever I want to use it I have to manually open PuTTY, connect to my EC2 instance, and turn the API on by running python manage.py runserver 0.0.0.0:8000.
When I turn off my PC, PuTTY closes and the API can no longer be accessed at that IP address.
How do I keep my API running permanently? Does switching to HTTPS help? What else can be done?
You can keep it alive in the following ways.
Connect to your EC2 instance using SSH, then deploy your Django backend on that instance and run it on any port.
Once it is running on the desired port, you can close the terminal window; just don't press Ctrl+C, so the Django server keeps running. Simply close the terminal and it will keep running.
You can also run the Django server inside tmux (a terminal multiplexer, i.e. a terminal inside your terminal). Here is a tutorial on tmux:
https://linuxize.com/post/getting-started-with-tmux/
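For example, a minimal tmux workflow for this would be (the session name is arbitrary):
tmux new -s django                        # start a named session
python manage.py runserver 0.0.0.0:8000   # run the server inside it
# press Ctrl+b then d to detach; the server keeps running
tmux attach -t django                     # reattach later to see the output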
Another approach is to deploy Django using a Docker container.
I hope this helps you get past your problem.
Thanks.
OK, I finally solved this. When you close PuTTY or an SSH client, the session goes offline. However, if you run the command as a daemon, it continues in the background even when you close your client. The command is
$ nohup python ./manage.py runserver 0.0.0.0:8000 &
Of course you can use tmux or Docker, as suggested by madi, but I think running this one command is much simpler.
You can use pm2.
First install pm2, then create a server.json file in the root directory of your Django app to run it:
{
  "apps": [{
    "name": "appname",
    "script": "manage.py",
    "args": ["runserver", "0.0.0.0:8888"],
    "exec_mode": "fork",
    "instances": "1",
    "wait_ready": true,
    "autorestart": false,
    "max_restarts": 5,
    "interpreter": "python3"
  }]
}
Then you can run this app with pm2 start server.json.
Your app will run on port 8888.
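If the instance itself reboots, pm2 can also bring the app back up; the usual commands for that are:
pm2 start server.json   # start the app as above
pm2 save                # remember the current process list
pm2 startup             # print the command that registers pm2 as a boot service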

How to easily debug Redis and Django/Gunicorn when developing using Docker?

I'm not referring to more sophisticated debugging techniques, but simply to how to get access to the same kind of error messages that are normally directed to terminal tabs.
Basically I'm adopting Docker in a Django project that also uses Redis.
In the old way of working I opened a Linux terminal tab for Gunicorn like this: gunicorn --reload --bind 0.0.0.0:8001 myapp.wsgi:application
That tab kept running Gunicorn, and any Python error was shown in it so I could see the problem and fix it.
I could also open a second tab for the Celery worker: celery -A myapp worker --pool=solo -l info
The same thing happened: the tab was occupied by Celery, and any Python error in a task was shown there so I could see the problem and correct the code.
My question is: using Docker, is there a way to make each container send these same errors, which previously went to the screen, to log files so that I can debug my code when a Python error occurs?
What is the correct way to handle simple debugging during development using Docker containers?
After looking into this further in the Docker documentation, I found a page that solves the problem: View logs for a container or service.
Basically the command "docker logs CONTAINER_ID" shows on the screen exactly what we would see in the terminal running the application.
Works perfectly to see Django, Redis and Angular logs.
Just type:
docker logs CONTAINER_ID
Replace CONTAINER_ID with the real ID of the container whose logs you want to see.
To find the ID, type:
docker ps
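To keep watching the output while you work, the same command can also follow the log and limit how far back it goes; for example (container and service names here are placeholders):
docker ps                              # find the container ID or name
docker logs -f --tail 100 django_web   # stream the last 100 lines and keep following
docker-compose logs -f web             # with docker-compose, use the service name instead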

Allow a bash script to run at boot in AWS Centos 7 instance

I need to create AWS CentOS 7 instance images for a customer, and each instance needs to automatically send its IP and instance ID to our AWS server every time it boots. For example, this is the very basic test version of the script I need to run:
#!/bin/bash
serverIP=""
curl "https://${serverIP}/myphp.php?id=sentid&ip=sentip"
If the script is run directly, it works fine and the request is received by the server and processed there. But I can't get it to run at boot. I cannot put the script directly in the "User Data" due to security concerns, as the customer can then see it easily; it needs to be a script in the filesystem of the image.
I've tried several things that work fine on a physical Linux server, but not on AWS. I know profile.d runs every time someone logs in, but over-sending like that is fine.
/etc/profile.d/myscript.sh
This stops the AWS instance from booting. Even just
#!/bin/bash/
echo "hello world"
prevents it from booting. The instance starts, but when you go to SSH into it you get 'Network Error: connection timed out', which is the standard error if you put in a wrong IP, or upset it by leaving a service like httpd enabled.
However, a blank bash script with just #!/bin/bash will allow the instance to start. Removing the script via User Data usually makes it boot; sometimes it just dies.
The first thing I tried was crontab. I did:
crontab -e
@reboot /var/ook/myscript.sh
systemctl enable crond.service
But the instance wouldn't start. So I put "systemctl disable crond.service" in the User Data and one instance booted, but another still stayed dead. myscript.sh was just another echo "doob" >> file, which worked fine when run directly.
I tried putting in /etc/systemd/system/my-startup.service:
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/var/ook/writedood.sh
[Install]
WantedBy=multi-user.target
then:
systemctl enable my-startup.service
But this did nothing. My script "writedood.sh" was just echo "doob" >> ./file.txt, ensuring file.txt was chmod 777. At least it didn't prevent the instance from starting.
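For reference, a fuller unit of this kind usually also has a [Unit] section so the script only runs once the network is up, roughly like this (same path as above):
[Unit]
Description=Send instance details to our server at boot
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/var/ook/writedood.sh

[Install]
WantedBy=multi-user.target
followed by systemctl daemon-reload and systemctl enable my-startup.service.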
To give context, an instance won't start if httpd is left enabled on shutdown, but will if you disable it in User Data.
I wanted to have a go at putting something in init.d but I'm not sure how to simply tell it to run a script once in the background, and given the plethora of success I've had so far with the instance not restarting, I'm not holding out much hope that that would work.
Thanks in advance!
EDIT::: I realised that sometimes AWS EC2 Instances Console is causing the problem where I can't ssh in after stopping and starting. It blanks the public ipv4 address when I click stop, but when I start, it puts the old address up and hangs. If I refresh the page, or uncheck/check the instance; the ip changes to the new address. This has caused much consternation.
Crontab worked once I placed the script and the output file in different folders. It's very finicky: any error, such as not being able to write to the output file, and the instance won't start. I put startscript.sh in /usr/local/src and output.out in /tmp/ to ensure there were no permission problems, and now the instance starts and runs the script on boot, as in the entry below.
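For clarity, the working setup boils down to this single line in root's crontab (with the paths described above; the script itself writes to /tmp/output.out):
@reboot /usr/local/src/startscript.sh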

How to access UI in Airflow 1.10?

To start with, I am trying to upgrade from version 1.9 to 1.10, so my setup contains two VMs running different versions of Airflow with different port forwarding.
I can access the UI on the VM running 1.9, but I am not able to access the UI on 1.10.
To debug, I want to confirm that the Airflow webserver is running. If I execute
sudo systemctl start airflow-webserver
it throws no error, but when I look at netstat I don't see any process listening on port 8080 (the default).
Also, I have not created any user, as I do not need RBAC authentication. Can that be a problem?
As requested by @kaxil, below is the output of ps aux | grep airflow
Can someone provide suggestions on how to fix this problem? If you need any further information I can provide it; I am not sure what is relevant here.
Output of journalctl -u airflow-webserver.service -b
The error message shows that there is an issue with the airflow.cfg file, i.e. there might be a character in your airflow.cfg that is causing the problem. Recheck your config file; if you don't find an issue, post your config file in your question and we will try to figure it out.
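To surface the exact character or section that is breaking the parse, it can help to start the webserver in the foreground instead of through systemd, for example:
airflow webserver -p 8080             # config errors are printed straight to the terminal
sudo netstat -tlnp | grep 8080        # once it starts cleanly, confirm something is listening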

remote long running command

I have a simple bash script that only does "sleep 3600" on a remote host (Amazon EC2), and I am using Fabric to call it via fabric.operations.run (I did NOT set any env.timeout or env.command_timeout).
When the remote bash script sleeps for 3600 seconds, Fabric is NOT able to return after the script finishes running. I printed the stack trace and it kept waiting on channel.exit_status_ready() (https://github.com/fabric/fabric/blob/master/fabric/operations.py LINE: 794) even though the script had already returned.
This ONLY happens for long-running processes. I tried making the bash script sleep for 120 seconds and it worked fine.
I double-checked the open connections using netstat, and the SSH session opened by Fabric was still alive.
Help needed :) Any idea why this happens?
Figured it out: I just needed to use env.keepalive = 1.
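For reference, a minimal Fabric 1.x fabfile for this would be (the remote command is the same sleep as above):
from fabric.api import env, run

env.keepalive = 1  # send an SSH keepalive every second so the idle channel is not dropped

def long_task():
    # this now returns normally even after sleeping for an hour
    run('sleep 3600')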