Fatal database error on Heroku dev - django

I have a Django app (a music streaming service) running on a free Heroku account, with a Postgres database and one web dyno. Yesterday I wrote a script that uploads mp3 files to Heroku, extracts their metadata, then uploads the files to Amazon S3. While going through this process, I get one of these errors when I try to refresh:
could not fork new process for connection: Cannot allocate memory
or
FATAL: remaining connection slots are reserved for non-replication superuser connections
When I go to postgres.heroku.com to see the statistics, here is what I have:
Plan: Dev
Status: available
Data Size: 9.0 MB
Tables: 0
PG Version: 9.1.6
Created: December 05, 2012 17:33
We have no users yet; in fact, I'm the only one using it at the moment. I'm sure we'll have to upgrade soon, but I'm not sure what to start with. Should we start with the database, get more dynos, or what? I've even started thinking about switching to MongoDB, since the frequency of reads and writes to the database will be relatively high (incrementing the number of stars and listens per song, the number of minutes listened by each user, etc.).
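One detail worth checking before upgrading anything: the dev plan's connection cap was small (20 concurrent connections), and a long-running script that never closes its ORM connection can eat those slots on its own. A minimal sketch of the kind of upload loop involved, with an explicit close after each file; every helper and model name here is hypothetical, not the actual code:

    # Sketch only: close the ORM connection after each file so the
    # script does not pin one of the dev plan's limited connection slots.
    # extract_metadata, upload_to_s3, Track and mp3_paths are all
    # hypothetical names for the steps the question describes.
    import os
    import django

    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")
    django.setup()

    from django.db import connection
    from music.models import Track  # hypothetical app/model

    mp3_paths = ["song1.mp3", "song2.mp3"]  # placeholder file list

    for path in mp3_paths:
        meta = extract_metadata(path)   # hypothetical helper
        url = upload_to_s3(path)        # hypothetical helper
        Track.objects.create(title=meta["title"], s3_url=url)
        connection.close()  # free the slot; Django reconnects lazily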
Any suggestions?

Related

Django extremely slow page loads when using remote database

I have a working Django application that runs locally using an sqlite3 database without problems. However, when I change the Django database settings to use my external AWS RDS database, all my pages start taking upwards of 40 seconds to load. I have checked my AWS metrics and the instance is not even close to being fully utilized, and I get the same problem even when I request a view with no database read/write operations. Activity Monitor shows my local CPU spiking with each request, with a process named 'WindowsServer' using most of the CPU.
I'm aware that more latency is expected when using a remote database, but I don't think it should result in 40-second page loads. What other problems could be causing this behaviour?
[Screenshots: AWS database monitoring; local machine Activity Monitor]
Your computer is connecting to the server in Amazon over the internet; that's where the latency comes from. Production servers should be in the same place as the DB servers (or have a very, very good connection, so the latency is as low as possible).
--edit--
We need more details. What is your ISP? What are your connection properties: uplink, downlink? What are your pings to the AWS servers?
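A quick way to see whether per-query round trips explain the 40 seconds: with DEBUG=True, Django records every query in connection.queries, so you can count them and total their time. A rough sketch using the standard django.db helpers (the page-loading call is a placeholder):

    # Rough sketch: count queries and total DB time for one code path.
    # Requires DEBUG=True, since Django only records queries then.
    from django.db import connection, reset_queries

    reset_queries()
    render_the_slow_page()  # placeholder for whatever loads the page
    total = sum(float(q["time"]) for q in connection.queries)
    print(len(connection.queries), "queries,", total, "s spent in the DB")
    # e.g. 200 queries x 0.2 s round trip to a remote RDS = 40 s per page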

Django Channels Realtime Chat

Task
Build a realtime chat, plus sending/receiving notifications not related to the chat. Two real-time functions in total.
Tools
Backend - Django
Frontend - Android mobile app
Problem
On localhost the code works and messages reach the client.
Deployed on Heroku on the free plan, it turned out there is a limit of 20 database connections (which isn't enough for even one user for 10 minutes).
Under ASGI a new connection is created after each request; under WSGI everything is fine. Below the limit everything works, but once all 20 connections are used up, messages only get through every second or third time.
Attempts to solve
1. I added close_old_connections in the code; it did not manage to kill the connections, i.e. each message still creates a new connection. I googled for several days and did not find a solution to this.
2. I tried both Daphne and Uvicorn; the effect is the same.
Question
Maybe django-channels is not a suitable option for this task?
Perhaps it's worth abandoning Heroku, deploying to other hosting with Nginx in front, so that all these restrictions disappear?
The official documentation says that django-channels should support up to 1000 connections, but then again, if a new connection is created with each message, nothing will work.
If not through django-channels, then through what?
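For what it's worth, Channels ships a helper for exactly this leak: channels.db.database_sync_to_async runs ORM work in a worker thread and cleans up old database connections around the call, instead of leaving one open per message. A minimal consumer sketch (the Message model and its field are hypothetical stand-ins):

    # Minimal sketch, assuming channels is installed and routed via ASGI.
    import json

    from channels.db import database_sync_to_async
    from channels.generic.websocket import AsyncWebsocketConsumer


    class ChatConsumer(AsyncWebsocketConsumer):
        async def connect(self):
            await self.accept()

        async def receive(self, text_data=None, bytes_data=None):
            payload = json.loads(text_data)
            # ORM access goes through database_sync_to_async, which
            # closes stale connections around the threaded call.
            await self.save_message(payload["text"])
            await self.send(text_data=json.dumps({"status": "ok"}))

        @database_sync_to_async
        def save_message(self, text):
            from chat.models import Message  # hypothetical app/model
            Message.objects.create(text=text)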

How to avoid a Hyperledger Composer REST server restart when upgrading the composer network (with changes in model files)?

We have a working setup of 3 peer nodes and a multi-user REST server running on one of the peers. There are multiple user cards created and imported in the REST server (using a web-based client), which is working fine: I can trigger transactions and query the blockchain with it.
However, if I need to upgrade my network and there is a change in a model file (i.e. any participant/asset/transaction parameters change), I need to restart the REST server so that the change is visible to the web-based client application. So my questions are:
1. Is there a way to upgrade the REST interfaces without restarting the server?
2. If the REST server crashes or is restarted, is there a way to use the old cards that were created before the shutdown?
When the REST server starts you can see that it "discovers" the Business Network and then generates the endpoints. The discovery is not dynamic, so when you change the model or another element of a BNA you need to restart the REST server to re-discover the updated network. (In a live scenario I would expect changes to the model to be infrequent.)
Are you using multi-user mode for the REST server? Assuming you are, configuring the REST server with a persistent data source, as described in the documentation or this tutorial, should solve the problem of re-importing the cards. You could also "back up" the cards after they have been used the first time by exporting them.

504 gateway timeout for any request to Nginx with lots of free resources

We have been maintaining an internal project which has both web and mobile application platforms. The backend is developed in Django 1.9 (Python 3.4) and deployed on AWS.
The server stack consists of Nginx, Gunicorn, Django and PostgreSQL. We use a Redis-based cache server to serve resource-intensive queries. Our AWS resources include:
t1.medium EC2 (2 cores, 4 GB RAM)
PostgreSQL RDS with one additional read-replica.
Right now Gunicorn is set to create 5 workers (following the 2*n+1 rule). Load-wise, there are maybe 20-30 mobile users making requests every minute and 5-10 users checking the web panel every hour. So I would say: not very much load.
Now, this setup works fine 80% of the time. But when something goes wrong (for example, we detect a bug in the live system and have to take the server down for maintenance for a few hours; meanwhile the mobile apps build up a queue of pending requests, so when we bring the backend back up a lot of clients hit the system at the same time), the server stops behaving normally and starts responding with 504 gateway timeout errors.
Surprisingly, every time this has happened we found the server resources (CPU, memory) 70-80% free and the database connection pools mostly idle.
Any idea where the problem is? How would you debug it? If you have faced a similar issue, please share the fix.
Thank you,
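One plausible reading, offered as a sketch rather than a diagnosis: with 5 synchronous workers, a post-maintenance burst of queued mobile requests can occupy every worker at once; later requests then sit in the backlog until Nginx's proxy timeout fires, which looks exactly like 504s with an idle CPU and an idle database. Threaded workers let each process hold several in-flight requests. A minimal Gunicorn config sketch (values are illustrative, not tuned recommendations):

    # gunicorn_conf.py - minimal sketch; every value is illustrative.
    # Run with: gunicorn -c gunicorn_conf.py myproject.wsgi
    workers = 5               # the 2*n+1 figure from the question
    worker_class = "gthread"  # threaded workers: I/O-bound requests overlap
    threads = 4               # each worker now holds up to 4 requests
    timeout = 30              # recycle a worker stuck longer than this
    backlog = 2048            # pending-connection queue size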

MongoDB data lost in a Digital Ocean droplet

I've built a Django web app on a Digital Ocean droplet. The app was working fine. Today, when I opened my web app, no data appeared. I had a look at the droplet (server) and found that all the data in my mongodb was lost. Specifically, when I type show dbs in the mongodb shell, it says:
DB_HAS_BEEN_DROPPED 0.000GB
Then I rebooted the server and it worked again. The collections came back, but only old data is available; the new data I collected in recent days is lost.
I faced a similar problem before. That time, the process running mongodb had even been shut down.
I suspect that my droplet was hacked by someone. Is that correct, or is it a problem with mongodb? I'm also curious about Digital Ocean's security policy, because when I set up a server a month ago they sent me a message saying the server had strange outgoing traffic, and they locked my server just one day after setup.
Thanks.
Set up MongoDB to listen on address 127.0.0.1 (or localhost) only, so it's not open to the world.
See here for more details: https://docs.mongodb.com/v3.2/administration/configuration/#security-considerations
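To confirm whether the instance is currently exposed, try connecting from a machine outside the droplet: if an unauthenticated connection succeeds, anyone on the internet can read or drop your databases, which matches the DB_HAS_BEEN_DROPPED symptom. A small check with pymongo (DROPLET_IP is a placeholder):

    # Small check - run from OUTSIDE the droplet: is mongod reachable?
    # DROPLET_IP is a placeholder; requires: pip install pymongo
    from pymongo import MongoClient
    from pymongo.errors import ServerSelectionTimeoutError

    client = MongoClient("DROPLET_IP", 27017, serverSelectionTimeoutMS=3000)
    try:
        client.admin.command("ping")
        print("mongod answers from the internet - it is exposed")
    except ServerSelectionTimeoutError:
        print("no answer from outside - the bind/firewall is working")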