I'm trying to install Superset on my local Ubuntu 20.04 machine using docker-compose. When I run sudo docker-compose -f docker-compose-non-dev.yml up, I get several errors; the process keeps producing errors and never finishes, so I aborted it. Can you please tell me what the problem is?
The errors I get during Init Step 1/4 [Starting] -- Applying DB migrations are:
sqlalchemy.exc.ProgrammingError: (psycopg2.errors.UndefinedTable) relation "logs" does not exist
sqlalchemy.exc.ProgrammingError: (psycopg2.errors.UndefinedTable) relation "ab_permission_view_role" does not exist
sqlalchemy.exc.ProgrammingError: (psycopg2.errors.UndefinedTable) relation "report_schedule" does not exist
I had the same issue on Mac OS. Similar issues have been reported on the GitHub issues page as well, but not everyone could reproduce them.
It is possible that something went wrong during the first run.
Try running docker-compose down -v and then docker-compose up again.
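Spelled out, that sequence looks something like this (a sketch, using the non-dev compose file from the question; note that -v also removes the named volumes, so any existing metadata DB contents are discarded):

# tear down containers, networks and volumes from the previous run
sudo docker-compose -f docker-compose-non-dev.yml down -v
# start again from a clean slate
sudo docker-compose -f docker-compose-non-dev.yml up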
If the above fails, try upgrading your docker installation. Installing a new version solved my problem.
I had the same issue (Mac OS Monterey): I already had a Postgres instance running in Docker for one of my apps, so when Superset started it was looking at that Postgres instance, which obviously didn't have the appropriate databases/tables/views/etc.
So just stopping that other instance and restarting the Superset containers fixed the errors and properly started Superset. #embarrassed #oops
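In practice that amounted to something like the following (a sketch; my-other-postgres is a placeholder for whatever name docker ps shows for the other Postgres container):

docker ps                        # find the other Postgres container
docker stop my-other-postgres    # stop it
sudo docker-compose -f docker-compose-non-dev.yml up    # restart Superset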
I experienced the same issue but these errors were at the end of a long string of cascading errors. I had this error consistently across all runs.
Looking at the first error, it seems like the initialisation script is not waiting for PostgreSQL to be ready and starts transacting right away. If the first transactions fail, many others fail subsequently. In my case the database needed a few more seconds to be ready, so I just added a sleep 60 at the beginning of docker/docker-bootstrap.sh to give PostgreSQL time to start before the other services start working.
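For reference, the workaround is nothing more than a delay at the top of the bootstrap script. A slightly more robust variant polls the database instead of sleeping for a fixed time (a sketch; it assumes the pg_isready client is present in the image and that the database service is named db):

# at the top of docker/docker-bootstrap.sh
# crude fix: give PostgreSQL a fixed head start
sleep 60

# alternative: wait until the server actually accepts connections
until pg_isready -h db -p 5432; do
  sleep 2
done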
I deleted the previously-created docker volumes and ran docker-compose -f docker-compose-non-dev.yml up again and now all works fine.
Related
I have a web app in 3 containers running on a Linux server (Ubuntu 20.04): a Django web app, an nginx web server and a Postgres DB. When I run 'docker-compose ps' it does not show any container or error, only the headings, as if there were no containers, not even a crashed one.
I am sure that it is the right folder as there is no other docker-compose.yml on this server.
It seems almost as if the app is not running there, except that it is accessible via the browser and working well.
I tried all kinds of commands for showing containers or images, using both docker and docker-compose, with no result.
I tried a Docker service restart - the app went offline for a moment and then came back online (I have 'restart: always' set in the compose file). I also restarted the whole server, with the same result.
Even the script that makes the DB dump does not see the containers and has started to fail.
When I try to bring up the project with docker-compose up, the webapp and db containers start but the webserver does not, because its port is already taken.
Has anyone experienced this? I have worked with docker-compose for a while but this has never happened to me before. I don't know what to do; I need to update the code of this application and I don't want to lose the data in the DB (I am also not able to make a dump or ssh into the container).
This app was working for years with the same configuration on another server running Ubuntu 18.04, so maybe it is a server-related problem.
Thanks.
It sounds like there is some fundamental problem going on. Have you tried simply running docker ps to see if your containers are running, or if anything is running at all?
If the containers are listed in the docker ps output, make sure you have the names correct in your docker-compose.yml
If you don't see your containers running with docker ps then maybe they crashed immediately after start (and are therefore no longer running).
I would have expected docker-compose ps to have shown something - even if your containers are crashing.
Please provide more details of your output from docker ps and/or docker-compose ps, and maybe the contents of your docker-compose.yml if these things don't help. You said "it doesn't show any container" when you run docker-compose ps - does it show anything at all (errors, blank lines, etc.)?
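As a concrete starting point, the output of something like the following would be useful (run from the directory that contains your docker-compose.yml):

docker ps            # containers that are currently running
docker ps -a         # also shows stopped/crashed containers
docker-compose ps    # containers belonging to the compose project in this directory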
I get ERR_CONNECTION_REFUSED when trying to access Django running on WSL2 (http://localhost:8000) from Windows, but when I use curl http://localhost:8000 from bash in the Windows terminal it works fine. I have tried adding a new firewall inbound rule for port 8000, but it's still not working. Is there anything else I need to take care of?
Thanks a lot
Seems like a forwarding problem. WSL2's interface is NAT'd, whereas WSL1 was bridged by default. WSL seems to do some "auto-forwarding" of ports, but only on localhost. However, sometimes this auto-forwarding mechanism seems to "break down". The main culprit seems to be hibernation or Windows Fast Startup (which are both closely-related features).
Does the problem resolve if you do a wsl --shutdown and then restart the WSL2 session? If so, try disabling Windows' Fast Startup. I already had Fast Startup disabled due to a different (non-WSL) issue on my system, so that could be related to why I am not able to reproduce.
Along the same lines, do you Hibernate instead of powering off? In that case, a wsl --shutdown might resolve as well.
For future readers, note that the above two points seem to resolve the issue for most people who have upvoted and responded in the comments. However, if that does not work for you, the following were my original "additional suggestions":
For some additional ideas, see this github issue. There are some suggestions on services that might be needed. (Side question - Are you running Windows Home or Professional?)
Is there any chance that your Windows hosts file (e.g. c:\windows\system32\drivers\etc\hosts) points localhost to an IP other than 127.0.0.1? If I attempt to access via my local Windows IP address, rather than 127.0.0.1 or localhost, I get an ERR_CONNECTION_REFUSED as well.
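A quick way to check from PowerShell (the path is the default one mentioned above; in a stock hosts file the localhost entries are commented out and localhost resolves to 127.0.0.1 anyway):

type C:\Windows\System32\drivers\etc\hosts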
Since you were looking at the firewall rules, maybe look at a forwarding rule instead of just an inbound allow?
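For example, an explicit forwarding rule from Windows to the WSL2 address can be added with netsh (a sketch; run it in an elevated PowerShell, and replace the connectaddress below - it is a placeholder for your distro's IP, which wsl hostname -I will print):

netsh interface portproxy add v4tov4 listenaddress=0.0.0.0 listenport=8000 connectaddress=172.20.0.2 connectport=8000
netsh interface portproxy show v4tov4      # list active forwarding rules
netsh interface portproxy delete v4tov4 listenaddress=0.0.0.0 listenport=8000    # undo it later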
If all else fails, try exporting/backing up the WSL2 session (see wsl --export), then importing it as a new WSL1 session to see if it works there.
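Roughly like this (a sketch; the distro name and paths are placeholders - check the real name with wsl -l -v):

wsl --export Ubuntu-20.04 C:\backup\ubuntu.tar
wsl --import Ubuntu-20.04-wsl1 C:\wsl\ubuntu-wsl1 C:\backup\ubuntu.tar --version 1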
On my WSL2/Ubuntu 20.04 system, I attempted to reproduce (but haven't been able to yet) with the following steps:
mkdir -p ~/src/dj-test                 # create a scratch project directory
cd ~/src/dj-test
python3 -m venv dj                     # virtual environment for the test
source dj/bin/activate
pip install Django
django-admin startproject config .     # minimal Django project in the current directory
python manage.py runserver             # dev server, listens on 127.0.0.1:8000 by default
(although I used activate.fish since I'm running the fish shell)
From the Vivaldi web browser in Windows, I accessed localhost:8000, which returned "The install worked successfully! Congratulations! ..."
curl under PowerShell Core worked as well.
My Heroku Toolbelt is stuck updating.
When I run heroku in a console it says
Heroku Toolbelt is currently updating.
I have tried uninstalling and re-installing, but I still have the same issue.
I also tried removing it with Revo Uninstaller.
Any ideas on how to fix?
Instead of completely uninstalling and reinstalling, I deleted a file called "updating" in "C:\Users\Profile-name\.heroku" and it started responding to commands again.
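In command form (PowerShell or cmd; substitute your own profile name):

del "C:\Users\Profile-name\.heroku\updating"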
I had the same issue with the older, buggy Heroku Toolbelt version, except my heroku commands were not running locally but from Semaphore, as a sequence of deploy steps.
Just running an update command first seems to solve the issue; hopefully this helps with your local toolbelt as well:
heroku update
The 5-minute update check by Heroku can be bypassed by removing the background_update! line from updater.rb, or by updating the toolbelt to version 3.15.2. That version only locks when there really is an update, instead of every 5 minutes.
Uninstall Toolbelt.
Manually delete:
C:\Program Files (x86)\Heroku
C:\Users\Profile-name\.heroku
Re-install toolbelt.
As of version 0.3.15 of the Heroku Toolbelt, heroku commands will fail if an autoupdate is already in progress. The toolbelt checks for an update every 5 minutes, so back-to-back heroku commands will fail unless another heroku command has already been run within the last 5 minutes. This behavior was introduced by this commit:
https://github.com/heroku/heroku/commit/023c84d15cde5958631b240eeaadec01a3b49031
I noticed this because it breaks heroku_san, which typically runs multiple heroku commands back-to-back. Unfortunately, I don't see a workaround. It would help if the toolbelt could provide some sort of option to disable autoupdate or to increase the time period between checks.
I have a weird problem with Django-Kronos.
I've got it running successfully on my local machine and on our development server. However, on the production server, I can't get kronos to acknowledge my cron.py file. When I run installtasks, it runs but says "0 tasks installed". I've also tried running the tasks manually and kronos tells me the task doesn't exist.
We use git to push everything through to the server, so all the files and the structures are identical between the three locations. I've also checked and the cron.py file exists and has exactly the same content as the working servers.
The only differences between the servers are that the production server is running Postgres (SQLite on the dev server) and is on Ubuntu 12.10, whereas the dev server is on 12.01.
Kronos is functioning properly, but it's not picking up our cron.py file for some reason....
Anyone got any ideas?!
Well, unfortunately, our solution was to scrap Django-Kronos altogether and create a custom management command which we're running from the crontab.
This happens when one of the imports you are trying to make fails; your production system might be missing a Python package that is imported in your cron.py.
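A quick way to surface the failing import is to try it by hand in a Django shell on the production box (a sketch; yourapp is a placeholder for the app that contains cron.py):

python manage.py shell
>>> import yourapp.cron    # any ImportError raised here points at the missing package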
I'm trying to follow the Heroku tutorial on deploying Django applications:
Getting Started with Django on Heroku
I'm able to run most of it without problems, but when it comes to syncing the PostgreSQL database I get the following message:
psycopg2.OperationalError: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
I've tried promoting databases,
setting the HOST to /tmp,
modifying postgresql.conf,
and many other things I found while searching around, all without success.
I'm working on a MacBook Pro running Mac OS 10.7 (Lion), and I've read in some places that this OS was giving developers headaches when it comes to Postgres. Did anyone have this problem on OS 10.7 and get it fixed after updating to 10.8? I'm considering updating in case it solves the problem.
Thanks in advance.
EDIT:
The command I'm trying to run is: heroku run python manage.py syncdb
I made use of the PostgresApp to help with running Postgres on my Mac and then in a terminal ran:
psql -h localhost
I forgot to delete the DATABASES definition that was already defined in the settings file. It's working now.