We are using the Superset docker-compose setup as per the installation guide. However, once Docker crashes or the machine is restarted, we need to re-configure the dashboards. I am not sure how to configure it with a persistent volume so that it can continue working after a restart.
By default the Superset docker-compose stack has a PostgreSQL db service whose data is mounted as a volume, and that database holds all of Superset's data. Instead of using the default db service, use an external database instance; that way you will not lose your data even if the docker-compose stack is restarted.
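As a rough sketch, the compose stack can be pointed at the external database through its environment file; the exact file and variable names depend on your Superset version (in recent versions they live in docker/.env), so treat the values below as placeholders:
# placeholders for an external Postgres metadata database
DATABASE_HOST=my-external-postgres.example.com
DATABASE_PORT=5432
DATABASE_DB=superset
DATABASE_USER=superset
DATABASE_PASSWORD=changeme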
I am new to deploying to DigitalOcean. I recently created a Droplet, into which I will put my Django API to serve my project. I saw that the Droplet I rent has 25 GB of storage, which, at my project's current size, could be a good fit for the database. I run my entire code with Docker Compose, which handles Django and PostgreSQL. I was wondering whether it is a good idea to store all the data in the PostgreSQL instance created by the Docker image and kept on the Droplet's local disk, or whether I need to set up an external database on DigitalOcean.
Technically I think that could work, but I don't know whether it is safe to do, whether I will be able to scale the Droplet once the data grows, or whether I will later be able to migrate all the data to an external database on DigitalOcean (a managed PostgreSQL) without too many problems.
And if I store it all locally, what about redundancy? Can I clone my database on a daily basis in some way?
If someone knows about this and can help me, please do; I don't know where to start.
Very good question!
So your approach is perfectly fine, even though I wouldn't use Docker Compose for your production environment. The reason is that you will definitely need to rebuild and redeploy your application container multiple times, and it's better to decouple it from your database container.
In order to do that, just run something similar to this on your server:
# run a standalone Postgres container: detached, always restarted,
# attached to a user-defined network, and exposing port 5432 on the host
# (the -v line shares the host's Unix socket directory; it is not the data directory)
docker run -d \
  --name my_db \
  -p 5432:5432 \
  -e POSTGRES_PASSWORD='mypassword' \
  -e POSTGRES_USER="postgres" \
  -v /var/run/postgresql:/var/run/postgresql \
  --network=mynetwork \
  --restart=always \
  postgres
Make sure to define a custom network and to register your db container in it, as well as your web container and any other standalone service that you need (for example Redis or Celery).
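For example (the web container name and image are just placeholders):
# create the user-defined network referenced above
docker network create mynetwork
# attach the application container to the same network so it can reach my_db by name
docker run -d --name my_web --network=mynetwork my_django_image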
The advantage of the DigitalOcean managed databases is that they are fully managed: you can use their infrastructure to handle backups, and the database runs on a separate server that you can scale up and down based on need.
If I were you I would start with a PostgreSQL container on your server, making sure to define a volume so that even if your server instance crashes, the data is persisted on disk and will be there waiting for you when you restart the server and the database.
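A minimal sketch, assuming a named volume called pgdata (the name is arbitrary) and the same container setup as above:
# create a named volume and mount it over the image's data directory
docker volume create pgdata
docker run -d \
  --name my_db \
  -e POSTGRES_PASSWORD='mypassword' \
  -v pgdata:/var/lib/postgresql/data \
  --network=mynetwork \
  --restart=always \
  postgres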
Then, as your needs evolve, you can migrate your data to a more reliable and flexible system. This can be a DigitalOcean database, AWS RDS, or a number of others. DB migrations can be a bit scary, but nowadays there are plenty of tools to help you with those too.
I want to try to dockerize my Django project, but I am facing a problem: the database doesn't exist. I run Docker on WSL2 and I use PostgreSQL.
TL;DR: Try adding - DATABASE_HOST=db on line 28.
Even though you are running all the containers on your host, they do not share localhost or 127.0.0.1. Docker creates its own network, and every container has its own IP address(es) and network interface(s).
When using Docker Compose, you can use the service name (in this case db or web) to point to a container. You can also use host.docker.internal to point at the actual host.
In your case, Django is trying to connect to a database that runs on the web container but there is none.
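As a quick sanity check (assuming your services are named web and db in docker-compose.yml), you can verify that the service name resolves from inside the web container:
# should print the db service's address on the compose network
docker-compose exec web getent hosts db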
Currently I'm running Superset in Docker mode, with no native installation. The metadata database is an external (non-Docker) Postgres DB which holds lots of dashboards, charts, etc.
Current installation is running on git tag 1.0.0. I want to upgrade to v1.1.0. I can do this by switching the repo to git tag 1.1.0 and restarting docker containers.
However, as per the UPDATING.md notes, v1.1.0 has a DB migration.
In a native installation, the way to migrate the DB is superset db upgrade.
What's the proper method to apply these migration scripts to an existing external database in a Docker installation?
Launch your stack; if it was brought up with Compose, it will automatically run the db upgrade command.
If not, run docker exec -it <supersetcontainerID> /bin/bash.
Just ensure that the correct SQLAlchemy connection string is set in the Superset config file.
And then fire the superset db upgrade command.
You're done.
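Put together, the manual path looks roughly like this (the container ID is a placeholder):
# open a shell inside the running Superset container
docker exec -it <supersetcontainerID> /bin/bash
# inside the container, with SQLALCHEMY_DATABASE_URI pointing at the external metadata DB
superset db upgrade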
First, check your container ID, then use the command below to back up superset.db:
docker cp 1263b3cdf7e7:/root/.superset/superset.db .
Then, after the upgrade, you can simply copy superset.db back into your new version.
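For example (the new container ID is a placeholder):
# copy the backed-up metadata database into the upgraded container
docker cp ./superset.db <new-container-id>:/root/.superset/superset.db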
I made a dump of my local Neo4j database and then launched an AWS instance with Neo4j, and I'm trying to run that same database there. Both are version 3.5.14.
When I launch the Neo4j AWS instance right after it is set up, it launches fine and is accessible via Bolt.
However, I then stop that instance using systemctl stop neo4j and attempt to load my dumped database onto AWS using the command
neo4j-admin load ....
(which works)
However, when I start the database again using systemctl start neo4j, it just stops and restarts and cannot launch.
The datastore should not need to be upgraded since it's the same version, so I don't see what the problem may be.
Just in case, I uncommented the datastore upgrade setting in
/etc/neo4j/neo4j.template
But that doesn't help either.
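For reference, the full sequence I run is roughly this (the archive path and database name are placeholders):
# stop Neo4j, load the dump over the existing store, then start again
sudo systemctl stop neo4j
sudo neo4j-admin load --from=/path/to/graph.dump --database=graph.db --force
sudo systemctl start neo4j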
Could you please advise me what else I could do?
I am trying to configure a CI job on Bamboo for a Django app; the tests to be run rely on a database (Postgres 9.5). It seems that a prudent way to go about it is to run the whole test in a Docker container, as I do not control the agent environment and so cannot install Postgres there.
Most guides I found recommend running Postgres and Django in two separate containers and using docker-compose to manage them easily. In this scenario each Docker image runs just one service, started with CMD. In Bamboo, however, I cannot use docker-compose; I need to use just one image, so I am trying to get Postgres and Django to run nicely together in one container, but with little success so far.
My problem is that I see no easy way to start Postgres as a service inside Docker but NOT as the Docker CMD command. The official postgres image uses an entrypoint.sh approach, which is also described in the official Docker docs.
But it is not clear to me how to implement that. I would appreciate your help!
Well, basically you would start Postgres as a background process in the docker-entrypoint shell script that otherwise starts your Django application.
The only trick here is that you need to put a 'trap' command in it so that you can send a shutdown/kill to the background process when your master process stops.
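A minimal sketch of such an entrypoint, assuming a Debian-style Postgres 9.5 layout and that the main process is the Django test run (paths and commands are assumptions to adjust to your image):
#!/bin/sh
set -e
# paths assume the Debian/Ubuntu postgres packages
PGBIN=/usr/lib/postgresql/9.5/bin
PGDATA=/var/lib/postgresql/9.5/main
# start Postgres in the background as the postgres user and wait until it is ready
su postgres -c "$PGBIN/pg_ctl -D $PGDATA -w start"
# stop the background Postgres again when the main process finishes or is killed
trap "su postgres -c '$PGBIN/pg_ctl -D $PGDATA -m fast stop'" EXIT INT TERM
# run the main process in the foreground (here: the Django test suite)
python manage.py test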
Although I have done that a thousand times, I know that it is a good source of programming errors. In general I just use my docker-systemctl-replacement, which takes care of running multiple applications as services, just as if the container were a virtual machine hosting multiple applications.
Your only other option is to add in a startup script in your Dockerfile, or kick it off as part of your docker run ... commands. We don't generally use the "Docker" tasks, as I find them ... distasteful (also why I usually just fall back to running a "Script" task, and directly calling docker run in that script task)
Anyway, you'd have to have your Docker container execute a script that would:
Start up Postgres (like a sudo systemctl start postgresql)
Execute your tests.
Your Dockerfile will have to install PostgreSQL and do some minor setup work, I imagine (like creating the relevant users and databases with the proper owner). Since we're all good citizens, we remember never to run our containers as root, right?
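A rough sketch of such a script, assuming the pgdg package layout mentioned in the edit below and starting Postgres directly with pg_ctl (the role, database, and test command are placeholders):
#!/bin/bash
set -e
# paths assume the pgdg packages; adjust the version to match your image
PGBIN=/usr/pgsql-9.5/bin
PGDATA=/var/lib/pgsql/9.5/data
# start Postgres in the background and wait until it accepts connections
# (run $PGBIN/initdb as the postgres user first if the data directory is empty)
sudo -u postgres "$PGBIN/pg_ctl" -D "$PGDATA" -w start
# minor setup: a dedicated role and database for the tests
sudo -u postgres "$PGBIN/createuser" ci_user
sudo -u postgres "$PGBIN/createdb" -O ci_user ci_db
# run the Django test suite against that database
python manage.py test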
Note - you can always hack around getting two containers to talk to each other without using docker-compose. It's a bit less convenient, but you could do something like:
# start the database container in the background and record its container ID
docker run --detach --cidfile=db_cidfile --name ci_db postgresql_image
...
# link the test container to the database container so it can reach it as ci_db
docker run --link ci_db testing_image
Make sure that you EXPOSE the right ports on the postgresql image to the testing_image container.
EDIT: I'm looking more at my specific case - we just install Postgresql into a base CentOS host rather than use the postgresql default image (using yum install http://yum.postgresql.org/..../pgdg-centos...rpm and then just install postgresql-server and postgresql-contrib packages from there). There is a CMD [ "/usr/pgsql-ver/bin/postgres", "-D", "/var/lib/pgsql/ver/data"] in our Dockerfile, too. We don't do anything fancy with the docker container, though. NOTE: we don't use this in production at all, this is strictly for local and CI testing.