I am trying to deploy a Django application on Google Compute Engine. I'm using a Debian 7 image and want to set up Django with Nginx, Gunicorn, virtualenv, supervisor and PostgreSQL. I have everything running fine on my development machine which is running Ubuntu 14.04 with Django installed and PostgreSQL as the backend.
I'm using the tutorial located at http://datacommunitydc.org/blog/2013/12/a-tutorial-for-deploying-a-django-application-that-uses-numpy-and-scipy-to-google-compute-engine-using-apache2-and-modwsgi/. I'm also using the tutorial located at http://michal.karzynski.pl/blog/2013/06/09/django-nginx-gunicorn-virtualenv-supervisor/ as it's specific to virtualenv and PostgreSQL, which I'm using on my development machine. I've set up my GCE instance and installed and updated aptitude. I've installed PostgreSQL; however, when I attempt to create a database user and a new database for the app, I get an error and nothing is created.
Following the tutorial I've run:
$ sudo su - postgres
postgres@django:~$ createuser -P
Enter name of role to add: hello_django
Enter password for new role:
Enter it again:
Shall the new role be a superuser? (y/n) n
Shall the new role be allowed to create databases? (y/n) n
Shall the new role be allowed to create more new roles? (y/n) n
When it attempts to create the new user role I receive the following error:
createuser: could not connect to database postgres: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
When I run ls /etc/init.d it shows that postgresql is present and the service appears to be running, but I still can't add the new role. Can someone tell me what I'm doing wrong?
Regards.
I wasn't able to reproduce the issue on my end, but it appears to be an issue with PostgreSQL and its dependencies. You can try removing all installed PostgreSQL components and dependencies and then reinstalling PostgreSQL:
sudo apt-get remove --purge postgresql-9.1*
sudo apt-get install postgresql-9.1
If it's still unable to connect to the database, the issue might be with your $PATH, in which case you'll need to point it at /usr/local/bin/psql.
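A quick way to check which client binaries are actually being picked up (paths will vary by install):

which psql createuser
# if the right binaries live under /usr/local/bin, put that directory first on your PATH:
export PATH=/usr/local/bin:$PATH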
I have just had the same problem.
This is most likely because the postgres cluster has not been initialised yet.
And the reason this didn't happen automatically at install time is that the locale of the box hasn't been set up yet. This is something that has to be done on Amazon EC2 instances as well.
You need to run:
sudo apt-get install locales
And then:
sudo dpkg-reconfigure locales
I had to choose which locales I wanted to set up; I chose en_AU.UTF-8.
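A non-interactive alternative, assuming a stock Debian locales setup (substitute whichever locale you prefer):

sudo sed -i 's/^# en_AU.UTF-8/en_AU.UTF-8/' /etc/locale.gen
sudo locale-gen
sudo update-locale LANG=en_AU.UTF-8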
After this I rebooted, then I could run this to initialise the cluster:
sudo pg_createcluster 9.1 main --start
This started the service and created the pg_hba.conf file, etc.
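Once the cluster exists you can confirm it is up and retry the role and database creation, roughly like this (the cluster version and the hello database name are just examples):

pg_lsclusters
sudo su - postgres
createuser -P hello_django
createdb --owner hello_django hello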
I faced a similar problem a while back. It can be resolved using a few simple steps:
As the postgres user, run: initdb --locale en_US.UTF-8 -E UTF8 -D '/var/lib/postgres/data'. Note that depending on the distro, the postgres part of the path can be pgsql instead; you can easily check which directory exists with an ls command.
systemctl start postgresql (if you have systemd) or just /etc/init.d/postgresql start should do. These commands must be run as the superuser.
All this is from the ArchWiki.
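Put together, the two steps look roughly like this (the data directory is distro-dependent, as noted above):

sudo -u postgres initdb --locale en_US.UTF-8 -E UTF8 -D /var/lib/postgres/data
sudo systemctl start postgresql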
Can anyone provide the steps to change Apache Superset's metadata database from SQLite to MySQL?
I have created superset_config.py to override the configuration.
After adding the property I am able to enable the Swagger URL.
I have added the SQLALCHEMY_DATABASE_URI = 'mysql://root:xxxxxx@127.0.0.1:3306/superset' property in the superset_config.py file,
but it is still connecting to SQLite.
I wrote an article that can help you.
I had problems with both Docker Compose and a native installation. A closed port can be due to a networking problem or to a problem with packages. host.docker.internal didn't work for me on Ubuntu 22. I would recommend not following the official doc and instead starting with a simpler approach that uses a single Docker image. Rather than running 5 containers via Compose, run everything in one. Use the official apache/superset Docker image, then modify the Dockerfile as follows to install a custom DB driver:
FROM apache/superset
USER root
RUN pip install mysqlclient
RUN pip install sqlalchemy-redshift
USER superset
The second step is to build a new image from the Dockerfile above. To avoid networking problems, start both containers (Superset and your database) on the same network; the easiest option is to use the host network.
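A minimal sketch of the build step, assuming the Dockerfile above is in the current directory (the superset-custom tag is just a placeholder):

docker build -t superset-custom .

I used this on a Google Cloud example, starting the Superset container like so: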
docker run -d --network host --name superset supers
Use the same command to start the container with your database, again with --network host. This solved my problems. There is more detail in my full step-by-step tutorial on Medium and on my blog.
Currently I'm running Superset in Docker mode, no native installation. The metadata database is an external (non-Docker) Postgres DB which has lots of dashboards, charts, etc.
Current installation is running on git tag 1.0.0. I want to upgrade to v1.1.0. I can do this by switching the repo to git tag 1.1.0 and restarting docker containers.
However, as per the UPDATING.md notes, v1.1.0 has a DB migration.
In native installation, the way to migrate DB is superset db upgrade
What's the proper method to apply these migration scripts to an existing external database in docker installation?
Launch your stack; if it's done with Compose it will automatically run the db upgrade command.
If not: docker exec -it <supersetcontainerID> /bin/bash
Just ensure you have the correct SQLAlchemy connection string in the Superset config file.
And then fire the superset db upgrade.
You're done.
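In other words, it can also be done in one shot from the host, along these lines (use your own container name or ID):

docker exec -it <supersetcontainerID> superset db upgrade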
First, check out your container ID, then use the command below to back up superset.db:
docker cp 1263b3cdf7e7:/root/.superset/superset.db ./superset.db
Then, after the upgrade, you can simply copy superset.db back into your new version.
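The restore is just the reverse copy, with the same container ID and path as above:

docker cp ./superset.db 1263b3cdf7e7:/root/.superset/superset.db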
I'm trying to create a Docker context that will automatically integrate with AWS's ECS.
I'm following this tutorial
The author just does:
docker context create ecs myecs and gets a "pick an integration" prompt, whereas I get an error saying it needs exactly 1 argument.
docker context create" requires exactly 1 argument.
See 'docker context create --help'.
Usage: docker context create [OPTIONS] CONTEXT
Create a context
You need to install the Docker Compose CLI preview
The curl command below is from the Docker docs:
curl -L https://raw.githubusercontent.com/docker/compose-cli/main/scripts/install/install_linux.sh | sh
sudo docker context create ecs myecs
It didn't work without sudo for me for some reason.
After the script finished I had some weird errors:
cp: cannot stat '/tmp/tmp.d4QjhW8T6k/docker-compose': No such file or directory and docker context create ecs myecs didn't work at first, but once I tried with sudo it worked fine.
EDIT: . ~/.zshrc (or just close your terminal and open a new one) made it possible for me to run docker context create ecs myecs without sudo.
Author of the blog/tutorial here. It looks like you don't have the prerequisite installed. In the blog I call out the prerequisites in pieces like this:
....In July, Docker released a beta for Docker Desktop that embedded these functionalities and, on September 15th, Docker released an updated experience in their Docker Desktop stable channel....
and then
...For now the only thing you need is Docker Desktop and an AWS account. For this test, I am using Docker Desktop (stable) version 2.5.0.1....
and finally
The core of this integration is built around a new tool dubbed Compose CLI (this is not to be confused with the original docker-compose CLI). This new CLI surfaces to the user as new functionalities in the docker command. While in Docker Desktop all this plumbing is completely hidden and available out of the box, if you are using a Linux machine you can set it up using either a script or a manual install. This new CLI is, essentially, a new version of the docker binary.
I'm eager to understand how we could make it clearer / more front and center that there was stuff to install and/or minimum software versions you had to use.
Thanks for trying it out!
If you're on Linux and you're running the docker context create ecs myecscontext command from the docs then try enabling experimental features in docker:
Edit /etc/docker/daemon.json
Set contents to
{
"experimental": true
}
Restart the Docker service: sudo systemctl restart docker
Exit your terminal and open a new one so that the changes take effect.
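To confirm the flag took effect, this should print true (assuming a reasonably recent Docker CLI):

docker version --format '{{.Server.Experimental}}'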
I had the same issue, but after installing the Docker Desktop version the problem was resolved.
The server-side (engine-only) installation doesn't have that kind of functionality.
I'm trying to setup a django app with memcached. I have the app working via virtualenv on nitrous.io without memcached.
I ran parts install memcached which worked fine. python-memcached is also installed in the virtualenv. I tried running:
memcached -d -m memory -s $HOME/memcached.sock -P $HOME/memcached.pid
which I do on my production server. But I got this error:
failed to set rlimit for open files. Try starting as root or requesting smaller maxconns value.
User rights and the like are a bit outside my scope of knowledge.
You should always use parts start memcached to run the service on Nitrous.IO.
To change the configuration for the memcached package, edit /home/action/.parts/etc/memcached.conf.
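For example, something along these lines should apply a config change (assuming the parts tool exposes stop/start subcommands):

parts stop memcached
# edit /home/action/.parts/etc/memcached.conf as needed, then
parts start memcached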
I have a provisioning setup with Vagrant and Puppet that works well locally, and I'm now trying to move it to AWS using vagrant-aws.
As I understand it, I can use the AWS user-data field in Vagrant to run commands on the first boot of a new VM, like so:
aws.user_data = File.read("user_data.txt")
Where user_data.txt contains:
#!/bin/bash
sudo apt-get install -y puppet-common
Then my existing Puppet provisioning scripts should be able to run. However, this errors out on the vagrant up command with:
[aws] Running provisioner: puppet...
The `puppet` binary appears to not be in the PATH of the guest. This
could be because the PATH is not properly setup or perhaps Puppet is not
installed on this guest. Puppet provisioning can not continue without
Puppet properly installed.
But when I SSH into the machine I see that the user-data did get parsed and Puppet was installed successfully. Is the Puppet provisioner running before the user-data installs Puppet, maybe? Or is there some better way to install Puppet on a VM before trying to provision?
It is broken, but there's a workaround if you're using Ubuntu which is far simpler than building your own AMI.
Add the following line to your config:
aws.user_data = "#cloud-config\nbootcmd:\n - echo 'manual' > /etc/init/ssh.override\npackages:\n - puppet\nruncmd:\n - [ 'rm', '/etc/init/ssh.override' ]\n - [ 'service', 'ssh', 'start' ]\n"
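For readability, that escaped string is the following cloud-config (same content, just unescaped):

#cloud-config
bootcmd:
 - echo 'manual' > /etc/init/ssh.override
packages:
 - puppet
runcmd:
 - [ 'rm', '/etc/init/ssh.override' ]
 - [ 'service', 'ssh', 'start' ]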
This tells cloud-init to disable SSH startup early in the boot process and re-enable it once your packages are installed. Now Vagrant can only SSH in to run Puppet once the packages are fully installed.
This will probably work for other distros that use cloud-init aside from Ubuntu, although it is Upstart-specific so the commands may need tweaking.
Well, I worked around this by building my own AMI with Puppet and the other things I need installed; it still seems like vagrant-aws is broken, or I'm misunderstanding something else here.