I created a transaction processor using the JavaScript sawtooth-sdk. When I run it locally (by locally I mean running the JavaScript file with node index.js), it works and prints the following:
Connecting to Sawtooth validator at tcp://localhost:4004
Connected to tcp://localhost:4004
Registration of [myTP 1.0] succeeded
Then I dockerized it, and when I start the container it doesn't connect. It only prints the
Connecting to Sawtooth validator at tcp://localhost:4004
message. When I check the Sawtooth Docker logs, there are no entries at all.
My Docker base image is ubuntu:bionic (FROM ubuntu:bionic) and I expose the port with EXPOSE 4004/tcp. What might be the problem? I know it's related to the validator connection, and what I can't understand is that this works locally but not from the Docker container.
It looks like the application container and the docker-compose containers are on two different networks.
Find your network (it will probably be named after the project directory):
docker network ls
Then connect the application container to the network used by Compose:
docker network connect <network> <app container>
If you need to do this when starting the app container:
docker run -itd --network=<network name> <app image>
Then, from the application, you can connect using the validator container's name:
tcp://sawtooth-validator-default:4004
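For example, if the Compose project created a network called sawtooth-default and your transaction processor container is called my-tp (both names are assumptions, check docker network ls and docker ps for the real ones), the whole fix could look like this:
docker network ls                              # find the network created by the Sawtooth compose file
docker network connect sawtooth-default my-tp  # attach the already-running TP container to it
docker network inspect sawtooth-default        # confirm both containers now appear in the list
Also make sure the TP connects to the validator's container name (as above) rather than tcp://localhost:4004, since localhost inside the TP container refers to the container itself.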
I have Druid and Superset running locally, but I am not able to connect them to each other. I have the sample wikiticker data in Druid. I have already installed pydruid with pip3 (pip3 install pydruid; I am not sure whether it needs to be installed in any particular location). I have also installed Superset locally using docker-compose, following this link. However, I am not able to connect Druid to Superset. I went to Data -> Databases -> add database. For Connection, I gave the database name as Druid, but I am not sure what to put in the SQLALCHEMY URI field. I tried these:
druid//admin:admin#localhost:8082/wikiticker
pydruid//admin:admin#localhost:8082/wikiticker
druid://admin:admin#localhost:8082/druid/v2/sql
but nothing is working.
As far as I know, Druid has no built-in authentication. The SQLALCHEMY_URI string should be druid+https://localhost:8082/druid/v2/sql/ (or druid+http://localhost:8082/druid/v2/sql/ if you're using HTTP).
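Before wiring it into Superset, it can help to confirm that the Druid SQL endpoint is reachable from wherever Superset runs. A quick sanity check (a sketch, assuming the broker is on localhost:8082 as in your attempts):
curl -XPOST -H 'Content-Type: application/json' http://localhost:8082/druid/v2/sql -d '{"query":"SELECT 1"}'
If that returns a JSON result, the endpoint itself is fine and the remaining problem is the URI scheme or the networking between the containers.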
As per the documentation, the connection string should look like this (the third variant in your question):
druid://<User>:<password>@<Host>:<Port, default 9088>/druid/v2/sql
The reason you cannot connect is probably your Docker setup. In the context of your Superset Docker container, localhost refers to that particular container. For example, the database and the Redis cache are referred to as db and redis in the connection setup within docker-compose.yml and in the environment variables set in .env.
So you could extend docker-compose.yml to include the Druid container, name it druid as well, and then connect to it like this:
druid://admin:admin@druid:<port that you exposed>/druid/v2/sql
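If your Druid instance already runs in its own container outside the Compose project, an alternative to extending docker-compose.yml is to attach it to Superset's network and use its container name as the host (a sketch; superset_default and druid are assumed names, check docker network ls and docker ps for the real ones):
docker network ls                             # find the network created by the Superset Compose project
docker network connect superset_default druid
After that, druid://admin:admin@druid:8082/druid/v2/sql (with the port you actually exposed) should be reachable from Superset.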
There is a good chance that you didn't add the Root Certificate. You can either do that or disable SSL verification. See the documentation here: https://superset.apache.org/docs/databases/druid
I'm trying to connect a dockerised C++ application to a dockerised database so that I can run it and get some output; the configuration can be found in this question.
When I try to run the model (which is inside the application container) against the dockerised database:
>docker run --net xxxxx-network -it xxxxxrun:localbase
root#xxxxxxxx:/run# isql xxx.x.x.x user=root
[ISQL]ERROR: Could not SQLConnect
I'm new to ODBC and Docker; can someone give me a hint? Many thanks.
I am assuming that you're running each Docker container separately. In that case, for your C++ application container to be able to connect to the MySQL container, they will need to be on the same network.
Create a Docker network: docker network create mysql-network
Run the C++ application container like so: docker run -it --network mysql-network xxxxxrun:localbase (xxxxxrun should be the name of the image and localbase the tag you want to run)
Run the MySQL database with a command similar to: docker run --network mysql-network -e MYSQL_ROOT_PASSWORD=password -d mysql:5.7
In this situation the two containers should be able to communicate freely with each other across that network.
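Putting the three steps together, with a named MySQL container so the application can address it by name (a sketch; mysql-db is a hypothetical container name):
docker network create mysql-network
docker run -d --network mysql-network --name mysql-db -e MYSQL_ROOT_PASSWORD=password mysql:5.7
docker run -it --network mysql-network xxxxxrun:localbase
Inside the application container, point the ODBC DSN's server (or the host you pass to isql) at mysql-db rather than at an IP address.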
So I am following the official documentation for Django with Postgres in Docker:
https://docs.docker.com/compose/django/
I created another database (not using the default postgres db), but when I shut down the server and re-run it, the database is gone. How can I create the database so that it doesn't vanish when I shut down my Docker server?
All data in a container is lost when the container is destroyed or deleted. To keep data, you should store it in a mounted Docker volume. The volume lives on your host machine, so any data created by a process running in that container will persist there. For this, you will need to understand Docker's volume API.
Create a volume like this
docker volume create hello
And use that volume in your container like this
docker run -d -v hello:/world busybox ls /world
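Applied to the Postgres container from the Compose tutorial, that might look like this (a sketch; pgdata is an assumed volume name and the password is only an example):
docker volume create pgdata
docker run -d --name db -e POSTGRES_PASSWORD=postgres -v pgdata:/var/lib/postgresql/data postgres
/var/lib/postgresql/data is where the official postgres image keeps its data, so any database you create there survives the container being stopped or removed.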
You can get further help from here.
Overall: I don't want my Django app to rely on a Docker container as its database. In other words, when I run an image with
docker run -p 8000:8000 -d imagename
I want it to connect to my local DB. I have my settings.py configured to connect to a Postgres database, so everything is fine when I do something like
python manage.py runserver
Feel free to call out any incorrect use of terms or gaps in my overall understanding of Docker. All the tutorials I've seen create a docker-compose file that relies on a DB image spun up in a container separate from the web app. Examples of things I've gone through:
https://docs.docker.com/compose/django/#connect-the-database, http://ruddra.com/2016/08/14/docker-django-nginx-postgres/, etc. At this point I'm extremely lost, since I don't know whether I do this in my Dockerfile, in settings.py in my project, or in docker-compose.yml (I'm guessing I shouldn't even have the latter, since it's for multi-container apps, which I'm trying to avoid). [Aside: can one run a Django app that relies on Celery and RabbitMQ in just one container? As with my Postgres example, I only see instances of having them all in separate containers.] As for my Dockerfile, it's pretty much this:
FROM python:3
ENV APP http://githubproj
RUN git clone $APP \
&& cd $APP/projectitself \
&& pip install -r requirements.txt
CMD cd $APP_DIR/mydjangoproject && gunicorn mydjangoproject.wsgi:application --bind 0.0.0.0:8000
In order to allow your containerized Django application to connect to a local database running on the host machine, you need to allow incoming connections on your Docker interface. You do that by adding the following rule to iptables on your local machine:
$ sudo iptables -A INPUT -i docker0 -j ACCEPT
Next, you have to configure your Postgres server to listen on additional addresses. Open /etc/postgresql/<version>/main/postgresql.conf, find the line containing listen_addresses = 'localhost', and change it to:
listen_addresses='*'
After these changes (and a restart of the Postgres service so it picks up the new configuration), you should be able to connect to your local Postgres database from inside the container.
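A quick way to verify is to try reaching the host database from inside the container, for example (a sketch, assuming the psql client is available in the image and the default docker0 gateway address of 172.17.0.1):
docker exec -it <app container> psql -h 172.17.0.1 -U <db user> <db name>
If that prompt opens, point the HOST of the database entry in your settings.py at the same address.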
This answer might give you further clarifications on how to connect to your local machine from your container.
To connect from the container to the host, you can use the IP address of the docker0 bridge. Run ifconfig on the host to find the docker0 IP address (the default is 172.17.0.1, I believe), then connect to that from inside your container.
This is obviously not very portable, as that IP might differ between machines, so a wrapper script can be useful to look up the IP and inject it into the container.
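For instance, something along these lines could look up the bridge address and hand it to the container at run time (a sketch; DATABASE_HOST is an assumed environment variable name that your settings.py would read):
DOCKER0_IP=$(ip -4 addr show docker0 | grep -oP '(?<=inet\s)\d+(\.\d+){3}')  # extract the docker0 IPv4 address
docker run -p 8000:8000 -e DATABASE_HOST=$DOCKER0_IP -d imagename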
Better still, postgres in a container! :p
Also, if you're connecting to a remote Postgres instance, just provide its IP (no different from regular inter-connectivity here).
I'm developing a web application with the Play Framework and running it on AWS Elastic Beanstalk using a single Docker container and a load balancer. Normally everything runs fine, but when I rebuild the whole environment I get the following error:
Command failed on instance. Return code: 6 Output: (TRUNCATED)... in /etc/nginx/sites-enabled/elasticbeanstalk-nginx-docker-proxy.conf:11 nginx: [emerg] host not found in upstream "docker" in /etc/nginx/sites-enabled/elasticbeanstalk-nginx-docker-proxy.conf:24 nginx: configuration file /etc/nginx/nginx.conf test failed.
When I log into the EC2 instance I can see that no Docker container is running, and therefore the nginx server cannot start. I cannot see any other error in the logs (or maybe I don't know where to look). The strange thing is that the same version worked fine before rebuilding the environment.
I'm using the following Dockerfile for the deployment:
FROM java
COPY <app_folder> /opt/<app_name>
WORKDIR /opt/<app_name>
CMD [ "/opt/<app_name>/bin/<app_name>", "-mem", "512", "-J-server" ]
EXPOSE 9000
Any ideas what the problem could be or where to check for more details?
I had this same problem. elasticbeanstalk-nginx-docker-proxy.conf refers to proxy_pass http://docker, but the definition of that upstream is missing. You need to add something like:
# List of application servers
upstream docker {
server 127.0.0.1:8080; # your app
}
(Make sure it's outside of the server directive.)
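After adding the block on the instance you can re-run the same test that failed and reload nginx (a sketch; the service command may differ per platform):
sudo nginx -t            # re-run the configuration test from the error message
sudo service nginx reload
Keep in mind that changes made by hand on the EC2 instance are lost when Elastic Beanstalk rebuilds it, so you'll eventually want to bake the fix into your deployment configuration.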
I have just been working through the same challenge (deploying an updated Docker image to Elastic Beanstalk). It depends on what exactly you want to do, but what I found is that (once you have the EB CLI set up) you can just use the eb deploy command to push out your code changes without worrying about the image at all.
Granted, you'd still want to push your image to your repository for sharing purposes (with other developers), or if you actually need to change the environment configuration for some reason, but if you're just looking to push code, look into eb deploy.
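The basic flow looks like this (a sketch; my-env is a placeholder environment name):
eb init            # one-time setup: choose region, application and platform
eb deploy my-env   # package the current project and deploy it to that environment
eb status my-env   # check the environment's health afterwards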
As far as the specifics of your error go, unfortunately I can't be of much help there. Good luck!