I wanted to use Redis with Dokku and Flask. The first issue was installing a current version of Dokku; I am now running the latest version from the repo.
The second problem shows up in the Flask debugger:
redis.exceptions.ConnectionError
ConnectionError: Error 111 connecting to None:6379. Connection refused.
I set the Redis URL and port in Flask:
app.config['REDIS_URL'] = 'IP:32768'
-----> Checking status of Redis
remote: Found image redis/landing
remote: Checking status...stopped.
remote: Launching redis/landing...COMMAND: docker run -v /home/dokku/.redis/volume-landing:/var/lib/redis -p 6379 -d redis/landing /bin/start_redis.sh
-----> Setting config vars
REDIS_URL: redis://IP:6379
REDIS_IP: IP
REDIS_PORT: 6379
Any ideas? Should REDIS_URL be set in a different way?
This code works fine on localhost:
https://github.com/kwikiel/bounce
(with app.config['REDIS_IP'] = '172.17.0.13' changed to '127.0.0.1')
The problem appears when I try to connect to Redis on Dokku.
Steps to use Redis with Flask and Dokku:
Install the Redis plugin:
cd /var/lib/dokku/plugins
git clone https://github.com/ohardy/dokku-redis redis
dokku plugins-install
Link your Redis container to the application container:
dokku redis:create [name of app container]
You will receive info about the environment variables you need to set, for example:
Host: 172.17.0.91
Public port: 32771
Then set these settings in Flask (or another framework):
app.config['REDIS_URL'] = 'redis://172.17.0.91:6379'
app.config['REDIS_IP'] = '172.17.0.91'
app.config['REDIS_PORT'] = '6379'
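For reference, here's a minimal sketch of reading those values and opening a connection with the redis-py client (the smoke-test key is my own illustration; the IP and port are the example values from above):

import redis
from flask import Flask

app = Flask(__name__)
# Values reported by dokku redis:create (example values from above)
app.config['REDIS_IP'] = '172.17.0.91'
app.config['REDIS_PORT'] = '6379'

# redis-py takes a host/port pair; db 0 is the default database
r = redis.StrictRedis(host=app.config['REDIS_IP'],
                      port=int(app.config['REDIS_PORT']),
                      db=0)
r.set('visits', 1)      # quick smoke test
print(r.get('visits'))  # b'1' if the connection works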
A complete example of a Redis database used with a Flask app (A/B testing in Flask):
https://github.com/kwikiel/bounce
I am following this tutorial to move my existing Django project, running locally on SQLite, to Google Cloud Run / Postgres.
I have the cloud_sql_proxy service running and can sign into Postgres from the command line.
I am at the point of running the command
python manage.py migrate
And I get the error:
django.db.utils.OperationalError: connection to server on socket "/cloudsql/cgps-registration-2:us-central-1:cgps-reg-2-postgre-sql/.s.PGSQL.5432" failed: No such file or directory
Is the server running locally and accepting connections on that socket?
The answer to that question is yes, the server is running locally and accepting connections, because I can log in with the Postgres client:
agerson@agersons-iMac ~ % psql "sslmode=disable dbname=postgres user=postgres hostaddr=127.0.0.1"
Password for user postgres:
psql (14.1, server 13.4)
Type "help" for help.
postgres=>
I double-checked the connection string in my .env file and it has the correct username and password.
Is this socket somehow not getting created in a previous step?
/cloudsql/cgps-registration-2:us-central-1:cgps-reg-2-postgre-sql/.s.PGSQL.5432
It looks like there's a mismatch between what the app is looking for and how you're launching the proxy. The error explains the problem.
You're launching the proxy like this, with an incorrect region name (us-central):
cloud_sql_proxy -instances="cgps-registration-2:us-central:cgps-reg-2-postgre-sql=tcp:5432"
But the app is looking for us-central1. Try this (omitting the =tcp:5432 so the proxy creates a Unix socket):
cloud_sql_proxy -instances="cgps-registration-2:us-central1:cgps-reg-2-postgre-sql"
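For context, here's a rough sketch of the Django DATABASES entry that matches a Unix-socket proxy like this (the database name, user, and password are placeholders, not values from the question):

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        # With Cloud SQL, HOST is the directory that holds .s.PGSQL.5432
        'HOST': '/cloudsql/cgps-registration-2:us-central1:cgps-reg-2-postgre-sql',
        'NAME': 'postgres',      # placeholder
        'USER': 'postgres',      # placeholder
        'PASSWORD': 'secret',    # placeholder
    }
}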
I'm currently testing Django on Codenvy, but I'm having difficulty figuring out how to connect to Django's built-in development server.
I added a server in the workspace configuration with port 8000 and the HTTP protocol.
I added the following to the run command of the Codenvy project:
Command line:
cd ${current.project.path} && python manage.py runserver
Preview :
http://${server.port.8000}
The run prompt provides me a URL: http://nodexx.codenvy.io:xxxxx
Going to this URL prints a message: ERR_CONNECTION_REFUSED
I'm very new to all of this. Do you know what is missing?
By default, the Django dev server accepts connections only from localhost. To access it from another machine, start runserver bound to a specific IP, or to 0 so the app can be reached from anywhere.
python manage.py runserver 0:8000
The above command runs the server on port 8000, bound to 0.0.0.0, which means the app can be accessed from anywhere.
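One related gotcha worth noting: if DEBUG is turned off, Django will also reject requests whose Host header it doesn't recognize, so the Codenvy preview domain has to be whitelisted. A sketch, assuming the preview URL stays under codenvy.io:

# settings.py -- only enforced when DEBUG = False
ALLOWED_HOSTS = ['.codenvy.io']  # leading dot matches any subdomain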
I have a Docker container running on my system, which I started using this command:
docker run -it -v ~/some/dir -p 8000:80 3cce3211b735 bash
Now docker ps lists this:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
44de7549d38e 3cce3211b735 "bash" 14 minutes ago Up 14 minutes 22/tcp, 443/tcp, 8082/tcp, 0.0.0.0:8000->80/tcp hardcore_engelbart
Inside the container I run my Django app using the command: python manage.py runserver 80
But I am not able to view the page using either of these:
1. localhost:8000
2. 127.0.0.1:8000
I do understand that port 8000 on my host is mapped to port 80 in the container, but why am I not able to access it? I am using Docker for Mac, not Docker Toolbox. Please help, and comment if you need any more info.
Okay, so I found the solution to my problem. The issue was not in the Docker port mapping. The actual problem is this line:
python manage.py runserver 80
This runs the server on 127.0.0.1:80. The localhost inside the Docker container is not the localhost on your machine, so the solution is to run the server with this command:
python manage.py runserver 0.0.0.0:80
I was able to access the webpage after this. If you run into the same problem, where you cannot connect to a Django server running inside your Docker container, try running the server on 0.0.0.0:port. You will then be able to access it in your browser at localhost:port. Hope this helps someone.
I have tried to use Docker Toolbox to set up Hyperledger Fabric v1.0 on my local machine.
I followed this document:
http://hyperledger-fabric.readthedocs.io/en/latest/asset_setup.html
But when I tried to deploy the chaincode:
$node deploy.js
I got an error message:
info: Returning a new winston logger with default configurations
info: [Chain.js]: Constructed Chain instance: name - fabric-client1, securityEnabled: true, TCert download batch size: 10, network mode: true
info: [Peer.js]: Peer.const - url: grpc://localhost:8051 options grpc.ssl_target_name_override=tlsca, grpc.default_authority=tlsca
info: [Peer.js]: Peer.const - url: grpc://localhost:8055 options grpc.ssl_target_name_override=tlsca, grpc.default_authority=tlsca
info: [Peer.js]: Peer.const - url: grpc://localhost:8056 options grpc.ssl_target_name_override=tlsca, grpc.default_authority=tlsca
info: [Client.js]: Failed to load user "admin" from local key value store
info: [FabricCAClientImpl.js]: Successfully constructed Fabric COP service client: endpoint - {"protocol":"http","hostname":"localhost","port":8054}
info: [crypto_ecdsa_aes]: This class requires a KeyValueStore to save keys, no store was passed in, using the default store C:\Users\daniel\.hfc-key-store
[2017-04-15 22:14:29.268] [ERROR] Helper - Error: Calling enrollment endpoint failed with error [Error: connect ECONNREFUSED 127.0.0.1:8054]
at ClientRequest.<anonymous> (C:\Users\daniel\node_modules\fabric-ca-client\lib\FabricCAClientImpl.js:304:12)
at emitOne (events.js:96:13)
at ClientRequest.emit (events.js:188:7)
at Socket.socketErrorListener (_http_client.js:310:9)
at emitOne (events.js:96:13)
at Socket.emit (events.js:188:7)
at emitErrorNT (net.js:1278:8)
at _combinedTickCallback (internal/process/next_tick.js:74:11)
at process._tickCallback (internal/process/next_tick.js:98:9)
[2017-04-15 22:14:29.273] [ERROR] DEPLOY - Error: Failed to obtain an enrolled user
at ca_client.enroll.then.then.then.catch (C:\Users\daniel\helper.js:59:12)
at process._tickCallback (internal/process/next_tick.js:103:7)
events.js:160
throw er; // Unhandled 'error' event
^
Error: Connect Failed
at ClientDuplexStream._emitStatusIfDone (C:\Users\daniel\node_modules\grpc\src\node\src\client.js:201:19)
at ClientDuplexStream._readsDone (C:\Users\daniel\node_modules\grpc\src\node\src\client.js:169:8)
at readCallback (C:\Users\daniel\node_modules\grpc\src\node\src\client.js:229:12)
Is this an issue of being unable to connect to the CA, or is there some other cause?
Edit:
Environment:
OS: Windows 10 Professional Edition
Docker Toolbox: 17.04.0-ce
Go: 1.7.5
Node.js: 6.10.0
My steps:
1. Open the Docker Quickstart Terminal and enter these commands:
$curl -L https://raw.githubusercontent.com/hyperledger/fabric/master/examples/sfhackfest/sfhackfest.tar.gz -o sfhackfest.tar.gz 2> /dev/null; tar -xvf sfhackfest.tar.gz
$docker-compose -f docker-compose-gettingstarted.yml build
$docker-compose -f docker-compose-gettingstarted.yml up -d
$docker ps
I confirmed that six containers were up and running.
2. Download the examples and install modules:
$curl -OOOOOO https://raw.githubusercontent.com/hyperledger/fabric-sdk-node/v1.0-alpha/examples/balance-transfer/{config.json,deploy.js,helper.js,invoke.js,query.js,package.json}
// This link didn't work, so I downloaded the required files from the fabric-sdk-node repository on GitHub
$npm install --global windows-build-tools
$npm install
3. Try to deploy the chaincode:
$node deploy.js
There were several problems, not the least of which was that the documentation was outdated and written for a preview release of Hyperledger Fabric. The docs are actually in the process of being removed, as we need to update our examples/samples.
You mentioned Docker Toolbox - so are you trying to run all of this on Windows or Mac?
UPDATE:
So one of the issues with Docker Toolbox or Docker for Windows is that you cannot use localhost / 127.0.0.1 as the address when trying to communicate from apps on the host (even in the Quickstart Terminal) to the endpoints of the Docker containers. When the Quickstart Terminal first launches Docker, you'll see that it outputs the IP address of the endpoint you should use when communicating with exposed ports.
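If it helps, here is a small sketch (my own illustration, not from the original answer) of recovering that IP from the DOCKER_HOST variable that the Quickstart Terminal exports:

import os
from urllib.parse import urlparse

# Under Docker Toolbox, DOCKER_HOST looks like tcp://192.168.99.100:2376
docker_host = os.environ.get('DOCKER_HOST', '')
endpoint_ip = urlparse(docker_host).hostname
# Use this IP instead of localhost when talking to exposed container ports
print(endpoint_ip)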
I was having the same issue while following the latest "Writing Your First Application" tutorial (http://hyperledger-fabric.readthedocs.io/en/latest/write_first_app.html). I had installed all the pre-requisites and the fabric-samples and started the local network.
When I got to the step of enrolling the Admin user, $ node enrollAdmin.js, I was getting the same error message as above, Error: connect ECONNREFUSED, followed by the localhost domain.
As the first answer suggests, the root cause is that I'm running Docker Toolbox. I'm developing on an older Mac, OSX v10.9.5, so I couldn't use Docker for Mac.
To fix the issue, I replaced 'localhost' in the enrollAdmin.js code with the IP from Docker Toolbox.
Here are the steps I took:
Started Docker with Applications > Docker Quickstart Terminal
Copied the IP from this sentence: docker is configured to use the default machine with IP...
Opened the copy of enrollAdmin.js from fabric-samples/fabcar directory
Found this code:
// be sure to change the http to https when the CA is running TLS enabled
fabric_ca_client = new Fabric_CA_Client('http://localhost:7054', tlsOptions , 'ca.example.com', crypto_suite); // <-- This is the line to change
Replaced 'localhost' with the Docker IP, leaving the port :7054 as is.
Saved
Re-ran the command, $ node enrollAdmin.js
The script connected to the CA and successfully completed the Admin enrollment.
On to the next step!
I created a RabbitMQ cluster on two EC2 instances. My Django app uses Celery for async tasks, which in turn uses RabbitMQ as the message queue.
Whenever I start Celery with the command:
python manage.py celery worker --loglevel=INFO
OR
python manage.py celeryd --loglevel=INFO
I keep getting the following error message about the remote RabbitMQ:
[2015-05-19 08:58:47,307: ERROR/MainProcess] consumer: Cannot connect to amqp://myuser:**@<ip-address>:25672/myvhost/: Socket closed.
Trying again in 2.00 seconds...
I set permissions using:
sudo rabbitmqctl set_permissions -p myvhost myuser ".*" ".*" ".*"
and then restarted rabbitmq-server on both cluster nodes. However, it didn't help.
In the log file, I see a few entries like the ones below:
=INFO REPORT==== 19-May-2015::08:14:41 ===
accepting AMQP connection <0.1981.0> (<ip-address>:38471 -> <ip-address>:5672)
=ERROR REPORT==== 19-May-2015::08:14:44 ===
closing AMQP connection <0.1981.0> (<ip-address>:38471 -> <ip-address>:5672):
{handshake_error,opening,0,
{amqp_error,access_refused,
"access to vhost 'myvhost' refused for user 'myuser'",
'connection.open'}}
The file /usr/local/etc/rabbitmq/rabbitmq-env.conf contains an entry for NODE_IP_ADDRESS that binds it only to localhost. Removing the NODE_IP_ADDRESS entry from the config binds the port to all network interfaces.
Source: https://superuser.com/questions/464311/open-port-5672-tcp-for-access-to-rabbitmq-on-mac
It turns out I had not created the appropriate configuration files. In my case (Ubuntu 14.04), I had to create the two configuration files below:
$ cat /etc/rabbitmq/rabbitmq-env.conf
RABBITMQ_NODE_IP_ADDRESS=<ip_of_ec2_instance>
<ip_of_ec2_instance> has to be the internal IP that EC2 uses, not the public IP that one uses to ssh into the instance. It can be obtained with the ip a command.
$ cat /etc/rabbitmq/rabbitmq.config
[
  {mnesia, [{dump_log_write_threshold, 1000}]},
  {rabbit, [{tcp_listeners, [25672]}]},
  {rabbit, [{loopback_users, []}]}
].
I think the line {rabbit, [{tcp_listeners, [25672]}]} was one of the most important pieces of configuration I was missing.
Thanks @dgil for the initial troubleshooting help.
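For completeness, a sketch of how the matching broker setting might look on the Django side, given the 25672 listener configured above (mypassword is a placeholder, and BROKER_URL assumes the django-celery-style settings of that era):

# settings.py -- point Celery at the internal EC2 IP and the 25672 listener
BROKER_URL = 'amqp://myuser:mypassword@<ip_of_ec2_instance>:25672/myvhost'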
The question has been answered, but I'm just leaving notes on a similar issue I faced, in case anybody else finds them useful.
I have a Flask app running on EC2 with AMQP as the broker on port 5672 and EC2 ElastiCache memcached as the backend. The AMQP broker had trouble picking up tasks that were getting fired, so I resolved it as follows.
Assuming you have rabbitmq-server installed (sudo apt-get install rabbitmq-server), add the user and set its permissions with rabbitmqctl:
sudo rabbitmqctl add_user username password
sudo rabbitmqctl set_permissions username ".*" ".*" ".*"
Then restart the server: sudo service rabbitmq-server restart
In your Flask app, for the Celery configuration:
broker_url=amqp://username:password@localhost:5672// (set as above)
backend=cache+memcached://(ec2 cache url):11211/
(The cache+memcached:// prefix tripped me up; without it I kept getting an import error (cannot import module).)
Open up port 5672 on your EC2 instance in the security group.
Now if you fire up your Celery worker, it should pick up the tasks that get fired and store the results on your memcached server.
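Put together as code, a minimal sketch of that configuration (the app name and the ElastiCache endpoint are placeholders, not values from the original post):

from celery import Celery

celery_app = Celery(
    'tasks',
    # RabbitMQ broker with the user added above
    broker='amqp://username:password@localhost:5672//',
    # ElastiCache memcached result backend; note the cache+memcached:// prefix
    backend='cache+memcached://your-cache-endpoint.cache.amazonaws.com:11211/',
)

@celery_app.task
def add(x, y):
    return x + y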