I deployed a Redis Sentinel setup on Kubernetes using the Bitnami Helm chart: https://github.com/bitnami/charts/tree/master/bitnami/redis
I changed only these values ( https://github.com/bitnami/charts/blob/master/bitnami/redis/values.yaml ):
auth:
  enabled: false
  sentinel: false
sentinel:
  enabled: true
  masterSet: mymaster
After the deployment, I got this message:
Redis™ can be accessed via port 6379 on the following DNS name from within your cluster:
redis.default.svc.cluster.local for read only operations
For read/write operations, first access the Redis™ Sentinel cluster, which is available in port 26379 using the same domain name above.
To connect to your Redis™ server:
1. Run a Redis™ pod that you can use as a client:
kubectl run --namespace default redis-client --restart='Never' --image docker.io/bitnami/redis:6.2.6-debian-10-r103 --command -- sleep infinity
Use the following command to attach to the pod:
kubectl exec --tty -i redis-client \
--namespace default -- bash
2. Connect using the Redis™ CLI:
redis-cli -h redis -p 6379 # Read only operations
redis-cli -h redis -p 26379 # Sentinel access
To connect to your database from outside the cluster execute the following commands:
kubectl port-forward --namespace default svc/redis 6379:6379 &
redis-cli -h 127.0.0.1 -p 6379
This is working nicely:
kubectl get pods
NAME READY STATUS RESTARTS AGE
redis-node-0 2/2 Running 0 2m23s
redis-node-1 2/2 Running 0 71s
redis-node-2 2/2 Running 0 43s
But regarding access, to summarize, I have two options to access Redis:
read-only access at redis.default.svc.cluster.local:6379
read-write access at redis.default.svc.cluster.local:26379 (some kind of Sentinel access). From the docs:
Master-Replicas with Sentinel
When installing the chart with architecture=replication and sentinel.enabled=true, it will deploy a Redis™ master StatefulSet (only one master allowed) and a Redis™ replicas StatefulSet. In this case, the pods will contain an extra container with Redis™ Sentinel. This container will form a cluster of Redis™ Sentinel nodes, which will promote a new master in case the actual one fails. In addition to this, only one service is exposed:
Redis™ service: Exposes port 6379 for Redis™ read-only operations and port 26379 for accessing Redis™ Sentinel.
For read-only operations, access the service using port 6379. For write operations, it's necessary to access the Redis™ Sentinel cluster and query the current master using the command below (using redis-cli or similar):
SENTINEL get-master-addr-by-name <name of your MasterSet. e.g: mymaster>
This command will return the address of the current master, which can be accessed from inside the cluster.
In case the current master crashes, the Sentinel containers will elect a new master node.
Now I want to connect my Flask caching module to it: https://flask-caching.readthedocs.io/en/latest/
As you can see, there is an option to connect to Redis Sentinel; however, I have no idea how. This is the code I have:
from flask_caching import Cache

cache = Cache(app, config={
    'CACHE_TYPE': 'RedisSentinelCache',
    'CACHE_REDIS_SENTINELS': ['redis.default.svc.cluster.local'],
    'CACHE_REDIS_SENTINEL_MASTER': 'mymaster'
})
My questions are:
What should be in the param CACHE_REDIS_SENTINELS? Should I somehow get the IP addresses of each node and put those there?
What should be in the param CACHE_REDIS_SENTINEL_MASTER? Is it "mymaster" (sentinel -> masterSet)?
Should I always connect to the read-write server (and in that case, will the other replicas be used)? Or do I need to adjust my app so that writes always use the Sentinel access on port 26379 and reads always use the read-only port 6379? Do I need to maintain two connections?
Thank you
EDIT: I was digging into the code of flask_caching and it seems this works OK (but I am not sure if replicas are used):
import time

from flask import Flask
from flask_caching import Cache

config = {
    "DEBUG": True,  # some Flask specific configs
    'CACHE_TYPE': 'RedisSentinelCache',
    'CACHE_REDIS_SENTINELS': [
        ['redis.default.svc.cluster.local', 26379]
    ],
    'CACHE_REDIS_SENTINEL_MASTER': 'mymaster'
}

app = Flask(__name__)
# tell Flask to use the above defined config
app.config.from_mapping(config)
cache = Cache(app)

@app.route("/")
@cache.cached(timeout=5)
def index():
    return "%d\n" % time.time()

app.run()
EDIT2:
Indeed, a bit more digging into flask_caching shows that it uses the replicas as well:
in file flask_caching/backends/rediscache.py
The code is getting hosts for write and read access:
self._write_client = sentinel.master_for(master)
self._read_clients = sentinel.slave_for(master)
Cheers!
EDIT3:
Example with redis driver:
from redis.sentinel import Sentinel
sentinel = Sentinel([('redis.default.svc.cluster.local', 26379)])
redis_conn = sentinel.master_for('mymaster')
redis_conn_read = sentinel.slave_for('mymaster')
redis_conn.set('test', 'Hola!')
print(redis_conn_read.get('test'))
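For completeness, Sentinel can also be asked directly for the current master's address; this is a minimal sketch of the redis-py equivalent of the SENTINEL get-master-addr-by-name command quoted from the chart docs above:
from redis.sentinel import Sentinel

# Equivalent of: SENTINEL get-master-addr-by-name mymaster
sentinel = Sentinel([('redis.default.svc.cluster.local', 26379)])
print(sentinel.discover_master('mymaster'))  # e.g. ('10.42.0.5', 6379)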
TLDR:
import time

from flask import Flask
from flask_caching import Cache

config = {
    "DEBUG": True,  # some Flask specific configs
    'CACHE_TYPE': 'RedisSentinelCache',
    'CACHE_REDIS_SENTINELS': [
        ['redis.default.svc.cluster.local', 26379]
    ],
    'CACHE_REDIS_SENTINEL_MASTER': 'mymaster'
}

app = Flask(__name__)
# tell Flask to use the above defined config
app.config.from_mapping(config)
cache = Cache(app)

@app.route("/")
@cache.cached(timeout=5)
def index():
    return "%d\n" % time.time()

app.run()
For more details see my original question (EDIT and EDIT2).
Related
I have a simple backend service that I just deployed with Copilot.
However, I don't know where to access it.
According to the AWS console it's running and active. I can even see in the logs that it has been started.
My manifest:
# The manifest for the "user-service" service.
# Read the full specification for the "Backend Service" type at:
# https://aws.github.io/copilot-cli/docs/manifest/backend-service/

# Your service name will be used in naming your resources like log groups, ECS services, etc.
name: user-service
type: Backend Service

# Your service does not allow any traffic.

# Configuration for your containers and service.
image:
  # Docker build arguments. For additional overrides: https://aws.github.io/copilot-cli/docs/manifest/backend-service/#image-build
  build: ./Dockerfile
  port: 9000

cpu: 256    # Number of CPU units for the task.
memory: 512 # Amount of memory in MiB used by the task.
count: 1    # Number of tasks that should be running in your service.

# Optional fields for more advanced use-cases.
#
variables: # Pass environment variables as key value pairs.
  SERVER_PORT: 9000
  NODE_ENV: test

secrets: # Pass secrets from AWS Systems Manager (SSM) Parameter Store.
  ACCESS_TOKEN_SECRET: ACCESS_TOKEN_SECRET
  REFRESH_TOKEN_SECRET: REFRESH_TOKEN_SECRET
  MONGODB_URL: MONGODB_URL

# You can override any of the values defined above by environment.
environments:
  test:
    variables:
      NODE_ENV: test
    # count: 2 # Number of tasks to run for the "test" environment.
My Dockerfile:
# Check out https://hub.docker.com/_/node to select a new base image
FROM node:lts-buster-slim
# Set to a non-root built-in user `node`
USER node
# Create app directory (with user `node`)
RUN mkdir -p /home/node/app
WORKDIR /home/node/app
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm@5+)
COPY --chown=node package*.json ./
RUN npm install
# Bundle app source code
COPY --chown=node . .
RUN npm run build
# Bind to all network interfaces so that it can be mapped to the host OS
ENV HOST=0.0.0.0 PORT=3000
EXPOSE 9000
CMD [ "node", "." ]
This works fine locally with docker-compose. But where can I find the URL of the deployed service? I checked the ECS console and the task has a public IP. However, I can't connect to that.
What's missing here?
Nm.. my bad. Backend services are not supposed to be reachable via the internet. They expose endpoints but should talk to each other (or the frontend) via service discovery; see the sketch below.
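A hedged sketch of calling the backend from another service in the same Copilot environment via service discovery (the DNS name assumes Copilot's default <service>.<env>.<app>.local namespace with a hypothetical app name "myapp" and a hypothetical /health route; Copilot also exposes the real endpoint to your containers as COPILOT_SERVICE_DISCOVERY_ENDPOINT):
import urllib.request

# Hypothetical service-discovery URL: service "user-service",
# environment "test", application "myapp".
url = "http://user-service.test.myapp.local:9000/health"
with urllib.request.urlopen(url) as resp:
    print(resp.status, resp.read())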
I have a war file deployed as a Docker container on a Linux EC2 instance. But when I try to hit http://ec2-elastic-ip:8080/AppName, I don't get any response.
I have all the security group inbound rules set up for both HTTP and HTTPS, so that's not the problem.
Debugging
I tried debugging by SSH-ing into the Linux instance. I tried the command curl localhost:8080; this is the response:
curl: (7) Failed to connect to localhost port 8080: Connection refused
I tried with 127.0.0.1:8080 but got the same response.
The next thing I did was list the Docker containers with docker ps. I get:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
<ID> <ecr>.amazonaws.com/<my>-registry:2019-05-16.12-17-02 "catalina.sh run" 24 minutes ago Up 24 minutes 0.0.0.0:32772->8080/tcp ecs-app-24-name
Now, I connected to this container using docker exec -it <name> /bin/bash and checked the Tomcat logs, which clearly show that my application war is there and Tomcat has started.
I even tried checking docker-machine ip default, but this gave me an error:
Docker machine "default" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
Now I am stuck and unable to debug further. The result I am expecting is to access the app through the URL above.
What to do? Is there something I am doing wrong?
Also, to mention, the entire infrastructure is managed through Terraform. I first create the base image, copy the war to webapps using the Dockerfile, push the image to the registry, and finally run terraform apply to apply any changes.
Make sure that apache is listening on all IP addresses inside the Docker container, not just localhost. The bind address should be 0.0.0.0.
If a service running inside Docker listens only on localhost, it can only be accessed inside that container, not from the host.
You can also start apache on port 8080 and bind the container's port 8080 to host port 8080:
docker run -p 8080:8080 apache
Currently your app is running on a random host port, i.e. 32772 (see the docker ps output). You should be able to access your app on http://ec2-ip:32772 once you allow port 32772 in the security groups.
In order to make it work on host port 8080, you need to bind/expose the host port during docker run:
$ docker run -p 8080:8080 ......
If you are on ECS, ideally you should use an ALB and target group with your service.
However, if you are not using an ALB etc., then you can try giving a static hostPort in the task definition: "hostPort": 8080 (I haven't tried this). If it works fine, you will need to make sure to change the deployment strategy to "minimum healthy percentage = 0", or else you might face port conflict issues.
If the application needs a network port, you must EXPOSE it in the Dockerfile.
EXPOSE <port> [<port>/<protocol>...]
In case you need that port to be mapped to a specific port on the network, you must define that when you spin up the new container.
docker run -p 8080:8080/tcp my_app
If you run each image separately, you must bind the port every time.
If you don't want to do this every time, you can use docker-compose and add the ports directive in it:
ports:
  - "8080:8080/tcp"
Supposing you added EXPOSE in the Dockerfile, the full docker-compose.yml would look like this:
version: '3'
services:
  web:
    build: .
    ports:
      - "8080:8080"
  my_app:
    image: my_app
Setup:
I have a virtual machine, and in the virtual machine three containers are running (an nginx proxy, a very minimalistic Flask app, and Redis). Flask should be serving on port 5000 and Redis on 6379.
Each of these containers is up and running just fine as a standalone service, and they are also available together as a service via Docker Compose.
In the Flask app, my aim is to connect to Redis and query for some keys.
The nginx container exposes port 80, Flask port 5000 and Redis port 6379.
In the flask app I have a function that tries to create a redis client
db = redis.Redis(host='localhost', port=6379, decode_responses=True)
Running the Flask app, I am getting an error that the port cannot be used:
redis.exceptions.ConnectionError: Error 99 connecting to localhost:6379. Cannot assign requested address.
I lack clarity on what could be causing this problem, and any ideas would be appreciated.
In the flask app I have a function that tries to create a redis client
db = redis.Redis(host='localhost', port=6379, decode_responses=True)
When your flask process runs in a container, localhost refers to the network interface of the container itself. It does not resolve to the network interface of your docker host.
So you need to replace localhost with the IP address of the container running redis.
In the context of a docker-compose.yml file, this is easy, as docker-compose will make service names resolve to the correct container IP address:
version: "3"
services:
my_flask_service:
image: ...
my_redis_service:
image: ...
then in your flask app, use:
db = redis.Redis(host='my_redis_service', port=6379, decode_responses=True)
I had this same problem, except the service I wanted my container to access was remote and mapped via ssh tunnel to my Docker host. In other words, there was no docker-compose service for my code to find. I solved the problem by explicitly telling redis to look for my local host as a string:
pyredis.Redis(host='docker.for.mac.localhost', port=6379)
Anyone using only Docker to run a container: you can add --network=host to the command, as in docker run --network=host, to make Docker use the host's network while running the container.
You can also use a host network for a swarm service, by passing --network host to the docker service create command.
Make sure you don't publish any port while doing this (like -p 80:8000).
I am not sure if Docker Compose supports this.
N.B. this is only supported on Linux.
I'm trying to set up Metabase on Google App Engine using Google Cloud SQL (MySQL).
I've got it running using this git repo and this app.yaml:
runtime: custom
env: flex

# Metabase does not support horizontal scaling
# https://github.com/metabase/metabase/issues/2754
# https://cloud.google.com/appengine/docs/flexible/java/configuring-your-app-with-app-yaml
manual_scaling:
  instances: 1

env_variables:
  # MB_JETTY_PORT: 8080
  MB_DB_TYPE: mysql
  MB_DB_DBNAME: [db_name]
  # MB_DB_PORT: 5432
  MB_DB_USER: [db_user]
  MB_DB_PASS: [db_password]
  # MB_DB_HOST: 127.0.0.1
  CLOUD_SQL_INSTANCE: [project-id]:[location]:[instance-id]
I have 2 issues:
Metabase fails to connect to Cloud SQL, even though the Cloud SQL instance is part of the same project and App Engine is authorized.
After I create my admin user in Metabase, I am only able to log in for a few seconds (and only sometimes), but it keeps throwing me to either /setup or /auth/login saying the password doesn't match (when it does).
I hope someone can help - thank you!
So, we just got metabase running in Google App Engine with a Cloud SQL instance running PostgreSQL and these are the steps we went through.
First, create a Dockerfile:
FROM gcr.io/google-appengine/openjdk:8
EXPOSE 8080
ENV JAVA_OPTS "-XX:+IgnoreUnrecognizedVMOptions -Dfile.encoding=UTF-8 --add-opens=java.base/java.net=ALL-UNNAMED --add-modules=java.xml.bind"
ENV JAVA_TOOL_OPTIONS "-Xmx1g"
ADD https://downloads.metabase.com/enterprise/v1.1.6/metabase.jar $APP_DESTINATION
We tried pushing the memory further down, but 1 GB seemed to be the sweet spot. On to the app.yaml:
runtime: custom
env: flex

manual_scaling:
  instances: 1

resources:
  cpu: 1
  memory_gb: 1
  disk_size_gb: 10

readiness_check:
  path: "/api/health"
  check_interval_sec: 5
  timeout_sec: 5
  failure_threshold: 2
  success_threshold: 2
  app_start_timeout_sec: 600

beta_settings:
  cloud_sql_instances: <Instance-Connection-Name>=tcp:5432

env_variables:
  MB_DB_DBNAME: 'metabase'
  MB_DB_TYPE: 'postgres'
  MB_DB_HOST: '172.17.0.1'
  MB_DB_PORT: '5432'
  MB_DB_USER: '<username>'
  MB_DB_PASS: '<password>'
  MB_JETTY_PORT: '8080'
Note the beta_settings field at the bottom, which handles what akilesh raj was doing manually. Also, the trailing =tcp:5432 is required, since metabase does not support unix sockets yet.
Relevant documentation can be found here.
Although I am not sure of the reason, I think authorizing the service account of App Engine is not enough for accessing Cloud SQL.
In order to authorize your app to access your Cloud SQL instance, you can use either of these two methods:
Within the app.yaml file, configure an environment variable pointing to a service account key file with a correct authorization configuration for Cloud SQL:
env_variables:
  GOOGLE_APPLICATION_CREDENTIALS: [YOURKEYFILE].json
Have your code fetch an authorized service account key from a bucket and load it afterwards with the help of the Cloud Storage client library. Seeing that your runtime is custom, the pseudocode to be translated into the code you use is the following:
.....
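A hedged sketch of that pseudocode in Python, assuming the google-cloud-storage library and hypothetical bucket/object names:
import os
from google.cloud import storage

# Fetch the service account key from a bucket (names are hypothetical).
client = storage.Client()
bucket = client.bucket('my-keys-bucket')
blob = bucket.blob('service-account-key.json')
blob.download_to_filename('/tmp/key.json')

# Point the standard credentials variable at the downloaded key file,
# so subsequent Cloud SQL connections pick it up.
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = '/tmp/key.json'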
It is better to use the Cloud SQL Proxy to connect to the SQL instances. This way you do not have to authorize the instances in Cloud SQL every time there is a new instance.
More on the Cloud SQL Proxy here
As for setting up Metabase in the Google App Engine, I am including the app.yaml and Dockerfile below.
The app.yaml file:
runtime: custom
env: flex

manual_scaling:
  instances: 1

env_variables:
  MB_DB_TYPE: mysql
  MB_DB_DBNAME: metabase
  MB_DB_PORT: 3306
  MB_DB_USER: root
  MB_DB_PASS: password
  MB_DB_HOST: 127.0.0.1
  METABASE_SQL_INSTANCE: instance_name
The Dockerfile:
FROM gcr.io/google-appengine/openjdk:8
# Set locale to UTF-8
ENV LANG C.UTF-8
ENV LC_ALL C.UTF-8
# Install CloudProxy
ADD https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 ./cloud_sql_proxy
RUN chmod +x ./cloud_sql_proxy
# Download the latest version of Metabase
ADD http://downloads.metabase.com/v0.21.1/metabase.jar ./metabase.jar
CMD nohup ./cloud_sql_proxy -instances=$METABASE_SQL_INSTANCE=tcp:$MB_DB_PORT & java -jar /startup/metabase.jar
I am using Redis as an in-memory database backend for the Django cache.
In particular, I use django-redis configured as follows:
CACHES = {
    'default': {
        'BACKEND': 'redis_cache.cache.RedisCache',
        'KEY_PREFIX': DOMAIN_NAME,
        'LOCATION': 'unix:/tmp/redis_6379.sock:1',
        'OPTIONS': {
            'PICKLE_VERSION': -1,  # default
            'PARSER_CLASS': 'redis.connection.HiredisParser',
            'CLIENT_CLASS': 'redis_cache.client.DefaultClient',
        },
    },
}
My Django cache seems to work correctly.
The weird thing is that I cannot see the Django cache keys using the redis-cli command line.
[edit]
Please notice in the following that I tried both with
$ redis-cli
and
$ redis-cli -s /tmp/redis_6379.sock
[endedit]
with no difference.
In particular, using the KEYS * command:
$ redis-cli
redis 127.0.0.1:6379> keys *
(empty list or set)
but
redis 127.0.0.1:6379> set stefano test
OK
redis 127.0.0.1:6379> keys *
1) "stefano"
while from django shell:
In [1]: from django.core.cache import cache
In [2]: cache.keys('*')
Out[2]:
[u'django.contrib.sessions.cachebblhwb3chd6ev2bd85bawuz7g6pgaij8',
u'django.contrib.sessions.cachewpxiheosc8qv5w4v6k3ml8cslcahiwna']
If I'm using MONITOR on the cli:
redis 127.0.0.1:6379> monitor
OK
1373372711.017761 [1 unix:/tmp/redis_6379.sock] "KEYS" "project_prefix:1:*"
I can see a request, using the django cache prefix; which should prove the redis-cli is connected to the same service.
But even searching for that prefix in redis-cli returns an (empty list or set).
Why is that?
What is the mechanism that compartmentalizes the different caches on the same Redis instance?
I would say there are two possibilities:
1/ The Django app may not be connected to the Redis instance you think it is connected to, or the redis-cli client you launch does not connect to the same Redis instance.
Please note you do not use the exact same connection mechanism in both cases. Django uses a Unix domain socket, while redis-cli uses the TCP loopback (by default). You may want to launch redis-cli with the same socket path, to be sure:
$ redis-cli -s /tmp/redis_6379.sock
Now since you have verified with a MONITOR command that you see the commands sent by Django, we can assume you are connected to the right instance.
2/ There is a database concept in Redis. By default, you have 16 distinct databases, and the current default database is 0. The SELECT command can be used to switch a session to another database. There is one keyspace per database.
The INFO KEYSPACE command can be used to check whether some keys are defined in several databases.
redis 127.0.0.1:6379[1]> info keyspace
# Keyspace
db0:keys=1,expires=0
db1:keys=1,expires=0
Here I have two databases, let's check the keys defined in the db0 database:
redis 127.0.0.1:6379> keys *
1) "foo"
and now in the db1 database:
redis 127.0.0.1:6379> select 1
OK
redis 127.0.0.1:6379[1]> keys *
1) "bar"
My suggestion would also be to check (with MONITOR) whether the Django application sends any SELECT command to the Redis instance at connection time.
I'm not familiar with Django, but the way you have defined the LOCATION parameter makes me think your data could be in database 1 (due to the ':1' suffix); the sketch below shows how to verify this.
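A minimal sketch in Python, connecting over the same Unix socket Django uses and selecting database 1 (redis-cli can do the same with its -n 1 option):
import redis

# Connect over the Unix socket from the LOCATION setting and select
# database 1, matching the ':1' suffix.
r = redis.Redis(unix_socket_path='/tmp/redis_6379.sock', db=1)
print(r.keys('*'))  # should list the project_prefix:1:... cache keys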
Do this:
redis-cli -h <host> KEYS "trendingKey*"
Output:
"trendingKey:2:1"
"trendingKey:trending102:1"
"trendingKey:trending101:1"