AWS Copilot deployed service is not accessible?

I have a simple backend service that I just deployed with Copilot.
However, I don't know how to access it.
According to the AWS console it's running and active. I can even see in the logs that it has started.
My manifest:
# The manifest for the "user-service" service.
# Read the full specification for the "Backend Service" type at:
# https://aws.github.io/copilot-cli/docs/manifest/backend-service/

# Your service name will be used in naming your resources like log groups, ECS services, etc.
name: user-service
type: Backend Service

# Your service does not allow any traffic.

# Configuration for your containers and service.
image:
  # Docker build arguments. For additional overrides: https://aws.github.io/copilot-cli/docs/manifest/backend-service/#image-build
  build: ./Dockerfile
  port: 9000

cpu: 256     # Number of CPU units for the task.
memory: 512  # Amount of memory in MiB used by the task.
count: 1     # Number of tasks that should be running in your service.

# Optional fields for more advanced use-cases.
#
variables: # Pass environment variables as key value pairs.
  SERVER_PORT: 9000
  NODE_ENV: test

secrets: # Pass secrets from AWS Systems Manager (SSM) Parameter Store.
  ACCESS_TOKEN_SECRET: ACCESS_TOKEN_SECRET
  REFRESH_TOKEN_SECRET: REFRESH_TOKEN_SECRET
  MONGODB_URL: MONGODB_URL

# You can override any of the values defined above by environment.
environments:
  test:
    variables:
      NODE_ENV: test
  # count: 2 # Number of tasks to run for the "test" environment.
My Dockerfile
# Check out https://hub.docker.com/_/node to select a new base image
FROM node:lts-buster-slim
# Set to a non-root built-in user `node`
USER node
# Create app directory (with user `node`)
RUN mkdir -p /home/node/app
WORKDIR /home/node/app
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm#5+)
COPY --chown=node package*.json ./
RUN npm install
# Bundle app source code
COPY --chown=node . .
RUN npm run build
# Bind to all network interfaces so that it can be mapped to the host OS
ENV HOST=0.0.0.0 PORT=3000
EXPOSE 9000
CMD [ "node", "." ]
This works fine locally with docker-compose. But where can I find the URL of the deployed service? I checked the ECS console and the task has a public IP, but I can't connect to it.
What's missing here?

Never mind, my bad. Backend Services are not supposed to be reachable from the internet. They expose endpoints, but they are meant to talk to each other (or to the frontend) via service discovery.
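For anyone else who lands here: another service deployed in the same Copilot environment can reach this one through the service-discovery name Copilot sets up. A minimal sketch, assuming a hypothetical /health route on the backend and the COPILOT_SERVICE_DISCOVERY_ENDPOINT variable that Copilot injects into tasks (check the Copilot docs for your version):

# From inside another Copilot service (e.g. a hypothetical "frontend") in the same environment:
curl "http://user-service.${COPILOT_SERVICE_DISCOVERY_ENDPOINT}:9000/health"
# The name typically resolves to something like user-service.<env>.<app>.local,
# depending on your Copilot version.

If the service does need a public URL, the "Load Balanced Web Service" manifest type is the one that provisions an internet-facing endpoint.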

Related

How to connect k8s, redis sentinel, flask_caching?

I deployed Redis Sentinel on k8s using the Helm chart: https://github.com/bitnami/charts/tree/master/bitnami/redis
I changed only these values (https://github.com/bitnami/charts/blob/master/bitnami/redis/values.yaml):
auth:
  enabled: false
  sentinel: false
sentinel:
  enabled: true
  masterSet: mymaster
After the deployment, I got this message:
Redis™ can be accessed via port 6379 on the following DNS name from within your cluster:
redis.default.svc.cluster.local for read only operations
For read/write operations, first access the Redis™ Sentinel cluster, which is available in port 26379 using the same domain name above.
To connect to your Redis™ server:
1. Run a Redis™ pod that you can use as a client:
kubectl run --namespace default redis-client --restart='Never' --image docker.io/bitnami/redis:6.2.6-debian-10-r103 --command -- sleep infinity
Use the following command to attach to the pod:
kubectl exec --tty -i redis-client \
--namespace default -- bash
2. Connect using the Redis™ CLI:
redis-cli -h redis -p 6379 # Read only operations
redis-cli -h redis -p 26379 # Sentinel access
To connect to your database from outside the cluster execute the following commands:
kubectl port-forward --namespace default svc/redis 6379:6379 &
redis-cli -h 127.0.0.1 -p 6379
This is working nicely:
kubectl get pods
NAME           READY   STATUS    RESTARTS   AGE
redis-node-0   2/2     Running   0          2m23s
redis-node-1   2/2     Running   0          71s
redis-node-2   2/2     Running   0          43s
But regarding access, to summarize, I have two options to access Redis:
read-only access at redis.default.svc.cluster.local:6379
read/write access at redis.default.svc.cluster.local:26379 (some kind of Sentinel access; from the docs):
Master-Replicas with Sentinel
When installing the chart with architecture=replication and sentinel.enabled=true, it will deploy a Redis™ master StatefulSet (only one master allowed) and a Redis™ replicas StatefulSet. In this case, the pods will contain an extra container with Redis™ Sentinel. This container will form a cluster of Redis™ Sentinel nodes, which will promote a new master in case the actual one fails. In addition to this, only one service is exposed:
Redis™ service: Exposes port 6379 for Redis™ read-only operations and port 26379 for accessing Redis™ Sentinel.
For read-only operations, access the service using port 6379. For write operations, it's necessary to access the Redis™ Sentinel cluster and query the current master using the command below (using redis-cli or similar):
SENTINEL get-master-addr-by-name <name of your MasterSet. e.g: mymaster>
This command will return the address of the current master, which can be accessed from inside the cluster.
In case the current master crashes, the Sentinel containers will elect a new master node.
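To see which node Sentinel currently reports as the master, a quick check from the redis-client pod started above could look like this (the returned address is only an illustrative value):

redis-cli -h redis.default.svc.cluster.local -p 26379 SENTINEL get-master-addr-by-name mymaster
# 1) "10.42.0.15"   <- IP of the current master (example value)
# 2) "6379"         <- port of the current master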
Now I want to connect my Flask caching module to it: https://flask-caching.readthedocs.io/en/latest/
As you can see, there is an option to connect to redis sentinel, however I have no idea how. This is the code I have:
from flask_caching import Cache
cache = Cache(app, config={
    'CACHE_TYPE': 'RedisSentinelCache',
    'CACHE_REDIS_SENTINELS': ['redis.default.svc.cluster.local'],
    'CACHE_REDIS_SENTINEL_MASTER': 'mymaster'}
)
My questions are:
What should be in the CACHE_REDIS_SENTINELS param? Should I somehow get the IP addresses of each node and put those there?
What should be in the CACHE_REDIS_SENTINEL_MASTER param? Is it "mymaster" (sentinel -> masterSet)?
Should I always connect to the read-write server (and in that case, will the other replicas be used at all)? Or do I need to adjust my app so that writes always go through the Sentinel access on port 26379 and reads always go to the read-only port 6379? Do I need to maintain two connections?
Thank you
EDIT: I was digging into the code of flask_caching and it seems this works OK (but I am not sure if replicas are used):
import time

from flask import Flask
from flask_caching import Cache

config = {
    "DEBUG": True,  # some Flask specific configs
    'CACHE_TYPE': 'RedisSentinelCache',
    'CACHE_REDIS_SENTINELS': [
        ['redis.default.svc.cluster.local', 26379]
    ],
    'CACHE_REDIS_SENTINEL_MASTER': 'mymaster'
}

app = Flask(__name__)
# tell Flask to use the above defined config
app.config.from_mapping(config)
cache = Cache(app)

@app.route("/")
@cache.cached(timeout=5)
def index():
    return "%d\n" % time.time()

app.run()
EDIT2:
Indeed, after a bit more digging into flask_caching, it uses the replicas as well.
In the file flask_caching/backends/rediscache.py, the code obtains clients for write and read access:
self._write_client = sentinel.master_for(master)
self._read_clients = sentinel.slave_for(master)
Cheers!
EDIT3:
Example with redis driver:
from redis.sentinel import Sentinel
sentinel = Sentinel([('redis.default.svc.cluster.local', 26379)])
redis_conn = sentinel.master_for('mymaster')
redis_conn_read = sentinel.slave_for('mymaster')
redis_conn.set('test', 'Hola!')
print(redis_conn_read.get('test'))
TLDR:
import time

from flask import Flask
from flask_caching import Cache

config = {
    "DEBUG": True,  # some Flask specific configs
    'CACHE_TYPE': 'RedisSentinelCache',
    'CACHE_REDIS_SENTINELS': [
        ['redis.default.svc.cluster.local', 26379]
    ],
    'CACHE_REDIS_SENTINEL_MASTER': 'mymaster'
}

app = Flask(__name__)
# tell Flask to use the above defined config
app.config.from_mapping(config)
cache = Cache(app)

@app.route("/")
@cache.cached(timeout=5)
def index():
    return "%d\n" % time.time()

app.run()
For more details see my original question (EDIT and EDIT2).

Argo CD - limit time for Application to transition to Degraded state

Here is a common scenario we are stuck with:
An Argo CD Application is created and synced with Helm; it has a Deployment with 1 pod, all green.
We update the Deployment image tag to some broken value that does not exist in our Docker image registry and push the change to the git repo.
Argo picks up the update from the git repo; the sync status is a green Synced state, but the app health is "Progressing".
As a result of the change, the Deployment tries to roll out a new pod with the broken image tag and obviously is not able to do it.
The Argo CD app stays stuck in the "Progressing" health state for about 10 minutes and eventually transitions to the "Degraded" state.
Now the question: can we limit this time and reach the "Degraded" state in 1 or 2 minutes instead of 10?
Do you create the app with the GUI or with a YAML file?
If you create the app with a YAML file, you can do it by setting the limit or maxDuration fields under retry.
Here is an example:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook
  # You'll usually want to add your resources to the argocd namespace.
  namespace: argocd
  # Add this finalizer ONLY if you want these to cascade delete.
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  # The project the application belongs to.
  project: default
  # Source of the application manifests
  source:
    repoURL: https://github.com/argoproj/argocd-example-apps.git
    targetRevision: HEAD
    path: guestbook
    # helm specific config
    helm:
      # Release name override (defaults to application name)
      releaseName: guestbook
      # Helm values files for overriding values in the helm chart
      # The path is relative to the spec.source.path directory defined above
      valueFiles:
        - values-prod.yaml
      # Optional Helm version to template with. If omitted it will fall back to look at the 'apiVersion' in Chart.yaml
      # and decide which Helm binary to use automatically. This field can be either 'v2' or 'v3'.
      version: v2
  # Destination cluster and namespace to deploy the application
  destination:
    server: https://kubernetes.default.svc
    namespace: guestbook
  # Sync policy
  syncPolicy:
    automated: # automated sync by default retries failed attempts 5 times with following delays between attempts ( 5s, 10s, 20s, 40s, 80s ); retry controlled using `retry` field.
      prune: true # Specifies if resources should be pruned during auto-syncing ( false by default ).
      selfHeal: true # Specifies if partial app sync should be executed when resources are changed only in target Kubernetes cluster and no git change detected ( false by default ).
      allowEmpty: false # Allows deleting all application resources during automatic syncing ( false by default ).
    syncOptions: # Sync options which modifies sync behavior
      - Validate=false # disables resource validation (equivalent to 'kubectl apply --validate=false') ( true by default ).
      - CreateNamespace=true # Namespace Auto-Creation ensures that namespace specified as the application destination exists in the destination cluster.
      - PrunePropagationPolicy=foreground # Supported policies are background, foreground and orphan.
      - PruneLast=true # Allow the ability for resource pruning to happen as a final, implicit wave of a sync operation
    # The retry feature is available since v1.7
    retry:
      limit: 5 # number of failed sync attempt retries; unlimited number of attempts if less than 0
      backoff:
        duration: 5s # the amount to back off. Default unit is seconds, but could also be a duration (e.g. "2m", "1h")
        factor: 2 # a factor to multiply the base duration after each failed retry
        maxDuration: 3m # the maximum amount of time allowed for the backoff strategy
I believe the problem is not coming from Argo CD since, as you mentioned, the sync status is OK.
You may want to set progressDeadlineSeconds in your Deployment object; see the sketch below.
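The default progress deadline is 600 seconds, which matches the roughly 10 minutes you are seeing. A minimal sketch with a shorter deadline (the deployment name, labels and image are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                    # hypothetical name
spec:
  progressDeadlineSeconds: 120    # report the rollout as failed after 2 minutes without progress
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:some-tag   # placeholder image

Once the deadline is exceeded, the Deployment reports a Progressing=False condition, which Argo CD's built-in health check should then surface as Degraded.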

docker exec cli peer channel create | failed to create new connection: context deadline exceeded | amazon managed blockchain

I am trying to set up a Hyperledger Fabric blockchain network using Amazon Managed Blockchain, following this guide. In step 6, to create the channel, I executed the following command:
docker exec cli peer channel create -c hrschannel -f /opt/home/hrschannel.pb -o orderer.n-zzzz.managedblockchain.us-east-1.amazonaws.com:30001 --cafile /opt/home/managedblockchain-tls-chain.pem --tls
But I am getting the following error:
Error: failed to create deliver client: orderer client failed to connect to orderer.n-zzzz.managedblockchain.us-east-1.amazonaws.com:30001: failed to create new connection: context deadline exceeded
Help me to fix this issue.
Edited:
I asked the same question on Reddit. One user replied that he had added a listenAddress environment variable to the configtx.yaml file, but he did not clearly say which listenAddress or where to add that address in configtx.yaml. Here is my configtx.yaml file.
################################################################################
#
#   Section: Organizations
#
#   - This section defines the different organizational identities which will
#     be referenced later in the configuration.
#
################################################################################
Organizations:
    - &Org1
        # DefaultOrg defines the organization which is used in the sampleconfig
        # of the fabric.git development environment
        Name: m-CUB6HI
        # ID to load the MSP definition as
        ID: m-B6HI
        MSPDir: /opt/home/admin-msp
        # AnchorPeers defines the location of peers which can be used
        # for cross org gossip communication. Note, this value is only
        # encoded in the genesis block in the Application section context
        AnchorPeers:
            - Host:
              Port:
################################################################################
#
#   SECTION: Application
#
#   - This section defines the values to encode into a config transaction or
#     genesis block for application related parameters
#
################################################################################
Application: &ApplicationDefaults
    # Organizations is the list of orgs which are defined as participants on
    # the application side of the network
    Organizations:
################################################################################
#
#   Profile
#
#   - Different configuration profiles may be encoded here to be specified
#     as parameters to the configtxgen tool
#
################################################################################
Profiles:
    OneOrgChannel:
        Consortium: AWSSystemConsortium
        Application:
            <<: *ApplicationDefaults
            Organizations:
                - *Org1
Help me to fix this issue.
One must check whether the peer container is able to communicate with the orderer container. A curl against the orderer endpoint and port can be used to check the connection (see the example below). If the peer is unable to communicate, then either the orderer container is down or the problem could be due to different security groups.
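For instance, a bare TCP reachability check against the orderer could look like this (the endpoint is the placeholder from the question; curl's telnet:// scheme is only used to confirm the port accepts connections, and --max-time stops it from hanging):

docker exec cli curl -v --max-time 5 telnet://orderer.n-zzzz.managedblockchain.us-east-1.amazonaws.com:30001
# "Connected to ..." in the output means the network path is fine; a timeout usually points
# to security groups or a stopped orderer. If curl is not present in the cli image, run the
# same check from the host.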
Update:
As the OP mentioned in the comments, changing the port helped resolve the issue, so that is worth a try.

Deploying Django channels app on google flex engine

I am working with Django Channels and running into problems deploying to the Google App Engine flexible environment. First I was getting the error 'deployment has failed to become healthy in the allotted time' and resolved it by adding a readiness_check to app.yaml. Now I am getting the error below:
(gcloud.app.deploy) Operation [apps/socketapp-263709/operations/65c25731-1e5a-4aa1-83e1-34955ec48c98] timed out. This operation may still be underway.
App.yaml
runtime: python
env: flex
runtime_config:
  python_version: 3
instance_class: F4_HIGHMEM
handlers:
# This configures Google App Engine to serve the files in the app's
# static directory.
- url: /static
  static_dir: static/
- url: /.*
  script: auto
# [END django_app]
readiness_check:
  check_interval_sec: 120
  timeout_sec: 40
  failure_threshold: 5
  success_threshold: 5
  app_start_timeout_sec: 1500
How can I fix this issue? Any suggestions?
This error is due to several issues:
1) You aren't configuring your app.yaml file correctly. Resource requests in App Engine flexible are not made through the instance_class option; to request resources, you have to use the resources option, like the following:
resources:
  cpu: 2
  memory_gb: 2.3
  disk_size_gb: 10
  volumes:
  - name: ramdisk1
    volume_type: tmpfs
    size_gb: 0.5
2) You're missing an entrypoint for your app. To deploy Django Channels, they suggest having an entrypoint for the Daphne server. Add the following to your app.yaml:
entrypoint: daphne -b 0.0.0.0 -p 8080 mysite.asgi:application
3) After doing the above, if you still get the same error, it's possible that the in-use IP addresses quota in the region of your App Engine flexible application has reached its limit.
To check this, you can go to the "Activity" tab of your project home page. There you can see warnings about quota limits and VMs failing to be created.
By default, App Engine keeps the previous versions of your app, and those still running may take up IP addresses. You can delete the previous versions (see the commands below) and/or request an increase of your IP address quota limit.
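For example, cleaning up old versions could look like this (the version IDs are placeholders; double-check which versions are safe to remove first):

gcloud app versions list
gcloud app versions delete VERSION_ID_1 VERSION_ID_2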
Also, update the gcloud tools and SDK, which may resolve the issue.
To check your in-use addresses, click here; you will be able to increase your quota by clicking the 'Edit Quotas' button in the Cloud Console.

Metabase on Google App Engine

I'm trying to set up Metabase on Google App Engine using Google Cloud SQL (MySQL).
I've got it running using this git repo and this app.yaml:
runtime: custom
env: flex
# Metabase does not support horizontal scaling
# https://github.com/metabase/metabase/issues/2754
# https://cloud.google.com/appengine/docs/flexible/java/configuring-your-app-with-app-yaml
manual_scaling:
  instances: 1
env_variables:
  # MB_JETTY_PORT: 8080
  MB_DB_TYPE: mysql
  MB_DB_DBNAME: [db_name]
  # MB_DB_PORT: 5432
  MB_DB_USER: [db_user]
  MB_DB_PASS: [db_password]
  # MB_DB_HOST: 127.0.0.1
  CLOUD_SQL_INSTANCE: [project-id]:[location]:[instance-id]
I have 2 issues:
Metabase fails to connect to Cloud SQL - the Cloud SQL instance is part of the same project and App Engine is authorized.
After I create my admin user in Metabase, I am only able to log in for a few seconds (and only sometimes), but it keeps throwing me back to either /setup or /auth/login, saying the password doesn't match (when it does).
I hope someone can help - thank you!
So, we just got Metabase running in Google App Engine with a Cloud SQL instance running PostgreSQL, and these are the steps we went through.
First, create a Dockerfile:
FROM gcr.io/google-appengine/openjdk:8
EXPOSE 8080
ENV JAVA_OPTS "-XX:+IgnoreUnrecognizedVMOptions -Dfile.encoding=UTF-8 --add-opens=java.base/java.net=ALL-UNNAMED --add-modules=java.xml.bind"
ENV JAVA_TOOL_OPTIONS "-Xmx1g"
ADD https://downloads.metabase.com/enterprise/v1.1.6/metabase.jar $APP_DESTINATION
We tried pushing the memory further down, but 1 GB seemed to be the sweet spot. On to the app.yaml:
runtime: custom
env: flex
manual_scaling:
  instances: 1
resources:
  cpu: 1
  memory_gb: 1
  disk_size_gb: 10
readiness_check:
  path: "/api/health"
  check_interval_sec: 5
  timeout_sec: 5
  failure_threshold: 2
  success_threshold: 2
  app_start_timeout_sec: 600
beta_settings:
  cloud_sql_instances: <Instance-Connection-Name>=tcp:5432
env_variables:
  MB_DB_DBNAME: 'metabase'
  MB_DB_TYPE: 'postgres'
  MB_DB_HOST: '172.17.0.1'
  MB_DB_PORT: '5432'
  MB_DB_USER: '<username>'
  MB_DB_PASS: '<password>'
  MB_JETTY_PORT: '8080'
Note the beta_settings field at the bottom, which handles what akilesh raj was doing manually. Also, the trailing =tcp:5432 is required, since Metabase does not support Unix sockets yet.
Relevant documentation can be found here.
Although I am not sure of the reason, I think authorizing the App Engine service account is not enough to access Cloud SQL.
In order to authorize your app to access your Cloud SQL instance, you can use either of these two methods:
Within the app.yaml file, configure an environment variable pointing to a service account key file with a correct authorization configuration for Cloud SQL:
env_variables:
  GOOGLE_APPLICATION_CREDENTIALS: "[YOURKEYFILE].json"
Your code fetches an authorized service account key from a bucket and afterwards loads it with the help of the Cloud Storage client library. Since your runtime is custom, the pseudocode which would be translated into the code you use is the following:
.....
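The answer leaves the pseudocode out, so here is only a rough sketch of that idea, assuming a hypothetical bucket and object name and the google-cloud-storage client library:

import os
from google.cloud import storage  # pip install google-cloud-storage

# Hypothetical names - replace with your own bucket/object/paths.
BUCKET_NAME = "my-config-bucket"
KEY_OBJECT = "service-account-key.json"
LOCAL_KEY_PATH = "/tmp/service-account-key.json"

def load_sql_credentials():
    # Download the key using the App Engine default credentials, then point
    # GOOGLE_APPLICATION_CREDENTIALS at it so later clients (e.g. Cloud SQL) pick it up.
    client = storage.Client()
    client.bucket(BUCKET_NAME).blob(KEY_OBJECT).download_to_filename(LOCAL_KEY_PATH)
    os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = LOCAL_KEY_PATH

load_sql_credentials()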
It is better to use the Cloud SQL Proxy to connect to the SQL instances. This way you do not have to authorize the instances in Cloud SQL every time there is a new instance.
More on the Cloud SQL Proxy here
As for setting up Metabase on Google App Engine, I am including the app.yaml and Dockerfile below.
The app.yaml file:
runtime: custom
env: flex
manual_scaling:
  instances: 1
env_variables:
  MB_DB_TYPE: mysql
  MB_DB_DBNAME: metabase
  MB_DB_PORT: 3306
  MB_DB_USER: root
  MB_DB_PASS: password
  MB_DB_HOST: 127.0.0.1
  METABASE_SQL_INSTANCE: instance_name
The Dockerfile:
FROM gcr.io/google-appengine/openjdk:8
# Set locale to UTF-8
ENV LANG C.UTF-8
ENV LC_ALL C.UTF-8
# Install CloudProxy
ADD https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 ./cloud_sql_proxy
RUN chmod +x ./cloud_sql_proxy
#Download the latest version of Metabase
ADD http://downloads.metabase.com/v0.21.1/metabase.jar ./metabase.jar
CMD nohup ./cloud_sql_proxy -instances=$METABASE_SQL_INSTANCE=tcp:$MB_DB_PORT & java -jar /startup/metabase.jar