Failed to create ElastiCache Redis cluster - amazon-web-services

I was using the following command to create an ElastiCache Redis cluster via the CLI, but it always failed at the end. When I switch to the AWS console I can see the creating status at first, but after a while it always fails. Is there a way to view the creation logs in the AWS console?
aws elasticache create-replication-group \
  --cache-subnet-group group-name \
  --engine redis \
  --engine-version 6.x \
  --security-group-ids security-group-id \
  --num-node-groups 22 \
  --replicas-per-node-group 2 \
  --cache-parameter-group-name parameter-group-name \
  --auto-minor-version-upgrade \
  --replication-group-id some-group-id \
  --replication-group-description 'some description' \
  --cache-node-type cache.r6g.2xlarge \
  --region some-region \
  --automatic-failover-enabled
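As far as I know there is no separate creation log in the console, but ElastiCache does record events, and the failure reason usually shows up there. A quick check worth trying, reusing the placeholders from the command above:
aws elasticache describe-events \
  --source-type replication-group \
  --source-identifier some-group-id \
  --region some-region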

Related

aws rds describe-db-clusters --db-cluster-identifier with wildcard

I am looking to run the aws rds describe-db-clusters --db-cluster-identifier CLI command with a wildcard. Something like:
aws rds describe-db-clusters --db-cluster-identifier prod% --region us-east-1
I want to retrieve info about all the RDS clusters whose names start with prod. When I run the above CLI command, I get an error:
An error occurred (InvalidParameterValue) when calling the DescribeDBClusters operation: Invalid database cluster identifier: prod%
Is there a way (via CLI or Py Code) to get the list of all RDS Clusters whose name start with prod?
Thanks
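For what it's worth, the DescribeDBClusters API doesn't accept wildcards, but the CLI can filter client-side with a JMESPath --query expression. A sketch using the prefix and region from the question:
aws rds describe-db-clusters \
  --region us-east-1 \
  --query "DBClusters[?starts_with(DBClusterIdentifier, 'prod')].DBClusterIdentifier"
The same filtering can be done in Python by calling boto3's describe_db_clusters (paginating if necessary) and keeping the entries whose DBClusterIdentifier starts with prod.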

EKS container cannot reach DynamoDB or other AWS services

We have deployed Alpine images on EKS Fargate nodes and have associated a service account with an IAM role that has access to DynamoDB and some other services.
When deploying the containers, we can see that AWS has automatically set these env vars on all containers:
AWS_ROLE_ARN=arn:aws:iam::1111111:role/my-role
AWS_WEB_IDENTITY_TOKEN_FILE=/var/run/secrets/eks.amazonaws.com/serviceaccount/token
But if we execute this command with the CLI:
aws sts get-caller-identity
or
aws dynamodb list-tables
the command simply hangs and does not return any results.
We have followed the docs on setting up IAM roles for EKS (k8s) service accounts. Is there anything more we need to do to check connectivity from the containers to DynamoDB, for example? (Please note: from Lambda and the like we can access DynamoDB; an endpoint exists for the necessary services.)
When I execute this on the pod:
aws sts assume-role-with-web-identity \
  --role-arn $AWS_ROLE_ARN \
  --role-session-name mh9test \
  --web-identity-token file://$AWS_WEB_IDENTITY_TOKEN_FILE \
  --duration-seconds 1000
I get this error: Connect timeout on endpoint URL: "sts.amazonaws.com", which is strange because the VPC endpoint is sts.eu-central-1.amazonaws.com.
I also cannot ping endpoint addresses such as ec2.eu-central-1.amazonaws.com.
Thanks
Thomas
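The timeout on sts.amazonaws.com suggests the CLI is calling the global STS endpoint, which a VPC that only has a regional STS interface endpoint cannot reach. Two things worth trying from the pod; the environment variable below is a standard AWS SDK/CLI setting, not something from the original post:
# Force the CLI/SDK to use the regional STS endpoint
export AWS_STS_REGIONAL_ENDPOINTS=regional
export AWS_DEFAULT_REGION=eu-central-1
aws sts get-caller-identity
# Or target the regional endpoint explicitly for a single call
aws sts get-caller-identity --endpoint-url https://sts.eu-central-1.amazonaws.com
As an aside, ping failing proves little here: AWS API endpoints generally don't answer ICMP even when HTTPS connectivity works.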

ecs-cli refers to old cluster after changing default profile; doesn't show EC2 instances

I've been using AWS's ECS CLI to spin clusters of EC2 instances up and down for various tasks. The problem I'm running into is that it seems to be referring to old information that I don't know how to change.
e.g., I just created a cluster, my-second-cluster, successfully and can see it in the AWS console:
$ ecs-cli up --keypair "my-keypair" --capability-iam --size 4 --instance-type t2.micro --port 22 --cluster-config my-second-cluster --ecs-profile a-second-profile
INFO[0001] Using recommended Amazon Linux 2 AMI with ECS Agent 1.45.0 and Docker version 19.03.6-ce
INFO[0001] Created cluster cluster=my-second-cluster region=us-east-1
INFO[0002] Waiting for your cluster resources to be created...
INFO[0002] Cloudformation stack status stackStatus=CREATE_IN_PROGRESS
INFO[0063] Cloudformation stack status stackStatus=CREATE_IN_PROGRESS
INFO[0124] Cloudformation stack status stackStatus=CREATE_IN_PROGRESS
VPC created: vpc-123abc
Security Group created: sg-123abc
Subnet created: subnet-123abc
Subnet created: subnet-123def
Cluster creation succeeded.
...but ecs-cli ps returns an error referring to an old cluster:
$ ecs-cli ps
FATA[0000] Error executing 'ps': Cluster 'my-first-cluster' is not active. Ensure that it exists
Specifying the cluster explicitly (ecs-cli ps --cluster my-second-cluster --region us-east-1) returns nothing, even though I see the 4 EC2 instances when I log into the AWS console.
Supporting details:
Before creating this second cluster, I created a second profile and set it to the default. I also set the new cluster to be the default.
$ ecs-cli configure profile --access-key <MY_ACCESS_KEY> --secret-key <MY_SECRET_KEY> --profile-name a-second-profile
$ ecs-cli configure profile default --profile-name a-second-profile
$ ecs-cli configure --cluster my-second-cluster --region us-east-1
INFO[0000] Saved ECS CLI cluster configuration default.
It's unclear to me where these ECS profile and cluster configs are stored (I'd expect to see them as files in ~/.aws, but they aren't there), or how to manipulate them beyond the CLI commands, which don't give great feedback. Any ideas on what I'm missing?
The ECS CLI stores its credentials at ~/.ecs/credentials.
When you create the initial profile, its name is default and it is used by default. When you set a-second-profile as the default, the metadata is updated to use a-second-profile by default, but you still have a profile named default that points to the original credentials.
My guess is that to see the first cluster you now need to specify a profile name, since you changed the default. If you didn't give your initial profile a name, it will be default.
ecs-cli ps --ecs-profile default
If you deleted your cluster configuration you may need to add the cluster again and associate it with the right profile:
ecs-cli configure --cluster cluster_name --default-launch-type launch_type --region region_name --config-name configuration_name
I hope that makes sense. Hopefully looking at how your commands update ~/.ecs/credentials will be helpful.
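A quick way to see what each command actually changed (file locations per the ECS CLI docs; both are plain YAML):
cat ~/.ecs/credentials   # ecs_profiles and the default profile name
cat ~/.ecs/config        # cluster configurations and the default cluster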
Some resources:
ECS CLI Configurations

kubectl error querying EC2 for volume info

I'm running Kubernetes v1.4.0+776c994 on an EC2 instance in AWS GovCloud.
I can list EC2 volumes with 'aws ec2 describe-volumes', but when I try to create a persistent volume, 'kubectl create -f aws-pv.yaml', I get this error:
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "persistentvolumes \"pv0001\" is forbidden: error querying AWS EBS volume vol-05dffe55de3ac725b: error querying ec2 for volume info: error listing AWS volumes: UnauthorizedOperation: You are not authorized to perform this operation.\n\tstatus code: 403, request id:",
  "reason": "Forbidden",
  "details": {
    "name": "pv0001",
    "kind": "persistentvolumes"
  },
  "code": 403
}
I've set these environment variables:
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
AWS_REGION=us-gov-west-1
CURL_CA_BUNDLE=/etc/origin/master/ca.crt
My IAM permissions, as the AWS user dvogel, allow me to successfully run the query 'aws ec2 describe-volumes', but apparently those permissions aren't passed to the Kubernetes API when I run 'kubectl create -f aws-pv.yaml' in the same terminal. I'm guessing I need to set something, perhaps in admin.kubeconfig, to do this.
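One thing worth checking (an assumption based on how the Kubernetes AWS cloud provider works, not something stated above): the EBS lookup is performed by the control plane using the master node's instance-profile role, not the credentials in your terminal, so that role needs ec2:DescribeVolumes. A hypothetical sketch of attaching such a policy, where the role and policy names are placeholders:
aws iam put-role-policy \
  --role-name master-node-role \
  --policy-name k8s-ebs-describe \
  --policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Action":"ec2:DescribeVolumes","Resource":"*"}]}'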

Logs to AWS Cloudwatch from Docker Containers

I have a few Docker containers running with docker-compose on an AWS EC2 instance. I am looking to get the logs sent to AWS CloudWatch. I was also having issues getting logs from Docker containers to AWS CloudWatch from my Mac running Sierra, so I've moved over to EC2 instances running the Amazon AMI.
My docker-compose file:
version: '2'
services:
  scraper:
    build: ./Scraper/
    logging:
      driver: "awslogs"
      options:
        awslogs-region: "eu-west-1"
        awslogs-group: "permission-logs"
        awslogs-stream: "stream"
    volumes:
      - ./Scraper/spiders:/spiders
When I run docker-compose up I get the following error:
scraper_1 | WARNING: no logs are available with the 'awslogs' log driver
but the container is running. No logs appear on the AWS CloudWatch stream. I have assigned an IAM role to the EC2 instance that the Docker containers run on.
I am at a complete loss now as to what I should be doing and would appreciate any advice.
The awslogs driver works without using ECS.
You need to configure AWS credentials (the user or role needs the appropriate CloudWatch Logs permissions).
I used this tutorial; it worked for me: https://wdullaer.com/blog/2016/02/28/pass-credentials-to-the-awslogs-docker-logging-driver-on-ubuntu/
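The core of that tutorial: the awslogs driver runs inside the Docker daemon, so the daemon itself needs AWS credentials, not the container. A minimal sketch of that approach on a systemd-managed host (the drop-in path follows the tutorial; the key values are placeholders, and on an EC2 instance with an instance role this step may be unnecessary):
sudo mkdir -p /etc/systemd/system/docker.service.d
sudo tee /etc/systemd/system/docker.service.d/aws-credentials.conf <<'EOF'
[Service]
Environment="AWS_ACCESS_KEY_ID=<your-access-key-id>"
Environment="AWS_SECRET_ACCESS_KEY=<your-secret-access-key>"
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker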
I was getting the same error, but when I checked CloudWatch, I was able to see the logs there. Did you check that the log group is created in CloudWatch? Docker doesn't support console logging when we use custom logging drivers.
The section on limitations here says that the docker logs command is only available for the json-file and journald drivers, and that's true for built-in drivers.
When trying to get logs from a driver that doesn't support reading, nothing hangs for me; docker logs prints this:
Error response from daemon: configured logging driver does not support reading
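To rule out a missing log group, it's easy to create it up front and then check for streams, using the group name and region from the compose file above:
aws logs create-log-group --log-group-name permission-logs --region eu-west-1
aws logs describe-log-streams --log-group-name permission-logs --region eu-west-1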
There are three main steps involved:
Create an IAM role/user
Install the CloudWatch agent
Modify the docker-compose file or docker run command (a docker run sketch follows below)
I have referenced an article here with steps to send the Docker logs to AWS CloudWatch.
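For step 3, the docker run equivalent of the compose settings above would look something like this (a sketch reusing the question's region, group, and stream; the image name is a placeholder):
docker run --log-driver=awslogs \
  --log-opt awslogs-region=eu-west-1 \
  --log-opt awslogs-group=permission-logs \
  --log-opt awslogs-stream=stream \
  your-image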
The awslogs driver you are using is typically used with the EC2 Container Service (ECS). On plain EC2 it will not work unless the Docker daemon itself has AWS credentials, as noted in the answers above. See the documentation.
I would recommend creating a single-node ECS cluster. Be sure the EC2 instance(s) in that cluster have a role, and that the role provides permissions to write to CloudWatch Logs.
From there, anything in your container that logs to stdout will be captured by the awslogs driver and streamed to CloudWatch Logs.