Copilot Fails to Deploy: Cannot Access SSM - amazon-web-services

I'm getting the following error when I try to deploy my Load Balanced Web Service with Copilot into a new environment. Everything runs fine in test; I created a new prod environment and tried to deploy the service into it, but the task details show a stopped reason of:
ResourceInitializationError: unable to pull secrets or registry auth:
execution resource retrieval failed: unable to retrieve secrets from
ssm: service call has been retried 1 time(s): AccessDeniedException:
User: arn:aws:sts::xxx:assumed-role...

My SSM parameters were created specifically for the test environment. I assumed they would apply to all environments by default, but apparently not: I had to add them again for the prod environment, tagged with copilot-environment.
aws ssm put-parameter \
  --name /copilot/applications/core/environments/test/port \
  --value '8000' \
  --type SecureString \
  --tags Key=copilot-environment,Value=test Key=copilot-application,Value=core

aws ssm put-parameter \
  --name /copilot/applications/core/environments/prod/port \
  --value '8000' \
  --type SecureString \
  --tags Key=copilot-environment,Value=prod Key=copilot-application,Value=core
And updated my manifest.yml:
secrets: # Pass secrets from AWS Systems Manager (SSM) Parameter Store.
  PORT: /copilot/applications/core/environments/test/port

# You can override any of the values defined above by environment.
environments:
  prod:
    secrets:
      PORT: /copilot/applications/core/environments/prod/port
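If it helps for verification: before deploying, the per-environment parameters and their tags can be checked with standard SSM commands. A sketch, using the same path as the example above:

```shell
# List the parameters Copilot will look up for the prod environment
aws ssm get-parameters-by-path \
  --path /copilot/applications/core/environments/prod \
  --recursive \
  --with-decryption

# Confirm the copilot-* tags are present on the parameter
aws ssm list-tags-for-resource \
  --resource-type Parameter \
  --resource-id /copilot/applications/core/environments/prod/port
```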

Related

eksctl create iamserviceaccount with EKS add-on support for ADOT Operator

I am attempting to install the AWS Distro for OpenTelemetry (ADOT) into my EKS cluster.
https://docs.aws.amazon.com/eks/latest/userguide/adot-reqts.html
I am following this guide to create the service account for the IAM role (the IRSA, IAM Roles for Service Accounts, technique in AWS):
https://docs.aws.amazon.com/eks/latest/userguide/adot-iam.html
When I run the eksctl commands:
eksctl create iamserviceaccount \
  --name adot-collector \
  --namespace monitoring \
  --cluster <MY CLUSTER> \
  --attach-policy-arn arn:aws:iam::aws:policy/AmazonPrometheusRemoteWriteAccess \
  --attach-policy-arn arn:aws:iam::aws:policy/AWSXrayWriteOnlyAccess \
  --attach-policy-arn arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy \
  --approve \
  --override-existing-serviceaccounts
I am getting this output:
2 existing iamserviceaccount(s) (hello-world/default,monitoring/adot-collector) will be excluded
iamserviceaccount (monitoring/adot-collector) was excluded (based on the include/exclude rules)
metadata of serviceaccounts that exist in Kubernetes will be updated, as --override-existing-serviceaccounts was set
no tasks
This Kubernetes service account does not exist in the target namespace, or in any other:
k get sa adot-collector -n monitoring
k get serviceaccounts -A | grep adot
Expected output:
1 iamserviceaccount (monitoring/adot-collector) was included (based on the include/exclude rules)
metadata of serviceaccounts that exist in Kubernetes will be updated, as --override-existing-serviceaccounts was set
...
created serviceaccount "monitoring/adot-collector"
When I check the AWS Console under CloudFormation, I see that the stack completed, with a message of "IAM role for serviceaccount "monitoring/adot-collector" [created and managed by eksctl]".
What can I do to troubleshoot this? Why is the Kubernetes service account not getting built?
This was resolved after discovering there was a ValidatingWebhookConfiguration that blocked the creation of service accounts lacking a specific label. Temporarily disabling the webhook allowed the stack to run to completion, and the service account was created.
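For anyone hitting the same symptom, here is a sketch of how such a blocking webhook can be found and temporarily removed (the webhook name is a placeholder; yours will differ):

```shell
# List admission webhooks that may intercept object creation
kubectl get validatingwebhookconfigurations

# Inspect a suspect webhook's rules for a match on serviceaccounts
kubectl get validatingwebhookconfiguration <webhook-name> -o yaml

# Save a copy, then remove it so eksctl can create the SA
kubectl get validatingwebhookconfiguration <webhook-name> -o yaml > webhook-backup.yaml
kubectl delete validatingwebhookconfiguration <webhook-name>

# ...re-run the eksctl command, then restore the webhook:
kubectl apply -f webhook-backup.yaml
```

Deleting rather than editing is deliberate: a webhook that actively rejects requests keeps doing so regardless of its failurePolicy, so removing it (and restoring it afterwards) is the simplest temporary workaround.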

SAML2AWS connecting to k8s issues

I use saml2aws with Okta authentication to access AWS from my local machine, and I have added the k8s cluster config to my machine as well.
When I try to interact with the cluster, say to list pods, a simple kubectl get pods returns an error: [Errno 2] No such file or directory: '/var/run/secrets/eks.amazonaws.com/serviceaccount/token' Unable to connect to the server: getting credentials: exec: executable aws failed with exit code 255
But if I run saml2aws exec kubectl get pods, I am able to fetch pods.
I don't understand whether the problem is with how the credentials are stored, or where I should even begin debugging this.
Any kind of help will be appreciated.
To integrate saml2aws with Okta, you first need to create a saml2aws profile.
Configure Profile
saml2aws configure \
  --skip-prompt \
  --mfa Auto \
  --region <region, e.g. us-east-2> \
  --profile <awscli_profile> \
  --idp-account <saml2aws_profile_name> \
  --idp-provider Okta \
  --username <your email> \
  --role arn:aws:iam::<account_id>:role/<aws_role_initial_assume> \
  --session-duration 28800 \
  --url "https://<company>.okta.com/home/amazon_aws/......."
The URL, region, etc. can be obtained from the Okta integration UI.
Login
saml2aws login --idp-account <saml2aws_profile_name>
That should prompt you for your password, and for MFA if it is configured.
Verification
aws --profile=<awscli_profile> s3 ls
Then, finally, export AWS_PROFILE:
export AWS_PROFILE=<awscli_profile>
and use the AWS CLI directly:
aws sts get-caller-identity
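As a convenience (a sketch, not part of the original answer; cluster and profile names are placeholders), the profile can also be pinned directly in the kubeconfig's exec credential plugin, so kubectl works without exporting AWS_PROFILE in every shell. This assumes your kubeconfig authenticates via aws eks get-token:

```yaml
# Fragment of ~/.kube/config
users:
- name: my-cluster-user
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: aws
      args:
        - eks
        - get-token
        - --cluster-name
        - my-cluster
      env:
        - name: AWS_PROFILE
          value: awscli_profile
```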

UnrecognizedClientException error when I try to enable "time to live" on local DynamoDB

I run local DynamoDB on Docker and I want to set up the time to live (TTL) feature for a table.
To create the table I use:
aws dynamodb create-table \
  --table-name activity \
  --attribute-definitions \
    AttributeName=deviceId,AttributeType=S \
    AttributeName=time,AttributeType=S \
  --key-schema \
    AttributeName=deviceId,KeyType=HASH \
    AttributeName=time,KeyType=RANGE \
  --billing-mode 'PAY_PER_REQUEST' \
  --endpoint-url http://dynamo:8000
And that works as needed.
But when I try to enable TTL:
aws dynamodb update-time-to-live \
  --table-name activity \
  --time-to-live-specification Enabled=true,AttributeName=ttl
I get the error: An error occurred (UnrecognizedClientException) when calling the UpdateTimeToLive operation: The security token included in the request is invalid
I pass dummy credentials to the container via the docker-compose environment:
AWS_ACCESS_KEY_ID: 0
AWS_SECRET_ACCESS_KEY: 0
AWS_DEFAULT_REGION: eu-central-1
Used Docker images:
For DynamoDB - dwmkerr/dynamodb
For internal AWS CLI - garland/aws-cli-docker
What is wrong? How can I enable the feature using local Docker?
Thanks for any answer.
Best.
After a few extra hours of failures, I have an answer. I hope it helps somebody save a bit of time:
Even in a purely local environment, the CLI needs AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY set. DynamoDB Local accepts dummy values, but the real AWS endpoint rejects them.
If you used the --endpoint-url parameter when creating the DB, you must use the same value for update-time-to-live and any other action on that DB. Without it, the request goes to the real AWS service, which is exactly what produces the UnrecognizedClientException above.
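Putting the second point into practice, a sketch reusing the endpoint from the question:

```shell
# Same TTL command, but pointed at the local endpoint used at create time
aws dynamodb update-time-to-live \
  --table-name activity \
  --time-to-live-specification Enabled=true,AttributeName=ttl \
  --endpoint-url http://dynamo:8000
```

(Note that, as far as I know, DynamoDB Local stores the TTL setting but does not actually delete expired items.)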
Cheers!

Creating Elastic Beanstalk environment with specified VPC

I'm trying to create an Elastic Beanstalk environment using the AWS CLI
aws elasticbeanstalk create-environment \
--application-name my-application \
--environment-name my-environment \
--region us-east-1 \
--solution-stack-name "64bit Amazon Linux 2015.09 v2.0.6 running Docker 1.7.1" \
--version-label my-version
but this dumps everything into the default VPC, whereas I'd like to put it in a specific (non-default) VPC. I know this can be accomplished through the AWS web interface. Can it be accomplished with the CLI? Choosing a VPC is not mentioned in the create-environment docs.
Elastic Beanstalk has its own CLI implementation that is much more robust than the one integrated into the AWS CLI; see the EB CLI documentation for installation instructions. Then, you can use the eb CLI as follows to specify the VPC:
eb create \
  --elb-type application \
  --region us-east-1 \
  --platform "64bit Amazon Linux 2015.09 v2.0.6 running Docker 1.7.1" \
  --version my-version \
  --vpc.id <vpc to launch into> \
  my-environment-name

AWS CodeDeploy - Error deploying - ApplicationDoesNotExistException

I want to deploy a project in AWS using:
$ aws --region eu-central-1 deploy push --application-name DemoApp --s3-location s3://paquirrin-codedeploy/Project1.zip --ignore-hidden-files --source .
But I got this error:
A client error (ApplicationDoesNotExistException) occurred when calling the RegisterApplicationRevision operation: Applications not found for 289558260222
but the application exists:
$ aws deploy list-applications
{
    "applications": [
        "DemoApp"
    ]
}
and the CodeDeploy agent is running:
[root@ip-171-33-54-212 ~]# /etc/init.d/codedeploy-agent status
The AWS CodeDeploy agent is running as PID 2649
but I haven't found the deployment-root folder inside /opt/codedeploy-agent!
You are deploying to region eu-central-1, but the following command lists applications in your default region, which may not be eu-central-1:
aws deploy list-applications
Instead, use the following command to verify that the application exists in the target region:
aws deploy list-applications --region eu-central-1