I use saml2aws with Okta authentication to access AWS from my local machine. I have also added the k8s cluster config to my machine.
While trying to connect to the cluster, for example to list pods, a simple kubectl get pods returns an error:
[Errno 2] No such file or directory: '/var/run/secrets/eks.amazonaws.com/serviceaccount/token'
Unable to connect to the server: getting credentials: exec: executable aws failed with exit code 255
But if I run saml2aws exec kubectl get pods, I am able to fetch pods.
I don't understand whether the problem is with how the credentials are stored, or even where to begin investigating it.
Any kind of help will be appreciated.
To integrate saml2aws with Okta, you need to create a saml2aws profile first.
Configure Profile
saml2aws configure \
--skip-prompt \
--mfa Auto \
--region <region, ex us-east-2> \
--profile <awscli_profile> \
--idp-account <saml2aws_profile_name> \
--idp-provider Okta \
--username <your email> \
--role arn:aws:iam::<account_id>:role/<aws_role_initial_assume> \
--session-duration 28800 \
--url "https://<company>.okta.com/home/amazon_aws/......."
The URL, region, and other values can be obtained from the Okta integration UI.
Login
saml2aws login --idp-account <saml2aws_profile_name>
That should prompt you for your password and MFA, if configured.
Verification
aws --profile=<awscli_profile> s3 ls
Then, finally, export AWS_PROFILE:
export AWS_PROFILE=<awscli_profile>
and use the AWS CLI directly:
aws sts get-caller-identity
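For the original kubectl symptom, a minimal sketch (region, cluster and profile names are placeholders, and it assumes the cluster entry in your kubeconfig can be generated by the AWS CLI): tie kubectl's exec-based auth to the saml2aws-backed profile, and a plain kubectl works after each saml2aws login.
export AWS_PROFILE=<awscli_profile>
saml2aws login --idp-account <saml2aws_profile_name>
# regenerate the cluster entry so the embedded aws eks get-token call uses this profile
aws eks update-kubeconfig --region <region> --name <cluster_name> --profile <awscli_profile>
kubectl get pods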
I'm getting the following error when I try to deploy my Load Balanced Web Service with Copilot into a new environment. I have everything running in test, created a new prod environment and tried to deploy the service into it, but the task details show a stopped reason of:
ResourceInitializationError: unable to pull secrets or registry auth:
execution resource retrieval failed: unable to retrieve secrets from
ssm: service call has been retried 1 time(s): AccessDeniedException:
User: arn:aws:sts::xxx:assumed-role...
The SSM parameters were added specifically for the test env, which I thought would apply by default to all environments, but apparently not. I had to add them again for the prod environment, tagged with copilot-environment:
aws ssm put-parameter \
--name /copilot/applications/core/environments/test/port \
--value '8000' \
--type SecureString \
--tags Key=copilot-environment,Value=test Key=copilot-application,Value=core
aws ssm put-parameter \
--name /copilot/applications/core/environments/prod/port \
--value '8000' \
--type SecureString \
--tags Key=copilot-environment,Value=prod Key=copilot-application,Value=core
And updated my manifest.yml:
secrets:    # Pass secrets from AWS Systems Manager (SSM) Parameter Store.
  PORT: /copilot/applications/core/environments/test/port

# You can override any of the values defined above by environment.
environments:
  prod:
    secrets:
      PORT: /copilot/applications/core/environments/prod/port
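To double-check, a couple of read-only commands (parameter names as used above) confirm that the prod value exists and carries the copilot-environment tag the environment's permissions key off:
aws ssm get-parameter \
--name /copilot/applications/core/environments/prod/port \
--with-decryption
aws ssm list-tags-for-resource \
--resource-type Parameter \
--resource-id /copilot/applications/core/environments/prod/port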
First I authenticate to AWS with the following:
aws ecr get-login-password --region cn-north-1 | docker login --username AWS --password-stdin xxxxxxxxxx.dkr.ecr.cn-north-1.amazonaws.com.cn
Then I create the regcred secret that I reference in my deployment config:
kubectl create secret generic regcred --from-file=.dockerconfigjson=/home/noobskie/.docker/config.json --type=kubernetes.io/dockerconfigjson
So this was working fine for the first 12 hours, but now that the AWS token has expired I am having trouble figuring out how to refresh it properly. I have rerun the first command, but it doesn't work.
The error I get is:
Error response from daemon: pull access denied for xxxxxxxxxxx.dkr.ecr.cn-north-1.amazonaws.com.cn/baopals, repository does not exist or may require 'docker login': denied: Your authorization token has expired. Reauthenticate and try again.
EDIT
I have just discovered that I can simply recreate the secret with the following command, but I am curious whether this is the correct way to handle it and whether AWS offers any other ways.
kubectl create secret generic regcred --from-file=.dockerconfigjson=/home/noobskie/.docker/config.json --type=kubernetes.io/dockerconfigjson --dry-run=client -o yaml | kubectl apply -f -
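Put together, the whole refresh looks like this (region, registry and paths copied from above): re-run the ECR login first so ~/.docker/config.json actually contains a fresh token, then rebuild the secret from it.
aws ecr get-login-password --region cn-north-1 | docker login --username AWS --password-stdin xxxxxxxxxx.dkr.ecr.cn-north-1.amazonaws.com.cn
kubectl create secret generic regcred --from-file=.dockerconfigjson=/home/noobskie/.docker/config.json --type=kubernetes.io/dockerconfigjson --dry-run=client -o yaml | kubectl apply -f -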
Use the following command to generate a token, if aws-cli and aws-iam-authenticator are installed and configured:
aws-iam-authenticator token -i <cluster_name>
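With a recent AWS CLI, the equivalent without the separate authenticator binary is (cluster name is a placeholder):
aws eks get-token --cluster-name <cluster_name>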
So I am getting a very strange problem when working with AWS.
I have configured everything according to this tutorial:
https://serverless-stack.com/chapters/login-with-aws-cognito.html
Now the issue arises when I try to create a mock user account. I enter the following into my macOS terminal:
aws cognito-idp sign-up \
--region ca-central \
--client-id 2rj7d9i1mcovi6vv9jbo0njeq3 \
--username admin@example.com \
--password passwordTrial
Now I get the following error:
So far I have tried the following:
Configured my region to match my user pool, and the command presented above. This is ca-central.
I run the following:
ce
OK, so the issue was that I was missing the -1 after the region. It should have been:
--region ca-central-1 \
But now I have another error:
zsh: no matches found: passwordTrial
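A likely cause, assuming the real password or username contains characters zsh treats as glob patterns (the values shown here look sanitized): quote the arguments so zsh passes them through literally instead of trying to expand them.
aws cognito-idp sign-up \
--region ca-central-1 \
--client-id 2rj7d9i1mcovi6vv9jbo0njeq3 \
--username 'admin@example.com' \
--password 'passwordTrial'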
I use Flask to create an API, but I am having trouble uploading when I use a custom header to upload to my Google Cloud Storage. FYI, the permissions on my server are the same as on my local machine (Storage Admin and Storage Object Admin), and testing image uploads to GCS from my local machine works without problems. But when I curl or test an upload from my server to my Google Cloud Storage bucket, the response is always the same:
"rc": 500,
"rm": "403 POST https://storage.googleapis.com/upload/storage/v1/b/konxxxxxx/o?uploadType=multipart: ('Request failed with status code', 403, 'Expected one of', )"
I'm testing in Postman using a custom header:
upload_key=asjaisjdaozmzlaljaxxxxx
and I curl like this:
curl --location --request POST 'http://14.210.211.xxx:9001/koxxx/upload_img?img_type=img_x' --header 'upload_key: asjaisjdaozmzlaljaxxxxx' --form 'img_file=@/home/user/image.png'
and I have confirmed with gcloud auth list that the login data I use on the server is correct and the same as on my local machine.
You have a permission error. To fix it, use the service account method; it's easy and straightforward.
Create a service account:
gcloud iam service-accounts create \
$SERVICE_ACCOUNT_NAME \
--display-name $SERVICE_ACCOUNT_NAME
Add permissions to your service account (for a Cloud Storage upload it needs a storage role, e.g. roles/storage.objectAdmin):
gcloud projects add-iam-policy-binding $PROJECT_NAME \
--role roles/storage.objectAdmin \
--member serviceAccount:$SA_EMAIL
$SA_EMAIL is the service account's email address. You can get it using:
SA_EMAIL=$(gcloud iam service-accounts list \
--filter="displayName:$SERVICE_ACCOUNT_NAME" \
--format='value(email)')
Download a key for the service account to $SERVICE_ACCOUNT_DEST, activate it, and save an access token in $KEY:
gcloud iam service-accounts keys create $SERVICE_ACCOUNT_DEST --iam-account $SA_EMAIL
gcloud auth activate-service-account --key-file=$SERVICE_ACCOUNT_DEST
export KEY=$(gcloud auth print-access-token)
Upload to the Cloud Storage bucket using the REST API:
curl -X POST --data-binary @[OBJECT_LOCATION] \
-H "Authorization: Bearer $KEY" \
-H "Content-Type: [OBJECT_CONTENT_TYPE]" \
"https://storage.googleapis.com/upload/storage/v1/b/[BUCKET_NAME]/o?uploadType=media&name=[OBJECT_NAME]"
When running code on an EC2 instance, the SDK I use to access AWS resources automagically talks to the link-local web server at 169.254.169.254 and gets that instance's AWS credentials (access key, secret) that are needed to talk to other AWS services.
Also there are other options, like setting the credentials in environment variables or passing them as command line args.
What is the best practice here? I would really prefer to let the container access 169.254.169.254 (by routing the requests), or even better, to run a proxy container that mimics the behavior of the real server at 169.254.169.254.
Is there already a solution out there?
The EC2 metadata service will usually be available from within docker (unless you use a more custom networking setup - see this answer on a similar question).
If your docker network setup prevents it from being accessed, you might use the ENV directive in your Dockerfile or pass the credentials directly during docker run, but keep in mind that credentials obtained from IAM roles are rotated automatically by AWS, so anything you copy out of the metadata service will eventually expire.
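To check whether the metadata route works, a quick probe from inside a throwaway container (stock curl image, IMDSv2 token flow; nothing about your setup is assumed):
docker run --rm --entrypoint sh curlimages/curl -c '
TOKEN=$(curl -s -X PUT http://169.254.169.254/latest/api/token -H "X-aws-ec2-metadata-token-ttl-seconds: 300") &&
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/iam/security-credentials/'
If that prints the instance's role name, SDKs inside the container can fetch credentials the same way they do on the host.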
Amazon does have some mechanisms for allowing containers to access IAM roles via the SDK, either by routing/forwarding requests through the ECS agent container or through the host. There is too much to copy and paste here, but note that using --net host is the LEAST recommended option, because without additional filtering it gives your container full access to anything its host has permission to do.
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html
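On ECS with task roles that is exactly what happens: the agent serves per-task credentials on a link-local address, and the SDKs find them through an environment variable the agent injects, so nothing has to be baked into the image. You can peek at it from inside a running task's container (the variable is set by ECS, not by you):
curl -s "http://169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI"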
declare -a ENVVARS
declare AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN
get_aws_creds_local () {
  # Use this to get credentials on a non-AWS host, assuming you've configured them via
  # some mechanism in the past. Don't pass a profile to gitlab-runner, because it won't
  # see the ~/.aws/credentials file where profiles are looked up.
  awsProfile=${AWS_PROFILE:-default}
  AWS_ACCESS_KEY_ID=$(aws --profile $awsProfile configure get aws_access_key_id)
  AWS_SECRET_ACCESS_KEY=$(aws --profile $awsProfile configure get aws_secret_access_key)
  AWS_SESSION_TOKEN=$(aws --profile $awsProfile configure get aws_session_token)
}
get_aws_creds_iam () {
  # Assume a role and parse the temporary credentials out of the STS response
  TEMP_ROLE=$(aws sts assume-role --role-arn "arn:aws:iam::123456789012:role/example-role" --role-session-name AWSCLI-Session)
  AWS_ACCESS_KEY_ID=$(echo $TEMP_ROLE | jq -r .Credentials.AccessKeyId)
  AWS_SECRET_ACCESS_KEY=$(echo $TEMP_ROLE | jq -r .Credentials.SecretAccessKey)
  AWS_SESSION_TOKEN=$(echo $TEMP_ROLE | jq -r .Credentials.SessionToken)
}
# Call whichever matches where this runs: _local on a workstation that already has
# ~/.aws/credentials, _iam where assuming a role is appropriate (the later call wins).
get_aws_creds_local
get_aws_creds_iam
ENVVARS=("AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID" "AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY" "AWS_SESSION_TOKEN=$AWS_SESSION_TOKEN")
# passing creds into GitLab runner
gitlab-runner exec docker stepName $(printf " --env %s" "${ENVVARS[@]}")
# using creds with a docker container
docker run -it --rm $(printf " --env %s" "${ENVVARS[@]}") amazon/aws-cli sts get-caller-identity