How can I use aws sso login and sso-sessions with the serverless-better-credentials serverless plugin?

I get ProcessCredentialsProviderFailure: Profile default not found when trying to run serverless info --aws-profile elevator-robot, which does not seem right since my ~/.aws/config looks like this:
[sso-session aphexlog]
sso_start_url = https://aphexlog.awsapps.com/start
sso_region = us-west-2
sso_registration_scopes = sso:account:access
[profile elevator-robot]
sso_session = aphexlog
sso_account_id = 12345678910
sso_role_name = AWSAdministratorAccess
region = us-east-1
output = json
serverless.yml:
service: dog
frameworkVersion: '3'
provider:
  name: aws
  runtime: nodejs18.x
functions:
  dog:
    handler: index.handler
plugins:
  - serverless-better-credentials
Steps to reproduce:
1. run npm i --save-dev serverless-better-credentials
2. run aws sso login --profile elevator-robot
3. run serverless info --aws-profile elevator-robot
Then you get the error.
However, if I just export all my environment variables (secret keys), then it works fine.
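For reference, this is roughly the environment-variable workaround I mean (a sketch; the values are placeholders, and the session token line only applies to temporary credentials):
export AWS_ACCESS_KEY_ID=AKIA...          # placeholder
export AWS_SECRET_ACCESS_KEY=...          # placeholder
export AWS_SESSION_TOKEN=...              # placeholder, only needed for temporary credentials
serverless info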

Related

Why is AWS Lambda not updating even though the GitLab CI/CD pipeline gets a green checkmark

Following this tutorial: https://docs.gitlab.cn/14.0/ee/user/project/clusters/serverless/aws.html#serverless-framework
Created a function in AWS Lambda called create-promo-animation
Created /src/handler.js:
"use strict";
module.exports.hello = async (event) => {
return {
statusCode: 200,
body: JSON.stringify(
{
message: "Your function executed successfully!",
},
null,
2
),
};
};
Created .gitlab-ci.yml:
stages:
  - deploy

production:
  stage: deploy
  before_script:
    - npm config set prefix /usr/local
    - npm install -g serverless
  script:
    - serverless deploy --stage production --verbose
  environment: production
Created serverless.yml
service: gitlab-example
provider:
  name: aws
  runtime: nodejs14.x
functions:
  create-promo-animation:
    handler: src/handler.hello
    events:
      - http: GET hello
I pushed to GitLab and the pipeline ran fine, but the code is not updating in AWS. Why?

error configuring S3 Backend: no valid credential sources for S3 Backend found

I've been trying to add a CircleCI CI/CD pipeline to my AWS project written in Terraform.
The problem is, terraform init, plan, and apply work on my local machine, but they throw this error in CircleCI.
Error -
Initializing the backend...
╷
│ Error: error configuring S3 Backend: no valid credential sources for S3 Backend found.
│
│ Please see https://www.terraform.io/docs/language/settings/backends/s3.html
│ for more information about providing credentials.
│
│ Error: NoCredentialProviders: no valid providers in chain. Deprecated.
│ For verbose messaging see aws.Config.CredentialsChainVerboseErrors
My CircleCI config is this -
version: 2.1
orbs:
  python: circleci/python@1.5.0
  # terraform: circleci/terraform@3.1.0
jobs:
  build:
    # will use a python 3.10.2 container
    docker:
      - image: cimg/python:3.10.2
    working_directory: ~/project
    steps:
      - checkout
      - run:
          name: Check python version
          command: python --version
      - run:
          name: get current dir
          command: pwd
      - run:
          name: list of things in that
          command: ls -a
      - run:
          name: Install terraform
          command: bash scripts/install_tf.sh
      - run:
          name: Init infrastructure
          command: bash scripts/init.sh dev
# Invoke jobs via workflows
workflows:
  .......
And my init.sh is -
cd ./Terraform
echo "arg: $1"
if [[ "$1" == "dev" || "$1" == "stage" || "$1" == "prod" ]];
then
  echo "environment: $1"
  terraform init -migrate-state -backend-config=backend.$1.conf -var-file=terraform.$1.tfvars
else
  echo "Wrong Argument"
  echo "Pass 'dev', 'stage' or 'prod' only."
fi
My main.tf is -
provider "aws" {
profile = "${var.profile}"
region = "${var.region}"
}
terraform {
backend "s3" {
}
}
And backend.dev.conf is -
bucket = "bucket-name"
key = "mystate.tfstate"
region = "ap-south-1"
profile = "dev"
Also, my terraform.dev.tfvars is -
region = "ap-south-1"
profile = "dev"
These work perfectly on my local machine (Mac M1), but CircleCI throws the backend error. Yes, I've added environment variables with my aws_secret_access_key and aws_access_key_id, and it still doesn't work.
I've seen so many tutorials and nothing seems to solve this, and I don't want to write AWS credentials into my code. Any idea how I can solve this?
Update:
I have updated my pipeline to this -
version: 2.1
orbs:
  python: circleci/python@1.5.0
  aws-cli: circleci/aws-cli@3.1.3
jobs:
  build:
    # will use a python 3.10.2 container
    docker:
      - image: cimg/python:3.10.2
    working_directory: ~/project
    # Checkout the code as the first step. This is a dedicated
    steps:
      - checkout
      - run:
          name: Check python version
          command: python --version
      - run:
          name: get current dir
          command: pwd
      - run:
          name: list of things in that
          command: ls -a
  aws-cli-cred-setup:
    executor: aws-cli/default
    steps:
      - aws-cli/setup:
          aws-access-key-id: aws_access_key_id
          aws-secret-access-key: aws_secret_access_key
          aws-region: region
      - run:
          name: get aws acc info
          command: aws sts get-caller-identity
  terraform-setup:
    executor: aws-cli/default
    working_directory: ~/project
    steps:
      - checkout
      - run:
          name: Install terraform
          command: bash scripts/install_tf.sh
      - run:
          name: Init infrastructure
          command: bash scripts/init.sh dev
    context: terraform
# Invoke jobs via workflows
workflows:
  dev_workflow:
    jobs:
      - build:
          filters:
            branches:
              only: main
      - aws-cli-cred-setup
      # context: aws
      - terraform-setup:
          requires:
            - aws-cli-cred-setup
But it still throws the same error.
You have probably added the aws_secret_access_key and aws_access_key_id to your project settings, but I don't see them being used in your pipeline configuration. You should do something like the following, so they are known at runtime:
version: 2.1
orbs:
  python: circleci/python@1.5.0
jobs:
  build:
    docker:
      - image: cimg/python:3.10.2
    working_directory: ~/project
    environment:
      AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
      AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
    steps:
      - run:
          name: Check python version
          command: python --version
      ...
I would advise you to read about environment variables in the documentation.
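If you want to confirm that the variables actually reach the job at runtime, a sanity-check step along these lines may help (a sketch; it only reports whether the variables are non-empty, never their values):
      - run:
          name: Check AWS variables are present
          command: |
            if [ -n "$AWS_ACCESS_KEY_ID" ] && [ -n "$AWS_SECRET_ACCESS_KEY" ]; then
              echo "AWS credentials are set"
            else
              echo "AWS credentials are missing"
              exit 1
            fi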
OK, I managed to fix this issue. You have to remove profile from the provider block and the other .tf files.
So my main.tf file is -
provider "aws" {
region = "${var.region}"
}
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 4.30"
}
}
backend "s3" {
}
}
And backend.dev.conf is -
bucket = "bucket"
key = "dev/xxx.tfstate"
region = "ap-south-1"
And most importantly, you have to put the access key, access key ID, and region inside CircleCI -> your project -> Environment Variables.
And you have to set up the AWS CLI on CircleCI, apparently inside a job, in config.yml -
version: 2.1
orbs:
  python: circleci/python@1.5.0
  aws-cli: circleci/aws-cli@3.1.3
jobs:
  build:
    # will use a python 3.10.2 container
    docker:
      - image: cimg/python:3.10.2
    working_directory: ~/project
  plan-apply:
    executor: aws-cli/default
    docker:
      - image: docker.mirror.hashicorp.services/hashicorp/terraform:light
    working_directory: ~/project
    steps:
      - checkout
      - aws-cli/setup:
          aws-access-key-id: aws_access_key_id
          aws-secret-access-key: aws_secret_access_key
          aws-region: region
      - run:
          name: get aws acc info
          command: aws sts get-caller-identity
      - run:
          name: Init infrastructure
          command: sh scripts/init.sh dev
      - run:
          name: Plan infrastructure
          command: sh scripts/plan.sh dev
      - run:
          name: Apply infrastructure
          command: sh scripts/apply.sh dev
.....
.....
This solved the issue. But you have to init, plan, and apply inside the job where you set up the AWS CLI. I might be wrong to do setup and plan inside the same job, but I'm learning now and this did the job. The API changed and old tutorials don't work nowadays.
Comment your suggestions if you have any.
Adding a profile to your backend will fix this issue. Example:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.30"
    }
  }
  backend "s3" {
    bucket  = "terraform-state"
    region  = "ap-south-1"
    key     = "dev/xxx.tfstate"
    profile = "myAwsCliProfile"
  }
}

Serverless framework is ignoring CLI options

I'm trying to dynamically pass in options to resolve when deploying my functions with serverless but they're always null or hit the fallback.
custom:
  send_grid_api: ${opt:sendgridapi, 'missing'}
  SubscribedUsersTable:
    name: !Ref UsersSubscriptionTable
    arn: !GetAtt UsersSubscriptionTable.Arn
  bundle:
    linting: false

provider:
  name: aws
  lambdaHashingVersion: 20201221
  runtime: nodejs12.x
  memorySize: 256
  stage: ${opt:stage, 'dev'}
  region: us-west-2
  environment:
    STAGE: ${self:provider.stage}
    SEND_GRID_API_KEY: ${self:custom.send_grid_api}
I've also tried:
environment:
  STAGE: ${self:provider.stage}
  SEND_GRID_API_KEY: ${opt:sendgridapi, 'missing'}
Both yield 'missing', but why?
sls deploy --stage=prod --sendgridapi=xxx
It also fails if I try a space instead of =.
Edit: Working Solution
In my GitHub Actions template, I defined the following:
- name: create env file
  run: |
    touch .env
    echo SEND_GRID_API_KEY=${{ secrets.SEND_GRID_KEY }} >> .env
    ls -la
    pwd
In addition, I explicitly set the working directory for this stage like so:
working-directory: /home/runner/work/myDir/myDir/
In my serverless.yml I added the following:
environment:
  SEND_GRID_API_KEY: ${env:SEND_GRID_API_KEY}
sls will read the contents from the file and load them properly.
opt is for serverless' CLI options. These are part of serverless, not your own code.
You can instead use...
provider:
  ...
  environment:
    ...
    SEND_GRID_API_KEY: ${env:SEND_GRID_API_KEY}
And pass the value as an environment variable in your deploy step.
- name: Deploy
  run: sls deploy --stage=prod
  env:
    SEND_GRID_API_KEY: "insert api key here"

AWS SSO authorization for EKS fails to call sts:AssumeRole

I'm migrating to AWS SSO for CLI access, which has worked for everything except kubectl so far.
While troubleshooting it I followed a few guides, which means I ended up with some cargo-cult behaviour, and I'm obviously missing something in my mental model.
aws sts get-caller-identity
{
    "UserId": "<redacted>",
    "Account": "<redacted>",
    "Arn": "arn:aws:sts::<redacted>:assumed-role/AWSReservedSSO_DeveloperReadonly_a6a1426b0fdf9f87/<my username>"
}
kubectl get pods
An error occurred (AccessDenied) when calling the AssumeRole operation: User: arn:aws:sts:::assumed-role/AWSReservedSSO_DeveloperReadonly_a6a1426b0fdf9f87/ is not authorized to perform: sts:AssumeRole on resource: arn:aws:iam:::role/aws-reserved/sso.amazonaws.com/us-east-2/AWSReservedSSO_DeveloperReadonly_a6a1426b0fdf9f87
It's amusing that it seems to be trying to assume the same role that it's already using, but I'm not sure how to fix it.
~/.aws/config (subset - I have other profiles, but they aren't relevant here)
[default]
region = us-east-2
output = json
[profile default]
sso_start_url = https://<redacted>.awsapps.com/start
sso_account_id = <redacted>
sso_role_name = DeveloperReadonly
region = us-east-2
sso_region = us-east-2
output = json
~/.kube/config (with clusters removed)
apiVersion: v1
contexts:
  - context:
      cluster: arn:aws:eks:us-east-2:<redacted>:cluster/foo
      user: ro
    name: ro
current-context: ro
kind: Config
preferences: {}
users:
  - name: ro
    user:
      exec:
        apiVersion: client.authentication.k8s.io/v1alpha1
        args:
          - --region
          - us-east-2
          - eks
          - get-token
          - --cluster-name
          - foo
          - --role
          - arn:aws:iam::<redacted>:role/aws-reserved/sso.amazonaws.com/us-east-2/AWSReservedSSO_DeveloperReadonly_a6a1426b0fdf9f87
        command: aws
        env: null
aws-auth mapRoles snippet
- rolearn: arn:aws:iam::<redacted>:role/AWSReservedSSO_DeveloperReadonly_a6a1426b0fdf9f87
  username: "devread:{{SessionName}}"
  groups:
    - view
What obvious thing am I missing? I've reviewed the other stackoverflow posts with similar issues, but none had the arn:aws:sts:::assumed-role -> arn:aws:iam:::role path.
.aws/config had a subtle error - [profile default] isn't meaningful, so the two blocks should have been merged into [default]. Only the non-default profiles should have profile in the name.
[default]
sso_start_url = https://<redacted>.awsapps.com/start
sso_account_id = <redacted>
sso_role_name = DeveloperReadonly
region = us-east-2
sso_region = us-east-2
output = json
[profile rw]
sso_start_url = https://<redacted>.awsapps.com/start
sso_account_id = <redacted>
sso_role_name = DeveloperReadWrite
region = us-east-2
sso_region = us-east-2
output = json
I also changed .kube/config to get the token based on the profile instead of naming the role explicitly. This fixed the AssumeRole failing since it used the existing role.
apiVersion: v1
contexts:
  - context:
      cluster: arn:aws:eks:us-east-2:<redacted>:cluster/foo
      user: ro
    name: ro
current-context: ro
kind: Config
preferences: {}
users:
  - name: ro
    user:
      exec:
        apiVersion: client.authentication.k8s.io/v1alpha1
        args:
          - --region
          - us-east-2
          - eks
          - get-token
          - --cluster-name
          - foo
          - --profile
          - default
        command: aws
        env: null
I can now run kubectl config use-context ro or the other profiles I've defined (omitted for brevity).
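For completeness, the day-to-day flow now looks roughly like this (a sketch, assuming the default profile from the config above and a freshly refreshed SSO session):
aws sso login --profile default
kubectl config use-context ro
kubectl get pods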
On a related note, I had some trouble getting an older Terraform version to work since the S3 backend didn't handle SSO. aws-vault solved this for me.

ERROR: Unable to resolve AWS account to use. It must be either configured when you define your CDK or through the environment

I'm learning how to use the AWS CDK. Here is my code. I want to run cdk deploy --profile myProfile, but I get "Unable to resolve AWS account to use. It must be either configured when you define your CDK or through the environment",
even though I am already specifying my credentials and region by using the following. Can anyone help me with that?
cdk doctor
ℹ️ CDK Version: 1.30.0 (build 4f54ff7)
ℹ️ AWS environment variables:
- AWS_PROFILE = myProfile
- AWS_SDK_LOAD_CONFIG = 1
ℹ️ CDK environment variables:
- CDK_DEPLOY_ACCOUNT = 096938481488
- CDK_DEPLOY_REGION = us-west-2
aws configure --profile myProfile
AWS Access Key ID [****************6LNQ]:
AWS Secret Access Key [****************d9iz]:
Default region name [us-west-2]:
Default output format [None]:
import core = require('@aws-cdk/core');
import dynamodb = require('@aws-cdk/aws-dynamodb');
import { AttributeType } from '@aws-cdk/aws-dynamodb';
import { App, Construct, Stack } from '@aws-cdk/core';

export class HelloCdkStack extends core.Stack {
  constructor(scope: core.App, id: string, props?: core.StackProps) {
    super(scope, id, props);

    new dynamodb.Table(this, 'MyFirstTable', {
      tableName: 'myTable1',
      partitionKey: {
        name: 'MyPartitionkey',
        type: AttributeType.NUMBER
      }
    });
  }
}

const app = new App();
new HelloCdkStack(app, 'first-stack-us', { env: { account: '***', region: 'us-west-2' } });
app.synth();
It should be the bug described in [master] CDK CLI Authentication Issues #1656.
If you have both ~/.aws/credentials and ~/.aws/config, they can't both have a default profile section.
cli: cdk deploy issue #3340
Removing [profile default] from ~/.aws/config solved the issue! I had both [default] and [profile default]. Please see #1656.
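As an illustration of that fix (a sketch with placeholder values, assuming the named profile is stored as [profile myProfile] and the keys stay in ~/.aws/credentials), the deduplicated ~/.aws/config would keep a single default section:
[default]
region = us-west-2
output = json
[profile myProfile]
region = us-west-2
output = json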
I resolved the issue by inserting the AWS keys into the "config" file inside the ~/.aws folder, and not into the "credentials" file.