ERROR: Unable to resolve AWS account to use. It must be either configured when you define your CDK or through the environment

I'm learning how to use the AWS CDK. When I run "cdk deploy --profile myProfile" on the code below, I get "Unable to resolve AWS account to use. It must be either configured when you define your CDK or through the environment", even though I have already specified my credentials and region as shown below. Can anyone help me with that?
cdk doctor
ℹ️ CDK Version: 1.30.0 (build 4f54ff7)
ℹ️ AWS environment variables:
- AWS_PROFILE = myProfile
- AWS_SDK_LOAD_CONFIG = 1
ℹ️ CDK environment variables:
- CDK_DEPLOY_ACCOUNT = 096938481488
- CDK_DEPLOY_REGION = us-west-2
aws configure --profile myProfile
AWS Access Key ID [****************6LNQ]:
AWS Secret Access Key [****************d9iz]:
Default region name [us-west-2]:
Default output format [None]:
import core = require('@aws-cdk/core');
import dynamodb = require('@aws-cdk/aws-dynamodb');
import { AttributeType } from '@aws-cdk/aws-dynamodb';
import { App, Construct, Stack } from "@aws-cdk/core";

export class HelloCdkStack extends core.Stack {
  constructor(scope: core.App, id: string, props?: core.StackProps) {
    super(scope, id, props);

    new dynamodb.Table(this, 'MyFirstTable', {
      tableName: 'myTable1',
      partitionKey: {
        name: 'MyPartitionkey',
        type: AttributeType.NUMBER
      }
    });
  }
}

const app = new App();
new HelloCdkStack(app, 'first-stack-us', { env: { account: '***', region: 'us-west-2' }});
app.synth();

This looks like the bug described in [master] CDK CLI Authentication Issues #1656.
If you have both ~/.aws/credentials and ~/.aws/config, they can't both contain a default profile section.
cli: cdk deploy issue #3340
Removing [profile default] from ~/.aws/config solved the issue! I had both [default] and [profile default]. Please see #1656
I resolved the issue by inserting the AWS keys in the "config" file inside the ~/.aws folder, and not inside the "credentials" file.
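Based on the first two answers, a minimal sketch of a non-conflicting layout, reusing the profile name and region from the question (keys elided, and no duplicate default section in either file):

~/.aws/credentials
[myProfile]
aws_access_key_id = ...
aws_secret_access_key = ...

~/.aws/config
[profile myProfile]
region = us-west-2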

Related

How can I use aws sso login and sso-sessions with the serverless-better-credentials serverless plugin?

I get ProcessCredentialsProviderFailure: Profile default not found when trying to run serverless info --aws-profile elevator-robot, which does not seem right since my ~/.aws/config looks like this:
[sso-session aphexlog]
sso_start_url = https://aphexlog.awsapps.com/start
sso_region = us-west-2
sso_registration_scopes = sso:account:access
[profile elevator-robot]
sso_session = aphexlog
sso_account_id = 12345678910
sso_role_name = AWSAdministratorAccess
region = us-east-1
output = json
serverless.yml:
service: dog
frameworkVersion: '3'

provider:
  name: aws
  runtime: nodejs18.x

functions:
  dog:
    handler: index.handler

plugins:
  - serverless-better-credentials
steps to reproduce:
run npm i --save-dev serverless-better-credentials
run aws sso login --profile elevator-robot
run serverless info --aws-profile elevator-robot
then you get the error
However, if I just export all my env variables (secret keys) then it works fine

How do I make my CloudFormation / CodePipeline update a domain name to point to an S3 bucket when using CDK?

I'm using CDK to deploy a CodePipeline that builds and deploys a React application to S3. All of that is working, but I want the deployment to update a domain name to point at that S3 bucket.
I already have the Zone defined in Route53 but it is defined by a different cloud formation stack because there are a lot of details that are not relevant for this app (MX, TXT, etc). What's the right way for my Pipeline/Stacks to set those domain names?
I could think of two solutions:
Delegate the domain to another zone, so zone example.com delegates staging.example.com.
Have my pipeline inject records into the existing zone.
I didn't try the delegation zone method. I was slightly concerned about manually maintaining the generated nameservers for staging.example.com in my CloudFormation for the example.com zone.
I did try injecting the records into the existing zone, but I ran into some issues. I'm open to either solving those issues or doing this whichever way is correct.
In my stack (full pipeline at the bottom) I first define and deploy to the bucket:
const bucket = new s3.Bucket(this, "Bucket", {...})

new s3d.BucketDeployment(this, "WebsiteDeployment", {
  sources: [...],
  destinationBucket: bucket
})
then I tried to retrieve the zone and add the CNAME to it:
const dnsZone = route53.HostedZone.fromLookup(this, "DNS zone", {domainName: "example.com"})

new route53.CnameRecord(this, "cname", {
  zone: dnsZone,
  recordName: "staging",
  domainName: bucket.bucketWebsiteDomainName
})
This fails due to lack of permissions to the zone, which is reasonable:
[Container] 2022/01/30 11:35:17 Running command npx cdk synth
current credentials could not be used to assume 'arn:aws:iam::...:role/cdk-hnb659fds-lookup-role-...-us-east-1', but are for the right account. Proceeding anyway.
[Error at /Pipeline/Staging] User: arn:aws:sts::...:assumed-role/Pipeline-PipelineBuildSynthCdkBuildProje-1H5AV7C28FZ3S/AWSCodeBuild-ada5ef88-cc82-4309-9acf-11bcf0bae878 is not authorized to perform: route53:ListHostedZonesByName because no identity-based policy allows the route53:ListHostedZonesByName action
Found errors
To try to solve that, I added rolePolicyStatements to my CodeBuildStep
rolePolicyStatements: [
  new iam.PolicyStatement({
    actions: ["route53:ListHostedZonesByName"],
    resources: ["*"],
    effect: iam.Effect.ALLOW
  })
]
which might make more sense in the context of the whole file (at the bottom of this question). That had no effect. I'm not sure if the policy statement is wrong or I'm adding it to the wrong role.
After adding those rolePolicyStatements, I ran cdk deploy, which showed me this output:
> cdk deploy
✨ Synthesis time: 33.43s
This deployment will make potentially sensitive changes according to your current security approval level (--require-approval broadening).
Please confirm you intend to make the following modifications:
IAM Statement Changes
┌───┬──────────┬────────┬───────────────────────────────┬───────────────────────────────────────────────────────────┬───────────┐
│ │ Resource │ Effect │ Action │ Principal │ Condition │
├───┼──────────┼────────┼───────────────────────────────┼───────────────────────────────────────────────────────────┼───────────┤
│ + │ * │ Allow │ route53:ListHostedZonesByName │ AWS:${Pipeline/Pipeline/Build/Synth/CdkBuildProject/Role} │ │
└───┴──────────┴────────┴───────────────────────────────┴───────────────────────────────────────────────────────────┴───────────┘
(NOTE: There may be security-related changes not in this list. See https://github.com/aws/aws-cdk/issues/1299)
After deployment finishes, there's a role that I can see in the AWS console that has:
{
  "Action": "route53:ListHostedZonesByName",
  "Resource": "*",
  "Effect": "Allow"
}
The ARN of the role is arn:aws:iam::...:role/Pipeline-PipelineBuildSynthCdkBuildProje-1H5AV7C28FZ3S. I'm not 100% sure the permissions are being granted to the right thing.
This is my whole CDK pipeline:
import * as path from "path";
import {Construct} from "constructs"
import * as pipelines from "aws-cdk-lib/pipelines"
import * as cdk from "aws-cdk-lib"
import * as s3 from "aws-cdk-lib/aws-s3"
import * as s3d from "aws-cdk-lib/aws-s3-deployment"
import * as iam from "aws-cdk-lib/aws-iam"
import * as route53 from "aws-cdk-lib/aws-route53";

export class MainStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StageProps) {
    super(scope, id, props)

    const bucket = new s3.Bucket(this, "Bucket", {
      websiteIndexDocument: "index.html",
      websiteErrorDocument: "error.html",
      publicReadAccess: true,
    })

    const dnsZone = route53.HostedZone.fromLookup(this, "DNS zone", {domainName: "example.com"})
    new route53.CnameRecord(this, "cname", {
      zone: dnsZone,
      recordName: "staging",
      domainName: bucket.bucketWebsiteDomainName
    })

    new s3d.BucketDeployment(this, "WebsiteDeployment", {
      sources: [s3d.Source.asset(path.join(process.cwd(), "../build"))],
      destinationBucket: bucket
    })
  }
}

export class DeployStage extends cdk.Stage {
  public readonly mainStack: MainStack

  constructor(scope: Construct, id: string, props?: cdk.StageProps) {
    super(scope, id, props)
    this.mainStack = new MainStack(this, "MainStack", {env: props?.env})
  }
}

export interface PipelineStackProps extends cdk.StackProps {
  readonly githubRepo: string
  readonly repoBranch: string
  readonly repoConnectionArn: string
}

export class PipelineStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props: PipelineStackProps) {
    super(scope, id, props)

    const pipeline = new pipelines.CodePipeline(this, id, {
      pipelineName: id,
      synth: new pipelines.CodeBuildStep("Synth", {
        input: pipelines.CodePipelineSource.connection(props.githubRepo, props.repoBranch, {connectionArn: props.repoConnectionArn}),
        installCommands: [
          "npm install -g aws-cdk"
        ],
        commands: [
          // First build the React app.
          "npm ci",
          "npm run build",
          // Now build the CF stack.
          "cd infra",
          "npm ci",
          "npx cdk synth"
        ],
        primaryOutputDirectory: "infra/cdk.out",
        rolePolicyStatements: [
          new iam.PolicyStatement({
            actions: ["route53:ListHostedZonesByName"],
            resources: ["*"],
            effect: iam.Effect.ALLOW
          })
        ]
      }),
    })

    const deploy = new DeployStage(this, "Staging", {env: props?.env})
    const deployStage = pipeline.addStage(deploy)
  }
}
You cannot depend on the CDK pipeline to fix itself if the synth stage is failing, since the pipeline's CloudFormation stack is changed in the SelfMutate stage, which uses the output of the synth stage. You will need to do one of the following options to fix your pipeline:
Run cdk synth and cdk deploy PipelineStack locally (or anywhere outside the pipeline, where you have the required AWS IAM permissions). Edit: You will need to temporarily set selfMutation to false for this to work (Reference); see the sketch after this list.
Temporarily remove route53.HostedZone.fromLookup and route53.CnameRecord from your MainStack while still keeping the rolePolicyStatements change. Commit and push your code, let CodePipeline run once, making sure that the Pipeline self mutates and the IAM role has the required additional permissions. Add back the route53 constructs, commit, push again and check whether your code works with the new changes.
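For the first option, a minimal sketch of what temporarily disabling self-mutation on the pipeline from the question could look like (selfMutation defaults to true; revert this once the pipeline is healthy again):

const pipeline = new pipelines.CodePipeline(this, id, {
  pipelineName: id,
  // Temporary: lets a local `cdk deploy` update the pipeline stack directly
  // instead of relying on the (currently broken) SelfMutate stage.
  selfMutation: false,
  synth: new pipelines.CodeBuildStep("Synth", {
    // ... same synth step as in the question ...
  }),
})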

Terraform AWS not accessing localstack

I'm having trouble getting a terraform AWS provider to talk to localstack. Whatever I try I just get the same error:
Error: error configuring Terraform AWS Provider: error validating provider credentials: error calling sts:GetCallerIdentity: InvalidClientTokenId: The security token included in the request is invalid.
status code: 403, request id: dc96c65d-84a7-4e64-947d-833195464538
This error suggests that the provider is making contact with an HTTP server but the credentials are being rejected (as with any 403). You might imagine the problem is that I'm feeding in the wrong credentials (through environment variables).
However, the hostname local-aws exists in my /etc/hosts file, while blahblahblah does not; yet if I swap the endpoint to point to http://blahblahblah:4566 I still get the same 403. So I think the problem is that the provider isn't using my local endpoint at all. I can't work out why.
resource "aws_secretsmanager_secret_version" "foo" {
secret_id = aws_secretsmanager_secret.foo.id
secret_string = "bar"
}
resource "aws_secretsmanager_secret" "foo" {
name = "rabbitmq_battery_emulator"
}
provider "aws" {
region = "eu-west-2"
endpoints {
secretsmanager = "http://local-aws:4566"
}
}
Firstly, check that LocalStack is configured to run sts. In docker-compose this was just the SERVICES environment variable:
services:
  local-aws:
    image: localstack/localstack
    environment:
      EDGE_PORT: 4566
      SERVICES: secretsmanager, sts
Then make sure that you set the sts endpoint as well as the service you require:
provider "aws" {
region = "eu-west-2"
endpoints {
sts = "http://local-aws:4566"
secretsmanager = "http://local-aws:4566"
}
}
In addition to the SERVICES and sts endpoint config mentioned by @philip-couling, I also had to remove a terraform block from my main.tf:
#terraform {
#  backend "s3" {
#    bucket = "valid-bucket"
#    key    = "terraform/state/account/terraform.tfstate"
#    region = "eu-west-1"
#  }
#  required_providers {
#    local = {
#      version = "~> 2.1"
#    }
#  }
#}

Terraform import fails due to erroneous region although defined in the provider

I am using the following tf configuration:
variable "aws_profile" {
description = "The AWS profile to use for this account"
}
provider "aws" {
version = "~> 2"
region = "us-east-1"
profile = "${var.aws_profile}"
}
provider "aws" {
version = "~> 2"
region = "us-west-2"
alias = "us_west_2"
profile = "profile-us-west-2"
}
where
cat ~/.aws/credentials
[profile-us-west-2]
region=us-west-2
aws_access_key_id = ΧΧΧΧΧΧΧΧΧΧΧ
aws_secret_access_key = ΧΧΧΧΧΧΧΧΧΧΧΧΧ
and I am trying to import an existing S3 bucket into the tf resource below:
resource "aws_s3_bucket" "my_tf_bucket" {
provider = "aws.us_west_2"
bucket = "my_tf_bucket"
with the following command:
terraform import aws_s3_bucket.my_tf_bucket existing_bucket_name
which fails as follows:
Error importing AWS S3 bucket policy: AuthorizationHeaderMalformed: The authorization header is malformed; the region 'us-east-1' is wrong; expecting 'us-west-2'
status code: 400, request id: 64242424244D21946, host id: bddw422424
Why isn't the provider alias working?
The problem was that (for some reason) terraform requires you to explicitly pass the provider value when importing:
terraform import --provider=aws.us_west_2 aws_s3_bucket.my_tf_bucket existing_bucket_name

How to setup AWS-SDK credentials in NextJS

I need to upload some files to S3 from a NextJS application. Since it is server side, I am under the impression that simply setting environment variables should work, but it doesn't. I know there are other alternatives, like assigning a role to EC2, but I want to use an accessKeyID and secretKey.
This is my next.config.js
module.exports = {
  env: {
    //..others
    AWS_ACCESS_KEY_ID: process.env.AWS_ACCESS_KEY_ID
  },
  serverRuntimeConfig: {
    //..others
    AWS_SECRET_ACCESS_KEY: process.env.AWS_SECRET_ACCESS_KEY
  }
}
This is my config/index.js
export default {
  //...others
  awsClientID: process.env.AWS_ACCESS_KEY_ID,
  awsClientSecret: process.env.AWS_SECRET_ACCESS_KEY
}
This is how I use in my code:
import AWS from 'aws-sdk'
import config from '../config'
AWS.config.update({
  accessKeyId: config.awsClientID,
  secretAccessKey: config.awsClientSecret,
});

const S3 = new AWS.S3()

const params = {
  Bucket: "bucketName",
  Key: "some key",
  Body: fileObject,
  ContentType: fileObject.type,
  ACL: 'public-read'
}

await S3.upload(params).promise()
I am getting this error:
Unhandled Rejection (CredentialsError): Missing credentials in config, if using AWS_CONFIG_FILE, set AWS_SDK_LOAD_CONFIG=1
If I hard code the credentials in code, it works fine.
How can I make it work correctly?
Looks like the Vercel docs are currently outdated (they show AWS SDK V2 instead of V3). You can pass a credentials object to the AWS service when you instantiate it. Use environment variables that are not reserved, for example by prefixing them with the name of your app.
.env.local
YOUR_APP_AWS_ACCESS_KEY_ID=[your key]
YOUR_APP_AWS_SECRET_ACCESS_KEY=[your secret]
Add these env variables to your Vercel deployment settings (or Netlify, etc) and pass them in when you start up your AWS service client.
import { S3Client } from '@aws-sdk/client-s3'
...
const s3 = new S3Client({
  region: 'us-east-1',
  credentials: {
    accessKeyId: process.env.TRENDZY_AWS_ACCESS_KEY_ID ?? '',
    secretAccessKey: process.env.TRENDZY_AWS_SECRET_ACCESS_KEY ?? '',
  },
})
(note: the ?? '' fallback handles the possibly-undefined env values so TypeScript stays happy)
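For completeness, a hedged sketch of an upload with that V3 client, mirroring the Bucket/Key/Body/ContentType/ACL parameters from the question (bucket name, key, and fileObject are placeholders taken from the question, not confirmed values for your app):

import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3'

// `s3` is the S3Client instantiated as shown above.
await s3.send(new PutObjectCommand({
  Bucket: 'bucketName',
  Key: 'some key',
  Body: fileObject,
  ContentType: fileObject.type,
  ACL: 'public-read',
}))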
Are you possibly hosting this app via Vercel?
As per the Vercel docs, some env variables are reserved by Vercel:
https://vercel.com/docs/concepts/projects/environment-variables#reserved-environment-variables
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
Maybe that's the reason why it is not picking up those env vars.
I was able to work around this by adding my own custom env variables into .env.local and then referencing those variables:
AWS.config.update({
  'region': 'us-east-1',
  'credentials': {
    'accessKeyId': process.env.MY_AWS_ACCESS_KEY,
    'secretAccessKey': process.env.MY_AWS_SECRET_KEY
  }
});
As a last step, you need to add these env variables in the Vercel UI.
This is obviously not an ideal solution and it is not recommended by AWS.
https://vercel.com/support/articles/how-can-i-use-aws-sdk-environment-variables-on-vercel
If I'm not mistaken, you want to make AWS_ACCESS_KEY_ID a runtime variable as well. Currently it is a build-time variable, which won't be accessible in your Node application.
// replace this
env: {
  //..others
  AWS_ACCESS_KEY_ID: process.env.AWS_ACCESS_KEY_ID
},

// with this
module.exports = {
  serverRuntimeConfig: {
    //..others
    AWS_ACCESS_KEY_ID: process.env.AWS_ACCESS_KEY_ID
  }
}
Reference: https://nextjs.org/docs/api-reference/next.config.js/environment-variables
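Values placed in serverRuntimeConfig are read through next/config rather than process.env, so (as a hedged sketch, reusing the names from the question's config/index.js and assuming both keys now live under serverRuntimeConfig) the config file could become:

import getConfig from 'next/config'

// serverRuntimeConfig is only populated on the server, which is where the upload runs.
const { serverRuntimeConfig } = getConfig()

export default {
  //...others
  awsClientID: serverRuntimeConfig.AWS_ACCESS_KEY_ID,
  awsClientSecret: serverRuntimeConfig.AWS_SECRET_ACCESS_KEY
}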