I'm trying to figure out how to generate Service Specific Credentials for an IAM User with the AWS CDK.
I can see how to achieve this from:
Admin Console: IAM > Users > Security credentials:
HTTPS Git credentials for AWS CodeCommit, and
Credentials for Amazon Managed Apache Cassandra Service (MCS)
API: CreateServiceSpecificCredential
CLI: create-service-specific-credential
However, I can't see how to achieve this with the AWS CDK (or with CloudFormation, for that matter).
If this is not currently supported by the CDK, what would be the recommended approach?
Building on what @JeffreyGoines replied above, here is a Construct calling CreateServiceSpecificCredential:
import { Construct } from "constructs";
import {
  AwsCustomResource,
  AwsCustomResourcePolicy,
  PhysicalResourceId,
  PhysicalResourceIdReference,
} from "aws-cdk-lib/custom-resources";

export interface CodeCommitGitCredentialsProps {
  userName: string;
}

export class CodeCommitGitCredentials extends Construct {
  readonly serviceSpecificCredentialId: string;
  readonly serviceName: string;
  readonly serviceUserName: string;
  readonly servicePassword: string;
  readonly status: string;

  constructor(scope: Construct, id: string, props: CodeCommitGitCredentialsProps) {
    super(scope, id);

    // Create the Git credentials required
    const gitCredResp = new AwsCustomResource(this, "gitCredentials", {
      // https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/IAM.html#createServiceSpecificCredential-property
      onCreate: {
        service: "IAM",
        action: "createServiceSpecificCredential",
        parameters: {
          ServiceName: "codecommit.amazonaws.com",
          UserName: props.userName,
        },
        physicalResourceId: PhysicalResourceId.fromResponse(
          "ServiceSpecificCredential.ServiceSpecificCredentialId"
        ),
      },
      // https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/IAM.html#deleteServiceSpecificCredential-property
      onDelete: {
        service: "IAM",
        action: "deleteServiceSpecificCredential",
        parameters: {
          ServiceSpecificCredentialId: new PhysicalResourceIdReference(),
          UserName: props.userName,
        },
      },
      policy: AwsCustomResourcePolicy.fromSdkCalls({
        resources: AwsCustomResourcePolicy.ANY_RESOURCE,
      }),
    });

    this.serviceSpecificCredentialId = gitCredResp.getResponseField(
      "ServiceSpecificCredential.ServiceSpecificCredentialId"
    );
    this.serviceName = gitCredResp.getResponseField("ServiceSpecificCredential.ServiceName");
    this.serviceUserName = gitCredResp.getResponseField("ServiceSpecificCredential.ServiceUserName");
    this.servicePassword = gitCredResp.getResponseField("ServiceSpecificCredential.ServicePassword");
    this.status = gitCredResp.getResponseField("ServiceSpecificCredential.Status");
  }
}
And a usage example:
// User created for Git push/pull
this.user = new User(this, `codeCommitGitMirrorUser`, {
  userName: `${props.repository.repositoryName}-GitMirrorUser`,
});
props.repository.grantPullPush(this.user);

this.gitCredentials = new CodeCommitGitCredentials(this, "codeCommitGitCredentials", {
  userName: this.user.userName,
});
Here is my CodeBuild project's main page, which says the "Artifacts upload location" is "alpha-artifact-bucket" (screenshot omitted).
Here is one of the build runs, which is not using the above bucket (screenshot omitted).
What's the difference between the two? Why does every build run use a random bucket?
Is there any way to force CodeBuild to use the specified S3 bucket "alpha-artifact-bucket"?
CDK code
CodeBuild stack: I deploy this stack to each AWS account along the pipeline first, so that the pipeline stack can query each account, find its corresponding CodeBuild project, and add it as a "stage". The reason I'm doing this is that each account has a dedicated CodeBuild stage which needs to read some values from its Secrets Manager.
export interface CodeBuildStackProps extends Cdk.StackProps {
  readonly pipelineName: string;
  readonly pipelineRole: IAM.IRole;
  readonly pipelineStageInfo: PipelineStageInfo;
}

/**
 * This stack will create the CodeBuild project for the target AWS account.
 */
export class CodeBuildStack extends Cdk.Stack {
  constructor(scope: Construct, id: string, props: CodeBuildStackProps) {
    super(scope, id, props);

    // DeploymentRole will be assumed by PipelineRole to perform the CodeBuild step.
    const deploymentRoleArn: string = `arn:aws:iam::${props.env?.account}:role/${props.pipelineName}-DeploymentRole`;
    const deploymentRole = IAM.Role.fromRoleArn(
      this,
      `CodeBuild${props.pipelineStageInfo.stageName}DeploymentRoleConstructID`,
      deploymentRoleArn,
      {
        mutable: false,
        // Causes CDK to update the resource policy where required, instead of the Role
        addGrantsToResources: true,
      }
    );

    const buildspecFile = FS.readFileSync("./config/buildspec.yml", "utf-8");
    const buildspecFileYaml = YAML.parse(buildspecFile, {
      prettyErrors: true,
    });

    new CodeBuild.Project(this, `${props.pipelineStageInfo.stageName}ColdBuild`, {
      projectName: `${props.pipelineStageInfo.stageName}ColdBuild`,
      environment: {
        buildImage: CodeBuild.LinuxBuildImage.STANDARD_5_0,
      },
      buildSpec: CodeBuild.BuildSpec.fromObjectToYaml(buildspecFileYaml),
      role: deploymentRole,
      logging: {
        cloudWatch: {
          logGroup: new Logs.LogGroup(
            this,
            `${props.pipelineStageInfo.stageName}ColdBuildLogGroup`,
            {
              retention: Logs.RetentionDays.ONE_WEEK,
            }
          ),
        },
      },
    });
  }
}
Pipeline Stack:
export interface PipelineStackProps extends CDK.StackProps {
  readonly description: string;
  readonly pipelineName: string;
}
/**
 * This stack will contain our pipeline.
 */
export class PipelineStack extends CDK.Stack {
  private readonly pipelineRole: IAM.IRole;
  constructor(scope: Construct, id: string, props: PipelineStackProps) {
    super(scope, id, props);

    // Get the pipeline role from the pipeline AWS account.
    // The pipeline role will assume the "Deployment Role" of each AWS account to perform the actual deployment.
    const pipelineRoleName: string =
      "eCommerceWebsitePipelineCdk-Pipeline-PipelineRole";
    this.pipelineRole = IAM.Role.fromRoleArn(
      this,
      pipelineRoleName,
      `arn:aws:iam::${this.account}:role/${pipelineRoleName}`,
      {
        mutable: false,
        // Causes CDK to update the resource policy where required, instead of the Role
        addGrantsToResources: true,
      }
    );

    // Initialize the pipeline.
    const pipeline = new codepipeline.Pipeline(this, props.pipelineName, {
      pipelineName: props.pipelineName,
      role: this.pipelineRole,
      restartExecutionOnUpdate: true,
    });

    // Add a pipeline Source stage to fetch source code from the repository.
    const sourceCode = new codepipeline.Artifact();
    this.addSourceStage(pipeline, sourceCode);

    // For each AWS account, add a build stage and a deployment stage.
    pipelineStageInfoList.forEach((pipelineStageInfo: PipelineStageInfo) => {
      const deploymentRoleArn: string = `arn:aws:iam::${pipelineStageInfo.awsAccount}:role/${props.pipelineName}-DeploymentRole`;
      const deploymentRole: IAM.IRole = IAM.Role.fromRoleArn(
        this,
        `DeploymentRoleFor${pipelineStageInfo.stageName}`,
        deploymentRoleArn
      );

      const websiteArtifact = new codepipeline.Artifact();

      // Add a build stage to build the website artifact for the target AWS account.
      // Some environment variables will be retrieved from the target account's Secrets Manager.
      this.addBuildStage(
        pipelineStageInfo,
        pipeline,
        deploymentRole,
        sourceCode,
        websiteArtifact
      );

      // Add a deployment stage for the target AWS account to do the actual deployment.
      this.addDeploymentStage(
        props,
        pipelineStageInfo,
        pipeline,
        deploymentRole,
        websiteArtifact
      );
    });
  }
  // Add a Source stage to fetch code from the GitHub repository.
  private addSourceStage(
    pipeline: codepipeline.Pipeline,
    sourceCode: codepipeline.Artifact
  ) {
    pipeline.addStage({
      stageName: "Source",
      actions: [
        new codepipeline_actions.GitHubSourceAction({
          actionName: "Checkout",
          owner: "yangliu",
          repo: "eCommerceWebsite",
          branch: "main",
          oauthToken: CDK.SecretValue.secretsManager(
            "eCommerceWebsite-GitHubToken"
          ),
          output: sourceCode,
          trigger: codepipeline_actions.GitHubTrigger.WEBHOOK,
        }),
      ],
    });
  }
  private addBuildStage(
    pipelineStageInfo: PipelineStageInfo,
    pipeline: codepipeline.Pipeline,
    deploymentRole: IAM.IRole,
    sourceCode: codepipeline.Artifact,
    websiteArtifact: codepipeline.Artifact
  ) {
    const stage = new CDK.Stage(this, `${pipelineStageInfo.stageName}BuildId`, {
      env: {
        account: pipelineStageInfo.awsAccount,
      },
    });
    const buildStage = pipeline.addStage(stage);

    const targetProject: CodeBuild.IProject = CodeBuild.Project.fromProjectName(
      this,
      `CodeBuildProject${pipelineStageInfo.stageName}`,
      `${pipelineStageInfo.stageName}ColdBuild`
    );

    buildStage.addAction(
      new codepipeline_actions.CodeBuildAction({
        actionName: `BuildArtifactForAAAA${pipelineStageInfo.stageName}`,
        project: targetProject,
        input: sourceCode,
        outputs: [websiteArtifact],
        role: deploymentRole,
      })
    );
  }
  private addDeploymentStage(
    props: PipelineStackProps,
    pipelineStageInfo: PipelineStageInfo,
    pipeline: codepipeline.Pipeline,
    deploymentRole: IAM.IRole,
    websiteArtifact: codepipeline.Artifact
  ) {
    const websiteBucket = S3.Bucket.fromBucketName(
      this,
      `${pipelineStageInfo.websiteBucketName}ConstructId`,
      `${pipelineStageInfo.websiteBucketName}`
    );

    const pipelineStage = new PipelineStage(this, pipelineStageInfo.stageName, {
      stageName: pipelineStageInfo.stageName,
      pipelineName: props.pipelineName,
      websiteDomain: pipelineStageInfo.websiteDomain,
      websiteBucket: websiteBucket,
      env: {
        account: pipelineStageInfo.awsAccount,
        region: pipelineStageInfo.awsRegion,
      },
    });
    const stage = pipeline.addStage(pipelineStage);

    stage.addAction(
      new codepipeline_actions.S3DeployAction({
        actionName: `DeploymentFor${pipelineStageInfo.stageName}`,
        input: websiteArtifact,
        bucket: websiteBucket,
        role: deploymentRole,
      })
    );
  }
}
buildspec.yml:
version: 0.2
env:
  secrets-manager:
    REACT_APP_DOMAIN: "REACT_APP_DOMAIN"
    REACT_APP_BACKEND_SERVICE_API: "REACT_APP_BACKEND_SERVICE_API"
    REACT_APP_GOOGLE_MAP_API_KEY: "REACT_APP_GOOGLE_MAP_API_KEY"
phases:
  install:
    runtime-versions:
      nodejs: 14
    commands:
      - echo Performing yarn install
      - yarn install
  build:
    commands:
      - yarn build
artifacts:
  base-directory: ./build
  files:
    - "**/*"
cache:
  paths:
    - "./node_modules/**/*"
I figured this out. An aws-codepipeline pipeline has a built-in artifacts bucket (see: "CDK's CodePipeline or CodeBuildStep are leaving an S3 bucket behind, is there a way of automatically removing it?"). That is different from the CodeBuild artifacts.
Because my pipeline role in Account A needs to assume the deployment role in Account B to perform the CodeBuild step (of Account B), I need to grant the deployment role in Account B write permission to the pipeline's built-in artifacts bucket. So I need to do this:
pipeline.artifactBucket.grantReadWrite(deploymentRole);
I'm working through setting up a new infrastructure with the AWS CDK and I'm trying to get a TypeScript app running in Fargate to be able to read/write from/to a DynamoDB table, but am hitting IAM issues.
I have both my Fargate service and my DynamoDB Table defined, and both are running as they should be in AWS, but whenever I attempt to write to the table from my app, I am getting an access denied error.
I've tried the solutions defined in this post, as well as the ones it links to, but nothing seems to allow my container to write to the table. I've tried everything from setting table.grantReadWriteData(fargateService.taskDefinition.taskRole) to the more complex solutions described in the linked articles, such as defining my own IAM policies and setting the effects and actions, but I always get the same access denied error when attempting a putItem:
AccessDeniedException: User: {fargate-service-arn} is not authorized to perform: dynamodb:PutItem on resource: {dynamodb-table} because no identity-based policy allows the dynamodb:PutItem action
Am I missing something, or a crucial step to make this possible?
Any help is greatly appreciated.
Thanks!
Edit (2022-09-19):
Here is the boiled down code for how I'm defining my Vpc, Cluster, Container Image, FargateService, and Table.
export class FooCdkStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    const vpc = new Vpc(this, 'FooVpc', {
      maxAzs: 2,
      natGateways: 1
    });

    const cluster = new Cluster(this, 'FooCluster', { vpc });

    const containerImage = ContainerImage.fromAsset(path.join(__dirname, '/../app'), {
      platform: Platform.LINUX_AMD64 // I'm on an M1 Mac and images weren't working appropriately without this
    });

    const fargateService = new ApplicationLoadBalancedFargateService(this, 'FooFargateService', {
      assignPublicIp: true,
      cluster,
      memoryLimitMiB: 1024,
      cpu: 512,
      desiredCount: 1,
      taskImageOptions: {
        containerPort: PORT,
        image: containerImage
      }
    });

    fargateService.targetGroup.configureHealthCheck({ path: '/health' });

    const serverTable = new Table(this, 'FooTable', {
      billingMode: BillingMode.PAY_PER_REQUEST,
      removalPolicy: cdk.RemovalPolicy.DESTROY,
      partitionKey: { name: 'id', type: AttributeType.STRING },
      pointInTimeRecovery: true
    });

    serverTable.grantReadWriteData(fargateService.taskDefinition.taskRole);
  }
}
Apparently either the order in which the resources are defined matters, or referencing a property of the table in the Fargate service definition is what did the trick. I moved the table definition above the Fargate service, passed the table name to the container as an environment variable, and it's working as intended now.
export class FooCdkStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    const vpc = new Vpc(this, 'FooVpc', {
      maxAzs: 2,
      natGateways: 1
    });

    const cluster = new Cluster(this, 'FooCluster', { vpc });

    const containerImage = ContainerImage.fromAsset(path.join(__dirname, '/../app'), {
      platform: Platform.LINUX_AMD64 // I'm on an M1 Mac and images weren't working appropriately without this
    });

    const serverTable = new Table(this, 'FooTable', {
      billingMode: BillingMode.PAY_PER_REQUEST,
      removalPolicy: cdk.RemovalPolicy.DESTROY,
      partitionKey: { name: 'id', type: AttributeType.STRING },
      pointInTimeRecovery: true
    });

    const fargateService = new ApplicationLoadBalancedFargateService(this, 'FooFargateService', {
      assignPublicIp: true,
      cluster,
      memoryLimitMiB: 1024,
      cpu: 512,
      desiredCount: 1,
      taskImageOptions: {
        containerPort: PORT,
        image: containerImage,
        environment: {
          SERVER_TABLE_NAME: serverTable.tableName
        }
      }
    });

    fargateService.targetGroup.configureHealthCheck({ path: '/health' });

    serverTable.grantReadWriteData(fargateService.taskDefinition.taskRole);
  }
}
Hopefully this helps someone in the future who may come across the same issue.
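As a side note on the working setup above, the container then has to read SERVER_TABLE_NAME at runtime. A minimal sketch of that app-side wiring, assuming the environment key from the stack above (the helper function itself is illustrative, not from the original app):

```typescript
// Sketch: pick up the table name injected via the task definition's
// `environment` block. `getTableName` is a hypothetical helper.
function getTableName(env: Record<string, string | undefined> = process.env): string {
  const name = env.SERVER_TABLE_NAME;
  if (!name) {
    // Fail fast so a missing env wiring shows up at container startup,
    // not on the first PutItem call.
    throw new Error('SERVER_TABLE_NAME is not set');
  }
  return name;
}
```

The DynamoDB client inside the task would then target this table name, while the actual permissions still come from the grantReadWriteData call on the task role.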
I'm using AWS CDK to create an APIGateway. I want to attach a custom domain to my api so I can use api.findtechjobs.io. In the console, I can see I have a custom domain attached, however I always get a 403 response when using my custom domain.
Below is the following AWS CDK Stack I am using to create my API Gateway attached with a single lambda function.
The AWS CDK deploy succeeds; however, when I attempt to make a POST request to https://api.findtechjobs.io/search, AWS returns a 403 Forbidden response. I don't have a VPC, WAF, or an API key on this endpoint.
I am very uncertain why my custom domain is returning a 403 response. I have been reading a lot of documentation, and used answers from other questions and I still can't figure out what I am doing wrong.
How can I correctly associate api.findtechjobs.io with my API Gateway using the AWS CDK?
export class HostingStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props: cdk.StackProps) {
    super(scope, id, props);

    const zonefindtechjobsio = route53.HostedZone.fromLookup(this, 'findtechjobs.io', {
      domainName: 'findtechjobs.io'
    });

    const certificate = new acm.Certificate(this, 'APICertificate', {
      domainName: 'findtechjobs.io',
      subjectAlternativeNames: ['api.findtechjobs.io'],
      validation: acm.CertificateValidation.fromDns(zonefindtechjobsio),
    });

    const api = this.buildAPI(certificate);

    new route53.ARecord(this, 'AliasRecord api.findtechjobs.io', {
      zone: zonefindtechjobsio,
      recordName: `api`,
      target: route53.RecordTarget.fromAlias(new route53targets.ApiGateway(api)),
    });
  }

  private buildAPI(certificate: acm.Certificate) {
    // API
    const api = new apigateway.RestApi(this, 'techjobapi', {
      domainName: {
        domainName: 'findtechjobs.io',
        certificate: certificate
      },
      defaultCorsPreflightOptions: {
        allowOrigins: apigateway.Cors.ALL_ORIGINS, // TODO limit this when you go to prod
      },
      deploy: true,
      deployOptions: {
        stageName: 'dev',
      },
      endpointTypes: [apigateway.EndpointType.REGIONAL]
    });

    const searchResource = api.root.addResource('search', {
      defaultMethodOptions: {
        operationName: 'Search',
      },
    });

    searchResource.addMethod(
      'POST',
      new apigateway.LambdaIntegration(
        new lambda.Function(this, 'SearchLambda', {
          runtime: lambda.Runtime.GO_1_X,
          handler: 'main',
          code: lambda.Code.fromAsset(path.resolve('..', 'search', 'main.zip')),
          environment: {
            DB_NAME: '...',
            DB_CONNECTION: '...',
          },
        })
      ),
      {
        operationName: 'search',
      }
    );

    return api;
  }
}
Same problem here. After some struggle, I found out that the problem may lie in DNS: my domain was transferred from another registrar and the name servers had not been changed. After I changed them to the AWS name servers it worked, but I can't be 100% sure that was the cause.
I also found that the default API Gateway domain (d-lb4byzxxx.execute-api.ap-east-1.amazonaws.com) is always in a 403 Forbidden state.
I deployed an AWS Cognito UserPool via aws-cdk as CognitoStack.
Now, I want to automate testing of a GraphQL API that uses said AWS Cognito UserPool for authentication.
How can I programmatically get the UserPoolId required for authentication from CognitoStack?
My CognitoStack is:
export class CognitoStack extends Stack {
  public readonly userPool: UserPool;

  constructor(scope: App, id: string, props?: StackProps) {
    super(scope, id, props);

    this.userPool = new UserPool(this, 'UserPool', {
      signInAliases: {
        email: true,
        phone: false,
        username: false,
        preferredUsername: false,
      },
      autoVerify: {
        phone: false,
        email: false,
      },
      selfSignUpEnabled: true,
    });

    this.userPool.addClient('web', {
      authFlows: {
        refreshToken: true,
        userSrp: true,
      },
    });

    new CfnOutput(this, 'UserPoolId', {
      value: this.userPool.userPoolId,
    });
  }
}
When I do cdk deploy CognitoStack I get:
Outputs:
CognitoStack.UserPoolId = eu-central-1_xafasds
CognitoStack.ExportsOutputRefUserPool6BA7E5F296FD7236 = eu-central-1_xasdfdd
However, when I inspect cdk.out/CognitoStack.template.json (which I could easily require in my test), eu-central-1_xasdfdd does not appear anywhere.
The best way I've found so far is to use the --outputs-file flag when running the deploy command; that produces a JSON file with all the fields I need.
cdk deploy CognitoStack --outputs-file outputs.json
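The resulting file maps stack names to their outputs, so test setup can read the UserPoolId directly. A minimal sketch, assuming the stack and output names from the example above (the helper function is illustrative):

```typescript
import * as fs from 'fs';

// Shape of the file written by `cdk deploy --outputs-file`:
// { "<StackName>": { "<OutputName>": "<value>", ... }, ... }
interface CdkOutputs {
  [stackName: string]: { [outputName: string]: string };
}

// Read the UserPoolId output of CognitoStack from the outputs file.
function getUserPoolId(outputsFile: string): string {
  const outputs: CdkOutputs = JSON.parse(fs.readFileSync(outputsFile, 'utf-8'));
  return outputs['CognitoStack']['UserPoolId'];
}
```

A test harness can then call getUserPoolId('outputs.json') before authenticating against the pool.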
I have a CDK script that creates an S3 bucket, VPC, and an RDS instance. Deploy is working, but the destroy fails with an error that my user is not authorized to perform secretsmanager:DeleteSecret.
I used the IAM policy testing tool to check, and it passes. I am able to delete the secret via the console. The CDK destroy command continues to fail, though. Any thoughts?
CDK script:
class AcmeCdkStack extends cdk.Stack {
  constructor(scope: cdk.App, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // create a general purpose bucket for use with the app
    new s3.Bucket(this, 'app-bucket', {
      versioned: true
    });

    // create a vpc for our application
    const vpc = new ec2.Vpc(this, 'app-vpc', {
      cidr: "10.0.0.0/16",
    });

    // create a database instance
    const db = new rds.DatabaseInstance(this, `app-db`, {
      engine: rds.DatabaseInstanceEngine.POSTGRES,
      instanceClass: ec2.InstanceType.of(ec2.InstanceClass.T3, ec2.InstanceSize.MICRO),
      vpc,
      masterUsername: `dbadmin`,
      deleteAutomatedBackups: false,
      deletionProtection: false,
      // https://github.com/aws/aws-cdk/issues/4036
      removalPolicy: cdk.RemovalPolicy.DESTROY,
    });
  }
}

const app = new cdk.App();
new AcmeCdkStack(app, 'app-stack');
Error:
User: arn:aws:iam::0000000000:user/user#acme.com is not authorized to perform: secretsmanager:DeleteSecret on resource: arn:aws:secretsmanager:us-east-1:0000000000:secret:appdbdemoSecret0261-mjgIXOsp5rLL-HxFng1 (Service: AWSSecretsManager; Status Code: 400; Error Code: AccessDeniedException; Request ID: 000000000)
Based on the comments, the problem was that the CDK was using different credentials than expected. The solution was to run the destroy with the correct AWS_PROFILE set.