Using Pulumi, I created an EFS filesystem.
I want to add the mount to a launch configuration userdata by adding:
mount -t efs -o tls fs-xxx:/ /mnt/efs.
How can I add the efs.id to the launch configuration userdata?
(I can't convert an Output to a string.)
You can't convert an Output to a string, but you can build the string once the output has resolved. You do this with an apply.
You can also use the @pulumi/cloudinit package to make this easier.
The following example is in TypeScript, but the same approach applies in all the Pulumi SDKs:
import * as aws from "@pulumi/aws";
import * as cloudinit from "@pulumi/cloudinit";

const efsFs = new aws.efs.FileSystem("foo", {});

// Wait for the filesystem ID to resolve, then render the cloud-init config.
const userData = efsFs.id.apply(id => cloudinit.getConfig({
    gzip: false,
    base64Encode: false,
    parts: [{
        contentType: "text/cloud-config",
        content: JSON.stringify({
            // Provides the "efs" mount helper used below (assumes Amazon Linux).
            packages: ["amazon-efs-utils"],
            // cloud-init mount entries are lists: [fs_spec, fs_file, fs_vfstype, fs_mntops]
            mounts: [[`${id}:/`, "/mnt/efs", "efs", "tls,_netdev"]],
            bootcmd: [],
            runcmd: [],
        }),
    }],
}));
You can then pass userData.rendered to any resource you're trying to create.
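For example, a minimal sketch of wiring that rendered config into a launch configuration (the AMI ID and instance type here are illustrative placeholders, not from the question):

// Hypothetical usage - imageId is a placeholder AMI.
const launchConfig = new aws.ec2.LaunchConfiguration("web-lc", {
    imageId: "ami-0123456789abcdef0",
    instanceType: "t3.micro",
    userData: userData.rendered, // the rendered cloud-init document from above
});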
In AWS, to gain access to our RDS instance we set up a dedicated EC2 bastion host that we access securely by invoking the SSM Agent from the EC2 dashboard.
This is done by writing a shell script after connecting to the bastion host, but the script usually disappears after a certain time(?). So, is there any way to create this file using CDK when I create the bastion host?
I tried using CFN.init but to no avail.
this.bastionHost = new BastionHostLinux(this, "BastionHost", {
  vpc: inspireStack.vpc,
  subnetSelection: { subnetType: SubnetType.PRIVATE_WITH_NAT },
  instanceType: InstanceType.of(InstanceClass.T2, InstanceSize.MICRO),
  init: CloudFormationInit.fromConfigSets({
    configSets: {
      default: ["install"],
    },
    configs: {
      install: new InitConfig([
        InitCommand.shellCommand("cd ~"),
        InitFile.fromString("jomar.sh", "testing 123"),
        InitCommand.shellCommand("chmod +x jomar.sh"),
      ]),
    },
  }),
});
You can write files to an EC2 instance with cloud-init, either from an existing file or directly from the TypeScript code (a JSON object, for instance).
const ec2Instance = new ec2.Instance(this, 'Instance', {
  vpc,
  instanceType: ec2.InstanceType.of(
    ec2.InstanceClass.T4G,
    ec2.InstanceSize.MICRO,
  ),
  machineImage: new ec2.AmazonLinuxImage({
    generation: ec2.AmazonLinuxGeneration.AMAZON_LINUX_2,
    cpuType: ec2.AmazonLinuxCpuType.ARM_64,
  }),
  init: ec2.CloudFormationInit.fromConfigSets({
    configSets: {
      default: ['install', 'config'],
    },
    configs: {
      install: new ec2.InitConfig([
        // Write a JSON config file directly from the TS object
        ec2.InitFile.fromObject('/etc/config.json', {
          IP: ec2Eip.ref,
        }),
        // Copy a local file onto the instance
        ec2.InitFile.fromFileInline(
          '/etc/install.sh',
          './src/asteriskConfig/install.sh',
        ),
        ec2.InitCommand.shellCommand('chmod +x /etc/install.sh'),
        ec2.InitCommand.shellCommand('cd /tmp'),
        ec2.InitCommand.shellCommand('/etc/install.sh'),
      ]),
      config: new ec2.InitConfig([
        ec2.InitFile.fromFileInline(
          '/etc/asterisk/pjsip.conf',
          './src/asteriskConfig/pjsip.conf',
        ),
      ]),
    },
  }),
});
https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_ec2.CloudFormationInit.html
I see there are three simple workarounds:
The SSM start-session configuration contains a 'profile' section, where you can add your script as a bash function.
You can create an SSM document that writes this file, so before starting the session you only need to run that document to create the file (see the sketch after this list)...
Save the script on S3 and just download it.
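A minimal sketch of the SSM document workaround in CDK (assuming aws-cdk-lib/aws-ssm; the document name and script content are illustrative):

import * as ssm from 'aws-cdk-lib/aws-ssm';

// Hypothetical document that recreates the script on demand via Run Command.
new ssm.CfnDocument(this, 'CreateScriptDoc', {
  name: 'create-jomar-script',
  documentType: 'Command',
  content: {
    schemaVersion: '2.2',
    description: 'Recreate the helper script on the bastion host',
    mainSteps: [{
      action: 'aws:runShellScript',
      name: 'writeScript',
      inputs: {
        runCommand: [
          'echo "testing 123" > /home/ssm-user/jomar.sh',
          'chmod +x /home/ssm-user/jomar.sh',
        ],
      },
    }],
  },
});

Running this document against the bastion instance before starting your session recreates the file.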
Regarding the disappearing file - that's strange... The BastionHostLinux construct is similar to the Instance construct; try using Instance instead and create your script with user data, as sketched below.
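A minimal sketch of that user-data route (the instance details are placeholders, not from the question):

// Hypothetical instance - user data runs at first boot and writes the script.
const bastion = new ec2.Instance(this, 'Bastion', {
  vpc: inspireStack.vpc,
  instanceType: ec2.InstanceType.of(ec2.InstanceClass.T2, ec2.InstanceSize.MICRO),
  machineImage: ec2.MachineImage.latestAmazonLinux2(),
});
bastion.addUserData(
  'echo "testing 123" > /home/ec2-user/jomar.sh',
  'chmod +x /home/ec2-user/jomar.sh',
);

Note that user data normally runs only on first boot.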
My goal is to create a GCP CloudBuild Trigger using Pulumi. I'm using the Typescript client.
When creating a Google-managed secret (as opposed to customer-managed) I don't use KMS.
What would I put into the required (!) variable build.secrets[0].kmsKeyName? This is trivial when using KMS, but I found no "default" or "global" KMS name that would work when running the trigger with a Google-managed secret. I can create the trigger with a "fake" KMS name, but it doesn't run, complaining with:
Failed to trigger build: generic::invalid_argument: invalid build: invalid secrets: kmsKeyName "?WHAT TO PUT HERE?" is not a valid KMS key resource.
Thank you in advance for any suggestions.
import * as gcp from "@pulumi/gcp";

const ghToken = new gcp.secretmanager.Secret("gh-token", {
    secretId: "gh-token",
    replication: {
        automatic: true,
    },
});
const ghTokenSecretVersion = new gcp.secretmanager.SecretVersion("secret-version", {
    secret: ghToken.id,
    secretData: "the-secret-token",
});

const cloudBuild = new gcp.cloudbuild.Trigger("trigger-name", {
    github: {
        owner: "the-org",
        name: "repo-name",
        push: {
            branch: "^main$",
        },
    },
    build: {
        substitutions: {
            "_SERVICE_NAME": "service-name",
            "_DEPLOY_REGION": "deploy-region",
            "_GCR_HOSTNAME": "gcr.io",
        },
        steps: [
            {
                id: "Build",
                name: "gcr.io/cloud-builders/docker",
                entrypoint: "bash",
                args: [
                    "-c",
                    `docker build --no-cache
-t $_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA
--build-arg GH_TOKEN=$$GH_TOKEN
.
-f Dockerfile
`,
                ],
                secretEnvs: ["GH_TOKEN"],
            },
        ],
        tags: ["my-tag"],
        secrets: [
            {
                kmsKeyName: "?WHAT TO PUT HERE?",
                secretEnv: {
                    "GH_TOKEN": ghTokenSecretVersion.secretData,
                },
            },
        ],
    },
});
I don't think you can use a Secret Manager secret with Cloud Build through Pulumi. I solved it by creating a KMS key and encrypting my data using gcp.kms.SecretCiphertext. Here's what it looks like:
import * as gcp from "@pulumi/gcp";
import * as pulumi from "@pulumi/pulumi";

export const keyRing = new gcp.kms.KeyRing("keyring", {
    location: "global",
}, { protect: true });

export const secretsEncryptionKey = new gcp.kms.CryptoKey("secrets-key", {
    keyRing: keyRing.id,
    rotationPeriod: "100000s",
}, { protect: true });

const config = new pulumi.Config();

export const githubTokenCiphertext = new gcp.kms.SecretCiphertext("github-token", {
    cryptoKey: secretsEncryptionKey.id,
    plaintext: config.requireSecret("github-token"),
});
const cloudBuild = new gcp.cloudbuild.Trigger("trigger-name", {
    github: {...},
    build: {
        ...,
        secrets: [
            {
                kmsKeyName: githubTokenCiphertext.cryptoKey,
                secretEnv: {
                    "GH_TOKEN": githubTokenCiphertext.ciphertext,
                },
            },
        ],
    },
});
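One extra step that may be needed (an assumption based on Cloud Build's KMS requirements, not from the original answer): the Cloud Build service account must be allowed to decrypt the key. A sketch of granting that with Pulumi:

// Hypothetical IAM grant; Cloud Build's service account is <project number>@cloudbuild.gserviceaccount.com.
const project = gcp.organizations.getProjectOutput({});
new gcp.kms.CryptoKeyIAMMember("cloudbuild-decrypter", {
    cryptoKeyId: secretsEncryptionKey.id,
    role: "roles/cloudkms.cryptoKeyDecrypter",
    member: pulumi.interpolate`serviceAccount:${project.number}@cloudbuild.gserviceaccount.com`,
});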
Does anyone know a way to install the CloudWatch agent automatically on EC2 instances while launching them through a launch template/configuration in Terraform?
I have just struggled through the process myself and would have benefited from a clear guide. So here's my attempt to provide one (for Amazon Linux 2 AMI):
Create your CloudWatch agent configuration JSON file, which defines the metrics you want to collect. The easiest way is to SSH onto your EC2 instance and run this command to generate the file using the wizard: sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-config-wizard. This is what my file looks like; it is the most basic config, which only collects metrics on disk and memory usage every 60 seconds:
{
  "agent": {
    "metrics_collection_interval": 60,
    "region": "eu-west-1",
    "logfile": "/opt/aws/amazon-cloudwatch-agent/logs/amazon-cloudwatch-agent.log",
    "run_as_user": "root"
  },
  "metrics": {
    "metrics_collected": {
      "disk": {
        "measurement": [
          "used_percent"
        ],
        "metrics_collection_interval": 60,
        "resources": [
          "*"
        ]
      },
      "mem": {
        "measurement": [
          "mem_used_percent"
        ],
        "metrics_collection_interval": 60
      }
    }
  }
}
Create a shell script template file which will run when the EC2 instance is created. This is what mine looks like; it is called userdata.sh.tpl:
Content-Type: multipart/mixed; boundary="==BOUNDARY=="
MIME-Version: 1.0
--==BOUNDARY==
Content-Type: text/x-shellscript; charset="us-ascii"
#!/bin/bash
# Install the CloudWatch agent
sudo yum install -y amazon-cloudwatch-agent

# Write the CloudWatch agent configuration file
# (this must be the same path the -c flag below points at)
sudo cat > /opt/aws/amazon-cloudwatch-agent/bin/config.json <<EOF
{
  "agent": {
    "metrics_collection_interval": 60,
    "run_as_user": "root"
  },
  "metrics": {
    "metrics_collected": {
      "disk": {
        "measurement": [
          "used_percent"
        ],
        "metrics_collection_interval": 60,
        "resources": [
          "*"
        ]
      },
      "mem": {
        "measurement": [
          "mem_used_percent"
        ],
        "metrics_collection_interval": 60
      }
    }
  }
}
EOF

# Fetch the config written above and start the CloudWatch agent
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m ec2 -s -c file:/opt/aws/amazon-cloudwatch-agent/bin/config.json
--==BOUNDARY==--
Create a directory called templates in your Terraform module directory and store the userdata.sh.tpl file there.
Create a data block in the appropriate .tf file as follows:
data "template_file" "user_data" {
template = file("${path.module}/templates/userdata.sh.tpl")
vars = {
...
}
}
In your aws_launch_configuration block, pass in the following value for the user_data variable:
resource "aws_launch_configuration" "example" {
name = "example_server_name"
image_id = data.aws_ami.ubuntu.id
instance_type = "t2.micro"
user_data = data.template_file.user_data.rendered
}
Add the CloudWatchAgentServerPolicy policy to the IAM role used by your EC2 server. This will give your role all the required service-level permissions, e.g. "cloudwatch:PutMetricData".
Relaunch your EC2 server, and SSH on to check that the CloudWatch agent is installed and running, using systemctl status amazon-cloudwatch-agent.service.
Navigate to the CloudWatch UI and select Metrics from the left-hand menu. You should see CWAgent in the list of namespaces.
Yes, this can be achieved with a Bash script (assuming Linux).
Steps to consider
Create a userdata.sh template file
Use the templatefile function to link userdata.sh to the launch template
Write userdata to install the AWS CloudWatch agent (https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/install-CloudWatch-Agent-on-EC2-Instance.html)
Terminate/create the instance
Check that the CloudWatch agent is installed, up and running: systemctl status amazon-cloudwatch-agent
I have a requirement where I need to create a pipeline that takes a template YAML file as input and creates resources accordingly.
The approach I took is to provide the path of the template YAML file in the CodeBuild stage with the command:
"aws cloudformation deploy --template-file D:/pipeline/aws-waf.yml --stack-name waf-deployment"
export class PipelineStack extends Stack {
  constructor(app: App, id: string, props: PipelineStackProps) {
    super(app, id, props);

    const code = codecommit.Repository.fromRepositoryName(this, 'ImportedRepo',
      props.repoName);

    const cdkBuild = new codebuild.PipelineProject(this, 'CdkBuild', {
      buildSpec: codebuild.BuildSpec.fromObject({
        version: '0.2',
        phases: {
          install: {
            commands: 'npm install',
          },
          build: {
            commands: [
              'npm run build',
              'npm run cdk synth -- -o dist',
              'aws cloudformation deploy --template-file D:/pipeline/aws-waf.yml --stack-name waf-deployment',
              'echo $?',
            ],
          },
        },
        artifacts: {
          'base-directory': 'dist',
          files: [
            'LambdaStack.template.json',
          ],
        },
      }),
      environment: {
        buildImage: codebuild.LinuxBuildImage.AMAZON_LINUX_2_3,
      },
    });

    const sourceOutput = new codepipeline.Artifact();
    const cdkBuildOutput = new codepipeline.Artifact('CdkBuildOutput');
    //const lambdaBuildOutput = new codepipeline.Artifact('LambdaBuildOutput');

    new codepipeline.Pipeline(this, 'Pipeline', {
      stages: [
        {
          stageName: 'Source',
          actions: [
            new codepipeline_actions.CodeCommitSourceAction({
              actionName: 'CodeCommit_Source',
              repository: code,
              output: sourceOutput,
            }),
          ],
        },
        {
          stageName: 'Build',
          actions: [
            new codepipeline_actions.CodeBuildAction({
              actionName: 'CDK_Build',
              project: cdkBuild,
              input: sourceOutput,
              outputs: [cdkBuildOutput],
            }),
          ],
        },
      ],
    });
  }
}
I'm not fully sure exactly what you are looking for, so maybe consider updating your question to be more specific. However, I took the question as: what is the correct way to deploy CloudFormation/CDK given a template file in a CodePipeline?
The way we handle deployments of CloudFormation via CodePipeline is by leveraging CodeBuild and CodeDeploy. The pipeline sources the file/change from a repository (optional - you could use many other triggers), CodeBuild then uploads the file to S3 using the AWS CLI, and once the file has been uploaded to S3 you can use CodeDeploy to deploy CloudFormation from the source file in S3.
So for your example above, I would update the build to upload the new artifact to S3, and then create a new step in your pipeline that uses CodeDeploy to deploy that S3 template.
It's entirely possible to build a script/CodeBuild commands to do the CloudFormation deploy as well, but because CodeDeploy already supports tracking that change, error handling, etc., I would recommend CodeDeploy for CloudFormation deploys.
Note: if you are not using an existing CloudFormation template (JSON/YAML) and are instead using CDK, you will need to synthesize your CDK app into a CloudFormation template before uploading it to S3.
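As a sketch of that shape in CDK, here is one way to add a deploy stage using CodePipeline's built-in CloudFormation action (an alternative to CodeDeploy; this assumes the Pipeline above is assigned to a pipeline variable, and the stack name is illustrative):

import * as codepipeline_actions from 'aws-cdk-lib/aws-codepipeline-actions';

// Deploys the synthesized template from the build artifact produced above.
pipeline.addStage({
  stageName: 'Deploy',
  actions: [
    new codepipeline_actions.CloudFormationCreateUpdateStackAction({
      actionName: 'CFN_Deploy',
      stackName: 'LambdaStack', // illustrative stack name
      templatePath: cdkBuildOutput.atPath('LambdaStack.template.json'),
      adminPermissions: true, // broad for a sketch; scope down in real use
    }),
  ],
});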
How do I add custom logs to CloudWatch? Default logs are sent, but how do I add a custom one?
I already added a file like this (in .ebextensions):
files:
  "/opt/elasticbeanstalk/tasks/bundlelogs.d/applogs.conf" :
    mode: "000755"
    owner: root
    group: root
    content: |
      /var/app/current/logs/*

  "/opt/elasticbeanstalk/tasks/taillogs.d/cloud-init.conf" :
    mode: "000755"
    owner: root
    group: root
    content: |
      /var/app/current/logs/*
Since I added bundlelogs.d and taillogs.d, these custom logs are now tailed and can be retrieved from the console or the web. That's nice, but they don't persist and are not sent to CloudWatch.
In CloudWatch I have the default logs, like
/aws/elasticbeanstalk/InstanceName/var/log/eb-activity.log
And I want to have another one like this:
/aws/elasticbeanstalk/InstanceName/var/app/current/logs/mycustomlog.log
Both bundlelogs.d and taillogs.d are logs retrieved from the management console. What you want to do is extend the default logs (e.g. eb-activity.log) to CloudWatch Logs. In order to extend the log stream, you need to add another configuration under /etc/awslogs/config/. The configuration should follow the agent configuration file format.
I've successfully extended my logs for my custom ubuntu/nginx/php platform. Here is my extension file FYI. Here is an official sample FYI.
In your case, it could be like
files:
  "/etc/awslogs/config/my_app_log.conf" :
    mode: "000600"
    owner: root
    group: root
    content: |
      [/var/app/current/logs/xxx.log]
      log_group_name = `{"Fn::Join":["/", ["/aws/elasticbeanstalk", { "Ref":"AWSEBEnvironmentName" }, "var/app/current/logs/xxx.log"]]}`
      log_stream_name = {instance_id}
      file = /var/app/current/logs/xxx.log*
Credits where due go to Sebastian Hsu and Abhyudit Jain.
This is the final config file I came up with for .ebextensions for our particular use case. Notes explaining some aspects are below the code block.
files:
  "/etc/awslogs/config/beanstalklogs_custom.conf" :
    mode: "000600"
    owner: root
    group: root
    content: |
      [/var/log/tomcat8/catalina.out]
      log_group_name = `{"Fn::Join":["/", ["/aws/elasticbeanstalk", { "Fn::Select" : [ "1", { "Fn::Split" : [ "-", { "Ref":"AWSEBEnvironmentName" } ] } ] }, "var/log/tomcat8/catalina.out"]]}`
      log_stream_name = `{"Fn::Join":["--", [{ "Ref":"AWSEBEnvironmentName" }, "{instance_id}"]]}`
      file = /var/log/tomcat8/catalina.out*

services:
  sysvinit:
    awslogs:
      files:
        - "/etc/awslogs/config/beanstalklogs_custom.conf"

commands:
  rm_beanstalklogs_custom_bak:
    command: "rm beanstalklogs_custom.conf.bak"
    cwd: "/etc/awslogs/config"
    ignoreErrors: true
log_group_name
We have a standard naming scheme for our EB environments which is exactly environmentName-environmentType. I'm using { "Fn::Split" : [ "-", { "Ref":"AWSEBEnvironmentName" } ] } to split that into an array of two strings (name and type).
Then I use { "Fn::Select" : [ "1", <<SPLIT_OUTPUT>> ] } to get just the type string. Your needs would obviously differ, so you may only need the following:
log_group_name = `{"Fn::Join":["/", ["/aws/elasticbeanstalk", { "Ref":"AWSEBEnvironmentName" }, "var/log/tomcat8/catalina.out"]]}`
log_stream_name
I'm using the Fn::Join function to join the EB environment name with the instance ID. Note that the instance ID template is a string that gets echoed exactly as given.
services
The awslogs service is restarted automatically when the custom conf file is deployed.
commands
When the files block overwrites an existing file, it creates a backup file such as beanstalklogs_custom.conf.bak. This commands block erases that backup, because the awslogs service reads both files, which can cause a conflict.
Result
If you log in to an EC2 instance and sudo cat the file, you should see something like this. Note that all the Fn functions have resolved. If you find that an Fn function didn't resolve, check it for syntax errors.
[/var/log/tomcat8/catalina.out]
log_group_name = /aws/elasticbeanstalk/environmentType/var/log/tomcat8/catalina.out
log_stream_name = environmentName-environmentType--{instance_id}
file = /var/log/tomcat8/catalina.out*
The awslogs agent looks in its configuration file for the log files it's supposed to send. There are some defaults in it; you need to edit it and specify your files.
You can check and edit the configuration file located at:
/etc/awslogs/awslogs.conf
Make sure to restart the service:
sudo service awslogs restart
You can specify your own files there and create different groups and what not.
Please refer to the following link and you'll be able to get your logs in no time.
Resources:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AgentReference.html
Edit:
As you don't want to edit the files on the instance, you can add the relevant code to the .ebextensions folder in the root of your code. For example, this is my 01_cloudwatch.config:
packages:
  yum:
    awslogs: []

container_commands:
  01_get_awscli_conf_file:
    command: "aws s3 cp s3://project/awscli.conf /etc/awslogs/awscli.conf"
  02_get_awslogs_conf_file:
    command: "aws s3 cp s3://project/awslogs.conf.${NODE_ENV} /etc/awslogs/awslogs.conf"
  03_restart_awslogs:
    command: "sudo service awslogs restart"
  04_start_awslogs_at_system_boot:
    command: "sudo chkconfig awslogs on"
In this config, I am fetching the appropriate config file from an S3 bucket depending on the NODE_ENV. You can do anything you want in your config.
Some great answers already here.
I've detailed in a new Medium blog how this all works, with an example .ebextensions file and where to put it.
Below is an excerpt that you might be able to use, the article explains how to determine the right folder/file(s) to stream.
Note that if /var/app/current/logs/* contains many different files this may not work, e.g. if you have
database.log
app.log
random.log
Then you should consider adding a stream for each. However, if you have
app.2021-10-18.log
app.2021-10-17.log
app.2021-10-16.log
Then you can use /var/app/current/logs/app.*
packages:
  yum:
    awslogs: []

option_settings:
  - namespace: aws:elasticbeanstalk:cloudwatch:logs
    option_name: StreamLogs
    value: true
  - namespace: aws:elasticbeanstalk:cloudwatch:logs
    option_name: DeleteOnTerminate
    value: false
  - namespace: aws:elasticbeanstalk:cloudwatch:logs
    option_name: RetentionInDays
    value: 90

files:
  "/etc/awslogs/awscli.conf" :
    mode: "000600"
    owner: root
    group: root
    content: |
      [plugins]
      cwlogs = cwlogs
      [default]
      region = `{"Ref":"AWS::Region"}`

  "/etc/awslogs/config/logs.conf" :
    mode: "000600"
    owner: root
    group: root
    content: |
      [/var/app/current/logs]
      log_group_name = `{"Fn::Join":["/", ["/aws/elasticbeanstalk", { "Ref":"AWSEBEnvironmentName" }, "/var/app/current/logs"]]}`
      log_stream_name = {instance_id}
      file = /var/app/current/logs/*

commands:
  "01":
    command: systemctl enable awslogsd.service
  "02":
    command: systemctl restart awslogsd
Looking at the AWS docs it's not immediately apparent, but there are a few things you need to do.
(Our environment is an Amazon Linux AMI - Rails App on the Ruby 2.6 Puma Platform).
First, create a Policy in IAM to give your EB-generated EC2 instances access to work with CloudWatch log groups and stream to them - we named ours "EB-Cloudwatch-LogStream-Access".
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogStream",
        "logs:DescribeLogStreams",
        "logs:CreateLogGroup",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:*:*:log-group:/aws/elasticbeanstalk/*:log-stream:*"
    }
  ]
}
Once you have created this, make sure the policy is attached (in IAM > Roles) to your IAM Instance Profile and Service Role that are associated with your EB environment (check the environment's configuration page: Configuration > Security > IAM instance profile | Service Role).
Then, provide a .config file in your .ebextensions directory such as setup_stream_to_cloudwatch.config or 0x_setup_stream_to_cloudwatch.config. In our project we have made it the last extension .config file to run during our deploys by setting a high number for 0x (e.g. 09_setup_stream_to_cloudwatch.config).
Then, provide the following, replacing your_log_file with the appropriate filename, keeping in mind that some log files live in /var/log on an Amazon Linux AMI and some (such as those generated by your application) may live in a path such as /var/app/current/log:
files:
  '/etc/awslogs/config/logs.conf':
    mode: '000600'
    owner: root
    group: root
    content: |
      [/var/app/current/log/your_log_file.log]
      log_group_name = `{"Fn::Join":["/", ["/aws/elasticbeanstalk", { "Ref":"AWSEBEnvironmentName" }, "var/app/current/log/your_log_file.log"]]}`
      log_stream_name = {instance_id}
      file = /var/app/current/log/your_log_file.log*

commands:
  "01":
    command: chkconfig awslogs on
  "02":
    command: service awslogs restart # note that this works for Amazon Linux AMI only - other Linux instances likely use `systemd`
Deploy your application, and you should be set!