When building a Docker image with CodeBuild, you specify an imagedefinitions.json file. As I understand it, this file is used to describe the container definition of a task definition.
However, I would like to include environment variables, such as the secrets in this container definition:
{
    "containerDefinitions": [{
        "secrets": [{
            "name": "environment_variable_name",
            "valueFrom": "arn:aws:ssm:region:aws_account_id:parameter/parameter_name"
        }]
    }]
}
How can I update the imagedefinitions.json file to include this information?
You cannot include any parameter other than the container name and the image URI, as stated here.
The imagedefinitions.json file is used by CodePipeline in ECS deploy stages to pass those two fields to ECS, which creates a new task definition revision and eventually deploys it.
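For reference, a minimal imagedefinitions.json carries only those two fields, and it is usually generated in the post_build phase of the buildspec. A rough sketch of that step (the container name "web" and the image URI below are placeholders, not values from your setup):

# Hypothetical container name and ECR image URI -- substitute your own.
printf '[{"name":"web","imageUri":"%s"}]' \
  "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest" \
  > imagedefinitions.json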
containerDefinitions, on the other hand, is a property of a task definition; see the CloudFormation documentation for reference.
Hope it helps.
I am having a weird issue with CodePipeline + CodeDeploy. We have checked the AWS forums and Stack Overflow, but no one seems to have had this particular issue, and the suggestions for similar issues have already been taken into account without helping.
The issue in particular is the following:
We have a CodePipeline, and it happens that we "randomly" get the error:
(x) An AppSpec file is required, but could not be found in the revision
But the required file is in the revision. We have checked dozens of times: the files are there, with the same name and format as in the revisions that deploy without problems.
This is happening in the same deployment group, with the same configuration, so it is not a poorly configured group, because most of the time it works without issues.
Just to be sure, I add both .yml and .yaml versions to the revision. The appspec is as simple as this:
version: 0.0
Resources:
  - TargetService:
      Type: AWS::ECS::Service
      Properties:
        TaskDefinition: "arn:aws:ecs:us-east-1:xxxxxxxx:task-definition/my_app_cd:258"
        LoadBalancerInfo:
          ContainerName: "nginx_main"
          ContainerPort: 80
        PlatformVersion: null
I suspect the above error is related to a wrong configuration of your CodePipeline. To perform ECS CodeDeploy deployments, the provider in your pipeline's deploy stage must be "ECS (Blue/Green)", not "CodeDeploy" (CodeDeploy alone is used for EC2 deployments).
Even though it uses CodeDeploy in the back end, the name of the provider is "ECS (Blue/Green)".
The pipeline configuration can be checked with:
$ aws codepipeline get-pipeline --name <pipeline_name>
{
    "name": "Deploy",
    "blockers": null,
    "actions": [
        {
            "name": "Deploy",
            "actionTypeId": {
                "category": "Deploy",
                "owner": "AWS",
                "provider": "CodeDeploy",    <===== should be "CodeDeployToECS"
                "version": "1"
            },
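If the provider is wrong, you can fix it in the console, or from the CLI along these lines (a rough sketch; the pipeline name is a placeholder and jq is assumed to be installed):

# Dump the current pipeline structure.
aws codepipeline get-pipeline --name <pipeline_name> > pipeline.json
# Edit pipeline.json so the deploy action's provider is "CodeDeployToECS",
# then strip the read-only metadata block before pushing the update back.
jq 'del(.metadata)' pipeline.json > pipeline-update.json
aws codepipeline update-pipeline --cli-input-json file://pipeline-update.json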
I'm trying to change the maximum-event-age setting for my Lambdas. Serverless does not currently appear to support this setting, so I'm planning to do it with a bash script that runs after a deploy from GitHub.
Approach:
I'm considering querying AWS for the Lambdas in a specific CloudFormation stack. I'm guessing that when a repo is deployed, a new CF stack is created. Then I want to iterate over the functions and use put-function-event-invoke-config to change the maximum-event-age setting on each Lambda.
Problem:
put-function-event-invoke-config seems to require a function name, but when querying CF stacks I get the Lambda ARNs instead. I could do some string manipulation to extract just the Lambda name, but that seems like a messy way to do it.
Am I on the right track with this? Is there a better way?
Edit:
The Lambdas already exist and have been deployed. What I think I need to do is run some kind of script that goes through the list of Lambdas deployed from a single repository (there are multiple repos being deployed to the same environment) and changes the maximum-event-age setting, which defaults to 6 hours.
Here's an example of the output when I query CFN with aws cloudformation describe-stacks:
{
    "StackId": "arn:aws:cloudformation:us-east-1:***:stack/my-repository-name/0sdg70gfs-6124-12ea-a910-93c4ahj3d140",
    "StackName": "my-repository-name",
    "Description": "The AWS CloudFormation template for this Serverless application",
    "CreationTime": "2019-11-18T22:05:44.246Z",
    "LastUpdatedTime": "2019-03-19T23:26:04.382Z",
    "RollbackConfiguration": {},
    "StackStatus": "UPDATE_COMPLETE",
    "DisableRollback": false,
    "NotificationARNs": [],
    "Capabilities": [
        "CAPABILITY_IAM",
        "CAPABILITY_NAMED_IAM"
    ],
    "Outputs": [
        {
            "OutputKey": "TestLambdaFunctionQualifiedArn",
            "OutputValue": "arn:aws:lambda:us-east-1:***:function:my-test-function:3",
            "Description": "Current Lambda function version"
        },
I know that it is possible to run this command to change the maximum-event-age:
$ aws lambda --region us-east-1 put-function-event-invoke-config --function-name my-test-function --maximum-event-age-in-seconds 3600
But it appears to require --function-name, which I don't see in the CFN output from the query above.
How do I programmatically go through all of the functions in a CFN stack and modify the settings for maximum-event-age?
put-function-event-invoke-config accepts ARNs, which means one could query CFN by stack name, which corresponds to the repo the functions were deployed from.
However, I decided to use list-functions to query for Lambdas and then list-tags, because our deploys are tagged with repo names. It seemed like a better option than querying CFN (also, the ARNs in the CFN outputs carry a version suffix, which means put-function-event-invoke-config won't run on them directly).
Then I can run the text output through a for loop in bash and use put-function-event-invoke-config to add the maximum-event-age setting.
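In case it is useful to others, here is a minimal bash sketch of that loop. It assumes the functions carry a "repo" tag holding the repository name; the tag key, tag value, and region below are assumptions, so adjust them to your own tagging scheme:

#!/usr/bin/env bash
set -euo pipefail

REGION="us-east-1"
REPO="my-repository-name"   # assumed tag value identifying the repo

# Iterate over every function ARN in the region.
for arn in $(aws lambda list-functions --region "$REGION" \
    --query 'Functions[].FunctionArn' --output text); do
  # Look up the function's tags and keep only functions from our repo.
  tag=$(aws lambda list-tags --region "$REGION" --resource "$arn" \
      --query 'Tags.repo' --output text)
  if [ "$tag" = "$REPO" ]; then
    echo "Updating $arn"
    aws lambda put-function-event-invoke-config --region "$REGION" \
      --function-name "$arn" --maximum-event-age-in-seconds 3600
  fi
done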
I have a self-managed AWS cluster on which I am looking to run Docker containers.
(At present, ECS and EKS are not in scope, though they might be in the future; for now I need to focus on the present requirement.)
I need to add persistence to a few containers by attaching AWS EFS/EBS/S3FS storage (whichever is appropriate for the use case). AWS has addressed this use case in a lengthy and verbose blog post that brings ECS into the picture. As I said, my requirement is simple, and that article seems to do many extra things (CloudFormation, etc.).
I will appreciate it if anyone can simplify this and provide the bare-bones steps I need to follow.
1) I installed the EBS/EFS/S3FS drivers:
docker plugin install --grant-all-permissions rexray/ebs
and so on for EFS and S3FS too. The S3FS installation ran into trouble:
Error response from daemon: dial unix
/run/docker/plugins/b0b9c534158e73cb07011350887501fe5fd071585af540c2264de760f8e2c0d9/rexray.sock:
connect: no such file or directory
But this is not my problem for the moment, unless someone wants to volunteer to solve it.
Where I am stuck is: what are the next steps to create volumes, or to mount them directly to containers at run time as volumes or bind mounts (is that supported, or only volumes)?
Here are the steps for EC2-based ECS services (Fargate tasks do not support Docker volumes as of today):
Update your instance role to include the following permissions:
ec2:AttachVolume
ec2:CreateVolume
ec2:CreateSnapshot
ec2:CreateTags
ec2:DeleteVolume
ec2:DeleteSnapshot
ec2:DescribeAvailabilityZones
ec2:DescribeInstances
ec2:DescribeVolumes
ec2:DescribeVolumeAttribute
ec2:DescribeVolumeStatus
ec2:DescribeSnapshots
ec2:CopySnapshot
ec2:DescribeSnapshotAttribute
ec2:DetachVolume
ec2:ModifySnapshotAttribute
ec2:ModifyVolumeAttribute
ec2:DescribeTags
These should be allowed for all resources in the policy. N.B. the ec2:CreateVolume and ec2:DeleteVolume permissions can be omitted if you don't want to use autoprovisioning. (A CLI sketch for attaching this policy follows below.)
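If it helps, here is one way to attach those permissions from the CLI; a rough sketch that assumes the container instance role is called ecsInstanceRole (the role and policy names are placeholders, adjust them to your setup):

# "ecsInstanceRole" and "rexray-ebs" are assumed names -- substitute your own.
cat > rexray-ebs-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": [
      "ec2:AttachVolume", "ec2:CreateVolume", "ec2:CreateSnapshot",
      "ec2:CreateTags", "ec2:DeleteVolume", "ec2:DeleteSnapshot",
      "ec2:DescribeAvailabilityZones", "ec2:DescribeInstances",
      "ec2:DescribeVolumes", "ec2:DescribeVolumeAttribute",
      "ec2:DescribeVolumeStatus", "ec2:DescribeSnapshots",
      "ec2:CopySnapshot", "ec2:DescribeSnapshotAttribute",
      "ec2:DetachVolume", "ec2:ModifySnapshotAttribute",
      "ec2:ModifyVolumeAttribute", "ec2:DescribeTags"
    ],
    "Resource": "*"
  }]
}
EOF
aws iam put-role-policy --role-name ecsInstanceRole \
  --policy-name rexray-ebs --policy-document file://rexray-ebs-policy.json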
Install rexray on the instance (you've already done this)
If you're not using autoprovision, provision your volume and make sure there is a Name tag matching the name of the volume that you want to use in your service definition. In the example below, we set this value to rexray-vol.
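If you provision the volume yourself, something along these lines should do it (a rough sketch: the availability zone and size are placeholders, and the Name tag must match the volume name used in the task definition):

# Placeholder AZ/size; the Name tag must match the volume name below.
aws ec2 create-volume \
  --availability-zone us-east-1a \
  --size 5 \
  --volume-type gp2 \
  --tag-specifications 'ResourceType=volume,Tags=[{Key=Name,Value=rexray-vol}]'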
Update your task definition to include the values needed for the volume to be mounted as a Docker volume. Here is an example:
"volumes": [{
"name": "rexray-vol",
"dockerVolumeConfiguration": {
"autoprovision": true,
"scope": "shared",
"driver": "rexray/ebs",
"driverOpts": {
"volumetype": "gp2",
"size": "5"
}
}
}]
Update the task definition's container definition to refer to your swanky EBS volume:
"mountPoints": [
{
"containerPath": "/var/lib/mysql",
"sourceVolume": "rexray-vol"
}
],
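Since the question mentions that ECS is out of scope for now, it is worth noting that with the rexray/ebs plugin installed you can also create and mount the volume with plain Docker, outside ECS. A rough sketch (the image and mount path are just examples):

# Create an EBS-backed named volume through the rexray/ebs plugin.
docker volume create --driver rexray/ebs --opt size=5 rexray-vol
# Mount it into a container at run time ("mysql" and the path are examples).
docker run -d --name db --volume-driver rexray/ebs \
  -v rexray-vol:/var/lib/mysql mysql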
I am trying to set up a project on AWS. I am using CodePipeline to deploy my code to Elastic Beanstalk, and the source is coming from a git repository. This works fine.
The project has some configuration files (passwords and settings and such) that I don't want to include in the git repository. Since they are not in the git repository, they are not deployed by CodePipeline.
How can I include the configuration files in the CodePipeline without including them in the git repository?
Idea: I have tried adding an extra S3 source in the CodePipeline, containing the configuration files. I then had to add an extra deployment action to deploy the new S3 source. But then the two deployment processes get in conflict with each other, and only one of them succeeds. If I retry the one that fails, whatever was deployed by the one that succeeded is removed again. It doesn't seem to be possible to add two input artifacts (sources) to a single deployment action.
It is possible to use .ebextensions to copy files from an S3 bucket or another source during deployment. Amazon describes it well in their documentation.
Here is an example:
Resources:
  AWSEBAutoScalingGroup:
    Metadata:
      AWS::CloudFormation::Authentication:
        S3Auth:
          type: "s3"
          buckets: ["elasticbeanstalk-us-west-2-123456789012"]
          roleName:
            "Fn::GetOptionSetting":
              Namespace: "aws:autoscaling:launchconfiguration"
              OptionName: "IamInstanceProfile"
              DefaultValue: "aws-elasticbeanstalk-ec2-role"
files:
  "/tmp/data.json":
    mode: "000755"
    owner: root
    group: root
    authentication: "S3Auth"
    source: https://s3-us-west-2.amazonaws.com/elasticbeanstalk-us-west-2-123456789012/data.json
Rather than storing the configuration files in a repository I'd recommend using the Software Configuration feature that Elastic Beanstalk has.
Here's a related answer explaining how to do that: https://stackoverflow.com/a/17878600/7433105
If you want to model your config as a separate source action, then you would either have to have a build step that merges the source artifacts into one deployable artifact, or have some independent deployment process for the configuration that won't interfere with your application deployment (e.g. copy to S3 in a Lambda function, then pull the configuration down when your application starts).
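For the second option, the pull at startup can be as simple as the following sketch (the bucket, key, destination path, and start command are all placeholders, not part of the original setup):

#!/usr/bin/env bash
set -euo pipefail
# Fetch configuration that is not in the git repository (placeholder bucket/key/path).
aws s3 cp s3://my-config-bucket/myapp/production/config.json /var/app/current/config.json
# Then start the application (placeholder command).
exec node server.js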
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker_image.html#docker-singlecontainer-dockerrun-privaterepo
I am following the instructions here to connect to a private Docker Hub repository from Elastic Beanstalk, but it stubbornly refuses to work. It seems that when calling docker login in Docker 1.12, the resulting file has no email property, but it sounds like AWS expects one, so I created a file called dockercfg.json that looks like this:
{
    "https://index.docker.io/v1/": {
        "auth": "Y2...Fz",
        "email": "c...n#gmail.com"
    }
}
The relevant piece of my Dockerrun.aws.json file looks like this:
"Authentication": {
"Bucket": "elasticbeanstalk-us-west-2-9...4",
"Key": "dockercfg.json"
},
And I have the file uploaded at the root of the S3 bucket. Why do I still get errors saying Error: image c...6/w...t:23 not found. Check snapshot logs for details.? I am sure the names are right and that this would work if it were a public repository. The full error is below. I am deploying from GitHub with CircleCI, if that makes a difference; happy to provide any other information needed.
INFO: Deploying new version to instance(s).
WARN: Failed to pull Docker image c...6/w...t:23, retrying...
ERROR: Failed to pull Docker image c...6/w...t:23: Pulling repository docker.io/c...6/w...t
Error: image c...6/w...t:23 not found. Check snapshot logs for details.
ERROR: [Instance: i-06b66f5121d8d23c3] Command failed on instance. Return code: 1 Output: (TRUNCATED)...b-project
Error: image c...6/w...t:23 not found
Failed to pull Docker image c...6/w...t:23: Pulling repository docker.io/c...6/w...t
Error: image c...6/w...t:23 not found. Check snapshot logs for details.
Hook /opt/elasticbeanstalk/hooks/appdeploy/pre/03build.sh failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
INFO: Command execution completed on all instances. Summary: [Successful: 0, Failed: 1].
ERROR: Unsuccessful command execution on instance id(s) 'i-06b66f5121d8d23c3'. Aborting the operation.
ERROR: Failed to deploy application.
ERROR: Failed to deploy application.
EDIT: Here's the full Dockerrun file. Note that %BUILD_NUM% is just an int; I can verify that part works.
{
    "AWSEBDockerrunVersion": "1",
    "Authentication": {
        "Bucket": "elasticbeanstalk-us-west-2-9...4",
        "Key": "dockercfg.json"
    },
    "Image": {
        "Name": "c...6/w...t:%BUILD_NUM%",
        "Update": "true"
    },
    "Ports": [
        {
            "ContainerPort": "8080"
        }
    ]
}
EDIT: Also, I have verified that this works if I make this Docker Hub container public.
OK, let's do this.
Looking at the same doc page,
With Docker version 1.6.2 and earlier, the docker login command creates the authentication file in ~/.dockercfg in the following format:
{
    "server" : {
        "auth" : "auth_token",
        "email" : "email"
    }
}
You already got this part correct, I see. Please double-check the cases below, one by one:
1) Are you hosting the S3 bucket in the same region?
The Amazon S3 bucket must be hosted in the same region as the
environment that is using it. Elastic Beanstalk cannot download files
from an Amazon S3 bucket hosted in other regions.
2) Have you checked the required permissions?
Grant permissions for the s3:GetObject operation to the IAM role in
the instance profile. For details, see Managing Elastic Beanstalk
Instance Profiles.
3) Have you got your S3 bucket info in your config file? (I think you got this too)
Include the Amazon S3 bucket information in the Authentication (v1) or
authentication (v2) parameter in your Dockerrun.aws.json file.
I can't see your permissions or your environment's region, so please double-check those.
If that does not work, I'd upgrade to Docker 1.7+ if possible and use the corresponding ~/.docker/config.json style.
Depending on your Docker version, this file is saved as either ~/.dockercfg or ~/.docker/config.json.
cat ~/.docker/config.json
Output:
{
    "auths": {
        "https://index.docker.io/v1/": {
            "auth": "zq212MzEXAMPLE7o6T25Dk0i"
        }
    }
}
Important:
Newer versions of Docker create a configuration file as shown above with an outer auths object. The Amazon ECS agent only supports dockercfg authentication data that is in the below format, without the auths object. If you have the jq utility installed, you can extract this data with the following command:
cat ~/.docker/config.json | jq .auths
Output:
{
    "https://index.docker.io/v1/": {
        "auth": "zq212MzEXAMPLE7o6T25Dk0i",
        "email": "email#example.com"
    }
}
Create a file called my-dockercfg using the above content.
Upload the file to the S3 bucket and reference it with the specified key (my-dockercfg) in the Dockerrun.aws.json file:
{
    "AWSEBDockerrunVersion": 2,
    "authentication": {
        "bucket": "elasticbeanstalk-us-west-2-618148269374",
        "key": "my-dockercfg"
    }
}
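For completeness, the upload itself can be done with the CLI, for example with a one-liner that uses the bucket and key from the example above:

# Upload the auth file to the bucket/key referenced in Dockerrun.aws.json.
aws s3 cp my-dockercfg s3://elasticbeanstalk-us-west-2-618148269374/my-dockercfg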