Use DockerImageAsset image in Asset bundling - amazon-web-services

I want to upload a local file in the repository to S3 after it has been processed by a custom Docker image, using the AWS CDK. I don't want to make the Docker image public (it's not a big restriction, though). Also, I don't want to build the image for each S3 deployment.
Since I don't want to build the Docker image for each bucket deployment, I created a DockerImageAsset and tried to pass its image URI as the bundling image of the BucketDeployment's source. The code is below:
const image = new DockerImageAsset(this, "cv-builder-image", {
  directory: join(__dirname, "../"),
});

new BucketDeployment(this, "bucket-deployment", {
  destinationBucket: bucket,
  sources: [
    Source.asset(join(__dirname, "../"), {
      bundling: {
        image: DockerImage.fromRegistry(image.imageUri),
        command: [
          "bash",
          "-c",
          'echo "heloo" >> /asset-input/cv.html && cp /asset-input/cv.html /asset-output/cv.html',
        ],
      },
    }),
  ],
});
The DockerImageAsset deploys fine, but the BucketDeployment throws this error during its deployment:
docker: invalid reference format: repository name must be lowercase
I can see the image being deployed to AWS.
Any help is appreciated. Have a nice day.

As far as I understand - to simplify - you have a Docker image which you use to launch a utility container that just takes a file and outputs an artifact (another file).
Then you want to upload the artifact to S3 using the BucketDeployment construct.
This is a common problem when dealing with apps that are compiled into artifacts, like Java apps into .jar files or frontend applications (React, Angular) into static output (HTML, CSS, JS) files.
The way I've approached this in the past is: split the artifact generation into a separate step in your pipeline and THEN trigger "cdk deploy" as a subsequent step.
You would have less of a headache, and you control all parts of the process, including having access to the low-level Docker commands like docker build ... and docker run ..., and in effect you leverage local layer caching in the best possible way. If you rely on CDK to do the bundling for you, there's a bit of magic behind the scenes that's not always obvious (here, for instance, bundling runs at synth time, when image.imageUri is still an unresolved token, which is most likely why Docker complains about an invalid reference). I'm not saying it's impossible, it's just more "work".
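For example, here is a rough sketch of that split, assuming aws-cdk-lib (v2) and an illustrative ./dist output directory: a pipeline step runs the container with plain docker commands to produce the artifact, and the CDK stack then only uploads the pre-built output (no bundling, no per-deploy image build). The bucket and stack are the same ones from your snippet.

// Pipeline step before `cdk deploy` (outside CDK), for example:
//   docker build -t cv-builder .
//   docker run --rm -v "$PWD:/asset-input" -v "$PWD/dist:/asset-output" cv-builder \
//     bash -c 'echo "hello" >> /asset-input/cv.html && cp /asset-input/cv.html /asset-output/cv.html'

// The stack then just ships the already-generated files:
import { join } from "path";
import { BucketDeployment, Source } from "aws-cdk-lib/aws-s3-deployment";

new BucketDeployment(this, "bucket-deployment", {
  destinationBucket: bucket, // same bucket as in your snippet
  sources: [Source.asset(join(__dirname, "../dist"))], // no bundling block needed
});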

Related

Next JS serverless deployment on AWS ECS/Fargate: environment variable issue

So my goal is to deploy a serverless, Dockerized Next.js application on ECS/Fargate.
When I docker build my project using the command docker build . -f development.Dockerfile --no-cache -t myapp:latest, everything runs successfully, except that the Docker build doesn't pick up the env file in my project's root directory. Once the build finishes, I push the Docker image to Elastic Container Registry (ECR) and my Elastic Container Service (ECS) references that ECR image.
So naturally, my built image doesn't have an env file (which contains the API keys and DB credentials), and as a result my app is deployed but all of the services relying on those credentials fail, because there is no env file in my container and all of the variables come back undefined or null.
To fix this issue I looked at this AWS doc and implemented a solution that stores my .env file in AWS S3 and references that S3 ARN from the container service. However, that didn't work out, and I think it's because of the way I'm setting my next.config.js to reference my environment files in my local codebase. I also tried to set my environment variables manually (very insecure) when configuring the container in my task definition, and that didn't work either.
My next.config.js:
const dotEnvConfig = { path: `../../${process.env.NODE_ENV}.env` };
require("dotenv").config(dotEnvConfig);

module.exports = {
  serverRuntimeConfig: {
    // Will only be available on the server side
    xyzKey: process.env.xyzSecretKey || "",
  },
  publicRuntimeConfig: {
    // Will be available on both server and client
    appUrl: process.env.app_url || "",
  },
};
So in my local codebase, in the root directory, I have two files, development.env (local API keys) and production.env (live API keys), and my next.config.js is located at /packages/app/next.config.js.
So apparently it was just Next.js's standard way of handling env variables.
In next.config.js
module.exports = {
  env: {
    user: process.env.SQL_USER || "",
    // add all the env var here
  },
};
To use the environment variable user in the app, all you have to do is call process.env.user, and user will reference SQL_USER from my local .env file, where it is stored as SQL_USER="abc_user".
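For example, in a page or API route you could read it like this (a minimal sketch; the file name and route are illustrative):

// pages/api/whoami.ts -- illustrative API route reading the mapped variable
import type { NextApiRequest, NextApiResponse } from "next";

export default function handler(req: NextApiRequest, res: NextApiResponse) {
  // process.env.user was mapped from SQL_USER via the env block in next.config.js
  res.status(200).json({ user: process.env.user || "not set" });
}

Note that values defined via the env block in next.config.js are inlined at build time, so they need to be available when the image is built.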
You should be setting the environment variables in the ECS task definition. In order to prevent storing sensitive values in the task definition you should use AWS Parameter Store, or AWS Secrets Manager, as documented here.
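For illustration, here is one way to wire that up in code with the AWS CDK (a sketch only; the parameter name, container image, and construct IDs are placeholders, and the surrounding stack is assumed to exist):

import * as ecs from "aws-cdk-lib/aws-ecs";
import * as ssm from "aws-cdk-lib/aws-ssm";

const taskDef = new ecs.FargateTaskDefinition(this, "AppTaskDef");

taskDef.addContainer("app", {
  image: ecs.ContainerImage.fromRegistry("123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:latest"),
  // non-sensitive configuration can stay as plain environment variables
  environment: { NODE_ENV: "production" },
  // sensitive values are resolved by ECS at container start, never baked into the image
  secrets: {
    SQL_USER: ecs.Secret.fromSsmParameter(
      ssm.StringParameter.fromStringParameterName(this, "SqlUser", "/myapp/SQL_USER"),
    ),
  },
});

The container then sees SQL_USER as a normal environment variable at runtime.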

How to disable encryption on AWS CodeBuild artifacts?

I'm using AWS CodeBuild to build an application, it is configured to push the build artifacts to an AWS S3 bucket.
On inspecting the artifacts/objects in the S3 bucket, I realised that the objects have been encrypted.
Is it possible to disable the encryption on the artifacts/objects?
There is now a checkbox named "Disable artifacts encryption" under the artifacts section which allows you to disable encryption when pushing artifacts to S3.
https://docs.aws.amazon.com/codebuild/latest/APIReference/API_ProjectArtifacts.html
I know this is an old post but I'd like to add my experience in this regard.
My requirement was to get front-end assets from a CodeCommit repository, build them, and put them in an S3 bucket. The S3 bucket is further connected with CloudFront for serving the static front-end content (written in React in my case).
I found that CloudFront is unable to serve KMS-encrypted content, as I got a KMS.UnrecognizedClientException when I hit the CloudFront URL. I tried to fix that, and disabling encryption on the AWS CodeBuild artifacts seemed to be the easiest solution when I found this.
However, I wanted to manage this using aws-cdk. This code snippet in TypeScript may come in handy if you're trying to solve the same issue using aws-cdk.
Firstly, get your necessary imports. For this answer it'd be the following:
import * as codecommit from '@aws-cdk/aws-codecommit';
import * as codebuild from '@aws-cdk/aws-codebuild';
Then, I used the following snippet in a class that extends the cdk Stack.
Note: The same should work if your class extends a cdk Construct.
// replace these according to your requirement
const frontEndRepo = codecommit.Repository
  .fromRepositoryName(this, 'ImportedRepo', 'FrontEnd');

const frontendCodeBuild = new codebuild.Project(this, 'FrontEndCodeBuild', {
  source: codebuild.Source.codeCommit({ repository: frontEndRepo }),
  buildSpec: codebuild.BuildSpec.fromObject({
    version: '0.2',
    phases: {
      build: {
        commands: [
          'npm install && npm run build',
        ],
      },
    },
    artifacts: {
      files: 'build/**/*',
    },
  }),
  artifacts: codebuild.Artifacts.s3({
    bucket: this.bucket, // replace with s3 bucket object
    includeBuildId: false,
    packageZip: false,
    identifier: 'frontEndAssetArtifact',
    name: 'artifacts',
    encryption: false, // added this to disable the encryption on codebuild
  }),
});
Also, to ensure that a build is triggered every time I push code to the repository, I added the following snippet in the same class.
// add the following line in your imports if you're using this snippet
// import * as targets from '@aws-cdk/aws-events-targets';
frontEndRepo.onCommit('OnCommit', {
  target: new targets.CodeBuildProject(frontendCodeBuild),
});
Note: This may not be a perfect solution, but it's working well for me till now. I'll update this answer if I find a better solution using aws-cdk
Artifact encryption cannot be disabled in AWS CodeBuild

How can I set up Continuous Integration of a Dockerized application to Elastic Beanstalk?

I'm new to Docker, and my previous experience is with deploying Java web applications (running in Tomcat containers) to Elastic Beanstalk. The pipeline I'm used to goes something like this: a commit is checked into git, which triggers a Jenkins job, which builds the application JAR (or WAR) file, publishes it to Artifactory, and then deploys that same JAR to an application in Elastic Beanstalk using eb deploy. (Apologies if "pipeline" is a reserved term; I'm using it conceptually.)
Incidentally, I'm also going to be using GitLab for CI/CD instead of Jenkins (due to organizational reasons out of my control), but the jump from Jenkins to GitLab seems straightforward to me -- certainly more so than the jump from deploying WARs directly to deploying Dockerized containers.
Moving over into the Docker world, I imagine the pipeline will go something like this: a commit is checked into git, which triggers the Gitlab CI, which will then build the JAR or WAR file, publish it to Artifactory, then use the Dockerfile to build the Docker image, publish that Docker image into Amazon ECR (maybe?)... and then I'm honestly not sure how the Elastic Beanstalk integration would proceed from there. I know it has something to do with the Dockerrun.aws.json file, and presumably needs to call the AWS CLI.
I just got done watching a webinar from Amazon called Running Microservices and Docker on AWS Elastic Beanstalk, which stated that in the root of my repo there should be a Dockerrun.aws.json file which essentially defines the integration to EB. However, it seems that JSON file contains a link to the individual Docker image in ECR, which is throwing me off. Wouldn't that link change every time a new image is built? I'm imagining that the CI would need to dynamically update the JSON file in the repo... which almost feels like an anti-pattern to me.
In the webinar I linked above, the host created his Docker image and pushed it to ECR manually, with the CLI. Then he manually uploaded the Dockerrun.aws.json file to EB. He didn't need to upload the application, however, since it was already contained within the Docker image. This all seems odd to me and I question whether I'm understanding things correctly. Will the Dockerrun.aws.json file need to change on every build? Or am I thinking about this the wrong way?
In the 8 months since I posted this question, I've learned a lot and we've already moved onto different and better technology. But I will post what I learned in answer to my original question.
The Dockerrun.aws.json file is almost exactly the same as an ECS task definition. It's important to use the Multi-Docker container deployment version of Beanstalk (as opposed to the single container), even if you're only deploying a single container. IMO they should just get rid of the single-container platform for Beanstalk as it's pretty useless. But assuming you have Beanstalk set to the Multi-Container Docker platform, then the Dockerrun.aws.json file looks something like this:
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "my-container-name-this-can-be-whatever-you-want",
      "image": "my.artifactory.com/docker/my-image:latest",
      "environment": [],
      "essential": true,
      "cpu": 10,
      "memory": 2048,
      "mountPoints": [],
      "volumesFrom": [],
      "portMappings": [
        {
          "hostPort": 80,
          "containerPort": 80
        }
      ],
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "/aws/elasticbeanstalk/my-image/var/log/stdouterr.log",
          "awslogs-region": "us-east-1",
          "awslogs-datetime-format": "%Y-%m-%d %H:%M:%S.%L"
        }
      }
    }
  ]
}
If you decide, down the road, to convert the whole thing to an ECS service instead of using Beanstalk, that becomes really easy, as the sample JSON above is converted directly to an ECS task definition by extracting the "containerDefinitions" part. So the equivalent ECS task definition might look something like this:
[
  {
    "name": "my-container-name-this-can-be-whatever-you-want",
    "image": "my.artifactory.com/docker/my-image:latest",
    "environment": [
      {
        "name": "VARIABLE1",
        "value": "value1"
      }
    ],
    "essential": true,
    "cpu": 10,
    "memory": 2048,
    "mountPoints": [],
    "volumesFrom": [],
    "portMappings": [
      {
        "hostPort": 0,
        "containerPort": 80
      }
    ],
    "logConfiguration": {
      "logDriver": "awslogs",
      "options": {
        "awslogs-group": "/aws/ecs/my-image/var/log/stdouterr.log",
        "awslogs-region": "us-east-1",
        "awslogs-datetime-format": "%Y-%m-%d %H:%M:%S.%L"
      }
    }
  }
]
Key differences here are that with the Beanstalk version, you need to map port 80 to port 80 because a limitation of running Docker on Beanstalk is that you cannot replicate containers on the same instance, whereas in ECS you can. This means that in ECS you can map your container port to host port "zero," which really just tells ECS to pick a random port in the ephemeral range which allows you to stack multiple replicas of your container on a single instance. Secondly with ECS if you want to pass in environment variables, you need to inject them directly into the Task Definition JSON. In Beanstalk world, you don't need to put the environment variables in the Dockerrun.aws.json file, because Beanstalk has a separate facility for managing environment variables in the console.
In fact, the Dockerrun.aws.json file should really just be thought of as a template. Because Docker on Beanstalk uses ECS under-the-hood, it simply takes your Dockerrun.aws.json as a template and uses it to generate its own Task Definition JSON, which injects the managed environment variables into the "environment" property in the final JSON.
One of the big questions I had at the time when I first asked this question was whether you had to update this Dockerrun.aws.json file every time you deployed. What I discovered is that it comes down to a choice of how you want to deploy things. You can, but you don't have to. If you write your Dockerrun.aws.json file so that the "image" property references the :latest Docker image, then there's no need to ever update that file. All you need to do is bounce the Beanstalk instance (i.e. restart the environment), and it will pull whatever :latest Docker image is available from Artifactory (or ECR, or wherever else you publish your images). Thus, all a build pipeline would need to do is publish the :latest Docker image to your Docker repository, and then trigger a restart of the Beanstalk environment using the awscli, with a command like this:
$ aws elasticbeanstalk restart-app-server --region=us-east-1 --environment-name=myapp
However, there are a lot of drawbacks to that approach. If you have a dev/unstable branch that publishes a :latest image to the same repository, you risk deploying that unstable branch if the environment happens to restart on its own. Thus, I would recommend versioning your Docker tags and only deploying the version tags. So instead of pointing to my-image:latest, you would point to something like my-image:1.2.3. This does mean that your build process would have to update the Dockerrun.aws.json file on each build. And then you also need to do more than just a simple restart-app-server.
In this case, I wrote some bash scripts that made use of the jq utility to programmatically update the "image" property in the JSON, replacing the string "latest" with whatever the current build version was. Then I would have to make a call to the awsebcli tool (note that this is a different package than the normal awscli tool) to update the environment, like this:
$ eb deploy myapp --label 1.2.3 --timeout 1 || true
Here I'm doing something hacky: the eb deploy command unfortunately takes FOREVER. (This was another reason we switched to pure ECS; Beanstalk is unbelievably slow.) That command hangs for the entire deployment time, which in our case could take up to 30 minutes or more. That's completely unreasonable for a build process, so I force the process to timeout after 1 minute (it actually continues the deployment; it just disconnects my CLI client and returns a failure code to me even though it may subsequently succeed). The || true is a hack that effectively tells Gitlab to ignore the failure exit code, and pretend that it succeeded. This is obviously problematic because there's no way to tell if the Elastic Beanstalk deployment really did fail; we're assuming it never does.
One more thing on using eb deploy: by default this tool will automatically try to ZIP up everything in your build directory and upload that entire ZIP to Beanstalk. You don't need that; all you need is to update the Dockerrun.aws.json. In order to do this, my build steps were something like this:
Use jq to update the Dockerrun.aws.json file with the latest version tag (a scripted sketch of steps 1 and 2 follows this list)
Use zip to create a new ZIP file called deploy.zip and put Dockerrun.aws.json inside it
Make sure a file called .elasticbeanstalk/config.yml is in place (described below)
Run the eb deploy ... command
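As a rough illustration of steps 1 and 2 above, here is a small Node/TypeScript script doing the same thing I did with jq (the BUILD_VERSION variable and file locations are assumptions, not part of the original setup):

// update-dockerrun.ts -- stamp the build version into Dockerrun.aws.json, then zip it
import { execSync } from "child_process";
import { readFileSync, writeFileSync } from "fs";

const version = process.env.BUILD_VERSION || "1.2.3"; // supplied by the CI pipeline

const dockerrun = JSON.parse(readFileSync("Dockerrun.aws.json", "utf8"));
for (const container of dockerrun.containerDefinitions) {
  // swap whatever tag is currently there (e.g. :latest) for the versioned tag
  container.image = container.image.replace(/:[^:/]+$/, `:${version}`);
}
writeFileSync("Dockerrun.aws.json", JSON.stringify(dockerrun, null, 2));

// the deploy bundle only needs the Dockerrun.aws.json file
execSync("zip deploy.zip Dockerrun.aws.json", { stdio: "inherit" });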
Then you need a file in the build directory at .elasticbeanstalk/config.yml which looks like this:
deploy:
  artifact: deploy.zip
global:
  application_name: myapp
  default_region: us-east-1
  workspace_type: Application
The awsebcli knows to automatically look for this file when you call eb deploy. And what this particular file says is to look for a file called deploy.zip instead of trying to ZIP up the whole directory itself.
So the :latest method of deployment is problematic because you risk deploying something unstable; the versioned method of deployment is problematic because the deployment scripts are more complicated, and because unless you want your build pipelines to take 30+ minutes, there's a chance that the deployment won't be successful and there's really no way to tell (outside of monitoring each deployment yourself).
Anyways, it's a bit more work to set up, but I would recommend migrating to ECS whenever you can. (Better still to migrate to EKS, though that's a lot more work.) Beanstalk has a lot of problems.

How to deploy Play on Amazon Beanstalk keeping /public editable for a single page application?

I am looking for alternative methods of deploying a Play application to Elastic Beanstalk. It is a single-page app that relies on Ember.js. It would be nice to be able to edit the contents of the /public folder so I don't need to rebuild the Docker image every time something is fixed on the Ember side that doesn't affect the Play app itself.
I am currently using sbt's docker:stage command and zipping the generated docker folder along with this Dockerfile and Dockerrun.
Dockerfile
FROM java:latest
WORKDIR /opt/docker
ADD stage /
RUN ["chown", "-R", "daemon:daemon", "."]
EXPOSE 9000
USER daemon
ENTRYPOINT ["bin/myapp", "-Dconfig.resource=application-prod.conf"]
CMD []
Dockerrun
{
  "AWSEBDockerrunVersion": "1",
  "Ports": [{ "ContainerPort": "9000" }],
  "Volumes": []
}
Once I zip the file, I upload it using the Beanstalk console. But this involves rebuilding the app every time a typo is fixed on the front end. It is annoying because it means all the updated front-end code has to wait until I get a chance to push it up so the boss can view it and give feedback. It would be nice if there were a way to have the /public folder (Play just serves /public/index.html) accessible so the front-end dev could access it directly for his edits.
Ideally I would like some method that can be used for both development and production. I don't know the requirements imposed by Beanstalk for it to properly spin up extra instances of the app when needed. Maybe something where, when the instance starts, it does a git pull on the backend repo and a git pull on the front-end repo, then runs my custom Ember build script to generate the /dist folder, moves it into Play's /public folder, and creates gzips of each file. Then it starts the Play app. Then the front-end dev could ssh into the development instance and do git pull and ember build as needed for his edits.
It would also be nice if the development instance ran the Play server using run or ~run, so I can just do git pull and have it rebuild the backend.
Or maybe I am approaching this in the completely wrong way. I have never done any of this before so I am sort of guessing my way through all of it.
Thanks for any suggestions and pointers in the correct direction.
Adam
Edit
Since we are really only using Play as a RESTful API, would it be better to just run an nginx/Apache server on something like EC2 and then use Beanstalk to handle the Play app, without it serving any content besides API calls? I would assume the EC2 nginx instance could be pretty tiny, since only the first access pulls files from the HTTP server; after that it is all API calls. Then we run the Play app from Beanstalk so it can handle load balancing for the API. This at least saves me from rebuilding the image for front-end edits. Would this be a more correct setup?

Best way to deploy play2 app using Amazon Beanstalk

I found fragmented instructions here and in some other places about deploying a Play 2 app on Amazon EC2, but did not find any neat way to deploy using Beanstalk.
Play is a nice framework and AWS Beanstalk is one of the most popular services, so why is there no official instruction for doing this?
Has anyone found any better solution?
Deploying a Play2 app on elastic beanstalk is now easy with Docker Containers in combination with sbt's experimental docker feature.
In build.sbt specify the exposed docker ports:
dockerExposedPorts in Docker := Seq(9000)
You should automate the following steps, but you can try this out manually to test that it works:
Generate a Dockerfile for the project by running the command: sbt docker:stage.
Go to the ./target/docker/ directory.
Create an elastic beanstalk Dockerrun.aws.json file with the following contents:
{
  "AWSEBDockerrunVersion": "1",
  "Ports": [
    {
      "ContainerPort": "9000"
    }
  ]
}
Zip up everything in that directory, let's say into a file called play2-test-docker.zip. The zip file should contain the files: Dockerfile, Dockerrun.aws.json, and files/* directory.
Go to the AWS Beanstalk console and create a new application using m3.medium or any instance type with enough memory for the JVM to run. Any instance with too little memory will result in a JVM error.
Select "Docker Container" in the Predefined Configuration dropdown.
In the application selection screen, select "Upload" and select the zip file you created earlier. Launch the app and then go brew some tea. This can take a very long time. Minutes. Subsequent deployments of the same app version should be slightly quicker.
Once the app is running and green in the aws console, click on the app's url and you should see the welcome screen of the application (or whatever your index file is).
Here's my solution that doesn't require any additional services/containers like Docker or Jenkins.
Create a dist folder in the root of your Play application's directory. Create a Procfile file containing the following contents and put it in the dist folder (EB requires port 5000):
web: ./bin/YOUR_APP_FILE_NAME -Dhttp.port=5000 -Dconfig.file=conf/application.conf
The YOUR_APP_FILE_NAME is the name of the executable in the bin directory, which is inside the .zip created by activator dist.
After running activator dist, you can just upload the created zip file into Elastic Beanstalk and it will automatically deploy the app. You also put whatever .ebextension folders and configuration files into the dist folder that you require for Elastic Beanstalk configuration. Ex. I have dist/.ebextensions/nginx/conf.d/proxy.conf for NGINX reverse proxy settings or dist/.ebextensions/env.config for environment variables.
Edit 2016: There's now a much better way to deploy your Playframework apps onto ElasticBeanstalk using the new Java SE containers.
Here's an article that walks you through deploying step by step using Jenkins to build and deploy your project:
https://www.davemaple.com/articles/deploy-playframework-elastic-beanstalk-jenkins/
You can use custom AMIs that I keep updated here:
https://github.com/davemaple/playframework-nginx-elastic-beanstalk
These run Nginx + Playframework and support standard zip files created using "activator dist".
We also saw this as being too much of a pain and have added native Play 2 support to Boxfuse to address this.
You can now simply do boxfuse run my-play-app-1.0.zip -env=prod and this will automatically:
create a minimal AMI tailor-made for your Play 2 app
create an elastic IP
create a security group with the correct permissions
launch an instance of your app
All future updates are performed as blue/green deployments with zero downtime.
This also works with Elastic Load Balancers and Auto-Scaling Groups and the Boxfuse free tier is designed to fit the AWS free tier.
You can read more about it here: https://boxfuse.com/blog/playframework-aws
Disclaimer: I'm the founder and CEO of Boxfuse
I had some problems with other solutions found here and there. I guess that the problem is that I'm developing on Play 2.4.
Anyway, I could deploy the app to Beanstalk using Typesafe Activator and Docker:
In build.sbt I added these lines:
import com.typesafe.sbt.packager.docker.{ExecCmd, Cmd}
// [...]
dockerCommands := Seq(
  Cmd("FROM", "java:openjdk-8-jre"),
  Cmd("MAINTAINER", "myname"),
  Cmd("EXPOSE", "9000"),
  Cmd("ADD", "stage /"),
  Cmd("WORKDIR", "/opt/docker"),
  Cmd("RUN", "[\"chown\", \"-R\", \"daemon\", \".\"]"),
  Cmd("RUN", "[\"chmod\", \"+x\", \"bin/myapp\"]"),
  Cmd("USER", "daemon"),
  Cmd("ENTRYPOINT", "[\"bin/myapp\", \"-J-Xms128m\", \"-J-Xmx512m\", \"-J-server\"]"),
  ExecCmd("CMD")
)
I went to the project's directory and ran this command in the terminal
$ ./activator clean docker:stage
I opened the [project]/target/docker directory and created the file Dockerrun.aws.json. This was its content:
{
  "AWSEBDockerrunVersion": "1",
  "Ports": [
    {
      "ContainerPort": "9000"
    }
  ]
}
In the same target/docker directory, I tested the result, built, checked and ran the image:
$ docker build -t myapp .
$ docker images
$ docker run -p 9000:9000 myapp
As everything was ok, I zipped the content:
$ zip -r myapp.zip *
My zip file contained the Dockerfile, Dockerrun.aws.json, and the stage/* files.
Finally, I created a new Beanstalk app and uploaded the zip created in the last step. I took care to select "Generic Docker" under "Predefined configuration" when I was creating the app.
Beanstalk only supports WAR deployment and Play doesn't officially support WAR deployment. If you want to use EC2 then you should instead just create an EC2 instance and follow the deployment instructions: http://www.playframework.com/documentation/2.2.x/ProductionDist
Deploying Play 2.* apps on AWS EC2 is quite different until you find this much better way to do it. Ansible promises a great solution to that, though it still requires working with a new Ansible setup and its playbook, but that should be worth the effort.
I found these reads very recently and have yet to apply them in my project. I hope the following will help you learn more:
Ansible + play + aws ec2
Read it to learn more about using Ansible to deploy Play on AWS.
Thanks!
Hope this helps you kick-start the process. Please do share any knowledge you gain during the procedure, or any simpler way to solve this complicated deployment problem.