According to the AWS Elastic Beanstalk docs,
"While Elastic Beanstalk deploys your file to your Amazon EC2 instances, you can view the deployment status on the environment's overview."
But I can't quite understand what exactly happens when I upload my source files to Elastic Beanstalk (EB).
As far as I know, when we upload a zip file to Elastic Beanstalk, it goes to S3 first.
But, as the documentation says, EC2 "deploys those files" with the proper environment.
For example, if I upload a .zip file containing a simple index.js (a simple Node.js server) and package.json (the file listing the dependencies) to Elastic Beanstalk, it goes to the S3 storage service, and EC2 deploys it with the proper commands like npm install, npm start, etc.
The question is: what exactly happens between the S3 upload and the EC2 deploy?
Does Elastic Beanstalk make the EC2 instance access S3 and fetch all the source files from there automatically, or do we have to manually connect those two AWS resources together?
I've been searching for this all day, but couldn't find anything, including in the official documentation.
Any advice or URL reference would be really helpful. Thank you.
I'm sure there are much better informed folks out there, as I am just starting out with AWS and EB in particular, but--since there are no answers to this now 8-month-old question--I'll share what I've learned so far:
First, the EB platform engine on the instance downloads the app artifact and manifest:
2022/03/28 23:26:45.063522 [INFO] Downloading EB Application...
2022/03/28 23:26:45.063548 [INFO] Download app version manifest
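Under the hood, the uploaded bundle becomes an application version, which is essentially a label pointing at the S3 object; that is why the instance can fetch it without any manual wiring between S3 and EC2. A hypothetical AWS CLI equivalent of roughly what eb deploy sets up (all names are placeholders):
aws elasticbeanstalk create-application-version --application-name my-app --version-label v1 --source-bundle S3Bucket=my-bucket,S3Key=app.zip
aws elasticbeanstalk update-environment --environment-name my-env --version-label v1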
Then it performs some internal magic to ensure the instance is ready to host the app.
Then it extracts the app artifact and--depending on the platform stack--downloads dependencies and starts the app:
2022/03/28 23:26:47.074138 [INFO] Executing instruction: StageApplication
2022/03/28 23:26:47.223041 [INFO] extracting /opt/elasticbeanstalk/deployment/app_source_bundle to /var/app/staging/
2022/03/28 23:26:52.001284 [INFO] Executing instruction: RunAppDeployPreBuildHooks
2022/03/28 23:26:52.001312 [INFO] Executing platform hooks in .platform/hooks/prebuild/
2022/03/28 23:26:52.049546 [INFO] installing node version 16.14.0
2022/03/28 23:26:52.049555 [INFO] Running command /bin/sh -c uname -m
2022/03/28 23:26:54.962177 [INFO] Executing instruction: Use NPM to install dependencies
2022/03/28 23:26:54.962214 [INFO] checking package.json file
2022/03/28 23:26:54.962224 [INFO] found package.json file, using npm to start application
Then more platform magic follows
....
2022/03/28 23:26:58.109415 [INFO] Executing instruction: RunAppDeployPostDeployHooks
2022/03/28 23:26:58.109428 [INFO] Executing platform hooks in .platform/hooks/postdeploy/
2022/03/28 23:26:58.109448 [INFO] Executing cleanup logic
2022/03/28 23:26:58.109570 [INFO] CommandService Response: {"status":"SUCCESS","api_version":"1.0","results":[{"status":"SUCCESS","msg":"Engine execution has succeeded.","returncode":0,"events":[{"msg":"Instance deployment completed successfully.","timestamp":1648510018,"severity":"INFO"}]}]}
And the app deployment is complete:
2022/03/28 23:26:58.109728 [INFO] Platform Engine finished execution on command: app-deploy
You can see the details of what takes place in the eb-engine log.
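If you want to read it yourself, you can pull the logs through the eb CLI or open the file on the instance directly (the path, /var/log/eb-engine.log, is the standard one on Amazon Linux 2):
eb logs --all
eb ssh
less /var/log/eb-engine.log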
Related
I am deploying a Spring Boot WAR to a single-instance AWS Elastic Beanstalk environment and trying to run a postdeploy script.
I have successfully had .ebextensions scripts executed and attempted to follow the same pattern for the .platform/hooks/postdeploy directory, but unfortunately EB isn't able to find the directory.
I get the following in the eb-engine.log:
[INFO] Executing platform hooks in .platform/hooks/postdeploy/
[INFO] The dir .platform/hooks/postdeploy/ does not exist
[INFO] Finished running scripts in /var/app/current/.platform/hooks/postdeploy
[INFO] Executing cleanup logic
I have verified that the directory and script are placed inside the WAR file under /WEB-INF/classes:
The directory structure under .platform is .platform/hooks/postdeploy/myscript.sh
The EB environment runs Amazon Linux 2.
Any ideas why EB can't find the .platform/hooks/postdeploy directory? When I cd into /var/app I see a jar file and a Procfile.
In my case, I spent one entire week wondering why my configs were not correctly detected.
As mentioned here, the correct path for .platform is {project_root_directory}/.platform/, and I thought I was doing everything fine.
But it needs to look like this inside your archive! The root of the zip must contain the .platform directory.
In my case it was: foobar.zip/{project_root_directory}/.platform/
But it must be: foobar.zip/.platform/
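In other words, build the archive from inside the project root rather than zipping the project folder itself. A minimal sketch (the archive name is only an example):
cd {project_root_directory}
zip -r ../foobar.zip .
This way .platform/ ends up at the root of foobar.zip, which is what EB expects.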
Related
Is there a way to define a docker-compose file with a different name than docker-compose.yml when deploying a docker application (with full source code) to Elastic Beanstalk with the eb CLI?
Details:
We are currently deploying the test stage of an application to Elastic Beanstalk using the eb CLI. This works without any problem as long as we provide a docker-compose.yml. In that case Elastic Beanstalk gets the complete source code and builds the images during the deployment. However, since the CI/CD pipeline of our production stage also uses the docker-compose.yml, we need to rename the file to docker-compose.test.yml. Is there a way to upload the complete source code AND define a docker-compose file when using the eb CLI?
If you are on Amazon Linux 2, you can use the prebuild hook to rename your docker compose file:
File location: .platform/hooks/prebuild/docker_compose_cp.sh
#!/bin/bash
cp docker-compose.test.yml docker-compose.yml
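Note that, like any platform hook, the script must be executable before it is bundled; otherwise the deployment fails with a permission denied error:
chmod +x .platform/hooks/prebuild/docker_compose_cp.sh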
You should see output in the eb-engine.log during deployment indicating that the script has run:
[INFO] Executing platform hooks in .platform/hooks/prebuild/
[INFO] Following scripts will be executed in order: [docker_compose_cp.sh]
Related
I'm trying to get a docker-compose deployment to AWS Elastic Beanstalk working, in which the docker images are pulled from a private registry hosted by GitLab.
The strange thing is that the initial deployment works perfectly: it pulls the image from the private registry, starts the containers using docker-compose, and the webpage (served by Django) is accessible through the host.
Deploying a new version using the same docker-compose file and the same docker image, however, results in an error while pulling the docker image:
2021/03/16 09:28:34.957094 [ERROR] An error occurred during execution of command [app-deploy] - [Run Docker Container]. Stop running the command. Error: failed to run docker containers: Command /bin/sh -c docker-compose up -d failed with error exit status 1. Stderr:Building with native build. Learn about native build in Compose here: https://docs.docker.com/go/compose-native-build/
Creating network "current_default" with the default driver
Pulling redis (redis:alpine)...
Pulling mysql (mysql:5.7)...
Pulling project.dockertest (registry.gitlab.com/company/spikes/dockertest:latest)...
Get https://registry.gitlab.com/v2/company/spikes/dockertest/manifests/latest: denied: access forbidden
2021/03/16 09:28:34.957104 [INFO] Executing cleanup logic
Setup
AWS Elastic Beanstalk 64bit Amazon Linux 2/3.2
GitLab registry credentials are stored within an S3 bucket, with the filename .dockercfg and the following content:
{
  "auths": {
    "registry.gitlab.com": {
      "auth": "base64 encoded username:personal_access_token"
    }
  },
  "HttpHeaders": {
    "User-Agent": "Docker-Client/18.03.1-ce (linux)"
  }
}
The repository contains a v3 Dockerrun.aws.json file to refer to the credential file in S3:
{
  "AWSEBDockerrunVersion": "3",
  "Authentication": {
    "bucket": "gitlab-dockercfg",
    "key": ".dockercfg"
  }
}
Reproduce
Set up a docker-compose.yml that uses a service with a private docker image (which can be pulled with the credentials set up in the dockercfg within S3)
Create a new application that uses the Docker platform.
eb init testapplication --platform=docker --region=eu-west-1
Note: region must be the same as the S3 bucket containing the dockercfg.
Initial deployment (this will succeed)
eb create testapplication-test --branch_default --cname testapplication-test --elb-type=application --instance-types=t2.micro --min-instances=1 --max-instances=4
The initial deployment shows that the image is available and can be started:
2021/03/16 08:58:07.533988 [INFO] save docker tag command: docker tag 5812dfe24a4f redis:alpine
2021/03/16 08:58:07.533993 [INFO] save docker tag command: docker tag f8fcde8b9ae2 mysql:5.7
2021/03/16 08:58:07.533998 [INFO] save docker tag command: docker tag 1dd9b65d6a9f registry.gitlab.com/company/spikes/dockertest:latest
2021/03/16 08:58:07.534010 [INFO] Running command /bin/sh -c docker rm `docker ps -aq`
Without changing anything in the local repository or the remote docker image on the private registry, let's do a redeployment, which will trigger the error:
eb deploy testapplication-test
This will fail with the following output:
...
2021-03-16 10:02:28 INFO Command execution completed on all instances. Summary: [Successful: 0, Failed: 1].
2021-03-16 10:02:29 ERROR Unsuccessful command execution on instance id(s) 'i-0dc445d118ac14b80'. Aborting the operation.
2021-03-16 10:02:29 ERROR Failed to deploy application.
ERROR: ServiceError - Failed to deploy application.
And the instance logs (/var/log/eb-engine.log) show:
Pulling redis (redis:alpine)...
Pulling mysql (mysql:5.7)...
Pulling project.dockertest (registry.gitlab.com/company/spikes/dockertest:latest)...
Get https://registry.gitlab.com/v2/company/spikes/dockertest/manifests/latest: denied: access forbidden
2021/03/16 10:02:25.902479 [INFO] Executing cleanup logic
Steps I've tried to debug or solve the issue
Renamed dockercfg to .dockercfg on S3 (mentioned somewhere on the internet as a possible solution)
Used the 'old' docker config format instead of the one generated by docker 1.7+. But later on I figured out that Amazon Linux 2 instances are compatible with the new format together with Dockerrun v3
Verified that an incorrectly formatted dockercfg on S3 causes a deployment error about the malformed file (so EB actually does something with the dockercfg from S3)
Documentation
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/single-container-docker-configuration.html
I'm out of debugging options and have no idea where to look further. Perhaps someone can see what is going wrong here?
First of all, the issue described above is a bug confirmed by Amazon. To get the deployment working on our side, we contacted Amazon support.
They have a fix in place which should be released this month, so keep an eye on the changelog of the Elastic Beanstalk platform: https://docs.aws.amazon.com/elasticbeanstalk/latest/relnotes/relnotes.html
Although the upcoming release should contain the fix, there is a workaround available to get the docker-compose deployment working.
Elastic Beanstalk allows hooks to be executed during the deployment, which can be used to fetch the .dockercfg from an S3 bucket to authenticate against the private registry.
To do so, create the following file and directories from the root of the project:
File location: .platform/hooks/predeploy/docker_login
#!/bin/bash
# Fetch the registry credentials from S3 so the docker daemon can authenticate
aws s3 cp s3://{{bucket_name_to_use}}/.dockercfg ~/.docker/config.json
Important: add execution rights to this file (for example: chmod +x .platform/hooks/predeploy/docker_login). Note that the instance profile also needs read access to the bucket, or the aws s3 cp call will fail.
To support instance configuration changes, symlink the hooks directory to confighooks (the target is relative because the link lives inside .platform/):
ln -s hooks .platform/confighooks
Updating the configuration requires the .dockercfg credentials to be fetched too.
This should enable continuous deployments to the same EB instance without the authentication errors, because the hook will be executed before the docker image pull.
Some background:
The docker daemon reads credentials from ~/.docker/config.json by default on traditional Linux systems. On the initial deploy this file exists on the Elastic Beanstalk instance, but on the next deployment it is removed and the .dockercfg is not re-fetched; therefore the docker daemon does not have the correct credentials to authenticate with.
I was dealing with the same errors while trying to pull images from a privately hosted GitLab instance. I was able to resolve them by including the email address associated with the generated token next to the auth field of the .dockercfg file.
The following file format worked for me:
"registry.gitlab.com" {
"auth": "base64 encoded username:personal_access_token",
"email": "email for personal access token"
}
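For reference, the auth value is simply username:token base64-encoded; a quick way to produce it (the credentials are placeholders):
echo -n 'my_username:my_personal_access_token' | base64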
In my case I used a Project Access Token, which has an e-mail address associated with it once it is created.
The file format in the Elastic Beanstalk documentation for the authentication file here indicates that this is the required format, though the Docker versions for which it says this format is required are almost certainly outdated, since we are running Docker ^19.
Related
I recently managed to get my Laravel app deployed using CodePipeline on Elastic Beanstalk, but ran into a problem. I noticed that my routes were failing because of the php.conf Nginx configuration. I had to add a few lines to EB's nginx php.conf file to get it to work.
My problem now was that after every deployment, the instance on which I had modified the php.conf file was destroyed and recreated fresh. I wanted a way to dynamically update the file after every successful deployment. I keep the version of the file I want in my application's repository, and so wanted to create a symlink to that file after deployment.
After loads of research, I stumbled on appdeploy hooks on Elastic Beanstalk, which run scripts after deployment, so I did this:
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/91_post_deploy_script.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      sudo mkdir /var/testing1
      sudo ln -sfn /var/www/html/php.conf.example /etc/nginx/conf.d/elasticbeanstalk/php.conf
      sudo mkdir /var/testing
      sudo nginx -s reload
And this for some reason does not work. The symlink is not created, so my routes are still not working.
I even added some mkdir commands so I could be sure the script ran; none of those commands ran, because none of those directories were created.
Please note that if I ssh into the EC2 instance and run the commands there, it works. The bash script also exists in the post directory, and if I manually run it on the server it works too.
Any pointers to how I could fix this would be helpful. Maybe I am doing something wrong too.
I have now set my scripts up to run by following this. However, the script is not running; I am getting an error:
2020/06/28 08:22:13.653339 [INFO] Following platform hooks will be executed in order: [01_myconf.config]
2020/06/28 08:22:13.653344 [INFO] Running platform hook: .platform/hooks/postdeploy/01_myconf.config
2020/06/28 08:22:13.653516 [ERROR] An error occurred during execution of command [app-deploy] - [RunPostDeployHooks]. Stop running the command. Error: Command .platform/hooks/postdeploy/01_myconf.config failed with error fork/exec .platform/hooks/postdeploy/01_myconf.config: permission denied
I tried to follow this forum post here to make my file executable by adding a new container command like so:
container_commands:
  01_chmod1:
    command: "chmod +x .platform/hooks/postdeploy/91_post_deploy_script.sh"
I am still running into the same issue: permission denied.
Sadly, the hooks you are describing (i.e. /opt/elasticbeanstalk/hooks/appdeploy) are for Amazon Linux 1.
Since you are using Amazon Linux 2, as clarified in the comments, the hooks you are trying to use do not apply, and thus they are not being executed.
In Amazon Linux 2, there are new hooks, as described here:
prebuild – Files here run after the Elastic Beanstalk platform engine downloads and extracts the application source bundle, and before it sets up and configures the application and web server.
predeploy – Files here run after the Elastic Beanstalk platform engine sets up and configures the application and web server, and before it deploys them to their final runtime location.
postdeploy – Files here run after the Elastic Beanstalk platform engine deploys the application and proxy server.
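For illustration, a source bundle using these hooks might be laid out like this (the script names are hypothetical):
.platform/
  hooks/
    prebuild/
      01_install.sh
    predeploy/
      01_configure.sh
    postdeploy/
      91_post_deploy_script.sh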
The use of these new hooks is different from Amazon Linux 1, so you have to either move back to Amazon Linux 1 or migrate your application to Amazon Linux 2.
General migration steps from Amazon Linux 1 to Amazon Linux 2 in EB are described here.
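As a sketch, the post-deploy logic from the question could be moved into an Amazon Linux 2 hook instead (contents adapted from the question's script; hooks run as root, so sudo is not needed):
File location: .platform/hooks/postdeploy/91_post_deploy_script.sh
#!/usr/bin/env bash
# Point EB's nginx config at the file shipped with the app, then reload nginx
ln -sfn /var/www/html/php.conf.example /etc/nginx/conf.d/elasticbeanstalk/php.conf
nginx -s reload
Remember the executable bit: chmod +x the file before bundling, or commit it with git update-index --chmod=+x.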
Create a folder called .platform in your project root folder and create a file named 00_myconf.config inside the .platform folder:
.platform/
  00_myconf.config
Open 00_myconf.config and add the script:
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/91_post_deploy_script.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      sudo mkdir /var/testing1
      sudo ln -sfn /var/www/html/php.conf.example /etc/nginx/conf.d/elasticbeanstalk/php.conf
      sudo mkdir /var/testing
      sudo nginx -s reload
Commit your changes or re-upload the project. This .platform folder is picked up on each new instance creation, so your application will deploy properly on all the new instances Amazon Elastic Beanstalk creates.
If you access the documentation here and scroll to the section titled "Application example with extensions", you can see an example of the folder structure of your .platform folder for adding your custom configuration to the NGINX conf on every deploy.
You can either replace the entire nginx.conf file with your own, or add additional configuration files to the conf.d directory.
Replace conf file with your file on app deploy:
.platform/nginx/nginx.conf
Add additional configuration files to the conf.d directory:
.platform/nginx/conf.d/custom.conf
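For example, a minimal drop-in override might look like this (the directive is purely illustrative):
File location: .platform/nginx/conf.d/custom.conf
# Raise nginx's default 1 MB client upload limit
client_max_body_size 50M;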
Related
I am creating a new device type following the documentation provided at this link (https://github.com/wso2/carbon-device-mgt-maven-plugin.git).
I performed the following steps.
Step 1: Installing the Maven archetype. Everything went okay! The Maven archetype was installed with:
git clone -b v1.0.0 --single-branch https://github.com/wso2/carbon-device-mgt-maven-plugin.git
In Step 2: Creating a new device type, when I run the command mvn archetype:generate -DarchetypeCatalog=local, the output does not show me the archetype to choose. Look at the output of this command:
C:\Users\eliazar.carvalho\Documents\Tools\WSO2\wso2iot-3.0.0\samples>mvn archetype:generate -DarchetypeCatalog=local
[INFO] Scanning for projects...
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building Maven Stub Project (No POM) 1
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] >>> maven-archetype-plugin:3.0.0:generate (default-cli) > generate-sources @ standalone-pom >>>
[INFO]
[INFO] <<< maven-archetype-plugin:3.0.0:generate (default-cli) < generate-sources @ standalone-pom <<<
[INFO]
[INFO] --- maven-archetype-plugin:3.0.0:generate (default-cli) @ standalone-pom ---
[INFO] Generating project in Interactive mode
[INFO] No archetype defined. Using maven-archetype-quickstart (org.apache.maven.archetypes:maven-archetype-quickstart:1.0)
Choose archetype:
Your filter doesn't match any archetype (hint: enter to return to initial list)
Choose a number or apply filter (format: [groupId:]artifactId, case sensitive contains): :
What could be going wrong?
I am using WSO2 IoT Server 3.0 and OS: Ubuntu 14.04 LTS.
I also faced the same issue. This is how I fixed it:
mvn archetype:generate -DarchetypeCatalog=local -X
This will give you the exact path of the local catalog file being read. For me it was ~/.m2/repository/archetype-catalog.xml.
But my actual local catalog file is at ~/.m2/archetype-catalog.xml, so I copied archetype-catalog.xml into the correct path with the following command:
cp ~/.m2/archetype-catalog.xml ~/.m2/repository/
Now it works fine. It seems the maven-archetype-plugin version in the mentioned repository needs to be updated.
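To double-check which archetypes the copied catalog offers, you can inspect it directly (path taken from the -X output above):
grep artifactId ~/.m2/repository/archetype-catalog.xml
The WSO2 device-type archetype should now show up when you rerun mvn archetype:generate -DarchetypeCatalog=local.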
WSO2 IoT 3.1.0 has been released, and it includes 3 ways of introducing a new device type:
Writing a Java extension using the Maven archetype
Descriptor-based model
API-based model
Please refer here for more information.