Composer Environment variable missing on CodeDeploy Application - amazon-web-services

I'm integrating AWS Auto Scaling Group with Code Deploy.
I wrote a bash script for AfterInstall hook.
The script runs composer update and composer dump-autoload, since my code is PHP.
And here is the problem.
When I deploy, the deployment fails with this log:
[RuntimeException]
The HOME or COMPOSER_HOME environment variable must be set for composer to run correctly
But when I SSH into the instance and run composer, it works fine.
How do I fix this? Has anyone worked around this issue?
Any answer will be appreciated. Thank you for your time.

I had a similar problem using Elastic Beanstalk and I fixed it by adding an environment variable.
You should be able to achieve this in CodeDeploy too, for example when creating the application.
See also https://github.com/composer/composer/issues/4789
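If it helps, here is a minimal sketch of an AfterInstall hook that exports the variables before calling Composer; the paths and directories below are assumptions, not taken from your setup:
#!/bin/bash
# CodeDeploy runs hooks as root in a non-login shell, so HOME/COMPOSER_HOME
# may be unset. Export them explicitly before invoking Composer.
set -euo pipefail
export HOME=/root                      # assumption: hook runs as root (no runas in appspec)
export COMPOSER_HOME=/root/.composer   # any writable directory works
cd /var/www/myapp                      # assumption: your application root
composer update --no-interaction
composer dump-autoload --optimize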

Could you make sure the environment variable is also accessible to the user you specify in the appspec file to run the hook script? If you have multiple users on the instance, the environment variable might not be accessible to every user, depending on how you set it up.
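For illustration, the hooks section of appspec.yml lets you pin the user with runas; a hedged sketch (destination, script path, and user are placeholders):
version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/myapp            # placeholder destination
hooks:
  AfterInstall:
    - location: scripts/after_install.sh   # placeholder path inside the revision bundle
      timeout: 300
      runas: ubuntu                        # this user's environment must provide HOME/COMPOSER_HOME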

I have the same concern regarding composer install using CodeDeploy. It runs well in development, but when I run it in production I get:
[stderr] [RuntimeException]
[stderr] The HOME or COMPOSER_HOME environment variable must be set for composer to run correctly
When I SSH into the instance and run composer, I get:
user@server:~/httpdocs$ /opt/plesk/php/7.2/bin/php /usr/lib/plesk-9.0/composer.phar install
Loading composer repositories with package information
Installing dependencies (including require-dev) from lock file
Nothing to install or update
Generating optimized autoload files
user@server:~/httpdocs$
I have one EC2 instance and I deploy to two separate directories for staging and production.
[screenshot: CodeDeploy deployment error]

Related

AWS CodeDeploy agent is deleting files in the wrong folder during install

We have an unusual setup. We use git on Azure Devops for our code repositories, and AWS for our cloud-based services. In our arsenal we have a mixture of AWS Lambda functions, along with console apps, web apps, and Windows services running on EC2 instances. We have been able to create CI/CD pipelines for all three classes of apps. For the apps running on EC2 instances we use AWS CodeDeploy. These deployments are more complicated, but they all work -- except for one.
Another unusual thing about our setup is that both our development and QA environments are on the same EC2 instance. When the CodeDeploy agent running on that instance retrieves the deployment archive, it unpacks it, reads the appspec.yml file, runs our before install script, which backs up the existing installation and shuts down any services that might be using those files. Then, the install phase updates the files in the designated environment, then deletes -- or tries to delete -- all the files in the other environment folder.
In other words, if a DEV deployment is running, it replaces the files in the DEV folder and also tries to delete the files in the QA folder. I know this sounds like a scripting problem, but I have checked all the script and YAML files, and nowhere do I reference the opposing environment.
In this case, the app is a Windows service. Normally, I get a Ruby 'Permission denied @ unlink_internal' error on a file in the other folder. As an experiment, I shut down the service in the other environment in my before-install script and, as I expected, the agent deleted all the files in the other environment. It updated the files in the target environment but left the folder in the other environment empty!
Here are my files. I suspect, the problem is being caused by something I did, but I can't, for the life of me, find it.
These are all .NET projects. In my solution I have a ConfigFiles folder set up with subfolders for each environment. Then, in my pipeline yaml file I run a script to select the correct files to move into the archive based on the git branch that is being built.
Here's the code for the script that selects the correct files.
Here's the Azure pipeline YAML file.
Here's my before install script:
And, finally, here is my appspec.yml file, which the CodeDeploy agent uses to know where to update the files during installation. How I wish this were the wrong path, but in the deployment archive the environment-specific values are all exactly right.
Any ideas on this one would be greatly appreciated.
I encountered the same problem where deployment of an app deletes files from another app in another folder unexpectedly. My solution is to use different deployment groups for each app, even though they are deploying to the same EC2 instance.
Deploying many apps on the same EC2 instance using the same deployment group results in files/folder deletion on other deployed projects.
From AWS Technical Support:
The reason is that CodeDeploy creates a cleanup file in the format '[deployment group 1 ID]_cleanup' in the directory '/opt/codedeploy-agent/deployment-root/deployment-instructions' every time a deployment is made to the deployment group, and this file deletes all the files that had been installed during the previous deployment made to the deployment group. Since the deployment group is the same in your case, when you make a deployment to the deployment group which installs files to the folder "/var/www/project1", files installed by the previous deployment in the folder "/var/www/project2" are cleaned up, and vice versa, which is an expected mechanism of the CodeDeploy agent.
You can find the explanation here: https://docs.aws.amazon.com/codedeploy/latest/userguide/codedeploy-agent.html#codedeploy-agent-install-files
Please consider creating two different applications/deployment groups and configure the two pipelines to use different applications/deployment groups, which should fix your problem.
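As a rough sketch of that suggestion (the application names, the role ARN, and the tag values below are placeholders, not taken from the question):
# One deployment group per application, so each keeps its own cleanup file
# under /opt/codedeploy-agent/deployment-root/deployment-instructions.
aws deploy create-deployment-group \
  --application-name project1-app \
  --deployment-group-name project1-dg \
  --service-role-arn arn:aws:iam::123456789012:role/CodeDeployServiceRole \
  --ec2-tag-filters Key=Name,Value=shared-instance,Type=KEY_AND_VALUE

aws deploy create-deployment-group \
  --application-name project2-app \
  --deployment-group-name project2-dg \
  --service-role-arn arn:aws:iam::123456789012:role/CodeDeployServiceRole \
  --ec2-tag-filters Key=Name,Value=shared-instance,Type=KEY_AND_VALUE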

AWS Batch Failing to launch Dockerfile - standard_init_linux.go:219: exec user process caused: exec format error

I am attempting to use AWS Batch to launch a Linux server, which will in essence perform the fetch-and-go example included with AWS (download an .sh script from S3 and run it).
Does AWS Batch work at all for anyone?
The AWS fetch_and_go example always fails, even after I followed someone else's guide online which mimicked the AWS example.
I have tried creating Dockerfiles for amazonlinux:latest and ubuntu:20.04 with numerous RUN and CMD instructions.
The scripts always seem to fail with the error:
standard_init_linux.go:219: exec user process caused: exec format error
At first I thought this was related to deployment access rights, maybe within the amazonlinux image, so I have played with chmod 777, chmod -x, etc. on the .sh file.
The final nail in the coffin: my current Dockerfile is literally just:
FROM ubuntu:20.04
I launch this using AWS Batch, with no command or parameters passed through, and it still fails with the same error code. This almost hints to me that there is either a setup issue with my AWS Batch (I'm using the default wizard settings, except changing to an a1.medium instance) or that AWS Batch has some major issues.
Has anyone had any success with AWS Batch launching their own Dockerfiles ? Could they share their examples and/or setup parameters?
Thank you in advance.
A1 instances are ARM-based, first-generation Graviton CPUs. It is highly likely the image you are trying to run expects an x86 CPU (Intel or AMD). Any instance class with a "g" in it ("c6g" or "m6g") is Graviton2, which is also ARM-based and will not work for the default examples.
You can test whether a specific container will run by launching an A1 instance yourself and running the container (after installing docker). My guess is that you will get the same error. Running on Intel or AMD instances should work.
To leverage Batch with ARM your containerized application will need to work on ARM. If you point me to the exact example, I can give more details on how to adjust to run on A1 or Graviton2 instances.
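One way to check for the architecture mismatch yourself (a hedged sketch; the image name is just an example):
# Compare the architecture the image was built for with the host architecture.
docker image inspect --format '{{.Os}}/{{.Architecture}}' amazonlinux:latest
uname -m   # prints aarch64 on A1/Graviton instances, x86_64 on Intel/AMD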
I had the same issue, and it was because I built the image locally on my M1 Mac.
Try adding --platform linux/amd64 to your docker build command before pushing, if this is your case.
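A minimal sketch of that, assuming you push to ECR (the registry URL and tag are placeholders):
# Build an x86_64 image even on an ARM host such as an M1 Mac, then push it.
docker build --platform linux/amd64 -t 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-batch-job:latest .
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-batch-job:latest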
In addition to the other comment: you can create multi-arch images yourself, which will provide the correct architecture.
https://www.docker.com/blog/multi-arch-build-and-images-the-simple-way/
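A minimal buildx sketch under the same assumptions (registry URL and tag are placeholders):
# Build and push a manifest containing both x86_64 and ARM64 variants,
# so whichever instance type Batch schedules pulls the matching one.
docker buildx create --use
docker buildx build --platform linux/amd64,linux/arm64 \
  -t 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-batch-job:latest --push .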

Elastic Beanstalk fails on environment variables update

I've deployed my Spring Boot application to Elastic Beanstalk with Corretto 11 running on the 64bit Amazon Linux 2/3.0.1 platform.
When I try to add a new environment variable from the AWS Console (Configuration -> Software) and hit Apply, the update fails and rolls back to the previous configuration.
This is what I get from the AWS Console on my environment dashboard.
Here are some of the logs that might be useful
The interesting part is that when I create a fresh new environment, upload my .jar file, and add the environment variables at the creation of the environment, it works (meaning the environment variables are set correctly). The problem occurs when I try to update my environment variables when the environment already exists. Am I missing something?
I tried to run $ eb setenv after $ eb deploy from my CircleCI pipeline, but I still get the same error.
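For reference, the CI steps look roughly like this (the environment name and variables are placeholders):
# Deploy the new version, then push the environment variables to the running environment.
eb deploy my-env
eb setenv SPRING_PROFILES_ACTIVE=prod DB_URL=jdbc:postgresql://db.example.com/mydb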
I've been digging into this. And now I know why it fails.
The reason is that when you add the env variable to your EB environment, the EB engine downloads the last application version, unzips it, and replaces the current application with it.
This means no deployment hooks or .ebextensions scripts are executed. Therefore, if you do any application setup during deployment, it is not going to be re-applied, leading to failure.
This is based on my own observations using Python 3.7 running on 64bit Amazon Linux 2/3.0.3 and single-instance EB type.
I found a workaround. If you set your deployment to immutable, this will go away, as it is going to create a brand new EC2 instance for you. Not the best solution if you have quota limitations, but it works.
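For illustration, a hedged sketch of an .ebextensions file that switches the environment to immutable deployments (the file name is arbitrary):
# .ebextensions/deploy-policy.config
option_settings:
  aws:elasticbeanstalk:command:
    DeploymentPolicy: Immutable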

AWS Elastic Beanstalk with docker incorrect version

I'm deploying a Docker image from GitHub to AWS Elastic Beanstalk using Travis. That part goes OK: the actual deployment exits with 0 and there is a .zip file in the S3 bucket.
The issue is that, since this is my first time using AWS, I created the app using the Sample Application (the actual code is deployed from GitHub), and after the deployment I get the health status Degraded (red exclamation sign) with this message:
ERROR
During an aborted deployment, some instances may have deployed the new application version. To ensure all instances are running the same version, re-deploy the appropriate application version.
If I go to Causes I find this:
Application deployment failed at 2020-05-01T16:01:58Z with exit status 1 and error: Engine execution has encountered an error.
Incorrect application version "travis-e55e05342a8cc16f3f28f8e184735667a9531ffa-1588311901" (deployment 4). Expected version "Sample Application" (deployment 1).
I even deleted the sample application and re-deployed the one that was uploaded, and got that particular error. As you can see in the last message, I've deployed this 3 times already, getting the same result.
Finally, I downloaded the zip file from the S3 bucket and found inside basically the src and public folders, along with all the files in the root folder such as package.json, .gitignore, all the Docker files, etc.
EDIT
I created two separate repos in github to test this.
The first repo is a static page in a Docker container, quite simple. I create an environment in EB and start everything with the sample app. Then I push the changes to GitHub, Travis does its thing and deploys the app to AWS. This works fine and the app's environment is updated with no errors. This is the repo:
https://github.com/rhernandog/docler-static-page-aws
The second repo is a simple React app. Same procedure: create the environment in EB with the sample app, push the code to GitHub, Travis does its thing and deploys to AWS. This fails and I keep getting the same error:
Environment health has transitioned from Info to Degraded. Command failed on all
instances. Incorrect application version found on all instances. Expected version
"Sample Application" (deployment 1). Application update failed 1 second ago and
took 2 minutes.
This is the repo for the react app:
https://github.com/rhernandog/react-docker-awseb
In terms of Docker, everything works fine in my local machine.
EDIT 2
Based on @stefansundin's suggestion, I re-deployed the app to EB and checked the logs. I ended up looking at the full logs for more information and found this:
/var/log/cfn-hup.log
2020-05-14 17:07:42,605 [WARNING] Action for aws-eb-command-handler exited with 1, returning FAILURE
The only place where I found an error was in the engine log file:
/var/log/eb-engine.log
2020/05/14 17:07:42.514601 [INFO] Executing instruction: Docker Specific Build Application
2020/05/14 17:07:42.514605 [INFO] start build docker app
2020/05/14 17:07:42.514615 [INFO] fetch image name
2020/05/14 17:07:42.514639 [INFO] authenticate with ECR if the image is in an ECR repo
2020/05/14 17:07:42.514644 [INFO] pull docker image if update is not false in dockerrun.aws.json
2020/05/14 17:07:42.514657 [INFO] Running command /bin/sh -c docker pull node:12-alpine AS builder
2020/05/14 17:07:42.558923 [ERROR] "docker pull" requires exactly 1 argument.
So basically this is complaining about this line in the Dockerfile: FROM node:12-alpine AS builder. You can see the whole file in the repo: https://github.com/rhernandog/react-docker-awseb/blob/master/Dockerfile
The point is: why doesn't this happen on my local machine? And how can I actually get the files from the build stage and copy them to the nginx folder?
That is actually the only error I found in the log files.
I solved the issue here:
AWS Elastic Beanstalk Docker Does not support Multi-Stage Build
It is a stage-naming problem with the multi-stage Dockerfile. Just use an unnamed stage, as sketched below.
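A hedged sketch of what that looks like; the build commands and paths are assumptions based on a typical React + nginx setup, not the actual repo:
# First stage left unnamed (no "AS builder"), so the platform's naive
# "docker pull" of the FROM line no longer receives extra arguments.
FROM node:12-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Second stage references the first by index instead of by name.
FROM nginx:alpine
COPY --from=0 /app/build /usr/share/nginx/html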
I also got a similar error in my node app:
Incorrect application version "travis-e55e05342a8cc16f3f28f8e184735667a9531ffa-1588311901" (deployment 4). Expected version "Sample Application" (deployment 1)
It turned out to be an issue with my build and deployment scripts; once they were corrected (debugged in Jenkins), the application deployed successfully to Beanstalk with no error.
Turns out the issue was not with Beanstalk or app version but with the build mechanism. Something to look into when nothing else works :)
I had the same issue for java app in docker container.
I tried all the recommendations from this topic and the links in it, and nothing helped.
In the end, the following action helped:
1. Enable the enhanced health panel: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/health-enhanced-enable.html#health-enhanced-enable-console
2. Go to the enhanced health panel of the desired environment.
3. Select the instance that crashed due to this "version" issue and click reboot.
Additionally:
In one of the cases, I had to delete all previous versions (section on the left panel) and push a new one, and only after that apply the above recommendations.
Also make sure you have sufficient rights to deploy (CodePipeline/deployment).
AWS Docs say that
To solve this issue, start another deployment. You can redeploy a previous version that you know works, or configure your environment to ignore health checks during deployment and redeploy the new version to force the deployment to complete.
You can also identify and terminate the instances that are running the wrong application version. Elastic Beanstalk will launch instances with the correct version to replace any instances that you terminate. Use the EB CLI health command to identify instances that are running the wrong application version.
Can you try deleting the instances that run your application and starting a fresh install?
Also, you can use CodePipeline to deploy your code to Elastic Beanstalk: use your S3 folder for the source stage, skip the build stage since your code is built on Travis, and use the deploy stage to install your new app to Elastic Beanstalk. There might be some misconfiguration while installing the new app to your environment.
I suggest you terminate your instances and start new instances. Sorry if I got your question wrong.
I haven't used Docker on Elastic Beanstalk. When my Ruby on Elastic Beanstalk deployments fail, I find that I usually find the problem if I request the 100 last lines from the logs. If you navigate to "Logs" -> "Request Logs" -> "Last 100 Lines", that may help you.
If that fails, I SSH into the instance and look at the logs in /var/log. Maybe docker ps and docker logs can help you.
While creating a new web server environment, select "Docker running on 64bit Amazon Linux" as the platform branch and it will work.

eb deploy does not update the code

I am trying to deploy an application version but eb deploy command fails with:
ERROR: Update environment operation is complete, but with errors. For
more information, see troubleshooting documentation.
I checked the logs, made some changes to the code, committed, and deployed again, and guess what: it failed again. The logs indicate the same error, disregarding my changes. The error occurs in a file in the directory /var/app/ondeck/app/, and when I go and check, I can see the previous version is there.
I tried deploying using the Elastic Beanstalk dashboard, but somehow the instance is not receiving the new version. Can someone help me with this? Thanks.
Just had the same problem and noticed this note in the documentation:
"Note
If you have initialized a git repository in your project folder, the EB CLI will always deploy the latest commit, even if you have pending changes. Commit your changes prior to running eb deploy to deploy them to your environment."
I made the commits and it worked fine.
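In other words, commit (or stage) your changes before running eb deploy; a quick sketch (the commit message is a placeholder, and --staged requires a reasonably recent EB CLI):
# eb deploy ships the latest git commit, not the working tree, so commit first.
git add -A
git commit -m "Apply fix before deploy"
eb deploy

# Alternatively, deploy what is currently staged in the git index:
git add -A
eb deploy --staged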