AWS deployment failed - jhipster app - EBS - amazon-web-services

I have tried deploying my JHipster application to AWS Elastic Beanstalk by uploading the WAR directly. When the environment is created, I get this error:
[Instance: i-08f7c9efd8b2c5476] Command failed on instance. Return code: 1
Output: (TRUNCATED).../util/SystemPropertyUtils.class Failed to execute
'/usr/bin/unzip -o -d /var/app/staging
/opt/elasticbeanstalk/deploy/appsource/source_bundle' Failed to execute
'/usr/bin/unzip -o -d /var/app/staging
/opt/elasticbeanstalk/deploy/appsource/source_bundle'. Hook
/opt/elasticbeanstalk/hooks/restartappserver/pre/01_configure_application.sh
failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
Please suggest what I should do next.
I have also tried the yo jhipster:aws command, as per the documentation on the JHipster page.
What I get is Missing credentials in config.
My question is: I have added a credentials.properties file in the given location
~/.aws/credentials...
meaning .aws/credentials/credentials.properties (a file). Is the file extension right, and is the folder structure right?
Create S3 bucket
Error jhipster:aws
Missing credentials in config

I'm not sure about your first error, as you are setting the environment up manually and we would need more information to reproduce it.
Regarding yo jhipster:aws failing: the AWS credentials file should be located at ~/.aws/credentials, not ~/.aws/credentials/credentials.properties.
Create a credentials file at ~/.aws/credentials on Mac/Linux or C:\Users\USERNAME\.aws\credentials on Windows.
From the docs: https://jhipster.github.io/aws/

For further clarification, use vi and create a file named "credentials" under the "~/.aws" folder.
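The file uses the standard AWS credentials format. A minimal sketch (the key values are placeholders; substitute your own):
# Create ~/.aws/credentials with a default profile.
cat > ~/.aws/credentials <<'EOF'
[default]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY
EOF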

Related

AWS EB docker-compose deployment from private registry access forbidden

I'm trying to get docker-compose deployment to AWS Elastic Beanstalk working, in which the docker images are pulled from a private registry hosted by GitLab.
The strange thing is that the initial deployment works perfectly: it pulls the image from the private registry, starts the containers using docker-compose, and the webpage (served by Django) is accessible on the host.
Deploying a new version using the same docker-compose file and the same docker image results in an error while pulling the docker image:
2021/03/16 09:28:34.957094 [ERROR] An error occurred during execution of command [app-deploy] - [Run Docker Container]. Stop running the command. Error: failed to run docker containers: Command /bin/sh -c docker-compose up -d failed with error exit status 1. Stderr:Building with native build. Learn about native build in Compose here: https://docs.docker.com/go/compose-native-build/
Creating network "current_default" with the default driver
Pulling redis (redis:alpine)...
Pulling mysql (mysql:5.7)...
Pulling project.dockertest(registry.gitlab.com/company/spikes/dockertest:latest)...
Get https://registry.gitlab.com/v2/company/spikes/dockertest/manifests/latest: denied: access forbidden
2021/03/16 09:28:34.957104 [INFO] Executing cleanup logic
Setup
AWS Elastic Beanstalk 64bit Amazon Linux 2/3.2
GitLab registry credentials are stored in an S3 bucket, with the filename .dockercfg and the following content:
{
  "auths": {
    "registry.gitlab.com": {
      "auth": "base64 encoded username:personal_access_token"
    }
  },
  "HttpHeaders": {
    "User-Agent": "Docker-Client/18.03.1-ce (linux)"
  }
}
The repository contains a v3 Dockerrun.aws.json file to refer to the credential file in S3:
{
  "AWSEBDockerrunVersion": "3",
  "Authentication": {
    "bucket": "gitlab-dockercfg",
    "key": ".dockercfg"
  }
}
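For reference, the credentials file can be uploaded to that bucket with the AWS CLI, using the bucket and key from the Dockerrun file above:
# Upload the registry credentials to the bucket referenced by Dockerrun.aws.json:
aws s3 cp .dockercfg s3://gitlab-dockercfg/.dockercfg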
Reproduce
Set up a docker-compose.yml that uses a service with a private docker image (which can be pulled with the credentials set up in the dockercfg within S3); a sketch follows this list.
Create a new application that uses the docker platform.
eb init testapplication --platform=docker --region=eu-west-1
Note: region must be the same as the S3 bucket containing the dockercfg.
Initial deployment (this will succeed)
eb create testapplication-test --branch_default --cname testapplication-test --elb-type=application --instance-types=t2.micro --min-instance=1 --max-instances=4
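The docker-compose.yml sketch referenced in the first step above; the image names come from the logs in this question, while the web service name, environment value, and port mapping are hypothetical:
# Write a compose file with two public images and one private one.
cat > docker-compose.yml <<'EOF'
version: "3"
services:
  redis:
    image: redis:alpine
  mysql:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example
  web:
    image: registry.gitlab.com/company/spikes/dockertest:latest
    ports:
      - "80:8000"
EOF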
The initial deployment shows that the image is available and can be started:
2021/03/16 08:58:07.533988 [INFO] save docker tag command: docker tag 5812dfe24a4f redis:alpine
2021/03/16 08:58:07.533993 [INFO] save docker tag command: docker tag f8fcde8b9ae2 mysql:5.7
2021/03/16 08:58:07.533998 [INFO] save docker tag command: docker tag 1dd9b65d6a9f registry.gitlab.com/company/spikes/dockertest:latest
2021/03/16 08:58:07.534010 [INFO] Running command /bin/sh -c docker rm `docker ps -aq`
Without changing anything in the local repository or the remote docker image on the private registry, let's do a redeployment, which triggers the error:
eb deploy testapplication-test
This will fail with the following output:
...
2021-03-16 10:02:28 INFO Command execution completed on all instances. Summary: [Successful: 0, Failed: 1].
2021-03-16 10:02:29 ERROR Unsuccessful command execution on instance id(s) 'i-0dc445d118ac14b80'. Aborting the operation.
2021-03-16 10:02:29 ERROR Failed to deploy application.
ERROR: ServiceError - Failed to deploy application.
And logs of the instance show (/var/log/eb-engine.log):
Pulling redis (redis:alpine)...
Pulling mysql (mysql:5.7)...
Pulling project.dockertest (registry.gitlab.com/company/spikes/dockertest:latest)...
Get https://registry.gitlab.com/v2/company/spikes/dockertest/manifests/latest: denied: access forbidden
2021/03/16 10:02:25.902479 [INFO] Executing cleanup logic
Steps I've tried to debug or solve the issue
Renamed dockercfg to .dockercfg on S3 (mentioned somewhere on the internet as a possible solution)
Used the 'old' docker config format instead of the one generated by docker 1.7+, but later figured out that Amazon Linux 2 instances are compatible with the new format together with Dockerrun v3
Confirmed that an incorrectly formatted dockercfg on S3 causes a deployment error about the malformed file (so EB actually does something with the dockercfg from S3)
Documentation
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/single-container-docker-configuration.html
I'm out of debug options, and I've no idea where to look any further to debug this problem. Perhaps someone can see what is going wrong here?
First of all, the issue described above is a bug confirmed by Amazon. To get the deployment working on our side, we contacted Amazon support.
They have a fix in place which should be released this month, so keep an eye on the changelog of the Elastic Beanstalk platform: https://docs.aws.amazon.com/elasticbeanstalk/latest/relnotes/relnotes.html
Although the upcoming release should have the fix, there is a workaround available to get the docker-compose deployment working.
Elastic Beanstalk allows hooks to be executed during the deployment, which can be used to fetch the .dockercfg from an S3 bucket and authenticate against the private registry.
To do so, create the following file and directories from the root of the project:
File location: .platform/hooks/predeploy/docker_login
#!/bin/bash
# Fetch the registry credentials from S3 so the docker daemon can
# authenticate before it pulls images during this deployment.
aws s3 cp s3://{{bucket_name_to_use}}/.dockercfg ~/.docker/config.json
Important: Add execution rights to this file (for example: chmod +x .platform/hooks/predeploy/docker_login)
To support instance configuration changes, please symlink the hooks directory to confighooks:
ln -s .platform/hooks/ .platform/confighooks/
Updating configuration requires the .dockercfg credentials to be fetched too.
This should enable continuous deployments to the same EB instance without the authentication errors, because the hook is executed before the docker images are pulled.
Some background:
The docker daemon reads credentials from ~/.docker/config.json by default on traditional Linux systems. On the initial deploy this file exists on the Elastic Beanstalk instance. On the next deployment this file is removed, and unfortunately the .dockercfg is not refetched, therefore the docker daemon does not have the correct credentials to authenticate with.
I was dealing with the same errors while trying to pull images from a privately hosted GitLab instance. I was able to resolve them by including the email address associated with the generated token alongside the auth field of the .dockercfg file.
The following file format worked for me:
"registry.gitlab.com" {
"auth": "base64 encoded username:personal_access_token",
"email": "email for personal access token"
}
In my case I used a Project Access Token, which has an e-mail address associated with it once it is created.
The file format in the Elastic Beanstalk documentation for the authentication file indicates that this is the required format, though the Docker versions for which it says this format is required are almost certainly outdated, since we are running Docker ^19.

Errors when applying AWS eb commands

https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_nodejs_express.html
I'm trying to follow these steps to deploy an example Express application for the first time. After installing the Elastic Beanstalk Command Line Interface (EB CLI), I can apply eb commands in the Command Prompt (using Windows 10). After initializing a Git repository, I should use commands to configure an EB CLI repository.
These commands are being applied in the directory of an ExpressJS project:
First I enter the command eb init --platform node.js --region us-east-2, which results in the message Application AWS2 has been created in a separate window.
Next I enter the command eb create --sample node-express-env, which results in the error message ERROR: InvalidParameterValueError - Environment node-express-env already exists.
Then when I enter the command eb open, the message says ERROR: This branch does not have a default environment. You must either specify an environment by typing "eb open my-env-name" or set a default environment by typing "eb use my-env-name".
Then when I enter eb open node-express-env, there's another message, ERROR: NotFoundError - Environment "node-express-env" not found, which contradicts the message from step 2.
Make sure that you configured the CLI to use the same region in which your environment was created.
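A quick way to check (environment and region names are illustrative):
# Show the region and defaults the EB CLI is currently using:
cat .elasticbeanstalk/config.yml
# List the environments that exist in that region:
eb list
# Re-initialize if the region is wrong:
eb init --platform node.js --region us-east-2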

AWS: ERROR: Pre-processing of application version xxx has failed and Some application versions failed to process. Unable to continue deployment

Hi, I am trying to deploy a Node application from Cloud9 to Elastic Beanstalk, but I keep getting the error below.
Starting environment deployment via CodeCommit
--- Waiting for Application Versions to be pre-processed ---
ERROR: Pre-processing of application version app-491a-200623_151654 has failed.
ERROR: Some application versions failed to process. Unable to continue deployment.
I have attached an image of the IAM roles that I have. Any solutions?
Go to your AWS console, open the Elastic Beanstalk section, go to both applications and environments, and delete them. Then in your terminal run
eb init # follow the instructions
eb create --single # follow the instructions
That should fix the error, which is caused by application versions left in a failed state. If you want to check those, run
aws elasticbeanstalk describe-application-versions
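For example, to see each version's label and processing status (the application name is a placeholder; assumes a default region is configured):
aws elasticbeanstalk describe-application-versions \
  --application-name my-app \
  --query 'ApplicationVersions[].[VersionLabel,Status]' \
  --output table
# A version stuck in a failed state can then be removed individually:
aws elasticbeanstalk delete-application-version \
  --application-name my-app --version-label my-failed-label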
I was searching for this answer as a result of watching a YouTube tutorial on how to pass the AWS Certified Developer Associate exam. If anyone else gets this error as a result of that tutorial, delete the 002_node_command.config file created in the tutorial and commit that change, as that is what causes the error.
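Assuming the tutorial placed that file under .ebextensions/, the cleanup would look like this:
# Remove the offending config, commit, and redeploy:
git rm .ebextensions/002_node_command.config
git commit -m "Remove broken EB config"
eb deploy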
A failure within the pre-processing phase may be caused by an invalid manifest, configuration, or .ebextensions file.
If you deploy an (invalid) application version using eb deploy with the preprocess option enabled, the details of the error will not be revealed.
You can remove the --process flag and enable the verbose option to improve the error output.
In my case I deploy using this command:
eb deploy -l "XXX" -p
It can return a failure when I mess around with .ebextensions:
ERROR: Pre-processing of application version xxx has failed.
ERROR: Some application versions failed to process. Unable to continue deployment.
With that result I can't figure out what is wrong,
but deploying without -p (or --process) and adding the -v (verbose) flag:
eb deploy -l "$deployname" -v
It returns something more useful:
Uploading: [##################################################] 100% Done...
INFO: Creating AppVersion xxx
ERROR: InvalidParameterValueError - The configuration file .ebextensions/16-my_custom_config_file.config in application version xxx contains invalid YAML or JSON.
YAML exception: Invalid Yaml: while scanning a simple key
in 'reader', line 6, column 1:
(... details of the error ...)
, JSON exception: Invalid JSON: Unexpected character (#) at position 0.. Update the configuration file.
Now I can fix the problem.
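A local sanity check can catch this class of error before deploying. A sketch that parses the suspect file with PyYAML (assumes python3 with PyYAML is installed):
# Parse the config as YAML; a traceback pinpoints the offending line:
python3 -c "import yaml, sys; yaml.safe_load(open(sys.argv[1]))" .ebextensions/16-my_custom_config_file.config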

Problem running logstash in aws ec2 linux ami

I am setting up Elasticsearch in AWS using the AWS Linux AMI. When I run
bin/logstash -f "/path to config file"
I get an error saying:
"logstash.yml" not found try using "--path.settings"
Then when I use
--path.settings="/etc/logstash"
I again get another error.
I have been following this AWS document:
https://aws.amazon.com/elasticsearch-service/resources/articles/logstash-tutorial/
The error I get after specifying
--path.settings="/etc/logstash":
"Failed to execute action {:id=>:main, :action_type=>LogStash::ConvergeResult::FailedAction, :message=>"Could not execute action: PipelineAction::Create<main>, action_result: false", :backtrace=>nil}"
I have configured the file logstash_simple.conf, specifying input and output.
This is the command line input on the Linux EC2 instance:
/usr/share/logstash/bin/logstash -f /usr/share/logstash/logstash_simple.conf
--path.settings="/etc/logstash"
Okay, I had made a mistake in the config file:
I missed providing the AWS access key and secret key, dumb me!
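For anyone hitting the same thing, the output block of the config is where the keys go. A sketch assuming the amazon_es output plugin from the AWS tutorial (the endpoint and keys are placeholders):
# Write a minimal pipeline config that ships stdin to the ES domain.
cat > /usr/share/logstash/logstash_simple.conf <<'EOF'
input { stdin { } }
output {
  amazon_es {
    hosts => ["my-domain.us-east-1.es.amazonaws.com"]
    region => "us-east-1"
    aws_access_key_id => "YOUR_ACCESS_KEY_ID"
    aws_secret_access_key => "YOUR_SECRET_ACCESS_KEY"
  }
}
EOF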

AWS Elastic Beanstalk - ERROR: No Application Version named 'v0_9_2-76-gf5a4' found

I'm trying to deploy my code to AWS Elastic Beanstalk and get this error. I read that it could happen when the number of versions exceeds 500, so I deleted a lot of versions, but I still get this error.
eb deploy
ERROR: No Application Version named 'v0_9_2-76-gf5a4' found.
I also tried
git aws.push
Error: Failed to create the AWS Elastic Beanstalk application version
Edit:
Trying with eb deploy --debug I now get:
Instance: i-2ad238d5 Module: AWSEBAutoScalingGroup ConfigSet: null Command failed on instance. Return code: 1 Output: Error occurred during build: Command hooks failed . Script /opt/elasticbeanstalk/hooks/appdeploy/pre/10_bundle_install.sh failed with returncode 18
ebcli.objects.exceptions.ServiceError: Update environment operation is
complete, but with errors. For more information, see troubleshooting
documentation.
Did you update the file .elasticbeanstalk/config.yml? It may have a wrong setup.
Make a backup of the .elasticbeanstalk/ folder and remove it
Execute eb create
Select the same region you deployed to before. You can check the region in the .elasticbeanstalk/config.yml backup
A list of the environments will appear; select the right one
Deploy now
Remove the .elasticbeanstalk/config.yml backup
Check the .elasticbeanstalk/config.yml file:
environment: CORRECT_ENV_NAME
global:
  application_name: CORRECT_APP_NAME
In my case, I was doing eb deploy X where X was an environment for a different project.
When I had the error
InvalidParameterValueError: No Application Version named 'app-9f5c-180927_071528' found.
I fixed this by specifying the label I wanted to push up.
eb deploy XXX-env -l XXX.0.0.1
The -l flag is documented in the AWS EB Deploy docs.
Most likely, the deploy is targeting an incorrect Elastic Beanstalk application. It could be because you renamed the application in the AWS console.
So double-check that you're pointing to the correct Elastic Beanstalk environment and application. It could be picking up default values from your .elasticbeanstalk/config.yml file.
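To point the current branch at the right environment and confirm which application it belongs to:
# Set the default environment for this branch, then verify:
eb use correct-env-name
eb status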