Why can't I export PATH with a command in an .ebextensions config - amazon-web-services

Hi, I want to use goofys on an AWS Elastic Beanstalk PHP 7.0 environment.
I created .ebextensions/00_install_goofy.config.
(I install golang from the binary release because the golang version available via yum is old.)
packages:
  yum:
    fuse: []
commands:
  100_install_golang_01:
    command: wget https\://storage.googleapis.com/golang/go1.9.linux-amd64.tar.gz
  100_install_golang_02:
    command: tar -C /usr/local -xzf go1.9.linux-amd64.tar.gz
  100_install_golang_03:
    command: export GOROOT=/usr/local/go
    test: [ -z "$GOROOT" ]
  100_install_golang_04:
    command: export GOPATH=/home/ec2-user/go
    test: [ -z "$GOPATH" ]
  100_install_golang_05:
    command: export PATH=$PATH\:$GOROOT/bin\:$GOPATH/bin
  100_install_golang_06:
    command: echo $GOPATH > gopath
But 100_install_golang_03 does not work well...
Test for Command 100_install_golang_03
[2017-09-09T14:39:52.422Z] INFO [3034] - [Application deployment app-f68c-170909_143641#1/StartupStage0/EbExtensionPreBuild/Infra-EmbeddedPreBuild/prebuild_1_yubion_website] : Completed activity.
[2017-09-09T14:39:52.422Z] INFO [3034] - [Application deployment app-f68c-170909_143641#1/StartupStage0/EbExtensionPreBuild/Infra-EmbeddedPreBuild] : Activity execution failed, because: [Errno 2] No such file or directory (ElasticBeanstalk::ExternalInvocationError)
I can't export environment variables or PATH. Can I set PATH in .ebextensions?
Or is there a better way to install goofys on Elastic Beanstalk automatically?

Finally, I found that commands defined in .ebextensions run with NO ENVIRONMENT VARIABLES.
They run in a sandbox-like environment, so the scope of an "export" is only the single "command" entry it appears in.
If you want to use PATH in commands, you have to add the export to every command, as in the sketch below.
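For example, a minimal sketch of a goofys install step with the exports repeated inline (the multi-line command block and the go get invocation are my assumptions, not part of the original config):

commands:
  200_install_goofys:
    command: |
      # re-export inside this command's own shell; exports from earlier commands are gone
      export GOROOT=/usr/local/go
      export GOPATH=/home/ec2-user/go
      export PATH=$PATH:$GOROOT/bin:$GOPATH/bin
      go get github.com/kahing/goofys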
Additionally, if you want to use PATH after eb deploy, see the following link:
How can I add PATH on Elastic Beanstalk

Related

Installing Anaconda on Amazon Elastic Beanstalk to use in Django application

I have a Django application deployed to Amazon Elastic Beanstalk. I have to install Anaconda in order to install the pythonocc-core package. I created a .config file in the .ebextensions folder, added the Anaconda path to my wsgi.py file as shown below, and deployed it successfully.
.config file:
commands:
  00_download_conda:
    command: 'wget https://repo.anaconda.com/archive/Anaconda3-2020.02-Linux-x86_64.sh'
    test: test ! -d /anaconda
  01_install_conda:
    command: 'bash Anaconda3-2020.02-Linux-x86_64.sh -b -f -p /anaconda'
    test: test ! -d /anaconda
  02_create_home:
    command: 'mkdir -p /home/wsgi'
  03_conda_activate_installation:
    command: 'source ~/.bashrc'
wsgi.py:
sys.path.append('/anaconda/lib/python3.7/site-packages')
However, when I appended the 04_conda_install_pythonocc command below to this .config file, the deployment failed with a command failed error.
  04_conda_install_pythonocc:
    command: 'conda install -c dlr-sc pythonocc-core=7.4.0'
I SSHed into the instance to check. The /anaconda folder existed, but when I ran conda --version I got the -bash: conda: command not found error.
Afterwards, I thought there might be a problem with the PATH, so I edited the .config file as follows and deployed it successfully:
commands:
  00_download_conda:
    command: 'wget https://repo.anaconda.com/archive/Anaconda3-2020.02-Linux-x86_64.sh'
    test: test ! -d /anaconda
  01_install_conda:
    command: 'bash Anaconda3-2020.02-Linux-x86_64.sh -b -f -p /anaconda'
    test: test ! -d /anaconda
  02_create_home:
    command: 'mkdir -p /home/wsgi'
  03_add_path:
    command: 'export PATH=$PATH:$HOME/anaconda/bin'
  04_conda_activate_installation:
    command: 'source ~/.bashrc'
But when I appended the conda_install_pythonocc command again to this edited version of the .config file, it failed again with command failed.
Run manually, all of the commands work, but they don't work in my .config file.
How can I fix this issue and install the package with conda?
I tried to replicate the issue on my sandbox account, and I successfully installed conda using the following (simplified) config file on 64bit Amazon Linux 2 v3.0.3 running Python 3.7:
.ebextensions/60_anaconda.config
commands:
  00_download_conda:
    command: 'wget https://repo.anaconda.com/archive/Anaconda3-2020.02-Linux-x86_64.sh'
  01_install_conda:
    command: 'bash Anaconda3-2020.02-Linux-x86_64.sh -b -f -p /anaconda'
  05_conda_install:
    command: '/anaconda/bin/conda install -y -c dlr-sc pythonocc-core=7.4.0'
Note the use of the absolute path /anaconda/bin/conda, and of -y so that conda does not ask for manual confirmation. I only verified the installation procedure, not how to use it afterwards (e.g., in a Python application), so you will probably need to adjust it to your needs.
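If you would rather keep the bare conda name, an equivalent sketch is to prefix PATH inline for that single command (assuming the same /anaconda install location as above):

  05_conda_install:
    command: 'PATH=/anaconda/bin:$PATH conda install -y -c dlr-sc pythonocc-core=7.4.0'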
The EB log file showing successful installation is also provided for your reference (shortened for simplicity):
/var/log/cfn-init-cmd.log

Integrating SonarQube within AWS CodePipeline: Connection Refused

tl;dr
CodePipeline crashes on the mvn sonar:sonar line of my buildspec.yml file with the following log (I formatted it a bit for better readability):
[ERROR] SonarQube server [http://localhost:9000] can not be reached
...
[ERROR] Failed to execute goal
org.sonarsource.scanner.maven:sonar-maven-plugin:3.7.0.1746:sonar
(default-cli) on project myproject:
Unable to execute SonarQube:
Fail to get bootstrap index from server:
Failed to connect to localhost/127.0.0.1:9000:
Connection refused (Connection refused) -> [Help 1]
Goal
This is my first project with AWS, so sorry if I'm providing irrelevant information.
I'm trying to deploy my backend API so that it's reachable by the public. Among other things, I want CI/CD set up to automatically run tests and abort on failure or if a certain quality gate isn't passed. If everything goes fine, the new version should automatically be deployed online.
Current state
My pipeline automatically aborts when one of the tests fails, but that is about all I've gotten working properly.
I've yet to figure out how to deploy the API (even manually) so I can send requests to it. Maybe it's already done and I just don't know which URL to use, though.
Anyway, as it is, the CodePipeline crashes on the mvn sonar:sonar line of my buildspec.yml file.
The files
Here is my buildspec.yml:
version: 0.2
phases:
  install:
    runtime-versions:
      java: openjdk8
    commands:
      ##############################################################################################
      ##### "cd / && ls" returns: [bin, boot, codebuild, dev, etc, go, home, lib, lib32, lib64,
      ##### media, mnt, opt, proc, root, run, sbin, srv, sys, tmp, usr, var]
      ##### Initial directory where this starts is $CODEBUILD_SRC_DIR
      ##### That variable contains something like "/codebuild/output/src511423169/src"
      ##############################################################################################
      # Upgrade AWS CLI to the latest version
      - pip install --upgrade awscli
      # Folder organization
      - cd /root
      - codeAnalysisFolder="Sonar" # todo: refactor to include "/root"
      - mkdir $codeAnalysisFolder && cd $codeAnalysisFolder
      # Get SonarQube
      - wget https://binaries.sonarsource.com/Distribution/sonarqube/sonarqube-8.1.0.31237.zip
      - unzip ./sonarqube-8.1.0.31237.zip
      # Launch SonarQube server locally
      - cd ./sonarqube-8.1.0.31237/bin/linux-x86-64
      - sh ./sonar.sh start
      # Get SonarScanner
      - cd /root/$codeAnalysisFolder
      - wget https://binaries.sonarsource.com/Distribution/sonar-scanner-cli/sonar-scanner-cli-4.2.0.1873-linux.zip
      - unzip ./sonar-scanner-cli-4.2.0.1873-linux.zip
      - export PATH=$PATH:/root/$codeAnalysisFolder/sonar-scanner-cli-4.2.0.1873-linux.zip/bin/ # todo: .zip ?!
  pre_build:
    commands:
      - cd $CODEBUILD_SRC_DIR
      - mvn clean compile test
      - mvn sonar:sonar
  build:
    commands:
      - mvn war:exploded
  post_build:
    commands:
      - cp -r .ebextensions/ target/ROOT/
      - aws cloudformation package --template template.yml --s3-bucket $S3_BUCKET --output-template-file template-export.yml
      # Do not remove this statement. This command is required for AWS CodeStar projects.
      # Update the AWS Partition, AWS Region, account ID and project ID in the project ARN on template-configuration.json file so AWS CloudFormation can tag project resources.
      - sed -i.bak 's/\$PARTITION\$/'${PARTITION}'/g;s/\$AWS_REGION\$/'${AWS_REGION}'/g;s/\$ACCOUNT_ID\$/'${ACCOUNT_ID}'/g;s/\$PROJECT_ID\$/'${PROJECT_ID}'/g' template-configuration.json
artifacts:
  type: zip
  files:
    - 'template-export.yml'
    - 'template-configuration.json'
Here are the last few lines of the log of the failed build:
[INFO] User cache: /root/.sonar/cache
[ERROR] SonarQube server [http://localhost:9000] can not be reached
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 6.071 s
[INFO] Finished at: 2019-12-18T21:27:23Z
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.sonarsource.scanner.maven:sonar-maven-plugin:3.7.0.1746:sonar (default-cli) on project myproject: Unable to execute SonarQube: Fail to get bootstrap index from server: Failed to connect to localhost/127.0.0.1:9000: Connection refused (Connection refused) -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[Container] 2019/12/18 21:27:23 Command did not exit successfully mvn sonar:sonar exit status 1
[Container] 2019/12/18 21:27:23 Phase complete: PRE_BUILD State: FAILED
[Container] 2019/12/18 21:27:23 Phase context status code: COMMAND_EXECUTION_ERROR Message: Error while executing command: mvn sonar:sonar. Reason: exit status 1
And in case you are also interested in the build log for the sh ./sonar.sh start command:
[Container] 2019/12/18 21:25:49 Running command sh ./sonar.sh start
Starting SonarQube...
Started SonarQube.
Additionally, here is my sonar-project.properties file:
# SONAR SCANNER CONFIGS
sonar.projectKey=bullhubs
# SOURCES
sonar.java.source=8
sonar.sources=src/main/java
sonar.java.binaries=target/classes
sonar.sourceEncoding=UTF-8
# EXCLUSIONS
# (exclusion of Lombok-generated stuff comes from the `lombok.config` file)
sonar.coverage.exclusions=**/*Exception.java
# TESTS
sonar.coverage.jacoco.xmlReportPaths=target/site/jacoco/jacoco.xml
sonar.junit.reportsPath=target/surefire-reports/TEST-*.xml
sonar.tests=src/test/java
The environment
(Sorry for the hidden info: not being sure what should remain private, I erred on the side of caution. If you need any specific information, please let me know!)
I have an Elastic Beanstalk environment set up, an EC2 instance up and running, and I also use a VPC (configuration screenshots omitted).
What I've tried
I tried adding a bunch of entries to the inbound rules of my EC2 instance's security group: I started with 0.0.0.0/0 : 9000, then tried 127.0.0.1/32 : 9000, and finally All traffic. None of it worked, so the problem seems to be somewhere else.
I also tried changing some properties of the sonar-project.properties file, namely sonar.web.host and sonar.host.url, to redirect where the SonarQube server is hosted (I thought maybe I was supposed to point it to the EC2 instance's public IPv4 address or its attached public DNS (IPv4)), but the failing build log kept showing the failure to connect to localhost:9000 when contacting the SonarQube server.
I've figured it out.
Somehow, SonarQube reports having started properly despite that not being true. Thus, when you see this log after having run your sh ./sonar.sh start command:
[Container] 2019/12/18 21:25:49 Running command sh ./sonar.sh start
Starting SonarQube...
Started SonarQube.
It isn't necessarily true that SonarQube's local server has successfully started. One would have to go into the logs folder of the SonarQube installation folder and read the sonar.log file to figure out that something was actually wrong and that the server was stopped...
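A way to catch this earlier is to poll the server before running the scan instead of trusting the start script's output. A sketch, assuming the default port 9000, SonarQube's /api/system/status endpoint, and curl being available in the build image:

      # Wait until the SonarQube server actually reports itself as up
      - |
        for i in $(seq 1 30); do
          curl -s http://localhost:9000/api/system/status | grep -q '"status":"UP"' && break
          sleep 5
        done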
In my case, it reported an error that JDK11 was required to run the server. To solve that, I changed the java: openjdk8 line of my buildspec.yml to java: openjdk11.
Then, I had to figure out that a new log file was now available to be read: es.log. Printing that file to the console revealed that the latest Elasticsearch version (which is used by the latest SonarQube server version) does not allow itself to be run by the root user. Thus, I had to create a new user and group and edit a configuration file so the server runs as that user:
# Set up non-root user to run SonarQube
- groupadd sonar
- useradd -c "Sonar System User" -d $sonarPath/$sonarQube -g sonar -s /bin/bash sonar
- chown -R sonar:sonar $sonarPath/$sonarQube # recursively changing the folder's ownership
# Launch SonarQube server locally
- cd ./$sonarQube/bin/linux-x86-64
- sed -i 's/#RUN_AS_USER=/RUN_AS_USER=sonar/g' sonar.sh # enabling user execution of server
- sh ./sonar.sh start
Complete solution
This gives us the following working version of buildspec.yml:
version: 0.2
phases:
  install:
    runtime-versions:
      java: openjdk11
    commands:
      ##############################################################################################
      ##### "cd / && ls" returns: [bin, boot, codebuild, dev, etc, go, home, lib, lib32, lib64,
      ##### media, mnt, opt, proc, root, run, sbin, srv, sys, tmp, usr, var]
      ##### Initial directory where this starts is $CODEBUILD_SRC_DIR
      ##### That variable contains something like "/codebuild/output/src511423169/src"
      ##### This folder contains the whole structure of the CodeCommit repository. This means that
      ##### the actual Java classes are accessed through "cd src" from there, for example.
      ##############################################################################################
      # Upgrade AWS CLI to the latest version
      - pip install --upgrade awscli
      # Folder organization
      - preSonarPath="/opt/"
      - codeAnalysisFolder="Sonar"
      - sonarPath="$preSonarPath$codeAnalysisFolder"
      - cd $preSonarPath && mkdir $codeAnalysisFolder
      # Get SonarQube
      - cd $sonarPath
      - sonarQube="sonarqube-8.1.0.31237"
      - wget https://binaries.sonarsource.com/Distribution/sonarqube/$sonarQube.zip
      - unzip ./$sonarQube.zip
      # Set up non-root user to run SonarQube
      - groupadd sonar
      - useradd -c "Sonar System User" -d $sonarPath/$sonarQube -g sonar -s /bin/bash sonar
      - chown -R sonar:sonar $sonarPath/$sonarQube # recursively changing the folder's ownership
      # Launch SonarQube server locally
      - cd ./$sonarQube/bin/linux-x86-64
      - sed -i 's/#RUN_AS_USER=/RUN_AS_USER=sonar/g' sonar.sh # enabling user execution of server
      - sh ./sonar.sh start
      # Get SonarScanner and add to PATH
      - sonarScanner="sonar-scanner-cli-4.2.0.1873-linux"
      - cd $sonarPath
      - wget https://binaries.sonarsource.com/Distribution/sonar-scanner-cli/$sonarScanner.zip
      - unzip ./$sonarScanner.zip
      - export PATH=$PATH:$sonarPath/$sonarScanner.zip/bin/ # todo: .zip ?!
  pre_build:
    commands:
      - cd $CODEBUILD_SRC_DIR
      - mvn clean compile test
      # - cd $sonarPath/$sonarQube/logs
      # - cat access.log
      # - cat es.log
      # - cat sonar.log
      # - cat web.log
      # - cd $CODEBUILD_SRC_DIR
      - mvn sonar:sonar
  build:
    commands:
      - mvn war:exploded
  post_build:
    commands:
      - cp -r .ebextensions/ target/ROOT/
      - aws cloudformation package --template template.yml --s3-bucket $S3_BUCKET --output-template-file template-export.yml
      # Do not remove this statement. This command is required for AWS CodeStar projects.
      # Update the AWS Partition, AWS Region, account ID and project ID in the project ARN on template-configuration.json file so AWS CloudFormation can tag project resources.
      - sed -i.bak 's/\$PARTITION\$/'${PARTITION}'/g;s/\$AWS_REGION\$/'${AWS_REGION}'/g;s/\$ACCOUNT_ID\$/'${ACCOUNT_ID}'/g;s/\$PROJECT_ID\$/'${PROJECT_ID}'/g' template-configuration.json
artifacts:
  type: zip
  files:
    - 'template-export.yml'
    - 'template-configuration.json'
Cheers!

AWS Cloudwatch Agent in a docker container

I am trying to set up the Amazon CloudWatch agent in Docker as a container. This is an on-premises installation, so it's running locally, not inside AWS Kubernetes or anything of the sort.
I've set up a basic Dockerfile, an agent.json, and an .aws/ folder for credentials, and I use docker-compose build to set it up and then launch it. But I keep running into problems, because Docker does not contain or run systemctl, so I cannot start the service using AWS's own documented command:
/opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m onPremise -c file:/opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json -s
This fails with an error when I try to run the container:
cloudwatch_1 | /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl: line 262: systemctl: command not found
cloudwatch_1 | unknown init system
I've tried running start-amazon-cloudwatch-agent from /bin as well, but no luck, and there is no documentation on this.
Basically the issue is: how can I run this as a service or as a process in the foreground? Otherwise the container won't stay up. Anyone have any clues? Below is my code:
Dockerfile
FROM amazonlinux:2.0.20190508
RUN yum -y install https://s3.amazonaws.com/amazoncloudwatch-agent/amazon_linux/amd64/latest/amazon-cloudwatch-agent.rpm
COPY agent.json /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json
CMD /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m onPremise -c file:/opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json
agent.json
{
  "agent": {
    "metrics_collection_interval": 60,
    "region": "eu-west-1",
    "logfile": "/opt/aws/amazon-cloudwatch-agent/logs/amazon-cloudwatch-agent.log",
    "debug": true
  }
}
The .aws/ folder contains config and credentials, but I never got far enough for the agent to actually try to make a connection.
Just use the official image (docker pull amazon/cloudwatch-agent); it will handle all of this for you.
If you insist on using your own, try the following:
FROM amazonlinux:2.0.20190508
RUN yum -y install https://s3.amazonaws.com/amazoncloudwatch-agent/amazon_linux/amd64/latest/amazon-cloudwatch-agent.rpm
COPY agent.json /opt/aws/amazon-cloudwatch-agent/bin/default_linux_config.json
ENV RUN_IN_CONTAINER=True
ENTRYPOINT ["/opt/aws/amazon-cloudwatch-agent/bin/start-amazon-cloudwatch-agent"]
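A minimal usage sketch for this custom image (the image tag is my own placeholder; it assumes your credentials live in ./.aws as in the question):

docker build -t my-cloudwatch-agent .
docker run -d -v $(pwd)/.aws:/root/.aws my-cloudwatch-agent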
Use the official AWS Docker image. Here is an example docker-compose file:
version: "3.8"
services:
agent:
image: amazon/cloudwatch-agent:1.247350.0b251814
volumes:
- ./config/log-collect.json:/opt/aws/amazon-cloudwatch-agent/bin/default_linux_config.json # agent config
- ./aws:/root/.aws # required for authentication
- ./log:/log # sample log
- ./etc:/opt/aws/amazon-cloudwatch-agent/etc # for debugging the config of AWS of container
In the config above, only the first two volume mounts are required; the third and fourth are for debugging.
If you are interested in learning what each volume does, you can read more at https://medium.com/@gusdecool/setup-aws-cloudwatch-agent-on-premise-server-part-1-31700e81ab8

Composer install on AWS without development requirements

Is there a way to override the default composer install command on AWS Elastic Beanstalk?
I want to run composer install --no-dev.
Yes, you can append extra flags to the composer command in AWS Elastic Beanstalk.
You need to create an extra config file in <app-root-dir>/.ebextensions/composer.config:
commands:
  10updateComposer:
    command: export COMPOSER_HOME=/root && /usr/bin/composer.phar self-update
option_settings:
  - namespace: aws:elasticbeanstalk:application:environment
    option_name: COMPOSER_HOME
    value: /root
  - namespace: aws:elasticbeanstalk:container:php:phpini
    option_name: composer_options
    value: --no-dev
Whatever you put in composer_options will be appended to the composer command line.
You can verify this is executed correctly by looking at your /var/log/eb-activity.log file. Look out for something like this:
++ /opt/elasticbeanstalk/bin/get-config optionsettings -n aws:elasticbeanstalk:container:php:phpini -o composer_options
+ PHP_COMPOSER_OPTIONS=--no-dev
+ echo 'Found composer.json file. Attempting to install vendors.'
Found composer.json file. Attempting to install vendors.
+ composer.phar install --no-ansi --no-interaction --no-dev
For more info you can read the Composer File section in the manual.
What's above and what's in the official Elastic Beanstalk documentation didn't work when I tried them.
Here's what worked for me:
option_settings:
  - namespace: aws:elasticbeanstalk:container:php:phpini
    option_name: composer_options
    value: --no-dev
Not sure if there's an error in the documentation at this link, but I submitted a report on it:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_PHP.container.html#php-namespaces

How can I add PATH on Elastic Beanstalk

I want to add the package directory to PATH on eb deploy.
The packages are installed to /var/www/html/vendor/bin.
It can be added manually through SSH, but how can I add it to PATH with a config file?
I have a config file like this, .ebextensions/ec2.config; 01-set_timezone works fine, but 02-set_path doesn't:
commands:
  01-set_timezone:
    command: cp /usr/share/zoneinfo/Japan /etc/localtime
  02-set_path:
    command: export PATH=$PATH:/var/www/html/vendor/bin
Each command is performed in its own shell, so the export won't work. You'd need to put it into ~/.bash_profile to ensure it's applied to every new shell:
commands:
  set_path:
    test: test ! -f /opt/elasticbeanstalk/.post-provisioning-complete
    command: echo 'export PATH=$PATH:/var/www/html/vendor/bin' >> /root/.bash_profile
To make it run only once, add the following file:
.ebextensions/99_finalize_setup.config:
commands:
  99_write_post_provisioning_complete_file:
    command: touch /opt/elasticbeanstalk/.post-provisioning-complete
To have the PATH available during setup, you can do the following.
Create a config file .ebextensions/set_path.conf:
option_settings:
  "aws:elasticbeanstalk:application:environment":
    "PATH": "/var/www/html/vendor/bin:$PATH"