.ebextensions not executing, not uploading, and not creating files - amazon-web-services

I am trying to follow these instructions to force SSL on AWS Elastic Beanstalk.
I believe this is the important part:
files:
  "/tmp/45_nginx_https_rw.sh":
    owner: root
    group: root
    mode: "000644"
    content: |
      #! /bin/bash
      CONFIGURED=`grep -c "return 301 https" /opt/elasticbeanstalk/support/conf/webapp_healthd.conf`
      if [ $CONFIGURED = 0 ]
      then
        sed -i '/listen 80;/a \ if ($http_x_forwarded_proto = "http") { return 301 https://$host$request_uri; }\n' /opt/elasticbeanstalk/support/conf/webapp_healthd.conf
        logger -t nginx_rw "https rewrite rules added"
        exit 0
      else
        logger -t nginx_rw "https rewrite rules already set"
        exit 0
      fi

container_commands:
  00_appdeploy_rewrite_hook:
    command: cp -v /tmp/45_nginx_https_rw.sh /opt/elasticbeanstalk/hooks/appdeploy/enact
  01_configdeploy_rewrite_hook:
    command: cp -v /tmp/45_nginx_https_rw.sh /opt/elasticbeanstalk/hooks/configdeploy/enact
  02_rewrite_hook_perms:
    command: chmod 755 /opt/elasticbeanstalk/hooks/appdeploy/enact/45_nginx_https_rw.sh /opt/elasticbeanstalk/hooks/configdeploy/enact/45_nginx_https_rw.sh
  03_rewrite_hook_ownership:
    command: chown root:users /opt/elasticbeanstalk/hooks/appdeploy/enact/45_nginx_https_rw.sh /opt/elasticbeanstalk/hooks/configdeploy/enact/45_nginx_https_rw.sh
For some reason the file is not being uploaded or created.
I also tried prefixing the container commands starting with 00 and 01 with sudo.
I also SSHed into the server and created the file manually, then locally used the aws elasticbeanstalk restart-app-server --environment-name command to restart the app server, and this still did not work.
Any help would be greatly appreciated.
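Edit: to narrow things down, a minimal test config like the sketch below could confirm whether .ebextensions is being picked up at all (the marker path and file name are just placeholders):

files:
  "/tmp/ebextensions_marker.txt":
    owner: root
    group: root
    mode: "000644"
    content: |
      marker written by .ebextensions

If a file like this never shows up after a deploy, the likely cause is that the .ebextensions directory is not at the root of the deployed source bundle, or that the config file name does not end in .config, rather than anything in the hook script itself. The result of each files/container_commands entry is also logged in /var/log/eb-activity.log on the instance.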

Related

AWS EC2 Image Builder issue with authorized_keys

I'm trying to create a custom image of RedHat 8 using EC2 Image Builder. In one of the recipes added to the pipeline, I've created the ansible user and used S3 to download the authorized_keys and the custom sudoers.d file. The issue I'm facing is that the sudoers file called "ansible" gets copied just fine, but the authorized_keys doesn't. CloudWatch says that the recipe gets executed without errors and the files are downloaded, but when I create an EC2 instance from this AMI, the authorized_keys file is not at the expected path.
What's happening?
This is the recipe I'm using:
name: USER-Ansible
description: Creation and configuration of the ansible user
schemaVersion: 1.0
phases:
  - name: build
    steps:
      - name: UserCreate
        action: ExecuteBash
        inputs:
          commands:
            - groupadd -g 2004 ux
            - useradd -u 4134 -g ux -c "AWX Ansible" -m -d /home/ansible ansible
            - mkdir /home/ansible/.ssh
      - name: FilesDownload
        action: S3Download
        inputs:
          - source: s3://[REDACTED]/authorized_keys
            destination: /home/ansible/.ssh/authorized_keys
            expectedBucketOwner: [REDACTED]
            overwrite: false
          - source: s3://[REDACTED]/ansible
            destination: /etc/sudoers.d/ansible
            expectedBucketOwner: [REDACTED]
            overwrite: false
      - name: FilesConfiguration
        action: ExecuteBash
        inputs:
          commands:
            - chown ansible:ux /home/ansible/.ssh/authorized_keys; chmod 600 /home/ansible/.ssh/authorized_keys
            - chown ansible:ux /home/ansible/.ssh; chmod 700 /home/ansible/.ssh
            - chown root:root /etc/sudoers.d/ansible; chmod 440 /etc/sudoers.d/ansible
Thanks in advance!
AWS EC2 Image Builder cleans up afterwards
https://docs.aws.amazon.com/imagebuilder/latest/userguide/security-best-practices.html#post-build-cleanup
# Clean up for ssh files
SSH_FILES=(
    "/etc/ssh/ssh_host_rsa_key"
    "/etc/ssh/ssh_host_rsa_key.pub"
    "/etc/ssh/ssh_host_ecdsa_key"
    "/etc/ssh/ssh_host_ecdsa_key.pub"
    "/etc/ssh/ssh_host_ed25519_key"
    "/etc/ssh/ssh_host_ed25519_key.pub"
    "/root/.ssh/authorized_keys"
)
if [[ -f {{workingDirectory}}/skip_cleanup_ssh_files ]]; then
    echo "Skipping cleanup of ssh files"
else
    echo "Cleaning up ssh files"
    cleanup "${SSH_FILES[@]}"
    USERS=$(ls /home/)
    for user in $USERS; do
        echo Deleting /home/"$user"/.ssh/authorized_keys;
        sudo find /home/"$user"/.ssh/authorized_keys -type f -exec shred -zuf {} \;
    done
    for user in $USERS; do
        if [[ -f /home/"$user"/.ssh/authorized_keys ]]; then
            echo Failed to delete /home/"$user"/.ssh/authorized_keys;
            exit 1
        fi;
    done;
fi;
You can skip individual sections of the clean up script.
https://docs.aws.amazon.com/imagebuilder/latest/userguide/security-best-practices.html#override-linux-cleanup-script
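Based on the check at the top of that script, a sketch of an extra build step that creates the skip file (so the authorized_keys files under /home are left alone) could look like this; the step name is arbitrary, and the file name and {{workingDirectory}} reference come from the cleanup script quoted above:

      - name: SkipSshCleanup
        action: ExecuteBash
        inputs:
          commands:
            - touch {{workingDirectory}}/skip_cleanup_ssh_files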

Run elastic beanstalk .ebextensions config for specific environments

I have the following config
0_logdna.config

commands:
  01_install_logdna:
    command: "/home/ec2-user/logdna.sh"
  02_restart_logdna:
    command: "service logdna-agent restart"
files:
  "/home/ec2-user/logdna.sh" :
    mode: "000777"
    owner: root
    group: root
    content: |
      #!/bin/sh
      RACK_ENV=$(/opt/elasticbeanstalk/bin/get-config environment -k RACK_ENV)
      echo "$RACK_ENV"
      if [ $RACK_ENV == production ]
      then
        rpm --import https://repo.logdna.com/logdna.gpg
        echo "[logdna]
        name=LogDNA packages
        baseurl=https://repo.logdna.com/el6/
        enabled=1
        gpgcheck=1
        gpgkey=https://repo.logdna.com/logdna.gpg" | sudo tee /etc/yum.repos.d/logdna.repo
        LOGDNA_INGESTION_KEY=$(/opt/elasticbeanstalk/bin/get-config environment -k LOGDNA_INGESTION_KEY)
        yum -y install logdna-agent
        logdna-agent -k $LOGDNA_INGESTION_KEY # this is your unique Ingestion Key
        # /var/log is monitored/added by default (recursively), optionally add more dirs here
        logdna-agent -d /var/app/current/log/logstasher.log
        logdna-agent -d /var/app/containerfiles/logs/sidekiq.log
        # logdna-agent --hostname allows you to pass your AWS env metadata to LogDNA (remove # to uncomment the line below)
        # logdna-agent --hostname `{"Ref": "AWSEBEnvironmentName" }`
        # logdna -t option allows you to tag the host with tags (remove # to uncomment the line below)
        #logdna-agent -t `{"Ref": "AWSEBEnvironmentName" }`
        chkconfig logdna-agent on
        service logdna-agent start
      fi
I want to be able to only run this config for my production environment, but each time I run this code, I get an error that says
ERROR [Instance: i-091794aa00f84ab36,i-05b6d0824e7a0f5da] Command failed on instance. Return code: 1 Output: (TRUNCATED)...not found
/home/ec2-user/logdna.sh: line 17: logdna-agent: command not found
/home/ec2-user/logdna.sh: line 18: logdna-agent: command not found
error reading information on service logdna-agent: No such file or directory
logdna-agent: unrecognized service.
Not sure why this is not working. When I echo RACK_ENV I get production as the value, so I know that is correct, but why is it failing my if statement and why is it not working properly?
Your use of echo will lead to malformed /etc/yum.repos.d/logdna.repo. To set it up properly, please use the following (indentations for EOL2 are important):
files:
  "/home/ec2-user/logdna.sh" :
    mode: "000777"
    owner: root
    group: root
    content: |
      #!/bin/sh
      RACK_ENV=$(/opt/elasticbeanstalk/bin/get-config environment -k RACK_ENV)
      echo "$RACK_ENV"
      if [ $RACK_ENV == production ]
      then
        rpm --import https://repo.logdna.com/logdna.gpg
        cat >/etc/yum.repos.d/logdna.repo << 'EOL2'
      [logdna]
      name=LogDNA packages
      baseurl=https://repo.logdna.com/el6/
      enabled=1
      gpgcheck=1
      gpgkey=https://repo.logdna.com/logdna.gpg
      EOL2
        LOGDNA_INGESTION_KEY=$(/opt/elasticbeanstalk/bin/get-config environment -k LOGDNA_INGESTION_KEY)
        yum -y install logdna-agent
        logdna-agent -k $LOGDNA_INGESTION_KEY # this is your unique Ingestion Key
        # /var/log is monitored/added by default (recursively), optionally add more dirs here
        logdna-agent -d /var/app/current/log/logstasher.log
        logdna-agent -d /var/app/containerfiles/logs/sidekiq.log
        # logdna-agent --hostname allows you to pass your AWS env metadata to LogDNA (remove # to uncomment the line below)
        # logdna-agent --hostname `{"Ref": "AWSEBEnvironmentName" }`
        # logdna -t option allows you to tag the host with tags (remove # to uncomment the line below)
        #logdna-agent -t `{"Ref": "AWSEBEnvironmentName" }`
        chkconfig logdna-agent on
        service logdna-agent start
      fi
For further troubleshooting please check /var/log/cfn-init-cmd.log file.
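For example, after a failed deployment you can SSH into one of the instances, check the value the script actually sees, and look at the last commands cfn-init ran (the get-config call is the same one used in the script above):

sudo /opt/elasticbeanstalk/bin/get-config environment -k RACK_ENV
sudo tail -n 100 /var/log/cfn-init-cmd.log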

How to pipe a github secret variable into a file

I have a GitHub pipeline and I'm piping a GitHub secret variable into a file, but I get the following error:
/home/runner/work/_temp/c6144b9a-c8e3-489a-ae97-795f592c57f0.sh: line 6: /config: Permission denied
echo: write error: Broken pipe
name: pipeline
on: [ push ]
env:
  KUBECONFIG_B64DATA: ${{ secrets.KUBECONFIG_B64DATA }}
jobs:
  deploy:
    name: Deploy
    # if: startsWith(github.ref, 'refs/tags/')
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@master
      - name: Setup Kubectl
        run: |
          sudo apt-get -y install curl
          curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
          chmod +x ./kubectl
          sudo mv ./kubectl /usr/local/bin/kubectl
          sudo echo $KUBECONFIG_B64DATA | base64 --decode > /config
          sudo mkdir -p ~/.kube
          sudo mv config /root/.kube/
EDIT:
I used a different folder to get past the permission issues (/tmp/config).
However, I still struggle to pipe a GitHub secret variable into a file, because GitHub masks the secret and I'm returned with an error:
base64: invalid input
I believe this is because when you echo a secret you simply get **** instead of the actual value.
I spent 4 hours on this issue, then found the solution, which was actually hidden in the comments.
As pointed out by @Kay, this was caused by the whitespace. Doing echo "${KUBECONFIG_B64DATA// /}" | base64 --decode > /tmp/config fixed the problem for me.
Just posting this as an official answer, so that it becomes easier for someone to find it later.
Change this line:
sudo echo $KUBECONFIG_B64DATA | base64 --decode > /config
To
sudo bash -c 'base64 --decode <<< "$KUBECONFIG_B64DATA" > /config'
Or
sudo tee /config > /dev/null < <(base64 --decode <<< "$KUBECONFIG_B64DATA")
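Putting the pieces from the question's edit and the comments together, a sketch of the whole step might look like this (writing to /tmp/config to avoid the permission issue and stripping whitespace before decoding; the ~/.kube/config destination is just one common choice):

      - name: Setup kubeconfig
        env:
          KUBECONFIG_B64DATA: ${{ secrets.KUBECONFIG_B64DATA }}
        run: |
          mkdir -p ~/.kube
          # remove whitespace that breaks base64, then decode into a user-writable path
          echo "${KUBECONFIG_B64DATA// /}" | base64 --decode > /tmp/config
          mv /tmp/config ~/.kube/config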

Errors adding Environment Variables to NodeJS Elastic Beanstalk

My configuration worked up until yesterday. I have added the nginx NodeJS https redirect extension from AWS. Now, when I try to add a new Environment Variable through the Elastic Beanstalk configuration, I get this error:
[Instance: i-0364b59cca36774a0] Command failed on instance. Return code: 137 Output: + rm -f /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf + service nginx stop Stopping nginx: /sbin/service: line 66: 27395 Killed env -i PATH="$PATH" TERM="$TERM" "${SERVICEDIR}/${SERVICE}" ${OPTIONS}. Hook /opt/elasticbeanstalk/hooks/configdeploy/post/99_kill_default_nginx.sh failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
When I look at the eb-activity.log, I see this error:
[2018-02-18T17:24:58.762Z] INFO [13848] - [Configuration update 1.0.61#112/ConfigDeployStage1/ConfigDeployPostHook/99_kill_default_nginx.sh] : Starting activity...
[2018-02-18T17:24:58.939Z] INFO [13848] - [Configuration update 1.0.61#112/ConfigDeployStage1/ConfigDeployPostHook/99_kill_default_nginx.sh] : Activity execution failed, because: + rm -f /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf
+ service nginx stop
Stopping nginx: /sbin/service: line 66: 14258 Killed env -i PATH="$PATH" TERM="$TERM" "${SERVICEDIR}/${SERVICE}" ${OPTIONS} (ElasticBeanstalk::ExternalInvocationError)
caused by: + rm -f /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf
+ service nginx stop
Stopping nginx: /sbin/service: line 66: 14258 Killed env -i PATH="$PATH" TERM="$TERM" "${SERVICEDIR}/${SERVICE}" ${OPTIONS} (Executor::NonZeroExitStatus)
What am I doing wrong? And what has changed recently, since this worked fine when I changed an environment variable a couple of months ago?
I had this problem as well and Amazon acknowledged the error in the documentation. This is a working restart script that you can use in your .ebextensions config file.
/opt/elasticbeanstalk/hooks/configdeploy/post/99_kill_default_nginx.sh:
  mode: "000755"
  owner: root
  group: root
  content: |
    #!/bin/bash -xe
    rm -f /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf
    status=`/sbin/status nginx`
    if [[ $status = *"start/running"* ]]; then
      echo "stopping nginx..."
      stop nginx
      echo "starting nginx..."
      start nginx
    else
      echo "nginx is not running... starting it..."
      start nginx
    fi
service nginx stop exits with status 137 (Killed).
Your script starts with: #!/bin/bash -xe
The parameter -e makes the script exit immediately whenever something exits with a non-zero status.
If you want to continue the execution, you need to catch the exit status (137).
/opt/elasticbeanstalk/hooks/configdeploy/post/99_kill_default_nginx.sh:
  mode: "000755"
  owner: root
  group: root
  content: |
    #!/bin/bash -xe
    rm -f /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf
    status=`/sbin/status nginx`
    if [[ $status = *"start/running"* ]]; then
      set +e
      service nginx stop
      exitStatus=$?
      if [ $exitStatus -ne 0 ] && [ $exitStatus -ne 137 ]
      then
        exit $exitStatus
      fi
      set -e
    fi
    service nginx start
The order of events looks like this to me:

1. Create a post-deploy hook to delete /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf
2. Run a container command to delete /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf
3. Run the post-deploy hook, which tries to delete /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf

So it doesn't seem surprising to me that the post-deploy script fails, as the file you are trying to delete probably doesn't exist.
I would try one of two things (a sketch of the second option is below):

1. Move the deletion of the temporary conf file from the container command to the 99_kill_default_nginx.sh script, then remove the whole container command section.
2. Remove the line rm -f /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf from the 99_kill_default_nginx.sh script.
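For example, a minimal sketch of the second option, starting from the hook file in the answer above and only dropping the rm line:

/opt/elasticbeanstalk/hooks/configdeploy/post/99_kill_default_nginx.sh:
  mode: "000755"
  owner: root
  group: root
  content: |
    #!/bin/bash -xe
    # rm line removed here; per the reasoning above, the container command
    # already deletes /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf
    status=`/sbin/status nginx`
    if [[ $status = *"start/running"* ]]; then
      echo "stopping nginx..."
      stop nginx
      echo "starting nginx..."
      start nginx
    else
      echo "nginx is not running... starting it..."
      start nginx
    fi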
/sbin/status nginx seems not to work anymore. I updated the script to use service nginx status:
/opt/elasticbeanstalk/hooks/configdeploy/post/99_kill_default_nginx.sh:
  mode: "000755"
  owner: root
  group: root
  content: |
    #!/bin/bash -xe
    rm -f /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf
    status=$(service nginx status)
    if [[ "$status" =~ "running" ]]; then
      echo "stopping nginx..."
      stop nginx
      echo "starting nginx..."
      start nginx
    else
      echo "nginx is not running... starting it..."
      start nginx
    fi
And the faulty script is STILL in amazon's docs... I wonder when they are going to fix it. It's been enough time already

How to install nginx 1.9.15 on amazon linux disto

I'm trying to install the latest version of nginx (>= 1.9.5) on a fresh Amazon Linux instance to make use of HTTP/2. I followed the instructions described here: http://nginx.org/en/linux_packages.html
I created a repo file /etc/yum.repos.d/nginx.repo with this content:
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/mainline/centos/7/$basearch/
gpgcheck=0
enabled=1
If I run yum update and yum install nginx I get this:
nginx x86_64 1:1.8.1-1.26.amzn1 amzn-main 557 k
It seems that it still fetches from the amzn-main repo. How do I install a newer version of nginx?
-- edit --
I added "priority=10" to the nginx.repo file, and now yum install nginx resolves 1.9.15, with this result (the updated repo file is sketched after the output):
Loaded plugins: priorities, update-motd, upgrade-helper
Resolving Dependencies
--> Running transaction check
---> Package nginx.x86_64 1:1.9.15-1.el7.ngx will be installed
--> Processing Dependency: systemd for package: 1:nginx-1.9.15-1.el7.ngx.x86_64
--> Processing Dependency: libpcre.so.1()(64bit) for package: 1:nginx-1.9.15-1.el7.ngx.x86_64
--> Finished Dependency Resolution
Error: Package: 1:nginx-1.9.15-1.el7.ngx.x86_64 (nginx)
Requires: libpcre.so.1()(64bit)
Error: Package: 1:nginx-1.9.15-1.el7.ngx.x86_64 (nginx)
Requires: systemd
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest
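For reference, the edited repo file described in the edit above would look something like this (only the priority line is new compared with the snippet earlier in the question; the priorities plugin is loaded according to the yum output):

[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/mainline/centos/7/$basearch/
gpgcheck=0
enabled=1
priority=10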
If you're using Amazon Linux 2, you have to install nginx from the AWS "Extras Repository". To see a list of the available packages:
# View list of packages to install
amazon-linux-extras list
You'll see a list similar to:
0 ansible2 disabled [ =2.4.2 ]
1 emacs disabled [ =25.3 ]
2 memcached1.5 disabled [ =1.5.1 ]
3 nginx1.12 disabled [ =1.12.2 ]
4 postgresql9.6 disabled [ =9.6.6 ]
5 python3 disabled [ =3.6.2 ]
6 redis4.0 disabled [ =4.0.5 ]
7 R3.4 disabled [ =3.4.3 ]
8 rust1 disabled [ =1.22.1 ]
9 vim disabled [ =8.0 ]
10 golang1.9 disabled [ =1.9.2 ]
11 ruby2.4 disabled [ =2.4.2 ]
12 nano disabled [ =2.9.1 ]
13 php7.2 disabled [ =7.2.0 ]
14 lamp-mariadb10.2-php7.2 disabled [ =10.2.10_7.2.0 ]
Use the amazon-linux-extras install command to install it, like:
sudo amazon-linux-extras install nginx1.12
More details are here: https://aws.amazon.com/amazon-linux-2/faqs/.
At the time of writing, the latest version of nginx available from the AWS yum repo is 1.8.
The best thing to do for now is to build any newer version from source.
The AWS Linux AMI already has the necessary build tools.
For example, based on nginx 1.10 (I've assumed you're logged in as the regular ec2-user; anything needing superuser rights is preceded with sudo):
cd /tmp #so we can clean-up easily
wget http://nginx.org/download/nginx-1.10.0.tar.gz
tar zxvf nginx-1.10.0.tar.gz && rm -f nginx-1.10.0.tar.gz
cd nginx-1.10.0
sudo yum install pcre-devel openssl-devel #required libs, not installed by default
./configure \
--prefix=/etc/nginx \
--conf-path=/etc/nginx/nginx.conf \
--pid-path=/var/run/nginx.pid \
--lock-path=/var/run/nginx.lock \
--with-http_ssl_module \
--with-http_v2_module \
--user=nginx \
--group=nginx
make
sudo make install
sudo groupadd nginx
sudo useradd -M -G nginx nginx
rm -rf nginx-1.10.0
You'll then want a service file, so that you can start/stop nginx, and load it on boot.
Here's one that matches the above config. Put it in /etc/rc.d/init.d/nginx:
#!/bin/sh
#
# nginx - this script starts and stops the nginx daemon
#
# chkconfig:   - 85 15
# description: NGINX is an HTTP(S) server, HTTP(S) reverse \
#              proxy and IMAP/POP3 proxy server
# processname: nginx
# config:      /etc/nginx/nginx.conf
# config:      /etc/sysconfig/nginx
# pidfile:     /var/run/nginx.pid

# Source function library.
. /etc/rc.d/init.d/functions

# Source networking configuration.
. /etc/sysconfig/network

# Check that networking is up.
[ "$NETWORKING" = "no" ] && exit 0

nginx="/etc/nginx/sbin/nginx"
prog=$(basename $nginx)

NGINX_CONF_FILE="/etc/nginx/nginx.conf"

[ -f /etc/sysconfig/nginx ] && . /etc/sysconfig/nginx

lockfile=/var/run/nginx.lock

make_dirs() {
    # make required directories
    user=`$nginx -V 2>&1 | grep "configure arguments:" | sed 's/[^*]*--user=\([^ ]*\).*/\1/g' -`
    if [ -z "`grep $user /etc/passwd`" ]; then
        useradd -M -s /bin/nologin $user
    fi
    options=`$nginx -V 2>&1 | grep 'configure arguments:'`
    for opt in $options; do
        if [ `echo $opt | grep '.*-temp-path'` ]; then
            value=`echo $opt | cut -d "=" -f 2`
            if [ ! -d "$value" ]; then
                # echo "creating" $value
                mkdir -p $value && chown -R $user $value
            fi
        fi
    done
}

start() {
    [ -x $nginx ] || exit 5
    [ -f $NGINX_CONF_FILE ] || exit 6
    make_dirs
    echo -n $"Starting $prog: "
    daemon $nginx -c $NGINX_CONF_FILE
    retval=$?
    echo
    [ $retval -eq 0 ] && touch $lockfile
    return $retval
}

stop() {
    echo -n $"Stopping $prog: "
    killproc $prog -QUIT
    retval=$?
    echo
    [ $retval -eq 0 ] && rm -f $lockfile
    return $retval
}

restart() {
    configtest || return $?
    stop
    sleep 1
    start
}

reload() {
    configtest || return $?
    echo -n $"Reloading $prog: "
    killproc $nginx -HUP
    RETVAL=$?
    echo
}

force_reload() {
    restart
}

configtest() {
    $nginx -t -c $NGINX_CONF_FILE
}

rh_status() {
    status $prog
}

rh_status_q() {
    rh_status >/dev/null 2>&1
}

case "$1" in
    start)
        rh_status_q && exit 0
        $1
        ;;
    stop)
        rh_status_q || exit 0
        $1
        ;;
    restart|configtest)
        $1
        ;;
    reload)
        rh_status_q || exit 7
        $1
        ;;
    force-reload)
        force_reload
        ;;
    status)
        rh_status
        ;;
    condrestart|try-restart)
        rh_status_q || exit 0
        ;;
    *)
        echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload|configtest}"
        exit 2
esac
Set the service file to be executable:
sudo chmod 755 /etc/rc.d/init.d/nginx
Now you can start it with:
sudo service nginx start
To load it automatically on boot:
sudo chkconfig nginx on
Finally, don't forget to edit /etc/nginx/nginx.conf to match your requirements and run sudo service nginx reload to refresh the changes.
Note, there is no 1.10 where you're looking. You can see the list here
http://nginx.org/packages/mainline/centos/7/x86_64/RPMS/
After you run yum update, use yum search nginx to see the different versions available and choose a specific one:
yum search nginx
On CentOS 6 this gives:
nginx.x86_64 : A high performance web server and reverse proxy server
nginx16.x86_64 : A high performance web server and reverse proxy server
nginx18.x86_64 : A high performance web server and reverse proxy server
I have two versions to choose from, 1.6 and 1.8.
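So to pick one of those packaged versions explicitly, you could run, for example:

sudo yum install nginx18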
You're getting that error because those nginx RPMs are built for RHEL 7, not Amazon Linux. Amazon Linux is a weird hybrid of RHEL 6, RHEL 7, and Fedora. You should contact Amazon and ask them to create a proper nginx19 RPM built specifically for their distro.