I have the following config
0_logdna.config
commands:
  01_install_logdna:
    command: "/home/ec2-user/logdna.sh"
  02_restart_logdna:
    command: "service logdna-agent restart"
files:
  "/home/ec2-user/logdna.sh" :
    mode: "000777"
    owner: root
    group: root
    content: |
      #!/bin/sh
      RACK_ENV=$(/opt/elasticbeanstalk/bin/get-config environment -k RACK_ENV)
      echo "$RACK_ENV"
      if [ $RACK_ENV == production ]
      then
      rpm --import https://repo.logdna.com/logdna.gpg
      echo "[logdna]
      name=LogDNA packages
      baseurl=https://repo.logdna.com/el6/
      enabled=1
      gpgcheck=1
      gpgkey=https://repo.logdna.com/logdna.gpg" | sudo tee /etc/yum.repos.d/logdna.repo
      LOGDNA_INGESTION_KEY=$(/opt/elasticbeanstalk/bin/get-config environment -k LOGDNA_INGESTION_KEY)
      yum -y install logdna-agent
      logdna-agent -k $LOGDNA_INGESTION_KEY # this is your unique Ingestion Key
      # /var/log is monitored/added by default (recursively), optionally add more dirs here
      logdna-agent -d /var/app/current/log/logstasher.log
      logdna-agent -d /var/app/containerfiles/logs/sidekiq.log
      # logdna-agent --hostname allows you to pass your AWS env metadata to LogDNA (remove # to uncomment the line below)
      # logdna-agent --hostname `{"Ref": "AWSEBEnvironmentName" }`
      # logdna -t option allows you to tag the host with tags (remove # to uncomment the line below)
      #logdna-agent -t `{"Ref": "AWSEBEnvironmentName" }`
      chkconfig logdna-agent on
      service logdna-agent start
      fi
I want to run this config only for my production environment, but each time I deploy this code I get an error that says:
ERROR [Instance: i-091794aa00f84ab36,i-05b6d0824e7a0f5da] Command failed on instance. Return code: 1 Output: (TRUNCATED)...not found
/home/ec2-user/logdna.sh: line 17: logdna-agent: command not found
/home/ec2-user/logdna.sh: line 18: logdna-agent: command not found
error reading information on service logdna-agent: No such file or directory
logdna-agent: unrecognized service.
Not sure why this is not working. When I echo RACK_ENV I get production as the value, so I know that part is correct. So why is it failing my if statement, and why is it not working properly?
Your use of echo will lead to a malformed /etc/yum.repos.d/logdna.repo. To set it up properly, please use the following (the indentation around EOL2 is important):
files:
  "/home/ec2-user/logdna.sh" :
    mode: "000777"
    owner: root
    group: root
    content: |
      #!/bin/sh
      RACK_ENV=$(/opt/elasticbeanstalk/bin/get-config environment -k RACK_ENV)
      echo "$RACK_ENV"
      if [ "$RACK_ENV" = "production" ]
      then
        rpm --import https://repo.logdna.com/logdna.gpg
        cat >/etc/yum.repos.d/logdna.repo << 'EOL2'
      [logdna]
      name=LogDNA packages
      baseurl=https://repo.logdna.com/el6/
      enabled=1
      gpgcheck=1
      gpgkey=https://repo.logdna.com/logdna.gpg
      EOL2
        LOGDNA_INGESTION_KEY=$(/opt/elasticbeanstalk/bin/get-config environment -k LOGDNA_INGESTION_KEY)
        yum -y install logdna-agent
        logdna-agent -k $LOGDNA_INGESTION_KEY # this is your unique Ingestion Key
        # /var/log is monitored/added by default (recursively), optionally add more dirs here
        logdna-agent -d /var/app/current/log/logstasher.log
        logdna-agent -d /var/app/containerfiles/logs/sidekiq.log
        # logdna-agent --hostname allows you to pass your AWS env metadata to LogDNA (remove # to uncomment the line below)
        # logdna-agent --hostname `{"Ref": "AWSEBEnvironmentName" }`
        # logdna -t option allows you to tag the host with tags (remove # to uncomment the line below)
        #logdna-agent -t `{"Ref": "AWSEBEnvironmentName" }`
        chkconfig logdna-agent on
        service logdna-agent start
      fi
For further troubleshooting, check the /var/log/cfn-init-cmd.log file.
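As a side note, the comparison itself is safer with quoting and the POSIX `=` operator, since the script's shebang is /bin/sh. A minimal sketch (the value is hard-coded here for illustration; on a real instance get-config supplies it):

```shell
#!/bin/sh
# Hypothetical value for illustration; on the instance get-config supplies it
RACK_ENV="production"
# Quote the variable and use POSIX '=' so the test also works when the value
# is empty and under strict /bin/sh implementations ('==' is a bashism)
if [ "$RACK_ENV" = "production" ]; then
  echo "production branch taken"
fi
```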
I'm trying to create a custom RedHat 8 image using EC2 Image Builder. In one of the recipes added to the pipeline, I create the ansible user and use S3 to download the authorized_keys file and a custom sudoers.d file. The issue I'm facing is that the sudoers file called "ansible" gets copied just fine, but the authorized_keys doesn't: CloudWatch says the recipe executes without errors and the files are downloaded, yet when I create an EC2 instance from this AMI, the authorized_keys file is not in the path.
What's happening?
This is the recipe I'm using:
name: USER-Ansible
description: Creation and configuration of the ansible user
schemaVersion: 1.0
phases:
  - name: build
    steps:
      - name: UserCreate
        action: ExecuteBash
        inputs:
          commands:
            - groupadd -g 2004 ux
            - useradd -u 4134 -g ux -c "AWX Ansible" -m -d /home/ansible ansible
            - mkdir /home/ansible/.ssh
      - name: FilesDownload
        action: S3Download
        inputs:
          - source: s3://[REDACTED]/authorized_keys
            destination: /home/ansible/.ssh/authorized_keys
            expectedBucketOwner: [REDACTED]
            overwrite: false
          - source: s3://[REDACTED]/ansible
            destination: /etc/sudoers.d/ansible
            expectedBucketOwner: [REDACTED]
            overwrite: false
      - name: FilesConfiguration
        action: ExecuteBash
        inputs:
          commands:
            - chown ansible:ux /home/ansible/.ssh/authorized_keys; chmod 600 /home/ansible/.ssh/authorized_keys
            - chown ansible:ux /home/ansible/.ssh; chmod 700 /home/ansible/.ssh
            - chown root:root /etc/sudoers.d/ansible; chmod 440 /etc/sudoers.d/ansible
Thanks in advance!
AWS EC2 Image Builder cleans up afterwards
https://docs.aws.amazon.com/imagebuilder/latest/userguide/security-best-practices.html#post-build-cleanup
# Clean up for ssh files
SSH_FILES=(
  "/etc/ssh/ssh_host_rsa_key"
  "/etc/ssh/ssh_host_rsa_key.pub"
  "/etc/ssh/ssh_host_ecdsa_key"
  "/etc/ssh/ssh_host_ecdsa_key.pub"
  "/etc/ssh/ssh_host_ed25519_key"
  "/etc/ssh/ssh_host_ed25519_key.pub"
  "/root/.ssh/authorized_keys"
)
if [[ -f {{workingDirectory}}/skip_cleanup_ssh_files ]]; then
  echo "Skipping cleanup of ssh files"
else
  echo "Cleaning up ssh files"
  cleanup "${SSH_FILES[@]}"
  USERS=$(ls /home/)
  for user in $USERS; do
    echo Deleting /home/"$user"/.ssh/authorized_keys;
    sudo find /home/"$user"/.ssh/authorized_keys -type f -exec shred -zuf {} \;
  done
  for user in $USERS; do
    if [[ -f /home/"$user"/.ssh/authorized_keys ]]; then
      echo Failed to delete /home/"$user"/.ssh/authorized_keys;
      exit 1
    fi;
  done;
fi;
You can skip individual sections of the clean up script.
https://docs.aws.amazon.com/imagebuilder/latest/userguide/security-best-practices.html#override-linux-cleanup-script
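Per that second link, a build component can drop a marker file that the cleanup script tests for; a minimal sketch, using /tmp as a stand-in for the resolved {{workingDirectory}}:

```shell
# Create the marker that makes the cleanup script skip its SSH section.
# {{workingDirectory}} is substituted by Image Builder at run time;
# /tmp stands in for it in this sketch.
WORKING_DIR="/tmp"
touch "$WORKING_DIR/skip_cleanup_ssh_files"
# The cleanup script's [[ -f ... ]] test will now take the skip branch
[ -f "$WORKING_DIR/skip_cleanup_ssh_files" ] && echo "Skipping cleanup of ssh files"
```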
I have a Dockerfile:
FROM public.ecr.aws/bitnami/node:15 AS stage-01
COPY package.json /app/package.json
COPY package-lock.json /app/package-lock.json
WORKDIR /app
RUN npm ci
FROM stage-01 AS stage-02
COPY src /app/src
COPY public /app/public
COPY tsconfig.json /app/tsconfig.json
WORKDIR /app
RUN PUBLIC_URL=/myapp/web npm run build
FROM public.ecr.aws/bitnami/nginx:1.20
USER 1001
COPY --from=stage-02 /app/build /app/build
COPY nginx.conf /opt/bitnami/nginx/conf/server_blocks/nginx.conf
COPY ./env.sh /app/build
COPY window.env /app/build
EXPOSE 8080
WORKDIR /app/build
CMD ["/bin/sh", "-c", "/app/build/env.sh && nginx -g \"daemon off;\""]
If I build this image locally it starts normally and does what it has to do.
My local docker version:
Client: Docker Engine - Community
Version: 20.10.7
API version: 1.41
Go version: go1.13.15
Git commit: f0df350
Built: Wed Jun 2 11:56:40 2021
OS/Arch: linux/amd64
Context: default
Experimental: true
Server: Docker Engine - Community
Engine:
Version: 20.10.8
API version: 1.41 (minimum version 1.12)
Go version: go1.16.6
Git commit: 75249d8
Built: Fri Jul 30 19:52:16 2021
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.4.9
GitCommit: e25210fe30a0a703442421b0f60afac609f950a3
runc:
Version: 1.0.1
GitCommit: v1.0.1-0-g4144b63
docker-init:
Version: 0.19.0
GitCommit: de40ad0
If I build it in CodeBuild it does not start:
/app/build/env.sh: 4: /app/build/env.sh: cannot create ./env-config.js: Permission denied
This is the image I am using in CodeBuild: aws/codebuild/amazonlinux2-x86_64-standard:3.0
I have also run the same script locally and still get no error.
What could be the cause of this? If you have something in mind please let me know; otherwise I will post more code.
This is my env.sh
#!/usr/bin/env sh
# Add assignment
echo "window._env_ = {" > ./env-config.js
# Read each line in .env file
# Each line represents key=value pairs
while read -r line || [ -n "$line" ];
do
  echo "$line"
  # Split env variables by character `=`
  if printf '%s\n' "$line" | grep -q -e '='; then
    varname=$(printf '%s\n' "$line" | sed -e 's/=.*//')
    varvalue=$(printf '%s\n' "$line" | sed -e 's/^[^=]*=//')
  fi
  # Read value of current variable if exists as Environment variable
  eval value=\"\$"$varname"\"
  # Otherwise use value from .env file
  [ -z "$value" ] && value=${varvalue}
  echo name: "$varname", value: "$value"
  # Append configuration property to JS file
  echo " $varname: \"$value\"," >> ./env-config.js
done < window.env
echo "}" >> ./env-config.js
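To see what the script produces, here is a trimmed re-run of its loop against a one-line window.env (the API_URL variable is made up for the demo, and the sed splitting is replaced with equivalent parameter expansion):

```shell
#!/usr/bin/env sh
# Trimmed re-run of env.sh's loop with a sample window.env (API_URL is made up)
cd "$(mktemp -d)"
printf 'API_URL=https://example.com\n' > window.env
echo "window._env_ = {" > ./env-config.js
while read -r line || [ -n "$line" ]; do
  varname=${line%%=*}            # part before the first '='
  varvalue=${line#*=}            # part after the first '='
  eval value=\"\$"$varname"\"    # prefer an exported variable of the same name
  [ -z "$value" ] && value=$varvalue
  echo "  $varname: \"$value\"," >> ./env-config.js
done < window.env
echo "}" >> ./env-config.js
cat env-config.js                # shows the generated window._env_ object
```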
buildspec:
version: 0.2
env:
  git-credential-helper: yes
  secrets-manager:
    GITHUB_TOKEN: "github:GITHUB_TOKEN"
phases:
  install:
    runtime-versions:
      nodejs: 12
    commands:
      - npm install
  build:
    commands:
      - echo Build started on `date`
      - GITHUB_USERNAME=${GITHUB_USERNAME} GITHUB_EMAIL=${GITHUB_EMAIL} GITHUB_TOKEN=${GITHUB_TOKEN} AWS_REGION=${AWS_DEFAULT_REGION} GITHUB_REPOSITORY_URL=${GITHUB_REPOSITORY_URL} ECR_REPOSITORY_URL=${ECR_REPOSITORY_URL} ENV=${ENV} node release.js
My build project terraform configuration:
resource "aws_codebuild_project" "dashboard_image" {
  name         = var.project.name
  service_role = var.codebuild_role_arn

  artifacts {
    type = "CODEPIPELINE"
  }

  environment {
    compute_type    = "BUILD_GENERAL1_SMALL"
    image           = "aws/codebuild/amazonlinux2-x86_64-standard:3.0"
    type            = "LINUX_CONTAINER"
    privileged_mode = true

    environment_variable {
      name  = "GITHUB_REPOSITORY_URL"
      value = "https://github.com/${var.project.github_organization_name}/${var.project.github_repository_name}.git"
    }
    environment_variable {
      name  = "ECR_REPOSITORY_URL"
      value = var.project.ecr_repository_url
    }
    environment_variable {
      name  = "ECR_IMAGE_NAME"
      value = var.project.ecr_image_name
    }
    environment_variable {
      name  = "ENV"
      value = "prod"
    }
  }

  source {
    type      = "CODEPIPELINE"
    buildspec = "buildspec.yml"
  }
}
It's all about your Dockerfile and the user permissions in it. Try running docker run public.ecr.aws/bitnami/nginx:1.20 whoami and you will see that this image does not run as the default root user. It will be the same if you exec something inside this container; you have to add --user root to run or exec commands. See the section "Why use a non-root container?" in the Bitnami Nginx image documentation.
That's why you don't have permission to create a file inside the /app folder: its owner is root, inherited from the first public.ecr.aws/bitnami/node:15 image (which runs as root by default).
To make it work in your case, change the line USER 1001 to USER root (or a user with the proper permissions) and double-check that the env.sh file has execute permission (chmod +x env.sh).
This is the change I had to make to my Dockerfile in order to make it work:
FROM public.ecr.aws/bitnami/node:15 AS stage-01
COPY package.json /app/package.json
COPY package-lock.json /app/package-lock.json
WORKDIR /app
RUN npm ci
FROM stage-01 AS stage-02
COPY src /app/src
COPY public /app/public
COPY tsconfig.json /app/tsconfig.json
WORKDIR /app
RUN PUBLIC_URL=/myapp/web npm run build
FROM public.ecr.aws/bitnami/nginx:1.20
USER root
COPY --from=stage-02 /app/build /app/build
COPY nginx.conf /opt/bitnami/nginx/conf/server_blocks/nginx.conf
COPY ./env.sh /app/build
COPY window.env /app/build
RUN chmod 777 /app/build/env-config.js
EXPOSE 8080
WORKDIR /app/build
USER 1001
CMD ["/bin/sh", "-c", "/app/build/env.sh && nginx -g \"daemon off;\""]
It is probably due to the CodeBuild permissions when cloning the repository.
The 777 is just temporary; later I will probably test whether I can restrict the permissions.
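The two preconditions are easy to verify in isolation: the script needs execute permission on itself and write permission on its working directory. A minimal sketch with a stub script standing in for env.sh (paths are illustrative):

```shell
# Verify the two things env.sh needs: +x on the script, and a writable
# working directory in which to create env-config.js (stub script below)
dir=$(mktemp -d)
cd "$dir"
printf '#!/usr/bin/env sh\necho "window._env_ = {" > ./env-config.js\necho "}" >> ./env-config.js\n' > env.sh
chmod +x env.sh      # without this, ./env.sh fails with "Permission denied"
./env.sh
ls -l env-config.js  # the file was created, so the directory is writable
```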
I have a GitHub pipeline and I'm piping a GitHub secret variable into a file, but I get the following error:
/home/runner/work/_temp/c6144b9a-c8e3-489a-ae97-795f592c57f0.sh: line 6: /config: Permission denied
echo: write error: Broken pipe
name: pipeline
on: [ push ]
env:
  KUBECONFIG_B64DATA: ${{ secrets.KUBECONFIG_B64DATA }}
jobs:
  deploy:
    name: Deploy
    # if: startsWith(github.ref, 'refs/tags/')
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@master
      - name: Setup Kubectl
        run: |
          sudo apt-get -y install curl
          curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
          chmod +x ./kubectl
          sudo mv ./kubectl /usr/local/bin/kubectl
          sudo echo $KUBECONFIG_B64DATA | base64 --decode > /config
          sudo mkdir -p ~/.kube
          sudo mv config /root/.kube/
EDIT:
I used a different folder to get past the permission issues (/tmp/config).
However, I still struggle to pipe a GitHub secret variable into a file, because GitHub masks the secret and I'm returned with an error:
base64: invalid input
I believe this is because when you echo a secret you simply get **** instead of the actual value.
I spent 4 hours on this issue, then found the solution, which was actually hidden in the comments.
As pointed out by @Kay, this was caused by the whitespace. Doing echo "${KUBECONFIG_B64DATA// /}" | base64 --decode > /tmp/config fixed the problem for me.
Just posting this as an official answer so that it becomes easier for someone to find later.
Change this line:
sudo echo $KUBECONFIG_B64DATA | base64 --decode > /config
To
sudo bash -c 'base64 --decode <<< "$KUBECONFIG_B64DATA" > /config'
Or
sudo tee /config > /dev/null < <(base64 --decode <<< "$KUBECONFIG_B64DATA")
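The whitespace failure mode is easy to reproduce with a toy value (aGVsbG8= is base64 for "hello"; the injected space mimics stray whitespace in a pasted secret, and ${VAR// /} is bash syntax):

```shell
# A space inside the encoded data makes GNU base64 fail with "invalid input",
# so strip the spaces first. Toy value, not a real kubeconfig.
KUBECONFIG_B64DATA="aGVs bG8="
echo "${KUBECONFIG_B64DATA// /}" | base64 --decode    # prints: hello
```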
I am trying to follow these instructions to force SSL on AWS Beanstalk.
I believe this is the important part.
files:
  "/tmp/45_nginx_https_rw.sh":
    owner: root
    group: root
    mode: "000644"
    content: |
      #! /bin/bash
      CONFIGURED=`grep -c "return 301 https" /opt/elasticbeanstalk/support/conf/webapp_healthd.conf`
      if [ $CONFIGURED = 0 ]
      then
        sed -i '/listen 80;/a \ if ($http_x_forwarded_proto = "http") { return 301 https://$host$request_uri; }\n' /opt/elasticbeanstalk/support/conf/webapp_healthd.conf
        logger -t nginx_rw "https rewrite rules added"
        exit 0
      else
        logger -t nginx_rw "https rewrite rules already set"
        exit 0
      fi

container_commands:
  00_appdeploy_rewrite_hook:
    command: cp -v /tmp/45_nginx_https_rw.sh /opt/elasticbeanstalk/hooks/appdeploy/enact
  01_configdeploy_rewrite_hook:
    command: cp -v /tmp/45_nginx_https_rw.sh /opt/elasticbeanstalk/hooks/configdeploy/enact
  02_rewrite_hook_perms:
    command: chmod 755 /opt/elasticbeanstalk/hooks/appdeploy/enact/45_nginx_https_rw.sh /opt/elasticbeanstalk/hooks/configdeploy/enact/45_nginx_https_rw.sh
  03_rewrite_hook_ownership:
    command: chown root:users /opt/elasticbeanstalk/hooks/appdeploy/enact/45_nginx_https_rw.sh /opt/elasticbeanstalk/hooks/configdeploy/enact/45_nginx_https_rw.sh
For some reason the file is not being uploaded or created.
I also tried adding sudo to the container commands starting with 00 and 01.
I also manually SSHed into the server and created the file by hand, then ran the aws elasticbeanstalk restart-app-server --environment-name command locally to restart the server, and this still did not work.
Any help would be greatly appreciated.
I'm trying to install the latest version of nginx (>= 1.9.5) on a fresh Amazon Linux instance to make use of HTTP/2. I followed the instructions described here: http://nginx.org/en/linux_packages.html
I created a repo file /etc/yum.repos.d/nginx.repo with this content:
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/mainline/centos/7/$basearch/
gpgcheck=0
enabled=1
If I run yum update and yum install nginx I get this:
nginx x86_64 1:1.8.1-1.26.amzn1 amzn-main 557 k
It seems that it still fetches from the amzn-main repo. How do I install a newer version of nginx?
-- edit --
I added priority=10 to the nginx.repo file and now I can install 1.9.15 with yum install nginx, with this result:
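For reference, this is what the complete repo file looks like with that line added; the priority option only takes effect when the yum priorities plugin is installed and enabled (a lower number wins):

```ini
# /etc/yum.repos.d/nginx.repo -- priority=10 makes this repo win over amzn-main
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/mainline/centos/7/$basearch/
gpgcheck=0
enabled=1
priority=10
```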
Loaded plugins: priorities, update-motd, upgrade-helper
Resolving Dependencies
--> Running transaction check
---> Package nginx.x86_64 1:1.9.15-1.el7.ngx will be installed
--> Processing Dependency: systemd for package: 1:nginx-1.9.15-1.el7.ngx.x86_64
--> Processing Dependency: libpcre.so.1()(64bit) for package: 1:nginx-1.9.15-1.el7.ngx.x86_64
--> Finished Dependency Resolution
Error: Package: 1:nginx-1.9.15-1.el7.ngx.x86_64 (nginx)
Requires: libpcre.so.1()(64bit)
Error: Package: 1:nginx-1.9.15-1.el7.ngx.x86_64 (nginx)
Requires: systemd
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest
If you're using Amazon Linux 2, you have to install nginx from the AWS "Extras Repository". To see a list of the packages available:
# View list of packages to install
amazon-linux-extras list
You'll see a list similar to:
0 ansible2 disabled [ =2.4.2 ]
1 emacs disabled [ =25.3 ]
2 memcached1.5 disabled [ =1.5.1 ]
3 nginx1.12 disabled [ =1.12.2 ]
4 postgresql9.6 disabled [ =9.6.6 ]
5 python3 disabled [ =3.6.2 ]
6 redis4.0 disabled [ =4.0.5 ]
7 R3.4 disabled [ =3.4.3 ]
8 rust1 disabled [ =1.22.1 ]
9 vim disabled [ =8.0 ]
10 golang1.9 disabled [ =1.9.2 ]
11 ruby2.4 disabled [ =2.4.2 ]
12 nano disabled [ =2.9.1 ]
13 php7.2 disabled [ =7.2.0 ]
14 lamp-mariadb10.2-php7.2 disabled [ =10.2.10_7.2.0 ]
Use the amazon-linux-extras install command to install it, like:
sudo amazon-linux-extras install nginx1.12
More details are here: https://aws.amazon.com/amazon-linux-2/faqs/.
At the time of writing, the latest version of nginx available from the AWS yum repo is 1.8.
The best thing to do for now is to build any newer version from source.
The AWS Linux AMI already has the necessary build tools.
For example, for Nginx 1.10 (I've assumed you're logged in as the regular ec2-user; anything needing superuser rights is preceded with sudo):
cd /tmp #so we can clean-up easily
wget http://nginx.org/download/nginx-1.10.0.tar.gz
tar zxvf nginx-1.10.0.tar.gz && rm -f nginx-1.10.0.tar.gz
cd nginx-1.10.0
sudo yum install pcre-devel openssl-devel #required libs, not installed by default
./configure \
--prefix=/etc/nginx \
--conf-path=/etc/nginx/nginx.conf \
--pid-path=/var/run/nginx.pid \
--lock-path=/var/run/nginx.lock \
--with-http_ssl_module \
--with-http_v2_module \
--user=nginx \
--group=nginx
make
sudo make install
sudo groupadd nginx
sudo useradd -M -G nginx nginx
rm -rf nginx-1.10.0
You'll then want a service file, so that you can start/stop nginx, and load it on boot.
Here's one that matches the above config. Put it in /etc/rc.d/init.d/nginx:
#!/bin/sh
#
# nginx - this script starts and stops the nginx daemon
#
# chkconfig: - 85 15
# description: NGINX is an HTTP(S) server, HTTP(S) reverse \
#              proxy and IMAP/POP3 proxy server
# processname: nginx
# config: /etc/nginx/nginx.conf
# config: /etc/sysconfig/nginx
# pidfile: /var/run/nginx.pid

# Source function library.
. /etc/rc.d/init.d/functions

# Source networking configuration.
. /etc/sysconfig/network

# Check that networking is up.
[ "$NETWORKING" = "no" ] && exit 0

nginx="/etc/nginx/sbin/nginx"
prog=$(basename $nginx)

NGINX_CONF_FILE="/etc/nginx/nginx.conf"

[ -f /etc/sysconfig/nginx ] && . /etc/sysconfig/nginx

lockfile=/var/run/nginx.lock

make_dirs() {
    # make required directories
    user=`$nginx -V 2>&1 | grep "configure arguments:" | sed 's/[^*]*--user=\([^ ]*\).*/\1/g' -`
    if [ -z "`grep $user /etc/passwd`" ]; then
        useradd -M -s /bin/nologin $user
    fi
    options=`$nginx -V 2>&1 | grep 'configure arguments:'`
    for opt in $options; do
        if [ `echo $opt | grep '.*-temp-path'` ]; then
            value=`echo $opt | cut -d "=" -f 2`
            if [ ! -d "$value" ]; then
                # echo "creating" $value
                mkdir -p $value && chown -R $user $value
            fi
        fi
    done
}

start() {
    [ -x $nginx ] || exit 5
    [ -f $NGINX_CONF_FILE ] || exit 6
    make_dirs
    echo -n $"Starting $prog: "
    daemon $nginx -c $NGINX_CONF_FILE
    retval=$?
    echo
    [ $retval -eq 0 ] && touch $lockfile
    return $retval
}

stop() {
    echo -n $"Stopping $prog: "
    killproc $prog -QUIT
    retval=$?
    echo
    [ $retval -eq 0 ] && rm -f $lockfile
    return $retval
}

restart() {
    configtest || return $?
    stop
    sleep 1
    start
}

reload() {
    configtest || return $?
    echo -n $"Reloading $prog: "
    killproc $nginx -HUP
    RETVAL=$?
    echo
}

force_reload() {
    restart
}

configtest() {
    $nginx -t -c $NGINX_CONF_FILE
}

rh_status() {
    status $prog
}

rh_status_q() {
    rh_status >/dev/null 2>&1
}

case "$1" in
    start)
        rh_status_q && exit 0
        $1
        ;;
    stop)
        rh_status_q || exit 0
        $1
        ;;
    restart|configtest)
        $1
        ;;
    reload)
        rh_status_q || exit 7
        $1
        ;;
    force-reload)
        force_reload
        ;;
    status)
        rh_status
        ;;
    condrestart|try-restart)
        rh_status_q || exit 0
        ;;
    *)
        echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload|configtest}"
        exit 2
esac
Set the service file to be executable:
sudo chmod 755 /etc/rc.d/init.d/nginx
Now you can start it with:
sudo service nginx start
To load it automatically on boot:
sudo chkconfig nginx on
Finally, don't forget to edit /etc/nginx/nginx.conf to match your requirements and run sudo service nginx reload to refresh the changes.
Note, there is no 1.10 where you're looking. You can see the list here
http://nginx.org/packages/mainline/centos/7/x86_64/RPMS/
After you yum update, use yum search nginx to see the different versions available and choose a specific one:
yum search nginx
On CentOS 6 this gives:
nginx.x86_64 : A high performance web server and reverse proxy server
nginx16.x86_64 : A high performance web server and reverse proxy server
nginx18.x86_64 : A high performance web server and reverse proxy server
Here there are two newer versions to choose from, 1.6 and 1.8.
You're getting that error because those nginx RPMs are built for RHEL7, not Amazon Linux. Amazon Linux is a weird hybrid of RHEL6, RHEL7, and Fedora. You should contact Amazon and ask them to create a proper nginx19 RPM specifically built for their distro.