AWS Elastic Beanstalk - How to pull environment name into .ebextensions script

I have a grails app that I deploy to AWS Elastic Beanstalk through Jenkins. I want to add a splunk forwarder to my project so I can keep track of my logs outside of AWS and set up easy notifications.
The problem is, I have multiple environments of the app running (dev, pre-prod, prod, etc.), which is fine because you can just change the environment name for the forwarder and easily sort by it in Splunk.
However, the same .ebextensions file has to be used across all the environments, so I need a way to set the environment name to whatever AWS has it set to. Is there a way I can easily do this that I'm overlooking?
Start of the script:
container_commands:
  01install-splunk:
    command: /usr/local/bin/install-splunk.sh
  02set-splunk-outputs:
    command: /usr/local/bin/set_splunk_outputs.sh
    env:
      SPLUNK_SERVER_HOST: "splunk.host"
  03add-inputs-to-splunk:
    command: /usr/local/bin/add-inputs-to-splunk.sh
    env:
      ENVIRONMENT_NAME: "Development"
    cwd: /root
    ignoreErrors: false
The ENVIRONMENT_NAME variable passed to the third script is what I want to change based on which environment is being deployed. Can I set this in Jenkins or pull it through AWS somehow?

You can try the steps below.
Configure your AWS Elastic Beanstalk environment with the environment variable
ENVIRONMENT_NAME = 'Development' or 'QA' or 'Prod'
Please refer to the official AWS docs for this.
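For example, one way to set that environment property per environment is with the EB CLI (a rough sketch; the environment name and value here are hypothetical, and you can also set the property in the EB console under the environment's software configuration):
eb use my-app-dev                        # select the target environment
eb setenv ENVIRONMENT_NAME=Development   # set the environment property on it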
Then update the config as below:
container_commands:
  01install-splunk:
    command: /usr/local/bin/install-splunk.sh
  02set-splunk-outputs:
    command: /usr/local/bin/set_splunk_outputs.sh
    env:
      SPLUNK_SERVER_HOST: "splunk.host"
  03add-inputs-to-splunk:
    command: /usr/local/bin/add-inputs-to-splunk.sh
    env:
      ENVIRONMENT_NAME: "$ENVIRONMENT_NAME"
    cwd: /root
    ignoreErrors: false
Hope this works for you.

Related

NoCredentialProviders error when running "docker compose up" with AWS ECS integration

I'm getting the following error when I try to run docker compose up to deploy my infrastructure to AWS using Docker's ECS integration. Note that I'm running this on Pop!_OS 21.10, which is based on Ubuntu.
NoCredentialProviders: no valid providers in chain. Deprecated. For verbose messaging see aws.Config.CredentialsChainVerboseErrors
Things I've tried, based on an exhaustive search of SO and other sites:
Verified that my ~/.aws/config and ~/.aws/credentials files are formatted correctly, are in the proper place, and have the correct permissions
Verified that the aws cli works fine
Verified that AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_REGION are all set correctly
Tried copying the config and credentials to /root/.aws
Tried setting AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_REGION in the root user's environment
Created /etc/systemd/system/docker.service.d/aws-credentials.conf and populated it with:
[Service]
Environment="AWS_ACCESS_KEY_ID=********************"
Environment="AWS_SECRET_ACCESS_KEY=****************************************"
Ran docker -l debug compose up (the only extra information it provides is DEBUG deploying on AWS with region="us-east-1")
I'm running out of options. If anyone has any other ideas to try, I'd love to hear it. Thanks!
Update: I've also now tried the following, with no luck:
Tried setting Environment="AWS_SHARED_CREDENTIALS_FILE=/home/kespan/.aws/credentials" in /etc/systemd/system/docker.service.d/override.conf
After remembering my IAM account has MFA enabled, generated a token and added Environment="AWS_SESSION_TOKEN=..." to override.conf
Also to note - each time after I've added/modified files under /etc/systemd/system/docker.service.d/ I've run:
sudo systemctl daemon-reload
sudo systemctl restart docker
Edit:
Here's one of the Dockerfiles (both the scraper and scheduler use an identical Dockerfile):
FROM denoland/deno:alpine
WORKDIR /app
USER deno
COPY deps.ts .
RUN deno cache --unstable --no-check deps.ts
COPY . .
RUN deno cache --unstable --no-check mod.ts
RUN mkdir -p /var/tmp/log
CMD ["run", "--unstable", "--allow-all", "--no-check", "mod.ts"]
Here's my docker-compose (some bits redacted):
version: '3'
services:
  grafana:
    container_name: grafana
    image: grafana/grafana
    ports:
      - "3000:3000"
    volumes:
      - grafana:/var/lib/grafana
    deploy:
      replicas: 1
  scheduler:
    image: scheduler
    x-aws-pull-credentials: "arn..."
    container_name: scheduler
    environment:
      DB_CONNECTION_STRING: "postgres://..."
      SQS_URL: "..."
      SQS_REGION: "us-east-1"
      AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
      AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}
    deploy:
      replicas: 1
  scraper:
    image: scraper
    x-aws-pull-credentials: "arn..."
    container_name: scraper
    environment:
      DB_CONNECTION_STRING: "postgres://..."
      SQS_URL: "..."
      SQS_REGION: "us-east-1"
      AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
      AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}
    deploy:
      replicas: 1
volumes:
  grafana:
Have you attempted to use the Amazon ECS Local Container Endpoints tool that AWS Labs provides? It allows you to create an override file for your docker-compose configuration, and it will simulate the ECS endpoints and IAM roles you would be using in AWS.
This is done using the local AWS credentials you have on your workstation. More information is available on the AWS Blog.
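A rough sketch of what such an override file for local runs can look like, adapted from the AWS blog example (the image name, the 169.254.170.x addresses, and the /creds credential URI follow that example; the service name matches the compose file above, but verify the details against the current docs):
version: "3"
services:
  # Sidecar that vends credentials from your local ~/.aws profile to the other containers
  ecs-local-endpoints:
    image: amazon/amazon-ecs-local-container-endpoints
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - $HOME/.aws/:/home/.aws/
    environment:
      HOME: "/home"            # so the endpoint container finds /home/.aws
      AWS_PROFILE: "default"   # local profile to vend credentials from
    networks:
      credentials_network:
        ipv4_address: "169.254.170.2"
  scheduler:
    environment:
      AWS_DEFAULT_REGION: "us-east-1"
      AWS_CONTAINER_CREDENTIALS_RELATIVE_URI: "/creds"
    networks:
      credentials_network:
        ipv4_address: "169.254.170.3"
networks:
  credentials_network:
    driver: bridge
    ipam:
      config:
        - subnet: "169.254.170.0/24"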

AWS Elastic Beanstalk - Running powershell command not executed on .ebextensions

I am running these commands to install the WebSocket protocol on my AWS Elastic Beanstalk EC2 server:
commands:
  01_install_websockets:
    command: "powershell.exe Install-WindowsFeature -name Web-WebSockets"
    ignoreErrors: false
  02_install_iis_websockets_feature:
    command: "powershell.exe Enable-WindowsOptionalFeature -Online -FeatureName IIS-WebSockets"
    ignoreErrors: false
The commands above were not executed on my server; running those scripts on the EC2 instance manually always works, but doing this via .ebextensions does not.
Here's the structure of my code.
And when this is published, the .ebextensions folder is added at the root of the zip file:
zip
- .ebextensions
- other files...
Please let me know what's missing here. I don't do any special configuration on the AWS EB side.
I am able to run it with the following steps:
I am using container_commands, but I am confident it will work with commands as well.
---
container_commands:
  "2":
    command: powershell.exe -ExecutionPolicy Bypass -Command ".\\.ebextensions\\IISScripts.ps1"
The zip file looks like below:
zip
- .ebextensions
  - 01-windows.config
  - IISScripts.ps1
- other files...
EC2 instance configuration: Windows Server 2019
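For reference, a minimal sketch of what the IISScripts.ps1 called above could contain, reusing the two cmdlets from the question (the script name and exact contents are assumptions):
# IISScripts.ps1 - minimal sketch; cmdlets taken from the question above
Install-WindowsFeature -Name Web-WebSockets
Enable-WindowsOptionalFeature -Online -FeatureName IIS-WebSockets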

Gitlab CI failing cannot find aws command

I am trying to set up a pipeline that builds my react application and deploys it to my AWS S3 bucket. It is building fine, but fails on the deploy.
My .gitlab-ci.yml is:
image: node:latest
variables:
  AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
  AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
  S3_BUCKET_NAME: $S3_BUCKET_NAME
stages:
  - build
  - deploy
build:
  stage: build
  script:
    - npm install --progress=false
    - npm run build
deploy:
  stage: deploy
  script:
    - aws s3 cp --recursive ./build s3://MYBUCKETNAME
It is failing with the error:
sh: 1: aws: not found
@jellycsc is spot on.
Otherwise, if you want to just use the node image, you can try something like Thomas Lackemann details (here), which is to use a shell script to install python, the aws cli, and zip, and use those tools to do the deployment. You'll need AWS credentials stored as environment variables in your GitLab project.
I've successfully used both approaches.
The error is telling you that the AWS CLI is not installed in the CI environment. You probably need to use GitLab's AWS Docker image. Please read the Cloud deployment documentation.
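For example, a minimal deploy job that installs the AWS CLI itself rather than relying on the node image (a sketch; the image tag is just an example, and it assumes the build job publishes build/ as an artifact):
deploy:
  stage: deploy
  image: python:3.11            # any image with pip will do
  script:
    - pip install awscli        # install the AWS CLI inside the job container
    - aws s3 cp --recursive ./build s3://$S3_BUCKET_NAME
  # Note: the build job needs "artifacts: paths: [build/]" so ./build is available here.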

How to run aws configure in a travis deploy script?

I am trying to get travis-ci to run a custom deploy script that uses awscli to push a deployment up to my staging server.
In my .travis.yml file I have this:
before_deploy:
- 'curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"'
- 'unzip awscli-bundle.zip'
- './awscli-bundle/install -b ~/bin/aws'
- 'export PATH=~/bin:$PATH'
- 'aws configure'
And I have set up the following environment variables:
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
AWS_DEFAULT_REGION
with their correct values in the travis-ci web interface.
However, when aws configure runs, it stops and waits for user input. How can I tell it to use the environment variables I have defined?
Darbio's solution works fine, but it doesn't take into account that you may end up pushing your AWS credentials to your repository.
That is a bad thing, especially if Docker is trying to pull a private image from one of your ECR repositories: it would mean you probably had to store your AWS production credentials in the .travis.yml file, and that is far from ideal.
Fortunately, Travis gives you the possibility to encrypt environment variables, notification settings, and deploy API keys.
gem install travis
Do a travis login first of all; it will ask you for your GitHub credentials. Once you're logged in, go to your project root folder (where your .travis.yml file is) and encrypt your access key ID and secret access key.
travis encrypt AWS_ACCESS_KEY_ID="HERE_PUT_YOUR_ACCESS_KEY_ID" --add
travis encrypt AWS_SECRET_ACCESS_KEY="HERE_PUT_YOUR_SECRET_ACCESS_KEY" --add
Thanks to the --add option you'll end up with two new (encrypted) environment variables in your configuration file. Now just open your .travis.yml file and you should see something like this:
env:
  global:
    - secure: encrypted_stuff
    - secure: encrypted_stuff
Now you can make travis run a shell script that creates the ~/.aws/credentials file for you.
ecr_credentials.sh
#!/usr/bin/env bash
mkdir -p ~/.aws
cat > ~/.aws/credentials << EOL
[default]
aws_access_key_id = ${AWS_ACCESS_KEY_ID}
aws_secret_access_key = ${AWS_SECRET_ACCESS_KEY}
EOL
Then you just need to run the ecr_credentials.sh script from your .travis.yml file:
before_install:
  - ./ecr_credentials.sh
Done! :-D
Source: Encryption keys on Travis CI
You can set these in a couple of ways.
Firstly, by creating a file at ~/.aws/config (or ~/.aws/credentials).
For example:
[default]
aws_access_key_id=foo
aws_secret_access_key=bar
region=us-west-2
Secondly, you can add environment variables for each of your settings.
For example, create the following environment variables:
AWS_DEFAULT_REGION
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
Thirdly, you can pass region in as a command line argument. For example:
aws eb deploy --region us-west-2
You won't need to run aws configure in these cases, as the CLI is already configured.
There is further AWS documentation on this page.
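If you do want the settings written to disk rather than only picked up from the environment, aws configure set is the non-interactive form; a small sketch using the environment variables from the question:
# Non-interactive alternative to `aws configure`
aws configure set aws_access_key_id "$AWS_ACCESS_KEY_ID"
aws configure set aws_secret_access_key "$AWS_SECRET_ACCESS_KEY"
aws configure set default.region "$AWS_DEFAULT_REGION"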
Following the advice from @Darbio, I came up with this solution:
- stage: deploy
  name: "Deploy to AWS EKS"
  language: minimal
  before_install:
    # Install kubectl
    - curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
    - chmod +x ./kubectl
    - sudo mv ./kubectl /usr/local/bin/kubectl
    # Install AWS CLI
    - if ! [ -x "$(command -v aws)" ]; then curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" ; unzip awscliv2.zip ; sudo ./aws/install ; fi
    # Export environment variables for AWS CLI (using Travis environment variables)
    - export AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
    - export AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
    - export AWS_DEFAULT_REGION=${AWS_DEFAULT_REGION}
    # Set up kubectl config to use the desired AWS EKS cluster
    - aws eks update-kubeconfig --region ${AWS_DEFAULT_REGION} --name ${AWS_EKS_CLUSTER_NAME}
  deploy:
    - provider: script
      # Bash script containing the kubectl commands to set up the cluster
      script: bash k8s-config/deployment.sh
      on:
        branch: master
It is also possible to avoid installing the AWS CLI altogether; then you need to configure kubectl yourself:
kubectl config set-cluster --server= --certificate-authority=
kubectl config set-credentials --client-certificate= --client-key=
kubectl config set-context myContext --cluster= --namespace= --user=
kubectl config use-context myContext
You can find most of the needed values in your user's home directory in ~/.kube/config, after you have run the aws eks update-kubeconfig command on your local machine.
The exceptions are the client certificate and key; I couldn't figure out where to get those, and therefore needed to install the AWS CLI in the pipeline as well.

AWS Elastic Beanstalk deploy to multiple applications

I'll try to describe my situation:
I have multiple AWS accounts; the credentials are located under ~/.aws/credentials.
To switch to another account I type:
eb init -i --profile name
Now, to deploy code to the accounts, I must switch to another account every time. How can I organize .ebextensions so I can deploy to 10 AWS accounts without switching between profiles?
You don't need to do eb init every time. You can deploy with arguments: eb deploy --profile profile_name.
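For example, a sketch of deploying the same application to several accounts in one go (the profile and environment names here are hypothetical):
# One deploy per AWS account, each using its own named profile from ~/.aws/credentials
for profile in account1 account2 account3; do
  eb deploy my-env --profile "$profile"
done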
If you set up your .elasticbeanstalk/config file something like this, you can have different profiles and branches for different environments without using arguments:
branch-defaults:
  develop:
    environment: env-develop
    profile: eb-profile
  master:
    environment: env-master
    profile: eb-profile2
global:
  application_name: env_name
  default_ec2_keyname: key_name
  default_platform: Python 2.7
  default_region: ap-southeast-1
  sc: git
I haven't tried this, but if you call eb deploy environment_name --profile eb-profile3, where that profile is linked to somewhere else, it should deploy there with your branch and global settings (profile overridden).
eb deploy <environment name> overrides the environment name.
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/eb3-deploy.html
I have only read this briefly, but maybe this can help you as well.
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/ebcli-compose.html