AWS static ip address - amazon-web-services

I am using the AWS CodeDeploy agent and deploying my project to the server through the Bitbucket plugin.
The CodeDeploy agent first executes the script files, which contain the commands to run my Spring Boot project.
I have two environments, one for development and one for production, i.e. two different instances, and I want the script to behave differently depending on the environment.
My plan is to fetch the AWS static IP address (Elastic IP) mapped to the instance and determine the environment
(production or stage) from that.
How do I fetch the Elastic IP address through sh commands?

A static IP will work.
A more natural CodeDeploy way to solve this is to set up two CodeDeploy deployment groups, one for your development environment and the other for your production environment. Then, in your script, you can use the environment variables that CodeDeploy sets during the deployment to understand which environment you are deploying to.
Here is a blog post about how to use CodeDeploy environment variables: https://aws.amazon.com/blogs/devops/using-codedeploy-environment-variables/
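For illustration, a minimal sketch of such a hook script, assuming deployment groups named "development" and "production" (the jar path and Spring profile names are placeholders):
#!/bin/bash
# DEPLOYMENT_GROUP_NAME is one of the variables CodeDeploy exports to lifecycle hook scripts
if [ "$DEPLOYMENT_GROUP_NAME" = "production" ]; then
    nohup java -jar /opt/myapp/app.jar --spring.profiles.active=production > /var/log/myapp.log 2>&1 &
else
    nohup java -jar /opt/myapp/app.jar --spring.profiles.active=development > /var/log/myapp.log 2>&1 &
fi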

You could do the following:
id=$( curl -s http://169.254.169.254/latest/meta-data/instance-id )
eip=$( aws ec2 describe-addresses --filters "Name=instance-id,Values=${id}" | jq --raw-output '.Addresses[].PublicIp' )
The above gets the instance ID from the instance metadata, then uses the AWS CLI to look up Elastic IPs filtered by that ID. Using jq, the output is parsed down to the IP you are looking for.

Query the metadata server
eip=`curl -s 169.254.169.254/latest/meta-data/public-ipv4`
echo $eip
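If you do want to branch on the address itself, the check could look something like this (the addresses are placeholders for your actual Elastic IPs):
eip=$(curl -s http://169.254.169.254/latest/meta-data/public-ipv4)
if [ "$eip" = "203.0.113.10" ]; then   # assumed production Elastic IP
    profile=production
else
    profile=stage
fi
echo "Running with profile: $profile"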

The solution is completely off on a tangent from what I originally asked, but it was enough for my requirement.
I just needed to know which environment I am in to take certain actions. So what I did was set an environment variable in an independent script file, where the variable's value identifies the environment.
For example, in a file env-variables.sh:
export profile=stage
In the script file where the commands have to be executed based on the environment, I access it this way:
source /test/env-variables.sh
echo "current profile is $profile"
if [ "$profile" = "stage" ]
then
    echo stage
elif [ "$profile" = "production" ]
then
    echo production
else
    echo failure
fi
Hope someone finds it useful.

Related

Passing environment variables to `docker-compose` when using ecs context

I am trying to use the new ecs context that the latest Docker CLI exposes. When I create the context using docker context create ecs my_context, I select the use environment variables method (the recommended way), but no matter how I try to call docker compose, it always returns the message context requires credentials to be passed as environment variables. Searching for this message in Google returns nothing :/
I have tried passing them using the -e and --environment flags, and I have tried with a .env file and the --env-file flag.
docker --context my_ecs compose -f docker-compose.yml -f docker-compose.dev.yml -e AWS_ACCESS_KEY_ID=XXXXXXX -e AWS_SECRET_ACCESS_KEY=XXXXXXXX -e AWS_DEFAULT_REGION=XXXXXXXX up
If I don't use the environment variable method, it hits another known (frustrating) bug whereby it uses the wrong AWS region, even though the context is set up with the correct region. I feel the environment variable option would resolve that, if I could get it to see the environment variables.
I don't think it works to pass the variables during the docker compose up command. The way it works is that you export the access key ID, secret access key and region variables in your shell and then create the Docker context by pointing to the variables, as shown here.
You can do that interactively by launching docker context create ecs myecscontext and then picking AWS environment variables, or you can just run docker context create ecs myecscontext --from-env.
And yes, the region bug is annoying, if this is what you are referring to.
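For reference, a rough sketch of that flow (the key values, region and context name are placeholders):
export AWS_ACCESS_KEY_ID=AKIAxxxxxxxxxxxxxxxx       # placeholder
export AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxx   # placeholder
export AWS_DEFAULT_REGION=eu-west-1                 # placeholder
docker context create ecs myecscontext --from-env
docker context use myecscontext
docker compose up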
A Google search led me to this post (I am not using Windows but the AWS Cloud9 IDE).
I could get rid of the message context requires credentials to be passed as environment variables by passing dummy values for the environment variables; it goes like this:
$ docker context create ecs myecscontext
? Create a Docker context using: AWS environment variables
Successfully created ecs context "myecscontext"
$ docker context use myecscontext
myecscontext
$ docker context ls
context requires credentials to be passed as environment variables
$ AWS_SECRET_ACCESS_KEY=dummy AWS_ACCESS_KEY_ID=dummy AWS_DEFAULT_REGION=dummy docker context ls
NAME             TYPE   DESCRIPTION                               DOCKER ENDPOINT               KUBERNETES ENDPOINT   ORCHESTRATOR
default          moby   Current DOCKER_HOST based configuration   unix:///var/run/docker.sock                         swarm
myecscontext *   ecs    credentials read from environment
$ AWS_SECRET_ACCESS_KEY=dummy AWS_ACCESS_KEY_ID=dummy AWS_DEFAULT_REGION=dummy docker context use default
default
$ docker context ls
NAME             TYPE   DESCRIPTION                               DOCKER ENDPOINT               KUBERNETES ENDPOINT   ORCHESTRATOR
default *        moby   Current DOCKER_HOST based configuration   unix:///var/run/docker.sock                         swarm
myecscontext     ecs    credentials read from environment
$ docker context rm myecscontext
myecscontext
Modify and export the environment variables on Linux or macOS as described in Environment variables to configure the AWS CLI:
export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
export AWS_DEFAULT_REGION=us-west-2
Later, you can see the options with this command:
docker context create ecs --help
Description: Create a context for Amazon ECS
Usage:
  docker context create ecs CONTEXT [flags]
Flags:
      --access-keys string    Use AWS access keys from file
      --description string    Description of the context
      --from-env              Use AWS environment variables for profile, or credentials and region
  -h, --help                  Help for ecs
      --local-simulation      Create context for ECS local simulation endpoints
      --profile string        Use an existing AWS profile
Answer: docker context create ecs CONTEXT --from-env
I had to re-source my terminal with source ~/.bash_profile because I was using .bashrc as my source file and it did not have my environment variables in it.
I've handled this situation by setting environment variables. You can use this guide for this.

AWS ElasticBeanstalk - Auto scaled instance can't see environment variables

Using Elastic Beanstalk autoscaling for an application, I recently added a new SQS queue as an environment variable.
I can see the variable using eb printenv *my-env* | grep "SQS":
SQS_QUEUE = https://sqs.xxxxxx.amazonaws.com/xxxxxxx/xxxxxxx
When I SSH to a newly created instance I can't see that config with sudo /opt/elasticbeanstalk/bin/get-config environment --output YAML | grep "SQS".
If I make a code change and deploy, then the instance gets the config, but this isn't a practical solution when the scaling goes up / down multiple times a day.
I'm using a custom AMI, but I can't find any documentation from AWS around environment variables; adding new ones should just work..!?
What am I missing?
Please double-check; I think the --output YAML form is incorrect. The queue should be accessible through:
/opt/elasticbeanstalk/bin/get-config environment -key SQS_QUEUE
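If the value does show up there, a deploy or startup script on the instance could read it along these lines (a sketch only; the key name mirrors the question and the flag spelling follows the command above):
SQS_QUEUE=$(sudo /opt/elasticbeanstalk/bin/get-config environment -key SQS_QUEUE)
echo "Queue URL: $SQS_QUEUE"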

Ansible Dynamic inventory with static group with dynamic children

I am sure many who work with Terraform and Ansible or just Ansible on a daily basis must have come across this question.
Some background:
I create my infrastructure on AWS using Terraform and configure my machines using Ansible. My inventory file contains hardcoded public IP addresses with some variables. As the business demands, I create and destroy my machines very often.
My question:
I don't want to update my inventory file with new public IP addresses every time I destroy and recreate my instances. So my fundamental requirement is: every time I destroy my machines, I should be able to run my Terraform script to recreate them, and when I run my Ansible playbook, Ansible should pick up the right target machines and run the playbook. I need to know what to describe in my inventory file to achieve this automation. A domain name (www.fooexample.com) or static public IP addresses in the inventory file are not an option in my case. I have seen scripts that do it with what looks like a hostname (webserver1).
There are forums that talk about using the ec2.py option, but ec2.py gets all the public IP addresses associated with the account, and as you can imagine I only want to target some of the machines with my playbook, not all of them.
Any help regarding this would be appreciated.
Thanks in Advance
I do something similar in GCP but the concept should apply to AWS.
Starting with Ansible 2.7 there is a new inventory plugin architecture and some inventory plugins to replace the dynamic inventory scripts (such as ec2.py and gcp.py). The AWS plugin documentation is at https://docs.ansible.com/ansible/2.9/plugins/inventory/aws_ec2.html.
First, you need to tag the groups of hosts you want to target in AWS. You should be able to handle this with Terraform (such as Service = Web).
Next, enable the aws_ec2 plugin in ansible.cfg by adding:
[inventory]
enable_plugins = aws_ec2
Now, convert over to using the new plugin instead of ec2.py. This means creating an aws_ec2.yaml file based on the documentation. An example might look like:
plugin: aws_ec2
regions:
  - us-east-1
keyed_groups:
  - prefix: tag
    key: tags
# Set individual variables with compose
compose:
  ansible_host: public_ip_address
The key parts here are the keyed_groups and compose section. This will give you the public IP addresses as the host to connect to in inventory and groups you can limit to with -l or --limit.
Considering you had some instances in us-east-1 tagged with Service = Web you could target them like:
ansible -i aws_ec2.yaml -m ping -l tag_Service_Web
This would target just those tagged hosts on their public IP address. Any dynamic scaling you do (such as increasing the count in Terraform for that resource) will be picked up by the inventory plugin on next run.
You can also use the tag in playbooks. If you had a playbook that you always targeted at these hosts you can set hosts: tag_Service_Web in the playbook.
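For example, a full playbook run limited to those hosts might look like this (the playbook name is just a placeholder):
ansible-playbook -i aws_ec2.yaml --limit tag_Service_Web site.yml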
Bonus:
I've been experimenting with an Ansible Pull model that automates some of this bootstrapping. The idea is to combine cloud-init with a special script to bootstrap the playbook for that host automatically.
Example script that cloud-init kicks off:
#!/bin/bash
set -euo pipefail
lock_files=(
/var/lib/dpkg/lock
/var/lib/apt/lists/lock
/var/lib/dpkg/lock-frontend
/var/cache/apt/archives/lock
/var/lib/apt/daily_lock
)
export ANSIBLE_HOST_PATTERN_MISMATCH="ignore"
export PATH="/tmp/ansible-venv/bin:$PATH"
for file in "${lock_files[@]}"; do
while fuser "$file" >/dev/null 2>&1; do
echo "Waiting for lock $file to be available..."
sleep 5
done
done
apt-get update -qy
apt-get install --no-install-recommends -qy virtualenv python-virtualenv python-nacl python-wheel python-bcrypt
virtualenv -p /usr/bin/python --system-site-packages /tmp/ansible-venv
pip install ansible==2.7.10 apache-libcloud==2.3.0 jmespath==0.9.3
ansible-pull myplaybook.yaml \
-U git@github.com:myorg/infrastructure.git \
-i gcp_compute.yaml \
--private-key /tmp/ansible-keys/infrastructure_ssh_deploy_key \
--vault-password-file /tmp/ansible-keys/vault \
-d /tmp/ansible-infrastructure \
--accept-host-key
This script is a bit simplified from my actual one (leaving out some domain specific authentication and key providing stuff). But you can adapt it to AWS by doing something like bootstrapping keys from S3 or KMS or another boot time configuration service. I find that ansible-pull works well when the playbook only takes a minute or two to run and doesn't have any dependencies on external inventory (like references to other groups such as to gather IP addresses).
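On AWS, that key bootstrapping could be a simple pre-step before ansible-pull, for example (bucket name and paths are hypothetical):
aws s3 cp s3://my-infra-bucket/ansible-keys/ /tmp/ansible-keys/ --recursive
chmod 600 /tmp/ansible-keys/*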

AWS - can we set Environment variable to read inside Docker?

Can we set an environment variable against an EC2 instance from the AWS console?
Or do we need to load any other service to achieve this?
Also, can we read this variable inside Docker?
That appears to be out of scope for the AWS Console. EC2 instances, however, are basically AWS-hosted VMs, so anything you could normally do on that OS can be done.
Just connect to the machine and do what you would normally do (see the AWS CLI Guide for further help).
Using the AWS Console:
Stop the EC2 instance
For that EC2 instance, from the top dropdown select Actions > Instance Settings > View/Change User Data
Create a script that adds the required environment variables to the init scripts, for example:
#!/bin/sh
echo 'export MY_ENV_VARIABLE=value' >> ~/.bashrc
or add a script to /etc/init.d or another similar location.
Start the EC2 instance for the environment variables to be available.
If you want to pass the EC2 instance environment variable to the Docker container environment variable:
When you start a Docker container through "docker run" you can pass the value with the --env param on the command line.
eg: docker run -e DOCKER_ENV_VARIABLE=$MY_ENV_VARIABLE ...
https://docs.docker.com/engine/reference/commandline/run/#options
You can use user_data to load the value and afterwards pass it to your Docker container using docker run -e CONTAINER_ENV_VAR=$EC2_ENV_VAR, or put it in your Dockerfile.
For keys or any sensitive data I would advise you to use Parameter Store; you can put passwords, users, or whatever data you want there and use a tool called chamber to read it.
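If you go the Parameter Store route, the value can also be pulled at container start time with the AWS CLI, roughly like this (the parameter name and image are hypothetical):
DB_PASSWORD=$(aws ssm get-parameter --name /myapp/db_password --with-decryption --query Parameter.Value --output text)
docker run -e DB_PASSWORD="$DB_PASSWORD" myimage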

How to integrate Atlassian Bamboo with AWS Elastic Beanstalk

I want to integrate Atlassian Bamboo with AWS Elastic Beanstalk. Is there anyway to do this?
It depends a bit on your Bamboo and beanstalk config as well as the type of application you are planning to deploy on AWS Beanstalk.
We did some things for Java Web Apps:
Since Bamboo understands maven, you can have a look at the following maven plugin:
http://beanstalker.ingenieux.com.br/beanstalk-maven-plugin/configurations-and-templates.html
We are using it for some environments to create wars and upload them to elastic beanstalk. You can then create a maven task in bamboo to call the plugin.
If you downloaded and installed Bamboo on a machine you own yourself you could use the Elastic Beanstalk command line interface (CLI).
This is probably the most powerful approach, but you need to install the CLI on the bamboo instance. Then you can do almost anything. This approach should also work for other environments besides Java/Tomcat.
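With the current EB CLI, the Bamboo script task could boil down to something like this (assuming eb init has already been run in the checkout; the environment name and label are illustrative):
eb deploy my-production-env --label "build-${bamboo_buildNumber}"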
Another idea:
If you use Beanstalk using git (i.e. you deploy by making a code change and pushing to Beanstalk), then you can also use the new "Deployment Project" Feature in Bamboo to push the code once it passes all tests.
David's answer provides good options for cross product usage of AWS Elastic Beanstalk (+1). Nowadays I'd recommend the excellent unified AWS Command Line Interface over the now legacy AWS Elastic Beanstalk API Command Line Interface, see the resp. AWS CLI commands for elasticbeanstalk.
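With the unified AWS CLI, the same kind of deployment can be scripted roughly like this (bucket, application and environment names are placeholders):
aws s3 cp target/my-app.war s3://my-deploy-bucket/my-app-42.war
aws elasticbeanstalk create-application-version \
  --application-name my-app \
  --version-label build-42 \
  --source-bundle S3Bucket=my-deploy-bucket,S3Key=my-app-42.war
aws elasticbeanstalk update-environment \
  --environment-name my-app-prod \
  --version-label build-42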
If you are looking for a Bamboo specific solution, you might be interested in Utoolity's Tasks for AWS (Bamboo) add-on (commercial, see disclaimer), which provides three dedicated tasks, specifically:
AWS Elastic Beanstalk Application - create, update or delete AWS Elastic Beanstalk applications.
AWS Elastic Beanstalk Application Version - create, update or delete AWS Elastic Beanstalk application versions.
AWS Elastic Beanstalk Environment - create, update, rebuild, restart, swap or terminate AWS Elastic Beanstalk environments and specify configuration settings and advanced options.
Disclaimer: I'm the co-founder of this add-on's vendor, Utoolity.
In case you're interested in C# deployments:
What we do is simply start the awsdeploy tool (it should already be installed on the build server) with a link to the configuration script. I create the environment in Visual Studio, and when I redeploy the application once, I save the script. Once the script is on the build server, I reference it in the deployment configuration with awsdeploy /r c:\location\of\myscript.txt.
The package itself that is referenced in the AWS deployment configuration script is created at build time with the MSBuild /target:package command and defined as an artifact (the default location of the ZIP package is c:\build-dir\...\project\obj\debug\package, but this can be overridden).
Everything works pretty well so far, although I am having problems starting an elastic instance when none is available (e.g. for nightly builds).
Take a look at our repo: https://github.com/matzegebbe/docker-aws-login
With that snippet you are able to log in to AWS and push images.
Simple Bamboo task script (of course you need Docker installed on the agents):
#!/bin/bash
docker images hellmann/awscli | grep -q awscli
[ "$?" -eq "0" ] && exit 0
cat <<'EOF' >> Dockerfile
FROM python
MAINTAINER Mathias Gebbe <mathias.gebbe@hellmann.net>
RUN pip install awscli --ignore-installed six
ENV aws_access_key_id AWS_ACCESS_KEY
ENV aws_secret_access_key AWS_SECRET_ACCESS_KEY
RUN mkdir /root/.aws/
RUN printf "[default]\nregion = eu-west-1\n" > /root/.aws/config
RUN printf "[default]\naws_access_key_id = ${aws_access_key_id}\naws_secret_access_key = ${aws_secret_access_key}\n" > /root/.aws/credentials
ENTRYPOINT ["/bin/bash","-c"]
CMD ["aws ecr get-login"]
EOF
docker build -t hellmann/awscli .
$(docker run --rm hellmann/awscli)