Configuring multiple AWS Accounts for Ansible ec2 dynamic inventory

We have five AWS accounts, and an IAM user for programmatic access is created in the organizational account. Each of the child accounts has an IAM role with the same name, and a trust relationship is set up between the user and the roles in these accounts. How do I switch between accounts for the ec2 dynamic inventory configuration?
Config File - ec2.ini
iam_role = arn:aws:iam::xxxx-xxxx-xxxx:role/RoleName
I have multiple ec2.ini files in different directories.
../env/
├── account-1
│   ├── ec2.ini
│   └── ec2.py
├── account-2
│   ├── ec2.ini
│   └── ec2.py
├── account-3
│   ├── ec2.ini
│   └── ec2.py
└── account-4
    ├── ec2.ini
    └── ec2.py
Ansible Command
ansible-playbook -i ../env/account-x/ec2.py playbook.yml
Is there a process to switch between accounts? My AWS credentials are stored in the shared credentials file.

You can try setting the profile name before the command:
AWS_PROFILE=account-a ansible-playbook -i ../env/account-x/ec2.py playbook.yml
If role assumption doesn't work, you may need to put together a small script that generates temporary credentials and sets them as environment variables before calling Ansible.
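For example, a minimal sketch of such a wrapper script, assuming the AWS CLI is already authenticated as the organizational-account IAM user and reusing the role ARN from ec2.ini (the account argument, ARN and session name are placeholders):
#!/usr/bin/env bash
# Sketch: assume the cross-account role, export the temporary credentials,
# then run the playbook against that account's inventory script.
set -euo pipefail

ACCOUNT="$1"                                          # e.g. account-2
ROLE_ARN="arn:aws:iam::xxxx-xxxx-xxxx:role/RoleName"  # same ARN as in ec2.ini

# Request temporary credentials for the child-account role from STS.
read -r AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN <<< "$(
  aws sts assume-role \
    --role-arn "$ROLE_ARN" \
    --role-session-name "ansible-${ACCOUNT}" \
    --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' \
    --output text
)"
export AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN

# Both ec2.py and any AWS modules in the playbook now use the assumed role.
ansible-playbook -i "../env/${ACCOUNT}/ec2.py" playbook.yml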

Related

Not able to deploy code to EC2 instance using AWS CodeDeploy

I'm trying to deploy my code to an EC2 instance using AWS CodeDeploy and GitHub, but the log on my EC2 instance at:
/var/log/aws/codedeploy-agent/codedeploy-agent.log
says:
The CodeDeploy agent did not find an AppSpec file within the unpacked revision directory at revision-relative path "appspec.yml". The revision was unpacked to directory "/opt/codedeploy-agent/deployment-root/a018a999-b075-4d7a-a010-d08bd93ecbfb/d-3B1NFWVEL/deployment-archive", and the AppSpec file was expected but not found at path "/opt/codedeploy-agent/deployment-root/a018a999-b075-4d7a-a010-d08bd93ecbfb/d-3B1NFWVEL/deployment-archive/appspec.yml"
My flow goes like this:
code on local machine -> push to GitHub -> AWS CodeDeploy -> AWS EC2 instance
Here's my appspec.yml file:
version: 0.0
os: linux
files:
  - source: "**/*"
    destination: /var/www/html/office_new
hooks:
  ApplicationStop:
    - location: scripts/stop_server.sh
      timeout: 300
      runas: ubuntu
  ApplicationStart:
    - location: scripts/start_server.sh
      timeout: 300
      runas: ubuntu
Here's my folder structure:
<my-root-folder>/
├── folder_1/
│   ├── file_1
│   ├── file_2
│   └── file_3
├── folder_2/
│   ├── file_1
│   ├── file_2
│   └── file_3
├── appspec.yml
└── otherfiles
Any help will be greatly appreciated, thanks.

Guest Software not Installed by an OSConfig Guest Policy onto an Eligible Google Compute Engine VM

A Google Compute Engine (GCE) instance ($GCE_INSTANCE_NAME) was just created within a Google Cloud Platform (GCP) project ($GCP_PROJECT_ID). There is an OSConfig guest policy ($GUEST_POLICY_NAME) that is supposed to install guest software packages onto $GCE_INSTANCE_NAME; however, when the Cloud SDK (gcloud) is used to look up the guest policies applied to $GCE_INSTANCE_NAME:
gcloud beta compute os-config guest-policies lookup \
$GCE_INSTANCE_NAME \
--zone=$GCE_INSTANCE_ZONE
#=>
No effective guest policy found for [projects/$GCP_PROJECT_NAME/zones/$GCE_INSTANCE_ZONE/instances/$GCE_INSTANCE_NAME].
$GUEST_POLICY_NAME is not listed.
When the lookup command is used for another GCE instance ($GCE_ANOTHER_INSTANCE) with identical OS version, GCE metadata and GCE labels:
gcloud beta compute os-config guest-policies lookup \
$GCE_ANOTHER_INSTANCE \
--zone=$GCE_ANOTHER_ZONE
#=>
┌──────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│                                             SOFTWARE RECIPES                                              │
├───────────────────────────────────────────────────────────┬────────────────────┬─────────┬───────────────┤
│ SOURCE                                                     │ NAME               │ VERSION │ DESIRED_STATE │
├───────────────────────────────────────────────────────────┼────────────────────┼─────────┼───────────────┤
│ projects/$GCP_PROJECT_ID/guestPolicies/. . .               │ . . .              │ . . .   │ . . .         │
│ projects/$GCP_PROJECT_ID/guestPolicies/$GUEST_POLICY_NAME  │ $GUEST_POLICY_NAME │ 1.0     │ INSTALLED     │
│ projects/$GCP_PROJECT_ID/guestPolicies/. . .               │ . . .              │ . . .   │ . . .         │
└───────────────────────────────────────────────────────────┴────────────────────┴─────────┴───────────────┘
$GUEST_POLICY_NAME is listed.
Why?
There could be a few reasons why $GUEST_POLICY_NAME isn't showing up in the response from the lookup command on $GCE_INSTANCE_NAME:
Latency: it may take some time for OSConfig to propagate $GUEST_POLICY_NAME if $GCE_INSTANCE_NAME was only just created.
While you might have enabled project-wide GCE metadata, as suggested here, it may help to add:
enable-guest-attributes: TRUE
enable-osconfig: TRUE
to $GCE_INSTANCE_NAME with the add-metadata command:
gcloud compute instances add-metadata \
$GCE_INSTANCE_NAME \
--metadata="enable-guest-attributes=TRUE,enable-osconfig=TRUE" \
--zone=$GCE_INSTANCE_ZONE
#=>
Updated [https://www.googleapis.com/compute/v1/projects/$GCP_PROJECT_NAME/zones/$GCE_INSTANCE_ZONE/instances/$GCE_INSTANCE_NAME].
If $GUEST_POLICY_NAME uses a Google Cloud Storage (GCS) bucket to store packages or executables, check whether the GCE default service account ($GCE_SERVICE_ACCOUNT) has at least one role with the storage.objects.get permission (e.g., roles/storage.objectViewer), using the GCS CLI (gsutil):
gsutil iam get "gs://$GCS_BUCKET_NAME"
#=>
{
  "bindings": [
    . . .
    {
      "members": [
        "serviceAccount:$GCE_SERVICE_ACCOUNT"
      ],
      "role": "roles/storage.objectViewer"
    }
    . . .
  ]
}
If $GCE_SERVICE_ACCOUNT does not have a role with the storage.objects.get permission, you can use the ch command of the iam group to grant the roles/storage.objectViewer role:
gsutil iam ch \
"serviceAccount:$GCE_SERVICE_ACCOUNT:roles/storage.objectViewer" \
"gs://GCS_BUCKET_NAME"
Make sure that Private Google Access is turned on for the subnet $GCE_INSTANCE_NAME is running in:
Easily discover which subnet $GCE_INSTANCE_NAME is using with both the --flatten and --format flags for the describe command:
gcloud compute instances describe $GCE_INSTANCE_NAME \
--flatten="networkInterfaces" \
--format="value(networkInterfaces.subnetwork)" \
--zone=$GCE_INSTANCE_ZONE
#=>
https://www.googleapis.com/compute/v1/projects/$GCP_PROJECT_NAME/regions/$GCE_INSTANCE_REGION/subnetworks/$GCE_INSTANCE_SUBNETWORK
Find out whether $GCE_INSTANCE_SUBNETWORK has Private Google Access turned on:
gcloud compute networks subnets describe $GCE_INSTANCE_SUBNETWORK \
--format="value(privateIpGoogleAccess)" \
--region=$GCE_INSTANCE_REGION
#=>
True
If the above is False, enable Private Google Access with the update subcommand of the same subnets subgroup:
gcloud compute networks subnets update $GCE_INSTANCE_SUBNETWORK \
--enable-private-ip-google-access \
--region=$GCE_INSTANCE_REGION
#=>
Updated [https://www.googleapis.com/compute/v1/projects/$GCP_PROJECT_NAME/regions/$GCE_INSTANCE_REGION/subnetworks/$GCE_INSTANCE_SUBNETWORK].
And if all of the above fail, make sure that $GCE_INSTANCE_NAME aligns with all of the criteria from $GUEST_POLICY_NAME:
gcloud beta compute os-config guest-policies describe \
$GUEST_POLICY_NAME \
--format="yaml(assignment)"
#=>
assignment:
  groupLabels:
  - labels: . . .
  instances: . . .
  instanceNamePrefixes: . . .
  osTypes:
  - osArchitecture: . . .
    osShortName: . . .
    osVersion: . . .
  zones: . . .

Checking MD5 of files present in an S3 bucket and loading files not already present

I have a requirement to load data from our on-prem servers to S3 buckets. A Python script will be scheduled to run every morning to load any new files that arrive on the on-prem servers.
However, loaded files are not removed from our on-prem servers, so I need to load only the files that have not already been uploaded to the S3 bucket.
The folder structure on the on-prem servers and in the S3 bucket needs to match exactly, as shown below:
MainFolder/
├── SubFolderOne/
│   ├── File1
│   ├── File2
│   ├── File3
│   └── File4
├── SubFolderTwo/
│   ├── File1
│   └── File2
└── SubFolderThree/
    ├── File1
    ├── File2
    ├── File3
    └── File4
where MainFolder is the folder that needs to be monitored. A folder with the same name exists in our S3 bucket. Everything under MainFolder on the on-prem servers and in the S3 bucket needs to be exactly the same.
I tried using ETag values to compare files, but the ETag and the MD5 hash are not the same for exactly the same file.
Is there any way to implement this requirement?
Not sure if this helps, but I have an it-works-for-me here:
% echo "hello world" > test.txt
% md5sum test.txt
6f5902ac237024bdd0c176cb93063dc4 test.txt
% aws s3 cp test.txt s3://<bucket-name>/test.txt
upload: ./test.txt to s3://<bucket-name>/test.txt
% aws s3api head-object --bucket <bucket-name> --key test.txt --query ETag --output text
"6f5902ac237024bdd0c176cb93063dc4"
Can you give more information about how you check the MD5 on the Python/Bash side and how you query the ETag? Maybe there is a newline added somewhere, or something like that.
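If the uploads are single-part and not SSE-KMS encrypted (only then does the ETag equal the plain MD5; multipart uploads produce a different ETag format), a rough sketch of the daily comparison could look like this (the bucket name is a placeholder; run it from the directory containing MainFolder so the S3 keys mirror the on-prem layout):
#!/usr/bin/env bash
# Sketch: walk MainFolder, compare each file's MD5 with the object's ETag in
# S3, and upload anything that is missing or different.
set -euo pipefail

BUCKET="<bucket-name>"

find MainFolder -type f | while read -r file; do
  local_md5=$(md5sum "$file" | awk '{print $1}')

  # head-object fails if the key doesn't exist yet; the ETag comes back quoted.
  remote_etag=$(aws s3api head-object \
    --bucket "$BUCKET" --key "$file" \
    --query ETag --output text 2>/dev/null | tr -d '"') || remote_etag=""

  if [ "$local_md5" != "$remote_etag" ]; then
    aws s3 cp "$file" "s3://$BUCKET/$file"
  fi
done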

Terraform and Elastic Beanstalk with multiple Environments under the same Application

I'm using Terraform to create an Elastic Beanstalk Application and two associated Environments, and am having some difficulties with the setup. Specifically, I have two Terraform configurations for my two environments, production and staging, and an Elastic Beanstalk module. Something like this:
├── environments
│   ├── production
│   │   ├── main.tf
│   │   └── variables.tf
│   └── staging
│       ├── main.tf
│       └── variables.tf
└── modules
    └── elastic_beanstalk
        ├── main.tf
        └── variables.tf
With Elastic Beanstalk, the convention is Application > Environment > Application Version, so the EB Application would be something like "elastic_beanstalk", and then there would be EB Environments for production and staging.
The problem: I don't know how to handle the EB Application creation with TF, because it needs to be shared between the two TF environments. If I handle the EB Application creation inside of the module called from the staging config, then calling the module from the production config throws errors because it doesn't recognize that the EB Application is already created and should be used. Maybe some sort of global config that handles this and is output so it's available in the module?
Terraform doesn't handle some of the versioned resources that AWS offers particularly well; instead, it's typically easier to create completely decoupled resources that represent those stages. This is particularly true for things like AWS' API Gateway, which has a concept of stages that Terraform doesn't handle well at all.
With Elastic Beanstalk you could choose to ignore the environment function that EB offers and instead just create a separate application and environment for each of your production and staging environments so a very basic module might look something like this:
variable "environment" {}
resource "aws_elastic_beanstalk_application" "application" {
name = "my-application-${var.environment}"
}
resource "aws_elastic_beanstalk_environment" "environment" {
name = "my-application-${var.environment}"
application = "${aws_elastic_beanstalk_application.application.name}"
solution_stack_name = "64bit Amazon Linux 2015.03 v2.0.3 running Go 1.4"
}
You could then call the module in the same way but pass in a different environment name, giving you a completely separate EB application in AWS that happens to be named after the environment.
Alternatively, if you wanted to stick to EB's environment model you could define the application separately and then deploy just the environment at the environment level.
So in this case your layout might look something like:
.
├── application
│   ├── main.tf
│   └── variables.tf
├── environments
│   ├── production
│   │   ├── main.tf
│   │   └── variables.tf
│   └── staging
│       ├── main.tf
│       └── variables.tf
└── modules
    ├── elastic_beanstalk_environment
    │   ├── main.tf
    │   └── variables.tf
    └── elastic_beanstalk_application
        ├── main.tf
        └── variables.tf
And you would have to apply the application directory first, before deploying the environment directories.
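A sketch of that apply order, using the directory names from the tree above:
# Shared EB application first...
(cd application && terraform init && terraform apply)
# ...then each environment, which only manages its aws_elastic_beanstalk_environment.
(cd environments/staging && terraform init && terraform apply)
(cd environments/production && terraform init && terraform apply)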
Without having any experience of Elastic Beanstalk I'd probably lean towards the first model because it simplifies how I would deploy things with Terraform, knowing that if I apply the staging environment and things are fine then applying the production environment is also going to work well. With the second model there's a possibility that someone applies changes to the application after the staging environment has been applied and then you are potentially deploying changes to production that haven't been deployed to staging.
With API Gateway and Lambda, which also support some form of internal versioning, I have found it's generally better to ignore this versioning and create completely distinct resources and use Terraform modules and symlinked configuration to keep things in line properly.

How to add PRE deployment script to AWS Elastic Beanstalk Docker EC2 instance, custom AMI not working

I'm really liking the fact that I'm able to use Docker images pushed into AWS ECR with Elastic Beanstalk. The only thing that has given me a headache is the lack of information on how to add pre-deploy hooks to an Elastic Beanstalk EC2 instance.
If I've understood correctly, .ebextensions scripts run post-deploy, so those don't solve my problem here.
I came up with a solution where I manually added the script that needs to run in the pre-deploy phase to the EB EC2 instance, more specifically into the directory:
/opt/elasticbeanstalk/hooks/appdeploy/pre
Now that script gets executed every time I deploy a new application version to my instance, which is exactly what I want. But if my EB environment launches a new instance (for example, when it scales out behind the load balancer), that instance obviously doesn't contain the manually added script and therefore will not be able to run my application.
I also tried to create an AMI from the running EB EC2 instance containing my custom pre-deploy script, but for some reason Docker is unable to start on a new EB instance based on my custom AMI.
eb-activity.log says:
[2016-08-29T07:38:36.580Z] INFO [3887] - [Initialization/PreInitStage0/PreInitHook/01setup-docker-options.sh] : Activity execution failed, because: Stopping docker: [FAILED]
Starting docker: ..........[FAILED]
(ElasticBeanstalk::ExternalInvocationError)
caused by: Stopping docker: [FAILED]
Starting docker: ..........[FAILED] (Executor::NonZeroExitStatus)
If I SSH into the EB instance and try to restart the Docker service manually, same result:
sudo service docker restart
Stopping docker: [FAILED]
Starting docker: .......... [FAILED]
I would highly appreciate it if someone could tell me the correct way to add a pre-deployment script to an Elastic Beanstalk instance. Thanks in advance :)
UPDATE
I solved the AMI problem by launching a new instance from the same AMI that Elastic Beanstalk uses by default (aws-elasticbeanstalk-amzn-2016.03.3.x86_64-docker-hvm-201608240450). I customized that instance by adding my script into /opt/elasticbeanstalk/hooks/appdeploy/pre, even though the hooks directory structure:
/opt/elasticbeanstalk/hooks/
├── appdeploy
│   ├── enact
│   ├── post
│   └── pre
├── configdeploy
│   ├── enact
│   ├── post
│   └── pre
├── postinit
├── preinit
└── restartappserver
    ├── enact
    ├── post
    └── pre
isn't present on this fresh instance. The directory structure is fully generated when the Docker application is deployed, and scripts added to it in advance will be executed.
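For reference, the baking step on the instance used to create the custom AMI can be as simple as this (a sketch; my_predeploy.sh is a placeholder name):
# Create the pre-deploy hook directory ahead of time and drop the script in;
# hooks are executed in alphabetical order, hence the numeric prefix.
sudo mkdir -p /opt/elasticbeanstalk/hooks/appdeploy/pre
sudo cp my_predeploy.sh /opt/elasticbeanstalk/hooks/appdeploy/pre/01_my_predeploy.sh
sudo chmod +x /opt/elasticbeanstalk/hooks/appdeploy/pre/01_my_predeploy.sh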
Even though that solves my problem, I would like to use a solution similar to what .ebextensions provides. Is there similar support for anything other than post-deploy scripts?