Not able to deploy code to EC2 instance using AWS CodeDeploy - amazon-web-services

I'm trying to deploy my code to an EC2 instance using AWS CodeDeploy and GitHub, but the log on my EC2 instance at
/var/log/aws/codedeploy-agent/codedeploy-agent.log
says:
The CodeDeploy agent did not find an AppSpec file within the unpacked revision directory at revision-relative path "appspec.yml". The revision was unpacked to directory "/opt/codedeploy-agent/deployment-root/a018a999-b075-4d7a-a010-d08bd93ecbfb/d-3B1NFWVEL/deployment-archive", and the AppSpec file was expected but not found at path "/opt/codedeploy-agent/deployment-root/a018a999-b075-4d7a-a010-d08bd93ecbfb/d-3B1NFWVEL/deployment-archive/appspec.yml
My flow goes like this:
code on local machine -> push to github -> aws codedeploy -> aws ec2 instance
Here's my appspec.yml file:
version: 0.0
os: linux
files:
  - source: "**/*"
    destination: /var/www/html/office_new
hooks:
  ApplicationStop:
    - location: scripts/stop_server.sh
      timeout: 300
      runas: ubuntu
  ApplicationStart:
    - location: scripts/start_server.sh
      timeout: 300
      runas: ubuntu
Here's my folder structure:
<my-root-folder>/
├── folder_1/
│   ├── file_1
│   ├── file_2
│   └── file_3
├── folder_2/
│   ├── file_1
│   ├── file_2
│   └── file_3
├── appspec.yml
└── otherfiles
Any help will be greatly appreciated. Thanks.

Related

Checking MD5 of files present in an S3 bucket and loading files not already present

I have a requirement to load data from our on-prem servers to S3 buckets. A Python script will be scheduled to run every morning to load any new files that arrive on the on-prem servers.
However, already-loaded files are not removed from the on-prem servers, so I need to upload only the files that have not yet been loaded to S3.
The folder structure on the on-prem servers and in the S3 bucket needs to match exactly, as shown below:
MainFolder/
├── SubFolderOne/
│   ├── File1
│   ├── File2
│   ├── File3
│   └── File4
├── SubFolderTwo/
│   ├── File1
│   └── File2
└── SubFolderThree/
    ├── File1
    ├── File2
    ├── File3
    └── File4
where MainFolder is the folder that needs to be monitored. A folder with the same name exists in our S3 bucket. Everything under MainFolder on the on-prem servers and in the S3 bucket needs to be exactly the same.
I tried using ETag values to compare files, but the ETag and the MD5 hash are not the same, even for exactly the same file.
Is there any way to implement this requirement?
Not sure if this helps, but I have an it-works-for-me here:
% echo "hello world" > test.txt
% md5sum test.txt
6f5902ac237024bdd0c176cb93063dc4 test.txt
% aws s3 cp test.txt s3://<bucket-name>/test.txt
upload: ./test.txt to s3://<bucket-name>/test.txt
% aws s3api head-object --bucket <bucket-name> --key test.txt --query ETag --output text
"6f5902ac237024bdd0c176cb93063dc4"
Can you give more information about how you compute the MD5 on the Python/bash side and how you query the ETag? Maybe a newline is added somewhere, or something like that.
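For reference, here is a minimal sketch in Python with boto3 of how such a comparison-and-upload pass could look. It assumes plain single-part, non-KMS uploads (where the ETag is the object's MD5), and the bucket name "my-bucket" is a placeholder:

import hashlib
import os

import boto3
from botocore.exceptions import ClientError


def local_md5(path):
    """Hex MD5 digest of a local file, read in chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def needs_upload(s3, bucket, key, path):
    """True if the key is missing or its ETag differs from the local MD5."""
    try:
        etag = s3.head_object(Bucket=bucket, Key=key)["ETag"].strip('"')
    except ClientError:
        return True  # object not found (or HEAD not permitted)
    return etag != local_md5(path)


def sync_folder(root, bucket):
    """Mirror everything under `root` into the bucket, keeping the same key layout."""
    s3 = boto3.client("s3")
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            rel = os.path.relpath(path, root).replace(os.sep, "/")
            key = f"{os.path.basename(root)}/{rel}"
            if needs_upload(s3, bucket, key, path):
                s3.upload_file(path, bucket, key)
                print(f"uploaded {key}")


# "my-bucket" is a placeholder; MainFolder is the monitored folder from the question.
sync_folder("MainFolder", "my-bucket")

Note that for multipart uploads the ETag is not an MD5 of the whole object, so large files uploaded in parts would need a different check (for example, a checksum stored in object metadata).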

Configuring multiple AWS Accounts for Ansible ec2 dynamic inventory

We have five AWS accounts, and an IAM user for programmatic access is created in the organizational account. Each of the child accounts has an IAM role with the same name. A trust relationship is set up between the user and the roles in these accounts. How do I switch between accounts for the EC2 dynamic inventory configuration?
Config File - ec2.ini
iam_role = arn:aws:iam::xxxx-xxxx-xxxx:role/RoleName
I have multiple ec2.ini files in different directories.
../env/
├── account-1
│   ├── ec2.ini
│   └── ec2.py
├── account-2
│   ├── ec2.ini
│   └── ec2.py
├── account-3
│   ├── ec2.ini
│   └── ec2.py
└── account-4
    ├── ec2.ini
    └── ec2.py
Ansible Command
ansible-playbook -i ../env/account-x/ec2.py playbook.yml
Is there a process to switch between accounts? My AWS credentials are stored in the shared credentials file.
You can try passing the profile name before the command:
AWS_PROFILE=account-a ansible-playbook -i ../env/account-x/ec2.py playbook.yml
If role assumption doesn't work, then you may need to put together a small script that generates temporary credentials and sets them as environment variables before calling Ansible, as sketched below.
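A minimal sketch of such a wrapper in Python with boto3, assuming your shared-credentials user can assume the child-account role; the role ARN, session name, and inventory path are placeholders to substitute with your own:

import os
import subprocess

import boto3

# Placeholder role ARN for the target child account.
ROLE_ARN = "arn:aws:iam::111111111111:role/RoleName"

# Assume the role with the default (shared credentials file) identity.
sts = boto3.client("sts")
creds = sts.assume_role(RoleArn=ROLE_ARN, RoleSessionName="ansible-ec2-inventory")["Credentials"]

# Export the temporary credentials so ec2.py and the AWS modules pick them up.
env = os.environ.copy()
env["AWS_ACCESS_KEY_ID"] = creds["AccessKeyId"]
env["AWS_SECRET_ACCESS_KEY"] = creds["SecretAccessKey"]
env["AWS_SESSION_TOKEN"] = creds["SessionToken"]
# Older boto-based scripts such as ec2.py may look for AWS_SECURITY_TOKEN instead.
env["AWS_SECURITY_TOKEN"] = creds["SessionToken"]

# Run the playbook with the temporary credentials in its environment.
subprocess.run(
    ["ansible-playbook", "-i", "../env/account-1/ec2.py", "playbook.yml"],
    env=env,
    check=True,
)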

Terraform and Elastic Beanstalk with multiple Environments under the same Application

I'm using Terraform to create an Elastic Beanstalk Application and two associated Environments, and am having some difficulties with the setup. Specifically, I have two Terraform configurations for my two environments, production and staging, and an Elastic Beanstalk module. Something like this:
├── environments
│   ├── production
│   │   ├── main.tf
│   │   └── variables.tf
│   └── staging
│       ├── main.tf
│       └── variables.tf
└── modules
    └── elastic_beanstalk
        ├── main.tf
        └── variables.tf
With Elastic Beanstalk, the convention is Application > Environment > Application Version, so the EB Application would be something like "elastic_beanstalk", and then there would be EB Environments for production and staging.
The problem: I don't know how to handle the EB Application creation with TF, because it needs to be shared between the two TF environments. If I handle the EB Application creation inside of the module called from the staging config, then calling the module from the production config throws errors because it doesn't recognize that the EB Application is already created and should be used. Maybe some sort of global config that handles this and is output so it's available in the module?
Terraform doesn't handle some of the versioned resources that AWS offers particularly well; instead, it's typically easier to create completely decoupled resources that represent those stages. This is particularly true for things like AWS's API Gateway, which has a concept of stages that Terraform doesn't handle well at all.
With Elastic Beanstalk, you could choose to ignore the environment feature that EB offers and instead just create a separate application and environment for each of your production and staging environments, so a very basic module might look something like this:
variable "environment" {}
resource "aws_elastic_beanstalk_application" "application" {
name = "my-application-${var.environment}"
}
resource "aws_elastic_beanstalk_environment" "environment" {
name = "my-application-${var.environment}"
application = "${aws_elastic_beanstalk_application.application.name}"
solution_stack_name = "64bit Amazon Linux 2015.03 v2.0.3 running Go 1.4"
}
You could then call the module in the same way from both configurations, passing in a different environment name, to get a completely separate EB application in AWS that happens to be named after the environment.
Alternatively, if you wanted to stick to EB's environment model you could define the application separately and then deploy just the environment at the environment level.
So in this case your layout might look something like:
.
├── application
│   ├── main.tf
│   └── variables.tf
├── environments
│   ├── production
│   │   ├── main.tf
│   │   └── variables.tf
│   └── staging
│       ├── main.tf
│       └── variables.tf
└── modules
    ├── elastic_beanstalk_environment
    │   ├── main.tf
    │   └── variables.tf
    └── elastic_beanstalk_application
        ├── main.tf
        └── variables.tf
And you would have to apply the application directory before deploying the environment directories.
Without having any experience of Elastic Beanstalk I'd probably lean towards the first model because it simplifies how I would deploy things with Terraform, knowing that if I apply the staging environment and things are fine then applying the production environment is also going to work well. With the second model there's a possibility that someone applies changes to the application after the staging environment has been applied and then you are potentially deploying changes to production that haven't been deployed to staging.
With API Gateway and Lambda, which also support some form of internal versioning, I have found it's generally better to ignore this versioning and create completely distinct resources and use Terraform modules and symlinked configuration to keep things in line properly.

Elastic Beanstalk Nginx configuration for Golang app

I am following the standard Elastic Beanstalk config structure to override the NGINX configuration, as below:
.ebextensions/nginx/conf.d/rate-limit.conf
.ebextensions/nginx/conf.d/elasticbeanstalk/error429.conf
I am not able to see the changes after deployment. Even when I SSH to the respective EC2 instance, I don't see the expected files in
/etc/nginx/ or /etc/nginx/conf.d/elasticbeanstalk/
Folder structure:
.ebextensions
.elasticbeanstalk
main.go
...etc

How to add PRE deployment script to AWS Elastic Beanstalk Docker EC2 instance, custom AMI not working

I really like the fact that I'm able to use Docker images pushed into AWS ECR with Elastic Beanstalk. The only thing that has given me a headache is the lack of information on how to add pre-deploy hooks to your Elastic Beanstalk EC2 instance.
If I've understood correctly, .ebextensions scripts run POST deploy, so those don't solve my problem here.
I came up with a solution where I manually added the script that needs to run in the pre-deploy phase to the EB EC2 instance, more specifically into the directory:
/opt/elasticbeanstalk/hooks/appdeploy/pre
Now that script gets executed every time I deploy a new application version to my instance, which is exactly what I want. But if the load balancer/scaling setup attached to my EB environment launches a new EB instance, it obviously doesn't contain the manually added script and therefore will not be able to run my application.
I also tried to create an AMI from the running EB EC2 instance containing my custom pre-deploy script, but for some reason Docker is unable to start on the new EB instance based on my custom AMI.
eb-activity.log says:
[2016-08-29T07:38:36.580Z] INFO [3887] - [Initialization/PreInitStage0/PreInitHook/01setup-docker-options.sh] : Activity execution failed, because: Stopping docker: [FAILED]
Starting docker: ..........[FAILED
(ElasticBeanstalk::ExternalInvocationError)
caused by: Stopping docker: [FAILED]
Starting docker: ..........[FAILED] (Executor::NonZeroExitStatus)
If I SSH into the EB instance and try to restart the Docker service manually, I get the same result:
sudo service docker restart
Stopping docker: [FAILED]
Starting docker: .......... [FAILED]
I would highly appreciate it if someone could tell me the correct way to add a PRE DEPLOYMENT script to Elastic Beanstalk. Thanks in advance :)
UPDATE
I solved the AMI problem by launching a new instance from the same AMI that Elastic Beanstalk uses by default (aws-elasticbeanstalk-amzn-2016.03.3.x86_64-docker-hvm-201608240450). I customized that instance by adding my script to /opt/elasticbeanstalk/hooks/appdeploy/pre, even though the hooks directory structure:
/opt/elasticbeanstalk/hooks/
├── appdeploy
│   ├── enact
│   ├── post
│   └── pre
├── configdeploy
│   ├── enact
│   ├── post
│   └── pre
├── postinit
├── preinit
└── restartappserver
    ├── enact
    ├── post
    └── pre
isn't present on this fresh instance. The directory structure is fully generated when the Docker application is deployed, and scripts added to that directory structure in advance will be executed.
Even though that solves my problem, I would like to use a solution similar to what .ebextensions provides, but is there similar support for scripts other than POST DEPLOY ones?