Cloud Custodian: resources:ec2 not available in region

I am using Cloud Custodian and policies written in YAML to automate tasks related to AWS. For now, I am trying to stop a running instance. The following is the policy, custodian.yml, that I am using:
policies:
  - name: my-first-policy
    resource: ec2
    filters:
      - "tag:test": present
    actions:
      - stop
The instance is tagged with the tag test, and it is running in us-east-2b. I am using the following command to run the policy:
AWS_DEFAULT_REGION=us-east-2b AWS_ACCESS_KEY_ID="the_value_of_the_key_ID" AWS_SECRET_KEY="the_value_of_secret_key" custodian run --output-dir=. custodian.yml
The problem is that no errors are generated, but it is unable to locate the instance and emits the following warnings:
2017-06-17 08:28:17,926: c7n.policies:WARNING policy:my-first-policy resources:ec2 not available in region:us-east-2b
2017-06-17 08:28:17,927: custodian.commands:WARNING Empty policy file(s). Nothing to do.
I am using the guidelines on working with Cloud Custodian from the following link:
http://www.capitalone.io/cloud-custodian/docs/quickstart/index.html#write-your-first-policy
Can somebody help?

Modify:
AWS_DEFAULT_REGION=us-east-2b
to:
AWS_DEFAULT_REGION=us-east-2
us-east-2 is a region (US East, Ohio); us-east-2b is an availability zone within that region.
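With that change, the full command from the question (with the same placeholder credential values) would be:
AWS_DEFAULT_REGION=us-east-2 AWS_ACCESS_KEY_ID="the_value_of_the_key_ID" AWS_SECRET_KEY="the_value_of_secret_key" custodian run --output-dir=. custodian.yml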

Related

CodeDeploy events not running

This is what my CodeDeploy status looks like:
This is the first time I'm trying to set this up. I created an EC2 instance and added the following policies to the attached IAM role:
and edited the trust relationship like this:
I also installed the CodeDeploy agent on the EC2 instance.
This is my appspec.yml:
version: 0.0
os: linux
files:
  - source: .
    destination: /home/ubuntu
hooks:
  ApplicationStop:
    - location: scripts/stop_server.sh
      timeout: 5
      runas: root
stop_server.sh is just an empty file.
Any ideas?
The most likely problem you're facing is that the agent either isn't installed or the instance doesn't have sufficient permissions. When no events are started on the instance for a deployment, it means that CodeDeploy couldn't talk to the host for some reason.
Here are the steps I would take:
Confirm that you installed the CodeDeploy agent.
Confirm that you've created the IAM service role.
Confirm that you have the IAM instance profile and that it's associated with the instance.
Check that you can reach the CodeDeploy commands endpoint in your region from the box, e.g. ping codedeploy.us-east-1.amazonaws.com. Otherwise, your networking setup might be too restrictive.
Look at the logs on the host to see what's going on (see the sketch after this list).
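A minimal sketch of those checks, run on the instance itself; the service name and log path are the defaults for a standard agent install and may differ on your setup:
# Confirm the CodeDeploy agent is installed and running
sudo service codedeploy-agent status
# Confirm the commands endpoint for your region is reachable from the box
ping codedeploy.us-east-1.amazonaws.com
# Inspect the agent log on the host for deployment errors
tail -n 100 /var/log/aws/codedeploy-agent/codedeploy-agent.log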

Restrict Elastic Beanstalk from creating a security group and use the provided one instead

When I create a Beanstalk environment using a saved configuration, it works fine but creates a new security group for no reason and attaches it to the instances. I already provide a security group to allow SSH access to the instances from VPC sources.
I followed this thread and tried to restrict this behaviour with the following config inside .ebextensions:
Resources:
  AWSEBSecurityGroup: { "CmpFn::Remove" : {} }
  AWSEBAutoScalingLaunchConfiguration:
    Properties:
      SecurityGroups:
        - sg-07f419c62e8c4d4ab
Now the creation process gets stuck at:
Creating application version archive "app-210517_181530".
Uploading stage/app-210517_181530.zip to S3. This may take a while.
Upload Complete.
Environment details for: restrict-sg-poc
Application name: stage
Region: ap-south-1
Deployed Version: app-210517_181530
Environment ID: e-pcpmj9mdjb
Platform: arn:aws:elasticbeanstalk:ap-south-1::platform/Tomcat 8.5 with Corretto 11 running on 64bit Amazon Linux 2/4.1.8
Tier: WebServer-Standard-1.0
CNAME: UNKNOWN
Updated: 2021-05-17 12:45:35.701000+00:00
Printing Status:
2021-05-17 12:45:34 INFO createEnvironment is starting.
2021-05-17 12:45:35 INFO Using elasticbeanstalk-ap-south-1-############ as Amazon S3 storage bucket for environment data.
How can I do this properly so that my SG is added to the instances and no new SGs are created?
PS: I am using a shared ALB, so the SG created for load balancers is not a problem right now.

How to create and verify a cross-region public certificate through CloudFormation?

I'm attempting to achieve the following through CloudFormation.
From a stack created in an EU region, I want to create (and verify) a public certificate against Route53 in US-EAST-1, because the certificate will be used with CloudFront. I'm aiming to have zero actions performed in the console or via the AWS CLI.
The new CloudFormation support for ACM was a little sketchy last week but seems to be working now.
Certificate
Resources:
  Certificate:
    Type: AWS::CertificateManager::Certificate
    Properties:
      DomainName: !Sub "${Env}.domain.cloud"
      ValidationMethod: DNS
      DomainValidationOptions:
        - DomainName: !Sub "${Env}.domain.cloud"
          HostedZoneId: !Ref HostedZoneId
All I need to do is use CloudFormation to deploy this into the US-EAST-1 region from a stack in a different region. Everything else is ready for this.
I thought that using CodePipeline's cross-region support would be great, so I started to look into [this documentation][1]. After setting things up in my template I met the following error message:
An error occurred while validating the artifact bucket {...} The bucket named is not located in the `us-east-1` AWS region.
To me this makes no sense, as it seems that you already need at least a couple of resources to exist in the target region for it to work. Cart-before-the-horse kind of behavior. To test this I created an artifact bucket in the target region by hand and things worked fine, but that requires using the CLI or the console, when I'm aiming for a CloudFormation-based solution.
Note: I'm running out of time to write this, so I'll update it when I can in a few hours. Any help before I can do that would be great, though.
Sadly, that's required for cross-region CodePipeline. From the docs:
When you create or edit a pipeline, you must have an artifact bucket in the pipeline Region and then you must have one artifact bucket per Region where you plan to execute an action.
If you want to fully automate this through CloudFormation, you either have to use a custom resource to create the buckets in all the regions in advance, or look at stack sets to deploy a bucket template to multiple regions.
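As a rough sketch of the stack sets route, the per-region artifact bucket can be a very small template like the one below, deployed once per action region (for example via CloudFormation StackSets); the bucket naming scheme here is only an illustrative assumption:
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  # One artifact bucket per region in which the pipeline executes an action.
  ArtifactBucket:
    Type: AWS::S3::Bucket
    Properties:
      # Bucket names are global, so include account and region to keep them unique.
      BucketName: !Sub "codepipeline-artifacts-${AWS::AccountId}-${AWS::Region}"
Outputs:
  ArtifactBucketName:
    Value: !Ref ArtifactBucket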
P.S. Your link does not work, so I'm not sure whether you are referring to the same documentation page.

Is it possible to execute commands and then update security groups in a CloudFormation template?

I would like to perform the following operations in order with CloudFormation.
Start up an EC2 instance.
Give it privileges to access the full internet using security group A.
Download particular versions of Java and Python.
Remove its internet privileges by removing security group A and adding a security group B.
I observe that there is a DependsOn attribute for specifying the order in which to create resources, but I was unable to find a feature that would allow me to update the security groups on the same EC2 instance twice over the course of creating a stack.
Is this possible with CloudFormation?
Not in CloudFormation natively, but you could launch the EC2 instance with a configured userdata script that itself downloads Java/Python and the awscli, as necessary, and then uses the awscli to switch security groups for the current EC2 instance.
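A rough user-data sketch of that idea, assuming Amazon Linux, IMDSv1 for brevity, and placeholder package names and security group ID (none of these come from the question):
#!/bin/bash
# Runs at boot while security group A still allows outbound internet access.

# Placeholder package installs; substitute the exact Java/Python versions you need.
yum install -y python3 java-11-amazon-corretto-headless

# Discover this instance's ID and region from the instance metadata service.
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
REGION=$(curl -s http://169.254.169.254/latest/meta-data/placement/region)

# Replace the instance's security group list with only the restricted group B.
aws ec2 modify-instance-attribute \
  --region "$REGION" \
  --instance-id "$INSTANCE_ID" \
  --groups sg-0123456789abcdef0   # placeholder ID for security group B
For the last call to succeed, the instance also needs the AWS CLI available and an instance role that allows ec2:ModifyInstanceAttribute.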
However, if all you need is Java and Python pre-loaded then why not simply create an AMI with them already installed and launch from that AMI?
The best way out is to utilise a CloudFormation custom resource here. You can create a Lambda function that does exactly what you need. This Lambda function can then be called as a custom resource in the CloudFormation template.
You can pass your new security group ID and instance ID to the Lambda function and code the Lambda function to use the AWS SDK and make the modifications that you need.
I have leveraged this to post updates to my web server about the progress of the CloudFormation template. Below is a sample snippet from the template.
EC2InstanceProfile:
  Type: AWS::IAM::InstanceProfile
  Properties:
    Path: /
    Roles: [!Ref 'EC2Role']
MarkInstanceProfileComplete:
  Type: 'Custom::EC2InstanceProfileDone'
  Version: '1.0'
  DependsOn: EC2InstanceProfile
  Properties:
    ServiceToken: !Ref CustomResourceArn
    HostURL: !Ref Host
    LoginType: !Ref LoginType
    SecretId: !Ref SecretId
    WorkspaceId: !Ref WorkspaceId
    Event: 2
    Total: 3
Here the resource MarkInstanceProfileComplete is a custom resource that calls a Lambda function. It takes the event count and total count as input and uses them to calculate percentage progress; based on that, it sends a request to my web server. In practice, this Lambda function can do anything you want it to do.
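Applied to the security-group question above, the custom resource might look roughly like this; the resource names, the SwapSgLambdaArn parameter, and SecurityGroupB are all hypothetical placeholders, and the actual group swap would be implemented inside your Lambda code:
SwapSecurityGroup:
  Type: 'Custom::SwapSecurityGroup'
  DependsOn: MyEC2Instance               # hypothetical EC2 instance resource
  Properties:
    ServiceToken: !Ref SwapSgLambdaArn   # hypothetical parameter holding the Lambda ARN
    InstanceId: !Ref MyEC2Instance
    SecurityGroupId: !Ref SecurityGroupB # hypothetical restricted security group B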

Ansible module to attach an IAM role to existing EC2 instances

I am trying to attach an IAM role to multiple EC2 instances based on tags. Is there a module already available that I can use? I have been searching for a bit but couldn't find anything specific.
Attaching an IAM role to existing EC2 instances is a relatively new feature (announced in Feb 2017). There is no support for that in Ansible currently. If you have AWS CLI 1.11.46 or higher installed, then you can use the shell module to invoke the AWS CLI and achieve the desired result.
See: New! Attach an AWS IAM Role to an Existing Amazon EC2 Instance by Using the AWS CLI
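A minimal sketch of that shell-module approach; the region, instance ID, and profile name are placeholders mirroring the example further down:
- name: Attach instance profile MyRole via the AWS CLI
  shell: >
    aws ec2 associate-iam-instance-profile
    --region us-east-1
    --instance-id i-xxxxxxxxxx
    --iam-instance-profile Name=MyRole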
I submitted a PR last year to add two AWS modules: boto3 and boto3_wait.
These two modules allow you to interact with the AWS API using boto3.
For instance, you could attach a role to an existing EC2 instance by calling the associate_iam_instance_profile method on the EC2 service:
- name: Attach role MyRole
  boto3:
    service: ec2
    region: us-east-1
    operation: associate_iam_instance_profile
    parameters:
      IamInstanceProfile:
        Name: MyRole
      InstanceId: i-xxxxxxxxxx
Feel free to give the PR a thumbs-up if you like it! ;)
In addition to this, you can use AWS dynamic inventory to target instances by tag.
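For example, with the newer aws_ec2 inventory plugin (which postdates this answer), a tag-based dynamic inventory could look roughly like this; the tag key and region are assumptions:
# Hypothetical inventory file; the file name must end in .aws_ec2.yml
plugin: aws_ec2
regions:
  - us-east-1
filters:
  # Only include instances that carry the "test" tag (placeholder tag key).
  tag-key: test
keyed_groups:
  # Build groups such as tag_test_<value> so plays can target instances by tag.
  - key: tags
    prefix: tag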