I am faced with the following situation:
There is an EC2 instance in, say, eu-west-1.
When selecting Snapshots in the EC2 service, I see that periodically, every 7 days at exactly the same time, a snapshot is taken of that particular instance's volume.
The problem is I cannot find:
any related policy in the Lifecycle Manager service
any relevant Lambda function that could carry out such a task.
By what other (managed) means could such a process be carried out periodically with such accuracy in timing?
edit: The corresponding CloudTrail log entry is:
(the actual values for the user, event and request IDs have of course been scrambled)
AWS access key:
AWS region: eu-west-1
Error code:
Event ID: 454g0236-x4e6-43c1-3565-4xb6d541c2h1
Event name: CreateSnapshot
Event source: ec2.amazonaws.com
Event time: 2019-11-23, 05:00:44 AM
Read only: false
Request ID: zedfbc42-2513-459e-3241-ffcb8442ba44
Source IP address: events.amazonaws.com
User name: g45tg34m3l53mmm53333421knbb43
There are multiple other options:
Check CloudWatch Events for a scheduled rule that triggers the snapshot; this is most probably what is happening in your case (see the sketch after this list).
A cron job on an EC2 instance.
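The events.amazonaws.com source IP in the CloudTrail entry above points at exactly this: a scheduled CloudWatch Events (EventBridge) rule calling CreateSnapshot directly, with no Lambda involved. Such a rule shows up under CloudWatch > Events > Rules in the same region (eu-west-1). A minimal sketch of what it could look like in CloudFormation terms, where the rule name, the volume ID and the built-in-target ARN format are assumptions for illustration only, not values taken from the account:

Resources:
  WeeklySnapshotRule:
    Type: AWS::Events::Rule
    Properties:
      Name: weekly-ebs-snapshot          # hypothetical rule name
      Description: Snapshot a volume every 7 days
      ScheduleExpression: rate(7 days)
      State: ENABLED
      Targets:
        # In the console this is the built-in "Create a snapshot of an EBS volume"
        # target rather than a Lambda function; the automation-action ARN format
        # below is an assumption.
        - Id: create-snapshot
          Arn: !Sub arn:aws:automation:${AWS::Region}:${AWS::AccountId}:action/EC2CreateSnapshot/EC2CreateSnapshot_weekly-ebs-snapshot
          Input: '"vol-0123456789abcdef0"'  # hypothetical volume ID, passed as a JSON string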
If I understood your question correctly, you are looking for a way to know whether Lifecycle Manager is available for EC2 snapshots.
The links given below should help you with that.
For enabling a custom snapshot lifecycle policy manually, refer to Snapshot Lifecycle.
For automating a solution for the same, refer to automation of snapshot lifecycle.
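As a concrete reference, here is a minimal CloudFormation sketch of a Data Lifecycle Manager policy that snapshots tagged volumes on a schedule; the tag, the schedule values and the execution role are illustrative assumptions (DLM creates a role named AWSDataLifecycleManagerDefaultRole when set up from the console):

Resources:
  SnapshotLifecyclePolicy:
    Type: AWS::DLM::LifecyclePolicy
    Properties:
      Description: Scheduled EBS snapshots via Data Lifecycle Manager
      State: ENABLED
      # Assumes the default DLM role already exists in the account.
      ExecutionRoleArn: !Sub arn:aws:iam::${AWS::AccountId}:role/AWSDataLifecycleManagerDefaultRole
      PolicyDetails:
        ResourceTypes:
          - VOLUME
        TargetTags:
          - Key: Backup        # DLM targets volumes by tag; key/value are illustrative
            Value: "true"
        Schedules:
          - Name: DailySnapshots
            CreateRule:
              Interval: 24
              IntervalUnit: HOURS
              Times:
                - "05:00"
            RetainRule:
              Count: 7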
I am receiving the following errors in the EC2 CloudWatch Agent logs, /var/logs/awslogs.log:
I verified the EC2 has a role:
And the role has the correct policies:
I have set the correct region in /etc/awslogs/awscli.conf:
I noticed that running aws configure list on the EC2 instance gives this:
Is this incorrect? Should it list the profile (EC2_Cloudwatch_Profile) there?
I was using terraform and reprovisioning by doing:
terraform destroy && terraform apply
It looks like, because IAM is a global service, it is "eventually consistent" rather than "immediately consistent": when the instance profile was destroyed, the terraform apply began too quickly. Despite the destroy being complete, the ARN of the previous instance profile was still there and was re-used; however, the ID changed to a new ID.
Replacing the EC2 instance would bring it up to speed with the correct ID. However, my solution is simply to wait longer between terraform destroy and terraform apply.
I'm trying to use a CloudFormation AddOn template in the following scenario:
Service 1
creates an SNS Topic and a Managed Policy that has all the necessary permissions to publish to it. The SNS Topic will collect "Activity" records and then fan them out to multiple subscribers.
A common code library abstracts away the usage of SNS - any applications that need to post activity messages do so without any knowledge that SNS is being used underneath the covers.
Service N needs to publish activity messages using the common code library and needs whatever permissions are necessary.
So service 1 writes the Managed Policy ARN out as an exported output to the AddOn stack like so:
Outputs:
  activityPublishPolicy:
    Description: "Activity Publish Policy ARN"
    Value: !Ref activitySnsTopicPublishPolicy
    Export:
      Name: !Sub ${App}-${Env}-activity-publish-policy
Then in service N, I was hoping to import the ARN of the publishing policy and get it attached to the task role:
Outputs:
  activityPublishAccessPolicy:
    Description: "The IAM::ManagedPolicy to attach to the task role."
    Value: !ImportValue
      'Fn::Sub': '${App}-${Env}-activity-publish-policy'
The ARN is imported just fine and written out to the CloudFormation stack of Service N; however, the Task Role does not get the Managed Policy attached to it.
I did a quick test to see if adding the policy directly to the AddOn stack would attach and that does indeed work.
Outputs:
  activityPublishAccessPolicy:
    Description: "The IAM::ManagedPolicy to attach to the task role."
    Value: !Ref activityPolicy
This leads me to believe that Copilot only attaches ManagedPolicies to the Task Role that are created in its own AddOn Stack, but that's just a guess.
I'd prefer not to write a new policy in every service to do this, and I'd prefer not to open up the topic policy to our whole VPC if possible.
Is there a better way of doing this?
Thanks!
This is because Copilot scans the Addons template to determine the type of the resource you're outputting. There are several "magic" outputs for addons. They are:
Security Groups
Managed Policies
Secrets
To detect these outputs, we scan the template looking for the logical ID of the referenced resource. This means that we don't currently have a way of deriving the resource type of the results of Fn::ImportValue calls, since they don't refer to a logical ID defined in that addons template!
I'm sorry this is causing you problems; it seems like you may need to add the managed policy to the addons stack of each service you want to grant this access to (a sketch of what that could look like is below). This is something we might be able to do something about, though, and we would love it if you could cut us a GitHub issue so we can prioritize and gather feedback on a proposal.
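For reference, a minimal sketch of what that per-service addons template could look like, assuming service 1 also exports the topic ARN itself under a name such as ${App}-${Env}-activity-topic-arn (that export name is an assumption, not something taken from your stacks):

Parameters:
  App:
    Type: String
  Env:
    Type: String
  Name:
    Type: String

Resources:
  activityPublishPolicy:
    Type: AWS::IAM::ManagedPolicy
    Properties:
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Sid: PublishActivityMessages
            Effect: Allow
            Action: sns:Publish
            # Imports the topic ARN exported by service 1 (assumed export name).
            Resource:
              Fn::ImportValue: !Sub ${App}-${Env}-activity-topic-arn

Outputs:
  activityPublishAccessPolicy:
    Description: "The IAM::ManagedPolicy to attach to the task role."
    Value: !Ref activityPublishPolicy

Because the output now refers to a logical ID defined in the same addons template, Copilot can recognize it as a managed policy and attach it to the task role.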
In short, I want to enable CloudTrail data events for several objects in different S3 buckets. I am able to mention all the objects directly when creating the trail from CloudFormation, but I want to add them at a later point in time.
Create an AWS CloudTrail trail in a CloudFormation stack and export the trail's ARN.
Then, when creating objects in an S3 bucket for which I need CloudTrail data events, I want to add them to this existing trail.
Here is the spot in the console where I can manually add it:
CloudTrail AWS Console
So, I am looking to add data events to an existing CloudTrail trail via CloudFormation.
I have looked through the entire documentation several times, and I can only see a way to add them while creating the trail:
Create a CloudWatch Events Rule for an Amazon S3 Source (AWS CloudFormation Template) - CodePipeline
Please advise: what is the resource type that supports this?
You can probably get some hints from the CFT I have created: an S3 event (a PutObject operation) logs the event details into a separate bucket, from where a CloudWatch Events rule triggers the execution of a Step Functions state machine.
cloudtrail:
  Type: AWS::CloudTrail::Trail
  Properties:
    EnableLogFileValidation: Yes
    EventSelectors:
      - DataResources:
          - Type: AWS::S3::Object
            Values:
              - arn:aws:s3:::s3-event-step-bucket/
        IncludeManagementEvents: Yes
        ReadWriteType: All
    IncludeGlobalServiceEvents: Yes
    IsLogging: Yes
    IsMultiRegionTrail: Yes
    S3BucketName: s3-event-step-bucket-storage
    TrailName: xyz
When you deploy this CFT, it will update the existing trail with the CloudTrail data events as the trigger point.
I'm setting up an Auto Scaling group and want to tag instances dynamically: is it possible to auto-increment the Name tag?
for eg:-
Key: Name Value: Instance-1
Key: Name Value: Instance-2
.
.
.
Key: Name Value: Instance-N
Yes, it is possible, but not directly using any built-in AWS functionality.
You have to set things up so that, as an instance is launched by the Auto Scaling group, a notification is published (via AWS SNS) and stored in SQS.
You will then have to use a program or script that runs on a server and keeps a constant watch on the SQS queue, picks up the latest notification and the information about the instance, and then sets the instance's Name tag using an incrementing counter. A sketch of the notification plumbing is shown below.
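A minimal CloudFormation sketch of that notification plumbing; all names are made up, the Auto Scaling group is shown only with the relevant property, and the watcher script itself (reading the queue, working out the next number and calling CreateTags) is not included:

Resources:
  LaunchTopic:
    Type: AWS::SNS::Topic

  LaunchQueue:
    Type: AWS::SQS::Queue

  # Allows the topic to deliver launch notifications into the queue.
  LaunchQueuePolicy:
    Type: AWS::SQS::QueuePolicy
    Properties:
      Queues:
        - !Ref LaunchQueue
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service: sns.amazonaws.com
            Action: sqs:SendMessage
            Resource: !GetAtt LaunchQueue.Arn
            Condition:
              ArnEquals:
                "aws:SourceArn": !Ref LaunchTopic

  LaunchSubscription:
    Type: AWS::SNS::Subscription
    Properties:
      Protocol: sqs
      TopicArn: !Ref LaunchTopic
      Endpoint: !GetAtt LaunchQueue.Arn

  AutoScalingGroup:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      # ...launch template, subnets, MinSize/MaxSize and so on omitted for brevity...
      NotificationConfigurations:
        - TopicARN: !Ref LaunchTopic
          NotificationTypes:
            - autoscaling:EC2_INSTANCE_LAUNCH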
I've created a worker environment for my EB application in order to take advantage of its "periodic tasks" capabilities using cron.yaml (located in the root of my application). It's a simple Sinatra app (for now) that I would like to use to issue requests to my corresponding web server environment.
However, I'm having trouble deploying via the EB CLI. Below is what happens when I run eb deploy.
╰─➤ eb deploy
Creating application version archive "4882".
Uploading myapp/4882.zip to S3. This may take a while.
Upload Complete.
INFO: Environment update is starting.
ERROR: Service:AmazonCloudFormation, Message:Stack named 'awseb-e-1a2b3c4d5e-stack'
aborted operation. Current state: 'UPDATE_ROLLBACK_IN_PROGRESS'
Reason: The following resource(s) failed to create: [AWSEBWorkerCronLeaderRegistry].
I've looked around the CloudFormation dashboard to check for possible errors. After reading a bit about what I could find regarding AWSEBWorkerCronLeaderRegistry, I found that it's most likely a DynamoDB table that gets created/updated. However, when I look in the DynamoDB dashboard, there are no tables listed.
As always, any help, feedback, or guidance is appreciated.
If you are reluctant to add full DynamoDB access (like I was), Beanstalk now provides a managed policy for worker environment permissions (AWSElasticBeanstalkWorkerTier). You can try adding it to your instance profile role instead (a rough sketch follows the link below).
See http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/iam-instanceprofile.html
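If the instance profile is managed in CloudFormation, attaching that managed policy could look roughly like the following; the logical names are placeholders, and the managed policy ARN is the standard aws-managed form for the policy named above:

Resources:
  WorkerInstanceRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service: ec2.amazonaws.com
            Action: sts:AssumeRole
      ManagedPolicyArns:
        # AWS managed policy for Elastic Beanstalk worker tier instances.
        - arn:aws:iam::aws:policy/AWSElasticBeanstalkWorkerTier

  WorkerInstanceProfile:
    Type: AWS::IAM::InstanceProfile
    Properties:
      Roles:
        - !Ref WorkerInstanceRole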
We had the same issue and fixed it by attaching AmazonDynamoDBFullAccess to Elastic Beanstalk role (which was named aws-elasticbeanstalk-ec2-role in our case).
I was using Codepipeline to deploy my worker and was getting the same error. Eventually I tried giving AWS-CodePipeline-Service the AmazonDynamoDBFullAccess policy and that seemed to resolve the issue.
As Anthony suggested, when triggering the deploy from another service such as CodePipeline, that service's role needs the dynamodb:CreateTable permission to create the Leader Registry table (more info below) in DynamoDB.
Adding a full-access permission is bad practice and should be avoided. Also, the managed policy AWSElasticBeanstalkWorkerTier does not have the appropriate permission, since it is meant for the workers themselves to access DynamoDB and check whether they are the current leader.
1. Find the Role that is trying to create the table:
Go to CloudTrail > Event History
Filter Event Name: CreateTable
Make sure the error code is AccessDenied
Locate the role name (e.g. AWSCodePipelineServiceRole-us-east-1-dev):
2. Add the permissions:
Go to IAM > Roles
Find the role in the list
Attach a policy with:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "CreateCronLeaderTable",
      "Effect": "Allow",
      "Action": "dynamodb:CreateTable",
      "Resource": "arn:aws:dynamodb:*:*:table/*-stack-AWSEBWorkerCronLeaderRegistry*"
    }
  ]
}
3. Check results:
Redeploy by triggering the pipeline
Check Elastic Beanstalk for errors
Optionally, go to CloudTrail and make sure the request succeeded this time.
You may use this technique any time you are not sure which permission should be attached to what.
About the Cron Leader Table
From the Periodic Tasks Documentation:
Elastic Beanstalk uses leader election to determine which instance in your worker environment queues the periodic task. Each instance attempts to become leader by writing to an Amazon DynamoDB table. The first instance that succeeds is the leader, and must continue to write to the table to maintain leader status. If the leader goes out of service, another instance quickly takes its place.
For those wondering, this DynamoDB table uses 10 RCU and 5 WCU, which is covered by the always-free tier.