I am getting an error while running this script:
import boto3
from botocore.exceptions import ClientError

client = boto3.client('autoscaling')

def set_asg_launch_template_version_latest(asg_name, lt_id):
    try:
        response = client.update_auto_scaling_group(
            AutoScalingGroupName=asg_name,
            LaunchTemplate={
                'LaunchTemplateId': lt_id,
                'Version': '$Latest'
            }
        )
        print("Set launch template: {} version for asg: {} to $Latest".format(lt_id, asg_name))
        return response
    except ClientError as e:
        print('Error setting launch template version to $Latest')
        raise e

set_asg_launch_template_version_latest(ASGName, launch_template_id)
============>
ClientError - An error occurred (AccessDenied) when calling the UpdateAutoScalingGroup operation: You are not authorized to use launch template: lt-xxxxxxxxx
Hint:
All of these permissions are granted (note the last one):
- resource-groups:ListGroupResources
- tag:GetResources
- s3:PutObject
- s3:PutObjectAcl
- s3:List*
- ec2:Describe*
- ec2:CreateSnapshot
- ec2:CreateImage
- kms:CreateGrant
- ec2:StartInstances
- ec2:RunInstances
- ec2:TerminateInstances
- autoscaling:StartInstanceRefresh
- ec2:CreateSecurityGroup
- ec2:AuthorizeSecurityGroupEgress
- ec2:DeleteSecurityGroup
- ec2:RevokeSecurityGroupEgress
- ec2:ModifyLaunchTemplate
- ec2:CreateLaunchTemplateVersion
- autoscaling:Describe*
- ec2:DescribeLaunchTemplateVersions
- ec2:DescribeLaunchTemplates
- autoscaling:UpdateAutoScalingGroup
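One thing worth checking: the "not authorized to use launch template" variant of AccessDenied is typically about ec2:RunInstances being denied on the launch template resource itself (for example by a resource-level restriction or a service control policy), not about the autoscaling:UpdateAutoScalingGroup action. A rough sketch (the helper name, role ARN, and template ARN are placeholders of mine) that uses the IAM policy simulator to confirm whether the calling role may use the template:

```python
# Hedged sketch: ask the IAM policy simulator whether a role is allowed to
# run instances from a given launch template. The `iam` parameter lets a
# stub client be injected for testing; otherwise a real client is created.
def can_use_launch_template(role_arn, lt_arn, iam=None):
    """Return True if every evaluated action is allowed on the template."""
    if iam is None:
        import boto3  # only create a real client when none is injected
        iam = boto3.client("iam")
    result = iam.simulate_principal_policy(
        PolicySourceArn=role_arn,
        ActionNames=["ec2:RunInstances"],
        ResourceArns=[lt_arn],
    )
    # EvalDecision is "allowed", "explicitDeny", or "implicitDeny"
    return all(r["EvalDecision"] == "allowed" for r in result["EvaluationResults"])
```

If this returns False even though the action list above looks complete, look for resource-level restrictions or SCPs applying to the launch template ARN.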
Related
TableBackupVault:
  Type: AWS::Backup::BackupVault
  Properties:
    BackupVaultName: tabel-vault
What permissions are required for creating a backup vault?
I tried these:
- Sid: Backup
  Effect: Allow
  Action:
    - backup:CreateBackupVault
    - backup:CreateBackupPlan
    - backup:CreateBackupSelection
    - backup:TagResource
    - backup:UntagResource
  Resource:
    - "*"
But I am getting
Error:
CREATE_FAILED: BackupVault (AWS::Backup::BackupVault)
Resource handler returned message: "Insufficient privileges to perform this action"
For anyone with this error: in addition to the permissions above, you have to grant the following IAM permission:
backup-storage:MountCapsule
as it is required here: https://docs.aws.amazon.com/aws-backup/latest/devguide/access-control.html
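Putting the answer together, the working statement would be the original one with the missing permission added:

```yaml
- Sid: Backup
  Effect: Allow
  Action:
    - backup:CreateBackupVault
    - backup:CreateBackupPlan
    - backup:CreateBackupSelection
    - backup:TagResource
    - backup:UntagResource
    - backup-storage:MountCapsule  # the missing permission noted above
  Resource:
    - "*"
```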
I have a multi-account structure in AWS, with a master account and child accounts. I am following this guide to propagate tags from the child instances to the master account once they have been activated, so that I can manage the instances in the master account (Systems Manager).
So far it all works, up to the point where the Lambda in the master account has all of the tags it needs. However, it is unable to add the tags to the managed instances in Systems Manager. I'm not sure why the role still can't access the tags, given the permissions...
This is the error I get:
[ERROR] 2019-03-29T09:14:02.419Z a00a68ba-9904-4199-bcae-cad75f6f5232 An error occurred (ValidationException) when calling the AddTagsToResource operation: Caller is an end user and not allowed to mutate system tags instanceId: mi-0d3bfce27d073c0f2
This is the lambda function with the attached role:
AWSTemplateFormatVersion: '2010-09-09'
Description: Management function that copies tags
Resources:
  rSSMTagManagerRole:
    Type: "AWS::IAM::Role"
    Properties:
      RoleName: Automation-SSMTagManagerRole
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: "Allow"
            Principal:
              Service:
                - "lambda.amazonaws.com"
            Action:
              - "sts:AssumeRole"
      Path: "/aws/"
      Policies:
        - PolicyName: "CopyInstanceTagsToSSMPolicy"
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Effect: "Allow"
                Action:
                  - ssm:AddTagsToResource
                  - logs:CreateLogGroup
                  - logs:CreateLogStream
                  - logs:PutLogEvents
                  - tag:*
                Resource: "*"
  fnSSMTagManager:
    Type: AWS::Lambda::Function
    Properties:
      FunctionName: Automation-SSM-Tag-Manager
      Handler: index.lambda_handler
      Role: !GetAtt [rSSMTagManagerRole, Arn]
      Description: >
        Copies tags from the list of instances in the event
        context to the specified managed instances.
      Code:
        ZipFile: |+
          import boto3
          import json
          import logging

          #setup simple logging for INFO
          logger = logging.getLogger()
          logger.setLevel(logging.WARN)

          client = boto3.client('ssm')

          def lambda_handler(event, context):
              """Copies tags from the list of instances in the event
              context to the specified managed instances.
              """
              for instance in event["instances"]:
                  addTags(instance["instanceId"], instance["tags"])

          def addTags(resourceid, tags):
              logger.info("Configuring " + resourceid + " with " + str(tags))
              try:
                  response = client.add_tags_to_resource(
                      ResourceType='ManagedInstance',
                      ResourceId=resourceid,
                      Tags=tags
                  )
                  logger.info(response)
                  return response
              except Exception as e:
                  errorMessage = str(e) + "instanceId: " + resourceid
                  logger.error(errorMessage)
                  return errorMessage
      Runtime: python3.6
      Timeout: '90'
Using the same guide, I faced the exact same error. It turned out that the instances in the agency account had too many (10+) tags, which caused the Tag Manager to give this error. I modified the Tag collector Lambda function to propagate only specific tags instead of all tags, and that cleared the error.
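A related gotcha hiding in the error text above: tag keys beginning with aws: are reserved system tags, and callers are not allowed to mutate them, which is exactly what "not allowed to mutate system tags" complains about. A small sketch of the kind of filter the answer above describes (the function name and the whitelist parameter are my own, assuming the [{'Key': ..., 'Value': ...}] tag format that add_tags_to_resource expects):

```python
# Sketch: drop reserved "aws:"-prefixed system tags (and, optionally,
# anything outside a whitelist) before calling ssm.add_tags_to_resource().
def filter_tags(tags, allowed_keys=None):
    """Keep only user tags, optionally restricted to a whitelist of keys."""
    result = []
    for tag in tags:
        if tag["Key"].lower().startswith("aws:"):
            continue  # reserved system tag: callers may not mutate these
        if allowed_keys is not None and tag["Key"] not in allowed_keys:
            continue  # propagate only specific tags, as in the answer above
        result.append(tag)
    return result
```

Calling addTags(instance_id, filter_tags(tags)) instead of passing the raw tag list should avoid the ValidationException.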
I can't see the log group defined by the CloudWatch agent on my EC2 instance.
The default log group /var/log/messages is not visible either.
I can't see these logs even on the root account.
I have other log groups configured and visible.
I have the following setup:
Amazon Linux
AWS managed policy attached to the instance role: CloudWatchAgentServerPolicy
Agent installed via awslogs - https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/QuickStartEC2Instance.html
Agent started successfully
No errors in /var/log/awslogs.log. Looks like working normally. Log below.
Configuration done via /etc/awslogs/config/FlaskAppAccessLogs.conf
Instance has outbound access to internet
Instance security groups allows all outbound traffic
Any ideas what to check or what can be missing?
/etc/awslogs/config/FlaskAppAccessLogs.conf:
cat /etc/awslogs/config/FlaskAppAccessLogs.conf
[/var/log/nginx/access.log]
initial_position = start_of_file
file = /var/log/nginx/access.log
datetime_format = %d/%b/%Y:%H:%M:%S %z
buffer_duration = 5000
log_group_name = FlaskApp-Frontends-access-log
log_stream_name = {instance_id}
/var/log/awslogs.log
2019-01-05 17:50:21,520 - cwlogs.push - INFO - 24838 - MainThread - Loading additional configs from /etc/awslogs/config/FlaskAppAccessLogs.conf
2019-01-05 17:50:21,520 - cwlogs.push - INFO - 24838 - MainThread - Missing or invalid value for use_gzip_http_content_encoding config. Defaulting to use gzip encoding.
2019-01-05 17:50:21,520 - cwlogs.push - INFO - 24838 - MainThread - Missing or invalid value for queue_size config. Defaulting to use 10
2019-01-05 17:50:21,520 - cwlogs.push - INFO - 24838 - MainThread - Using default logging configuration.
2019-01-05 17:50:21,544 - cwlogs.push.stream - INFO - 24838 - Thread-1 - Starting publisher for [c17fae93047ac481a4c95b578dd52f94, /var/log/messages]
2019-01-05 17:50:21,550 - cwlogs.push.stream - INFO - 24838 - Thread-1 - Starting reader for [c17fae93047ac481a4c95b578dd52f94, /var/log/messages]
2019-01-05 17:50:21,551 - cwlogs.push.reader - INFO - 24838 - Thread-4 - Start reading file from 0.
2019-01-05 17:50:21,563 - cwlogs.push.stream - INFO - 24838 - Thread-1 - Starting publisher for [8ff79b6440ef7223cc4a59f18e5f3aef, /var/log/nginx/access.log]
2019-01-05 17:50:21,587 - cwlogs.push.stream - INFO - 24838 - Thread-1 - Starting reader for [8ff79b6440ef7223cc4a59f18e5f3aef, /var/log/nginx/access.log]
2019-01-05 17:50:21,588 - cwlogs.push.reader - INFO - 24838 - Thread-6 - Start reading file from 0.
2019-01-05 17:50:27,838 - cwlogs.push.publisher - WARNING - 24838 - Thread-5 - Caught exception: An error occurred (ResourceNotFoundException) when calling the PutLogEvents operation: The specified log group does not exist.
2019-01-05 17:50:27,839 - cwlogs.push.batch - INFO - 24838 - Thread-5 - Creating log group FlaskApp-Frontends-access-log.
2019-01-05 17:50:27,851 - cwlogs.push.publisher - WARNING - 24838 - Thread-3 - Caught exception: An error occurred (ResourceNotFoundException) when calling the PutLogEvents operation: The specified log group does not exist.
2019-01-05 17:50:27,851 - cwlogs.push.batch - INFO - 24838 - Thread-3 - Creating log group /var/log/messages.
2019-01-05 17:50:27,966 - cwlogs.push.batch - INFO - 24838 - Thread-5 - Creating log stream i-0d7e533f67870ff8d.
2019-01-05 17:50:27,980 - cwlogs.push.batch - INFO - 24838 - Thread-3 - Creating log stream i-0d7e533f67870ff8d.
2019-01-05 17:50:28,077 - cwlogs.push.publisher - INFO - 24838 - Thread-5 - Log group: FlaskApp-Frontends-access-log, log stream: i-0d7e533f67870ff8d, queue size: 0, Publish batch: {'skipped_events_count': 0, 'first_event': {'timestamp': 1546688052000, 'start_position': 0L, 'end_position': 161L}, 'fallback_events_count': 0, 'last_event': {'timestamp': 1546708885000, 'start_position': 4276L, 'end_position': 4468L}, 'source_id': '8ff79b6440ef7223cc4a59f18e5f3aef', 'num_of_events': 24, 'batch_size_in_bytes': 5068}
Status of awslogs
sudo service awslogs status
awslogs (pid 25229) is running...
IAM role policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "cloudwatch:PutMetricData",
        "ec2:DescribeTags",
        "logs:PutLogEvents",
        "logs:DescribeLogStreams",
        "logs:DescribeLogGroups",
        "logs:CreateLogStream",
        "logs:CreateLogGroup"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "ssm:GetParameter"
      ],
      "Resource": "arn:aws:ssm:*:*:parameter/AmazonCloudWatch-*"
    }
  ]
}
It seems that posting a question can quickly help you find the answer yourself. There is an additional configuration file in which I had made a typo:
sudo cat /etc/awslogs/awscli.conf
[plugins]
cwlogs = cwlogs
[default]
region = us-west-1
As configured above, the logs were being delivered to the us-west-1 region, while I was checking us-west-2 :)
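For anyone who wants to rule this out programmatically rather than by clicking through the console, a hedged sketch (the function name and the injectable client_factory parameter are my own) that checks a few candidate regions for the log group:

```python
# Sketch: look for a log group by exact name in several candidate regions,
# to catch the "agent writes to one region, console shows another" mistake.
def find_log_group_regions(name, regions, client_factory=None):
    """Return the subset of regions where a log group with this name exists."""
    if client_factory is None:
        import boto3  # only create real clients when no factory is injected
        client_factory = lambda region: boto3.client("logs", region_name=region)
    found = []
    for region in regions:
        logs = client_factory(region)
        groups = logs.describe_log_groups(logGroupNamePrefix=name)["logGroups"]
        if any(g["logGroupName"] == name for g in groups):
            found.append(region)
    return found
```

For example, find_log_group_regions("FlaskApp-Frontends-access-log", ["us-west-1", "us-west-2"]) would have pointed straight at us-west-1 here.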
I have a CloudFormation template that contains an Application Load Balancer ListenerRule. One of the required properties of a ListenerRule is its Priority (a number between 1 and 50000). The priority for each ListenerRule must be unique.
I need to deploy the same template multiple times. The Priority for the ListenerRule should change every time I launch the template.
At the moment, I have turned the Priority into a parameter you can set when launching the stack and this works fine. Is there a way I can automatically set the priority of the ListenerRule to the next available priority?
No, it's currently not possible to have the priority allocated automatically using only the AWS::ElasticLoadBalancingV2::ListenerRule resource. However, it can be achieved with a custom resource.
First let's create the actual custom resource Lambda code.
allocate_alb_rule_priority.py:
import json
import os
import random
import uuid

import boto3
from botocore.vendored import requests

SUCCESS = "SUCCESS"
FAILED = "FAILED"

# Member must have value less than or equal to 50000
ALB_RULE_PRIORITY_RANGE = 1, 50000

def lambda_handler(event, context):
    try:
        _lambda_handler(event, context)
    except Exception as e:
        # Must raise, otherwise the Lambda will be marked as successful, and the
        # exception will not be logged to CloudWatch logs.
        # Always send a response, otherwise custom resource creation/update/deletion
        # will be stuck.
        send(
            event,
            context,
            response_status=FAILED if event['RequestType'] != 'Delete' else SUCCESS,
            # Do not fail on delete to avoid rollback failure
            response_data=None,
            physical_resource_id=uuid.uuid4(),
            reason=e,
        )
        raise

def _lambda_handler(event, context):
    print("Received event: " + json.dumps(event, indent=2))
    physical_resource_id = event.get('PhysicalResourceId', str(uuid.uuid4()))
    response_data = {}
    if event['RequestType'] == 'Create':
        elbv2_client = boto3.client('elbv2')
        result = elbv2_client.describe_rules(ListenerArn=os.environ['ListenerArn'])
        in_use = list(filter(lambda s: s.isdecimal(), [r['Priority'] for r in result['Rules']]))
        priority = None
        while not priority or priority in in_use:
            priority = str(random.randint(*ALB_RULE_PRIORITY_RANGE))
        response_data = {
            'Priority': priority
        }
    send(event, context, SUCCESS, response_data, physical_resource_id)

def send(event, context, response_status, response_data, physical_resource_id, reason=None):
    response_url = event['ResponseURL']
    response_body = {
        'Status': response_status,
        'Reason': str(reason) if reason else 'See the details in CloudWatch Log Stream: ' + context.log_stream_name,
        'PhysicalResourceId': physical_resource_id,
        'StackId': event['StackId'],
        'RequestId': event['RequestId'],
        'LogicalResourceId': event['LogicalResourceId'],
        'Data': response_data,
    }
    json_response_body = json.dumps(response_body)
    headers = {
        'content-type': '',
        'content-length': str(len(json_response_body))
    }
    try:
        requests.put(
            response_url,
            data=json_response_body,
            headers=headers
        )
    except Exception as e:
        print("send(..) failed executing requests.put(..): " + str(e))
According to your question, you need to create multiple stacks from the same template. For that reason I suggest placing the custom resource Lambda in a template that is deployed only once, and having the other template import its ServiceToken.
allocate_alb_rule_priority_custom_resouce.yml:
Resources:
  AllocateAlbRulePriorityCustomResourceLambdaRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Sid: ''
            Effect: Allow
            Principal:
              Service: lambda.amazonaws.com
            Action: sts:AssumeRole
      Path: /
      Policies:
        - PolicyName: DescribeRulesPolicy
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Effect: Allow
                Action:
                  - elasticloadbalancing:DescribeRules
                Resource: "*"
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
  AllocateAlbRulePriorityCustomResourceLambdaFunction:
    Type: AWS::Lambda::Function
    Properties:
      Handler: allocate_alb_rule_priority.lambda_handler
      Role: !GetAtt AllocateAlbRulePriorityCustomResourceLambdaRole.Arn
      Code: allocate_alb_rule_priority.py
      Runtime: python3.6
      Timeout: '30'
      Environment:
        Variables:
          ListenerArn: !Ref LoadBalancerListener
Outputs:
  AllocateAlbRulePriorityCustomResourceLambdaArn:
    Value: !GetAtt AllocateAlbRulePriorityCustomResourceLambdaFunction.Arn
    Export:
      Name: AllocateAlbRulePriorityCustomResourceLambdaArn
Notice that we pass a ListenerArn to the Lambda function: it describes the listener's existing rules so that a newly allocated priority doesn't collide with one already in use.
Lastly, we can now use our new custom resource in the template that is meant to be deployed multiple times.
template_meant_to_be_deployed_multiple_times.yml:
AllocateAlbRulePriorityCustomResource:
  Type: Custom::AllocateAlbRulePriority
  Condition: AutoAllocateAlbPriority
  Properties:
    ServiceToken:
      Fn::ImportValue: AllocateAlbRulePriorityCustomResourceLambdaArn

ListenerRule:
  Type: AWS::ElasticLoadBalancingV2::ListenerRule
  Properties:
    Priority: !GetAtt AllocateAlbRulePriorityCustomResource.Priority
    [...]
These are snippets and may not work as-is, although they were taken from working code. I hope it gives you a general idea of how it can be achieved. Let me know if you need more help.
I am new to the Serverless Framework and I want to get an instance's status, so I used boto3's describe_instance_status(), but I keep getting an error that I am not authorized to perform this kind of operation, although I have administrator access to all AWS services. Please help. Do I need to change or add something to be recognized?
Here is my code:
import json
import logging
import sys

import boto3
from botocore.exceptions import ClientError

#setup simple logging for INFO
logger = logging.getLogger()
logger.setLevel(logging.INFO)

def instance_status(event, context):
    """Take an instance Id and return its status"""
    #print "ttot"
    body = {}
    status_code = 200
    client = boto3.client('ec2')
    response = client.describe_instance_status(InstanceIds=['i-070ad071'])
    return response
And here is my serverless.yml file:
service: ec2

provider:
  name: aws
  runtime: python2.7
  timeout: 30
  memorySize: 128
  stage: dev
  region: us-east-1
  iamRoleStatements:
    - Effect: "Allow"
      Action:
        - "ec2:DescribeInstanceStatus"
      Resource: "*"

functions:
  instance_status:
    handler: handler.instance_status
    description: Status ec2 instances
    events:
      - http:
          path: ''
          method: get
And here is the error message I am getting:
"errorType": "ClientError", "errorMessage": "An error occurred
(UnauthorizedOperation) when calling the DescribeInstanceStatus
operation: You are not authorized to perform this operation."
...i have administrator access to all aws services...
Take note that the Lambda function is NOT running under your user account. You're supposed to define its role and permissions in your YAML.
In the provider section of your serverless.yml, add the following:
iamRoleStatements:
  - Effect: Allow
    Action:
      - ec2:DescribeInstanceStatus
    Resource: <insert your resource here>
Reference: https://serverless.com/framework/docs/providers/aws/guide/iam/
You are not authorized to perform this operation
This means the caller has no permission to perform the client.describe_instance_status action.
There are a few ways to give your function the right permissions:
Use an IAM role: create an IAM role with the permissions your function requires, then assign that role to the Lambda function in its settings. Your Lambda will then automatically receive rotated temporary credentials to perform those actions.
Create an access key/secret key pair with the required permissions, configure them in your YAML file, and in your Lambda function set up boto3 to use that access/secret key, then perform the action.
Read more here: http://docs.aws.amazon.com/lambda/latest/dg/intro-permission-model.html
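To make the IAM-role option concrete: inside Lambda, a plain boto3.client('ec2') already uses the execution role's temporary credentials, so nothing extra is needed in code. The wrapper below is just a sketch (the helper name and the injectable ec2 parameter are mine) showing the call with the client made injectable for local testing:

```python
# Sketch: wrap describe_instance_status so a stub client can be injected
# when testing locally; in Lambda, the default boto3 client automatically
# picks up the execution role's credentials.
def get_instance_status(instance_id, ec2=None):
    """Return the instance status summaries for one instance."""
    if ec2 is None:
        import boto3  # in Lambda this client uses the execution role's credentials
        ec2 = boto3.client("ec2")
    response = ec2.describe_instance_status(InstanceIds=[instance_id])
    return response["InstanceStatuses"]
```

The handler above could then simply return get_instance_status('i-070ad071').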