I am trying to capture CloudWatch logs for my Firehose to find any errors when sending data to the S3 destination. I created a CloudFormation template with these logging details:
"CloudWatchLoggingOptions" : {
"Enabled" : "true",
"LogGroupName": "/aws/firehose/firehose-dev", -->firehose-dev is my firehosedeliverystream name
"LogStreamName" : "s3logs"
},
I have given the necessary IAM permissions to Firehose for creating the log group and log stream:
{
    "Sid": "",
    "Effect": "Allow",
    "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
    ],
    "Resource": [
        "arn:aws:logs:*:*:*"
    ]
}
When I deployed the template, neither the log group nor the log stream was created in CloudWatch Logs.
But when we give the same IAM permissions to an AWS::Lambda resource, it automatically creates a log group (i.e. /aws/lambda/mylambdaname) and sends its logs to that group. Why doesn't this scenario work for Firehose?
As a workaround, I am manually creating an AWS::Logs::LogGroup resource named /aws/firehose/firehose-dev and an AWS::Logs::LogStream resource named s3logs.
Firehose also creates the log group and log stream automatically if the delivery stream is configured through the console.
Can't Firehose create the log group and log stream automatically when configured through CloudFormation, the way Lambda does?
Thanks
Any help is appreciated
It's resource-dependent. Some resources create the log group for you, some don't. Sometimes the console creates them in the background. When you use CloudFormation, you usually have to do everything yourself.
In the case of Firehose, you can create the AWS::Logs::LogGroup and AWS::Logs::LogStream resources in CloudFormation. For example (YAML):
MyFirehoseLogGroup:
  Type: AWS::Logs::LogGroup
  Properties:
    RetentionInDays: 1

MyFirehoseLogStream:
  Type: AWS::Logs::LogStream
  Properties:
    LogGroupName: !Ref MyFirehoseLogGroup
Then, when you define your AWS::KinesisFirehose::DeliveryStream, you can reference them:
CloudWatchLoggingOptions:
  Enabled: true
  LogGroupName: !Ref MyFirehoseLogGroup
  LogStreamName: !Ref MyFirehoseLogStream
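For context, CloudWatchLoggingOptions sits inside the destination configuration of the delivery stream. A minimal sketch, assuming an S3 destination (the bucket ARN and the FirehoseRole resource are placeholders, not part of the original template):

MyDeliveryStream:
  Type: AWS::KinesisFirehose::DeliveryStream
  Properties:
    ExtendedS3DestinationConfiguration:
      BucketARN: arn:aws:s3:::my-destination-bucket  # placeholder bucket
      RoleARN: !GetAtt FirehoseRole.Arn              # placeholder IAM role resource
      CloudWatchLoggingOptions:
        Enabled: true
        LogGroupName: !Ref MyFirehoseLogGroup
        LogStreamName: !Ref MyFirehoseLogStream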
I'm trying to create a queue and a subscription to it from an (existing) SNS topic, with all resources in the same account. I know that, in order for this to work, the queue needs a QueuePolicy that allows SNS to SendMessage to the queue.
However, I've found that the QueuePolicy I've created via CloudFormation does not appear to be respected: messages are not delivered to the queue, and CloudWatch logs from the topic report that delivery failed because permission was denied. If I re-apply that same policy after creation, however, it takes effect and messages are delivered.
Here's what I tried first:
$ cat template.yaml
---
AWSTemplateFormatVersion: "2010-09-09"
Description: ...
Parameters:
  TopicParameter:
    Type: String
Resources:
  Queue:
    Type: AWS::SQS::Queue
  Subscription:
    Type: AWS::SNS::Subscription
    DependsOn: QueuePolicy
    Properties:
      Endpoint:
        Fn::GetAtt:
          - "Queue"
          - "Arn"
      Protocol: "sqs"
      RawMessageDelivery: "true"
      TopicArn: !Ref TopicParameter
  QueuePolicy:
    Type: AWS::SQS::QueuePolicy
    Properties:
      PolicyDocument:
        Version: 2012-10-17
        Statement:
          - Sid: '1'
            Effect: Allow
            Principal: "*"
            Action: "SQS:SendMessage"
            Resource: !Ref Queue
            Condition:
              ArnEquals:
                aws:SourceArn: !Ref TopicParameter
      Queues:
        - !Ref Queue
Outputs:
  QueueArn:
    Value:
      Fn::GetAtt:
        - "Queue"
        - "Arn"
    Export:
      Name: "QueueArn"
$ aws cloudformation create-stack --stack-name my-test-stack --template-body file://template.yaml --parameters ParameterKey=TopicParameter,ParameterValue=<topicArn>
{
    "StackId": "<stackId>"
}
# ...wait...
$ aws cloudformation describe-stacks --stack-name my-test-stack --query "Stacks[0] | Outputs[0] | OutputValue"
"<queueArn>"
# Do some trivial substitution to get the QueueUrl - it's *probably* possible via the CLI, but I don't think you need me to prove that I can do it
$ aws sqs get-queue-attributes --queue-url <queueUrl> --attribute-names ApproximateNumberOfMessages --query "Attributes.ApproximateNumberOfMessages"
"0"
# The above is consistently true, even if I wait and retry after several minutes. I've confirmed that messages *are* being published from the topic via other subscriptions
$ aws sqs get-queue-attributes --queue-url <queueUrl> --attribute-names Policy --query "Attributes.Policy"
"{\"Version\":\"2012-10-17\",\"Statement\":[{\"Sid\":\"1\",\"Effect\":\"Allow\",\"Principal\":\"*\",\"Action\":\"SQS:SendMessage\",\"Resource\":\"<queueUrl>\",\"Condition\":{\"ArnEquals\":{\"aws:SourceArn\":\"<topicArn>\"}}}]}"
$ aws sqs get-queue-attributes --queue-url <queueUrl> --attribute-names Policy --query "Attributes.Policy" | perl -pe 's/^.(.*?).$/$1/' | perl -pe 's/\\"/"/g' | python -m json.tool
{
    "Statement": [
        {
            "Action": "SQS:SendMessage",
            "Condition": {
                "ArnEquals": {
                    "aws:SourceArn": "<topicArn>"
                }
            },
            "Effect": "Allow",
            "Principal": "*",
            "Resource": "<queueUrl>",
            "Sid": "1"
        }
    ],
    "Version": "2012-10-17"
}
At this point, everything looks correct. If I go to the AWS Console, I see a QueuePolicy on the queue that is exactly what I expect - but no messages.
If I re-apply the QueuePolicy, though...
$ aws sqs get-queue-attributes --queue-url <queueUrl> --attribute-names Policy --query "Attributes" > policyInFile
$ cat policyInFile
{
    "Policy": "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Sid\":\"1\",\"Effect\":\"Allow\",\"Principal\":\"*\",\"Action\":\"SQS:SendMessage\",\"Resource\":\"<queueUrl>\",\"Condition\":{\"ArnEquals\":{\"aws:SourceArn\":\"<topicArn>\"}}}]}"
}
$ aws sqs set-queue-attributes --queue-url <queueUrl> --attributes file://policyInFile
Then, a few seconds later, the queue starts receiving messages.
Even weirder, I can reproduce this same behaviour by doing the following:
- set up the stack
- go to the queue in the console
- confirm that the queue is not receiving messages
- hit "Edit" on the queue's Policy
- hit "Save" (that is, without changing anything in the policy)
- observe the queue receiving messages
How can I make the QueuePolicy in the CloudFormation stack take effect at the time of queue creation?
The issue was that I should have used the queue's ARN for the Resource, not the URL. I suspect that, when setting a QueuePolicy for a queue via the console or CLI (but not via CloudFormation), the resource field is overwritten with the ARN of the queue in question.
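For reference, here is what the corrected resource from the template above would look like; only the Resource line changes, using Fn::GetAtt to obtain the ARN instead of !Ref (which returns the queue URL):

QueuePolicy:
  Type: AWS::SQS::QueuePolicy
  Properties:
    PolicyDocument:
      Version: 2012-10-17
      Statement:
        - Sid: '1'
          Effect: Allow
          Principal: "*"
          Action: "SQS:SendMessage"
          Resource: !GetAtt Queue.Arn  # the ARN; !Ref returns the queue URL
          Condition:
            ArnEquals:
              aws:SourceArn: !Ref TopicParameter
    Queues:
      - !Ref Queue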
Access Denied for bucket: appdeploy-logbucket-1cca50r865s65.
Please check S3bucket permission (Service: AmazonElasticLoadBalancingV2; Status Code: 400; Error Code:
InvalidConfigurationRequest; Request ID: e5e2245f-2f9b-11e9-a3e9-2dcad78a31ec)
I want to store my ALB logs in an S3 bucket. I have added policies to the S3 bucket, but it says access denied. I have tried a lot of configurations, but it failed again and again and my stack rolled back. I used Troposphere to create the template.
Here is the bucket policy I tried, but it's not working:
BucketPolicy = t.add_resource(
    s3.BucketPolicy(
        "BucketPolicy",
        Bucket=Ref(LogBucket),
        PolicyDocument={
            "Id": "Policy1550067507528",
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Sid": "Stmt1550067500750",
                    "Action": [
                        "s3:PutObject",
                        "s3:PutBucketAcl",
                        "s3:PutBucketLogging",
                        "s3:PutBucketPolicy"
                    ],
                    "Effect": "Allow",
                    "Resource": Join("", [
                        "arn:aws:s3:::",
                        Ref(LogBucket),
                        "/AWSLogs/",
                        Ref("AWS::AccountId"),
                        "/*"]),
                    "Principal": {"AWS": "027434742980"},
                }
            ],
        },
    ))
Any help?
troposphere/stacker maintainer here. We have a stacker blueprint (which is a wrapper around a troposphere template) that we use at work for our logging bucket:
from troposphere import Sub
from troposphere import s3

from stacker.blueprints.base import Blueprint

from awacs.aws import (
    Statement, Allow, Policy, AWSPrincipal
)
from awacs.s3 import PutObject


class LoggingBucket(Blueprint):
    VARIABLES = {
        "ExpirationInDays": {
            "type": int,
            "description": "Number of days to keep logs around for",
        },
        # See the table here for account ids.
        # https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/enable-access-logs.html#attach-bucket-policy
        "AWSAccountId": {
            "type": str,
            "description": "The AWS account ID to allow access to putting "
                           "logs in this bucket.",
            "default": "797873946194"  # us-west-2
        },
    }

    def create_template(self):
        t = self.template
        variables = self.get_variables()

        bucket = t.add_resource(
            s3.Bucket(
                "Bucket",
                LifecycleConfiguration=s3.LifecycleConfiguration(
                    Rules=[
                        s3.LifecycleRule(
                            Status="Enabled",
                            ExpirationInDays=variables["ExpirationInDays"]
                        )
                    ]
                )
            )
        )

        # Give ELB access to PutObject in the bucket.
        t.add_resource(
            s3.BucketPolicy(
                "BucketPolicy",
                Bucket=bucket.Ref(),
                PolicyDocument=Policy(
                    Statement=[
                        Statement(
                            Effect=Allow,
                            Action=[PutObject],
                            Principal=AWSPrincipal(variables["AWSAccountId"]),
                            Resource=[Sub("arn:aws:s3:::${Bucket}/*")]
                        )
                    ]
                )
            )
        )

        self.add_output("BucketId", bucket.Ref())
        self.add_output("BucketArn", bucket.GetAtt("Arn"))
Hopefully that helps!
The principal is wrong in the CloudFormation template. You should use the proper principal AWS account ID for your region. Look up the correct value in this document:
https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-access-logs.html#access-logging-bucket-permissions
Also, you can narrow down your actions. If you just want to push ALB logs to S3, you only need:
Action: s3:PutObject
Here's a sample BucketPolicy in CloudFormation that works (you can easily translate it into the troposphere PolicyDocument element):
Resources:
  # Create an S3 logs bucket
  ALBLogsBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Sub "my-logs-${AWS::AccountId}"
      AccessControl: LogDeliveryWrite
      LifecycleConfiguration:
        Rules:
          - Id: ExpireLogs
            ExpirationInDays: 365
            Status: Enabled
      PublicAccessBlockConfiguration:
        BlockPublicAcls: true
        BlockPublicPolicy: true
        IgnorePublicAcls: true
        RestrictPublicBuckets: true
    DeletionPolicy: Retain

  # Grant access for the load balancer to write the logs.
  # For the magic number 127311923021, refer to
  # https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-access-logs.html#access-logging-bucket-permissions
  ALBLoggingBucketPolicy:
    Type: AWS::S3::BucketPolicy
    Properties:
      Bucket: !Ref ALBLogsBucket
      PolicyDocument:
        Statement:
          - Effect: Allow
            Principal:
              AWS: "127311923021"  # Elastic Load Balancing account ID for us-east-1
            Action: s3:PutObject
            Resource: !Sub "arn:aws:s3:::my-logs-${AWS::AccountId}/*"
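To actually turn the logging on, the bucket is then referenced from the load balancer's attributes. A sketch under assumptions (the subnet IDs are placeholders; the access_logs.s3.* keys are the attribute names ELBv2 expects):

  ApplicationLoadBalancer:
    Type: AWS::ElasticLoadBalancingV2::LoadBalancer
    DependsOn: ALBLoggingBucketPolicy  # the bucket policy must exist before logging is enabled
    Properties:
      Subnets:
        - subnet-aaaa1111  # placeholder
        - subnet-bbbb2222  # placeholder
      LoadBalancerAttributes:
        - Key: access_logs.s3.enabled
          Value: "true"
        - Key: access_logs.s3.bucket
          Value: !Ref ALBLogsBucket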
I am trying to build a CloudFormation script that sets up a Cognito user pool and configures it to use a custom email address for sending users their validation code in the signup process (i.e. FROM: noreply@mydomain.com).
I am getting this error when executing my AWS CloudFormation script:
"ResourceStatusReason": "Cognito is not allowed to use your email identity (Service: AWSCognitoIdentityProvider; Status Code: 400; Error Code: InvalidEmailRoleAccessPolicyException;
I have attached a policy allowing Cognito to use my SES email identity (e.g. noreply@mydomain.com). I manually set up and validated this email identity in SES before running the CloudFormation script.
Here is my CloudFormation configuration for the policy that allows Cognito to send emails on my behalf (e.g. from noreply@mydomain.com):
CognitoSESPolicy:
  Type: AWS::IAM::ManagedPolicy
  Description: "Allow Cognito to send email on behalf of the email identity (e.g. noreply@example.org)"
  Properties:
    PolicyDocument:
      Version: 2012-10-17
      Statement:
        - Sid: "ucstmnt0001"
          Effect: "Allow"
          Action:
            - "ses:SendEmail"
            - "ses:SendRawEmail"
          Resource: !FindInMap [ environment, !Ref "Environment", emailARN ]

SESRole:
  Type: AWS::IAM::Role
  Description: "An IAM Role to allow Cognito to send email on behalf of the email identity"
  Properties:
    RoleName: uc-cognito-ses-role
    ManagedPolicyArns:
      - Ref: CognitoSESPolicy
    AssumeRolePolicyDocument:
      Version: 2012-10-17
      Statement:
        - Effect: Allow
          Action:
            - sts:AssumeRole
          Principal:
            Service:
              - cognito-idp.amazonaws.com
  DependsOn: CognitoSESPolicy
I am not sure what I am doing wrong here...
Answering my own question for others' benefit: AWS SES has its own managed identity for email addresses, requiring a user to verify ownership of the address before it can be used by other AWS services. My solution was to manually set up the SES email identity using the AWS portal, verify the address, and then reference the ARN of that SES identity in my CloudFormation script. Maybe AWS will add a way to create SES identities via CloudFormation in the future, but at this time a manual step seems to be required for the initial setup.
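For anyone wiring this up: the verified identity's ARN goes into the user pool's EmailConfiguration. A minimal sketch (the pool name and identity ARN are placeholders for your own values):

UserPool:
  Type: AWS::Cognito::UserPool
  Properties:
    UserPoolName: my-user-pool  # placeholder
    EmailConfiguration:
      EmailSendingAccount: DEVELOPER
      SourceArn: arn:aws:ses:us-east-1:123456789012:identity/noreply@mydomain.com  # your verified SES identity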
Recently ran into this issue and still could not find a way to add the identity via CloudFormation. I was able to use aws ses put-identity-policy instead.
ses_policy=$(cat << EOM
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "cognito-idp.amazonaws.com"
      },
      "Action": [
        "ses:SendEmail",
        "ses:SendRawEmail"
      ],
      "Resource": "${email_arn}"
    }
  ]
}
EOM
)

aws ses put-identity-policy \
  --identity "${email_arn}" \
  --policy-name "${policy_name}" \
  --policy "${ses_policy}"
Instead of cat you can use read, but my script was already using set -o errexit and it wasn't worth changing just to be purist for no particular reason.
I'm trying to configure a dashboard with a basic widget to expose the CPUUtilization metric.
I cannot reference the previously created EC2 instance, since the !Ref function is not interpreted inside the JSON that describes the dashboard:
metrics": [
"AWS/EC2",
"CPUUtilization",
"InstanceId",
"!Ref Ec2Instance"
]
Any idea how to reference it by logical name?
You can use Fn::Join to combine the output of Intrinsic functions (like Ref) with strings. For example:
CloudWatchDashboardHOSTNAME:
  Type: "AWS::CloudWatch::Dashboard"
  DependsOn: Ec2InstanceHOSTNAME
  Properties:
    DashboardName: HOSTNAME
    DashboardBody: { "Fn::Join": [ "", ['{"widgets":[
        {
          "type":"metric",
          "properties":{
            "metrics":[
              ["AWS/EC2","CPUUtilization","InstanceId",
               "', { Ref: Ec2InstanceHOSTNAME }, '"]
            ],
            "title":"CPU Utilization",
            "period":60,
            "region":"us-east-1"
          }
        }]}' ] ] }
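As an aside, Fn::Sub can splice the Ref value straight into the JSON string, which avoids the quote-juggling of Fn::Join. A minimal sketch of the same dashboard body (an alternative, not from the original answer):

    DashboardBody: !Sub >
      {"widgets":[{"type":"metric","properties":{
      "metrics":[["AWS/EC2","CPUUtilization","InstanceId","${Ec2InstanceHOSTNAME}"]],
      "title":"CPU Utilization","period":60,"region":"us-east-1"}}]}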
Documentation:
Fn::Join - AWS CloudFormation
Ref - AWS CloudFormation
AWS::CloudWatch::Dashboard - AWS CloudFormation
Dashboard Body Structure and Syntax - Amazon CloudWatch
I have created an AWS Lambda that works well when I test it and when I create a cron job manually through a CloudWatch rule.
It reports metrics as invocations (not failed) and also logs details about each execution.
Then I decided to remove that manually created CloudWatch rule in order to create one with Ansible:
- name: Create lambda service.
  lambda:
    name: "{{ item.name }}"
    state: present
    zip_file: "{{ item.zip_file }}"
    runtime: 'python2.7'
    role: 'arn:aws:iam::12345678901:role/lambda_ecr_delete'
    handler: 'main.handler'
    region: 'eu-west-2'
    environment_variables: "{{ item.env_vars }}"
  with_items:
    - name: lamda_ecr_cleaner
      zip_file: assets/scripts/ecr-cleaner.zip
      env_vars:
        'DRYRUN': '0'
        'IMAGES_TO_KEEP': '20'
        'REGION': 'eu-west-2'
  register: new_lambda

- name: Schedule a cloudwatch event.
  cloudwatchevent_rule:
    name: ecr_delete
    schedule_expression: "rate(1 day)"
    description: Delete old images in ecr repo.
    targets:
      - id: ecr_delete
        arn: "{{ item.configuration.function_arn }}"
  with_items: "{{ new_lambda.results }}"
That creates almost exactly the same CloudWatch rule. The only difference I can see from the manually created one is in the targets: the Lambda version/alias is set to Default when created manually, while it is set to Version, with a corresponding version number, when created with Ansible.
The CloudWatch rule created with Ansible has only failed invocations.
Any idea why this is? I can't see any logs. Is there a way to set the version to Default with the cloudwatchevent_rule module in Ansible?
I've lost hours on this too, with the same error and the same confusion (why isn't there a log for failed invocations?). I'm going to share my "solution": it will solve the problem for some, and will help others debug and find the ultimate fix.
Note: Be careful, this could allow any AWS account to execute your Lambda functions.
Since you were able to invoke the function by creating the rule target manually, I assume you added the invoke permission to the Lambda for CloudWatch; however, it looks like the source account ID is different when the event is created via the CLI/API than when it is created from the AWS dashboard/console.
If you added the SourceAccount condition to the Lambda invoke permission for the principal "events.amazonaws.com" to prevent arbitrary AWS accounts from executing your Lambdas, just remove it (at your own risk!).
So, if your lambda policy looks like this:
{
    "Sid": "<sid>",
    "Effect": "Allow",
    "Principal": {
        "Service": "events.amazonaws.com"
    },
    "Action": "lambda:InvokeFunction",
    "Condition": {
        "StringEquals": {
            "AWS:SourceAccount": "<account-id>"
        }
    },
    "Resource": "arn:aws:lambda:<region>:<account-id>:function:<lambda-function>"
}
Remove the "Condition" field
{
    "Sid": "<sid>",
    "Effect": "Allow",
    "Principal": {
        "Service": "events.amazonaws.com"
    },
    "Action": "lambda:InvokeFunction",
    "Resource": "arn:aws:lambda:<region>:<account-id>:function:<lambda-function>"
}
And "maybe" it will work for you.
I think something weird it is happening with the cloudwatch event owner/creator data when the event is created by cli/api... maybe a bug? Not sure. I will keep working on it
To extend the answer referenced here: https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/CWE_Troubleshooting.html#LAMfunctionNotInvoked. Since you are creating the rule via the API, you need to add the invoke permission to the Lambda as mentioned above. You can do this without compromising security as follows:
Add the rule with the PutRule API call; it returns:
{
    "RuleArn": "string"
}
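For example, with the CLI (the rule name and schedule here are illustrative):

$ aws events put-rule \
    --name ecr-delete \
    --schedule-expression 'rate(1 day)'
{
    "RuleArn": "arn:aws:events:<region>:<account-id>:rule/ecr-delete"
}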
Use the returned RuleArn in a Lambda AddPermission call:
aws lambda add-permission \
    --function-name MyFunction \
    --statement-id MyId \
    --action 'lambda:InvokeFunction' \
    --principal events.amazonaws.com \
    --source-arn arn-from-PutRule-request
If you are looking for the reason your invocations are failing, see the other answers, UNLESS you're trying to implement AWS::Events::Rule and seeing failed invocations. In that case, the following answer may resolve the issue and remove the need to hunt for these non-existent logs:
Cloudwatch failedinvocation error no logs available