How to define Resource Policy for CloudWatch Logs with CloudFormation? - amazon-web-services

When I configure DNS Query Logging with Route53, I can create a resource policy for Route53 to log to my log group. I can confirm this policy with the CLI command aws logs describe-resource-policies and see something like:
{
    "resourcePolicies": [
        {
            "policyName": "test-logging-policy",
            "policyDocument": "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Effect\":\"Allow\",\"Principal\":{\"Service\":\"route53.amazonaws.com\"},\"Action\":[\"logs:CreateLogStream\",\"logs:PutLogEvents\"],\"Resource\":\"arn:aws:logs:us-east-1:xxxxxx:log-group:test-route53*\"}]}",
            "lastUpdatedTime": 1520865407511
        }
    ]
}
The CLI also has a put-resource-policy command to create one of these, and I see that Terraform has an aws_cloudwatch_log_resource_policy resource which does the same.
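For reference, the same call is also available in the SDKs; a minimal boto3 sketch of it (the account ID is a placeholder, the policy is the one shown above):
import json
import boto3

logs = boto3.client("logs")

# Same statement as the policy shown above: let Route53 write query logs
policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "route53.amazonaws.com"},
        "Action": ["logs:CreateLogStream", "logs:PutLogEvents"],
        "Resource": "arn:aws:logs:us-east-1:123456789012:log-group:test-route53*"
    }]
}

logs.put_resource_policy(
    policyName="test-logging-policy",
    policyDocument=json.dumps(policy_document)
)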
So the question: how do I do this with CloudFormation?

You can't use the CloudWatch console to create or edit a resource policy. You must use the CloudWatch API, one of the AWS SDKs, or the AWS CLI.
There is no CloudFormation support for creating a resource policy right now, but you can create a Lambda-backed custom resource to do this.
https://gist.github.com/sudharsans/cf9c52d7c78a81818a4a47872982bd76
CloudFormation Custom resource:
AddResourcePolicy:
  Type: Custom::AddResourcePolicy
  Version: '1.0'
  Properties:
    ServiceToken: arn:aws:lambda:us-east-1:872673965194:function:test-lambda-deploy-Lambda-15R963QKCI80A
    CloudWatchLogsLogGroupArn: !GetAtt LogGroup.Arn
    PolicyName: "testpolicy"
Lambda (cleaned up; the policy document string and error handling are elided here, see the gist for the full version):
import boto3
import cfnresponse

client = boto3.client("logs")

def PutPolicy(arn, policyname):
    # policyDocument is the JSON policy string (as shown in the question),
    # scoped to the log group ARN passed in by the custom resource
    response = client.put_resource_policy(
        policyName=policyname,
        policyDocument="....",
    )
    return

def DeletePolicy(policyname):
    client.delete_resource_policy(policyName=policyname)

def handler(event, context):
    properties = event["ResourceProperties"]
    PolicyName = properties["PolicyName"]
    CloudWatchLogsLogGroupArn = properties["CloudWatchLogsLogGroupArn"]
    responseData = {}

    if event["RequestType"] == "Delete":
        DeletePolicy(PolicyName)
    if event["RequestType"] == "Create":
        PutPolicy(CloudWatchLogsLogGroupArn, PolicyName)

    responseData["Data"] = "SUCCESS"
    status = cfnresponse.SUCCESS
    cfnresponse.send(event, context, status, responseData)

Four years later, this still doesn't seem to work through CloudFormation, although there is apparently support for this included now.
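If that newer support is the AWS::Logs::ResourcePolicy resource type, a minimal sketch of using it from the CDK in Python via the generic CfnResource escape hatch might look like the following (stack and resource names are placeholders; the policy is the one from the question):
import json
import aws_cdk as cdk

class LogsResourcePolicyStack(cdk.Stack):
    def __init__(self, scope, construct_id, **kwargs):
        super().__init__(scope, construct_id, **kwargs)

        # Same statement as the policy shown in the question
        policy_document = {
            "Version": "2012-10-17",
            "Statement": [{
                "Effect": "Allow",
                "Principal": {"Service": "route53.amazonaws.com"},
                "Action": ["logs:CreateLogStream", "logs:PutLogEvents"],
                "Resource": "arn:aws:logs:us-east-1:123456789012:log-group:test-route53*"
            }]
        }

        # Generic escape hatch, so this works even without a dedicated L2 construct
        cdk.CfnResource(
            self, "AddResourcePolicy",
            type="AWS::Logs::ResourcePolicy",
            properties={
                "PolicyName": "test-logging-policy",
                "PolicyDocument": json.dumps(policy_document),
            },
        )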


AWS CDK look up ARNs from lambda

I am quite new to AWS and have a question that may be easy to answer.
(I am using localstack to develop locally, if this makes any difference.)
In a Lambda, I have the following code, which should publish a message to an AWS SNS topic.
import json
import logging
import boto3

def handler(event, context):
    logger = logging.getLogger(__name__)
    logger.setLevel(logging.INFO)
    logger.info("confirmed user!")
    notification = "A test"
    client = boto3.client('sns')
    response = client.publish(
        TargetArn="arn:aws:sns:us-east-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
        Message=json.dumps({'default': notification}),
        MessageStructure='json'
    )
    return {
        'statusCode': 200,
        'body': json.dumps(response)
    }
For now I "hardcode" the ARN of the SNS topic, which is output to the console when deploying (with cdklocal deploy).
I am wondering if there is any convenient way to look up the ARN of an AWS resource.
I have seen that there is the
cdk.Fn.getAtt(logicalId, 'Arn').toString();
function, but I don't know the logicalId of the SNS topic before deployment. So, how can I look up ARNs at runtime? What is best practice?
(It's quite an annoying task keeping track of all the ARNs if I just hardcode them as strings, and it definitely seems wrong to me.)
You can use the !GetAtt function in your CloudFormation template to retrieve your SNS topic ARN and pass it to your Lambda.
Resources:
  MyTopic:
    Type: AWS::SNS::Topic
    Properties:
      {...}
  MyLambda:
    Type: AWS::Lambda::Function
    Properties:
      Environment:
        Variables:
          SNS_TOPIC_ARN: !GetAtt MyTopic.Arn
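Since the question uses the CDK, the same wiring can be expressed there by passing the resolved topic ARN into the function's environment; a minimal sketch in Python (construct names, runtime, and asset path are placeholders):
from constructs import Construct
from aws_cdk import Stack, aws_lambda as _lambda, aws_sns as sns

class MyStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        topic = sns.Topic(self, "MyTopic")

        fn = _lambda.Function(
            self, "MyLambda",
            runtime=_lambda.Runtime.PYTHON_3_9,
            handler="handler.handler",
            code=_lambda.Code.from_asset("lambda"),
            # The resolved ARN is injected at deploy time, so the handler
            # never needs a hardcoded topic ARN.
            environment={"SNS_TOPIC_ARN": topic.topic_arn},
        )
        topic.grant_publish(fn)

The handler then reads os.environ["SNS_TOPIC_ARN"] instead of hardcoding the ARN string.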

How to create an 'AWS::SSM::Document' with DocumentType of Package using CloudFormation

This AWS CloudFormation document suggests that it is possible to administer an 'AWS::SSM::Document' resource with a DocumentType of 'Package'. However the 'Content' required to achieve this remains a mystery.
Is it possible to create a Document of type 'Package' via CloudFormation, and if so, what is the equivalent of this valid CLI command written as a CloudFormation template (preferably with YAML formatting)?
aws ssm create-document --name my-package --content "file://manifest.json" --attachments Key="SourceUrl",Values="s3://my-s3-bucket" --document-type Package
Failed Attempt. The content used is an inline version of the manifest.json which was provided when using the CLI option. There doesn't seem to be an option to specify an AttachmentSource when using CloudFormation:
AWSTemplateFormatVersion: 2010-09-09
Resources:
  Document:
    Type: AWS::SSM::Document
    Properties:
      Name: 'my-package'
      Content: !Sub |
        {
          "schemaVersion": "2.0",
          "version": "Auto-Generated-1579701261956",
          "packages": {
            "windows": {
              "_any": {
                "x86_64": {
                  "file": "my-file.zip"
                }
              }
            }
          },
          "files": {
            "my-file.zip": {
              "checksums": {
                "sha256": "sha...."
              }
            }
          }
        }
      DocumentType: Package
CloudFormation Error
AttachmentSource not provided in the input request. (Service: AmazonSSM; Status Code: 400; Error Code: InvalidParameterValueException;
Yes, this is possible! I've successfully created a resource with DocumentType: Package and the package shows up in the SSM console under Distributor Packages after the stack succeeds.
Your YAML is almost there, but you need to also include the Attachments property that is now available.
Here is a working example:
AWSTemplateFormatVersion: "2010-09-09"
Description: Sample to create a Package type Document
Parameters:
  S3BucketName:
    Type: "String"
    Default: "my-sample-bucket-for-package-files"
    Description: "The name of the S3 bucket."
Resources:
  CrowdStrikePackage:
    Type: AWS::SSM::Document
    Properties:
      Attachments:
        - Key: "SourceUrl"
          Values:
            - !Sub "s3://${S3BucketName}"
      Content:
        !Sub |
          {
            "schemaVersion": "2.0",
            "version": "1.0",
            "packages": {
              "windows": {
                "_any": {
                  "_any": {
                    "file": "YourZipFileName.zip"
                  }
                }
              }
            },
            "files": {
              "YourZipFileName.zip": {
                "checksums": {
                  "sha256": "7981B430E8E7C45FA1404FE6FDAB8C3A21BBCF60E8860E5668395FC427CE7070"
                }
              }
            }
          }
      DocumentFormat: "JSON"
      DocumentType: "Package"
      Name: "YourPackageNameGoesHere"
      TargetType: "/AWS::EC2::Instance"
Note: for the Attachments property you must use the SourceUrl key when using DocumentType: Package. The creation process will append a "/" to this S3 bucket URL and concatenate it with each file name listed in the manifest (the Content property) when it creates the package.
It seems there is no direct way to create an SSM Document with an attachment via CloudFormation (CFN). As a workaround you can use a Lambda-backed custom resource, where the Lambda calls the SDK to create the SSM Document and the CFN Custom Resource invokes that Lambda (a minimal sketch of the SDK call appears after the notes below).
Some notes on how to implement this solution:
How to invoke Lambda from CFN: Is it possible to trigger a lambda on creation from CloudFormation template
Sample of a Lambda sending response format (when using Custom Resource in CFN): https://github.com/stelligent/cloudformation-custom-resources
To deploy the Lambda following best practices and to easily upload the attachment and the Document content from your local machine, you should use sam deploy instead of a plain CFN create-stack.
You can pass information about the newly created resource from the Lambda back to CFN by adding the resource details to the Data JSON in the response the Lambda sends; CFN can then use it with !GetAtt CustomResrc.Attribute. You can find more detail here.
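A minimal sketch of the SDK call such a Lambda might make (bucket name, file name, and checksum are placeholders; boto3's ssm.create_document accepts an Attachments list analogous to the CLI flag):
import json
import boto3

ssm = boto3.client("ssm")

# The manifest.json content, inlined for the example
manifest = {
    "schemaVersion": "2.0",
    "version": "1.0",
    "packages": {"windows": {"_any": {"_any": {"file": "my-file.zip"}}}},
    "files": {"my-file.zip": {"checksums": {"sha256": "<sha256-of-my-file.zip>"}}}
}

ssm.create_document(
    Name="my-package",
    DocumentType="Package",
    Content=json.dumps(manifest),
    Attachments=[{"Key": "SourceUrl", "Values": ["s3://my-s3-bucket"]}]
)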
There are some drawbacks to this solution:
It adds complexity to the original solution, as you have to create resources for the Lambda execution (an S3 bucket to deploy the Lambda, a role for the Lambda execution with permission to call SSM, the SSM content file - or else a 'long' inline content). It won't be a one-call CFN create-stack anymore. However, you can put everything into the SAM template because, at the end of the day, it's just a CFN template.
When deleting the CFN stack, you have to handle RequestType == Delete in the Lambda to clean up your resources.
PS: If you don't have to work strictly on CFN, then you can try with Terraform: https://www.terraform.io/docs/providers/aws/r/ssm_document.html

Read only AWS CLI access to strictly CloudWatch billing metrics

I need to provide somebody with read-only AWS CLI access to our CloudWatch billing metrics ONLY. I'm not sure how to do this since CloudWatch doesn't have any specific resources that one can control access to. This means there are no ARNs to specify in an IAM policy, and as a result, any resource designation in the policy is "*". More info regarding CloudWatch ARN limitations can be found here. I looked into using namespaces, but I believe the "aws-portal" namespace is for the console. Any direction or ideas are greatly appreciated.
With the current CloudWatch ARN limitations the IAM policy would look something like this.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "cloudwatch:GetMetricData",
                "cloudwatch:GetMetricStatistics",
                "cloudwatch:ListMetrics"
            ],
            "Effect": "Allow",
            "Resource": "*"
        }
    ]
}
As you say, you will not be able to achieve this within CloudWatch. According to the docs:
CloudWatch doesn't have any specific resources for you to control access to... For example, you can't give a user access to CloudWatch data for only a specific set of EC2 instances or a specific load balancer. Permissions granted using IAM cover all the cloud resources you use or monitor with CloudWatch.
An alternative option might be to:
Use scheduled events on a Lambda function to periodically export the relevant billing metrics from CloudWatch to an S3 bucket. For example, using the Python SDK, the Lambda might look something like this:
import boto3
from datetime import datetime, timedelta

def lambda_handler(event, context):
    try:
        bucket_name = "so-billing-metrics"
        filename = '-'.join(['billing', datetime.now().strftime("%Y-%m-%d-%H")])
        region_name = "us-east-1"
        dimensions = {'Name': 'Currency', 'Value': 'USD'}
        metric_name = 'EstimatedCharges'
        namespace = 'AWS/Billing'
        start_time = datetime.now() - timedelta(hours=1)
        end_time = datetime.now()

        # Create CloudWatch client
        cloudwatch = boto3.client('cloudwatch', region_name=region_name)

        # Get billing metrics for the last hour
        metrics = cloudwatch.get_metric_statistics(
            Dimensions=[dimensions],
            MetricName=metric_name,
            Namespace=namespace,
            StartTime=start_time,
            EndTime=end_time,
            Period=60,
            Statistics=['Sum'])

        # Save data to temp file
        with open('/tmp/billingmetrics', 'w') as f:
            # Write header, then one line per datapoint
            f.write("Timestamp,Cost,Unit\n")
            for entry in metrics['Datapoints']:
                f.write(",".join([entry['Timestamp'].strftime('%Y-%m-%d %H:%M:%S'), str(entry['Sum']), entry['Unit']]) + "\n")

        # Upload temp file to S3
        s3 = boto3.client('s3')
        with open('/tmp/billingmetrics', 'rb') as data:
            s3.upload_fileobj(data, bucket_name, filename)
    except Exception as e:
        print(str(e))
        return 0
    return 1
Note: You will need to ensure that the Lambda function has the relevant permissions to write to S3 and read from CloudWatch.
Restrict the IAM User/Role to read only access to the S3 bucket.
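For that last step, a minimal sketch of attaching such a read-only S3 policy with boto3 (the user name is a placeholder; the bucket name matches the one used in the Lambda above):
import json
import boto3

iam = boto3.client("iam")

read_only_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": "arn:aws:s3:::so-billing-metrics"
        },
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::so-billing-metrics/*"
        }
    ]
}

iam.put_user_policy(
    UserName="billing-metrics-reader",
    PolicyName="so-billing-metrics-read-only",
    PolicyDocument=json.dumps(read_only_policy)
)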

ValidationException: Before you can proceed, you must enable a service-linked role to give Amazon ES permissions to access your VPC

I am trying to create a VPC-controlled Elasticsearch Service domain on AWS. The problem is that I keep getting the following error when I run the code below: 'ValidationException: Before you can proceed, you must enable a service-linked role to give Amazon ES permissions to access your VPC'.
const AWS = require('aws-sdk');
AWS.config.update({region: '<aws-datacenter>'});

const accessPolicies = {
    Statement: [{
        Effect: "Allow",
        Principal: {
            AWS: "*"
        },
        Action: "es:*",
        Resource: "arn:aws:es:<dc>:<accountid>:domain/<domain-name/*"
    }]
};

const params = {
    DomainName: '<domain>', /* required */
    AccessPolicies: JSON.stringify(accessPolicies),
    AdvancedOptions: {
        EBSEnabled: "true",
        VolumeType: "io1",
        VolumeSize: "100",
        Iops: "1000"
    },
    EBSOptions: {
        EBSEnabled: true,
        Iops: 1000,
        VolumeSize: 100,
        VolumeType: "io1"
    },
    ElasticsearchClusterConfig: {
        DedicatedMasterCount: 3,
        DedicatedMasterEnabled: true,
        DedicatedMasterType: "m4.large.elasticsearch",
        InstanceCount: 2,
        InstanceType: 'm4.xlarge.elasticsearch',
        ZoneAwarenessEnabled: true
    },
    ElasticsearchVersion: '5.5',
    SnapshotOptions: {
        AutomatedSnapshotStartHour: 3
    },
    VPCOptions: {
        SubnetIds: [
            '<redacted>',
            '<redacted>'
        ],
        SecurityGroupIds: [
            '<redacted>'
        ]
    }
};

const es = new AWS.ES();
es.createElasticsearchDomain(params, function (err, data) {
    if (err) {
        console.log(err, err.stack); // an error occurred
    } else {
        console.log(JSON.stringify(data, null, 4)); // successful response
    }
});
The problem is I get this error: ValidationException: Before you can proceed, you must enable a service-linked role to give Amazon ES permissions to access your VPC. I cannot seem to figure out how to create this service-linked role for the Elasticsearch service. In the aws.amazon.com IAM console I cannot select that service for a role. I believe it is supposed to be created automatically.
Has anybody run into this, or does anyone know how to fix it?
The service-linked role can be created using the AWS CLI.
aws iam create-service-linked-role --aws-service-name opensearchservice.amazonaws.com
Previous answer: before the service was renamed, you would do the following:
aws iam create-service-linked-role --aws-service-name es.amazonaws.com
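The same role can also be created from the SDK; a minimal boto3 sketch (pick the service name that matches your setup):
import boto3

iam = boto3.client("iam")

# Use "opensearchservice.amazonaws.com" instead for the renamed service
iam.create_service_linked_role(AWSServiceName="es.amazonaws.com")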
You can now create a service-linked role in a CloudFormation template, similar to the Terraform answer by @htaccess. See the documentation for the CloudFormation syntax for Service-Linked Roles for more details.
YourRoleNameHere:
  Type: 'AWS::IAM::ServiceLinkedRole'
  Properties:
    AWSServiceName: es.amazonaws.com
    Description: 'Role for ES to access resources in my VPC'
For Terraform users who hit this error, you can use the aws_iam_service_linked_role resource to create a service-linked role for the ES service:
resource "aws_iam_service_linked_role" "es" {
aws_service_name = "es.amazonaws.com"
description = "Allows Amazon ES to manage AWS resources for a domain on your behalf."
}
This resource was added in Release 1.15.0 (April 18, 2018) of the AWS Provider.
Creating an Elasticsearch domain with VPC access using the aws-sdk/CloudFormation is currently not supported. The Elasticsearch service requires a special service-linked role to create the network interfaces in the specified VPC. This is currently possible using the console / CLI (see Oscar Barrett's answer).
However, there is a workaround to get this working, and it is described as follows:
Create a test Elasticsearch domain with VPC access using the console.
This will create a service-linked role named AWSServiceRoleForAmazonElasticsearchService. [Note: you cannot create a role with this specific name manually or through the console.]
Once this role is created, use the aws-sdk or CloudFormation to create the Elasticsearch domain with VPC access.
You can delete the test Elasticsearch domain later.
Update: The more correct way to create the service role is described in Oscar Barrett's answer. I was thinking of deleting my answer, but the other facts about the actual issue are still relevant, so I am keeping it here.
Do it yourself in CDK:
const serviceLinkedRole = new cdk.CfnResource(this, "es-service-linked-role", {
    type: "AWS::IAM::ServiceLinkedRole",
    properties: {
        AWSServiceName: "es.amazonaws.com",
        Description: "Role for ES to access resources in my VPC"
    }
});

const esDomain = new es.CfnDomain(this, "es", { ... });
esDomain.node.addDependency(serviceLinkedRole);

use serverless to get instance's status

I am new to the Serverless Framework and I want to get an instance's status, so I used boto3's describe_instance_status(), but I keep getting an error that I am not authorized to perform this kind of operation, although I have administrator access to all AWS services. Please help: do I need to change or add something to be recognized?
Here is my code:
import json
import boto3
import logging
import sys

# setup simple logging for INFO
logger = logging.getLogger()
logger.setLevel(logging.INFO)

from botocore.exceptions import ClientError

def instance_status(event, context):
    """Take an instance Id and return its status"""
    # print "ttot"
    body = {}
    status_code = 200
    client = boto3.client('ec2')
    response = client.describe_instance_status(InstanceIds=['i-070ad071'])
    return response
And here is my serverless.yml file:
service: ec2

provider:
  name: aws
  runtime: python2.7
  timeout: 30
  memorySize: 128
  stage: dev
  region: us-east-1
  iamRoleStatements:
    - Effect: "Allow"
      Action:
        - "ec2:DescribeInstanceStatus"
      Resource: "*"

functions:
  instance_status:
    handler: handler.instance_status
    description: Status ec2 instances
    events:
      - http:
          path: ''
          method: get
And here is the error message I am getting:
"errorType": "ClientError",
"errorMessage": "An error occurred (UnauthorizedOperation) when calling the DescribeInstanceStatus operation: You are not authorized to perform this operation."
...i have administrator access to all aws services...
Take note that the Lambda function is NOT running under your user account. You're supposed to define its role and permissions in your YAML.
In the provider section in your serverless.yaml, add the following:
iamRoleStatements:
  - Effect: Allow
    Action:
      - ec2:DescribeInstanceStatus
    Resource: <insert your resource here>
Reference: https://serverless.com/framework/docs/providers/aws/guide/iam/
You are not authorized to perform this operation
This means you do not have permission to perform the client.describe_instance_status action.
There are a couple of ways to give your function the right permissions:
Use an IAM role: create an IAM role with permissions according to your requirements, then assign this IAM role to the Lambda function in its settings. Your Lambda will then automatically get short-lived, rotated credentials to perform actions.
Use access keys: create an access key/secret key pair with permissions according to your requirements, set them in the YAML file, and in your Lambda function configure boto3 to use that access key/secret key, then perform the action (a minimal sketch of this is shown below the link).
Read more here: http://docs.aws.amazon.com/lambda/latest/dg/intro-permission-model.html
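For the second option, a minimal sketch of pointing boto3 at explicit credentials exposed through environment variables (the variable names are placeholders; the IAM role approach above is generally preferred):
import os
import boto3

# Keys injected via the serverless.yml environment section (placeholder names;
# AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY are reserved names inside Lambda)
ec2 = boto3.client(
    'ec2',
    region_name='us-east-1',
    aws_access_key_id=os.environ['MY_ACCESS_KEY_ID'],
    aws_secret_access_key=os.environ['MY_SECRET_ACCESS_KEY']
)

response = ec2.describe_instance_status(InstanceIds=['i-070ad071'])
print(response['InstanceStatuses'])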