I have an Elasticsearch domain running inside a VPC in Account A.
I want to deliver logs from Firehose in Account B to the Elasticsearch in Account A.
Is it possible?
When I try to create the delivery stream from the AWS CLI, I get the exception below:
$: /usr/local/bin/aws firehose create-delivery-stream --cli-input-json file://input.json --profile devops
An error occurred (InvalidArgumentException) when calling the CreateDeliveryStream operation: Verify that the IAM role has access to the ElasticSearch domain.
The same IAM role and the same input.json work when pointed at the Elasticsearch domain in Account B. I have Transit Gateway connectivity enabled between the AWS accounts, and I can telnet to the Elasticsearch domain in Account A from an EC2 instance in Account B.
Adding my complete Terraform code (I got the same exception from both the AWS CLI and Terraform):
https://gist.github.com/karthikeayan/a67e93b4937a7958716dfecaa6ff7767
It looks like you haven't granted sufficient permissions to the role that is used when creating the stream (from the CLI example provided, I'm guessing it's a role named 'devops'). At a minimum you will need firehose:CreateDeliveryStream.
I suggest adding the permissions below to your role:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "firehose:PutRecord",
        "firehose:CreateDeliveryStream",
        "firehose:UpdateDestination"
      ],
      "Resource": "*"
    }
  ]
}
https://forums.aws.amazon.com/message.jspa?messageID=943731
I have been informed on the AWS forum that this feature is currently not supported.
You can set up Kinesis Data Firehose and its dependencies, such as Amazon Simple Storage Service (Amazon S3) and Amazon CloudWatch, to stream across different accounts. Streaming data delivery works for publicly accessible OpenSearch Service clusters whether or not fine-grained access control (FGAC) is enabled.
https://aws.amazon.com/premiumsupport/knowledge-center/kinesis-firehose-cross-account-streaming/
I have an AWS OpenSearch cluster configured with an IAM master user role. I have an AWS Lambda function that I want to be able to query both OpenSearch and other AWS services, such as DynamoDB. I don't want to modify the OpenSearch master user role to be able to access other AWS services - it should have zero permissions.
My current solution is letting my Lambda call assumeRole to assume the master user role before querying OpenSearch. Is this the approved way to do it? Seems like it would be more efficient not to have to do the assume role step. And it has the downside that the Lambda then has full access to OpenSearch - I would prefer to give it more granular permissions, e.g. only es:ESHttpGet.
This AWS documentation https://docs.aws.amazon.com/opensearch-service/latest/developerguide/ac.html seems to imply that you can set a resource-based access policy on domain setup which grants permissions to specific users. But I tried creating a maximally permissive policy and I still can't access the domain except as the master role. Am I misunderstanding the docs?
The permissive access policy I tried to use:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "es:*",
      "Resource": "arn:aws:es:eu-west-1:REDACTED:domain/*/*"
    }
  ]
}
I'm implementing something like that at the moment and it's not quite finished, but I am using API Gateway and a Lambda authorizer function to allow basic authentication. You could try that. The policy I have is almost the same as yours, except that after domain I have the name of the domain, not a star. I also have VPCs for security, locked down to a CIDR range.
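For illustration, a sketch of what such a resource-based policy might look like with an explicit domain name instead of a wildcard; the domain name "my-domain" is a placeholder, and the account ID is redacted as in the question:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "es:*",
      "Resource": "arn:aws:es:eu-west-1:REDACTED:domain/my-domain/*"
    }
  ]
}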
I have two AWS accounts (e.g. Account A and Account B). In Account A I have created a user and attached a policy (customer managed) which has the following permission:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": "cloudfront:CreateInvalidation",
      "Resource": "arn:aws:cloudfront::{ACCOUNT-B_ACCOUNT-ID-WITHOUT-HYPHENS}:distribution/{ACCOUNT_B-CF-DISTRIBUTION-ID}"
    }
  ]
}
From the AWS CLI (which is configured with Account A's user) I'm trying to create an invalidation for the above-mentioned CloudFront distribution ID in Account B. I'm getting access denied.
Do we need any other permission to create an invalidation for a CloudFront distribution in a different AWS account?
I have been able to successfully perform a cross-account CloudFront invalidation from my CodePipeline account (TOOLS) to my application (APP) accounts. I achieve this with a Lambda Action that is executed as follows:
CodePipeline starts a Deploy stage I call Invalidate
The Stage runs a Lambda function with the following UserParameters:
APP account roleArn to assume when creating the Invalidation.
The ID of the CloudFront distribution in the APP account.
The paths to be invalidated.
The Lambda function is configured to run with a role in the TOOLS account that can sts:AssumeRole of a role from the APP account.
The APP account role permits being assumed by the TOOLS account and permits the creation of Invalidations ("cloudfront:GetDistribution","cloudfront:CreateInvalidation").
The Lambda function executes and assumes the APP account role. Using the credentials provided by the APP account role, the invalidation is started.
When the invalidation has started, the Lambda function puts a successful Job result.
It's difficult and unfortunate that cross-account invalidations are not directly supported. But it does work!
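For reference, the same assume-role-then-invalidate flow can be sketched with the AWS CLI; the role ARN, distribution ID, and paths below are placeholders, not values from the pipeline above:
# Assume the APP account role using TOOLS account credentials (role ARN is a placeholder).
aws sts assume-role \
    --role-arn arn:aws:iam::APP_ACCOUNT_ID:role/CrossAccountInvalidationRole \
    --role-session-name cross-account-invalidation
# Export the returned AccessKeyId, SecretAccessKey and SessionToken as
# AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_SESSION_TOKEN, then:
aws cloudfront create-invalidation \
    --distribution-id APP_DISTRIBUTION_ID \
    --paths "/*"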
Cross-account access is only available for a few AWS services, such as Amazon Simple Storage Service (S3) buckets, S3 Glacier vaults, Amazon Simple Notification Service (SNS) topics, and Amazon Simple Queue Service (SQS) queues.
Refer: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_terms-and-concepts.html (Role for cross-account access section)
I'm looking for a bucket policy where I have a specific principal for a complete account, 'arn:aws:iam::000000000000:root', which is allowed to write to my bucket.
I now want to implement a condition that will only give Firehose, as a service, the ability to write to my bucket.
My current ideas were:
{
  "Sid": "AllowWriteViaFirehose",
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::000000000000:root"
  },
  "Action": "s3:Put*",
  "Resource": "arn:aws:s3:::my-bucket/*",
  "Condition": {
    #*#
  }
}
where #*# should be the specific condition.
I already tried some things like:
{"IpAddress": {"aws:SourceIp": "firehose.amazonaws.com"}}
I thought the requests would come from a firehose endpoint of AWS. But it seems not :-/
"Condition": {"StringLike": {"aws:PrincipalArn": "*Firehose*"}}
I thought this would work since the role that Firehose uses to write files should contain a session name with something like 'firehose' in it. But it didn't work.
Any idea how to get this working?
Thanks
Ben
Do not create a bucket policy.
Instead, assign the desired permission to an IAM Role and assign the role to your Kinesis Firehose.
See: Controlling Access with Amazon Kinesis Data Firehose - Amazon Kinesis Data Firehose
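As a rough sketch, the policy attached to the Firehose delivery role would include S3 permissions along the lines of the linked documentation; the bucket name below is a placeholder:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:AbortMultipartUpload",
        "s3:GetBucketLocation",
        "s3:GetObject",
        "s3:ListBucket",
        "s3:ListBucketMultipartUploads",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::my-bucket",
        "arn:aws:s3:::my-bucket/*"
      ]
    }
  ]
}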
This answer is for the situation where the destination S3 bucket is in a different account.
From AWS Developer Forums: Kinesis Firehose cross account write to S3, the method is:
Create cross-account roles in Account B and enable a trust relationship for Account A to assume Account B's role.
Enable a bucket policy in Account B to allow Account A to write records into Account B.
Map Account B's S3 bucket to the Firehose; I had to create the Firehose pointing to a temporary bucket and then use AWS CLI commands in Account A to update the S3 bucket.
CLI Command:
aws firehose update-destination --delivery-stream-name MyDeliveryStreamName --current-delivery-stream-version-id 1 --destination-id destinationId-000000000001 --extended-s3-destination-update file://MyFileName.json
MyFileName.json looks like the one below:
{
  "BucketARN": "arn:aws:s3:::MyBucketname",
  "Prefix": ""
}
I'm trying to add the AWS CloudWatch agent to see additional metrics, using this tutorial.
A brief review of what I did:
Create an IAM role and attach it to the EC2 instance (doc) (NOTE: I do not use Parameter Store, just direct communication between EC2 and CloudWatch)
Install the agent using the S3 link
Create the agent configuration file (docs)
Run the agent using the CLI (docs)
But it is still not working, and in the agent log I see errors like:
ec2tagger: Unable to initialize EC2 Instance Tags : +NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors
While googling I did not find much related to CloudWatch, only that in the IAM role's 'Trust Relationship' config, EC2 should be mentioned in the service section, and it is:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
Any ideas? Thanks!
In my case the instance had an IAM role attached, but the role was missing the ec2:DescribeTags permission. Adding that fixed the problem.
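A minimal statement granting just that permission might look like this (Describe actions such as ec2:DescribeTags do not support resource-level restrictions, so the resource is a wildcard):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ec2:DescribeTags",
      "Resource": "*"
    }
  ]
}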
"The first procedure creates the IAM role that you must attach to each Amazon EC2 instance that runs the CloudWatch agent. This role provides permissions for reading information from the instance and writing it to CloudWatch." in docs
Please attach the IAM role that you created to your EC2 instance first; that worked for me.
The CloudWatch agent process that runs on the EC2 instance needs to be able to describe the instance's tags. The permission required for that is ec2:DescribeTags.
Attaching an instance role with the managed policy arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy will resolve the problem.
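For example, assuming an instance role named MyInstanceRole (the role name is a placeholder), the managed policy can be attached with the AWS CLI:
aws iam attach-role-policy \
    --role-name MyInstanceRole \
    --policy-arn arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy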
Check to see if the CloudWatch Agent service is running (started)
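On Linux, one way to check is the agent's control script; the path below assumes the standard package install location:
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -m ec2 -a status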
I got the same issue and resolved it by using the command below to refresh the routes:
Import-Module C:\ProgramData\Amazon\EC2-Windows\Launch\Module\Ec2Launch.psm1; Add-Routes
Solved by running aws configure from inside the instance
I am exploring IAM. I want to give a user access to a single EC2 instance. I have created a policy for this as follows:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1392113879000",
      "Effect": "Allow",
      "Action": [
        "ec2:*"
      ],
      "Resource": [
        "arn:aws:ec2:us-east-1:account:instance/instance_id"
      ]
    }
  ]
}
But I am getting this error:
I have referred to this link
Any lead is appreciated.
The resource-level permissions for EC2 and RDS resources you are referring to are not yet available for all API actions, but AWS is gradually adding more; see this note from Amazon Resource Names for Amazon EC2:
Important: Currently, not all API actions support individual ARNs; we'll add support for additional API actions and ARNs for additional Amazon EC2 resources later. For information about which ARNs you can use with which Amazon EC2 API actions, as well as supported condition keys for each ARN, see Supported Resources and Conditions for Amazon EC2 API Actions.
You will find that all ec2:Describe* actions are indeed still absent from Supported Resources and Conditions for Amazon EC2 API Actions at the time of this writing, and these are the ones required for listing resources, e.g. in the AWS Management Console, which in turn triggers the errors you are seeing ("You are not authorized to describe ...").
See also Granting IAM Users Required Permissions for Amazon EC2 Resources for a concise summary of the above and details on the ARNs and Amazon EC2 condition keys that you can use in an IAM policy statement to grant users permission to create or modify particular Amazon EC2 resources - this page also mentions that AWS will add support for additional actions, ARNs, and condition keys in 2014.
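A common workaround while resource-level support is incomplete is to split the policy: allow the Describe* actions on all resources, and scope the mutating actions you care about to the single instance ARN. A sketch of that approach, reusing the placeholder account and instance ID from the question and limiting the instance-scoped statement to a few actions that do support resource-level permissions:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DescribeAll",
      "Effect": "Allow",
      "Action": "ec2:Describe*",
      "Resource": "*"
    },
    {
      "Sid": "SingleInstanceActions",
      "Effect": "Allow",
      "Action": [
        "ec2:StartInstances",
        "ec2:StopInstances",
        "ec2:RebootInstances",
        "ec2:TerminateInstances"
      ],
      "Resource": "arn:aws:ec2:us-east-1:account:instance/instance_id"
    }
  ]
}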