How do I add permissions in Amazon Keyspaces? - amazon-web-services

When I try to query AWS Keyspaces (managed Cassandra) from an AWS Lambda function, I get this error:
{
"errorType": "AggregateException",
"errorMessage": "One or more errors occurred. (All hosts tried for query failed (tried 11.11.111.11:9142: UnauthorizedException 'User arn:aws:iam::111111111111:user/user-for-keyspaces has no permissions.'; 11.11.111.11:9142: UnauthorizedException 'User arn:aws:iam::111111111111:user/user-for-keyspaces has no permissions.'))",
"stackTrace": [
"at lambda_method(Closure , Stream , Stream , LambdaContextInternal )"
],
"cause": {
"errorType": "NoHostAvailableException",
...
But in the AWS console for Keyspaces, I don't see anywhere to add permissions.
The user policy for user-for-keyspaces already has this attached:
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"cassandra:*"
],
"Effect": "Allow",
"Resource": "*"
}
]
}
How do I add permissions in AWS Keyspaces?

You should only require cassandra:* permissions, for example:
{
"Statement": [
{
"Sid": "keyspaces-full-access",
"Principal": "*",
"Action": [
"cassandra:*"
],
"Effect": "Allow",
"Resource": "*"
}
]
}
Additionally, Amazon Keyspaces populates the system.peers table in your account with an entry for each availability zone where a VPC endpoint is available. To look up and store available interface VPC endpoints in the system.peers table, Amazon Keyspaces requires that you grant the IAM entity used to connect to Amazon Keyspaces access permissions to query your VPC for the endpoint and network interface information.
{
"Version":"2012-10-17",
"Statement":[
{
"Sid":"ListVPCEndpoints",
"Effect":"Allow",
"Action":[
"ec2:DescribeNetworkInterfaces",
"ec2:DescribeVpcEndpoints"
],
"Resource":"*"
}
]
}
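If you find the console fiddly, a policy like the one above can also be attached from the CLI; this is just a sketch, using the user name from the error message and a hypothetical local file list-vpc-endpoints.json that contains the JSON above:
# list-vpc-endpoints.json is a placeholder file name containing the policy above.
aws iam put-user-policy \
    --user-name user-for-keyspaces \
    --policy-name ListVPCEndpoints \
    --policy-document file://list-vpc-endpoints.json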
Learn more about VPC endpoints for Amazon Keyspaces in the AWS documentation.

The problem actually had nothing to do with the user in the error message; it was the VPC endpoint I had created for Keyspaces.
The endpoint requires cassandra:* permissions to perform queries, e.g.
{
"Statement": [
{
"Sid": "keyspaces-full-access",
"Principal": "*",
"Action": [
"cassandra:*",
"keyspaces:*"
],
"Effect": "Allow",
"Resource": "*"
}
]
}
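If you manage the endpoint from the CLI rather than the console, the endpoint policy can be applied roughly like this; the endpoint ID and file name below are placeholders:
# vpce-0123456789abcdef0 and keyspaces-endpoint-policy.json are placeholders.
aws ec2 modify-vpc-endpoint \
    --vpc-endpoint-id vpce-0123456789abcdef0 \
    --policy-document file://keyspaces-endpoint-policy.json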

Related

AWS InvalidParameter when calling the ImportImage operation

I have .ova VMs stored in my S3 bucket, and I am trying to create AMIs from these OVAs.
I was going through this video to Import a VM as an Image Using VM Import/Export to Amazon EC2.
I have created an EC2 Instance which I will use to trigger the necessary CLI commands for Importing.
I have created an IAM Role and attached it to the EC2 Instance.
Please refer to the details of the Role:
Trust Policy
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "vmie.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
Inline Policy for Access to S3 and EC2
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": [
"ec2:CopySnapshot",
"s3:ListAccessPointsForObjectLambda",
"s3:GetAccessPoint",
"s3:PutAccountPublicAccessBlock",
"s3:ListAccessPoints",
"ec2:RegisterImage",
"s3:ListJobs",
"s3:PutStorageLensConfiguration",
"s3:ListMultiRegionAccessPoints",
"s3:ListStorageLensConfigurations",
"ec2:Describe*",
"s3:GetAccountPublicAccessBlock",
"ec2:ModifySnapshotAttribute",
"s3:ListAllMyBuckets",
"s3:PutAccessPointPublicAccessBlock",
"s3:CreateJob",
"ec2:ImportImage"
],
"Resource": "*"
},
{
"Sid": "VisualEditor1",
"Effect": "Allow",
"Action": "s3:*",
"Resource": "arn:aws:s3:::vms"
},
{
"Sid": "AllowStsDecode",
"Effect": "Allow",
"Action": "sts:DecodeAuthorizationMessage",
"Resource": "*"
}
]
}
Inline Policy for KMS Decrypt
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": "kms:Decrypt",
"Resource": "*"
}
]
}
Also, I have attached the AWSImportExportFullAccess managed policy to the Role.
I am using the following command to Import the VM to AMI:
aws ec2 import-image --description "MY_VM_Image" --disk-containers "file://configuration.json"
Here are the contents of configuration.json
[{
"Description": "Image",
"Format": "ova",
"UserBucket": {
"S3Bucket": "vm",
"S3Key": "xzt.ova"
}
}
]
But I am facing the following error:
An error occurred (InvalidParameter) when calling the ImportImage operation: The service role vmimport provided does not exist or does not have sufficient permissions
I tried to have a look at the Troubleshooting document. It states the following
This error can also occur if the user calling ImportImage has Decrypt permission but the vmimport role does not.
So, I have also disabled the default encryption at S3.
Still no luck.
What other permissions are needed to run the command successfully?
I was facing the same issue and it turned out to be an issue with the clock not being in sync with the NTP servers (it was around 6 minutes off). As soon as the time was synced, the aws ec2 import-image worked as expected.
Here is a link for the importance of Time Synchronization in Kerberos:
https://tldp.org/HOWTO/Kerberos-Infrastructure-HOWTO/time-sync.html#:~:text=If%20you%20allow%20your%20clocks,errors%20and%20refuse%20to%20function.
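For reference, this is roughly how the clock can be checked and corrected on Amazon Linux 2, assuming chrony is the time daemon (other distros may use ntpd or systemd-timesyncd instead):
# Show the current offset from the configured time sources
chronyc tracking

# Make sure chronyd is running, then step the clock immediately
sudo systemctl enable --now chronyd
sudo chronyc makestep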

Access S3 bucket from VPC

I'm running a NodeJS script and using the aws-sdk package to write files to an S3 bucket. This works fine when I run the script locally, but not from an ECS Fargate service; that's when I get Error: AccessDenied: Access Denied.
The service has the allowed VPC vpc-05dd973c0e64f7dbc. I've tried adding an Internet Gateway to this VPC, and also an endpoint (as seen in the attached image) - but nothing resolves the Access Denied error. Any ideas what I'm missing here?
SOLVED: the problem was that I misunderstood aws:sourceVpce. It requires the VPC endpoint ID, not the VPC ID.
(Screenshots attached: Endpoint, Internet Gateway)
Bucket policy:
{
"Version": "2008-10-17",
"Id": "PolicyForCloudFrontPrivateContent",
"Statement": [
{
"Sid": "1",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E3MKW5OAU5CHLI"
},
"Action": [
"s3:GetObject",
"s3:PutObject"
],
"Resource": "arn:aws:s3:::mywebsite.com/*"
},
{
"Sid": "Stmt1582486025157",
"Action": "s3:*",
"Effect": "Allow",
"Resource": "arn:aws:s3:::mywebsite.com/*",
"Principal": "*",
"Condition": {
"StringEquals": {
"aws:sourceVpce": "vpc-05dd973c0e64f7dbc"
}
}
}
]
}
Please add a bucket policy that allows access from the VPC endpoint.
Update your bucket policy with a condition that allows users to access the S3 bucket when the request comes from the VPC endpoint that you created. To allow those users to download objects, you can use a bucket policy similar to the following:
Note: For the value of aws:sourceVpce, enter the VPC endpoint ID of the endpoint that you created.
{
"Version": "2012-10-17",
"Id": "Policy1314555909999",
"Statement": [
{
"Sid": "<<Access-to-specific-VPConly>>",
"Principal": "*",
"Action": "s3:GetObject",
"Effect": "Allow",
"Resource": ["arn:aws:s3:::awsexamplebucket/*"],
"Condition": {
"StringEquals": {
"aws:sourceVpce": "vpce-1c2g3t4e"
}
}
}
]
}
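If you are unsure of the endpoint ID, it can be looked up per VPC; a sketch using the VPC ID from the question:
aws ec2 describe-vpc-endpoints \
    --filters Name=vpc-id,Values=vpc-05dd973c0e64f7dbc \
    --query "VpcEndpoints[].VpcEndpointId" \
    --output text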

AWS IAM Policy to Grant All service permission to specific instances

I have set up separate IAM users from the root account with various privilege levels, and I need to grant a particular IAM user access to all EC2 actions for 2 specific instances.
I used the AWS Policy Generator and got the policy below, but it doesn't work:
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "ec2:*",
"Effect": "Allow",
"Resource": [
"arn:aws:ec2:us-east-1:ACCOUNT_ID:instance/INSTANCE_ID",
"arn:aws:ec2:us-east-1:ACCOUNT_ID:instance/INSTANCE_ID"
]
}
]
}
How can I grant permission to the specific instances so the IAM user can only manage those instances, without access to any other instances or services?
You can achieve this via tags. As described in the AWS docs, you can try the policy below:
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"ec2:StartInstances",
"ec2:StopInstances",
"ec2:RebootInstances"
],
"Condition": {
"StringEquals": {
"ec2:ResourceTag/Owner": "Bob"
}
},
"Resource": [
"arn:aws:ec2:us-east-1:111122223333:instance/*"
],
"Effect": "Allow"
},
{
"Effect": "Allow",
"Action": "ec2:Describe*",
"Resource": "*"
}
]
}
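For the condition to match, the two instances must carry the corresponding tag; as a sketch (the instance IDs are placeholders, and Owner/Bob should be replaced with whatever tag key and value you reference in the policy condition):
# i-0aaa1111bbb2222cc and i-0bbb3333ccc4444dd are placeholder instance IDs.
aws ec2 create-tags \
    --resources i-0aaa1111bbb2222cc i-0bbb3333ccc4444dd \
    --tags Key=Owner,Value=Bob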

AWS CloudWatch Cross-Account Logging with EC2 Instance Profile

When I originally set up CloudWatch, I created an EC2 Instance Profile to automatically grant access to write to the account's own CloudWatch service. Now, I would like to consolidate the logs from several accounts into a central account.
I'd like to implement a simplified architecture that is based on Centralized Logging on AWS. However, these logs will feed an on-premise ELK stack, so I'm only trying to implement the components outlined in red. I would like to solve this without the use of Kinesis.
Either the CloudWatch Agent (CWAgent) doesn't support assuming a role or I can't wrap my mind around how to craft the EC2 Instance Profile to allow the CWAgent to assume a role in a different account.
Logging Target (AWS Account 111111111111)
IAM LogStreamerRole:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": [
"arn:aws:iam::999999999999:role/EC2CloudWatchLoggerRole"
]
},
"Action": "sts:AssumeRole",
"Condition": {}
}
]
}
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Action": [
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:PutLogEvents"
],
"Resource": [
"arn:aws:logs:*:*:*"
]
}
]
}
Logging Source (AWS Account 999999999999)
IAM Instance Profile Role:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "ec2.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Action": "sts:AssumeRole",
"Resource": "arn:aws:iam::111111111111:role/LogStreamerRole"
}
]
}
The CWAgent is producing the following error:
/opt/aws/amazon-cloudwatch-agent/logs/amazon-cloudwatch-agent.log:
2018-02-12T23:27:43Z E! CreateLogStream / CreateLogGroup with log group name Linux/var/log/messages stream name i-123456789abcdef has errors. Will retry the request: AccessDeniedException: User: arn:aws:sts::999999999999:assumed-role/EC2CloudWatchLoggerRole/i-123456789abcdef is not authorized to perform: logs:CreateLogStream on resource: arn:aws:logs:us-west-2:999999999999:log-group:Linux/var/log/messages:log-stream:i-123456789abcdef
status code: 400, request id: 53271811-1234-11e8-afe1-a3c56071215e
It is still trying to write to its own CloudWatch service, instead of to the central CloudWatch service.
From the logs, I see that the instance profile is used.
arn:aws:sts::999999999999:assumed-role/EC2CloudWatchLoggerRole/i-123456789abcdef
Just add the following to /etc/awslogs/awscli.conf to assume the LogStreamerRole role:
role_arn = arn:aws:iam::111111111111:role/LogStreamerRole
credential_source = Ec2InstanceMetadata
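For context, a complete /etc/awslogs/awscli.conf would then look roughly like this (a sketch; the region comes from the error message and the role ARN from the target account):
[plugins]
cwlogs = cwlogs

[default]
region = us-west-2
role_arn = arn:aws:iam::111111111111:role/LogStreamerRole
credential_source = Ec2InstanceMetadata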

AWS Firehose cross region/account policy

I am trying to create Firehose streams that can receive data from different regions in Account A, through AWS Lambda, and output into a redshift table in Account B. To do this I created an IAM role on Account A:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"Service": "firehose.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
I gave it the following permissions:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Action": [
"s3:AbortMultipartUpload",
"s3:GetBucketLocation",
"s3:GetObject",
"s3:ListBucket",
"s3:ListBucketMultipartUploads",
"s3:PutObject"
],
"Resource": [
"arn:aws:s3:::b-bucket/*",
"arn:aws:s3:::b-bucket"
]
},
{
"Effect": "Allow",
"Action": [
"firehose:*"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"redshift:*"
],
"Resource": "*"
}
]
}
On Account B I created a role with this trust policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"Service": "firehose.amazonaws.com"
},
"Action": "sts:AssumeRole",
"Condition": {
"StringEquals": {
"sts:ExternalId": "11111111111"
}
}
}
]
}
I gave that role the following access:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:*"
],
"Resource": [
"arn:aws:s3:::b-bucket",
"arn:aws:s3:::b-bucket/*",
"arn:aws:s3:::b-account-logs",
"arn:aws:s3:::b-account-logs/*"
]
},
{
"Effect": "Allow",
"Action": [
"firehose:*"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": "redshift:*",
"Resource": "arn:aws:redshift:us-east-1:cluster:account-b-cluster*"
}
]
}
I also edited the access policy on the S3 buckets to give access to my Account A role:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::11111111111:role/AccountAXAccountBPolicy"
},
"Action": "s3:*",
"Resource": ["arn:aws:s3:::b-bucket","arn:aws:s3:::b-bucket/*"]
}
]
}
However, none of this works. When I try to create the stream in Account A, it does not list the buckets in Account B or the Redshift cluster. Is there any way to make this work?
John's answer is semi-correct. I would recommend that the account owner of the Redshift cluster create the Firehose stream, since creating it through the CLI requires you to supply the database user name and password. Having the cluster owner create the stream and share IAM role permissions on it is safer for security and in case of credential changes. Additionally, you cannot create a stream that accesses a database outside of its region, so have the delivery application target the correct stream and region.
Read on to below to see how to create the cross account stream.
In my case both accounts are accessible to me and to lower the amount of changes and ease of monitoring I created the stream on Account A side.
The above permissions are right; however, you cannot create a Firehose stream from Account A to Account B through the AWS Console. You need to do it through the AWS CLI:
aws firehose create-delivery-stream \
    --delivery-stream-name testFirehoseStreamToRedshift \
    --redshift-destination-configuration 'RoleARN="arn:aws:iam::11111111111:role/AccountAXAccountBRole",ClusterJDBCURL="jdbc:redshift://<cluster-url>:<cluster-port>/<>",CopyCommand={DataTableName="<schema_name>.x_test",DataTableColumns="ID1,STRING_DATA1",CopyOptions="csv"},Username="<Cluster_User_name>",Password="<Cluster_Password>",S3Configuration={RoleARN="arn:aws:iam::11111111111:role/AccountAXAccountBRole",BucketARN="arn:aws:s3:::b-bucket",Prefix="test/",CompressionFormat="UNCOMPRESSED"}'
You can test this by creating a test table on the other AWS Account:
create table test_schema.x_test
(
ID1 INT8 NOT NULL,
STRING_DATA1 VARCHAR(10) NOT NULL
)
distkey(ID1)
sortkey(ID1,STRING_DATA1);
You can send test data like this:
aws firehose put-record --delivery-stream-name testFirehoseStreamToRedshift --record '{"DATA":"1,\"ABCDEFGHIJ\""}'
This, together with the permissions configuration above, should create the cross-account access for you.
Documentation:
Create Stream - http://docs.aws.amazon.com/cli/latest/reference/firehose/create-delivery-stream.html
Put Record - http://docs.aws.amazon.com/cli/latest/reference/firehose/put-record.html
No.
Amazon Kinesis Firehose will only output to Amazon S3 buckets and Amazon Redshift clusters in the same region.
However, anything can send information to Kinesis Firehose by simply calling the appropriate endpoint. So, you could have applications in any AWS Account and in any Region (or anywhere on the Internet) send data to the Firehose and then have it stored in a bucket or cluster in the same region as the Firehose.
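For example, a producer running in any account or region can deliver to that stream by calling the Firehose endpoint in the stream's home region; a sketch reusing the test command from the previous answer (the stream name and us-east-1 region are assumptions):
# Same put-record call as above, but explicitly targeting the stream's region.
# With AWS CLI v2 you may also need --cli-binary-format raw-in-base64-out for the inline record.
aws firehose put-record \
    --region us-east-1 \
    --delivery-stream-name testFirehoseStreamToRedshift \
    --record '{"Data":"1,\"ABCDEFGHIJ\""}'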