AWS ElastiCache Redis sharding configuration

I have two AWS accounts, Prod and Staging, and I need to migrate data from the prod ElastiCache Redis cluster to staging. The Redis clusters in prod and staging are both 1 node and 0 shards each. I believe you can't update an already-running cluster, so I'm trying to seed a new cluster from the RDB file, with the same cluster configuration that already exists in staging, with a view to deleting the original staging cluster after the new one stands up.
The problem is that every time I go through the console and create a Redis cluster, it stands up with 1 shard and 1 node, whereas the original cluster had 0 shards. I selected 0 replicas, no Multi-AZ, etc., so I'm not sure why it's defaulting to sharding. Am I missing an option somewhere? Are you able to stand up a 1 node, 0 shard cluster via the console?
I also tried creating a cluster via the AWS CLI to see if I get the same behaviour, but I get the error message:
An error occurred (InvalidParameterValue) when calling the CreateCacheCluster operation: No permission to access S3 object: mybucket/folder/file.rdb
I've set the bucket policy to the following:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "1",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::0123456789:root"
      },
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::mybucket"
    },
    {
      "Sid": "2",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::0123456789:root"
      },
      "Action": [
        "s3:PutObject",
        "s3:GetObject"
      ],
      "Resource": "arn:aws:s3:::mybucket/*"
    },
    {
      "Sid": "ExampleStatement",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::0123456789:user/my-user"
      },
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::mybucket",
        "arn:aws:s3:::mybucket/*",
        "arn:aws:s3:::mybucket/folder/file.rdb"
      ]
    },
    {
      "Sid": "Stmt15399483",
      "Effect": "Allow",
      "Principal": {
        "Service": "eu-west-1.elasticache-snapshot.amazonaws.com"
      },
      "Action": [
        "s3:GetObject",
        "s3:ListBucket",
        "s3:GetBucketAcl"
      ],
      "Resource": [
        "arn:aws:s3:::mybucket",
        "arn:aws:s3:::mybucket/*",
        "arn:aws:s3:::mybucket/folder/file.rdb"
      ]
    }
  ]
}
Note that this policy is on the permissions tab of the top-level bucket. When I click on the file itself there is no area to add/edit a bucket policy, only an ACL edit option. I did grant the ElastiCache canonical ID read/write as the AWS docs suggest, but I still get permission denied.
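For what it's worth, the grant the AWS docs describe can also be applied from the CLI instead of the console ACL editor. A minimal sketch, assuming the ElastiCache canonical user ID published in the AWS docs for most commercial regions (verify the ID for your region before using it):

# Grant ElastiCache read access on the RDB object itself (an object ACL, not a bucket policy)
aws s3api put-object-acl \
  --bucket mybucket \
  --key folder/file.rdb \
  --grant-read id=540804c33a284a299d2547575ce1010f2312ef3da9b3a053c8bc45bf233e4353 \
  --grant-read-acp id=540804c33a284a299d2547575ce1010f2312ef3da9b3a053c8bc45bf233e4353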

I was able to get the desired cluster configuration (1 node, 0 shards) by using the CLI command:
aws elasticache delete-replication-group --replication-group-id my-cluster --retain-primary-cluster
although this has to be run after the cluster is created. I'm hoping there's a way to select this configuration at creation time.
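A hedged sketch of what may achieve this at creation time: the CreateCacheCluster API (as opposed to CreateReplicationGroup, which the console wizard drives) stands up a standalone single-node cluster with no replication group, and it accepts --snapshot-arns for seeding from an RDB file in S3. The identifiers and node type below are placeholders:

# Create a standalone, non-replication-group Redis node, seeded from the RDB file
aws elasticache create-cache-cluster \
  --cache-cluster-id my-staging-redis \
  --engine redis \
  --cache-node-type cache.t3.micro \
  --num-cache-nodes 1 \
  --snapshot-arns arn:aws:s3:::mybucket/folder/file.rdb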

Related

AWS InvalidParameter when calling the ImportImage operation

I have .ova VMs stored in my S3 bucket, and I am trying to create AMIs from these OVAs.
I was going through this video to import a VM as an image using VM Import/Export to Amazon EC2.
I have created an EC2 instance which I will use to trigger the necessary CLI commands for importing, and I have created an IAM role and attached it to the EC2 instance.
Please refer to the details of the role:
Trust Policy
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "vmie.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
Inline Policy for Access to S3 and EC2
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "ec2:CopySnapshot",
        "s3:ListAccessPointsForObjectLambda",
        "s3:GetAccessPoint",
        "s3:PutAccountPublicAccessBlock",
        "s3:ListAccessPoints",
        "ec2:RegisterImage",
        "s3:ListJobs",
        "s3:PutStorageLensConfiguration",
        "s3:ListMultiRegionAccessPoints",
        "s3:ListStorageLensConfigurations",
        "ec2:Describe*",
        "s3:GetAccountPublicAccessBlock",
        "ec2:ModifySnapshotAttribute",
        "s3:ListAllMyBuckets",
        "s3:PutAccessPointPublicAccessBlock",
        "s3:CreateJob",
        "ec2:ImportImage"
      ],
      "Resource": "*"
    },
    {
      "Sid": "VisualEditor1",
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::vms"
    },
    {
      "Sid": "AllowStsDecode",
      "Effect": "Allow",
      "Action": "sts:DecodeAuthorizationMessage",
      "Resource": "*"
    }
  ]
}
Inline Policy for KMS Decrypt
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": "kms:Decrypt",
      "Resource": "*"
    }
  ]
}
Also, I have attached the AWSImportExportFullAccess managed policy to the role.
I am using the following command to import the VM as an AMI:
aws ec2 import-image --description "MY_VM_Image" --disk-containers "file://configuration.json"
Here are the contents of configuration.json:
[
  {
    "Description": "Image",
    "Format": "ova",
    "UserBucket": {
      "S3Bucket": "vm",
      "S3Key": "xzt.ova"
    }
  }
]
But I am facing the following error:
An error occurred (InvalidParameter) when calling the ImportImage operation: The service role vmimport provided does not exist or does not have sufficient permissions
I had a look at the troubleshooting document, which states the following:
This error can also occur if the user calling ImportImage has Decrypt permission but the vmimport role does not.
So I have also disabled the default encryption on S3.
Still no luck.
What other permissions are needed to run the command successfully?
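For comparison, the VM Import/Export documentation scopes the vmimport role's S3 permissions to both the bucket and its objects (note the /* object ARN, which the VisualEditor1 statement above lacks). A sketch based on the documented policy, assuming the bucket is named vms:

# Confirm a role with the expected name actually exists (ImportImage assumes "vmimport" by default)
aws iam get-role --role-name vmimport

# Write out the role policy shown in the VM Import/Export docs and attach it
cat > vmimport-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetBucketLocation", "s3:GetObject", "s3:ListBucket"],
      "Resource": ["arn:aws:s3:::vms", "arn:aws:s3:::vms/*"]
    },
    {
      "Effect": "Allow",
      "Action": ["ec2:ModifySnapshotAttribute", "ec2:CopySnapshot", "ec2:RegisterImage", "ec2:Describe*"],
      "Resource": "*"
    }
  ]
}
EOF
aws iam put-role-policy --role-name vmimport --policy-name vmimport --policy-document file://vmimport-policy.json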
I was facing the same issue, and it turned out to be the clock not being in sync with the NTP servers (it was around 6 minutes off). As soon as the time was synced, aws ec2 import-image worked as expected.
Here is a link on the importance of time synchronization in Kerberos:
https://tldp.org/HOWTO/Kerberos-Infrastructure-HOWTO/time-sync.html#:~:text=If%20you%20allow%20your%20clocks,errors%20and%20refuse%20to%20function.
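A quick hedged way to check for this on an Amazon Linux 2 instance, which uses chrony by default (other distros may use ntpd or systemd-timesyncd instead):

# Show the current offset from the configured time sources
chronyc tracking

# Step the clock immediately if it has drifted too far to slew gradually
sudo chronyc makestep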

Why is aws lambda getting "AccessDeniedExceptionKMS" Error Message?

I just deployed a lambda (using Terraform from a GitLab runner) to a new AWS account. This pipeline deploys the same lambda to another (dev/test) account without issues, but when I try to deploy to my prod account, I get the following error:
I'm homing in on the statement, "The ciphertext refers to a customer master key that does not exist, does not exist in this region, or you are not allowed to access."
I have confirmed that the encryption config for the env vars is set to use the default aws/lambda key instead of a customer master key. That seems to contradict the language of the error, which refers to a customer master key...?
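One hedged way to confirm from the CLI which key the function's environment variables are actually encrypted with (the function name is a placeholder); a null KMSKeyArn means the default aws/lambda key is in use:

aws lambda get-function-configuration \
  --function-name my-prod-lambda \
  --query 'KMSKeyArn'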
The role assumed by the lambda does have a policy which includes two KMS actions:
"Sid": "AWSKeyManagementService",
"Action": [
  "kms:Decrypt",
  "kms:DescribeKey"
]
By process of elimination, I wonder if the issue is a gap in the resource-based policy on the KMS key itself. Looking in the KMS console, under AWS managed keys, I find the aws/lambda key has the following key policy:
{
  "Version": "2012-10-17",
  "Id": "auto-awslambda",
  "Statement": [
    {
      "Sid": "Allow access through AWS Lambda for all principals in the account that are authorized to use AWS Lambda",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": [
        "kms:Encrypt",
        "kms:Decrypt",
        "kms:ReEncrypt*",
        "kms:GenerateDataKey*",
        "kms:CreateGrant",
        "kms:DescribeKey"
      ],
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "kms:ViaService": "lambda.us-east-1.amazonaws.com",
          "kms:CallerAccount": "REDACTED" #<-- account where the lambda is deployed
        }
      }
    },
    {
      "Sid": "Allow direct access to key metadata to the account",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::REDACTED:root" #<-- account where the lambda is deployed
      },
      "Action": [
        "kms:Describe*",
        "kms:Get*",
        "kms:List*",
        "kms:RevokeGrant"
      ],
      "Resource": "*"
    }
  ]
}
This is very puzzling. Any pointers appreciated!
This was solved by simply deleting the lambda and then re-running my pipeline to re-deploy it. All I can conclude is that something was corrupted in the first deployment.
Deleting and redeploying the lambda also sorted it out for me, after many other attempted fixes.

How do I add permissions in Amazon Keyspaces?

When I try to query AWS Keyspaces (managed Cassandra) from an AWS Lambda, I get this error:
{
  "errorType": "AggregateException",
  "errorMessage": "One or more errors occurred. (All hosts tried for query failed (tried 11.11.111.11:9142: UnauthorizedException 'User arn:aws:iam::111111111111:user/user-for-keyspaces has no permissions.'; 11.11.111.11:9142: UnauthorizedException 'User arn:aws:iam::111111111111:user/user-for-keyspaces has no permissions.'))",
  "stackTrace": [
    "at lambda_method(Closure , Stream , Stream , LambdaContextInternal )"
  ],
  "cause": {
    "errorType": "NoHostAvailableException",
    ...
But in the AWS console for Keyspaces, I don't see anywhere to add permissions.
The user policy for user-for-keyspaces already has this attached:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "cassandra:*"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
How do I add permissions in AWS Keyspaces?
You should only require the cassandra actions:
{
  "Statement": [
    {
      "Sid": "keyspaces-full-access",
      "Principal": "*",
      "Action": [
        "cassandra:*"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
Additionally, Amazon Keyspaces populates the system.peers table in your account with an entry for each availability zone where a VPC endpoint is available. To look up and store available interface VPC endpoints in the system.peers table, Amazon Keyspaces requires that you grant the IAM entity used to connect to Amazon Keyspaces access permissions to query your VPC for the endpoint and network interface information.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ListVPCEndpoints",
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeNetworkInterfaces",
        "ec2:DescribeVpcEndpoints"
      ],
      "Resource": "*"
    }
  ]
}
Learn more about VPC endpoints here
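A hedged sketch of attaching that statement to the user from the error message via the CLI (the policy name is a placeholder, with the JSON above saved to the referenced file):

aws iam put-user-policy \
  --user-name user-for-keyspaces \
  --policy-name ListVPCEndpoints \
  --policy-document file://list-vpc-endpoints.json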
The problem actually had nothing to do with the user in the error message, but with the VPC endpoint I had created for Keyspaces.
The endpoint requires cassandra:* permissions to perform queries, e.g.:
{
  "Statement": [
    {
      "Sid": "keyspaces-full-access",
      "Principal": "*",
      "Action": [
        "cassandra:*",
        "keyspaces:*"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
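For completeness, a hedged sketch of applying that document to the endpoint itself rather than to an IAM identity (the endpoint ID is a placeholder, with the JSON above saved to the referenced file):

aws ec2 modify-vpc-endpoint \
  --vpc-endpoint-id vpce-0123456789abcdef0 \
  --policy-document file://keyspaces-endpoint-policy.json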

Monitor Amazon EC2 API requests in Grafana

I'm unable to pull Amazon CloudWatch metrics into an EC2 instance running Grafana. The setup is the following:
EC2 instance with Grafana:
Security group: SSH, TCP, port 22, 0.0.0.0/0; custom TCP, port 3000, 0.0.0.0/0
Policy attached to the role on the EC2 instance with Grafana:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowReadingMetricsFromCloudWatch",
      "Effect": "Allow",
      "Action": [
        "cloudwatch:DescribeAlarmsForMetric",
        "cloudwatch:DescribeAlarmHistory",
        "cloudwatch:DescribeAlarms",
        "cloudwatch:ListMetrics",
        "cloudwatch:GetMetricStatistics",
        "cloudwatch:GetMetricData"
      ],
      "Resource": "*"
    },
    {
      "Sid": "AllowReadingLogsFromCloudWatch",
      "Effect": "Allow",
      "Action": [
        "logs:DescribeLogGroups",
        "logs:GetLogGroupFields",
        "logs:StartQuery",
        "logs:StopQuery",
        "logs:GetQueryResults",
        "logs:GetLogEvents"
      ],
      "Resource": "*"
    },
    {
      "Sid": "AllowReadingTagsInstancesRegionsFromEC2",
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeTags",
        "ec2:DescribeInstances",
        "ec2:DescribeRegions"
      ],
      "Resource": "*"
    },
    {
      "Sid": "AllowReadingResourcesForTags",
      "Effect": "Allow",
      "Action": "tag:GetResources",
      "Resource": "*"
    }
  ]
}
(Screenshot: the Grafana query, which retrieves no data.)
https://docs.aws.amazon.com/AWSEC2/latest/APIReference/monitor.html
This is an opt-in feature. To enable this feature for your AWS account, contact AWS Support.
Are you sure you have enabled this feature for your AWS account?
Are you sure that InstanceId is the right dimension and that only one dimension is required?
The best option is to go to the AWS CloudWatch console and generate the graph there. Then you can compare the query CloudWatch generates with the one Grafana generates.
I finally could pull some data by changing the namespace to AWS/ElasticBeanstalk and the dimension to the EnvironmentName I wanted to target. Before that, I had to go to the Elastic Beanstalk configuration dashboard and add the proper metrics under CloudWatch Custom Metrics.
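A hedged way to verify from the CLI which metrics and dimensions actually exist before building the Grafana panel (the environment name is a placeholder):

# List the Elastic Beanstalk metrics published for a given environment
aws cloudwatch list-metrics \
  --namespace AWS/ElasticBeanstalk \
  --dimensions Name=EnvironmentName,Value=my-env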

Copy S3 object to another S3 location Elastic Beanstalk SSH setup error

Getting this Elastic Beanstalk permission error when trying to do:
eb ssh --setup
2020-07-06 07:36:50 INFO Environment update is starting.
2020-07-06 07:36:53 ERROR Service:Amazon S3, Message:You don't have permission to copy an Amazon S3 object to another S3 location. Source: bucket = 'tempsource', key = 'xxx'. Destination: bucket = 'tempdest', key = 'yyy'.
2020-07-06 07:36:53 ERROR Failed to deploy configuration.
Is there a specific policy that I should be adding to my IAM permissions? I've tried adding full S3 access to my IAM user, but the error remains. Or is it a permissions error associated with the source bucket?
Some more details:
Both buckets are in the same AWS account. The copy operation also fails with AWS CLI copy commands.
Bucket Policies
Source Bucket
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::XXXXXXXXXXXX:role/aws-elasticbeanstalk-ec2-role"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::SOURCE_BUCKET/*"
    },
    {
      "Sid": "Stmt2",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::XXXXXXXXXXXX:role/aws-elasticbeanstalk-ec2-role"
      },
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::SOURCE_BUCKET"
    }
  ]
}
Destination Bucket (elasticbeanstalk-us-west-2-XXXXXXXXXXXX)
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "eb-ad78f54a-f239-4c90-adda-49e5f56cb51e",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::XXXXXXXXXXXX:role/aws-elasticbeanstalk-ec2-role"
      },
      "Action": "s3:PutObject",
      "Resource": [
        "arn:aws:s3:::elasticbeanstalk-us-west-2-XXXXXXXXXXXX/*",
        "arn:aws:s3:::elasticbeanstalk-us-west-2-XXXXXXXXXXXX/resources/environments/logs/*"
      ]
    },
    {
      "Sid": "eb-af163bf3-d27b-4712-b795-d1e33e331ca4",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::XXXXXXXXXXXX:role/aws-elasticbeanstalk-ec2-role"
      },
      "Action": [
        "s3:ListBucket",
        "s3:ListBucketVersions",
        "s3:GetObject",
        "s3:GetObjectVersion"
      ],
      "Resource": [
        "arn:aws:s3:::elasticbeanstalk-us-west-2-XXXXXXXXXXXX",
        "arn:aws:s3:::elasticbeanstalk-us-west-2-XXXXXXXXXXXX/resources/environments/*"
      ]
    },
    {
      "Sid": "eb-58950a8c-feb6-11e2-89e0-0800277d041b",
      "Effect": "Deny",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:DeleteBucket",
      "Resource": "arn:aws:s3:::elasticbeanstalk-us-west-2-XXXXXXXXXXXX"
    }
  ]
}
I've tried adding full S3 access to my IAM User, but the error remains.
The error is not about your IAM permissions (i.e. your IAM user), but about the role that EB is using on the instance (i.e. the instance role/profile):
Managing Elastic Beanstalk instance profiles
The default role used on the instances is aws-elasticbeanstalk-ec2-role, so you can locate it in the IAM console and add the required S3 permissions. Depending on your setup, you may be using a different role.
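A hedged CLI equivalent of adding those permissions, assuming the default instance role and that the AWS-managed read-only S3 policy is broad enough for your case:

aws iam attach-role-policy \
  --role-name aws-elasticbeanstalk-ec2-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess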
Or is a permissions error associated with the source bucket?
If you have bucket policies that deny access, that could also be the reason.