Denied AWS OpenSearch write permission

I'm trying to connect a Spring Boot application running on AWS EKS to AWS OpenSearch, both of which reside in a VPC. Though the connection is successful, I'm unable to write any data to the index.
All the AWS resources (EKS and OpenSearch) are configured using Terraform. I have added the Elasticsearch subnet CIDR to the egress rules attached to the application. The application also correctly assumes the EKS service account and the pod role, which I referenced in the services stanza for Elasticsearch. The policy attached to the pod role includes all the expected permissions: ESHttpPost, ESHttpGet, ESHttpPut, etc.
This is the error I get:
{"error":{"root_cause": [{"type":"security_exception", "reason":"no
permissions for [indices:data/write/index] and User
[name=arn:aws:iam::ACCOUNT_NO:role/helloworld-demo-eks-PodRle-
hellodemo-role-1,backend_roles=
[arn:aws:iam::ACCOUNT_NO:role/helloworld-demo-eks-PodRle-hellodemo
role-1], requested
Tenant=null]"}],"type":"security_exception", "reason":"no
permissions for [indices:data/write/index] and User
[name=arn:aws:iam::ACCOUNT_NO:role/helloworld demo-eks-PodRle-
hellodemo-role-1,
backend_roles=[arn:aws:iam::ACCOUNT_NO:role/helloworld-demo-eks-
PodRle-hellodemo role-1], requested Tenant=null]"},"status":403}
Is there anything that I'm missing out on while configuring?

This error can be resolved by assigning the pod role to the additional_roles key in the Elasticsearch Terraform module. This is handled internally by AWS STS when it receives a request from EKS.
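What that Terraform key does, in effect, is map the pod role's ARN as a backend role in the OpenSearch security plugin. If you cannot change the Terraform, a minimal sketch of performing the same mapping by hand via the security REST API follows; DOMAIN_ENDPOINT is a placeholder, the request must be authenticated as the domain's master user, and mapping to all_access is deliberately broad (a scoped custom role is preferable in production):

# Map the EKS pod role ARN as a backend role on the built-in all_access role.
# DOMAIN_ENDPOINT and credentials are placeholders; older Open Distro domains
# use the _opendistro/_security path instead of _plugins/_security.
# Note: PUT replaces the existing mapping for the role.
curl -XPUT "https://DOMAIN_ENDPOINT/_plugins/_security/api/rolesmapping/all_access" \
  -u 'master-user:master-password' \
  -H 'Content-Type: application/json' \
  -d '{"backend_roles": ["arn:aws:iam::ACCOUNT_NO:role/helloworld-demo-eks-PodRle-hellodemo-role-1"]}'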

Related

How to configure read-only access and admin access for EKS in AWS SSO with Azure AD?

I want to configure AWS SSO to access EKS with Azure AD. However, I want to have two access types, i.e. read-only and admin.
You will have to map the AD groups to AWS IAM roles via AWS SSO assumed roles, then use Kubernetes ConfigMaps and roles to map the associated IAM roles to Kubernetes RBAC roles. This is done by leveraging the open-source AWS IAM Authenticator to pass an IAM identity from kubectl.
In the example below, users in the AWS-EKS-Admins group have full access to the EKS cluster (similar to the permissions assigned to the default cluster-admin role within Kubernetes) and users in the AWS-EKS-Dev group have read-only access to certain Kubernetes resources. The mapping is summarized in the table below:
[Image: AWS EKS permission mapping]
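The mapping itself lives in the aws-auth ConfigMap in the kube-system namespace. A minimal sketch, assuming the two role names from the example; ACCOUNT_NO is a placeholder, and eks-read-only is a hypothetical group name you would bind to a read-only ClusterRole yourself:

# Sketch only: merge these entries into the existing aws-auth ConfigMap on a
# live cluster rather than blindly replacing it (it already contains the
# node role mapping that worker nodes need to join the cluster).
kubectl -n kube-system apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::ACCOUNT_NO:role/AWS-EKS-Admins
      username: eks-admin
      groups:
        - system:masters
    - rolearn: arn:aws:iam::ACCOUNT_NO:role/AWS-EKS-Dev
      username: eks-dev
      groups:
        - eks-read-only
EOF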
Please refer to the AWS documentation below for more detail:
https://aws.amazon.com/blogs/opensource/integrating-ldap-ad-users-kubernetes-rbac-aws-iam-authenticator-project/

Service role EMR_DefaultRole has insufficient EC2 permissions

While creating an AWS EMR cluster, I always get the issue: Service role EMR_DefaultRole has insufficient EC2 permissions.
The cluster then terminates automatically. I have even followed the AWS documentation's steps for recreating the EMR-specific roles, but made no progress. Please advise how to resolve the issue: Service role EMR_DefaultRole has insufficient EC2 permissions.
EMR needs two roles to start the cluster: 1) an EC2 instance profile role and 2) an EMR service role. The service role should have enough permissions to provision the new resources needed to start the cluster: EC2 instances, their network, etc. There could be many reasons for this common error:
Verify the resources and their actions. Refer to https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-iam-role.html.
Check whether you are passing the tag that signifies that the cluster should use the EMR managed policies:
{
"Key": "for-use-with-amazon-emr-managed-policies",
"Value": "true"
}
Finally, try to find the exact reason in CloudTrail. Go to AWS > CloudTrail and, in the Event history, enable the error code column so that you can see the exact error. If you find an error like 'You are not authorized to perform this operation. Encoded authorization failure message', open the event details, pick up the encoded error message, and decode it using the AWS CLI: aws sts decode-authorization-message --encoded-message <message>. This will show you the complete role details, event, resources, and action. Compare it with the AWS IAM permissions and you can find the missing permission or parameter that you need to pass while creating the job flow.
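For reference, a minimal sketch of the decode step; the encoded message value is a placeholder copied from the CloudTrail event details, and the pretty-printing pipe is optional:

# Decode an "Encoded authorization failure message" from CloudTrail.
# ENCODED_MESSAGE_FROM_CLOUDTRAIL is a placeholder. The decoded message is
# itself JSON listing the denied action, resource, and principal.
aws sts decode-authorization-message \
  --encoded-message "ENCODED_MESSAGE_FROM_CLOUDTRAIL" \
  --query DecodedMessage \
  --output text | python -m json.tool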

"aws dynamodb list-tables" is not working on ec2 instance

I've created a Node.js application which connects to DynamoDB. Everything is working fine locally. Now I'm trying to set it up on AWS servers.
First I created the DynamoDB tables from the AWS DynamoDB console. That works fine.
I created a new role from the IAM management console > Roles to access DynamoDB, and attached that role to the EC2 instance.
But when I ran any aws dynamodb CLI command, it gave me an error telling me to specify the region.
So I went to the IAM management console > Users and created an access key for my admin-type user.
I then logged in to the EC2 instance as ec2-user and ran aws configure with the previously generated access key:
AWS Access Key ID [None]: ACCESS KEY
AWS Secret Access Key [None]: SECRET
Default region name [None]: us-east-1
Default output format [None]: json
But when I use the command aws dynamodb list-tables, it gives no output and no error.
As I commented, the main issue was the outbound rules of the attached security group. Here are the necessary things to do:
Set a security group outbound rule allowing HTTPS (see the example after this list).
Set up credentials
Create an access key from the IAM management console > Users.
SSH to the EC2 instance.
Configure the credentials on the EC2 instance using the aws configure command, or directly modify the ~/.aws/credentials file.
Attach a role
Create a role from the IAM management console > Roles. Select the permissions necessary to perform operations on the AWS service, e.g. AmazonDynamoDBFullAccess.
Open the EC2 console and select the instance.
Attach the role from the Actions menu.
It is good, though optional, to create a VPC endpoint. If you face an UnauthorizedOperation error while creating the endpoint, assign the AmazonEC2FullAccess permission to the user from the IAM console; remove it later if you don't need it.
To use the AWS service from your application, find the relevant endpoint in the AWS service endpoints documentation.
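For the outbound rule in the first step, a minimal CLI sketch; the security group ID is a placeholder, and you can scope the CIDR tighter than 0.0.0.0/0 if you know the endpoint ranges:

# Allow outbound HTTPS so the AWS CLI/SDK can reach the DynamoDB endpoint.
# sg-0123456789abcdef0 is a placeholder for the instance's security group.
aws ec2 authorize-security-group-egress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 443 \
  --cidr 0.0.0.0/0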
It sounds like you are having problems connecting to DynamoDB because of the way you have configured your VPC.
There are some options, but if you would prefer to keep your VPC isolated from the internet, you could enable a VPC endpoint for DynamoDB. That way you can access DynamoDB from within your VPC without those connections going over the public internet.
There is a step-by-step guide here: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/vpc-endpoints-dynamodb.html
Essentially, it involves the following steps:
get the VPC ID for the VPC where your EC2 instance is located
create a VPC endpoint for DynamoDB, specifying the VPC ID and the regional DynamoDB service name:
aws ec2 create-vpc-endpoint --service-name com.amazonaws.<region>.dynamodb --vpc-id <yourvpcid>
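A slightly fuller sketch with placeholder IDs and region; associating the subnet's route table is what actually routes DynamoDB traffic through the gateway endpoint:

# Create a gateway endpoint for DynamoDB and attach it to the route table
# used by the instance's subnet. Region and IDs are placeholders.
aws ec2 create-vpc-endpoint \
  --service-name com.amazonaws.us-east-1.dynamodb \
  --vpc-id vpc-0123456789abcdef0 \
  --route-table-ids rtb-0123456789abcdef0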

How can we restrict an IAM user to launching EC2 instances and VPCs via CloudFormation only?

How can we restrict an IAM user to launching EC2 instances and VPCs via CloudFormation only? I don't want the user to launch EC2 instances or VPCs directly from the console.
Two options:
Use a role with AWS CloudFormation
When launching a CloudFormation stack, a role can be specified. This role can have the necessary permissions to launch the stack's resources, even if the user doesn't have them.
See: AWS CloudFormation Service Role - AWS CloudFormation
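A minimal sketch of launching a stack under such a service role; the stack name, template file, and role ARN are placeholders:

# The stack's resources are created with the service role's permissions,
# so the calling user only needs CloudFormation permissions plus
# iam:PassRole on the service role.
aws cloudformation create-stack \
  --stack-name demo-vpc-ec2 \
  --template-body file://template.yaml \
  --role-arn arn:aws:iam::ACCOUNT_NO:role/CloudFormationServiceRole \
  --capabilities CAPABILITY_NAMED_IAM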
Use AWS Service Catalog
AWS Service Catalog allows you to create a portfolio of offerings that users can launch. It uses a role to launch services even if the users don't have permission to launch those services themselves.
See: AWS Service Catalog Documentation

AWS CodeDeploy agent access denied from EC2 instance to S3

I have set up the CodeDeploy agent; however, when I run it, I get the error:
Error: HEALTH_CONSTRAINTS
Digging further, this is the entry in the CodeDeploy log on the EC2 instance:
InstanceAgent::Plugins::CodeDeployPlugin::CommandPoller: Cannot reach InstanceService: Aws::S3::Errors::AccessDenied - Access Denied
A simple wget from the bucket results in:
Connecting to s3-us-west-2.amazonaws.com (s3-us-west-2.amazonaws.com)|xxxxxxxxx|:443... connected.
HTTP request sent, awaiting response... 403 Forbidden
Conversely, if I use the AWS CLI I can reach the S3 bucket correctly.
The EC2 instance is in a VPC, it has a role with full permissions on S3, and the inbound and outbound firewall settings seem correct. So it is evidently something related to permissions when accessing over HTTPS.
The questions:
Under which credentials does the CodeDeploy agent run?
What permissions or roles have to be set on the S3 bucket?
The EC2 instance's credentials (the instance role) will be used when pulling from S3.
To be clear, the service role that CodeDeploy needs does not need S3 permissions. The service role allows CodeDeploy to call the Auto Scaling and EC2 APIs to describe the instances, so that CodeDeploy knows how to deploy to them.
That said, for your AccessDenied issue with S3, there are two things you need to check:
The role that the EC2 instance(s) use has s3:Get* and s3:List* (or more specific) permissions.
The S3 bucket you deploy from has a policy attached that allows the EC2 instance role to get the object.
Documentation for permissions: http://docs.aws.amazon.com/codedeploy/latest/userguide/instances-ec2-configure.html#instances-ec2-configure-2-verify-instance-profile-permissions
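For the first check, a minimal sketch of attaching an inline read policy to the instance role; the role name, policy name, and bucket name are placeholders, and you can narrow s3:Get*/s3:List* further:

# Grant the instance role read access to the deployment bucket.
# MyEC2InstanceRole and my-deploy-bucket are placeholders.
aws iam put-role-policy \
  --role-name MyEC2InstanceRole \
  --policy-name CodeDeployS3Read \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": ["s3:Get*", "s3:List*"],
      "Resource": [
        "arn:aws:s3:::my-deploy-bucket",
        "arn:aws:s3:::my-deploy-bucket/*"
      ]
    }]
  }'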
CodeDeploy uses "service roles" to access AWS resources. In the AWS console for CodeDeploy, look for "Service role" and assign the IAM role that you created for CodeDeploy in your application settings.
If you have not created an IAM role for CodeDeploy, do so and then assign it to your CodeDeploy application.