This IAM user does not have permission to use AWS Config

Currently, I am exploring the (application) load balancing option in AWS.
I was able to create instances, subnets, a target group and an application scaling group in a VPC. Each of the instances is accessible through the instance’s ‘Public DNS’. (I.e., I run a toy Node.js program on port 80 on these instances, and it works fine and is accessible through the public DNS URL.)
However, when I try to access it through the load balancer, I get the error
An error occurred while a request was made to AWS Config
However, when I then try to open AWS Config itself, I get the error
This IAM user does not have permission to use AWS Config. Check your
IAM user permissions or contact your administrator to get access.
Also, in the ‘Target Group(s)’, these instances are marked as ‘unhealthy’. On the other hand, the same instances are marked as ‘healthy’ in the ‘Application Scaling Group’.
Could this be an IAM user permission issue (i.e., of the account that I am using)?

Related

Terraform gcp with shared vpc, gke

I am writing a Terraform configuration in GCP to create a shared VPC, GKE, and Compute Engine in the service project of the shared VPC.
I am facing an error for GKE: a 403 permission error mentioning service.hostagent, even though it has the required permissions.
I am also using a service account key, and I'm not sure whether this is the correct approach: I created a service account in the host project, added that service account ID to the IAM of the service project, and am using the host project's service key. Is that the right approach?
Thanks.
While creating a shared VPC, sharing a subnet from the host project to the service project grants access to the members listed in the service project's IAM policy.
From the error message, it looks like IAM permissions are missing. While creating a shared VPC with GKE, make sure that you have the following permissions:
To create a shared VPC, the Shared VPC Admin role is required (which you seemingly already have).
To share your subnets, you need to give users the Compute Network User role.
While creating GKE configuration, make sure to enable Google Kubernetes Engine API in all projects. Enabling the API in a project creates a GKE service account for the project.
When attaching a service project, enabling Kubernetes Engine access grants the service project's GKE service account the permissions to perform network management operations in the host project.
Each service project's GKE service account must have a binding for the Host Service Agent User role on the host project. This role is specifically used for shared VPC clusters which include the following permissions:
a) compute.firewalls.get
b) container.hostServiceAgent.*
For additional information, you can see here.
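The bindings above can be sketched with gcloud; the project ID and project number below are placeholders, and the command is echoed as a dry run since the real call needs gcloud and Shared VPC Admin rights:

```shell
# Placeholders -- substitute your own host project ID and
# service project NUMBER (not its ID).
HOST_PROJECT="my-host-project"
SERVICE_PROJECT_NUM="123456789012"

# Each service project's GKE service account follows this naming pattern.
GKE_SA="service-${SERVICE_PROJECT_NUM}@container-engine-robot.iam.gserviceaccount.com"

# Dry run of granting the Host Service Agent User role on the host
# project; drop the leading echo to actually run it.
echo gcloud projects add-iam-policy-binding "$HOST_PROJECT" \
  --member "serviceAccount:${GKE_SA}" \
  --role roles/container.hostServiceAgentUser
```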

Why do hostnames of connections to Aurora Serverless come from outside the VPC?

I have a php website running in Beanstalk behind a load balancer.
The website is connecting to a MySQL compatible database running as Aurora Serverless.
Both the Elastic Beanstalk instance and Aurora are set up in the same VPC.
The VPC CIDR is 10.10.0.0/24
The elastic beanstalk instance has local IP 10.10.0.18
The serverless Aurora cluster is using VPC endpoints in the two subnets of the VPC and their IP addresses are 10.10.0.30 and 10.10.0.75.
Even though Aurora Serverless is limited to only accepting connections from within the VPC, out of habit I have still only granted my user permission if they are coming from the VPC.
So, for instance, I have granted permissions to 'user'@'10.10.0.%'
When my website tries to connect to the database, however, it gets permission denied: the connecting host is not in the 10.10.0.0/24 subnet, so it matches no user that was granted permission.
Here are some of the errors that I am getting:
Access denied for user 'user'@'10.1.17.79' (using password: YES)
Access denied for user 'user'@'10.1.18.17' (using password: YES)
Access denied for user 'user'@'10.1.19.1' (using password: YES)
Access denied for user 'user'@'10.1.19.177' (using password: YES)
As you can see, none of those hosts are within my VPC.
Is this because the cluster is running in its own VPC, linked to mine via the private links?
And if so, is my only option to use % as the host for the users I grant privileges to?
Personally I would like to have locked it down to only my VPC just in case Serverless Aurora opens up for connections from the internet in the future.
Don't whitelist specific IP addresses like that for RDS. Especially with Aurora Serverless, where the node IPs can change at a moment's notice as it scales, you will find there is no way to know the true IP address of the node.
Remember that all the RDS database services technically run within an AWS-managed VPC; you do, however, get an ENI attached to your VPC that allows you to connect to the instance. This lets you communicate as if the resource were actually created within your VPC.
The best way to enhance security is going to be through security groups and NACLs, combined with using TLS and encryption at rest. Finally ensure your passwords are strong and rotated frequently.
The RDS Best Practices should help you to dive into other practices you can follow to enhance your security.
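A sketch of that approach: grant the MySQL user with % as the host and enforce the network restriction through the security groups instead. The database name, port, and security group IDs below are placeholders, and the aws call is echoed as a dry run:

```shell
# Placeholder security group IDs -- replace with your own.
APP_SG="sg-0aaaaaaaaaaaaaaaa"   # SG attached to the Beanstalk instances
DB_SG="sg-0bbbbbbbbbbbbbbbb"    # SG attached to the Aurora cluster's ENIs

# 1) In MySQL, grant with '%' as the host; the VPC boundary and the
#    security groups, not the MySQL host pattern, restrict access.
#    'mydb' is a placeholder schema name.
SQL="GRANT SELECT, INSERT, UPDATE, DELETE ON mydb.* TO 'user'@'%';"
echo "$SQL"

# 2) Dry run: only allow MySQL traffic into the DB's SG from the app's SG.
echo aws ec2 authorize-security-group-ingress \
  --group-id "$DB_SG" --protocol tcp --port 3306 \
  --source-group "$APP_SG"
```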

AWS ECS Fargate Platform 1.4 error ResourceInitializationError: unable to pull secrets or registry auth: execution resource

I am using Docker containers with secrets on ECS without problems. After moving to Fargate and platform version 1.4 for EFS support, I started getting the following error.
Any help, please?
ResourceInitializationError: unable to pull secrets or registry auth: execution resource retrieval failed: unable to retrieve secret from asm: service call has been retried 1 time(s): secret arn:aws:secretsmanager:eu-central-1:.....
Here's a checklist:
If your ECS tasks are in a public subnet (0.0.0.0/0 routes to Internet Gateway) make sure your tasks can call the "public" endpoint for Secrets Manager. Basically, outbound TCP/443.
If your ECS tasks are in a private subnet, make sure that one of the following is true: (a) your tasks can reach the Internet through a NAT gateway (0.0.0.0/0 routes to the NAT gateway), or (b) you have an AWS PrivateLink endpoint for Secrets Manager attached to your VPC (and to your subnets).
If you have an AWS PrivateLink connection, make sure the associated Security Group has inbound access from the security groups linked to your ECS tasks.
Make sure you have granted the secretsmanager:GetSecretValue IAM permission on the ARN(s) of the Secrets Manager entry (or entries) in the ECS task execution role.
Edit: Here's another excellent answer - https://stackoverflow.com/a/66802973
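For the IAM item in the checklist, a minimal policy sketch; the account ID and secret name in the ARN are placeholders (the trailing ?????? matches the random suffix Secrets Manager appends to secret ARNs):

```shell
# Write a minimal IAM policy allowing the task execution role to read
# one secret. The ARN below is a placeholder -- substitute your own.
cat > /tmp/ecs-secrets-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["secretsmanager:GetSecretValue"],
      "Resource": ["arn:aws:secretsmanager:eu-central-1:123456789012:secret:my-secret-??????"]
    }
  ]
}
EOF
echo wrote /tmp/ecs-secrets-policy.json

# Attach it to the execution role (role and policy names are placeholders):
# aws iam put-role-policy --role-name ecsTaskExecutionRole \
#   --policy-name read-my-secret \
#   --policy-document file:///tmp/ecs-secrets-policy.json
```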
I had the same error message, but the checklist above missed the cause of my problem. If you are using VPC endpoints to access AWS services (i.e., Secrets Manager, ECR, SQS, etc.), then those endpoints MUST permit access from the security group associated with the VPC subnet that your ECS instance is running in.
Another gotcha: if you are using EFS to host volumes, ensure that your volumes can be mounted by the same security group identified above. Go to EFS, select the appropriate file system, Network tab, then Manage.
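To inspect and change which security groups an EFS mount target uses, a sketch (echoed as a dry run; the mount target and security group IDs are placeholders):

```shell
# Placeholder mount target ID -- list yours with:
#   aws efs describe-mount-targets --file-system-id fs-xxxxxxxx
MOUNT_TARGET="fsmt-0123456789abcdef0"

# Dry run: show the SGs currently on the mount target, then set them so
# the ECS tasks' SG (placeholder) can mount the volume over NFS (TCP 2049).
echo aws efs describe-mount-target-security-groups \
  --mount-target-id "$MOUNT_TARGET"
echo aws efs modify-mount-target-security-groups \
  --mount-target-id "$MOUNT_TARGET" --security-groups sg-0aaaaaaaaaaaaaaaa
```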

How to add the IP of an instance in a VPC to the security group of an RDS EC2-Classic instance with the AWS CLI

I describe my scenario, which is not like the one described here: Unable to add Ec2 VPC Security group in Non VPC RDS MySQL Security group? or here: Adding Spot Instances to the Security Group of an RDS Instance.
I have a fleet of Spot Instances in an EC2 VPC and I want to give them access to an RDS database that is in EC2-Classic. Just like in the second link, my Spot Instances are renewed from time to time, so I have to be able to add the IP of the newly launched machine to the security group of the RDS instance.
The configuration from the console is possible and works fine, just go to the security group of your rds instance and add a rule with a CIDR/IP.
But by doing so by cli with this command:
aws rds authorize-db-security-group-ingress --db-security-group-name default --cidrip xxx.xx.x.xxx/32
I get this error:
HTTPSConnectionPool(host='ec2.eu-west-1c.amazonaws.com', port=443): Max retries exceeded with url: / (Caused by NewConnectionError('<botocore.awsrequest.AWSHTTPSConnection object at 0x__________>: Failed to establish a new connection: [Errno -2] Name or service not known',))
Details
I created an IAM user with this Permissions boundary: AuthorizeDBSecurityGroupIngress
Both the Spot Instances' VPC and the RDS EC2-Classic instance are in the same eu-west-1c Availability Zone.
The documentation for the command does not specifically say that this can't be done: https://docs.aws.amazon.com/cli/latest/reference/rds/authorize-db-security-group-ingress.html. It would also be strange if it worked from the console but not from the CLI.
I don't know what I'm missing; any ideas?
There is another way of using security groups: instead of using an IP, you use a security group ID.
For example:
You create a new security group, let's call it "MySpecialSG". Don't add any rules to this SG.
Then create a new SG, let's call it "Allow my Other SG". Now you will add an inbound rule, but instead of using IPs, you will use "MySpecialSG" group ID and the port you need.
This last SG is the one that you will assign to your DB instance.
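The steps above can be sketched as a dry run with the aws CLI; the group names follow the answer, while the group IDs and port are placeholders:

```shell
# Placeholder IDs -- in practice you would capture these from the
# create-security-group output.
SPECIAL_SG="sg-0aaaaaaaaaaaaaaaa"   # "MySpecialSG", no rules of its own
DB_RULE_SG="sg-0bbbbbbbbbbbbbbbb"   # "Allow my Other SG", assigned to the DB

# 1) Dry run: create the marker SG that carries no rules.
echo aws ec2 create-security-group \
  --group-name MySpecialSG --description "marker SG, no rules"

# 2) Dry run: inbound rule that references the marker SG's ID on the
#    MySQL port, instead of an IP range.
echo aws ec2 authorize-security-group-ingress \
  --group-id "$DB_RULE_SG" \
  --protocol tcp --port 3306 \
  --source-group "$SPECIAL_SG"
```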
I've finally solved the problem. The cause was that I was not supplying the credentials of an IAM user with the access policy needed to perform that action.
To use the AWS CLI from the instance's user data, you have to export that IAM user's credentials as environment variables.
Info:
Policies for the classic link
Credentials export
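A sketch of that export in user data, using AWS's documented example key pair and a TEST-NET IP as placeholders. (Note also that the failing hostname ec2.eu-west-1c.amazonaws.com suggests the region was set to the Availability Zone eu-west-1c rather than the region eu-west-1.)

```shell
# Placeholders -- use the access key pair of the IAM user that carries
# the rds:AuthorizeDBSecurityGroupIngress permission.
export AWS_ACCESS_KEY_ID="AKIAIOSFODNN7EXAMPLE"
export AWS_SECRET_ACCESS_KEY="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
# The REGION, not the Availability Zone: eu-west-1, not eu-west-1c.
export AWS_DEFAULT_REGION="eu-west-1"

# Subsequent aws CLI calls in this shell (e.g. in user data) pick these
# up. Dry run of the original command, with a placeholder IP:
echo aws rds authorize-db-security-group-ingress \
  --db-security-group-name default --cidrip 203.0.113.10/32
```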

Connecting to Amazon RDS DB through Sequel Pro

I am trying to connect Sequel Pro to my Amazon RDS instance, and while it looks like I have set my security groups correctly to allow all traffic, attempting to connect still fails.
This is what I did:
In IAM, I created a new user, and added that user to a group that has a policy that allows full access to RDS.
For the security group that is attached to my RDS instance, I added an Inbound rule to allow all traffic of type MYSQL/Aurora
However, when I enter the endpoint displayed in the RDS screen, along with the username and password I created through IAM, I get an access denied message. Any ideas what I may be missing?