I have a couple of ECS tasks (Fargate, launched via a Lambda function) that need to connect to an RDS instance.
Each of the tasks has its own role defining policies to (for example) access S3 buckets.
Each of my tasks also has its own security group.
I could whitelist each and every task's security group on the RDS instance, but this is cumbersome since new tasks are added on a daily basis.
I thought it must be possible to add a policy that allows access to the RDS (as described in the docs):
    - PolicyName: RDSAccess
      PolicyDocument:
        Statement:
          - Effect: Allow
            Action:
              - rds-db:connect
            Resource: 'arn:aws:rds-db:REGION:ID:dbuser:DB_ID/DB_USER'
Unfortunately this does not work: I still cannot connect to the database.
As mentioned before: when I explicitly add each task's security group to the RDS instance, I can connect to the DB without issues.
Two questions:
Am I misunderstanding the docs?
Can I add an ECS task to a pre-defined security group, so that I only need to whitelist this one specific security group?
This policy enables you to use the CLI to generate temporary credentials as a specific IAM user/role; you will still need inbound network access to connect.
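As a minimal sketch of that credential flow with boto3 (the endpoint, port, region, and DB user below are placeholders):

    import boto3

    # Placeholders: endpoint, port, region, and DB user must match an
    # RDS instance with IAM database authentication enabled.
    rds = boto3.client("rds", region_name="eu-west-1")

    token = rds.generate_db_auth_token(
        DBHostname="mydb.xxxxxxxx.eu-west-1.rds.amazonaws.com",
        Port=5432,
        DBUsername="DB_USER",
    )

    # The token is then passed as the password when opening an SSL
    # connection (e.g. with psycopg2); it does not open a network path.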
If you want to simplify the network side of connecting, there are two options for the security groups you can use:
Whitelist the subnet ranges that the tasks sit in; if that is too broad a security exposure, the tasks could be moved into dedicated subnets to make whitelisting easier.
Create a blank security group that you attach to any task that needs to connect to your RDS instance, then add it as an inbound source. The security group could be reused across all tasks simply to identify that they should have access (see the sketch below).
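Assuming hypothetical security-group IDs and a PostgreSQL port, the second option boils down to a single ingress rule on the RDS security group, e.g. with boto3:

    import boto3

    ec2 = boto3.client("ec2")

    # Hypothetical IDs: RDS_SG is attached to the RDS instance, TASK_SG
    # is the shared "rds-client" group attached to every new task.
    RDS_SG = "sg-0123456789abcdef0"
    TASK_SG = "sg-0fedcba9876543210"

    ec2.authorize_security_group_ingress(
        GroupId=RDS_SG,
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 5432,  # adjust to your DB engine's port
            "ToPort": 5432,
            "UserIdGroupPairs": [{"GroupId": TASK_SG}],
        }],
    )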
Related
I need to enforce a required set of tags on AWS resources.
The solution: use SCPs (Service Control Policies).
The problem: it would affect existing EC2 instances in Auto Scaling groups, and new EC2 instances won't be created.
Does anyone know of a good way to condition the policy so it has no effect on EC2 instances that are part of an Auto Scaling group?
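One possible sketch, not a verified solution: deny untagged RunInstances calls but exempt launches made through the Auto Scaling service-linked role. The tag key (CostCenter), policy name, and role pattern here are assumptions to adapt:

    import json
    import boto3

    # Untested sketch. Deny launching untagged instances, but skip the
    # deny for the Auto Scaling service-linked role so existing ASGs
    # can still replace instances. "CostCenter" is an assumed tag key.
    scp = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyUntaggedEc2ExceptAsg",
            "Effect": "Deny",
            "Action": "ec2:RunInstances",
            "Resource": "arn:aws:ec2:*:*:instance/*",
            "Condition": {
                "Null": {"aws:RequestTag/CostCenter": "true"},
                "ArnNotLike": {
                    "aws:PrincipalArn": "arn:aws:iam::*:role/aws-service-role/autoscaling.amazonaws.com/*"
                },
            },
        }],
    }

    orgs = boto3.client("organizations")
    orgs.create_policy(
        Name="require-cost-center-tag",
        Description="Deny untagged EC2 launches outside Auto Scaling",
        Type="SERVICE_CONTROL_POLICY",
        Content=json.dumps(scp),
    )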
The goal is to visualise the relationships between resources within an AWS account (which may have multiple VPCs).
This would help daily operations, for example seeing which resources are affected after modifying a security group.
Each resource has an ARN assigned in AWS.
Below are some example relationships among resources:
A route table has a has-many relationship with subnets
A NACL has a has-many relationship with subnets
An availability zone has a has-many relationship with subnets
An IAM resource has a has-many relationship with regions
has-many is something like a composition relation
A security group has an association relation with any resource in its VPC
A NACL has an association relation with subnets only
We also have VPC flow logs to find the relationships.
Using the AWS SDKs:
1)
For on-prem networks, we take an IP range and send ICMP requests to verify the existence of devices in the range, and then send SNMP queries to classify each device (Windows/Linux/router/gateway, etc.).
How to find the list of resources allocated within an AWS account? How to classify resources?
2)
What are the parameters that need to be queried from AWS resources (IAM, VPC, subnet, route table, NACL, IGW, etc.) to help create a relationship view of the resources within an AWS account?
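As a small illustration for question 2), no special parameters are needed beyond the default describe calls: a route table's Associations list already encodes the route-table-to-subnet has-many relation, for example:

    import boto3

    # Sketch for question 2): route table associations already encode
    # the route-table -> subnet has-many relation. Similar describe_*
    # calls exist for NACLs (describe_network_acls) and IGWs
    # (describe_internet_gateways).
    ec2 = boto3.client("ec2")

    for rt in ec2.describe_route_tables()["RouteTables"]:
        for assoc in rt.get("Associations", []):
            if "SubnetId" in assoc:
                print(f"{rt['RouteTableId']} has-many -> {assoc['SubnetId']}")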
You don't have to stitch your resources together by yourself in your app. You can use the Resource Groups Tagging API from AWS. Take a look at Resource Groups in your AWS console: there you can group things based on tags, and then you can tag the groups themselves. Querying with the boto3 Python library will give you a bunch of information; read about boto3, it's huge! Another thing that might be interesting for you is AWS Config: there you get compliance, configuration history, relationships between resources, and plenty of other stuff!
Also, check out Amazon CloudWatch for health monitoring.
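As a minimal boto3 sketch of the Resource Groups Tagging API mentioned above (the tag key and value are placeholders):

    import boto3

    # List every resource ARN carrying a given tag; key/value are
    # placeholders to replace with your own tagging scheme.
    tagging = boto3.client("resourcegroupstaggingapi")

    paginator = tagging.get_paginator("get_resources")
    for page in paginator.paginate(
        TagFilters=[{"Key": "Project", "Values": ["my-project"]}]
    ):
        for mapping in page["ResourceTagMappingList"]:
            print(mapping["ResourceARN"], mapping["Tags"])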
I have created an AWS lambda function to shut down an EC2 instance in my account. The function is called from CloudWatch at a certain time.
Suppose you have to accomplish the same task in an AWS Organization. You have full control over the master account and you are the owner of the Organization.
If you want to shut down all the EC2 instances in the organization at a certain time: first of all, is it possible to control that from your master account? If it is, what would be the approach?
Master CloudWatch --calls--> Master Lambda --> shuts down EC2 instances in the organization
Member CloudWatch --> Member Lambda --> shuts down EC2 instances in their own member account.
If option 2 is the only one, is it possible to push CloudWatch rules and Lambda functions from the master account into each member account?
Any other approach to address this problem?
Many thanks!
Option one is probably the better of the two, as it's a bit simpler (no cross-account events to deal with).
To do this you'll need to understand AWS Security Token Service's Assume Role.
This would allow your lambda to systematically:
Assume a role in Account 1 that can list and shutdown EC2 instances
Shutdown EC2 instances
Assume a role in Account 2 ... etc.
To do this you'll have to create an IAM role to be assumed in each 'slave' account, and, in the master account, an execution role that is allowed to call sts:AssumeRole for the Lambda to run with.
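A minimal sketch of that loop with boto3 (the account IDs and role name are placeholders; error handling and pagination are omitted):

    import boto3

    # Placeholder member account IDs and the role that must exist in
    # each member account with EC2 describe/stop permissions.
    MEMBER_ACCOUNTS = ["111111111111", "222222222222"]
    ROLE_NAME = "ec2-shutdown-role"

    sts = boto3.client("sts")

    def lambda_handler(event, context):
        for account_id in MEMBER_ACCOUNTS:
            # Step 1: assume the shutdown role in the member account
            creds = sts.assume_role(
                RoleArn=f"arn:aws:iam::{account_id}:role/{ROLE_NAME}",
                RoleSessionName="scheduled-ec2-shutdown",
            )["Credentials"]

            ec2 = boto3.client(
                "ec2",
                aws_access_key_id=creds["AccessKeyId"],
                aws_secret_access_key=creds["SecretAccessKey"],
                aws_session_token=creds["SessionToken"],
            )

            # Step 2: stop all running instances (pagination omitted)
            reservations = ec2.describe_instances(
                Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
            )["Reservations"]
            ids = [i["InstanceId"] for r in reservations for i in r["Instances"]]
            if ids:
                ec2.stop_instances(InstanceIds=ids)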
I would challenge you to make sure this is what you need. Typically life is much easier in AWS if you keep your accounts only loosely dependent on each other; consider instead an approach where each account is responsible for shutting down its own EC2 instances based on a trigger.
I have several EC2 instances in my AWS account. There is one specific EC2 instance that I want an outsourcer to use (stop, start, manage its security group, resize disk space, etc.).
I tried to do it with IAM policies, but from what I see, DescribeInstances lets the user see all instances in my account. And when I try to scope the policy to a specific resource, it shows an error, because DescribeInstances does not support resource-level permissions and must have Resource '*'.
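For illustration (the instance ID and account ID below are hypothetical), the asymmetry looks like this: the stop/start actions can be scoped to one instance ARN, while DescribeInstances cannot:

    import json

    # Sketch of the asymmetry described above; instance and account
    # IDs are placeholders.
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                # Stop/Start support resource-level permissions...
                "Effect": "Allow",
                "Action": ["ec2:StartInstances", "ec2:StopInstances"],
                "Resource": "arn:aws:ec2:us-east-1:123456789012:instance/i-0abc123def4567890",
            },
            {
                # ...but DescribeInstances must be granted on '*',
                # so the user can still list every instance.
                "Effect": "Allow",
                "Action": "ec2:DescribeInstances",
                "Resource": "*",
            },
        ],
    }

    print(json.dumps(policy, indent=2))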
I was thinking of maybe allowing him access in a different region and putting the instance there. Another option is using Organizations (a little complex, but it looks promising; I would be happy to know if this is the way to go).
Am I missing something? What is the best solution to achieve what I need?
If you want to give the outsourcer permission to call AWS services in your account, then from a security perspective, it would be much safer to put those resources in a child account.
That way, you are guaranteed that their credentials are not able to impact any of your other resources and services.
The alternative would be far too complex to manage. For example, security groups can be associated with many instances, and one instance can have many security groups; that would not be practical to express within an IAM policy.
From a brief search, there does not seem to be a method to set dynamic hostnames for members of an autoscaling group. The functionality exists within OpenStack Heat using %index%, but I cannot find anything on doing so with AWS autoscaling groups.
For example, using OpenStack Heat - nodes are automatically given a hostname based on the number of nodes in the autoscaling group:
    instance_group:
      type: OS::Heat::ResourceGroup
      properties:
        count: { get_param: instance_number }
        resource_def:
          type: OS::Nova::Server
          properties:
            name: instance%index%
This would give me the following if I were to have 3 instances in the autoscaling group:
instance0
instance1
instance2
Is there a similar method I can use with the AWS autoscaling group's launch configuration and/or cloud-init?
I've found a solution that works pretty well, if you stick to some not-unreasonable conventions.
For every kind of EC2 instance that I launch, whether it's N servers of this kind in an autoscaling group or a stand-alone instance, I create an Instance Profile for it. This is a good idea anyway in my experience: even if you don't need the instance to access any AWS services, it doesn't hurt to have a role/profile with empty permissions, and it makes it that much easier to give it access to an S3 bucket or whatever else in the future if you need to.
Then at server launch, in the user_data script (or your configuration-management tool if you're using something like Puppet or Ansible), I query the instance profile name from the metadata service, append something unique to each server such as the private IP, and set that as the hostname.
You'll end up with hostnames like webserver-10-0-12-58 which is both human readable and unique to each server.
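A sketch of that user_data step in Python (this assumes IMDSv1 is reachable and a systemd-based distro; with IMDSv2 enforced you would first fetch a session token):

    import json
    import subprocess
    import urllib.request

    METADATA = "http://169.254.169.254/latest/meta-data"

    def get(path):
        # IMDSv1-style request; IMDSv2 needs an X-aws-ec2-metadata-token header
        with urllib.request.urlopen(f"{METADATA}/{path}", timeout=2) as resp:
            return resp.read().decode()

    # e.g. arn:aws:iam::123456789012:instance-profile/webserver -> "webserver"
    profile = json.loads(get("iam/info"))["InstanceProfileArn"].split("/")[-1]

    # 10.0.12.58 -> "10-0-12-58"
    ip = get("local-ipv4").replace(".", "-")

    hostname = f"{profile}-{ip}"  # e.g. webserver-10-0-12-58
    subprocess.run(["hostnamectl", "set-hostname", hostname], check=True)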
(The downside of this vs incrementing integers is that these aren't predictable, and can't be used to set up unique behavior for a single server. For example if you had webserver-{0-8} and needed to run some process on exactly one server, you could use logic like if hostname == webserver-0 then run_thing.)