I would like to create a Service Control Policy (SCP) at the organization level that blocks three things:
Don't allow creating a database that is publicly accessible
Don't allow creating a database without encryption enabled
Don't allow creating a database without backups enabled
Does anyone know if this is possible?
I don't think that Service Control Policies can act at this level.
They basically say which API calls are permitted (e.g. CreateDBInstance, RebootDBInstance) and don't get down to the level of API parameters.
In fact, I don't think it would be possible to create normal IAM policies that have that level of detail, let alone SCPs.
Such rules would likely need to be monitored with AWS Config rules (see Evaluating Resources with AWS Config Rules in the AWS Config documentation) rather than enforced through permissions.
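As a sketch of that approach: AWS Config ships managed rules that appear to cover these exact checks, with identifiers such as RDS_INSTANCE_PUBLIC_ACCESS_CHECK, RDS_STORAGE_ENCRYPTED, and DB_INSTANCE_BACKUP_ENABLED (worth verifying against the current managed rules list). A rule definition for the public-access check, as you'd pass it to put-config-rule, would look roughly like:

```json
{
  "ConfigRuleName": "rds-instance-public-access-check",
  "Description": "Checks whether RDS instances are publicly accessible",
  "Source": {
    "Owner": "AWS",
    "SourceIdentifier": "RDS_INSTANCE_PUBLIC_ACCESS_CHECK"
  },
  "Scope": {
    "ComplianceResourceTypes": ["AWS::RDS::DBInstance"]
  }
}
```

Note this detects non-compliant databases after creation rather than preventing them, which is the trade-off compared to an SCP.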
We want deployment users to use in our pipelines, purely for programmatic access. These users will be created per project, rather than using one deployment user for all stacks.
I'm trying to lock down the resources these deployment users have permission to change, but I'm struggling because the ARN isn't known until the stack is created, which makes writing an IAM policy restricted to only certain resources difficult.
For example, say I want to create an application load balancer (with listeners, rules, etc.). I want the deployment user to have permission to create an ALB (easy enough), but only permission to delete or modify the newly created ALB, not any other ALBs.
Any tips / smart ways to do this? The ARNs are generated and "random" as I dislike naming my resources and having to modify the names if I change a setting that requires replacement.
You can use IAM policy conditions to restrict access to resources based on tags.
For example, you can add two policy statements with a condition element to allow specific actions on a resource:
User1 can create a resource only if the request contains owner=user1 tag.
User1 can update or delete a resource only if owner=${aws:username} tag is attached to the resource.
You can find a policy example in this guide:
https://docs.aws.amazon.com/IAM/latest/UserGuide/access_tags.html
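For the ALB case in the question, a hedged sketch of that pattern might look like the following. The owner tag key and the exact Elastic Load Balancing action names are assumptions, and you should verify that the ELB API version you use supports tagging on create (which is what makes the aws:RequestTag condition enforceable):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "CreateOnlyWithOwnTag",
      "Effect": "Allow",
      "Action": "elasticloadbalancing:CreateLoadBalancer",
      "Resource": "*",
      "Condition": {
        "StringEquals": { "aws:RequestTag/owner": "${aws:username}" }
      }
    },
    {
      "Sid": "ModifyDeleteOnlyOwnTagged",
      "Effect": "Allow",
      "Action": [
        "elasticloadbalancing:DeleteLoadBalancer",
        "elasticloadbalancing:ModifyLoadBalancerAttributes"
      ],
      "Resource": "*",
      "Condition": {
        "StringEquals": { "aws:ResourceTag/owner": "${aws:username}" }
      }
    }
  ]
}
```

This sidesteps the unknown-ARN problem entirely: the policy keys off a tag the deployment user must set at creation time, not off the generated resource name.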
I'm trying to set up a Site-to-Site connection between our on-premises server and our cloud infrastructure. On our premises we have a SonicWall firewall installed, and since SonicOS 6.5.1.0 it's easy to enter an AWS Access Key and AWS Secret Key and let the software configure everything via the SDK.
The problem is that the tutorial on how to configure the firewall (p. 8) says:
The security policy used, either for a group to which the user belongs or attached to the user directly, must
include the following permissions:
• AmazonEC2FullAccess – For AWS Objects and AWS VPN
• CloudWatchLogsFullAccess – For AWS Logs
Since it's not ideal to give anyone full access to Amazon EC2, do you know which features SonicWall actually needs, so I can disable everything else and follow the principle of least privilege?
Without looking into the code for SonicWall itself, it is not going to be easy to know exactly which API calls it's going to make to EC2. If you are prepared to at least temporarily grant full EC2 access, you could use AWS CloudTrail to monitor exactly which API calls are being made by the IAM user associated with your on-premises server, and then update your specific policy to match those calls.
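To make the CloudTrail approach concrete, here's a minimal sketch that tallies which API calls a user actually made, which you can then translate into a least-privilege policy. The record shape mirrors what `aws cloudtrail lookup-events` returns (each event carries a `CloudTrailEvent` JSON string); the sample records here are made up for illustration:

```python
import json
from collections import Counter

def summarize_api_calls(events):
    """Tally (eventSource, eventName) pairs from CloudTrail lookup-events output."""
    calls = Counter()
    for event in events:
        # Each event wraps the full record as a JSON string.
        record = json.loads(event["CloudTrailEvent"])
        calls[(record["eventSource"], record["eventName"])] += 1
    return calls

# Illustrative records shaped like `aws cloudtrail lookup-events` output.
sample = [
    {"CloudTrailEvent": json.dumps({"eventSource": "ec2.amazonaws.com", "eventName": "DescribeVpcs"})},
    {"CloudTrailEvent": json.dumps({"eventSource": "ec2.amazonaws.com", "eventName": "CreateVpnConnection"})},
    {"CloudTrailEvent": json.dumps({"eventSource": "ec2.amazonaws.com", "eventName": "DescribeVpcs"})},
]

for (source, name), count in sorted(summarize_api_calls(sample).items()):
    print(f"{source} {name}: {count}")
```

Run this against a few days of events filtered to the SonicWall user (e.g. with a `Username` lookup attribute), and the distinct event names become the `Action` list for your scoped-down policy.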
Alternatively, start with the full-access IAM policy template and remove any calls you think are completely unrelated to SonicWall's functionality.
If you trust SonicWall, then probably the easiest thing is to just allow the full EC2 access it claims is required (or start there and gradually remove permissions until something breaks!).
I'm looking for a way to restrict deployment to production, assuming I'm not using multiple accounts for dev and prod.
My use case goes as follows (I'm still not sure if this is possible, please help me with that): I want to create multiple users in the same account but allow only one user/group to run commands like sls deploy -s prod, and perhaps make that user/group the only one able to create resources named prod_{name}, for example a DynamoDB table named prod_users.
Is this possible? Or is the only way to separate concerns through consolidated billing and multiple accounts?
Thanks!
By default, users don't have any privileges, so you have to explicitly allow them to do something on AWS.
The simplest way to do that is to go to the IAM console and create a group for the users that are allowed to do what you require. After naming the group, the next step in the IAM console is to attach policies to it. In that step, you would choose CloudFormation, EC2, RDS, Elastic Beanstalk, and whatever other services you want them to access. For each service, you can grant more granular permissions (read, write, admin, ...). You can either choose from AWS managed policies, or create one of your own if your needs are so specific that they aren't covered by the existing ones.
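As a sketch of the name-prefix idea from your question (the prod_ prefix and DynamoDB focus come from your example; the region and account ID are placeholders): attach something like this to the group that is allowed to deploy to prod, and simply don't grant prod_* resources to anyone else:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowProdPrefixedTables",
      "Effect": "Allow",
      "Action": "dynamodb:*",
      "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/prod_*"
    }
  ]
}
```

You'd need a similar name- or tag-based statement for each service your sls deploy -s prod stack touches, which is workable but more fragile than separate accounts.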
I'd like to help you further (i.e., tell you which policies to include), but for that I'd need to know the specific types of users and services you want covered.
Regards,
Can AWS IAM be used to control access for custom applications? I heavily rely on IAM for controlling access to AWS resources. I have a custom Python app that I would like to extend to work with IAM, but I can't find any references to this being done by anyone.
I've considered the same thing, and I think it's theoretically possible. The main issue is that there's no call available in IAM that determines whether a particular call is allowed (SimulateCustomPolicy may work, but that doesn't seem to be its purpose, so I'm not sure it would have the throughput to handle high volumes).
As a result, you'd have to write your own IAM policy evaluator for those custom calls. I don't think that's inherently a bad thing, since it's also something you'd have to build for any other policy-based system. And the IAM policy format seems reasonable enough to be used.
I guess the short answer is, yes, it's possible, with some work. And if you do it, please open source the code so the rest of us can use it.
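For what it's worth, a toy evaluator for the custom-call case might look like this. The reports:* action names and ARN-ish strings are made up for illustration, and a real IAM-compatible evaluator would also need Condition, NotAction, policy variables, and so on; this only captures the core "explicit deny wins, otherwise any allow, otherwise implicit deny" logic:

```python
from fnmatch import fnmatchcase

def is_allowed(policy, action, resource):
    """Toy IAM-style evaluation: explicit Deny wins, then any matching Allow,
    otherwise implicit deny. Only Action/Resource wildcards are supported."""
    def as_list(value):
        return value if isinstance(value, list) else [value]

    def matches(stmt):
        return (any(fnmatchcase(action, a) for a in as_list(stmt["Action"]))
                and any(fnmatchcase(resource, r) for r in as_list(stmt["Resource"])))

    statements = policy["Statement"]
    if any(s["Effect"] == "Deny" and matches(s) for s in statements):
        return False
    return any(s["Effect"] == "Allow" and matches(s) for s in statements)

# Hypothetical application-level policy (action names and "ARNs" are invented).
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "reports:*", "Resource": "arn:myapp:reports/*"},
        {"Effect": "Deny", "Action": "reports:Delete", "Resource": "*"},
    ],
}

print(is_allowed(policy, "reports:Read", "arn:myapp:reports/q3"))    # True
print(is_allowed(policy, "reports:Delete", "arn:myapp:reports/q3"))  # False
```

Reusing the IAM JSON format this way means your custom app's policies can live alongside the real AWS ones and be reviewed with the same tooling.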
The only way you can manage users and create roles and groups is if you have admin access. Power users can do everything except that.
You can create a group with all the privileges you want to grant, and then create a user that gets its policies from that group. Create the user with programmatic access only, so the app can connect with an access key ID and secret access key via the AWS CLI.
Normally, IAM is used to create and manage AWS users and groups, and to allow or deny their access to AWS resources.
If your Python app is somehow consuming or interfacing with an AWS resource such as S3, then you might want to look into this:
connect-on-premise-python-application-with-aws
The Python application can be uploaded to an S3 bucket. The application runs on a server inside the company's on-premises data center. The focus of this tutorial is on the connection made to AWS.
Consider placing API Gateway in front of your Python app's routes.
Then you could control access using IAM.
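With API Gateway's AWS_IAM authorization, callers sign requests with their AWS credentials, and you grant access with ordinary IAM policies on execute-api:Invoke. A sketch of such a policy (region, account ID, API ID, and the /reports route are all placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "execute-api:Invoke",
      "Resource": "arn:aws:execute-api:us-east-1:123456789012:a1b2c3d4e5/prod/GET/reports"
    }
  ]
}
```

The Python app itself stays unchanged; the IAM decision happens before the request ever reaches it.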
I am in the early stages of writing an AWS app for our users that will run our research algorithms using their AWS resources. For example, our code will need to spin up EC2 instances running our 'worker' app, access RDS databases, and create and access SQS queues. The AWS Java SDK examples (we are writing this in Java) use an AwsCredentials.properties file to store the Access Key ID and Secret Access Key, which is fine for examples, but obviously not acceptable for our users, who would in essence be giving us access to all their resources. What is a clean way to run our system on their behalf? I discovered AWS Identity and Access Management (IAM), which seems to be for this purpose (I haven't got my head around it yet), esp. cross-account access between AWS accounts. This post makes it sound straightforward:
Use the amazon IAM service to create a set of keys that only has
permission to perform the tasks that you require for your script.
http://aws.amazon.com/iam/
However, other posts (e.g., Within IAM, can I restrict a group of users to access/launch/terminate only certain EC2 AMIs or instances?) suggest there are limitations to using IAM with EC2 in particular.
Any advice would be really helpful!
The key limitation with regard to RDS and EC2 is that while you can restrict access to certain API actions, there are no resource-level constraints. For example, with an IAM S3 policy you can restrict a user to performing certain actions only on certain buckets. You can write a policy for EC2 that says a user is allowed to stop instances, but not one that says they can only stop certain instances.
Another option is for them to provide you with temporary credentials via the Security Token Service. A variant on that is to use the new IAM roles service: an instance has a set of policies associated with it, and you don't need to provide an AwsCredentials.properties file because the SDK can fetch credentials from the metadata service.
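For the cross-account case, the usual shape is a role in the customer's account that your account is allowed to assume, with an ExternalId condition to guard against the confused-deputy problem. A sketch of that role's trust policy (the account ID and ExternalId value are placeholders; the attached permissions policy would then scope what your code can actually do):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::123456789012:root" },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": { "sts:ExternalId": "example-external-id" }
      }
    }
  ]
}
```

Your app then calls sts:AssumeRole against that role ARN and uses the returned temporary credentials, so the customer never hands over long-lived keys.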
Finally, one last option might be consolidated billing. If the reason you are using their AWS resources is just the billing, then set up a new account that is billed from their account. The accounts are isolated from each other, so you can't, for example, delete their instances by accident. Equally, you can't access their RDS snapshots and things like that (access to an RDS instance via MySQL, as opposed to the AWS API, would depend on the instance's security group). You can of course combine this with the previous options: they could provide you with credentials that only allow you to perform certain actions within that isolated account.