Hide services from management console

I have set up IAM permissions for a certain group to have read-only access to S3 only. However, the group can still see all the other services in the management console and go into them. As soon as a user tries to do something, a message reads "Not authorised" and so on. I would like this group to see only the one service in the management console.
So when a user from this group logs in, all they see is S3.
How is this possible?

Hiding services from the AWS Management Console is not possible right now, unfortunately. AWS is currently redesigning the console, though, and this might include such options down the road, as per the respective FAQ entry "Why are you changing the console design?":
Our goal is to improve information display, make interactions more consistent, support devices such as tablets, and deliver a customizable experience. You will see these improvements and visual updates rolled out across our services over the coming months. [...] [emphasis mine]
However, at this point the mentioned customizable experience likely only refers to the recently introduced Resource Groups and Tagging for AWS, which allow you to easily create, maintain, and view a collection of resources that share common tags:
[...] By default, the AWS Management Console is organized by AWS service. But with the Resource Groups tool, you can create a custom console that organizes and consolidates the information you need based on your project and the resources you use. If you manage resources in multiple regions, you can create a resource group to view resources from different regions on the same screen. [emphasis mine]
Based on this new cross-region Resource Groups approach, it is indeed possible to create and share a resource group that is constrained to the resource type S3 Buckets (i.e. the initial view would be limited to just S3 buckets). However, just like with the regular console view, this doesn't prevent your users from roaming freely around other areas of the console on their own, i.e. you cannot enforce the desired limitation, only guide users in that direction.
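For completeness, the read-only piece itself is standard IAM; a minimal sketch using the AWS managed policy (the group name s3-read-only is a placeholder):

aws iam attach-group-policy \
    --group-name s3-read-only \
    --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess

This governs what the group can do, but, as outlined above, not which services its members can see in the console.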

How to block storage class pd-extreme for Google Cloud VMs?

I'd like to block the block-storage class pd-extreme for Google Cloud VMs for my colleagues, in order to avoid it being selected accidentally. We use Google VMs as throw-away testing systems, and the class has no purpose for this use case. The costs, however, shoot through the roof when it is used. And let's be honest, "extreme" sounds like something you'd click just to try it. I did it...
I tried setting the quota for "Extreme PD IOPS" to 0, which apparently stands for unlimited ("Don't make me think, Google!"), and to 1, but neither had any effect (even after an hour). It's not the first time that setting quotas hasn't worked for me - and for others, apparently. If you convince me that setting quotas is the only solution, I'll contact Google support.
I still want to be able to use storage classes other than pd-extreme.
An organization policy is a configuration of restrictions. As the organization policy administrator, you can define and set that organization policy on organizations, folders, and projects in order to enforce the restrictions on the specific resource. To define an organization policy, you choose a constraint, which is a particular type of restriction against either a Google Cloud service or a group of Google Cloud services. You configure that constraint with your desired restrictions.
For restricting Extreme persistent disks, you can use the Compute Storage resource use restrictions (Compute Engine disks, images, and snapshots) constraint (constraints/compute.storageResourceUseRestrictions). When this constraint is in place, use of these storage resources is restricted accordingly. Projects, folders, and organizations specified in denied lists must be in the form under:projects/PROJECT_ID, under:folders/FOLDER_ID, or under:organizations/ORGANIZATION_ID.
You can set an organization policy on your organization resource that uses a list constraint to deny access to a particular service. The process is described in the document on how to set an organization policy using the gcloud command-line tool. For instructions on how to view and set organization policies using the Cloud Console, see Creating and Managing Policies.
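As a rough sketch of what that looks like with the gcloud tool (the project and organization IDs below are placeholders, and you need an org policy administrator role such as roles/orgpolicy.policyAdmin):

cat > policy.yaml <<'EOF'
constraint: constraints/compute.storageResourceUseRestrictions
listPolicy:
  deniedValues:
  - under:projects/PROJECT_ID
EOF
gcloud resource-manager org-policies set-policy policy.yaml --organization=ORGANIZATION_ID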

Separating Environments in AWS

Is there a best practice around separating environments in AWS?
I've got a solution that employs the following services:
Lambda
SNS
SQS
DynamoDB
API Gateway
S3
IAM
We're not live yet, but we're getting close. By the time we go live, I'd like proper production, test, and development environments with a "reasonable" amount of isolation between them.
Separate account per environment
Single Account and separate VPC per environment
I read the article AWS NETWORKING, ENVIRONMENTS AND YOU by Charity Majors. I'm down with segmentation via VPC, but I don't know whether all the services in my stack are even VPC-scoped. Here are some of my requirements:
Limit Service Name Collision (for non global services)
Establish a very clear boundary between environments
Eventually, grant permissions at the environment level
I am using an AWS Organization.
P.S. Apologies if this isn't the right forum for the question. If there is something better, just let me know and I'll move it.
I recommend one AWS account per environment. The reasons, in no particular order:
security: managing complex IAM policies to create boundaries within a single account is really hard; conversely, one account per environment forces boundaries by default: you can enable cross-account access, but you have to be very deliberate about it
auditing: tracking access to your different environments is more difficult when all activity happens in the same account
performance: some services don't have the same performance characteristics when operating in a VPC vs. outside one (e.g. Lambda cold starts incur increased latency when operating in a VPC)
naming: instead of using the AWS account ID to identify the environment you're operating in, you have to add prefixes or suffixes to all the resources in the account - this is a matter of preference, but still
compliance: if you ever need to adhere to a compliance standard such as HIPAA, which imposes strict restrictions on how long you can hold on to data and who can access it, it becomes really difficult to prove which data is production and which is test, etc. (this goes back to the security and auditing points above)
cost control: in dev, test, staging environments you may want to give people pretty wide permissions to spin up new resources but put low spending caps to prevent accidental usage spikes; conversely in a production account you'll want restricted ability to spin up resources but higher spending caps; easy to enforce via separate account - not so much in the same account
Did I miss anything? Possibly! But these are the reasons why I would use separate accounts.
By the way - I am NOT advocating against using VPCs. They exist for a reason, and you should definitely use VPCs for network isolation. What I am trying to argue is that for anybody who also uses other services such as DynamoDB, Lambda, SQS, S3, etc., VPCs are not really the way to isolate resources, IMO.
The downsides to one account per stage that I can think of are mostly around continuous deployment if you use tools that are not flexible enough to be able to deploy to different accounts.
Finally, some people like to cite billing as a possible issue, but really, wouldn't you want to know how much money you spend on Production vs. Staging vs. Development?!
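For what it's worth, if you do go the AWS Organizations route, creating the member accounts is a one-liner per environment (the email addresses and account names below are placeholders):

aws organizations create-account --email aws-dev@example.com --account-name dev
aws organizations create-account --email aws-test@example.com --account-name test
aws organizations create-account --email aws-prod@example.com --account-name prod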
Avoid separate accounts for each environment; they bring additional complexity and obstacles when accessing shared resources.
Try rather using:
resource groups
tagging
as recommended by AWS:
https://aws.amazon.com/blogs/startups/managing-resources-across-multiple-environments-in-aws/
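As a sketch of that approach, you could standardize on an Environment tag and build one resource group per environment (the group name and tag values here are illustrative):

aws resource-groups create-group \
    --name dev-resources \
    --resource-query '{"Type":"TAG_FILTERS_1_0","Query":"{\"ResourceTypeFilters\":[\"AWS::AllSupported\"],\"TagFilters\":[{\"Key\":\"Environment\",\"Values\":[\"dev\"]}]}"}'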
Account separation is recommended by the AWS Well-Architected Framework security pillar.

Manage multiple AWS accounts

I would like to know a system by which I can keep track of multiple AWS accounts, somewhere around 130+ accounts, with each account containing around 200+ servers.
I want to know methods to keep track of machine failures, service failures, etc.
I also want to know methods by which I can automatically bring up a machine if the underlying hardware fails or the machine is terminated while running on spot.
I'm open to all solutions including chef/terraform automation, healing scripts etc.
You guys will be saving me a lot of sleepless nights :)
Thanks in advance!!
This is purely my take on implementing your problem statement.
1) For managing and keeping track of multiple AWS accounts, you can use AWS Organizations. This will help you centrally manage all the other 130+ accounts from one root account. You can enable consolidated billing as well.
2) As far as keeping track of failures goes, you may need to customize this according to your requirements. For example, you can build a microservice on top of Docker containers or ECS whose sole purpose is to keep track of failures, generate a report, and push it to S3 on a daily basis. You can further create a dashboard from these reports in S3 using Amazon QuickSight.
There can be another microservice which rectifies the failures. It just depends on how exhaustive and fine-grained you want your implementation to be.
3) Spawning instances when spot instances are terminated can be achieved through simple Auto Scaling configurations. Here are some articles you may want to go through which will give you some ideas:
Using Spot Instances with On-Demand instances
Optimizing Spot Fleet+Docker with High Availability
AWS Organizations is useful for management. You can also look at a multi-account billing strategy and security strategy. A shared-services account holding your IAM users will make things easier.
Regarding tracking failures, you can set up automatic instance recovery using CloudWatch. CloudWatch can also have alarms defined that will email you when something unexpected happens, though setting them up individually could be time-consuming. At your scale, I think you should look into third-party tools.
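For reference, automatic instance recovery is just a CloudWatch alarm on the system status check whose action is the built-in ec2:recover automation; a sketch (the instance ID and region are placeholders):

aws cloudwatch put-metric-alarm \
    --alarm-name recover-i-0123456789abcdef0 \
    --namespace AWS/EC2 \
    --metric-name StatusCheckFailed_System \
    --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
    --statistic Maximum --period 60 --evaluation-periods 2 \
    --comparison-operator GreaterThanThreshold --threshold 0 \
    --alarm-actions arn:aws:automate:us-east-1:ec2:recover

At 130+ accounts you would want to stamp these out with automation rather than create them by hand.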

Is there a way to nuke all AWS resources in an AWS account?

I have an AWS account where multiple EC2 instances, load balancers, target groups, security groups, etc. are set up by multiple owners.
We use Terraform to set this up, but sometimes, due to corruption, the state becomes inconsistent. The current mechanism to recover is to manually destroy all resources in that account owned by a particular owner.
Is there an easy way to nuke all resources in an AWS account belonging to a particular owner?
There is no way to delete all resources in an account owned by a particular user, but there is a way to delete all resources in an account.
You can use aws-nuke, which was created for much the same use case you describe.
First, you need to set an account alias for your account.
Then you must create a config file.
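A minimal config, along the lines of the example in the aws-nuke README (the account IDs below are placeholders; the blocklisted account is a safety net so production can never be nuked by accident):

regions:
- us-east-1
- global
account-blocklist:
- "999999999999" # production account, never touched
accounts:
  "000000000000": {} # the account to nuke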
With the config in place, you can list all the resources that will be deleted using the following command:
aws-nuke -c config/nuke-config.yml --profile aws-nuke-example
Add the --no-dry-run option to the same command to permanently delete all resources.
There are also multiple filter options available such as target, resource type, exclude, etc. that you can leverage to suit your needs.
I agree with the other answer that there is no easy way to delete orphaned resources.
But I see that the original issue is that the Terraform state is corrupted.
You can check out the terraform import feature, which lets you rebuild the state file from existing AWS resources. That way you can connect your config to the resources again.
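For example, to re-attach an existing EC2 instance to a resource block already present in your configuration (the resource address and instance ID here are illustrative):

terraform import aws_instance.example i-0123456789abcdef0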
Short answer: no.
Longer answer: actually, that's also no. There's no built-in capability for this.
The case you're describing is not within the bounds of typical AWS usage... destroying everything in an account -- usually -- should not be easy.
Of course, you could script it fairly trivially by wrapping calls to aws-cli in custom code that iterates through the resources and generates additional requests to destroy them... but if you do, lock that code away, since such a capability is inherently dangerous.
You can delete all the resources you created, but you'll need to automate it; see a sample here:
Creation
https://github.com/jouellnyc/AWS/tree/master/create_aws_vpc2
Deletion
https://github.com/jouellnyc/AWS/blob/master/create_aws_vpc2/delete_lb_and_vpc.sh
Other
I've had some success with cloud-nuke (I only played around with it for a few minutes, not in depth):
https://github.com/gruntwork-io/cloud-nuke
I don't think there is any straightforward way to do it, but to check whether you have any active resources in your account, do the following:
Open the Billing and Cost Management console.
Choose Bills in the navigation pane.
You can see the charges incurred by different services in the Bill details by service section.
You can see the charges incurred in different AWS Regions in the Bill details by account section.
For each service, identify the Regions where the services have incurred charges.
To terminate the identified active resources under different services, do the following:
Open the AWS Management Console.
For Find services, enter the service name.
After opening the service console, terminate all your active resources. Be sure to check each Region where you have allocated resources.
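If you would rather script that check, Cost Explorer exposes the same per-service breakdown via the CLI; a sketch (the dates are placeholders):

aws ce get-cost-and-usage \
    --time-period Start=2023-01-01,End=2023-02-01 \
    --granularity MONTHLY \
    --metrics UnblendedCost \
    --group-by Type=DIMENSION,Key=SERVICE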
If the issue is a corrupted Terraform state, perhaps storing the state in a versioned S3 bucket would help reduce the impact of that.
Use Terraformer to import all resources into Terraform configuration, then do whatever you want:
terraformer import aws --resources="*"
https://github.com/GoogleCloudPlatform/terraformer
Take care of your state file locking, e.g. by using DynamoDB, and enable S3 versioning.
Is there an easy way to nuke all resources in an AWS account
belonging to a particular owner?
Since you are using Terraform, you can use a "blank apply".
It will destroy all the resources in the state file.
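Roughly, the idea is to strip the configuration down to just the provider block and apply; Terraform then plans the destruction of everything still tracked. A sketch, assuming a local state file and the AWS provider:

rm -f *.tf                       # drop every resource definition
cat > main.tf <<'EOF'
provider "aws" {
  region = "us-east-1"           # keep a provider so Terraform can still reach AWS
}
EOF
terraform init
terraform apply                  # proposes to destroy every resource left in state

(Running terraform destroy against the intact configuration reaches the same end state more directly.)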
We use terraform to set this up but sometimes due to corruption, the
state becomes inconsistent.
It's better to use a version control system to avoid drift and inconsistencies in your state file, and to use remote state in order to make sure everyone is on the same page.
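A sketch of such a setup (the bucket, key, and table names are placeholders; enable versioning on the bucket):

cat > backend.tf <<'EOF'
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"    # versioned S3 bucket
    key            = "prod/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"       # DynamoDB table for state locking
  }
}
EOF
terraform init   # migrates existing local state to the S3 backend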

How to differentiate between different AWS Environments - Dev, Test, Stage, Production?

I would want to have different environments in AWS. At first I thought of differentiating environments by tags on AWS resources. But then I cannot restrict users from changing the tags of a machine. What that means is: if I allow them ec2:CreateTags, they can not only create tags but also change the tags of any resource, since I cannot apply a condition to it - say, for example, whether it belongs to a particular VPC or subnet. If I don't allow them the privilege to create tags, then they can launch an instance, but their tags are not applied, and hence any further operation on the instance is not permitted.
If I want to distinguish between environments by VPC ID, then for operations such as ec2:StartInstances I cannot apply a condition to allow the operation only in a specific VPC; I can conditionally allow based on a resource tag, which for the reasons in the previous paragraph is not convincing.
The AWS documentation mentions:
One approach to maintaining this separation was discussed in the Working with AWS Access Credentials, that is, to use different accounts for development and production resources.
So is it possible to have one paying account for several other accounts which are themselves paying accounts? I still don't think multiple accounts just for different environments is a good idea.
How do you generally differentiate among environments for enforcing policies?
Thanks.
Different accounts are the way to go. There are so many places you'll want to create isolation that you'll make yourself crazy trying to do it within one account. Think about it - there are network controls, all the IAM permissions for all the services, access control lists, tags that have the limitations you describe, and on and on. Real isolation comes from putting things in different accounts, for now.
The last thing you want is for some weakness in your dev environment to become a pivot into your production environment - end of story. Consider also the productivity benefit of separating prod and dev accounts... you'll never break a prod system with a mistake or experiment in development.
Consolidated billing is the answer to paying for it all. It's easy to set up and track. If you need more reporting, look into Cloudability.
Where this gets really interesting is in the space of multiple production and multiple dev environments. There are a lot of opinions on isolation there. Some people combine all prod and all dev into two accounts, and some put every prod and dev environment into its own. It all depends on your risk profile. Just don't end up like CloudSpaces.
It is possible to do consolidated billing, where one account is billed for its own usage plus the AWS usage of any other linked account. However, you cannot split that bill (e.g. have the master account pay only for EC2 services on a linked account, while having the linked account pay for its other usage like S3, etc.).
As for differentiating between environments, I've used different security groups for each one (dev, staging, production) as an alternative to tags, but there are limitations when it comes to enforcing policies. The best option to have full policy control is to use different accounts.
I would suggest going with one VPC and using security groups for isolation. As your AWS infra grows, you will need directory services (name servers, a user directory, a VM directory, lookup services, etc.). If you have two VPCs, sharing the directory services will not be easy. Also, if you need a code repository (e.g. GitHub) or build tools (e.g. Jenkins), having three separate VPCs for dev, staging, and production will make things really complicated.