Our company has reached a point where too many demo projects have been launched and left running on their own, with no one actually assigned to them anymore.
How does one set up something that would detect unused projects/resources and send an email to the owning IAM user?
It might be fair to assume that CloudFormation is being used by almost all the projects (generally through a CodeStar, Elastic Beanstalk, or Lambda setup).
If the contacted IAM user shows no reply or activity, perhaps another email could be sent to an administrator notifying them of the situation, too.
I have the same need to monitor all resources, used and unused, and non-compliance in my VPC.
I would go to AWS Config, which, among other features, gives you a view of resource relationships, and to AWS Service Catalog to group authorized resources and track information about them.
Furthermore, AWS Config can be deployed as a CloudFormation stack.
If your developers deploy through CloudFormation, you can dig into a newer feature called CloudFormation Hooks, a compliance feature that inspects stack resources before provisioning; you could then list all resources in DynamoDB.
You can also check this (AWS Organizations - List resources by AWS account?)
Also have a look at aws-nuke, which lists resources for any AWS account (and, with the right configuration, deletes them).
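If you want to experiment with that idea programmatically, here is a minimal sketch (Python/boto3; it assumes the AWS Config recorder is already enabled, and the resource types and the 90-day threshold are arbitrary choices) that lists discovered resources and flags the ones with no recent CloudTrail activity:

import boto3
from datetime import datetime, timedelta, timezone

# Assumptions: AWS Config is recording in this region, and 90 days without
# any CloudTrail event is a reasonable definition of "inactive".
config = boto3.client("config")
cloudtrail = boto3.client("cloudtrail")

CUTOFF = datetime.now(timezone.utc) - timedelta(days=90)

def recently_touched(resource_id):
    """Return True if CloudTrail recorded any event for this resource since CUTOFF."""
    events = cloudtrail.lookup_events(
        LookupAttributes=[{"AttributeKey": "ResourceName", "AttributeValue": resource_id}],
        StartTime=CUTOFF,
        MaxResults=1,
    )
    return bool(events["Events"])

# Walk a couple of resource types tracked by AWS Config and flag the quiet ones.
for resource_type in ["AWS::EC2::Instance", "AWS::Lambda::Function"]:
    paginator = config.get_paginator("list_discovered_resources")
    for page in paginator.paginate(resourceType=resource_type):
        for res in page["resourceIdentifiers"]:
            if not recently_touched(res["resourceId"]):
                # From here you could look up an owner tag and notify via SES/SNS,
                # escalating to an administrator if nothing changes.
                print(f"Possibly unused: {resource_type} {res['resourceId']}")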
I have some experience with AWS and an AWS Developer Associate certification. I have been told that I am being moved to a project where I will be using GCP. How easy/hard would it be to learn GCP with AWS experience? Alternatively, how can I facilitate my entry into GCP with an AWS background?
The fundamentals are similar in AWS, GCP and Azure, although the terminology is different. There are differences of course (for example, subnets in GCP are regional whereas in AWS they belong to AZs), but they're not too difficult to understand once you get into it.
There's a course by Google in Coursera, which is designed for people familiar with AWS - https://www.coursera.org/learn/gcp-fundamentals-aws
The GCP learning resources should also help - https://cloud.google.com/training?hl=en
I think the main difference between AWS and GCP is how projects are managed. I'm referring to Identity and Access Management (IAM) and Resource Manager. In GCP you manage projects in a hierarchical way, using an approach called Resource Hierarchy.
In GCP you always have an Organization, a Project and resources. You might also have Folders. In GCP, basically everything is a resource (like in a REST API). All GCP resources belong to a project, and an individual GCP account can manage multiple projects.
You can manage each GCP project individually, or you can group related projects into folders and manage them from there, or even manage everything from the top-level GCP Organization.
By managing, I mean applying policies: what this resource can do, which accounts can use it.
GCP accounts are sometimes called IAM principals. An IAM principal can be a user account, a Google group (i.e. a bunch of user accounts), or a service account (i.e. an account assigned to a program).
The relationship between one resource (e.g. a GCP project) and the N IAM principals (e.g. two user accounts and one service account) that are granted a given role on it is called an IAM binding. An IAM policy is a set of IAM bindings.
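To make the binding/policy terminology concrete, here is a small illustrative sketch in Python (all project, user, and service-account names are made up) showing the shape of an IAM policy attached to a single project: a list of bindings, each tying one role to a set of principals:

# Illustrative only: the principals and project below are invented.
# A GCP IAM policy is a list of bindings; each binding grants one role
# to a set of principals ("members") on the resource the policy is attached to.
project_policy = {
    "bindings": [
        {
            "role": "roles/viewer",
            "members": [
                "user:alice@example.com",
                "group:researchers@example.com",
            ],
        },
        {
            "role": "roles/editor",
            "members": [
                "serviceAccount:build-bot@my-demo-project.iam.gserviceaccount.com",
            ],
        },
    ]
}

for binding in project_policy["bindings"]:
    print(binding["role"], "->", ", ".join(binding["members"]))

The same policy shape applies at the folder and organization levels, which is what makes the hierarchical management described above possible.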
As for the services AWS, Azure and GCP offer, there is this nice comparison chart.
So to recap, focus on learning IAM and resource hierarchy first. You will need it whatever GCP service you will end up using.
I wanted to know if there is a way to track alerts or audit anything that happens in an AWS account, i.e. who changed what and why. I did find this https://docs.aws.amazon.com/opensearch-service/latest/developerguide/audit-logs.html, where they use a command line call to enable audit logs on an existing domain: aws opensearch update-domain-config --domain-name my-domain --log-publishing-options "AUDIT_LOGS={CloudWatchLogsLogGroupArn=arn:aws:logs:us-east-1:123456789012:log-group:my-log-group,Enabled=true}" but that is specific to Amazon OpenSearch Service, which I believe is only free for 12 months if you haven't used it already. There is also AWS Audit Manager. I am aware there are services that can do this for a fee, but I wanted to know if there are any free options.
From the AWS documentation:
With AWS CloudTrail, you can monitor your AWS deployments in the cloud by getting a history of AWS API calls for your account, including API calls made by using the AWS Management Console, the AWS SDKs, the command line tools, and higher-level AWS services. You can also identify which users and accounts called AWS APIs for services that support CloudTrail, the source IP address from which the calls were made, and when the calls occurred. You can integrate CloudTrail into applications using the API, automate trail creation for your organization, check the status of your trails, and control how administrators turn CloudTrail logging on and off.
AWS Config provides a detailed view of the resources associated with your AWS account, including how they are configured, how they are related to one another, and how the configurations and their relationships have changed over time.
Basically, AWS CloudTrail keeps a log of API calls (requests to AWS to do/change things), while AWS Config tracks how individual configurations have changed over time (for a limited range of resources, e.g. a Security Group rule being changed).
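If the main question is "who changed what", the 90-day CloudTrail event history can be queried directly without setting up a trail. A minimal sketch with Python/boto3 (the event name and time window are only examples):

import json
from datetime import datetime, timedelta, timezone

import boto3

cloudtrail = boto3.client("cloudtrail")

# Example: who called TerminateInstances in the last 7 days?
response = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "TerminateInstances"}],
    StartTime=datetime.now(timezone.utc) - timedelta(days=7),
    EndTime=datetime.now(timezone.utc),
)

for event in response["Events"]:
    record = json.loads(event["CloudTrailEvent"])  # the full event record as JSON
    print(event["EventTime"], event.get("Username"), record.get("sourceIPAddress"))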
I've tried to set up my AWS organization using AWS Landing Zone. This is what I have done:
Deploy the AWS Landing Zone based on the AWS Landing Zone initiation template
Execute the CodePipeline created by the initiation template
Core accounts were created by the CodePipeline, but the build failed while creating the CoreResources
Now I want to execute the CodePipeline again after making some changes to the Manifest.yaml file.
Can someone help me understand how I can delete the created organizational units, i.e. "core" and "application", and the core accounts?
As far as I know, deleting an AWS account from an Organization is not that straightforward, and you have to provide payment and plan details before deleting accounts created by Landing Zone. Plus, even after providing all the required details, AWS won't allow you to delete that account immediately.
Is there any way to delete the organizational units and core accounts created by AWS Landing Zone immediately?
Regarding your immediate issue: there is no way to close the core accounts through the AWS Landing Zone pipeline.
However, you can manually close the created accounts from the AWS Management Console: https://aws.amazon.com/premiumsupport/knowledge-center/close-aws-account/
In general I would recommend using AWS Control Tower instead of the AWS Landing Zone Solution if possible. Control Tower is a Managed Service providing the Landing Zone capabilities without you having to deal with the pipeline and everything else yourself.
AWS Control Tower: https://aws.amazon.com/controltower/
Since April 2022 you can also use Control Tower to set up a multi-account structure in an existing AWS Organization and enroll existing member accounts.
I would like to set up different AWS Identity and Access Management (IAM) users so that if an AWS resource is created by that IAM user, the resource is automatically assigned a specific tag.
For example: if IAM user F creates a new EC2 instance, the instance is automatically tagged as User:MrF. Is it possible to build a custom policy that does this?
My company, GorillaStack, has an open source Lambda function that does exactly that.
The function 'listens' for CloudTrail logs to be delivered and tags the created resource with the ARN of the user that created it. It also supports cross-account tagging, for cases where a central account collects CloudTrail logs for other accounts.
Github: https://github.com/GorillaStack/auto-tag
Blog Post: http://blog.gorillastack.com/gorillastack-presents-auto-tag
It got a shout out at the 2015 re:Invent conference which is pretty cool :)
Hope that helps!
This is not available when using the AWS APIs directly (i.e. there is no way to tell the AWS APIs to tag new resources automatically on your behalf). However, depending on the specifics of your use case, you could work around that limitation by correlating the creating user with the resource via post hoc tagging:
Workaround
You could activate AWS CloudTrail, which records AWS API calls for your account, delivers log files to you, and provides exactly the information you are after:
The recorded information includes the identity of the API caller, the time of the API call, the source IP address of the API caller, the request parameters, and the response elements returned by the AWS service.
Based on that information, a dedicated service of yours could analyze the logs and apply post hoc tags to each resource based on the logged user and the API action that created it. Please see my answer to "Which user launched EC2 instance?" for some caveats/constraints to consider when going down this route.
An even better solution (faster and, I believe, cheaper than parsing through CloudTrail logs yourself) is to use CloudTrail in combination with CloudWatch Events.
The basic concept and the implementation are described (with a diagram) in this article:
https://blogs.aws.amazon.com/security/post/Tx150Z810KS4ZEC/How-to-Automatically-Tag-Amazon-EC2-Resources-in-Response-to-API-Events
The article also describes how to set up an IAM policy that only allows the creator of a resource to perform certain actions (like start/stop, describe, edit, terminate) against it.
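For reference, the core of such a setup is a CloudWatch Events (EventBridge) rule matching the relevant "AWS API Call via CloudTrail" events plus a small Lambda function that tags whatever the event reports as created. A rough sketch in Python; the tag key and the restriction to RunInstances events are my own assumptions, not taken from the article:

import boto3

ec2 = boto3.client("ec2")

def handler(event, context):
    """Tag the EC2 instances reported by a RunInstances event with the caller's ARN.

    Assumes a CloudWatch Events rule matching CloudTrail's RunInstances events
    that invokes this function with the standard 'detail' payload.
    """
    detail = event["detail"]
    creator_arn = detail["userIdentity"]["arn"]
    instance_ids = [
        item["instanceId"]
        for item in detail["responseElements"]["instancesSet"]["items"]
    ]
    if instance_ids:
        ec2.create_tags(
            Resources=instance_ids,
            Tags=[{"Key": "Creator", "Value": creator_arn}],  # "Creator" is an arbitrary key
        )
    return {"tagged": instance_ids}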
I would choose AWS Config. Create a rule that automatically tags resources on creation. No cost, works across multiple accounts. Great for enforcing compliance. https://aws.amazon.com/about-aws/whats-new/2019/03/aws-config-now-supports-tagging-of-aws-config-resources/
Currently there is no such feature in IAM. If what you need is to allow/deny based on user names, you could use variables in your policy to allow or deny access based on naming conventions, e.g.:
...
"Resource":"arn:aws:dynamodb:us-east-!:123456789:table/ItemsCatalog_${aws:username}"
...
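For context, a fragment like that sits inside an ordinary policy statement. A hedged sketch of attaching such a statement with Python/boto3 (the user name, policy name, account ID, and actions are placeholders):

import json

import boto3

iam = boto3.client("iam")

policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:PutItem"],
            # ${aws:username} is resolved to the calling user's name at evaluation time.
            "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/ItemsCatalog_${aws:username}",
        }
    ],
}

iam.put_user_policy(
    UserName="MrF",  # placeholder user
    PolicyName="per-user-dynamodb-table",  # placeholder name
    PolicyDocument=json.dumps(policy_document),
)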
I am in the early stages of writing an AWS app for our users that will run our research algorithms using their AWS resources. For example, our code will need to spin up EC2 instances running our 'worker' app, access RDS databases, and create and access SQS queues. The AWS Java SDK examples (we are writing this in Java) use an AwsCredentials.properties file to store the Access Key ID and Secret Access Key, which is fine for examples, but obviously not acceptable for our users, who would in essence be giving us access to all their resources. What is a clean way to go about running our system on their behalf? I discovered AWS Identity and Access Management (IAM), which seems to be for this purpose (I haven't got my head around it yet), especially cross-account access between AWS accounts. This post makes it sound straightforward:
Use the amazon IAM service to create a set of keys that only has permission to perform the tasks that you require for your script.
http://aws.amazon.com/iam/
However, other posts (e.g., Within IAM, can I restrict a group of users to access/launch/terminate only certain EC2 AMIs or instances?) suggest there are limitations to using IAM with EC2 in particular.
Any advice would be really helpful!
The key limitation with regard to RDS and EC2 is that, while you can restrict access to certain API actions, there are no resource-level constraints. For example, with an IAM S3 policy you can restrict a user to performing only certain actions on certain buckets. You can write a policy for EC2 that says a user is allowed to stop instances, but not one that says they can only stop certain instances.
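To illustrate that contrast as it stood at the time of this answer, compare the two statements below (the bucket name and actions are placeholders): the S3 statement can be scoped to a specific bucket, while the EC2 statement can only allow the action outright:

import json

# Illustrative only: the bucket name is made up.
s3_statement = {
    "Effect": "Allow",
    "Action": ["s3:GetObject", "s3:PutObject"],
    "Resource": "arn:aws:s3:::research-results-bucket/*",  # scoped to one bucket
}

# EC2, at the time of this answer: the action can be allowed,
# but not restricted to particular instances.
ec2_statement = {
    "Effect": "Allow",
    "Action": ["ec2:StopInstances"],
    "Resource": "*",
}

print(json.dumps({"Version": "2012-10-17", "Statement": [s3_statement, ec2_statement]}, indent=2))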
Another option is for them to provide you with temporary credentials via the Security Token Service. A variant on that is to use the new IAM roles feature, where an instance has a set of policies associated with it; you don't need to provide an AwsCredentials.properties file because the SDK can fetch credentials from the instance metadata service.
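A minimal sketch of that STS route, shown with Python/boto3 for brevity (the Java SDK exposes the same AssumeRole operation); the role ARN, external ID, and session name are placeholders that the customer would define on their side:

import boto3

sts = boto3.client("sts")

# The customer creates a role in *their* account that trusts your account,
# ideally locked down with an external ID; all values below are placeholders.
assumed = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/ResearchWorkerRole",
    RoleSessionName="research-run-42",
    ExternalId="shared-secret-from-customer",
    DurationSeconds=3600,
)
creds = assumed["Credentials"]

# Use the temporary credentials to act in the customer's account.
ec2 = boto3.client(
    "ec2",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print(len(ec2.describe_instances()["Reservations"]), "reservations visible")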
Finally, one last option might be consolidated billing. If the reason you are using their AWS resources is just the billing, then set up a new account which is billed from their account. The accounts are isolated from each other, so you can't, for example, delete their instances by accident. Equally, you can't access their RDS snapshots and things like that (access to an RDS instance via MySQL, as opposed to the AWS API, would depend on the instance's security group). You can of course combine this with the previous options: they could provide you with credentials that only allow you to perform certain actions within that isolated account.