Find out which AWS regions have resources - amazon-web-services

Is there a quick way to find out which regions have any resources in my account? I'm specifically using the AWS .NET SDK, but the answer likely applies to the other AWS SDKs and the CLI, since they all seem to be wrappers around the REST API. I can obviously run all the List* methods across all regions, but I'm thinking there must be a more efficient way to decide whether to query an entire region or not. Maybe something in billing, but it also needs to be relatively up to date, say within the last 5 minutes or so. Any ideas?

There is no single way to list all resources in an AWS account or in multiple regions.
Some people say that Resource Groups are a good way to list resources, but I don't think they include "everything" in an account.
AWS Config does an excellent job of keeping track of resources and their history, but it is also limited in the types of resources it tracks.
My favourite way to list resources is nccgroup/aws-inventory: Discover resources created in an AWS account. It's a simple HTML/JavaScript page that makes all the 'List' calls for you and shows the results in a nicely formatted list.
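Whatever tool you use, the underlying pattern is the same: fan the List calls out across all regions in parallel and keep the regions that return anything. Here is a minimal sketch of that pattern; `list_resources_in_region` is a hypothetical stand-in for whatever SDK call you would actually make per region, stubbed out so the example is self-contained.

```python
from concurrent.futures import ThreadPoolExecutor

REGIONS = ["us-east-1", "eu-west-1", "ap-southeast-2"]

def list_resources_in_region(region):
    # Stub: a real implementation would call the AWS API for this region.
    fake_inventory = {"us-east-1": ["i-abc123", "vol-def456"], "eu-west-1": []}
    return fake_inventory.get(region, [])

def regions_with_resources(regions):
    # Query every region concurrently; map() preserves input order.
    with ThreadPoolExecutor(max_workers=8) as pool:
        results = pool.map(list_resources_in_region, regions)
    return [region for region, found in zip(regions, results) if found]

print(regions_with_resources(REGIONS))  # ['us-east-1']
```

Running the calls concurrently matters here, since a serial sweep of every List API in every region is what makes the brute-force approach slow in the first place.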

Related

Is there a generic approach to count aws cloud resources on an account?

I need to count the resources that are part of an AWS account in Go, where a resource is anything that has a price tag on it and can be counted, e.g.
S3 buckets
EC2 instances
RDS instances
ELBs
...
State, region, type, and tags are not relevant for this kind of overview; just the raw numbers.
I could of course use the Go SDK and query each corresponding service to get the instances and sum them up, but this would mean lots of boilerplate code and lots of time to write it.
My question: is there any more generic approach to get the item counts for most services (fine if it doesn't work for all) that can be used with the Go SDK, so I don't have to recode the same query for each service manually?
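One generic approach that fits the "fine if it doesn't work for all" constraint is the Resource Groups Tagging API: its GetResources call returns ARNs across many services in one place, though only for resource types that support tagging. Since an ARN embeds the service name, counting by service is just string parsing. Sketched here in Python for brevity (the same split works with the Go SDK); the sample ARNs are made up.

```python
from collections import Counter

def count_by_service(arns):
    # The service name is the third colon-separated field of an ARN:
    # arn:partition:service:region:account-id:resource
    return Counter(arn.split(":")[2] for arn in arns)

sample = [
    "arn:aws:s3:::my-bucket",
    "arn:aws:ec2:us-east-1:123456789012:instance/i-0abc",
    "arn:aws:ec2:us-east-1:123456789012:instance/i-0def",
    "arn:aws:rds:us-east-1:123456789012:db:mydb",
]
print(count_by_service(sample))  # ec2: 2, s3: 1, rds: 1
```

The trade-off: anything untaggable (and anything never tagged, depending on how you call the API) is invisible to this approach, so it gives a lower bound rather than a complete census.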

View permissions used by AWS for a resource?

When building Lambdas, for example using CloudFormation, it is easy to allow a little too much by granting * on resources, and to end up hardening/tightening your security afterwards. Is it somehow possible to view which permissions are actually in use, and by that figure out the minimal set of permissions that is needed?
This is a popular request. One option is to leverage Netflix's Aardvark and RepoKid. Another is to ensure that CloudTrail Logs are enabled and then find a way to query them (for example using Athena).
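The CloudTrail route boils down to extracting which API actions were actually called. Each CloudTrail record carries an `eventSource` (the service endpoint) and an `eventName` (the API call), which together roughly map to an IAM action. A hedged sketch, using a trimmed, made-up record set (real log files have many more fields, and `eventName` does not always match the IAM action name exactly):

```python
import json

# Hypothetical, trimmed CloudTrail log; real files are {"Records": [...]}
# with far more fields per record.
log_file = json.dumps({
    "Records": [
        {"eventSource": "s3.amazonaws.com", "eventName": "GetObject"},
        {"eventSource": "s3.amazonaws.com", "eventName": "GetObject"},
        {"eventSource": "sqs.amazonaws.com", "eventName": "SendMessage"},
    ]
})

def used_actions(raw):
    records = json.loads(raw)["Records"]
    # "s3.amazonaws.com" + "GetObject" -> "s3:GetObject"
    return sorted({f'{r["eventSource"].split(".")[0]}:{r["eventName"]}'
                   for r in records})

print(used_actions(log_file))  # ['s3:GetObject', 'sqs:SendMessage']
```

The resulting action list is a starting point for a tightened policy, not a guarantee: data events (like S3 object-level calls) are only logged if you enable them, so absence from the trail does not always mean absence of use.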
Have you tried:
AWS Policy Simulator
I have not seen anything exactly as you described, but I believe this tool would ultimately give you what you need, and it will also make you more and more familiar with the policies in IAM.

AWS CloudFormation resource limit of 200

I have an app with a lot of resources (a bunch of DynamoDB tables, lambda functions, etc) and apparently I've run into the hard limit of 200 resources. The specific error is:
Template format error: Number of resources, 204, is greater than the maximum allowed, 200
The error message is pretty clear, but I'd like to know what my options are. Worst case, I can split the app into several pieces. Are there any alternative strategies?
You can create nested stacks, which also have the advantages of simpler testing, improved re-use, and the ability to use different roles.
Common practice is to separate different layers into different stacks. For example, build the VPC in one stack, deploy the back-end in another stack, and the front-end in a third.
See: Use Nested Stacks to Create Reusable Templates and Support Role Specialization
The resource limit is now 500 if that helps!
AWS CloudFormation now supports increased limits on five service quotas
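A minimal sketch of the nested-stack layout, assuming the child templates have already been uploaded to an S3 bucket (the bucket name and template files here are hypothetical). Each child stack gets its own resource budget, which is what works around the limit:

```yaml
# Parent template: each layer lives in its own child template,
# so each child counts its resources against its own limit.
Resources:
  NetworkStack:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: https://s3.amazonaws.com/my-templates/network.yaml

  BackendStack:
    Type: AWS::CloudFormation::Stack
    DependsOn: NetworkStack
    Properties:
      TemplateURL: https://s3.amazonaws.com/my-templates/backend.yaml
      Parameters:
        # Pass outputs of one layer into the next.
        VpcId: !GetAtt NetworkStack.Outputs.VpcId
```

Wiring layers together via `Outputs`/`!GetAtt` keeps the dependency explicit, which is also what enables the per-layer testing and role separation mentioned above.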
I faced the same problem with the Serverless Framework. Here is what I do:
Create a microservice for each module, such as authentication, user management, SMS gateway, and notifications. That helps to manage both the code and the AWS resources.
At the end, expose the API by creating an AWS custom domain and attaching the CloudFormation stack to it.
I followed a blog post that helped, and Serverless also suggests this flow.

Access management for AWS-based client-side SDK

I'm working on client-side SDK for my product (based on AWS). Workflow is as follows:
User of SDK somehow uploads data to some S3 bucket
User somehow saves command on some queue in SQS
One of the workers on EC2 polls the queue, executes the operation, and sends a notification via SNS. This point seems clear.
As you might have noticed, there are quite some unclear points about access management here. Is there any common practice to provide access to AWS services (S3 and SQS in this case) for 3rd-party users of such SDK?
Options which I see at the moment:
We create an IAM user for users of the SDK, with access to certain S3 resources and write permission on SQS.
We create an additional server/layer between AWS and the SDK that writes messages to SQS on behalf of users, and that provides one-time, short-lived links for the SDK to write data directly to S3.
The first seems OK, but I'm worried I'm missing some obvious issues here. The second seems to have a scalability problem: if this layer goes down, the whole system stops working.
P.S.
I tried my best to explain the situation, but I'm afraid the question might still lack some context. If you want more clarification, don't hesitate to write a comment.
I recommend you look closely at Temporary Security Credentials in order to limit customer access to only what they need, when they need it.
Keep in mind with any solution to this kind of problem, it depends on your scale, your customers, and what you are ok exposing to your customers.
With your first option, letting the customer directly use IAM or temporary credentials exposes knowledge to them that AWS is under the hood (since they can easily see requests leaving their system). It has the potential for them to make their own AWS requests using those credentials, beyond what your code can validate & control.
Your second option is better since it addresses this: by making your server the only point of contact for AWS, it allows you to perform input validation etc. before sending customer-provided data to AWS. It also lets you replace the implementation easily without affecting customers. On availability/scalability concerns, that's what EC2 (and similar services) are for.
Again, all of this depends on your scale and your customers. For a toy application where you have a very small set of customers, simpler may be better for the purposes of getting something working sooner (rather than building & paying for a whole lot of infrastructure for something that may not be used).
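The "one-time short-living link" idea in option two is essentially what S3 presigned URLs and STS temporary credentials give you. To show the mechanism without an AWS dependency, here is a stdlib-only sketch of the server-side layer: the server validates the request, then issues a token that is bound to one object key and expires quickly. All names here are illustrative, and in production you would use the real presigned-URL/STS machinery rather than rolling your own signing.

```python
import hashlib
import hmac
import time

SECRET = b"server-side-secret"  # kept on the server, never shipped to clients

def issue_upload_token(object_key, ttl_seconds=300):
    # The server validates the request first, then hands out a token that
    # is only good for this key and only for a short window.
    expires = int(time.time()) + ttl_seconds
    msg = f"{object_key}:{expires}".encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return {"key": object_key, "expires": expires, "signature": sig}

def verify_upload_token(token):
    if token["expires"] < time.time():
        return False  # expired: the short-lived window has closed
    msg = f'{token["key"]}:{token["expires"]}'.encode()
    expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["signature"])

token = issue_upload_token("uploads/customer-42/data.bin")
print(verify_upload_token(token))  # True
```

Because the signature covers both the key and the expiry, a customer cannot redirect the token at a different object or extend its lifetime, which is the same property that limits the blast radius of leaked temporary credentials.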

Managing AWS Storage Gateway snapshots via API

I'm trying to write a tool which manages Amazon AWS snapshots automatically according to some very simple rules. These snapshots are created on a schedule set up in Amazon Storage Gateway, and show up as you'd expect in the web interface for that tool.
The Storage Gateway API only covers snapshots as far as the snapshot schedule goes; EC2 is the API that deals with snapshots themselves. The problem is that if I call DescribeSnapshots through that API I see many hundreds of snapshots, but none of them have volume IDs that match the volume IDs of the snapshots created from Storage Gateway. They're just random public snapshots that I'm not interested in.
So I guess Storage Gateway snapshots are different somehow, but is there a way to use any of Amazon's APIs to list and manipulate them?
EDIT: Interestingly, they do show up in the EC2 web control panel.
Here's a top tip: the snapshots are there, just make sure you're looking for them with the right function. In this case, my novitiate in Clojure is still in effect and I tried to use contains? to search for an item in a sequence. Again. But it doesn't work like that: it looks for keys in collections, which means that on a sequence it wants a numeric index and will tell you whether there's an item at that index or not. Even more fun, pass it a sequence and a string and it won't bat an eyelid; it just returns false.
Oh and Amazon's not always consistent with capitalisation of volume IDs either, so make sure you lowercase everything before you compare it. That bit's actually relevant to AWS rather than me stubbornly misinterpreting the documentation of a core function.
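The fix, independent of language, is to do a real membership/equality search and to normalise case first. A small illustration (in Python rather than Clojure; the dictionary field names loosely follow the shape of a DescribeSnapshots response, and the IDs are made up):

```python
def snapshots_for_volume(snapshots, volume_id):
    # Lowercase both sides before comparing: the API is not always
    # consistent about capitalisation of volume IDs.
    wanted = volume_id.lower()
    return [s for s in snapshots if s["VolumeId"].lower() == wanted]

snaps = [
    {"SnapshotId": "snap-1", "VolumeId": "VOL-0ABC"},
    {"SnapshotId": "snap-2", "VolumeId": "vol-0abc"},
    {"SnapshotId": "snap-3", "VolumeId": "vol-0def"},
]
matches = snapshots_for_volume(snaps, "vol-0abc")
print([s["SnapshotId"] for s in matches])  # ['snap-1', 'snap-2']
```

Without the `.lower()` on both sides, `snap-1` would be silently missed, which is exactly the kind of false negative described above.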