AWS: limitations of using VPCs to manage environments? [closed]

I've seen some companies manage multiple environments in AWS by using more accounts.
If a VPC is sorta like a virtual datacenter, it seems to me that using VPCs and IAM permissions should be enough to manage different environments.
What are some objective limitations of using a single AWS Account together with VPCs + IAM permissions for managing environments (dev, test, staging, prod)?
Example: as another SO user has pointed out, AWS sets certain limits/quotas on a per-account basis, so excessive use of resources by one environment (VPC) can effectively impact another environment. To me this is as objective as you can get.
From personal experience, I have seen that it's sometimes easier for people in big organizations to figure out billing if the environments are in different accounts. While this has more to do with the way the company operates than with AWS itself, the company certainly feels it's an objective limitation.
So I'm trying to gather a list of objective limitations that would lead a company to manage environments in some way other than simply IAM + VPCs.
Another way of looking at the question: think of the recurrent environment-management tasks/processes you perform on a regular basis, and list those you could not do if you were only using VPCs + IAM.

From a network perspective: no
From a permissions model perspective: yes
Using an account per environment is the AWS-recommended approach for larger organizations because it enforces strict boundaries between environments. In a single-account setup, a cross-environment call can happen all too easily (e.g. messing up the prod DynamoDB table instead of dev), whereas in a multi-account setup you need different credentials for each environment.
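As a rough illustration of that credential boundary, here is a minimal boto3 sketch (the account ID, role name, and session name are hypothetical): with separate accounts, touching prod requires an explicit STS AssumeRole call, not just pointing at a different resource name.

```python
import boto3

# Multi-account setup: reaching prod from dev-side tooling requires an
# explicit STS AssumeRole call; default credentials cannot "accidentally"
# hit the wrong environment. ARN and session name are hypothetical.
sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/prod-deployer",
    RoleSessionName="prod-maintenance",
)["Credentials"]

# Only this explicitly created session can see the prod DynamoDB tables.
prod_dynamodb = boto3.client(
    "dynamodb",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```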
Apart from the permissions model, there is also an advantage in having limits per account (= per environment) instead of per company. E.g. the Lambda concurrency limit is enforced at the account level, so in a shared account your dev environment can mess up your prod environment.
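Within a single shared account, the closest mitigation is reserving concurrency per function so dev traffic cannot drain the whole pool; a minimal sketch (the function name is hypothetical):

```python
import boto3

lambda_client = boto3.client("lambda")

# Reserved concurrency carves a guaranteed slice out of the account-wide
# pool for this function -- but the pool itself is still shared with dev.
# The function name is a hypothetical example.
lambda_client.put_function_concurrency(
    FunctionName="prod-order-processor",
    ReservedConcurrentExecutions=200,
)
```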
Last but not least, naming can also be a good reason to have an account per environment. E.g. parameters in the Parameter Store have to be unique per account, so with multiple accounts you can use the same parameter names in every environment without clashing. A similar thing is true for many other resources, e.g. CloudFormation stack names.
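To make the naming point concrete, here is a small SSM Parameter Store sketch (parameter names and values are hypothetical): in one shared account every environment needs its own prefix, while with one account per environment the same name works everywhere.

```python
import boto3

ssm = boto3.client("ssm")

# Single-account workaround: every environment needs its own prefix,
# and every consumer must know which prefix to read.
ssm.put_parameter(Name="/dev/db-url", Value="dev-db.example.internal",
                  Type="String", Overwrite=True)
ssm.put_parameter(Name="/prod/db-url", Value="prod-db.example.internal",
                  Type="String", Overwrite=True)

# Account-per-environment: the same name works in every account, so
# application code stays identical across dev/test/staging/prod.
ssm.put_parameter(Name="/app/db-url", Value="dev-db.example.internal",
                  Type="String", Overwrite=True)
```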

Related

AWS EC2 Cost comparison in one place? [closed]

We currently have more than twenty AWS accounts, each with 6-10 on-demand EC2 instances (both Linux and Windows, of various sizes and types), around 100 instances altogether. I'm looking at cost-saving options such as Compute Savings Plans, EC2 Instance Savings Plans, and Reserved Instances, but I'm unable to compare all the different options and their estimates side by side.
Compute and EC2 estimates are given as recommendations under Billing -> Cost Explorer, but you have to go into each account, select an option (e.g. Compute or EC2 savings), then a payment option, then a 1- or 3-year term, and only then does it display the estimates.
I want to see all 100 instances and their prices on one page if possible, as below:
under a Compute Savings Plan for 1 and 3 years with full, partial, or no upfront payment
under an EC2 Instance Savings Plan for 1 and 3 years with full, partial, or no upfront payment
under Reserved Instances for 1 and 3 years with full, partial, or no upfront payment
Is there any easier way to get this done?
This is probably the best you can get in terms of what you are looking for:
https://calculator.s3.amazonaws.com/index.html
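If you can script it, the Cost Explorer API also exposes the same Savings Plans recommendations the console shows, which lets you collect all accounts and all term/payment combinations into one table. A hedged sketch, assuming one CLI profile per account (profile names are hypothetical):

```python
import itertools
import boto3

# Hypothetical: one CLI profile per AWS account.
PROFILES = ["account-1", "account-2"]  # ... extend to all ~20 accounts

OPTIONS = list(itertools.product(
    ["COMPUTE_SP", "EC2_INSTANCE_SP"],                  # plan type
    ["ONE_YEAR", "THREE_YEARS"],                        # term
    ["NO_UPFRONT", "PARTIAL_UPFRONT", "ALL_UPFRONT"],   # payment option
))

for profile in PROFILES:
    # Cost Explorer is served from us-east-1.
    ce = boto3.Session(profile_name=profile).client("ce", region_name="us-east-1")
    for plan_type, term, payment in OPTIONS:
        rec = ce.get_savings_plans_purchase_recommendation(
            SavingsPlansType=plan_type,
            TermInYears=term,
            PaymentOption=payment,
            LookbackPeriodInDays="THIRTY_DAYS",
        )
        summary = rec["SavingsPlansPurchaseRecommendation"].get(
            "SavingsPlansPurchaseRecommendationSummary", {})
        print(profile, plan_type, term, payment,
              summary.get("EstimatedMonthlySavingsAmount"))
```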
From What is AWS Compute Optimizer? - AWS Compute Optimizer:
AWS Compute Optimizer is a service that analyzes the configuration and utilization metrics of your AWS resources. It reports whether your resources are optimal, and generates optimization recommendations to reduce the cost and improve the performance of your workloads. Compute Optimizer also provides graphs showing recent utilization metric history data, as well as projected utilization for recommendations, which you can use to evaluate which recommendation provides the best price-performance trade-off. The analysis and visualization of your usage patterns can help you decide when to move or resize your running resources, and still meet your performance and capacity requirements.
Compute Optimizer provides a console experience, and a set of APIs that allows you to view the findings of the analysis and recommendations for your resources across multiple AWS Regions. You can also view findings and recommendations across multiple accounts, if you opt in the management account of an organization. The findings from the service are also reported in the consoles of the supported services, such as the Amazon EC2 console.
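As a rough sketch of what that API access looks like, assuming Compute Optimizer is already opted in for the account (and for the organization, if you want cross-account findings):

```python
import boto3

co = boto3.client("compute-optimizer")

# One page of EC2 rightsizing findings; pass the returned nextToken back
# in to page through larger fleets.
resp = co.get_ec2_instance_recommendations()
for rec in resp["instanceRecommendations"]:
    options = rec.get("recommendationOptions", [])
    best = options[0]["instanceType"] if options else "n/a"
    print(rec["instanceArn"], rec["finding"],
          rec["currentInstanceType"], "->", best)
```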
https://ec2instances.github.io/ is a tiny open-source tool that pulls pricing data from the AWS pricing JSON API and presents it as a table.
Disclaimer: I've built this (for myself)

Billing for multiple users of google cloud APIs on one GCP project [closed]

I have one GCP project and multiple users/service accounts that use the Google Cloud APIs (e.g. Cloud Text-to-Speech, Cloud Speech-to-Text, etc.). In the metrics overview for each API it is possible to see how often an API has been called and by whom, but in the billing overview it is not possible to identify which user/service account caused a specific amount of cost. So my question is: is it possible to identify the different users/service accounts in the actual billing costs?
Normally, one would use labels to distinguish between different users, but unfortunately labels are not supported for those APIs (see the list of currently supported services: https://cloud.google.com/resource-manager/docs/creating-managing-labels#label_support).
Additionally, each user/service account has a separate Cloud Run instance connected to it that runs a server listening for incoming requests and forwarding them to the corresponding API. Would this approach somehow facilitate the mapping from user to costs within one GCP project?
Metrics and billing are two different things.
Google provides metrics so that you can follow and understand the usage of your services in your project.
Billing is at the project level: whatever the user/service account, YOU pay. How you rebill the service to your users is not Google's concern.
So the solution here is to use the metrics to get the data and then distribute the cost according to each user's API usage.
Similarly, Cloud Run labels will give you details in the BigQuery billing export, but Google will still charge you for all your services together.
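Where labels are supported (e.g. on the Cloud Run services fronting each user), the billing export can then be grouped by label. A minimal sketch, assuming the export is already flowing into BigQuery (the dataset/table name and the "customer" label key are hypothetical):

```python
from google.cloud import bigquery

client = bigquery.Client()

# Table name is hypothetical; the billing export table is named
# gcp_billing_export_v1_<BILLING_ACCOUNT_ID> in the dataset you configured.
query = """
SELECT
  l.value AS customer,
  SUM(cost) AS total_cost
FROM `my_project.billing.gcp_billing_export_v1_XXXXXX`,
  UNNEST(labels) AS l
WHERE l.key = 'customer'
GROUP BY customer
ORDER BY total_cost DESC
"""
for row in client.query(query).result():
    print(row.customer, row.total_cost)
```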
Ultimately, if the services/customers are independent, you could create a project per customer, and thus have one free tier per project (when applicable) and, above all, one bill per project, and thus per customer!

AWS best way to handle high volume transactions [closed]

I am writing a system that has an extremely high volume of transactions (CRUD) and I am working with AWS. What considerations must I keep in mind, given that none of the data should be lost?
I have done some research, and it suggests using SQS queues to make sure that data is not lost. What other backup, redundancy, and quick-processing considerations should I keep in mind?
If you want to create a system that is highly resilient as well as redundant, I would advise you to read the AWS Well-Architected Framework. It goes into more detail than anyone can provide on Stack Overflow.
Regarding individual technologies:
Since your workload is transactional, you should look at using a relational data store. I'd recommend taking a look at Amazon Aurora; it has built-in features like auto-scaling read replicas and multi-master support. While you might be expecting large numbers, with autoscaling you only pay for what you use.
Try to decouple your APIs: have a dumb validation layer before handing off to your backend if you can help it. Technologies like SQS (as you mentioned) help with decoupling when combined with Lambda.
SQS guarantees at-least-once delivery, so if your system must not write duplicates you'll want to account for idempotency in your application.
Also use a dead-letter queue (DLQ) to handle any failed actions.
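To make the idempotency and at-least-once points concrete, here is a minimal sketch of a dedup gate built from a DynamoDB conditional write keyed on the SQS message ID (the queue URL and table name are hypothetical):

```python
import json
import boto3
from botocore.exceptions import ClientError

sqs = boto3.client("sqs")
ddb = boto3.client("dynamodb")

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/orders"  # hypothetical
DEDUP_TABLE = "processed-messages"  # hypothetical table with partition key "pk"

def process(body):
    """Placeholder for the actual business logic."""
    print("processing", body)

def handle(message):
    try:
        # The conditional put is the dedup gate: a redelivered message
        # fails this write and is skipped instead of being processed twice.
        ddb.put_item(
            TableName=DEDUP_TABLE,
            Item={"pk": {"S": message["MessageId"]}},
            ConditionExpression="attribute_not_exists(pk)",
        )
    except ClientError as e:
        if e.response["Error"]["Code"] == "ConditionalCheckFailedException":
            return  # duplicate delivery: already handled
        raise
    process(json.loads(message["Body"]))

while True:
    resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10,
                               WaitTimeSeconds=20)  # long polling
    for msg in resp.get("Messages", []):
        handle(msg)
        # Deleting only after success lets failures retry (and eventually
        # land in the DLQ via the queue's redrive policy).
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```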
Ensure any resources residing in your VPC are spread across availability zones.
Use S3, AWS Backup (for EC2/EBS), and RDS snapshots to ensure data is backed up. Most other services have some sort of backup functionality you can enable.
Use autoscaling wherever possible to ensure you're reducing costs.
Build your infrastructure with an IaC tool (CloudFormation or Terraform), and provision resources with a tool like Ansible, Puppet, or Chef. Try to follow a pre-baked AMI workflow so that it is quick to return to the base server state.
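As a small illustration of the IaC step, a hedged sketch deploying a CloudFormation template with boto3 (the stack name and template file are hypothetical; in practice you would more likely drive this from a CI pipeline):

```python
import boto3

cf = boto3.client("cloudformation")

# Keeping infrastructure in a versioned template makes it quick to rebuild
# or roll back to a known base state. "infra.yaml" is a hypothetical file.
with open("infra.yaml") as f:
    template_body = f.read()

cf.create_stack(
    StackName="high-volume-app",  # hypothetical stack name
    TemplateBody=template_body,
    Capabilities=["CAPABILITY_NAMED_IAM"],  # needed if the template creates IAM roles
)
cf.get_waiter("stack_create_complete").wait(StackName="high-volume-app")
```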

How do I manage multiple clients on AWS? [closed]

I am new to Amazon AWS, and as a freelancer I am not clear on how I would support dozens of clients using AWS. I average 5 clients per month. How would I handle billing and set up instances for multiple clients? I have been using GoDaddy for a long time, and they have a pro-user dashboard that manages all of that.
You should create a separate AWS account for each client. If you are handling the AWS payments, then you could use AWS Organizations to combine the accounts into a single bill. You will be able to split the billing report by account to see exactly what each client owes you for AWS services.
This will also allow you to hand over an AWS account to a client, or provide their developers with access if they need it, without compromising your other clients in any way.
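For what it's worth, account creation can be scripted once the Organization exists; a minimal boto3 sketch (the email address and account name are hypothetical):

```python
import time
import boto3

org = boto3.client("organizations")

# Each client gets their own member account, consolidated under your
# Organization's single bill. Email and account name are hypothetical.
status = org.create_account(
    Email="aws+acme@yourcompany.example",
    AccountName="client-acme",
)["CreateAccountStatus"]

# Account creation is asynchronous; poll until it finishes.
while status["State"] == "IN_PROGRESS":
    time.sleep(5)
    status = org.describe_create_account_status(
        CreateAccountRequestId=status["Id"]
    )["CreateAccountStatus"]
print(status["State"], status.get("AccountId"))
```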
If you are the only person who can access the AWS services (e.g. the management console, creating resources, etc.), then @MarkB's suggestion is sound: create separate AWS accounts under an Organization, then bill the customers for their usage.
Another benefit of this method is that you can charge your clients a fixed amount per month, or an uplift (e.g. an extra 20% on top of AWS costs), for your service of managing their account and taking care of payments.
If, however, your clients have the ability to create resources in AWS themselves, you might want to have them set up their own AWS accounts so that AWS bills them directly. Otherwise your clients might create resources that cost additional money and then claim they didn't realise the impact of what they were doing, leaving you with a bill they don't want to pay.

AWS EC2 billed hours per instance in a given time period [closed]

My CIO is asking me for a monthly per-instance breakdown of EC2 charges, as some of our EC2 instances are run on behalf of specific customers. Does anyone know how to accomplish this?
I can use Java, Python, or the AWS command line tools if necessary, but a reporting tool or service is preferable.
You need to tag resources associated with a particular customer (for example EC2 instances, RDS) and enable the Detailed Billing Report.
Log into the My Account area of the console and go to the Billing Preferences area. Enable Monthly Report, Programmatic Access and Detailed Billing Report.
AWS will start to aggregate your billing to a nominated S3 bucket as CSV files and break it down by tags. There will be a charge for the storage on S3.
Aggregation by tags only starts from when you turn it on, so you won't get the full month until the next report.
More details here and here for how to set up and analyse the data.
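As a rough sketch of the analysis step once the report lands in S3: tag the instances, activate the tag for cost allocation, then group the downloaded CSV by the tag column. The "Customer" tag, the file name, and the exact cost column are assumptions; depending on the report variant the cost column may be "Cost" or "BlendedCost".

```python
import csv
from collections import defaultdict

# Hypothetical file name; detailed billing reports land in your S3 bucket
# as CSVs. Activated cost allocation tags appear as "user:<TagName>" columns.
FILENAME = "123456789012-aws-billing-detailed-line-items-2015-01.csv"

costs = defaultdict(float)
with open(FILENAME, newline="") as f:
    reader = csv.DictReader(f)
    # Column naming differs between report variants (assumption).
    cost_col = "Cost" if "Cost" in reader.fieldnames else "BlendedCost"
    for row in reader:
        try:
            cost = float(row.get(cost_col) or 0)
        except ValueError:
            continue  # skip summary/total rows that aren't numeric
        costs[row.get("user:Customer") or "untagged"] += cost

for customer, total in sorted(costs.items(), key=lambda kv: -kv[1]):
    print(f"{customer}: ${total:,.2f}")
```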
Tag the instances; they will be reflected in your bills based on your tags.
There is a fairly new tool open-sourced by Netflix called Ice, which allows you to visualize the billing details retrieved via the AWS reports generated into your S3 buckets.
You might also want to check the answers over at serverfault to a similar question.
First, enable detailed billing export to an S3 bucket (see here).
Then I wrote a simplistic server in Python that retrieves your detailed bill and breaks it down per service type and usage type (see this GitHub repo).
Thus you can check anytime what your costs are and which services cost you the most etc.
If you tag your EC2 instances, S3 buckets etc, they will also show up on a dedicated line.
I work for Cloudability, and our tool is built to do exactly that. It collects AWS billing and usage data as well as your tags from all of your accounts and puts it into a custom reporting interface. It's completely point-and-click so you don't have to mess around with writing scripts or building spreadsheets.
A lot of companies are using it to do exactly what you're talking about ... split up costs/usage by instance, department, project, client, etc..
You can check it out at https://cloudability.com