How do I manage multiple clients on AWS? [closed] - amazon-web-services

I am new to AWS and, as a freelancer, I am not clear on how I would support dozens of clients on AWS. I average 5 clients per month. How would I handle billing and set up instances for multiple clients? I have been using GoDaddy for a long time, and they have a pro user dashboard that manages all of that.

You should create a separate AWS account for each client. If you are handling the AWS payments, then you could use AWS Organizations to combine the accounts into a single bill. You will be able to break the billing report down by account to see exactly what each client owes you for AWS services.
This will also allow you to hand over an AWS account to a client, or provide their developers with access if they need it, without compromising your other clients in any way.
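If you go down the Organizations route, member accounts can even be created programmatically. A minimal sketch with boto3, assuming Organizations is already enabled and this runs with management-account credentials; the client names and emails are placeholders:

```python
# Sketch: creating one member account per client under an AWS Organization.
# Assumes Organizations is enabled and this runs in the management account.
import boto3

org = boto3.client("organizations")

clients = [
    {"name": "client-acme", "email": "aws+acme@example.com"},      # placeholder
    {"name": "client-globex", "email": "aws+globex@example.com"},  # placeholder
]

for c in clients:
    # Each CreateAccount call is asynchronous; poll via the returned request id.
    resp = org.create_account(Email=c["email"], AccountName=c["name"])
    request_id = resp["CreateAccountStatus"]["Id"]
    status = org.describe_create_account_status(
        CreateAccountRequestId=request_id
    )["CreateAccountStatus"]
    print(c["name"], status["State"])  # IN_PROGRESS / SUCCEEDED / FAILED
```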

If you are the only person who can access the AWS services (e.g. use the Management Console, create resources, etc.), then MarkB's suggestion above is sound: create separate AWS accounts under an Organization, then bill the customers for their usage.
Another benefit of this method is that you can charge your clients a fixed amount per month, or an uplift (e.g. an extra 20% on top of AWS costs), for your service of managing their account and taking care of payments.
If, however, your clients have the ability to create resources in AWS, you might want to have them set up the AWS accounts so that AWS bills them directly. This is because your clients might create resources that cost additional money and might then claim that they didn't realise the impact of what they were doing, leaving you with a bill that they don't want to pay.
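For the uplift model, the per-client numbers can be pulled straight from Cost Explorer. A sketch, assuming it runs in the management account with ce:GetCostAndUsage permission; the dates and the 20% factor are illustrative:

```python
# Sketch: last month's cost per linked account via Cost Explorer, with a
# hypothetical 20% management uplift applied on top.
import boto3

ce = boto3.client("ce")

resp = ce.get_cost_and_usage(
    TimePeriod={"Start": "2021-01-01", "End": "2021-02-01"},  # illustrative
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "LINKED_ACCOUNT"}],
)

UPLIFT = 1.20  # assumed 20% on top of AWS costs
for group in resp["ResultsByTime"][0]["Groups"]:
    account_id = group["Keys"][0]
    cost = float(group["Metrics"]["UnblendedCost"]["Amount"])
    print(f"{account_id}: AWS ${cost:.2f} -> invoice ${cost * UPLIFT:.2f}")
```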

Related

AWS EC2 Cost comparison in one place? [closed]

We currently have more than twenty AWS accounts, and under each account 6-10 on-demand EC2 instances (both Linux and Windows, of different sizes and types), altogether around 100 instances. I am looking at cost-saving options such as Compute Savings Plans, EC2 Instance Savings Plans, and Reserved Instances, but I am unable to compare all the different options and their estimates side by side.
Compute and EC2 estimates are given as recommendations through Billing -> Cost Explorer, but you need to go through each account, then select an option (e.g. Compute or EC2 savings), then the payment option, then a tenure of 1 or 3 years, and only then does it display estimates.
I want to see all 100 instances and their prices on one page if possible, as below:
Under a Compute Savings Plan for 1 and 3 years, with full, partial, or no upfront payment
Under an EC2 Instance Savings Plan for 1 and 3 years, with full, partial, or no upfront payment
Under Reserved Instances for 1 and 3 years, with full, partial, or no upfront payment
Is there an easier way to get this done?
This is probably the best you can get in terms of what you are looking for:
https://calculator.s3.amazonaws.com/index.html
From What is AWS Compute Optimizer? - AWS Compute Optimizer:
AWS Compute Optimizer is a service that analyzes the configuration and utilization metrics of your AWS resources. It reports whether your resources are optimal, and generates optimization recommendations to reduce the cost and improve the performance of your workloads. Compute Optimizer also provides graphs showing recent utilization metric history data, as well as projected utilization for recommendations, which you can use to evaluate which recommendation provides the best price-performance trade-off. The analysis and visualization of your usage patterns can help you decide when to move or resize your running resources, and still meet your performance and capacity requirements.
Compute Optimizer provides a console experience, and a set of APIs that allows you to view the findings of the analysis and recommendations for your resources across multiple AWS Regions. You can also view findings and recommendations across multiple accounts, if you opt in the management account of an organization. The findings from the service are also reported in the consoles of the supported services, such as the Amazon EC2 console.
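If you'd rather script it than click through each account, the same recommendations are exposed through the Cost Explorer API. A sketch that loops over every plan type, term, and payment option in one pass, assuming it runs in the organization's management account with Cost Explorer enabled; for Reserved Instances there is an analogous get_reservation_purchase_recommendation call:

```python
# Sketch: pulling Savings Plans purchase recommendations for every
# term/payment combination, so the estimates sit side by side instead of
# being clicked through per account.
import boto3

ce = boto3.client("ce")

for sp_type in ("COMPUTE_SP", "EC2_INSTANCE_SP"):
    for term in ("ONE_YEAR", "THREE_YEARS"):
        for payment in ("NO_UPFRONT", "PARTIAL_UPFRONT", "ALL_UPFRONT"):
            resp = ce.get_savings_plans_purchase_recommendation(
                SavingsPlansType=sp_type,
                TermInYears=term,
                PaymentOption=payment,
                LookbackPeriodInDays="THIRTY_DAYS",
            )
            summary = resp["SavingsPlansPurchaseRecommendation"].get(
                "SavingsPlansPurchaseRecommendationSummary", {}
            )
            print(sp_type, term, payment,
                  summary.get("EstimatedMonthlySavingsAmount"))
```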
https://ec2instances.github.io/ is a tiny open-source tool that pulls pricing data from the AWS JSON pricing API and presents it as a table.
Disclaimer: I've built this (for myself).

Billing for multiple users of Google Cloud APIs on one GCP project [closed]

I have one GCP project and multiple users/service accounts that use the Google Cloud APIs (e.g. Cloud Text-to-Speech, Cloud Speech-to-Text, etc.). In the metrics overview for each API it is possible to see how often an API has been called by whom, but in the billing overview it is not possible to identify which user/service account caused a specific amount of cost. So my question is: is it possible to identify the different users/service accounts in the actual billing costs?
Normally, one would use labels to distinguish between different users, but unfortunately labels are not supported for those APIs (see the list of currently supported services: https://cloud.google.com/resource-manager/docs/creating-managing-labels#label_support).
Additionally, each user/service account has a separate Cloud Run instance connected to it, which runs a server listening for incoming requests and forwards them to the corresponding API. Would this approach somehow facilitate the mapping from user to costs within one GCP project?
Metrics and billing are two different things.
Google provides metrics so you can follow and understand the usage of the services in your project.
Billing is at the project level: whatever the user or service account, you pay. How you re-bill the service to your users is not Google's concern.
So the solution here is to use the metrics to get the data, and then distribute the cost according to the API usage.
Similarly, a Cloud Run label will give you details in the BigQuery billing export, but Google will still charge you for all your services.
Ultimately, if the services/customers are independent, you could create a project per customer, and thus have one free tier per project (where applicable) and, above all, one bill per project, and thus per customer!
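To make the Cloud Run label route concrete, here is a sketch of querying the BigQuery billing export for cost per label. The dataset/table name and the label key "client" are placeholders, and the labels must already be set on each Cloud Run service:

```python
# Sketch: cost per Cloud Run label from the BigQuery billing export.
from google.cloud import bigquery

client = bigquery.Client()

query = """
SELECT l.value AS client, ROUND(SUM(cost), 2) AS total_cost
FROM `my-project.billing.gcp_billing_export_v1_XXXXXX`,  -- placeholder table
     UNNEST(labels) AS l
WHERE l.key = 'client'  -- placeholder label key
GROUP BY client
ORDER BY total_cost DESC
"""

for row in client.query(query).result():
    print(row.client, row.total_cost)
```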

AWS best way to handle high volume transactions [closed]

I am writing a system that has an extremely high volume of transactions (CRUD), and I am working with AWS. What are the considerations I must keep in mind, given that none of the data should be lost?
I have done some research, and the advice is to use SQS queues to make sure that data is not lost. What other backup, redundancy, and quick-processing considerations should I keep in mind?
If you want to create a system that is highly resilient, whilst also being redundant, I would advise you to read the AWS Well-Architected Framework. This will go into more detail than a person can provide on Stack Overflow.
Regarding individual technologies:
If you're transactional like you said, then you should look at using a relational data store. I'd recommend taking a look at Amazon Aurora: it has built-in features like auto scaling of read replicas and multi-master support. Whilst you might be expecting large volumes, by using autoscaling you will only pay for what you use.
Try to decouple your APIs: have a dumb validation layer before handing off to your backend if you can help it. Technologies like SQS (as you mentioned) help with decoupling when combined with Lambda.
SQS guarantees at-least-once delivery, so if your system must not write duplicates you'll want to account for idempotency in your application (see the sketch after this list).
Also use a dead letter queue (DLQ) to handle any failed actions.
Ensure any resources residing in your VPC are spread across availability zones.
Use S3, AWS Backup (for EC2), and RDS snapshots to ensure data is backed up. Most other services have some sort of backup functionality you can enable.
Use autoscaling wherever possible to ensure you're reducing costs.
Build any infrastructure using an IaC tool (CloudFormation or Terraform), and do any provisioning of resources via a tool like Ansible, Puppet, or Chef. Try to follow a pre-baked AMI workflow to ensure that it is quick to return to the base server state.
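Here is the idempotency sketch referenced above: a Lambda handler for SQS that uses a DynamoDB conditional write to drop messages it has already processed. The table name and the process function are assumptions for illustration:

```python
# Sketch: idempotent SQS consumer. A conditional put on a DynamoDB table
# ensures each message id is processed at most once despite at-least-once
# delivery; failures are re-raised so SQS retries and eventually DLQs them.
import boto3
from botocore.exceptions import ClientError

table = boto3.resource("dynamodb").Table("processed-messages")  # placeholder

def handler(event, context):
    for record in event["Records"]:  # SQS batch delivered to Lambda
        message_id = record["messageId"]
        try:
            # Succeeds only the first time this message id is seen.
            table.put_item(
                Item={"pk": message_id},
                ConditionExpression="attribute_not_exists(pk)",
            )
        except ClientError as e:
            if e.response["Error"]["Code"] == "ConditionalCheckFailedException":
                continue  # duplicate delivery: skip it
            raise  # anything else goes back to SQS / the DLQ
        process(record["body"])  # your actual business logic

def process(body):
    ...  # placeholder for the real work
```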

AWS: limitations of using VPCs to manage environments? [closed]

I've seen some companies manage multiple environments in AWS by using more accounts.
If a VPC is sorta like a virtual datacenter, it seems to me that using VPCs and IAM permissions should be enough to manage different environments.
What are some objective limitations of using a single AWS Account together with VPCs + IAM permissions for managing environments (dev, test, staging, prod)?
Example: as another SO user has pointed out, AWS sets certain limits/quotas on a per-account basis, so excessive use of resources by one environment (VPC) would effectively impact another environment. To me this is as objective as you can get.
From personal experience, I have seen that it is sometimes easier for big organizations to figure out billing if the environments are in different accounts. While this limitation has more to do with the way the company operates, the company certainly feels it is an objective one.
So I'm trying to gather a list of these objective limitations that would lead a company to manage environments in some way other than simply through IAM + VPCs.
Another way of looking at the question would be, think of the recurrent environment management tasks/processes that you perform on a regular basis and then list those you could not do if you were only using VPCs + IAM.
From a network perspective: no
From a permissions model perspective: yes
Using an account per environment is the AWS-recommended approach for larger organizations because it enforces strict boundaries between environments. In a single-account setup, a cross-environment call is easy to make by accident (e.g. messing up the DynamoDB table of prod instead of dev), whereas in a multi-account setup you need different credentials to do so.
Apart from the permissions model, there is also an advantage in having limits per account (= per environment) instead of per company. E.g. the Lambda concurrency limit is enforced at the account level, so in a shared account your dev environment can starve your prod environment.
Last but not least, naming can also be a good reason to have an account per environment. E.g. variables in the Parameter Store have to be unique per account; using multiple accounts, you can use the same parameter names in every environment without clashing. A similar thing is true for many resources, e.g. CloudFormation stacks.
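A sketch of what the account boundary buys you in practice: with one account per environment, any tooling that touches prod has to assume a role in the prod account explicitly, rather than merely forgetting a table-name suffix. The role ARN and table name are placeholders:

```python
# Sketch: reaching prod resources requires an explicit cross-account
# AssumeRole, which cannot happen by accident from dev credentials.
import boto3

sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/prod-operator",  # placeholder
    RoleSessionName="prod-maintenance",
)["Credentials"]

# Only this deliberately created session can touch prod resources.
prod = boto3.Session(
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
prod_table = prod.resource("dynamodb").Table("orders")  # placeholder table
```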

website with fluctuating traffic [closed]

I have a web application with very fluctuating traffic: I'm talking about 30 to 40 users daily up to thousands of people simultaneously. It's a ticketing app, so this kind of behavior is here to stay, and I want to make a strategic choice. I don't want to buy a host with a high configuration because it's just going to be sitting around for most of the time. We're running a Node.js server, so we usually run low on RAM. My question is this: what are my options, and how difficult is it to go from a normal VPS to something like Microsoft Azure, Google Cloud, or AWS?
It's difficult to be specific without knowing more about your application architecture, but both AWS Lambda and Google App Engine offer a 'serverless architecture' and support Node.js. Serverless architectures allow you to host code directly rather than running servers and associated infrastructure. Scaling is handled for you by the services, costs are based on consumption, and you can configure constraints and alerts to prevent racking up huge unexpected bills. In both cases you would need to front the services with additional Google or AWS services to make them accessible to customers, but they offer a great way to scale and pay only for what you need.
A first step is to offload static content to Amazon S3 (or a similar service). Such services will handle any load and will lessen the load on your web server.
If the load goes up/down gradually (e.g. over the course of 30 minutes), you can use Auto Scaling to add/remove Amazon EC2 servers based upon load metrics. For example, you probably don't need many servers at night.
However, for handling of spiky traffic, rewriting an application as Serverless will make it highly resilient, highly scalable and most likely a lot cheaper too!
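If you go the Auto Scaling route mentioned above, a target-tracking policy is usually the simplest starting point. A sketch with boto3, assuming an existing Auto Scaling group; the group name and target value are placeholders:

```python
# Sketch: a target-tracking scaling policy that keeps average CPU of the
# group around 50%, adding/removing EC2 instances as traffic fluctuates.
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",  # placeholder group name
    PolicyName="keep-cpu-at-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,  # placeholder target
    },
)
```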