AWS CodePipeline limitations on a single account

We are planning to use AWS CodePipeline, hosting it on a single AWS account; going forward, the pipeline count will reach roughly 500. Does AWS impose a limit on how many pipelines can be hosted on a single account?
Do we need a separate account to host all these pipelines, or can we host them on the AWS account in which the application is running? What are the best practices?

You can see the limits pertaining to CodePipeline at https://docs.aws.amazon.com/codepipeline/latest/userguide/limits.html.
As of now, there is a soft limit of 300 pipelines per region per account. If you hit that number, you should be able to request an increase by following the link in that document.
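If you want to check the currently applied value for your own account, here is a minimal boto3 sketch using the Service Quotas API; it matches quotas by name rather than asserting a specific quota code:

```python
# Minimal sketch: look up CodePipeline quotas with the Service Quotas API.
# Assumes boto3 is configured with credentials for the target account/region.
import boto3

quotas = boto3.client("service-quotas")

paginator = quotas.get_paginator("list_service_quotas")
for page in paginator.paginate(ServiceCode="codepipeline"):
    for quota in page["Quotas"]:
        # Match by name rather than quota code, and print every
        # pipeline-related quota along with whether it is adjustable.
        if "pipeline" in quota["QuotaName"].lower():
            print(f'{quota["QuotaName"]}: {quota["Value"]} '
                  f'(adjustable: {quota["Adjustable"]})')
```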

As mentioned in another answer, the default limit for pipelines per account per region is 300. This limit can be raised on request.
While you can run more than 300 pipelines per account, you may also start running into related limits, such as IAM roles per account, CloudWatch Events rules per account, and so on. You can get these limits raised too, but the complexity of dealing with all of this starts to add up.
My personal recommendation would be to split things across multiple accounts so that there are about 300 pipelines per account at most. If you have multiple teams or multiple departments, splitting accounts by team/department can be a good idea anyway.
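If you do split across accounts, a quick way to see how close each one is to the limit is to count pipelines with list_pipelines. A minimal boto3 sketch; the profile names are hypothetical stand-ins for your per-account credentials:

```python
# Minimal sketch: count CodePipeline pipelines in each account/profile.
# The profile names below are hypothetical; substitute your own.
import boto3

profiles = ["team-a", "team-b"]  # hypothetical AWS CLI profiles, one per account

for profile in profiles:
    session = boto3.Session(profile_name=profile)
    cp = session.client("codepipeline")
    count = 0
    paginator = cp.get_paginator("list_pipelines")
    for page in paginator.paginate():
        count += len(page["pipelines"])
    print(f"{profile}: {count} pipelines")
```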

Related

What are the true maximum values for AWS quotas?

Is there any official or unofficial documentation of the true maximums for all AWS quotas?
I am new to AWS and am trying to figure out the maximum values for certain quotas.
For example, the default quota for S3 Access Points is a maximum of 1,000 per account, but the AWS quota console says it is adjustable, and the docs suggest I can request a quota increase:
You can create a maximum of 1,000 access points per AWS account per Region. If you need more than 1,000 access points for a single account in a single Region, you can request a service quota increase. For more information about service quotas and requesting an increase, see AWS Service Quotas in the AWS General Reference.
I'd like to know what the true maximums are across the board for IAM and S3 resources, to ease the design of features I'm working on, without having to file increase requests for resources I may not actually use in case the limits I need can't be granted.
After discussing with AWS support, I learned that some quota changes aren't reflected in this console at this time (e.g., DynamoDB quota changes).
I haven't tried it, but aws-limit-checker may show the real limits.
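For a programmatic view, the Service Quotas API exposes each quota's default value and an Adjustable flag; that tells you whether a quota can be raised at all, though not the internal hard ceiling, which generally only comes out of a support case. A minimal boto3 sketch for S3:

```python
# Minimal sketch: list S3 service quotas and whether each is adjustable.
# Note: the API tells you *if* a quota can be raised, not its hard ceiling.
import boto3

quotas = boto3.client("service-quotas")

paginator = quotas.get_paginator("list_aws_default_service_quotas")
for page in paginator.paginate(ServiceCode="s3"):
    for q in page["Quotas"]:
        print(f'{q["QuotaName"]}: default={q["Value"]}, '
              f'adjustable={q["Adjustable"]}')
```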

AWS SFTP from AWS Transfer Family

I am planning to spin up an AWS managed SFTP server. The AWS documentation says I can create up to 20 users. Can I configure 20 different buckets for those 20 users and assign separate privileges? Is this a possible configuration?
All I am looking for is exposing the same endpoint to different vendors, with each vendor having access to a different S3 bucket to upload their files into.
I'd appreciate your thoughts. Thanks.
Setting up separate buckets and AWS Transfer instances for each vendor is a best practice for workload separation. I would recommend setting up a custom URL in Route53 for each of your vendors rather than attempting to consolidate on a single URL (that isn't natively supported).
https://docs.aws.amazon.com/transfer/latest/userguide/requirements-dns.html
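For what it's worth, pointing a vendor-specific hostname at a Transfer endpoint is a one-call Route53 change. A minimal boto3 sketch; the hosted zone ID, hostname, and server endpoint are hypothetical placeholders:

```python
# Minimal sketch: point a vendor-specific hostname at the Transfer endpoint.
# Hosted zone ID, domain name, and server endpoint below are hypothetical.
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0000000EXAMPLE",  # hypothetical hosted zone
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "sftp-vendor-a.example.com",  # hypothetical hostname
                "Type": "CNAME",
                "TTL": 300,
                "ResourceRecords": [
                    # hypothetical Transfer server endpoint
                    {"Value": "s-1234567890abcdef0.server.transfer."
                              "us-east-1.amazonaws.com"}
                ],
            },
        }]
    },
)
```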
While setting up separate AWS Transfer Family instances will work, it comes at a higher cost: you are billed from the time you create a server until you delete it, even while it is stopped, at $0.30 per hour, which is roughly $216 per month per server.
The other way is to create different users (one per vendor) with different home directories (one per vendor) and lock down permissions through the IAM role for each user (there is also provision to use a scope-down policy, along with AD). If you are using service-managed users, see https://docs.aws.amazon.com/transfer/latest/userguide/service-managed-users.html.
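Here is a minimal boto3 sketch of that second approach: one service-managed user per vendor, each locked to its own home directory. The server ID, role ARN, bucket path, and key are hypothetical placeholders:

```python
# Minimal sketch: one Transfer Family user per vendor, each with its own
# home directory. ServerId, Role ARN, bucket, and key below are hypothetical.
import boto3

transfer = boto3.client("transfer")

transfer.create_user(
    ServerId="s-1234567890abcdef0",  # hypothetical Transfer server
    UserName="vendor-a",
    Role="arn:aws:iam::123456789012:role/vendor-a-sftp-role",  # hypothetical
    HomeDirectory="/vendor-a-bucket/uploads",  # hypothetical bucket/prefix
    SshPublicKeyBody="ssh-rsa AAAA... vendor-a-key",  # hypothetical key
)
```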

How to run Terraform plan for multiple AWS accounts, review, and then apply to all accounts

Currently we have about 30 AWS accounts (and growing), and we have a script that plans them one by one; we then look at each plan and apply it if it looks correct before moving on to the next account. I am wondering if there is a way to get the plans for all accounts up front and, if they all look good, kick off the apply for all accounts at once, so we don't have to go one by one each time.
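One way to structure this is a two-phase wrapper: run terraform plan for every account first, saving each plan to a file, then apply the saved plans only after you have reviewed them all. A minimal Python sketch, assuming one Terraform working directory per account under ./accounts/ (your layout and credential handling may differ):

```python
# Minimal sketch: plan all accounts first, then apply the saved plans after
# review. Assumes one Terraform working directory per account under
# ./accounts/ and that credentials are handled by each directory's providers.
import subprocess
from pathlib import Path

account_dirs = sorted(d for d in Path("accounts").iterdir() if d.is_dir())

# Phase 1: plan everything, saving each plan to a file for later apply.
for d in account_dirs:
    subprocess.run(["terraform", "init", "-input=false"], cwd=d, check=True)
    subprocess.run(
        ["terraform", "plan", "-input=false", "-out=tfplan"], cwd=d, check=True
    )

input("Review the plans above, then press Enter to apply all accounts... ")

# Phase 2: apply exactly the plans that were reviewed.
for d in account_dirs:
    subprocess.run(
        ["terraform", "apply", "-input=false", "tfplan"], cwd=d, check=True
    )
```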

Service limits for accounts in an AWS organization

When you create accounts in an AWS organization, does each account have its own service limits?
e.g. Lambda has a limit of 1,000 concurrent executions per account. If I create 2 accounts in an AWS organization, will I have 1,000 concurrent executions per account? (2,000 concurrent executions in total; I know it won't simply sum up to 2,000, so this is an oversimplification.)
I'm pretty sure this is the case, but I couldn't find any written statement for this.
The service limits are the same as for any standalone account; there is no change to, nor any consolidation of, the number of resources provided for a given service.
Only the billing is consolidated, under the master account of the AWS organization.
You can find this in the following document:
https://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html
If you want to increase the concurrency limit, there are two possible approaches (a sketch of the function-level one follows below):
Account Level
Function Level
More details are given here:
https://docs.aws.amazon.com/lambda/latest/dg/concurrent-executions.html
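For the function-level approach, reserved concurrency is set per function through the Lambda API; the account-level limit itself is raised via a limit increase request instead. A minimal boto3 sketch with a hypothetical function name:

```python
# Minimal sketch: reserve concurrency for one function (function level).
# The account-level limit is raised via a limit increase request, not the
# Lambda API. Function name and value below are hypothetical.
import boto3

lam = boto3.client("lambda")

lam.put_function_concurrency(
    FunctionName="my-function",         # hypothetical function name
    ReservedConcurrentExecutions=100,   # carve 100 out of the account pool
)
```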

CloudWatch Events rule limits

What is the maximum number of CloudWatch Events rules I can create on my AWS account? I might have a lot of different rules that will invoke Lambda functions on a schedule. Is it unlimited?
The basic limits are documented at http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/cloudwatch_limits.html - currently 50 rules per account.
If you need more, reach out through your AWS contact and these can be expanded.
This is no longer 50; it has been increased to 100 per region per account.
As per this link:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/cloudwatch_limits_cwe.html
And as mentioned by johnny, this can be increased further on request (if Amazon approves the request).
After talking to the AWS CloudWatch team, I found out that the rule limit can be increased as needed.
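For reference, each scheduled rule that invokes a Lambda function counts against this per-region quota. A minimal boto3 sketch of that wiring; the rule name, function name, and ARNs are hypothetical:

```python
# Minimal sketch: a scheduled CloudWatch Events (EventBridge) rule that
# invokes a Lambda function. Rule name, function name, and ARN are
# hypothetical placeholders.
import boto3

events = boto3.client("events")
lam = boto3.client("lambda")

rule = events.put_rule(
    Name="every-5-minutes",            # hypothetical rule name
    ScheduleExpression="rate(5 minutes)",
    State="ENABLED",
)

# Allow CloudWatch Events to invoke the function.
lam.add_permission(
    FunctionName="my-function",        # hypothetical function
    StatementId="allow-events-every-5-minutes",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn=rule["RuleArn"],
)

events.put_targets(
    Rule="every-5-minutes",
    Targets=[{
        "Id": "invoke-my-function",
        # hypothetical function ARN
        "Arn": "arn:aws:lambda:us-east-1:123456789012:function:my-function",
    }],
)
```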
If you're willing to use a non-AWS service, you might check out Microsoft Azure. Azure offers a good job scheduler that doesn't impose any limits; you could use that service to invoke your Lambda functions.