Why is ec2.describe_regions() not returning all AWS regions via Boto3? - amazon-web-services

I'm trying to enumerate all AWS regions available to me in Python.
AWS documentation suggests the following method for EC2:
import boto3

ec2 = boto3.client('ec2')
# Retrieves all regions/endpoints that work with EC2
response = ec2.describe_regions()
print('Regions:', response['Regions'])
However, running it raises an exception:
botocore.exceptions.NoRegionError: You must specify a region.
When I do specify a region in the boto3.client() call, I get 11 of the 18 available regions.
Apart from the apparent mistake in the AWS documentation, and the odd logic of requiring a region just to list regions, how do I get around this?

The AWS docs are technically correct.
ec2.describe_regions() retrieves all regions that are 'available to you'.
In clearer terms, this means the response will only include regions that are enabled for your account, and will therefore exclude any regions that are disabled within your account.
While not on the same page, the documentation for describe_regions explicitly states this:
Describes the Regions that are enabled for your account, or all Regions.
You most likely have 15 regions disabled within your account, which is why not all 26 regions (excluding the 2 GovCloud regions) are being returned.
As you've discovered, setting the AllRegions parameter to True will return all regions regardless of their status within your account. Note, however, that just because the API now returns them all does not mean you can interact with them.
P.S. I agree that the AWS docs could be improved, perhaps by rewording 'Retrieves all regions/endpoints that work with EC2' to 'Retrieves all enabled regions/endpoints within your account that work with EC2'. This is the source for the page you've linked; feel free to open a pull request suggesting an improvement.

Found it - all that is required is to add AllRegions=True to describe_regions():
response = ec2.describe_regions(AllRegions=True)
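For completeness, here is a minimal sketch of filtering the full region list down to the regions you can actually call. The sample response dict below is illustrative (region names and statuses chosen for the example, not real API output); in practice it would come from boto3.client('ec2', region_name='us-east-1').describe_regions(AllRegions=True):

```python
# Hypothetical sample shaped like a describe_regions(AllRegions=True) response.
# In practice:
#   ec2 = boto3.client('ec2', region_name='us-east-1')
#   response = ec2.describe_regions(AllRegions=True)
response = {
    "Regions": [
        {"RegionName": "us-east-1", "OptInStatus": "opt-in-not-required"},
        {"RegionName": "eu-west-1", "OptInStatus": "opt-in-not-required"},
        {"RegionName": "af-south-1", "OptInStatus": "not-opted-in"},
        {"RegionName": "me-south-1", "OptInStatus": "opted-in"},
    ]
}

def usable_regions(response):
    """Regions you can actually make API calls against: those that are
    enabled by default or that the account has explicitly opted into."""
    return [
        r["RegionName"]
        for r in response["Regions"]
        if r["OptInStatus"] in ("opt-in-not-required", "opted-in")
    ]

print(usable_regions(response))  # the not-opted-in region is excluded
```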

Related

AWS Config shows both ConfigurationItemsRecorded and ConfigurationRecorderInsufficientPermissionFailure in CloudWatch Metrics

I am trying to figure out why AWS Config is able to record some ResourceTypes in one of our AWS accounts but not in another.
CloudWatch Metrics show only ConfigurationRecorderInsufficientPermissionFailure in one account, but both ConfigurationItemsRecorded and ConfigurationRecorderInsufficientPermissionFailure in the other account for the same ResourceTypes, and the second account also records the changes. Config was set up with the same stack in both accounts, and both have the same policies.
Here are the screenshots of the same:
Notice AWS::EC2::Subnet has both the metrics and config records the respective changes as well.
Ignore the regions; we use 4 different regions, and the behaviour is the same in all regions in the respective accounts.
How can I figure out why this is the case?
Both setups (and rules) are handled centrally by an account I don't have access to, so I can't manage individual ResourceTypes.
Just need to figure out where to look to replicate the conditions in both accounts.

Copy EBS Snapshots to another region

Is there any way to copy EBS snapshots from one region to another in AWS (other than the Console option and AWS Lambda)? I tried using AWS Lambda and boto3, but only 20 snapshots can be in the pending state when you perform the copy_snapshot operation. I have close to 5,000 snapshots in us-east-1 and want to copy them to us-west-1. Kindly suggest.
Even if there were another tool, the limit of 20 concurrent copies would still apply, as it would use the same API (potentially through a different SDK) that your Lambda function uses and that backs the Console.
That being said, even though it's listed as not adjustable on the EBS Service Limits page, according to this announcement you [can raise a support request for a higher limit][2].
[2]: https://aws.amazon.com/about-aws/whats-new/2020/04/amazon-ebs-increases-concurrent-snapshot-copy-limits-to-20-snapshots-per-destination-region/#:~:text=You%20can%20now%20copy%20up,(US)%20and%20China).
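One way to work within the limit is to copy in batches of at most 20 and wait for each batch to drain before starting the next. A minimal sketch: the batching helper below is plain Python, while the actual boto3 copy_snapshot/polling calls are shown only as comments (the snapshot IDs are placeholders):

```python
def batched(items, size=20):
    """Split a list into chunks of at most `size` items, so that no more
    than `size` snapshot copies are ever pending at once."""
    return [items[i:i + size] for i in range(0, len(items), size)]

snapshot_ids = [f"snap-{n:04d}" for n in range(45)]  # placeholder IDs

for batch in batched(snapshot_ids, size=20):
    for snap_id in batch:
        # In practice, with a client created in the destination region:
        #   ec2 = boto3.client('ec2', region_name='us-west-1')
        #   ec2.copy_snapshot(SourceRegion='us-east-1',
        #                     SourceSnapshotId=snap_id)
        pass
    # ...then poll describe_snapshots until every copy in this batch has
    # left the 'pending' state before starting the next batch.
```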

AWS SFTP from AWS Transfer Family

I am planning to spin up an AWS managed SFTP server. The AWS documentation says I can create up to 20 users. Can I configure 20 different buckets for those 20 users and assign separate privileges? Is this a possible configuration?
All I am looking for is exposing the same endpoint to different vendors, with each vendor having access to a different S3 bucket to upload their files to.
Appreciate all your thoughts and response at the earliest.
Thanks
Setting up separate buckets and AWS Transfer instances for each vendor is a best practice for workload separation. I would recommend setting up a custom URL in Route 53 for each of your vendors rather than attempting to consolidate on a single URL (that isn't natively supported).
https://docs.aws.amazon.com/transfer/latest/userguide/requirements-dns.html
While setting up separate AWS Transfer Family instances will work, it comes at a higher cost: you are billed from the time you create a server until you delete it, even while it is stopped, and at $0.30 per hour that works out to roughly $216 per month per instance.
The other way is to create different users (one per vendor), give each user a different home directory, and lock down permissions through the IAM role for that user (there is also provision to use a scope-down policy, including with AD). If using service-managed users, see https://docs.aws.amazon.com/transfer/latest/userguide/service-managed-users.html.
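A scope-down policy of roughly the following shape (adapted from the pattern in the AWS Transfer Family docs; the ${transfer:...} variables are resolved per user at session time) restricts each user to their own home directory in the shared bucket:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowListingOfUserFolder",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::${transfer:HomeBucket}",
      "Condition": {
        "StringLike": {
          "s3:prefix": ["${transfer:HomeFolder}/*", "${transfer:HomeFolder}"]
        }
      }
    },
    {
      "Sid": "HomeDirObjectAccess",
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::${transfer:HomeDirectory}*"
    }
  ]
}
```

With this attached, a single IAM role can serve every vendor, since the policy variables substitute each user's own bucket and folder at login.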

withRegion(Regions) of AmazonS3ClientBuilder takes what parameter?

withRegion(Regions) of AmazonS3ClientBuilder takes what parameter? The AWS documentation says: "It sets the region to be used by the client."
Is it the region where our application is running, so that there would be minimum latency because the client reads from an S3 bucket in the same region where it is deployed?
Or is it the region where the S3 bucket is present?
Sample line of code:
AmazonS3 amazonS3 = AmazonS3ClientBuilder.standard()
.withRegion(Regions.US_EAST_1).build();
Please don't guess; a URL (preferably docs.aws.amazon.com) to support your explanation would be highly appreciated.
https://docs.aws.amazon.com/general/latest/gr/rande.html
Some services, such as IAM, do not support regions; therefore, their endpoints do not include a region. Some services, such as Amazon EC2, let you specify an endpoint that does not include a specific region, for example, https://ec2.amazonaws.com. In that case, AWS routes the endpoint to us-east-1.
If a service supports regions, the resources in each region are independent. For example, if you create an Amazon EC2 instance or an Amazon SQS queue in one region, the instance or queue is independent from instances or queues in another region.
In this case, S3 buckets are created in specific regions and there are multiple REST endpoints you can access. With S3, you must connect to the same region as the bucket (except for calls such as ListAllMyBuckets, which are region-agnostic). For other services you do not.
https://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region
As you point out, the Javadoc for AmazonS3ClientBuilder is incredibly vague, because it inherits the withRegion documentation from AwsClientBuilder, which is shared by services that support regions and those that do not.
To further add to the confusion, particularly when reading older advice scattered around the internet, it was possible in the past to access a bucket in any region with the same S3 Java API client (such calls may be slower). It is possible to revert to this behaviour with withForceGlobalBucketAccessEnabled:
https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/AmazonS3Builder.html#withForceGlobalBucketAccessEnabled-java.lang.Boolean-
Configure whether global bucket access is enabled for clients generated by this builder.
When global bucket access is enabled, the region to which a request is routed may differ from the region that is configured in AwsClientBuilder.setRegion(String) in order to make the request succeed.
The following behavior is currently used when this mode is enabled:
All requests that do not act on an existing bucket (for example, AmazonS3Client.createBucket(String)) will be routed to the region configured by AwsClientBuilder.setRegion(String), unless the region is manually overridden with CreateBucketRequest.setRegion(String), in which case the request will be routed to the region configured in the request.
The first time a request is made that references an existing bucket (for example, AmazonS3Client.putObject(PutObjectRequest)) a request will be made to the region configured by AwsClientBuilder.setRegion(String) to determine the region in which the bucket was created. This location may be cached in the client for subsequent requests acting on that same bucket.
Enabling this mode has several drawbacks, because it has the potential to increase latency in the event that the location of the bucket is physically far from the location from which the request was invoked. For this reason, it is strongly advised when possible to know the location of your buckets and create a region-specific client to access that bucket.

AWS buckets and regions

The application is using carrierwave in conjunction with the carrierwave-aws gem. It hit snags when migrating rails versions (bumped up to 4.2), ruby versions (2.2.3) and redeployed to the same staging server.
The AWS bucket was initially created in the free tier, thus Oregon, us-west-2. However, I have found that all the S3 files have the property which links to eu-west-1. Admittedly I've been tinkering around and considered using the eu-west-1 region. However I do not recall making any config changes - not even sure it is allowed in the free tier...
So yes, I've had to configure my uploads initializer with:
config.asset_host = 'https://s3-eu-west-1.amazonaws.com/theapp'
config.aws_credentials = {
region: 'eu-west-1'
}
Yet the AWS console is accessible at a URL that includes region=us-west-2.
I do not understand how this got to be and am looking for suggestions.
In spite of appearances, an AWS account doesn't have a "home" (native) region.
The console defaults to us-west-2 (Oregon), and conventional wisdom suggests that this is a region where AWS has the most available/spare resources, lower operational costs, lower pricing for customers, and fewest constraints for growth, so that in the event a user does not have enough information at hand to actively select a region where they deploy services, Oregon will be used by default.
But for each account, no particular region has any special standing. If you switch regions in the console, the console will tend to open up to the same region next time.
Most AWS services -- EC2, SQS, SNS, RDS (to name a few) are strictly regional: the regions are independent and not connected together¹, in the interest of reliability and survivability. When you're in a given region in the console, you can only see EC2 resources in that region, SQS queues in that region, SNS topics in that region, etc. To see your resources in other regions, you switch regions in the console.
When making API requests to these services, you're using an endpoint in the region and your credentials also include the region.
Other services are global, with centralized administration -- examples here are CloudFront, IAM, and Route 53 hosted zones. When you make requests to these services, you always use the region "us-east-1", because that's the home location of those services' central, global administration. These tend to be services designed to tolerate a partitioning event (where one part of the global network is isolated from another): administrative changes are replicated out around the world, and once the provisioning is replicated, the regional installations can operate autonomously without major service impacts. When you select these services in the console, you'll note that the region changes to "Global."
S3 is a hybrid that is different from essentially all of the others. When you select S3 in the console, you'll notice that the console region also changes to show "Global" and you can see all of your buckets, like other global services. S3 has independently operating regions, but a global namespace. The regions are logically connected and can communicate administrative messages among themselves and can transfer data across regions (but only when you do this deliberately -- otherwise, data remains in the region where you stored it).
Unlike the other global services, S3 does not have a single global endpoint that can handle every possible request.
Each time you create a bucket, you choose the region where you want the bucket to live. Subsequent requests related to that bucket have to be submitted to the bucket's region, and must have authorization credentials for the correct region.
If you submit a request to another S3 region's endpoint for that bucket, you'll receive an error telling you the correct region for the bucket.
< HTTP/1.1 301 Moved Permanently
< x-amz-bucket-region: us-east-1
< Server: AmazonS3
<Error>
<Code>PermanentRedirect</Code>
<Message>The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint.</Message>
<Bucket>...</Bucket>
<Endpoint>s3.amazonaws.com</Endpoint>
<RequestId>...</RequestId>
<HostId>...</HostId>
</Error>
Conversely, if you send an S3 request to the correct endpoint but using the wrong region in your authentication credentials, you'll receive a different error for similar reasons:
< HTTP/1.1 400 Bad Request
< x-amz-bucket-region: us-west-2
< Server: AmazonS3
<
<?xml version="1.0" encoding="UTF-8"?>
<Error>
<Code>AuthorizationQueryParametersError</Code>
<Message>Error parsing the X-Amz-Credential parameter; the region 'eu-west-1' is wrong; expecting 'us-west-2'</Message>
<Region>us-west-2</Region>
<RequestId>...</RequestId>
<HostId>...</HostId>
</Error>
Again, this region is the region where you created the bucket, or the default "US Standard" (us-east-1). Once a bucket has been created, it can't be moved to a different region. The only way to "move" a bucket to a different region without the name changing is to remove all the files from the bucket, delete the bucket (you can't delete a non-empty bucket), wait a few minutes, and create the bucket in the new region. During the few minutes that S3 requires before the name is globally available after deleting a bucket, it's always possible that somebody else could take the bucket name for themselves... so choose your bucket region carefully.
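One way to avoid both errors is to ask S3 where the bucket lives before building a region-specific client. A hedged sketch: the normalization below handles the documented quirks of GetBucketLocation, which returns an empty LocationConstraint for us-east-1 ("US Standard") buckets and the legacy value "EU" for eu-west-1; the boto3 call itself appears only in the docstring:

```python
def bucket_region(location_constraint):
    """Normalize a GetBucketLocation LocationConstraint value to a region
    name. In practice the value comes from:
        s3 = boto3.client('s3')
        lc = s3.get_bucket_location(Bucket=name)['LocationConstraint']
    """
    if location_constraint in (None, ""):
        return "us-east-1"   # "US Standard" buckets report no constraint
    if location_constraint == "EU":
        return "eu-west-1"   # legacy value for the original European region
    return location_constraint

print(bucket_region(None))        # us-east-1
print(bucket_region("EU"))        # eu-west-1
print(bucket_region("us-west-2")) # us-west-2
```

Once you have the bucket's region, create a client pinned to it and all subsequent requests are routed and signed correctly.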
S3 API interactions were edited and reformatted for clarity; some unrelated headers and other content was removed.
¹ not connected together seems (at first glance) to be contradicted in the sense that -- for example -- you can subscribe an SQS queue in one region to an SNS topic in another, you can replicate RDS from one region to another, and you can transfer EBS snapshots and AMIs from one region to another... but these back-channels are not under discussion, here. The control planes of the services in each region are isolated and independent. A problem with the RDS infrastructure in one region might disrupt replication to RDS in another region, but would not impact RDS operations in the other region. An SNS outage in one region would not impact SNS in another. The systems and services sometimes have cross-region communication capability for handling customer-requested services, but each region's core operations for these regional services are independent.