Amazon AWS: updated list of actions and services

I'm working on automating access to some APIs through a visual interface, and I would like to present the user with a user-friendly way to call Amazon AWS APIs.
However, the documentation uses human-readable names, while the API has to be called using more compact tokens.
I'd like to have a list of all the services, ideally:
ServiceID, Service name, Action Friendly Name, Action/Operation name, command line name
e.g. looking at the CloudFront ListDistributions operation we can see that:
The service is called "CloudFront" but the API endpoint is spelled lowercase "cloudfront"
The API requires calling GET /<version>/distribution (see https://docs.aws.amazon.com/cloudfront/latest/APIReference/API_ListDistributions.html )
The command line requires the "list-distributions" form: https://docs.aws.amazon.com/cli/latest/reference/cloudfront/list-distributions.html
A similar thing happens with "ListPublicKeys": https://docs.aws.amazon.com/cloudfront/latest/APIReference/API_ListPublicKeys.html
Thus a table like this would help:
ServiceID, Service name, Action Friendly Name, Action/Operation name, command line name
cloudfront, CloudFront, ListDistributions, distribution, list-distributions
cloudfront, CloudFront, ListPublicKeys, public-key, list-public-keys

The link posted by John Rotenstein in the comments resolves the issue.
The data files from Boto core (botocore) contain enough information to build the table described above.
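A rough sketch of how that table could be assembled from those data files, assuming botocore is installed locally; xform_name is the helper botocore itself exposes for turning operation names into CLI-style names, and the requestUri column includes the version prefix, so it is slightly more verbose than the table above:

import csv
import botocore.session
from botocore import xform_name  # CamelCase -> dash-separated helper used by the AWS CLI

session = botocore.session.get_session()
with open("aws_actions.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["ServiceID", "Service name", "Action Friendly Name",
                     "Action/Operation name", "command line name"])
    for service_id in session.get_available_services():
        model = session.get_service_model(service_id)
        for op_name in model.operation_names:
            op = model.operation_model(op_name)
            writer.writerow([
                model.endpoint_prefix,                      # e.g. "cloudfront"
                model.metadata.get("serviceFullName", ""),  # e.g. "Amazon CloudFront"
                op_name,                                    # e.g. "ListDistributions"
                op.http.get("requestUri", ""),              # e.g. "/2020-05-31/distribution"
                xform_name(op_name, "-"),                   # e.g. "list-distributions"
            ])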

Related

How to use the API subscriptions from AWS Data Exchange?

So I got access to the SimilarWeb ranking API from AWS (https://aws.amazon.com/marketplace/pp/prodview-clsj5k4afj4ma?sr=0-1&ref_=beagle&applicationId=AWSMPContessa).
I'm not able to figure out how to pass the authentication or how to make a request to retrieve the ranks for domains.
For example, how would you pass the request for this URL in Python?
URL: https://api-fulfill.dataexchange.us-east-1.amazonaws.com/v1/v1/similar-rank/amazon.com/rank
This particular product does not seem to be available any longer. Generally speaking, an AWS IAM principal with the correct IAM permissions can make API calls against AWS Data Exchange for APIs endpoints. The payload of the API call needs to adhere to the OpenAPI spec defined within the DataSet of the product used. The specific API call is 'SendApiAsset'. The easiest way to think about it is to read the boto3 documentation for it, here: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/dataexchange.html#DataExchange.Client.send_api_asset
Other AWS SDKs have the same call, idiomatic to the specific language.
The managed policy that describes the IAM permissions needed is named AWSDataExchangeSubscriberFullAccess; the dataexchange-specific permission needed is 'dataexchange:SendApiAsset'.
The awscli way of making the call is described here: https://docs.aws.amazon.com/cli/latest/reference/dataexchange/send-api-asset.html
The required parameters are: asset-id, data-set-id, revision-id. You will likely also need to provide values for: method and body (and perhaps others, depending on the specific API you are calling).
The content of the 'body' parameter needs to adhere to the OpenAPI spec of the actual dataset provided as part of the product.
You can obtain the values for asset-id, data-set-id and revision-id from the AWS Data Exchange service web console describing the product/dataset.
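For illustration, a minimal boto3 sketch of such a call might look like the following; the IDs are placeholders to be read off the AWS Data Exchange console for your own subscription, and the Path is a guess based on the URL in the question:

import boto3

# Credentials are resolved the usual way (environment, ~/.aws/credentials, instance role, ...)
client = boto3.client("dataexchange", region_name="us-east-1")

response = client.send_api_asset(
    DataSetId="REPLACE_WITH_DATA_SET_ID",
    RevisionId="REPLACE_WITH_REVISION_ID",
    AssetId="REPLACE_WITH_ASSET_ID",
    Method="GET",
    # Path relative to the provider's OpenAPI spec; placeholder based on the URL in the question
    Path="/v1/similar-rank/amazon.com/rank",
)
print(response["Body"])  # the provider's response payload as a string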

How to Query Route53 hosted zone to check for an existing record set?

I am new to Amazon Route 53. As of now, I am able to create a hosted zone and a resource record set in my Amazon account. But now I want to check whether a record set already exists in my hosted zone. For example:
Hosted zone "abc.com" has two record sets in it:
A.abc.com
B.abc.com
Now I want to query my hosted zone and find out whether A.abc.com already exists in abc.com.
So, is there any API that I can use where I can pass my Amazon credentials, my hosted zone, and the record set being searched for, and get back whether that record set already exists? Kindly guide me.
After research, I found out that there is "ListResourceRecordSets", which will give me the list back for a particular zone. But I don't want the list; I just want to check whether the entry already exists.
I have been able to perform this check efficiently using the ListResourceRecordSets API method, and the name and maxitems parameters. You haven't specified how you are accessing the API, so I'm going to explain this using the standard AWS REST API.
Given your example:
Call the API passing A.abc.com as the name parameter and 1 as the maxitems parameter. Your request will look like this: https://route53.amazonaws.com/2013-04-01/hostedzone/{YOUR_HOSTED_ZONE_ID}/rrset?name=A.abc.com.&maxitems=1
Note that I've added a trailing dot (".") to the end of the resource name A.abc.com. The API reference indicates that it may affect result sort order so I add it just in case.
You will get back an XML result in this format:
<?xml version="1.0"?>
<ListResourceRecordSetsResponse xmlns="https://route53.amazonaws.com/doc/2013-04-01/">
    <ResourceRecordSets>
        <ResourceRecordSet>
            <Name>A.abc.com.</Name>
            <Type>A</Type>
            <TTL>3600</TTL>
            <ResourceRecords>
                <ResourceRecord>
                    <Value>SOME_IP_ADDRESS</Value>
                </ResourceRecord>
            </ResourceRecords>
        </ResourceRecordSet>
    </ResourceRecordSets>
    <IsTruncated>true</IsTruncated>
    <NextRecordName>B.abc.com.</NextRecordName>
    <NextRecordType>A</NextRecordType>
    <MaxItems>1</MaxItems>
</ListResourceRecordSetsResponse>
Now you're going to have to do some parsing. Check the result to see if there is one ResourceRecordSet and if its Name property matches the name of the resource record you are looking for (you probably want to do a case-insensitive compare of the two values). Keep in mind that the Name property has that trailing period (".") at the end, so add it to the name you're searching for before doing the comparison.
If there is exactly one resource record set and the name matches the one you are looking for, it exists. If either one of those checks fails, then it does not exist.
Granted, this isn't as simple as a GetResourceRecordSet operation would be, but at least it keeps you from having to query the entire zone and parse a bunch of records. You also won't run into the long delays or throttling issues that you might hit using the CLI --query option.
The AWS CLI doesn't expose a plain --name parameter for this; the closest equivalents are --start-record-name and --max-items on list-resource-record-sets. I can vouch for the fact that the JavaScript SDK will allow you to do this using the StartRecordName parameter.
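If you happen to be using Python, a minimal boto3 sketch of the same check could look like this; the hosted zone ID and record name are placeholders:

import boto3

route53 = boto3.client("route53")

def record_exists(hosted_zone_id, record_name, record_type="A"):
    """Return True if a record set with exactly this name and type exists in the zone."""
    if not record_name.endswith("."):
        record_name += "."  # Route 53 stores names with a trailing dot
    resp = route53.list_resource_record_sets(
        HostedZoneId=hosted_zone_id,
        StartRecordName=record_name,
        StartRecordType=record_type,
        MaxItems="1",  # only the first record at or after this name is needed
    )
    sets = resp.get("ResourceRecordSets", [])
    return (bool(sets)
            and sets[0]["Name"].lower() == record_name.lower()
            and sets[0]["Type"] == record_type)

# Example with a placeholder zone ID:
# print(record_exists("Z2LD58HEXAMPLE", "A.abc.com"))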
There is no way to filter the API call, but there is a way to filter the data returned. Using the CLI you can do this with the --query option.
From the documentation: "To view all the resource record sets of a particular name, use the --query parameter to filter them out. For example:"
aws route53 list-resource-record-sets --hosted-zone-id Z2LD58HEXAMPLE --query "ResourceRecordSets[?Name == 'A.abc.com']"

Renewing IAM SSL Server Certificates

I have been using IAM server certificates for some of my Elastic Beanstalk applications, but now it's time to renew -- what is the correct process for replacing the current certificate with the updated cert?
When I try repeating an upload using the same command as before:
aws iam upload-server-certificate --server-certificate-name foo.bar --certificate-body file://foobar.crt --private-key file://foobar.key --certificate-chain file://chain_bundle.crt
I receive:
A client error (EntityAlreadyExists) occurred when calling the UploadServerCertificate operation: The Server Certificate with name foo.bar already exists.
Is the best practice to simply upload using a DIFFERENT name then switch the load balancers to the new certificate? This makes perfect sense - but I wanted to verify I'm following the correct approach.
EDIT 2015-03-30
I did successfully update my certificate using the technique above. That is, I uploaded the new cert the same way as originally, but with a different name, then updated my applications to point to the new certificate.
The question remains however, is this the correct approach?
Yes, that is the correct approach.
Otherwise, you would be forced to roll it out to every system that uses it at the same time, with no opportunity to test first, if desired.
My local practice, which I don't intend to imply is The One True Way™ yet serves the purpose nicely, is to append -yyyy-mm, for the year and month of the certificate's expiration date, to the end of the name, making it easy to differentiate between them at a glance... and using this pattern, when the list is sorted lexically, the certificates are coincidentally sorted chronologically as well.
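For what it's worth, a rough boto3 sketch of that workflow for a classic load balancer might look like the following; the certificate name, file paths, and load balancer name are placeholders, and a Beanstalk environment would normally be repointed through its own load balancer configuration instead:

import boto3
from pathlib import Path

iam = boto3.client("iam")
elb = boto3.client("elb")  # classic Elastic Load Balancing, as used by many Beanstalk environments

# 1. Upload the renewed certificate under a new, dated name.
new_cert = iam.upload_server_certificate(
    ServerCertificateName="foo.bar-2016-03",  # hypothetical name following the -yyyy-mm convention
    CertificateBody=Path("foobar.crt").read_text(),
    PrivateKey=Path("foobar.key").read_text(),
    CertificateChain=Path("chain_bundle.crt").read_text(),
)
new_arn = new_cert["ServerCertificateMetadata"]["Arn"]

# 2. Point each HTTPS listener at the new certificate (ideally on a test balancer first).
elb.set_load_balancer_listener_ssl_certificate(
    LoadBalancerName="my-load-balancer",  # placeholder
    LoadBalancerPort=443,
    SSLCertificateId=new_arn,
)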

Custom endpoint in AWS powershell

I am trying to use AWS PowerShell with Eucalyptus.
I can do this with the AWS CLI using the --endpoint-url parameter.
Is it possible to set the endpoint URL in AWS PowerShell?
Can I create a custom region with my own endpoint URL in AWS PowerShell?
--UPDATE--
The newer versions of the AWS Tools for Windows PowerShell (I'm running 3.1.66.0 according to Get-AWSPowerShellVersion) have an optional -EndpointUrl parameter for the relevant cmdlets.
Example:
Get-EC2Instance -EndpointUrl https://somehostnamehere
Additionally, the aforementioned bug has been fixed.
Good stuff!
--ORIGINAL ANSWER--
TL;DR
Download the default endpoint config file from here: https://github.com/aws/aws-sdk-net/blob/master/sdk/src/Core/endpoints.json
Customize it. Example:
{
    "version": 2,
    "endpoints": {
        "*/*": {
            "endpoint": "your_endpoint_here"
        }
    }
}
After importing the AWSPowerShell module, tell the SDK to use your customized endpoint config. Example:
[Amazon.AWSConfigs]::EndpointDefinition = "path to your customized Amazon.endpoints.json here"
Note: there is a bug in the underlying SDK that prevents endpoints that have a path component from being signed correctly. The bug affects both this solution and the solution @HyperAnthony proposed.
Additional Info
Reading through the .NET SDK docs, I stumbled across a section that revealed that one can globally set the region rules given a file: http://docs.aws.amazon.com/AWSSdkDocsNET/latest/V2/DeveloperGuide/net-dg-config-other.html#config-setting-awsendpointdefinition
Unfortunately, I couldn't find anywhere where the format of such a file is documented.
I then spelunked through the AWSSDK.Core.dll code and found where the SDK loads the file (see the LoadEndpointDefinitions() method at https://github.com/aws/aws-sdk-net/blob/master/sdk/src/Core/RegionEndpoint.cs).
Reading through the code, if a file isn't explicitly specified via AWSConfigs.EndpointDefinition, it ultimately loads the file from an embedded resource (i.e. https://github.com/aws/aws-sdk-net/blob/master/sdk/src/Core/endpoints.json)
I don't believe that it is. This list of common parameters (which can be used with all AWS PowerShell cmdlets) does not include a service URL; it seems instead to opt for a simple string Region that sets the service URL based on a set of known regions.
This AWS .NET Development forum post suggests that you can set the Service URL on a .NET SDK config object, if you're interested in a possible alternative in PowerShell. Here's an example usage from that thread:
$config=New-Object Amazon.EC2.AmazonEC2Config
$config.ServiceURL = "https://ec2.us-west-1.amazonaws.com"
$client=[Amazon.AWSClientFactory]::CreateAmazonEC2Client($accessKeyID,$secretKeyID,$config)
It looks like you can use it with most config objects when setting up a client. Here are some examples that have the ServiceURL property; I would imagine that this is on most, if not all, AWS config objects:
AmazonEC2Config
AmazonS3Config
AmazonRDSConfig
Older versions of the documentation (for v1) noted that this property will be ignored if the RegionEndpoint is set. I'm not sure if this is still the case with v2.

AWS S3 - Privacy error when accessing file from link

I am working with a team that is using S3 to host content. They moved from a single bucket for all brands to one bucket for each brand, and now we are having trouble when linking to the content from within a Salesforce Site.com page. When I copy the link from S3 as HTTPS, I get a "Your connection is not private. Attackers might be trying to steal your information from spiritxpress.s3.varsity.s3.amazonaws.com (for example, passwords, messages, or credit cards)." error.
I have asked them to compare the settings with the bucket that is working, but I don't have access to dig into it myself, and we are pretty new to this as well, so I thought I would see if there were any known paths to walk down. The ID and Key have not changed, and I can access the content via Cyberduck; it just is not loading when reached via a link.
Let me know if additional information is needed and I will provide as quickly as I can.
[EDIT] The bucket naming convention they are using is all lowercase and meets the naming guidelines as well, but the way it is structured seems strange to me: they have named the bucket "brandname.s3.companyname", and when copying the link it comes across as "https://brandname.s3.company.s3.amazonaws.com/directory/filename", whereas the other bucket was being rendered as "https://s3.amazonaws.com/bucketname/......"
Whoever made this change has failed to account for the way wildcard certificates work in HTTPS.
Requests to S3 using HTTPS are greeted with a certificate identifying itself as "*.s3[-region].amazonaws.com" and in order for the browser to consider this to be valid when compared to the link you're hitting, there cannot be any dots in the part of the hostname that matches the * offered by the cert. Bucket names with dots are valid, but they cannot be used on the left side of "s3[-region].amazonaws.com" in the hostname unless you are willing and able to accept a certificate that is deemed invalid... they can only be used as the first element of the path.
The only way to make dotted bucket names and S3 native wildcard SSL to work together is the other format: https://s3[-region].amazonaws.com/example.dotted.bucket.name/....
If your bucket isn't in us-standard, you likely need to use the region in the hostname, so that the request goes to the correct endpoint, e.g. https://s3-us-west-2.amazonaws.com/example.dotted.bucket.name/path... for a bucket in us-west-2 (Oregon). Otherwise S3 may return an error telling you that you need to use a different endpoint (and the endpoint they provide in the error message will be valid, but probably not the one you're wanting for SSL).
This is a limitation on how SSL certificates work, not a limitation in S3.
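If you are generating these links programmatically, one way to stay on the path-style form is to ask the SDK for it explicitly. A minimal boto3 sketch, with the bucket name, key, and region as placeholders:

import boto3
from botocore.config import Config

# Force path-style addressing so dotted bucket names stay in the path, not the hostname.
s3 = boto3.client(
    "s3",
    region_name="us-west-2",  # use the bucket's actual region
    config=Config(s3={"addressing_style": "path"}),
)

# Produces a URL of the form https://s3.<region>.amazonaws.com/example.dotted.bucket.name/some/key
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "example.dotted.bucket.name", "Key": "some/key"},
    ExpiresIn=3600,
)
print(url)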
Okay, it appears it did boil down to some permissions that were missed, and we were able to get the file to display as expected. Other issues are present, but this one is resolved, so I'm marking this as answered.