Is there an API/way to programmatically query AWS documentation for a specific service? For instance, I want to know the encryption algorithm used by a service for protecting data at rest. Can I write a script that will automatically query AWS documentation for that service and give me this information?
There is no API for AWS Documentation.
However, the AWS CLI is open source, and it ships with data files (the botocore service models) that describe every API call and its parameters.
Those files would not, however, contain the encryption algorithms. Such details are internal to the service (Amazon S3, for example) and are not shared publicly.
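To give a feel for what those data files contain, here is a trimmed, hypothetical sketch of the JSON shape used by botocore's service models (the real files live under botocore/data/&lt;service&gt;/&lt;version&gt;/service-2.json and are far larger; the operations shown are just examples):

```python
import json

# A trimmed, made-up sample mimicking the structure of a botocore
# service model file. Real files list every operation with full
# input/output shapes.
model = json.loads("""
{
  "operations": {
    "DescribeDBSnapshots": {
      "input": {"shape": "DescribeDBSnapshotsMessage"}
    },
    "CreateDBSnapshot": {
      "input": {"shape": "CreateDBSnapshotMessage"}
    }
  }
}
""")

# List every operation the model defines, alphabetically.
for name in sorted(model["operations"]):
    print(name)
```

So you can enumerate every API operation a service exposes, but you will not find implementation details like encryption algorithms anywhere in these files.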
Related
I am building a serverless application on AWS, using API Gateway, Lambda functions, and RDS (for the database).
I have an existing MySQL schema (basically, a table dump), and I want to generate the API automatically from this schema, ideally as something I can easily import into AWS API Gateway (for instance, from SwaggerHub or a similar service).
Then, I want the database operations (CRUD operations matching the API) to be automatically generated as well, in Node.js or Python, which I can then easily deploy to AWS Lambda, for example using SAM templates, or by uploading a package to AWS somehow.
The Lambda functions should be able to connect to my AWS RDS database and perform the CRUD operations described by the API.
The idea is to find some way to simplify this process. If the database schema changes significantly, for example, I do not want to manually edit a bunch of Lambda functions to accommodate the new schema every time!
I'm wondering if anyone has any suggestions as to how I could make this work.
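For illustration, the kind of generation step described above could start as simply as mapping table names to CRUD routes. This is a hypothetical sketch (the table names and the build_paths helper are made up, not from any real tool):

```python
# Hypothetical sketch: derive CRUD route definitions from a list of
# table names extracted from a MySQL schema dump. A real generator
# would also emit parameter and body schemas per column.
def build_paths(tables):
    paths = {}
    for table in tables:
        # Collection routes: list and create.
        paths[f"/{table}"] = ["GET", "POST"]
        # Item routes: read, update, delete.
        paths[f"/{table}/{{id}}"] = ["GET", "PUT", "DELETE"]
    return paths

paths = build_paths(["users", "orders"])
print(paths["/users"])        # collection routes for the users table
print(paths["/orders/{id}"])  # item routes for the orders table
```

A generator along these lines, re-run whenever the schema dump changes, is one way to avoid hand-editing each Lambda function.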
I'm learning serverless architectures and currently reading this article on Martin Fowler's blog.
So I see this scheme and am trying to replace the abstract components with AWS solutions. I wonder whether not using API Gateway to control access to S3 is a good idea (in the image, database no. 2 is not behind one). Martin mentions Google Firebase, and I'm not familiar with how it compares to S3.
https://martinfowler.com/articles/serverless/sps.svg
Is it a common strategy to expose S3 to client-side applications without configuring an API gateway as a proxy between them?
To answer your question - probably, yes.
But you've made a mistake in mapping AWS services onto the abstract components in Martin's blog, and you probably shouldn't use S3 at all in the way you're describing.
Instead of S3, you'll want DynamoDB. You'll also want to look at Amazon Cognito for auth.
After Martin's article, have a read of this for how to apply what you've learned to AWS-specific services: https://aws.amazon.com/getting-started/hands-on/build-serverless-web-app-lambda-apigateway-s3-dynamodb-cognito/
AWS S3 is not a database; it's an object storage service.
Making an S3 bucket publicly accessible is possible but not recommended. You can, however, access its objects using the S3 API, either via the CLI or an SDK.
Back to your question in the comments: consuming the API directly from the frontend (assuming you mean browser JavaScript) is indeed bad practice. AWS strongly recommends storing your API credentials (keys) securely, and since every AWS API call must include the credentials (keys) issued for your IAM user, anyone using your web application could extract those keys.
Hope this answered your question.
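For reference, a public S3 object is addressed by a plain HTTPS URL; the bucket and key below are made-up placeholders, shown only to illustrate the two common URL styles:

```python
from urllib.parse import quote

# Placeholder bucket and key, for illustration only.
bucket, key = "my-bucket", "images/logo.png"

# Path-style URL: bucket appears in the path.
path_style = f"https://s3.amazonaws.com/{bucket}/{quote(key)}"

# Virtual-hosted-style URL: bucket appears in the hostname.
virtual_hosted = f"https://{bucket}.s3.amazonaws.com/{quote(key)}"

print(path_style)
print(virtual_hosted)
```

Anything beyond fetching public objects this way requires signed requests, which is exactly why exposing your keys in frontend code is the problem described above.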
I am trying to access the AWS RDS API to describe DB snapshots. I plan on parsing the response so that I can list all the available snapshots by ID using Groovy. However, the biggest problem I am having is calling the API in the first place. I took a look at AWS's reference on this topic, but I can't figure out how to generate the pre-signed portion of the request with my credentials. I am not sure why that part is even necessary. Why can't the user authenticate using just the access key ID and secret access key combination?
The reference:
https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_DescribeDBSnapshots.html
The section with the issue:
https://rds.us-west-2.amazonaws.com/
?Action=DescribeDBSnapshots
&IncludePublic=false
&IncludeShared=true
&MaxRecords=100
&SignatureMethod=HmacSHA256
&SignatureVersion=4
&Version=2014-09-01
&X-Amz-Algorithm=AWS4-HMAC-SHA256
&X-Amz-Credential=AKIADQKE4SARGYLE/20140421/us-west-2/rds/aws4_request
&X-Amz-Date=20140421T194732Z
&X-Amz-SignedHeaders=content-type;host;user-agent;x-amz-content-sha256;x-amz-date
&X-Amz-Signature=4aa31bdcf7b5e00dadffbd6dc8448a31871e283ffe270e77890e15487354bcca
If Groovy is a hard requirement, I'd look into something like this: https://grails.org/plugin/aws-sdk
If you're comfortable with Java, I'd say use the official AWS SDK.
If you're scripting this out, you could also use the official AWS cli tool and do something like
aws rds describe-db-snapshots [OPTIONS]
From there you could use a tool like jq to zero in on and parse out your specific IDs. You can find more documentation here.
The way you'd authorize with the SDK is either through environment variables (the preferred approach) or by hardcoding your key and secret (a big no-no).
Rather than trying to communicate with the API directly, I think you should make use of the built-in wrappers that AWS provides.
If you're working in a supported programming language, take a look at the AWS SDKs. There are currently officially supported libraries for:
C++
Go
Java
JavaScript
.NET
Node.js
PHP
Python
Ruby
If your language of choice is not covered, there may be a third-party solution already. Alternatively, take a look at the AWS CLI to solve your problem.
For your specific action, describe-db-snapshots, you can get a list of all IDs by running the command below and parsing the JSON output.
aws rds describe-db-snapshots --query 'DBSnapshots[*].DBSnapshotIdentifier' --output json
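With that --query, the CLI prints a plain JSON array of identifiers, which is easy to handle in a script if you'd rather not use jq. The sample output below is made up, not live data:

```python
import json

# Sample of what the CLI command above prints with that --query:
# a JSON array of snapshot identifiers (values here are invented).
cli_output = '["rds:mydb-2024-01-01", "rds:mydb-2024-01-02"]'

snapshot_ids = json.loads(cli_output)
for sid in snapshot_ids:
    print(sid)
```

From Groovy you could run the CLI as a subprocess and parse its stdout the same way with a JSON library.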
I'm new to AWS S3, and I'm studying S3 SDK now.
If I want to put a file on S3, there are two ways:
1) Using the SDK client's $s3->putObject method
2) Using the s3:// protocol
What's the difference between the two?
Thank you! :)
Accessing AWS services via an SDK will make fully-authenticated API calls. Such calls require IAM credentials. They are the best way to interact with AWS services. Some commands, such as creating buckets, are only available via API calls.
Amazon S3 has the additional ability to provide access to objects via normal HTTP/HTTPS requests. For example, if an object is public, it can be accessed via https://s3.amazonaws.com/bucket-name/path/object
This means that content from Amazon S3 can be incorporated into web pages via <a> and <img> tags.
If you wish to use such links to access private objects, the URL will need additional authentication information attached. This is known as an Amazon S3 pre-signed URL.
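For illustration, a pre-signed URL is just the normal object URL with extra query parameters carrying the signature. The sketch below shows the shape only; the bucket, key, credential, and signature values are placeholders, not a valid signed request:

```python
from urllib.parse import urlencode

# Placeholder values showing the query parameters a pre-signed URL
# carries; a real URL is produced by the SDK's presign helpers.
params = {
    "X-Amz-Algorithm": "AWS4-HMAC-SHA256",
    "X-Amz-Credential": "AKIAEXAMPLE/20240101/us-east-1/s3/aws4_request",
    "X-Amz-Date": "20240101T000000Z",
    "X-Amz-Expires": "3600",  # lifetime of the link, in seconds
    "X-Amz-SignedHeaders": "host",
    "X-Amz-Signature": "0000placeholder",  # not a real signature
}
url = "https://my-bucket.s3.amazonaws.com/private/report.pdf?" + urlencode(params)
print(url)
```

In practice you would call the SDK's presigning function rather than assemble this by hand; the point is that the result is still a plain HTTPS URL, usable in an &lt;a&gt; or &lt;img&gt; tag until it expires.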
We are using AWS primarily for our application but we also need to use a particular Google service. This service requires us to upload media on Google Cloud Storage.
As with our AWS resources, we want to use the Serverless Framework to create all required GCP resources.
I need your help with the questions below:
How can we use the same serverless.yml to create required GCP resources as well?
Do we need to use two serverless.yml files, one for AWS and the other for Google?
How to manage credentials for creating and accessing GCP resources?
How can we use the same serverless.yml to create required GCP resources as well?
Since YAML is just (from the docs)
a human friendly data serialization standard for all programming languages
there is no proper way to have one file that fits both architectures. Looking at examples for each provider, only a few lines change, so you won't be able to use the same file, but the two will be very similar.
Do we need to use two serverless.yml files, one for AWS and other for Google?
Yes. Each provider needs its own provider-specific configuration to work correctly.
How to manage credentials for creating and accessing GCP resources?
To access GCP resources you will use service accounts. These are managed by Cloud IAM and are made to represent a non-human user: an app, an API, a service, etc.
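A minimal sketch of what the two files' provider sections might look like (service names, region, project, and the keyfile path are placeholders; the GCP side assumes the serverless-google-cloudfunctions plugin):

```yaml
# serverless.yml for the AWS service (sketch)
service: my-aws-service
provider:
  name: aws
  runtime: nodejs18.x
  region: us-east-1

# --- separate file: serverless.yml for the GCP service (sketch) ---
# service: my-gcp-service
# provider:
#   name: google
#   runtime: nodejs18
#   project: my-gcp-project
#   credentials: ~/.gcloud/keyfile.json  # service account key
# plugins:
#   - serverless-google-cloudfunctions
```

The credentials line points at the service account key file mentioned above, which is how the framework authenticates deployments to GCP.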
EXTRA: Some useful links:
App Engine configuration with YAML
AWS serverless .yml example