How does AWS inform API users of changes?

I'm working on a project centered around API Change Management. I'm curious as to how AWS informs developers of changes to its APIs. Is it through the document history (https://docs.aws.amazon.com/apigateway/latest/developerguide/history.html)? Or do they send out emails to developers?
Regarding emails, are they sent to all developers using the API (e.g. API Gateway), or only to developers using a particular endpoint who will be affected by the change? And what is the frequency of notifications for different kinds of changes (breaking changes, minor changes, etc.)?
Thanks so much for your help!

For non-breaking changes, you can learn about them in the Developer Guide, as you pointed out. Some of these changes are also announced on their What's New page (which has an RSS feed). You can also follow the SDK releases, which are updated often (e.g. via the RSS feed for aws-sdk-go releases). Most of the SDKs use code generation to produce much of the API functionality. AWS pushes updates to these generated files in the SDK git repositories (ruby example, go example), but it is not clear whether there is another place to find them, and it does not seem like they want us to consume them directly (see this developer forum thread from 2015). There's also awsapichanges.info, which appears to be built by AWS itself.
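If you want to consume those feeds programmatically, here is a minimal sketch in Go. The releases.atom URL follows GitHub's standard per-repository feed convention; how often to poll and which repositories to watch are up to you:

```go
// Sketch: watching the aws-sdk-go releases Atom feed for API updates.
// The struct below only maps the feed fields we actually read.
package main

import (
	"encoding/xml"
	"fmt"
	"log"
	"net/http"
)

type atomFeed struct {
	Entries []struct {
		Title   string `xml:"title"`
		Updated string `xml:"updated"`
	} `xml:"entry"`
}

func main() {
	resp, err := http.Get("https://github.com/aws/aws-sdk-go/releases.atom")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	var feed atomFeed
	if err := xml.NewDecoder(resp.Body).Decode(&feed); err != nil {
		log.Fatal(err)
	}
	for _, e := range feed.Entries {
		fmt.Printf("%s  %s\n", e.Updated, e.Title)
	}
}
```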
AWS very rarely makes breaking changes to their APIs. Even SimpleDB, a very old AWS product, still works.
Having said that, they do make breaking changes from time to time, but they try to announce them well ahead of time. The biggest breaking change they are trying to complete is probably the attempt to deprecate S3 path-style access. This was first quietly announced in the AWS Developer Forums, which caused a lot of panic, especially since the timeline was incredibly short. In response to the backlash, AWS quickly backtracked and revised the plan, more publicly this time.
They have rolled out other S3 breaking changes in other ways. For example, S3 buckets must now have DNS-compliant names. This was enforced on new buckets in us-east-1 only relatively recently (March 1, 2018), but for most other regions it was enforced from the moment the region became available. Old S3 buckets in us-east-1 may still have names that are not DNS-compliant.
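As a rough illustration of that rule, here is a simplified DNS-compliance check in Go. This sketch omits some corner cases in the full rules (for example, consecutive dots and dot-hyphen adjacency are also forbidden):

```go
// Sketch: a simplified check for DNS-compliant S3 bucket names
// (3-63 characters; lowercase letters, digits, hyphens, dots;
// alphanumeric at both ends; not formatted like an IP address).
package main

import (
	"fmt"
	"net"
	"regexp"
)

var bucketRE = regexp.MustCompile(`^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$`)

func isDNSCompliant(name string) bool {
	if !bucketRE.MatchString(name) {
		return false
	}
	// Names that parse as an IP address, e.g. 192.168.0.1, are rejected.
	if net.ParseIP(name) != nil {
		return false
	}
	return true
}

func main() {
	for _, n := range []string{"my-bucket", "My_Bucket", "192.168.0.1"} {
		fmt.Println(n, isDNSCompliant(n))
	}
}
```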
Lambda removes old runtimes once the underlying version of the programming language stops being maintained (such as Python 2.7). This should be a known expectation for anyone who starts using the service, and there is always a newer version you can migrate to. AWS sends you email reminders as the deadline nears if you still have Lambda functions using the old runtime.
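If you would rather not wait for the reminder email, here is a small audit sketch using the aws-sdk-go Lambda client. The region is an example value; "python2.7" is the runtime identifier Lambda uses for Python 2.7:

```go
// Sketch: listing your own account's functions that still run on a
// runtime being retired, using the aws-sdk-go v1 Lambda client.
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/lambda"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	svc := lambda.New(sess)

	err := svc.ListFunctionsPages(&lambda.ListFunctionsInput{},
		func(page *lambda.ListFunctionsOutput, lastPage bool) bool {
			for _, fn := range page.Functions {
				if aws.StringValue(fn.Runtime) == "python2.7" {
					fmt.Println(aws.StringValue(fn.FunctionName))
				}
			}
			return true // keep paging
		})
	if err != nil {
		log.Fatal(err)
	}
}
```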
Here is a GitHub repository where people try to track breaking changes: https://github.com/SummitRoute/aws_breaking_changes. You can see that the list is not that long.

Related

Several Problems with Google SDK Speech-to-text (Transcription)

I've encountered several fairly serious problems in my efforts to develop a speech-to-text application. Some of them (I hope) may just be my lack of experience/common-sense/in-depth-reading/etc. Here's the list:
Long (>60 sec) transcriptions -- these force me to first upload the sound file to a GS bucket. Trouble is:
a. I have to run "gcloud auth login" on each machine I need to run on, and I have well over 50 machines. This appears to be a purely manual operation: you have to copy a long URL into your browser, hit enter, click on the right account, accept the permissions, then hand-copy and paste the key presented back into the gcloud prompt, and hit enter there. While the login does appear to be persistent to some degree, it is subject to one interesting constraint: only 51 machines (maybe 50, I got tired trying to count) are allowed, and the earliest logged-in machine is logged out to make room for the new login. This was very odious, and all this hassle is purely for using the buckets; a shorter transcription will not use GS and completes without complaint. Really! Is there no better way? Do we have to use gcloud auth login, manually, with a cap on the number of servers that can use Google Storage? (A non-interactive alternative is sketched after this question.)
Another Google Storage issue: transcription requires the bucket to be "public". We are pretty worried about security and the privacy of our customers, whose recordings will be in the uploaded bucket, even if only briefly.
The transcription API offers multiple languages, but the "phone_call" model is fixed to en-US and seems to ignore the language setting. If I change the request to es-US and supply a Spanish recording, it behaves the same. (Everything works OK in the 'command_and_search' model.) This seems to be in evolution; any idea when/if they will carry the multi-language features over to the phone_call model? (See the sketch after this question.)
If anyone can help, Oh Wise Ones, please impart of thy wisdom!
murf
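A hedged sketch in Go touching the auth and language/model points above. Google's client libraries accept Application Default Credentials, so a service-account key file exported as GOOGLE_APPLICATION_CREDENTIALS avoids interactive gcloud auth login on every machine; the bucket, object, and file names below are hypothetical, and whether "phone_call" honours es-US is exactly the open question raised in the post:

```go
// Sketch: upload a recording to GCS and run a long-running transcription,
// all with a service account instead of interactive gcloud auth login.
package main

import (
	"context"
	"fmt"
	"io"
	"log"
	"os"

	speech "cloud.google.com/go/speech/apiv1"
	"cloud.google.com/go/storage"
	speechpb "google.golang.org/genproto/googleapis/cloud/speech/v1"
)

func main() {
	ctx := context.Background()

	// Both clients pick up Application Default Credentials from
	// GOOGLE_APPLICATION_CREDENTIALS, so no per-machine login is needed.
	gcs, err := storage.NewClient(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer gcs.Close()

	// Upload the recording (bucket and file names are hypothetical). The
	// bucket should not need to be public if the service account can read it.
	f, err := os.Open("recording.flac")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	w := gcs.Bucket("my-transcripts").Object("recording.flac").NewWriter(ctx)
	if _, err := io.Copy(w, f); err != nil {
		log.Fatal(err)
	}
	if err := w.Close(); err != nil {
		log.Fatal(err)
	}

	// Long-running recognition; this is where the language code and model go.
	sp, err := speech.NewClient(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer sp.Close()
	op, err := sp.LongRunningRecognize(ctx, &speechpb.LongRunningRecognizeRequest{
		Config: &speechpb.RecognitionConfig{
			Encoding:        speechpb.RecognitionConfig_FLAC,
			SampleRateHertz: 8000,
			LanguageCode:    "es-US",      // the open question: does phone_call honour this?
			Model:           "phone_call",
		},
		Audio: &speechpb.RecognitionAudio{
			AudioSource: &speechpb.RecognitionAudio_Uri{Uri: "gs://my-transcripts/recording.flac"},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	resp, err := op.Wait(ctx)
	if err != nil {
		log.Fatal(err)
	}
	for _, result := range resp.Results {
		fmt.Println(result.Alternatives[0].Transcript)
	}
}
```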

Using DynamoDB from CakePHP 3 installed to Elastic Beanstalk

I have installed CakePHP 3 using directions from this tutorial:
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/php-cakephp-tutorial.html
It is working perfectly, and the installation was actually quite easy. PHP, CakePHP, and MySQL are all working, and I noticed that the newest AWS SDK as a whole is installed in the vendor directory. So I am fully set to also use DynamoDB as a data source. You might ask why I should use DynamoDB since I am already using MySQL/MariaDB: we have an application already in production that uses DynamoDB, and we should be able to write an admin application in CakePHP on top of it. This is not a technical decision; it comes from the business side.
I found a good tutorial by StarTutorial on how to use DynamoDB as a session handler in CakePHP 3:
https://www.startutorial.com/articles/view/using-amazon-dynamodb-as-session-handler-in-cakephp-3
Well, it can't be a long way from there to using DynamoDB for putting data, getting data, and doing scans, can it? Do you have a simple example of how to do it, how to write data to DynamoDB or do a scan?
I have also read the article:
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GettingStarted.PHP.html
and this is working fine, no problem. But I would like to keep all the advantages of CakePHP 3 (templating, security, and so on): thousands of hours of time saved with well-written code, and a very fast start on coding, for example, an admin console :)
Thank you,
You could create a Lambda function (in case you want to go serverless) or any other microservice to abstract communication with your DynamoDB. This will definitely simplify your PHP code. You may call Lambda functions directly (via API Gateway), or post messages to SQS for better decoupling. I would recommend SQS: you'll need some kind of microservice anyway to consume messages and deal with your DynamoDB in a CQRS fashion (see the sketch below). Hope it helps!
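A minimal sketch of the SQS half of that suggestion, shown in Go for compactness; the aws-sdk-php equivalent, SqsClient::sendMessage, takes the same QueueUrl/MessageBody parameters. The queue URL and message body here are hypothetical, and a consumer (Lambda or other microservice) would then apply the message to DynamoDB:

```go
// Sketch: decoupling writes by posting a message to SQS; a separate
// consumer would read it and update DynamoDB.
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/sqs"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	svc := sqs.New(sess)

	_, err := svc.SendMessage(&sqs.SendMessageInput{
		QueueUrl:    aws.String("https://sqs.us-east-1.amazonaws.com/123456789012/admin-writes"),
		MessageBody: aws.String(`{"action":"putUser","id":"42","name":"Alice"}`),
	})
	if err != nil {
		log.Fatal(err)
	}
}
```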
Thank you for your answer, but I was looking for an example of how to use the AWS SDK for DynamoDB without adding more complexity to this environment than it already has. Your suggestion means creating yet another layer instead of using the SDK that already exists. Can you please give a working example of how the AWS SDK is used from CakePHP 3, so that it can use DynamoDB as a data source for its applications without losing its own resources and capabilities (MVC, security, etc.)?
Thank you,
After some hard debugging and bug fixing, I was able to get it working using only the AWS SDK in CakePHP 3.
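For anyone landing here looking for the simple example the question asks about, here is a sketch of the basic put and scan calls. It is shown with the AWS SDK for Go for compactness; the aws-sdk-php v3 equivalents, DynamoDbClient::putItem and DynamoDbClient::scan, take the same parameter shapes. Table and attribute names are hypothetical:

```go
// Sketch: a basic PutItem and Scan against DynamoDB with aws-sdk-go v1.
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/dynamodb"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	svc := dynamodb.New(sess)

	// Write one item to a hypothetical "Users" table.
	_, err := svc.PutItem(&dynamodb.PutItemInput{
		TableName: aws.String("Users"),
		Item: map[string]*dynamodb.AttributeValue{
			"id":   {S: aws.String("42")},
			"name": {S: aws.String("Alice")},
		},
	})
	if err != nil {
		log.Fatal(err)
	}

	// Scan the table back (no error checking on attribute presence here).
	out, err := svc.Scan(&dynamodb.ScanInput{TableName: aws.String("Users")})
	if err != nil {
		log.Fatal(err)
	}
	for _, item := range out.Items {
		fmt.Println(aws.StringValue(item["id"].S), aws.StringValue(item["name"].S))
	}
}
```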

Is it OK to use the API instead of the SDK?

I like fast code execution (that is why I switched from Python to Go) and I do not like dependencies. Amazon recommends using the SDK for simpler authentication (but in Lambda I can get tokens from IAM through environment variables) and for the retry-on-error logic built into the SDK (a few lines of code, I would think). Yes, it is faster to write my code using the SDK, but what are the additional caveats of using the pure HTTP API instead of the SDK? Am I too obsessed with milliseconds? Are such optimizations worth it?
Anything you do with AWS is the result of an API call, whether executed by CLI, Web console, or SDK.
The SDKs make it easier to interact with those APIs. While you may be able to come up with some minor improvements for some calls, overall you will spend a lot of time doing it for very little benefit.
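As one concrete example of what you would be re-implementing, here is a rough sketch of retry with exponential backoff, which the SDKs provide out of the box. Real SDK retry logic also classifies throttling errors separately and adds jitter; the URL below is hypothetical:

```go
// Sketch: the retry-with-backoff behaviour an SDK gives you for free.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func getWithRetry(url string, maxAttempts int) (*http.Response, error) {
	var lastErr error
	for attempt := 0; attempt < maxAttempts; attempt++ {
		resp, err := http.Get(url)
		if err == nil && resp.StatusCode < 500 {
			return resp, nil // success, or a non-retryable client error
		}
		if err == nil {
			resp.Body.Close()
			lastErr = fmt.Errorf("server error: %s", resp.Status)
		} else {
			lastErr = err
		}
		// Exponential backoff: 100ms, 200ms, 400ms, ...
		time.Sleep(time.Duration(100<<uint(attempt)) * time.Millisecond)
	}
	return nil, lastErr
}

func main() {
	resp, err := getWithRetry("https://example.amazonaws.com/", 5)
	if err != nil {
		fmt.Println("giving up:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}
```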
I think the stated focus on performance obscures the real trade-offs.
Consider that someone will have to maintain your code: if you use the API, the test surface is small, but AWS APIs might change or be deprecated; if you use an SDK, the next programmer will plug in a new SDK version and hope that it works, and if it doesn't, they'll be bogged down by the sheer weight of the SDK.
Likewise, imagine someone needs to do a security review of this app, or to introduce something not yet covered by the SDK (say, propagating an accounting group from the caller role to the underlying storage).
I don't think there is a clear answer.
Here are my suggestions:
keep it consistent -- either API or SDK (within given app)
consider the bigger picture (how many apps do you plan to write?)
don't be afraid to switch to the other approach later
I've had to decide on something similar in the past, with Docker (much nicer APIs and SDKs/libs). Here's how it played out:
For testing, we ended up using a beta version of the Docker Python bindings: the production version was not enough, and the bindings (your "SDK") were overall pretty good and clear.
For log scraping, I used HTTP calls (your "API"), officially "because performance", though in reality it was about the comparative mental load of the API vs. the SDK, and because the bindings (SDK) did not support asyncio.

How to integrate AWS services for a language without an SDK

AWS provides SDKs only for some languages. How could I integrate AWS services in an application written in a language for which an official SDK is not provided, e.g. C, Scala, or Rust? I know that for Scala some AWS SDK projects are available, but as they are individual contributions (and not AWS releases), I am reluctant to use them.
All the SDKs do is wrap a minimal interface around the API calls made to the AWS servers. For any service you wish to integrate into your application, just head over to its API documentation and write your own code/wrappers.
For example, this link takes you to the API reference for the EC2 service.
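To give a feel for what "write your own wrapper" involves, here is a hedged, minimal sketch of AWS Signature Version 4 signing for a single GET request with no SDK at all, using EC2's DescribeRegions as the example call. The endpoint and API version are from the public EC2 documentation; the query string must already be sorted by parameter name, and most error handling is elided:

```go
// Sketch: hand-rolled SigV4 signing for one GET request, no SDK.
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"net/http"
	"os"
	"time"
)

func hmacSHA256(key []byte, data string) []byte {
	h := hmac.New(sha256.New, key)
	h.Write([]byte(data))
	return h.Sum(nil)
}

func hashHex(data string) string {
	sum := sha256.Sum256([]byte(data))
	return hex.EncodeToString(sum[:])
}

func main() {
	accessKey := os.Getenv("AWS_ACCESS_KEY_ID")
	secretKey := os.Getenv("AWS_SECRET_ACCESS_KEY")
	region, service, host := "us-east-1", "ec2", "ec2.us-east-1.amazonaws.com"
	query := "Action=DescribeRegions&Version=2016-11-15" // already sorted

	now := time.Now().UTC()
	amzDate := now.Format("20060102T150405Z")
	date := now.Format("20060102")

	// Task 1: canonical request (GET with an empty body).
	canonical := "GET\n/\n" + query + "\n" +
		"host:" + host + "\n" + "x-amz-date:" + amzDate + "\n\n" +
		"host;x-amz-date\n" + hashHex("")

	// Task 2: string to sign.
	scope := date + "/" + region + "/" + service + "/aws4_request"
	stringToSign := "AWS4-HMAC-SHA256\n" + amzDate + "\n" + scope + "\n" + hashHex(canonical)

	// Task 3: derive the signing key and compute the signature.
	k := hmacSHA256([]byte("AWS4"+secretKey), date)
	k = hmacSHA256(k, region)
	k = hmacSHA256(k, service)
	k = hmacSHA256(k, "aws4_request")
	signature := hex.EncodeToString(hmacSHA256(k, stringToSign))

	// Task 4: attach the Authorization header and send.
	req, _ := http.NewRequest("GET", "https://"+host+"/?"+query, nil)
	req.Header.Set("X-Amz-Date", amzDate)
	req.Header.Set("Authorization",
		"AWS4-HMAC-SHA256 Credential="+accessKey+"/"+scope+
			", SignedHeaders=host;x-amz-date, Signature="+signature)

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}
```

Multiply this by every request shape (POST bodies, unsigned-payload S3 uploads, header canonicalization edge cases) and the scale of the maintenance task described in the next answer becomes clear.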
In the early days of AWS, we needed an SDK for C++. At that time an SDK for C++ did not exist, so I wrote one based on the REST API. This is no easy task, as the Amazon API is huge, and by the time you complete coding for one service, you have to go back and update everything with all of the AWS feature improvements and changes. It seems like a never-ending loop.
Several of the AWS SDKs were started by third party developers and then contributed to Amazon as open source projects. If you have a popular language that you feel that others could benefit from, start an open source project and get everyone involved. It could become an official project if there is enough demand. Ping me if you do as I might be interested in contributing.

AWS - Is there anything the API can't do?

A mostly pointless question, but I'm curious nonetheless and Google gave me nothing (so hey, let's let Google index this one for next time).
Is there anything that explicitly cannot be done on an AWS account through the API alone?
E.g. is there something for which you MUST log in to the console, or perhaps use some other method?
For argument's sake, if I were to go ahead and develop an exact copy of the web console, obviously utilising the API, is there anything my web console couldn't do?
There are features that are available only in the console. For example, the recently released ability to see the last time a particular IAM user or role was actually used is available only in the console. And scheduled Lambda functions originally appeared as a console-only feature, but are now available via the CloudWatch Events API (see the sketch below).
It's a pretty rare thing. For the most part, the console is built on the APIs, but it does happen.
And there are many examples of capabilities in the SDKs that are not available in the console.
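As an illustration of the scheduled-Lambda case mentioned above now being API-accessible, here is a sketch using the CloudWatch Events API via aws-sdk-go. The rule name, schedule, and function ARN are hypothetical, and you would also need a lambda AddPermission call so Events is allowed to invoke the function:

```go
// Sketch: creating a scheduled Lambda trigger through the
// CloudWatch Events API, once a console-only feature.
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/cloudwatchevents"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	svc := cloudwatchevents.New(sess)

	// Rule that fires every five minutes.
	_, err := svc.PutRule(&cloudwatchevents.PutRuleInput{
		Name:               aws.String("every-five-minutes"),
		ScheduleExpression: aws.String("rate(5 minutes)"),
	})
	if err != nil {
		log.Fatal(err)
	}

	// Point the rule at the Lambda function.
	_, err = svc.PutTargets(&cloudwatchevents.PutTargetsInput{
		Rule: aws.String("every-five-minutes"),
		Targets: []*cloudwatchevents.Target{
			{Id: aws.String("1"), Arn: aws.String("arn:aws:lambda:us-east-1:123456789012:function:myFunc")},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
}
```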