Is it ok to use API instead of SDK? - amazon-web-services

I like fast code execution (that's why I switched from Python to Go) and I don't like dependencies. Amazon recommends the SDK for simpler authentication (but in Lambda I can get IAM credentials from environment variables) and for its built-in retry on errors (which looks like only a few lines of code to write myself). Yes, it's faster to write my code using the SDK, but what other caveats are there to using the raw HTTP API instead of the SDK? Am I too obsessed with milliseconds? Are such optimizations worth it?

Anything you do with AWS is the result of an API call, whether it's executed by the CLI, the web console, or an SDK.
The SDKs make it easier to interact with those APIs. While you may be able to come up with some minor improvements for some calls, overall you will spend a lot of time doing it for very little benefit.
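To make that concrete, here's a rough sketch (using the JavaScript v2 SDK for illustration; the Go SDK exposes equivalent knobs, and the bucket/key names are placeholders) of what the SDK handles for you compared with a hand-rolled HTTP call:

```typescript
// A rough sketch (AWS SDK for JavaScript v2; bucket/key are placeholders) of
// what the SDK gives you "for free" compared with hand-rolled HTTP calls.
import AWS from "aws-sdk";

// Credentials are resolved automatically (in Lambda, from the environment),
// requests are SigV4-signed, and throttling/5xx errors are retried with
// exponential backoff -- all configured in one place.
const s3 = new AWS.S3({ region: "us-east-1", maxRetries: 5 });

async function fetchObject(): Promise<Buffer> {
  const res = await s3
    .getObject({ Bucket: "my-bucket", Key: "config.json" })
    .promise();
  return res.Body as Buffer;
}

// Going through the raw HTTP API instead means re-implementing the SigV4
// canonical request and signature, credential resolution and refresh, retry
// classification, and backoff yourself.
```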

I think the stated focus on performance obscures the real trade-offs.
Consider that someone will have to maintain your code -- if you use the raw API, the surface to test is small, but AWS APIs might change or be deprecated; if you use an SDK, the next programmer will plug in a new SDK version and hope that it works, and if it doesn't, they'll be bogged down by the sheer weight of the SDK.
Likewise, imagine someone needs to do a security review of this app, or to add something not yet covered by the SDK (say, propagating an accounting group from the caller's role down to the underlying storage).
I don't think there is a clear answer.
Here are my suggestions:
keep it consistent -- either API or SDK (within a given app)
consider the bigger picture (how many apps do you plan to write?)
don't be afraid to switch to the other approach later
I've had to decide on something similar in the past, with Docker (which has much nicer APIs and SDKs/libs). Here's how it played out:
For testing, we ended up using a beta version of the Docker Python bindings: the production version wasn't enough, and the bindings (your "SDK") were overall pretty good and clear.
For log scraping, I used raw HTTP calls (your "API"), "because performance" -- in reality because the mental load of the API vs. the SDK was comparable, and because the bindings (SDK) did not support asyncio.

Related

Upload data from Linux in C++ to Firebase service

I'm working on an IoT application on top of embedded Linux, and I want to collect log data (mostly text files) of the devices.
The language I use is C++. I went through the documentation/tutorials for Firebase; however, it looks like only iOS, Android, and Web (JS) are well supported, and even the C++ part assumes the device it runs on is iOS/Android.
Is Firebase a good choice for my requirement? Should I just go ahead with the C++ SDK, or use the REST API instead (which I can do with libcurl)?
Thanks.
Depending on the complexity of what you want to do, you might just want to use the REST API.
Your biggest hurdle there is likely going to be the authentication part; once you get that out of the way, using the API itself is extremely simple.
Since you're talking about embedded Linux, your resources might be limited, which for me personally would be a reason to go with the REST API approach.
It comes down to ease of use (SDK) versus staying lightweight (REST API). That's my 2 cents, anyway...
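To show how simple "the API itself" can be, here's a rough sketch of appending a log entry via the Realtime Database REST endpoint (written in TypeScript for brevity; with libcurl it's the same URL, method, headers, and body -- the database URL and the token acquisition are placeholders):

```typescript
// Hypothetical database URL; how you obtain the token depends on the auth
// scheme you settle on (e.g. exchanging a device credential for an ID token).
const DB_URL = "https://your-project-id-default-rtdb.firebaseio.com";

async function uploadLogLine(deviceId: string, line: string, token: string) {
  // POST appends the payload under an auto-generated key at /logs/<deviceId>.
  const res = await fetch(`${DB_URL}/logs/${deviceId}.json?auth=${token}`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ ts: Date.now(), line }),
  });
  if (!res.ok) throw new Error(`Upload failed: ${res.status}`);
  return res.json(); // { name: "<generated key>" }
}
```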

Can AWS Lambda replace an entire REST API layer in an enterprise web application?

I am new to AWS and have been reading about AWS Lambda. It's very useful, but you still have to write individual Lambda functions instead of the application as a whole. I am wondering whether, practically, AWS Lambda can replace an entire REST API layer in an enterprise web application.
Of course, everything is possible in the computer world, but you need to answer the question: is Lambda/serverless the best way for me?
For example, you need a smaller business flow per Lambda (Lambda has hardware limits and needs short compute and startup times to keep costs down), which means you must split up your flow; whether that succeeds depends on your business area and implementation. Does your domain fit this model? Lambda, combined with other AWS services, can handle almost everything (to be honest, in some cases Lambda is a bit harder than the current system, and community support is smaller than for traditional setups, but it also has lots of advantages, as you know). You can check this repo, a fully serverless booking app, and this serverless e-commerce repo.
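To illustrate what "a smaller business flow per Lambda" looks like, here's a rough TypeScript sketch where each function owns one narrow operation behind API Gateway (the handler name, event shape, and storage step are all placeholders):

```typescript
import { APIGatewayProxyEvent, APIGatewayProxyResult } from "aws-lambda";

// One function = one small flow (here: create a booking). Other flows
// (cancel, list, pay, ...) would be separate functions with their own
// memory/timeout settings and IAM permissions.
export const createBooking = async (
  event: APIGatewayProxyEvent
): Promise<APIGatewayProxyResult> => {
  const input = JSON.parse(event.body ?? "{}");

  // Validate input, write to a data store, emit an event, etc. -- kept short
  // so the function starts and finishes quickly, which keeps costs down.

  return {
    statusCode: 201,
    body: JSON.stringify({ id: "placeholder-id", ...input }),
  };
};
```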
To sum up, if your team is ready for it, you can start by converting some part of your application and checking that everything is OK. The answer really depends on your team and business, because nothing is impossible -- and that's engineering.
That's just my opinion, since your question is really asking for opinions.

Google API Speeds Slow in Cloud Run / Functions?

Bottom line: Cloud Run and Cloud Functions seem to have bizarrely limited bandwidth to the Google Drive API endpoints. Looking for advice on how to work around it, or, ideally, for Google support to fix the underlying issue(s), as I will not be the only one with a use case like this.
Background: I have what I think is a really simple use case. We're trying to let private-domain Google Drive users take existing audio recordings, send them off to the Speech API to generate a transcript on an ad hoc basis, and drop the transcript back into the same Drive folder with an email notification to the submitter. Easy, right? The only hard part is that the Speech API will only read from Google Cloud Storage, so the 'hard part' should be moving the file over. 'Hard' doesn't really cover it...
Problem: Writing in Node.js and using the latest version of the official modules for Drive and GCS, the file copying was going extremely slowly. When we broke things down, it became apparent that the GCS speed was acceptable (mostly -- honestly it didn't get a robust test, but it was fast enough in limited testing); it was the Drive ingress that was causing the real problem. Even the sample Google Drive download app from the repo was slow as can be. Thinking the issue might be either my code or the library, I ran the same thing from the Cloud Console, and it was fast as lightning. Same with GCE. Same locally. But in Cloud Functions or Cloud Run, it's like molasses.
Request:
Has anyone in the community run into this or a similar issue and found a workaround?
Google -- any chance that, whatever the underlying performance bottleneck is, you can fix it? This is a quintessentially 'serverless' use case, and it's hard to believe that the folks who've been doing this the longest can't crack it.
Thank you all in advance!
Updated 1/4/19 -- GCS is also slow following more robust testing. Image base also makes no difference (tried nodejs10-alpine, nodejs12-slim, nodejs12-alpine without impact), and memory limits equally do not impact results locally or on GCP (256m works fine locally; 2Gi fails in GCP).
Google Issue at: https://issuetracker.google.com/147139116
Self-inflicted wound. The Google-provided sample code tries to be asynchronous and do work in the background after responding. Cloud Run and Cloud Functions do not support that model (for now at least): once the response is sent, the work no longer gets the CPU attention it needs. Move to promise chaining, so everything finishes before you respond, and all of a sudden it works like it should. This limits what we can do with Cloud Run / Cloud Functions, but hopefully that too will evolve.
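In other words: don't respond and then keep working. A rough Express-style sketch of the difference (the copy helper is a placeholder for the Drive-to-GCS transfer):

```typescript
import express from "express";

const app = express();
app.use(express.json());

// Anti-pattern: reply first, then do the work "in the background". Once the
// response is sent, Cloud Run / Cloud Functions may throttle the CPU, so the
// un-awaited promise crawls or never finishes.
app.post("/transcribe-background", (req, res) => {
  res.status(202).send("accepted");
  copyDriveFileToGcs(req.body.fileId); // fire-and-forget: don't do this here
});

// Fix: chain/await everything before responding, so the instance keeps its
// CPU for the whole transfer.
app.post("/transcribe", async (req, res) => {
  try {
    await copyDriveFileToGcs(req.body.fileId);
    res.status(200).send("done");
  } catch (err) {
    res.status(500).send(String(err));
  }
});

// Placeholder for the Drive download piped into a GCS upload, e.g.
// drive.files.get({ fileId, alt: "media" }, { responseType: "stream" })
// piped into bucket.file(...).createWriteStream().
async function copyDriveFileToGcs(fileId: string): Promise<void> {
  // ...
}

app.listen(Number(process.env.PORT) || 8080);
```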

Which AWS JS SDK package(s) to use for Cognito?

As of now, there are at least 5 packages for the AWS SDK as it pertains to Cognito.
Custom built via multiple mechanisms: https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/building-sdk-for-browsers.html
https://github.com/aws/amazon-cognito-identity-js
https://github.com/aws/amazon-cognito-auth-js
https://github.com/aws/amazon-cognito-js
The global SDK https://github.com/aws/aws-sdk-js
Some of them overlap in terms of methods. Many are only slightly different. The docs and links vary from outdated to flat-out incorrect.
Most docs are in ES5, some in ES6, some in TypeScript, some in Node.
How are developers supposed to make heads or tails of these?
I work with Cognito every day as a developer. I recommend starting with the AWS JavaScript SDK (the full SDK). Everything that you need for Cognito development is there, and it is always up to date. Once you know the details of Cognito, take a look at the higher-level packages. By that time you will probably have written your own library of code and won't consider the others.
The problem with Cognito development is that unless you stay with the core SDK, the higher-level packages either don't exist for, or are not compatible with, the SDKs for other platforms or languages such as PHP or Java.
Depending on your goals and requirements, you may need to support mobile, desktop, server, Lambda, etc. If you stay with the core SDK, you can quickly adapt to each environment. If you use a higher-level package that only works with, for example, Node.js, then you have a porting problem.
[EDIT]
One item that I forgot to mention is that Cognito is really three different services, and therefore three different sections (classes, etc.) within the SDK: Cognito User Pools, Cognito Federated Identities, and Cognito Sync. Some of the higher-level SDKs support only one of them, or just parts of one, to make the interfaces easier (or more intuitive).
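As a concrete starting point with the full SDK, the User Pools piece lives in CognitoIdentityServiceProvider; here's a rough sketch (region, app client ID, and credentials are placeholders, and the USER_PASSWORD_AUTH flow has to be enabled on the app client):

```typescript
import AWS from "aws-sdk";

const cognito = new AWS.CognitoIdentityServiceProvider({ region: "us-east-1" });

// Sign a user in against a User Pool app client (placeholder IDs/credentials).
async function signIn(username: string, password: string) {
  const res = await cognito
    .initiateAuth({
      AuthFlow: "USER_PASSWORD_AUTH",
      ClientId: "your-app-client-id",
      AuthParameters: { USERNAME: username, PASSWORD: password },
    })
    .promise();

  // Federated Identities (AWS.CognitoIdentity) and Sync (AWS.CognitoSync)
  // are separate service classes in the same SDK.
  return res.AuthenticationResult?.IdToken;
}
```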

version control + continuous integration with Flex + Ruby or Django

I'm trying to pick version control, continuous integration, and hosting for a smallish Flex + Ruby or Django project. Questions:
Version control: I've used SVN and CVS in the past. I hear great things about Git. Not sure what to pick.
Continuous integration: I've heard good things about Hudson and CruiseControl. Not sure what to pick.
Hosting: is my own server the only way to go? Are there decent cloud options that are not too expensive? Or should I look for some free hosting service?
Thank you for your help!
Use Git.
Git is a great tool that allows a very flexible workflow. It has lots of benefits over Subversion/CVS, the biggest of which is the ability to branch and merge seamlessly. This can't be overstated. The merge hell that ensues when attempting to use SVN's branching and merging is a thing of the past. For a better case on why to use Git, check out http://whygitisbetterthanx.com/
Use Hudson.
Hudson is easily the best CI tool in the game. The reason Hudson is the best is that it's easy to configure (for one or multiple nodes), it has a ton of plugins, and it handles the 90% use case extremely well. You are in the 90% use case. Folks like Mozilla aren't. Check out C. Titus Brown's talk at PyCon for more info: http://pycon.blip.tv/file/3259794/ (If you decide that Hudson isn't what you should use, check out Buildbot.)
Use Webfaction (or Rackspace Cloud).
Webfaction is a great starting ground. If your needs are low, check them out. Beyond that, I'd suggest taking a hard look at Rackspace Cloud (RSC). RSC makes scaling out much easier, and their pricing model is very palatable for things that aren't bandwidth-intensive (i.e., most things that don't require tons of uploads/downloads). It starts at $10/mo. Their management console is good (save for the DNS administration interface, but even that is more than bearable). If your needs expand beyond RSC (doubtful), you would do well to check out Amazon's EC2. Companies like RightScale can help when it comes to scaling out.