I'm not familiar with AWS lambdas.
Does AWS provide Swagger or anything similar for Lambdas?
My use case doesn't require me to create REST endpoints for the Lambdas. Do I need to work with API Gateway as well?
I'm not looking for cli solutions: sam/aws-cli/serverless/...
Everything in AWS uses the same public API that AWS provides. How that's implemented depends on the service. The official SDKs/CLIs use that same API and abstract the implementation details for you, so I'd recommend sticking to those.
If you want to build your own tool to talk to the AWS APIs or more specifically Lambda, you can have a look at the official developer guide, which includes an API-Reference.
More specifically you're going to need these two actions:
CreateFunction
DeleteFunction
You should be aware that in that case you need to implement the Signature v4 process yourself to sign your requests with your AWS credentials, which is non-trivial. This signing process is used to authenticate yourself to AWS, or more specifically to Identity and Access Management.
The API reference doesn't directly list the API endpoints, but you're going to have to use the one for Lambda in the region you want to create/delete your functions in, e.g. https://lambda.eu-central-1.amazonaws.com, where eu-central-1 is your region. For a full list of the service endpoints for Lambda, take a look at this documentation.
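To give a feel for what "non-trivial" means, here is a minimal sketch of just the key-derivation and final-signature steps of SigV4, using only the standard library. The canonical-request and string-to-sign construction that precede these steps are omitted, and all values are illustrative, not real credentials:

```python
import hashlib
import hmac

def derive_signing_key(secret_key: str, date: str, region: str, service: str) -> bytes:
    """SigV4 key derivation: four chained HMAC-SHA256 steps."""
    def _hmac(key: bytes, msg: str) -> bytes:
        return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

    k_date = _hmac(("AWS4" + secret_key).encode("utf-8"), date)  # date as YYYYMMDD
    k_region = _hmac(k_date, region)                             # e.g. eu-central-1
    k_service = _hmac(k_region, service)                         # e.g. lambda
    return _hmac(k_service, "aws4_request")

def sign(string_to_sign: str, signing_key: bytes) -> str:
    """Final signature: hex-encoded HMAC-SHA256 over the string to sign."""
    return hmac.new(signing_key, string_to_sign.encode("utf-8"), hashlib.sha256).hexdigest()
```

A real client then places this signature, together with the credential scope and signed-headers list, into the Authorization header of each request — which is exactly the machinery the SDKs handle for you.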
I'd really recommend you stick to one of the official SDKs/CLIs - this will make your life much easier.
I have a multi-account strategy in AWS. Everything is deployed using the CDK. Each service has its own account, and I want to achieve this (recommended by this AWS blog post):
If I deploy the API account first, it needs the other account's lambda ARNs for integration, which are not yet created.
If I deploy a service account first, it needs the API methods ARNs for giving them permission to invoke the lambdas.
I think this is kind of a "deadlock" situation and I can't figure it out.
Putting it in other words, how can I integrate, using the CDK, the API account's methods with lambdas from another account?
Thanks!
There's no "one size fits all" approach to problems like these.
A common approach I have previously seen:
Define the component with the fewest dependencies on other components; let's say in this case it's the microservice
Replace the dependency parameters with placeholders; for example, instead of allowing the API account to invoke the lambda, allow the microservice's own account to invoke it first
Now you have the lambda ARNs of the microservice, which you can use in other components
Repeat until all components are deployed (but not necessarily functional)
Now you can replace the placeholder values in the original microservice deployment
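The two passes above can be sketched as plain data transforms. All account IDs and ARNs below are made up, and the dicts only mirror the general shape of Lambda's AddPermission parameters:

```python
def first_pass_permission(function_arn: str, service_account_id: str) -> dict:
    """Pass 1: placeholder - let the microservice's own account invoke the function."""
    return {
        "FunctionName": function_arn,
        "StatementId": "placeholder-invoke",
        "Action": "lambda:InvokeFunction",
        "Principal": service_account_id,
    }

def second_pass_permission(function_arn: str, api_method_arn: str) -> dict:
    """Pass 2: once the API account is deployed, scope the permission to its method ARN."""
    return {
        "FunctionName": function_arn,
        "StatementId": "api-invoke",
        "Action": "lambda:InvokeFunction",
        "Principal": "apigateway.amazonaws.com",
        "SourceArn": api_method_arn,
    }

# Hypothetical ARNs for illustration only.
fn_arn = "arn:aws:lambda:eu-central-1:111111111111:function:orders"
api_arn = "arn:aws:execute-api:eu-central-1:222222222222:abc123/*/GET/orders"
placeholder = first_pass_permission(fn_arn, "111111111111")
final = second_pass_permission(fn_arn, api_arn)
```

The key property is that pass 1 needs nothing from the API account, so it breaks the circular dependency; pass 2 simply tightens the permission once the method ARNs exist.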
AWS Amplify and the "Applications" feature within AWS Lambda seem to have a few things in common:
Both seem to be a wrapper around several AWS resources
Both walk you through a guided setup to provision a working AWS stack
Both set up some CI/CD pipelines for you so that Git commits trigger a build and deploy
So what are the differences between these two services, and what are some scenarios where you might want to choose one over the other?
AWS Amplify is a toolchain for front-end developers to interact with AWS resources. It provides a cli program to manage resources and (JS/Android/iOS) libraries to integrate them into your front-end applications.
It doesn't 'wrap' resources, but is merely a convenience layer to manage them (it is somewhat similar to AWS SAM); Amplify generates CloudFormation templates, stores those locally, and uses aws-cli to provision them. Note that Amplify can also be used just as a front-end library to integrate resources that are already set up.
AWS Lambda Applications is an actual AWS service, or rather a feature of AWS Lambda. It groups related resources so they can be managed and deployed as if they were a single resource.
... what are some scenarios where you might want to choose one over the other?
Amplify is aimed at web- and mobile developers: it allows them to manage backend resources without much backend knowledge.
It is not a matter of 'using one over the other'; they can actually be used in conjunction with each other.
I'm learning serverless architectures and currently reading this article on Martin Fowler's blog.
So I see this scheme and try to replace the abstract components with AWS solutions. I wonder whether not using an API gateway to control access to S3 is a good idea (in the image, database no. 2 is not behind one). Martin speaks about Google Firebase, and I'm not familiar with how it compares to S3.
https://martinfowler.com/articles/serverless/sps.svg
Is it a common strategy to expose S3 to client-side applications without configuring an API gateway as a proxy between them?
To answer your question - probably, yes.
But you've made a mistake in selecting AWS services for the abstract components in Martin's blog, and you probably shouldn't use S3 at all in the way you're describing.
Instead of S3, you'll want DynamoDB. You'll also want to look at Cognito for auth.
Have a read of this after Martin's article for how to apply what you've learned to AWS-specific services: https://aws.amazon.com/getting-started/hands-on/build-serverless-web-app-lambda-apigateway-s3-dynamodb-cognito/
AWS S3 is not a database; it's an object storage service.
Making an S3 bucket publicly accessible is possible but not recommended. However, you can access its objects using the S3 API, either via the CLI or an SDK.
Back to your question in the comments about whether consuming the API directly from the frontend (assuming you mean using JS) is a bad practice: it certainly is. Every AWS API call must include the API credentials (keys) AWS issued for your IAM user, and AWS highly recommends storing those credentials securely; if they are embedded in a web application, anyone using it can see the keys.
Hope this answered your question.
How do I configure credentials to use AWS services from inside EKS? I cannot use the AWS SDK for this specific purpose. I have specified a role with the required permissions in the YAML file, but it does not seem to pick up the role.
Thank you.
Any help is appreciated.
Typically you'd want to apply some level of logic to allow the pods themselves to obtain IAM credentials from STS. AWS does not currently provide a native way to do this (it's re:Invent now, so you never know). The two community solutions we've implemented are:
kube2IAM: https://github.com/jtblin/kube2iam
kIAM: https://github.com/uswitch/kiam
Both work well in production/large environments in my experience. I prefer kIAM's security model, but both get the job done.
Essentially they work the same basic way: intercepting (for lack of a better word) communications between the SDK libraries in the container and STS, matching the identity of the pod against an internal role dictionary, then obtaining STS credentials for that role and handing those creds back to the container. The SDK isn't inherently aware it's in a container; it's just doing what it does anywhere — walking its access tree until it sees the need to obtain creds from STS, and receiving those.
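That credential walk can be illustrated with a toy provider chain. The provider names and return shapes here are simplified stand-ins, not the SDK's real classes:

```python
from typing import Optional

def resolve_credentials(chain: list) -> dict:
    """Mimic the SDK's default chain: ask each provider in order,
    return the first set of credentials found."""
    for provider in chain:
        creds = provider()
        if creds is not None:
            return creds
    raise RuntimeError("no credentials found in chain")

def env_provider() -> Optional[dict]:
    """Nothing set in the environment in this toy example."""
    return None

def metadata_provider() -> Optional[dict]:
    """Stands in for the (intercepted) metadata/STS call that kube2iam or kIAM answers."""
    return {"AccessKeyId": "EXAMPLE", "SecretAccessKey": "EXAMPLE", "Token": "EXAMPLE"}

creds = resolve_credentials([env_provider, metadata_provider])
```

From the SDK's point of view nothing special happened: an earlier provider came up empty, a later one answered, and the interceptor decided what that answer contained based on the pod's annotated role.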
I would like to use AWS Lambda to perform a computation on behalf of a 3rd party and then prove to them that I did so as intended. A proof would be a cryptographically signed digest of the function body, the request, and the response. Ideally, Amazon would sign the digest with its own private key and publish their public key to allow verification of the signature. The idea is similar to the "secure enclave" that new Intel chips provide through SGX (Software Guard Extensions).
The existing Lambda service has some of the ingredients needed. For example, the GetFunction response includes a CodeSha256 field that uniquely identifies the function implementation. And the Amazon API Gateway allows you to make HTTPS requests to the Lambda service, which might allow a TLSNotary-style proof of the request-response contents. But to do this right I think AWS Lambda needs to provide the signature directly.
Microsoft Azure is working on trusted software enclaves ("cryptlets") in their Project Bletchley:
https://github.com/Azure/azure-blockchain-projects/blob/master/bletchley/bletchley-whitepaper.md
https://github.com/Azure/azure-blockchain-projects/blob/master/bletchley/CryptletsDeepDive.md
Is something like this possible with the current AWS Lambda?
Let's make some definitions first: Lambda isn't a server but a service that runs your code. It does not provide any signature directly, only what you configure for it on AWS.
The Secure Enclave is one implementation of a TPM (Trusted Platform Module)-style design; this can be done in many ways, and the Secure Enclave is one of the best.
The short answer to your question is yes, it can be done, as long as you implement the needed code and add all the required configuration, SSL, etc.
I would advise you to read the following: http://ieeexplore.ieee.org/document/5703613/?reload=true
And in case you want a TPM out of the box, you can use this Microsoft project: https://github.com/Microsoft/TSS.MSR
AWS takes a different approach to security: you can define what may use a particular resource, and in which way.
You can certainly do what you describe. You can identify the request, the response, and the exact version of the code that was used. The question is whether you want to sign the code while processing the request; the easier way is to have that calculated on deploy.
For the first case you need a language with access to its own source. With Python, for example, you can read the source, sign it, and return or store the signature somewhere.
For the second case, I would use tagging.
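For the on-deploy route, the digest Lambda later reports as CodeSha256 is the base64-encoded SHA-256 of the deployment package, so you can compute it locally at deploy time and sign it. The HMAC below is only a stand-in for a real signature (e.g. KMS asymmetric signing):

```python
import base64
import hashlib
import hmac

def code_sha256(zip_bytes: bytes) -> str:
    """Base64-encoded SHA-256 of the zip - the format Lambda reports as CodeSha256."""
    return base64.b64encode(hashlib.sha256(zip_bytes).digest()).decode("ascii")

def sign_digest(digest: str, signing_key: bytes) -> str:
    """Stand-in signature over the digest; use a real asymmetric key pair in practice."""
    return hmac.new(signing_key, digest.encode("ascii"), hashlib.sha256).hexdigest()
```

Storing the signed digest alongside each deployment lets anyone later compare it against what GetFunction reports for the running version.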
There is also another solution to the problem, using IAM. You can provision an IAM role for your customer that has read access to the Lambda source code. By using the public API endpoint (the one that looks like https://api-id.execute-api.region.amazonaws.com/STAGE), you can assure the customer that the request directly hits this specific Lambda function.
The IAM role available to your customer has permissions to do the following:
View the lambda code and other details across all revisions
Read the API gateway configuration to validate that the request directly hits the lambda, and doesn't go elsewhere.
All your customer then needs to do is set up auditing at their end against Lambda using the given IAM role. They can set up a periodic cron that downloads all versions of your Lambda as it is updated. If you have a pre-review process, that can easily be wired into their alerting.
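That periodic audit could boil down to comparing reported digests against a pinned allow-list. The version numbers and digests below are invented for illustration:

```python
def audit_versions(reported: dict, approved: set) -> list:
    """Return the version numbers whose CodeSha256 digest was never approved."""
    return [version for version, digest in sorted(reported.items())
            if digest not in approved]

# Hypothetical output of listing all function versions with their digests.
reported = {"1": "digestA", "2": "digestB", "3": "digestC"}
approved = {"digestA", "digestB"}
unapproved = audit_versions(reported, approved)  # version "3" gets flagged for alerting
```

Anything this returns is a version that shipped without going through the customer's review, which is exactly the signal their alerting should fire on.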
Note that this relies on "AWS" running in good faith and the underlying assumptions being:
AWS Lambda is running the code it is configured against.
AWS management APIs return correct responses.
The time-to-alert is reasonable. This is easier, since you can download previous lambda code versions as well.
All of these are reasonable assumptions.