I've got a piece of code that I need to make available over the 'Net. It's a perfect fit for an AWS Lambda with an HTTP API on top: a stateless, side-effect-free, rather CPU-intensive function - blob in, blob out. It's written in C#/.NET, but it's not pure .NET: it makes use of the UWP API and therefore requires Windows Server 2016.
AWS Lambdas only run on Linux hosts, even C# ones. Is there any way to deploy this piece in the Amazon cloud in a serverless manner - maybe something other than a Lambda? I know I can go with an EC2 VM, but this is the very kind of thing serverless architecture was invented for.
Lambda is the only option for serverless computing on AWS, and Lambda functions run only on Linux machines.
If you need to run serverless functions on a Windows machine, try Azure Functions. That's the Lambda equivalent in the Microsoft cloud. I'm not sure whether it runs on a Windows Server 2016 machine and couldn't find any reference to the underlying platform, but since it's a brand new service, I would expect them to be running it on their own latest tech.
To confirm whether the platform is what you need, try this function:
#r "System.Management" // reference the WMI assembly by name (C# script syntax)

using System.Linq;
using System.Management;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
{
    // Get the OS friendly name via WMI
    // http://stackoverflow.com/questions/577634/how-to-get-the-friendly-os-version-name
    var caption = (from x in new ManagementObjectSearcher("SELECT Caption FROM Win32_OperatingSystem").Get().Cast<ManagementObject>()
                   select x.GetPropertyValue("Caption")).FirstOrDefault();
    string name = caption != null ? caption.ToString() : "Unknown";

    // The function response carries the OS name
    return req.CreateResponse(HttpStatusCode.OK, name);
}
I think you can achieve this via a combination of the CodeDeploy service and AWS CodePipeline.
Refer to this article:
http://docs.aws.amazon.com/codedeploy/latest/userguide/getting-started-windows.html
to learn how to deploy code via CodeDeploy. Then see this article:
http://docs.aws.amazon.com/codepipeline/latest/userguide/getting-started-4.html
to learn how you can configure AWS CodePipeline to call CodeDeploy and then execute your batch job on the created Windows machine (note: you will probably want to use S3 instead of GitHub, which is possible with CodePipeline).
I would consider bootstrapping the whole configuration via a script using the AWS CLI - that way you can easily clean up your resources, like this:
aws codepipeline delete-pipeline --name "MyJob"
Of course you can also configure the pipeline via the AWS web console and leave it configured to run your code on a regular basis.
I am looking for a language/framework or a method by which I can build API/web application code such that it can run on serverless compute services like AWS Lambda, and the same code can run on a dedicated compute system like Lightsail or EC2.
First I thought of using Docker to do this, but the AWS Lambda entry point is a specific function signature, which is very different from Spring controllers. Is there a solution available currently?
So basically, when I run it on Lambda it will have the cold start issue; later, when the app is ready or gets popular, I would like to move it to an EC2 instance for better performance and higher traffic load.
I want to start right in this situation so that later it is easy to port and resolve the performance issues.
I'd say no, this is not easily possible.
When you are building an API that you want to run on Lambdas, you will most likely be using an API Gateway, which takes care of routing to different Lambda functions (best practice). So the moment you start working on an API like this, migrating to EC2 would be a nightmare, as you would need to rebuild the whole application as more of a monolith that could run on EC2.
I would honestly commit to either running it on EC2/containers or running it on Lambda. If cold start is your main issue with Lambdas, you might want to look into Lambda SnapStart for Java or use another language like TypeScript/Python.
After searching with the right keywords on Google, I finally found what I was looking for. Check out this blog post and code library shared by AWS, which convert the Lambda request and response into the HTTP request/response objects the framework expects:
Running APIs Written in Java on AWS Lambda: https://aws.amazon.com/blogs/opensource/java-apis-aws-lambda/
Repo Code: https://github.com/awslabs/aws-serverless-java-container
Thanks, Ricardo, for your response - I will definitely check out Lambda SnapStart and try it as well. I have not tested this completely yet, but it looks promising to some extent.
Hi, I am using the Serverless Framework to develop my application and I need to set it up in a local environment. I am using API Gateway, Lambda, VPC, SNS, SQS, and a DB connected via VPC peering. Currently, every time I want to test my code I have to deploy it, which is a tedious process and takes 5 minutes per deploy. Is there any way to set up a local environment that has everything in one place?
It should be possible in theory, but it is not an easy thing to do. There are products like LocalStack that offer exactly this.
But I would not recommend going that route. By design, this will always be a huge cat-and-mouse game: AWS introduces a new feature or changes some minor detail of their implementation, and products like LocalStack need to catch up. Furthermore, you will always only get an "approximation" of the "actual cloud"; it will never be a 100% match.
I would also expect a lot of work to get products like LocalStack working properly with your setup and to keep them running well.
Therefore, I would propose investing the same time into a proper developer experience within the "actual cloud". That is what we do: every developer deploys their own version of the project to AWS.
This is also not trivial, but the end result is not a "fake version" of the cloud that might or might not reflect the "real cloud".
The key to achieving this is infrastructure as code and as much automation as possible. We use Terraform and Makefiles, which works very well for us. If done properly, we only ever build and deploy what we changed. The result is that changes can be deployed to AWS in seconds, and the developer can test the result either through the Makefile itself or using the AWS console.
Another upside is that, in theory, you need to do all of the same work anyway for your continuous deployment, so ultimately you reduce work by not having to maintain both local deployments and cloud deployments.
I have been trying to look at the code that is deployed in an AWS Lambda.
There is an existing Go function running in the Lambda.
However, I am not able to. The AWS docs say we can look at the code through the visual config view - where is this view? I cannot find it anywhere on the screen that I see.
Please help.
Or is it because we are using a Go server, so only the compiled binary is running in the Lambda and hence we are not able to see the code?
Inline code editing is supported only for interpreted languages (JS, for example) and not for compiled languages.
Besides the Lambda limits quoted below, it seems the console Lambda editor does not support Go.
However, the documentation suggests using CodeStar:
You can also get started with AWS Lambda Go support through AWS CodeStar. AWS CodeStar lets you quickly launch development projects that include a sample application, source control and release automation. With this announcement, AWS CodeStar introduced new project templates for Go running on AWS Lambda. Select one of the CodeStar Go project templates to get started. CodeStar makes it easy to begin editing your Go project code in AWS Cloud9, an online IDE, with just a few clicks.
announcing-go-support-for-aws-lambda
Q: How do I create an AWS Lambda function using the Lambda console?
If you are using Node.js or Python, you can author the code for your function using code editor in the AWS Lambda console which lets you author and test your functions, and view the results of function executions in a robust, IDE-like environment
lambda-faqs
Deployment package size:
50 MB (zipped, for direct upload)
250 MB (unzipped, including layers)
3 MB (console editor)
lambda limits
lambda-go-how-to-create-deployment-package
Whether the AWS code editor displays the code depends on the code size. Since the size of your code/package is large, the AWS code editor can't display it.
But you can download the package from the AWS Lambda function using export (a scripted alternative is sketched after the steps below).
Follow the steps below:
Go to Lambda in the AWS console.
Select the Lambda function.
Click on Actions.
Select Export function.
You will get a few options; select Download deployment package.
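If you'd rather script this than click through the console, the same package can also be fetched programmatically: the GetFunction API (aws lambda get-function on the CLI) returns a presigned URL for the deployment package. Below is a minimal sketch using the AWS SDK for .NET (AWSSDK.Lambda package); the function name and output file are placeholders:

using System.IO;
using System.Net.Http;
using System.Threading.Tasks;
using Amazon.Lambda;
using Amazon.Lambda.Model;

public static class LambdaPackageDownloader
{
    // Downloads a function's deployment package via the presigned URL
    // returned by GetFunction. "my-go-function" is a placeholder name.
    public static async Task DownloadAsync()
    {
        using (var lambda = new AmazonLambdaClient())
        using (var http = new HttpClient())
        {
            var response = await lambda.GetFunctionAsync(new GetFunctionRequest
            {
                FunctionName = "my-go-function"
            });

            // Code.Location is a short-lived presigned S3 URL for the zip.
            var bytes = await http.GetByteArrayAsync(response.Code.Location);
            File.WriteAllBytes("my-go-function.zip", bytes);
        }
    }
}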
I'm new to AWS but have already tried to compose and deploy a simple .NET Core 2.0 application.
I have a .NET 4.6 application which uses an external C++ DLL. The latter has its own huge number of dependencies - over 300 MB of other DLLs. So I am trying to deploy this stuff on AWS using Lambda.
At first I created a simple AWS Lambda project and tried to code the logic in the following method:
public async Task<string> FunctionHandler(S3Event evnt, ILambdaContext context) { ... }
But during deployment I got an error - it allows deploying only ~65 MB of content with Lambda.
Later I created an AWS Serverless Application - it was much better because of the possibility of using Web API (which would be useful for me in the future). I started to create the logic in the public class LambdaEntryPoint : Amazon.Lambda.AspNetCoreServer.APIGatewayProxyFunction, adding the handler function:
public async Task<string> FunctionHandlerAsync(JObject param, ILambdaContext context) { ... }
The first trouble was the JObject - it was awful to parse it to get the bucket and object key. And there was still the deployment content limit - already ~250 MB. The fix was done - I put all the dependencies and the .exe file into a .zip and unzipped it to the \tmp folder during the LambdaEntryPoint initialization (a sketch of that step is at the end of this post). That worked without issues. But later I tried to launch the .exe file using the following code:
var process = new System.Diagnostics.Process();
process.StartInfo.FileName = "Photolemur Console.exe";
process.StartInfo.WorkingDirectory = @"\tmp";
process.StartInfo.Arguments = $"\"{inboxPath}\" \"{outboxPath}\"";
process.Start();
process.WaitForExit();
And I got a FileNotFoundException. So my questions are below:
Is it possible to do such a thing using AWS Lambda functions? I know that I could spin up EC2 with a Windows installation, but is that the right way? What do you think about .NET on AWS in general? Should I continue my research, or would it be easier to explore Microsoft Azure Functions?
PS: Are there any nice solutions for doing such work using just my C++ libraries on AWS?
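For reference, the unzip step mentioned above was roughly like the following minimal sketch (dependencies.zip is a placeholder name for the bundled archive, and the target folder is just an example):

using System;
using System.IO;
using System.IO.Compression;

public static class DependencyUnpacker
{
    // Unpacks the archive shipped inside the deployment package into the
    // Lambda temp folder during startup. Runs only once per container;
    // later invocations reuse the already extracted files.
    public static void EnsureUnpacked()
    {
        var source = Path.Combine(AppContext.BaseDirectory, "dependencies.zip");
        var target = "/tmp/photolemur";

        if (!Directory.Exists(target))
        {
            ZipFile.ExtractToDirectory(source, target);
        }
    }
}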
This is IMO not worth the effort. Lambda functions execute in Linux containers, therefore running a Windows .exe would require Wine, which is possible but painful; it further increases the size of the Lambda application, and you could quickly run out of space in /tmp (512 MB).
Also, the size limit for a Lambda application (50 MB) exists for a reason: it allows the AWS infrastructure to quickly scale the number of instances up and down as needed. Circumventing this limitation ruins this advantage of AWS Lambda.
I do not know the scaling/latency/usage needs of your application, but using regular EC2 instance(s) seems to me to be a better fit. Instances with performance comparable to AWS Lambda are quite cheap, so the only drawback is that you have to manage them yourself.
So I'm currently using AWS serverless to make a createObject endpoint, which takes the values from the HTTP body and uses the key-values to store an item in DynamoDB. I am doing this via AWS Lambda. We recently started using Realm (Realm.io) and we see that they have a JavaScript library for Node. Is it possible to invoke a Realm function to store the same object via AWS Lambda?
As I understand it, the Node SDK for Realm is only for the Professional and Enterprise editions of the Realm Object Server (ROS). The ROS can only be deployed on your own Linux instance.
Please see this for more details: https://realm.io/docs/realm-object-server/pe-ee/
Exactly as Clifton Labrum says, the Professional and Enterprise editions of the Realm Mobile Platform have a Node.js SDK to be used on the server to listen for changes in Realms.
At every change in a Realm, you will get an event, which you can then process as you like. For instance, you can store objects in DynamoDB. You can also just leave the objects in Realms.