I'm new to AWS, but I've already tried to build and deploy a simple .NET Core 2.0 application.
I have a .NET 4.6 application which uses an external C++ DLL. That DLL has a huge number of its own dependencies - over 300 MB of other DLLs. So I'm trying to deploy all of this on AWS using Lambda.
At first I created a simple AWS Lambda project and tried to implement the logic in the following method:
public async Task<string> FunctionHandler(S3Event evnt, ILambdaContext context) { ... }
But during deployment I got an error - Lambda only allows deploying ~65 MB of content.
Later I created an AWS Serverless Application - it was much better because of the possibility of using Web API (which would be useful for me in the future). I started implementing the logic in a public class LambdaEntryPoint : Amazon.Lambda.AspNetCoreServer.APIGatewayProxyFunction, adding the handler function:
public async Task<string> FunctionHandlerAsync(JObject param, ILambdaContext context) { ... }
The first trouble was the JObject - it was awful to parse in order to get the bucket and object key. And there was still a limit on the deployment content - now ~250 MB. My fix was to put all the dependencies and the .exe file into a .zip and unzip it into the \tmp folder during LambdaEntryPoint initialization. That worked without issues. But then I tried to launch the .exe using the following code:
var process = new System.Diagnostics.Process();
process.StartInfo.FileName = "Photolemur Console.exe";
process.StartInfo.WorkingDirectory = @"\tmp";
process.StartInfo.Arguments = $"\"{inboxPath}\" \"{outboxPath}\"";
process.Start();
process.WaitForExit();
And I got a FileNotFoundException. So my questions are below:
Is it possible to do such a thing using AWS Lambda functions? I know I could spin up an EC2 instance with a Windows installation, but is that the right way? What do you think about .NET on AWS in general? Should I continue my research, or would it be easier to explore Microsoft Azure Functions?
PS: Is there some nice solution for doing this kind of work using just my C++ libraries on AWS?
This is IMO not worth the effort. Lambda functions execute in Linux containers, so running a Windows .exe would require Wine, which is possible but painful; it further increases the size of the Lambda application, and you could quickly run out of space in /tmp (512 MB).
Also, the size limit for a Lambda application (50 MB) exists for a reason: it allows the AWS infrastructure to quickly scale the number of instances up and down as needed. Circumventing this limitation defeats that advantage of AWS Lambda.
I do not know the scaling/latency/usage needs of your application, but using regular EC2 instance(s) seems to me to be a better fit. Instances with performance comparable to AWS Lambda are quite cheap, so the only drawback is that you have to manage them yourself.
Related
I am looking for a language/framework or a method by which I can build API/web application code such that it can run on serverless compute like AWS Lambda, and the same code can run on a dedicated compute system like Lightsail or EC2.
First I thought of using Docker to do this, but the AWS Lambda entry point is a specific function signature that is very different from Spring controllers. Is there a solution available currently?
So basically, when I run it in Lambda it will have the cold start issue; later, when the app is ready or gets popular, I would like to move it to an EC2 instance for better performance and higher traffic load.
I want to start right in this situation so that later it's easy to port and resolve the performance issues.
I'd say no, this is not easily possible.
When you are building an API that you want to run on Lambdas, you will most likely be using API Gateway, which takes care of routing to your different Lambda functions (best practice). So the moment you are working on an API like this, migrating to EC2 would be a nightmare, as you would need to rebuild the whole application as more of a monolith that could run on EC2.
I would honestly commit to either running it on EC2/containers or running it on Lambda. If cold start is your main issue with Lambdas, you might want to look into Lambda SnapStart for Java, or use another language like TypeScript/Python.
After trying the right keywords in Google, I finally found what I was looking for. Check out this blog post and code library shared by AWS, which helps you convert the request and response into the HTTP request/response format the framework expects:
Running APIs Written in Java on AWS Lambda: https://aws.amazon.com/blogs/opensource/java-apis-aws-lambda/
Repo Code: https://github.com/awslabs/aws-serverless-java-container
Thanks, Ricardo, for your response - I will definitely check out Lambda SnapStart and try it as well. I have not tested this completely yet, but it looks promising to some extent.
My (Python) code runs inside a Docker container.
The container is deployed on AWS EC2 for our production and testing purposes, but sometimes on our local machines or other cloud vendors for development and CI/CD purposes.
For some functionality, I want my Python code to be able to distinguish between an EC2 deployment and a non-EC2 one. Is this possible?
I found this answer, which uses the EC2 instance metadata endpoint, but I'm wondering:
a) Would this also work from within a Docker container?
b) Isn't there a more elegant solution? Issuing an HTTP request and waiting for it seems a bit too much.
(I'm aware that a simple solution is probably to add some proprietary environment variable or flag; I'm trying to find a more native way to check this.)
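For reference, the metadata-endpoint check mentioned above might look roughly like the sketch below in Python (the timeout value is arbitrary). It generally also works from inside a Docker container, because it is just an HTTP call to the link-local address 169.254.169.254, though with IMDSv2 and the default hop limit of 1 the call from a container can be blocked.

import urllib.request
import urllib.error

def running_on_ec2(timeout: float = 0.2) -> bool:
    """Return True if the EC2 instance metadata endpoint answers."""
    try:
        # On EC2 this link-local address responds almost instantly;
        # anywhere else the connection attempt simply times out.
        with urllib.request.urlopen(
            "http://169.254.169.254/latest/meta-data/", timeout=timeout
        ) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False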
I recommend going with a custom environment variable. This way you will be able to easily reproduce the required behaviour outside of AWS (on your workstation or with another cloud provider).
Using curl or checking for the presence of /etc/cloud would make your application's behaviour dependent on third-party services/tools. Besides the added logic complexity (you'd have to handle possible curl errors, like invalid response codes), that can lead to bugs you surely don't want to meet.
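A minimal sketch of this approach, assuming a made-up variable name DEPLOY_ENV that you set wherever the container is started (EC2 user data, an ECS task definition, or a plain docker run -e DEPLOY_ENV=ec2 ...):

import os

def is_ec2_deployment() -> bool:
    # DEPLOY_ENV is a custom, made-up variable; anything not explicitly
    # set to "ec2" is treated as a non-EC2 environment.
    return os.environ.get("DEPLOY_ENV", "").lower() == "ec2"

if is_ec2_deployment():
    print("running on EC2")
else:
    print("running locally, in CI, or on another cloud")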
I want to make a bot that makes other bots on the Telegram platform. I want to use AWS infrastructure; it looks like their Lambda functions are a perfect fit, since you pay for them only when they are active. In my concept, each bot equals one Lambda function, and they all share the same codebase.
At the start, I thought about creating each new Lambda function programmatically, but I think this will bring me problems later, like the need to attach many services programmatically via the AWS SDK: API Gateway, DynamoDB. But the main problem is: how will I update the codebase for these 1000+ functions later? I think a bash script is a bad idea here.
So I moved forward and found SAM (AWS Serverless Application Model) and CloudFormation, which I guess should help me. But I can't understand the concept. I can make a stack with all the required resources, but how will I make new bots from this one stack? Or should I build a template and create new stacks for each new bot programmatically via the AWS SDK from this template?
Next, how do I update them later? For example, I want to update all bots that have version 1.1 to version 1.2. How will I replace them? Should I make a new stack, or can I update the older ones? I don't see any options for that in the CloudFormation UI or any related methods in the AWS SDK.
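For what it's worth, the "template plus one stack per bot" idea described above might look roughly like this with boto3 (a sketch only - the template file name, stack naming scheme, and BotName/BotToken parameters are hypothetical placeholders for whatever your SAM template defines):

import boto3

cfn = boto3.client("cloudformation")

# Shared template used by every bot stack (hypothetical file name).
with open("bot-template.yaml") as f:
    TEMPLATE_BODY = f.read()

def create_bot_stack(bot_name: str, bot_token: str) -> None:
    """Create one CloudFormation stack for a single bot."""
    cfn.create_stack(
        StackName=f"telegram-bot-{bot_name}",
        TemplateBody=TEMPLATE_BODY,
        Parameters=[
            {"ParameterKey": "BotName", "ParameterValue": bot_name},
            {"ParameterKey": "BotToken", "ParameterValue": bot_token},
        ],
        # CAPABILITY_IAM if the template creates roles,
        # CAPABILITY_AUTO_EXPAND if it uses the SAM transform.
        Capabilities=["CAPABILITY_IAM", "CAPABILITY_AUTO_EXPAND"],
    )

def update_bot_stack(bot_name: str) -> None:
    """Roll an existing bot's stack onto the current template."""
    cfn.update_stack(
        StackName=f"telegram-bot-{bot_name}",
        TemplateBody=TEMPLATE_BODY,
        Parameters=[
            {"ParameterKey": "BotName", "UsePreviousValue": True},
            {"ParameterKey": "BotToken", "UsePreviousValue": True},
        ],
        Capabilities=["CAPABILITY_IAM", "CAPABILITY_AUTO_EXPAND"],
    )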
Thanks
But the main problem is: how will I update the codebase for these 1000+ functions later?
You don't. You use a Lambda alias. This allows you to fully decouple your Lambda versions from your clients. It works because your clients' code (or API Gateway) references an alias of your function. The alias is fixed and does not change.
However, an alias is like a pointer - it can point to different versions of your Lambda function. Therefore, when you publish a new Lambda version, you just point the alias to it. It's fully transparent to your clients, and the alias they use does not require any change.
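In code, "publish a new version and repoint the alias" might look roughly like this with boto3 (the function name and the live alias are made-up names for the example):

import boto3

lam = boto3.client("lambda")

FUNCTION_NAME = "my-bot-function"   # made-up function name
ALIAS_NAME = "live"                 # made-up alias referenced by clients / API Gateway

# 1. Publish whatever is currently deployed to $LATEST as a new immutable version.
new_version = lam.publish_version(FunctionName=FUNCTION_NAME)["Version"]

# 2. Point the alias at that version; clients invoking "my-bot-function:live"
#    pick it up without any change on their side.
lam.update_alias(
    FunctionName=FUNCTION_NAME,
    Name=ALIAS_NAME,
    FunctionVersion=new_version,
)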
I agree with @Marcin. It would also be worth checking out the Serverless Framework. It seems like you are still experimenting, so most likely you are deploying using bash scripts with AWS SDK/SAM commands. This is fine, but once you start getting the gist of what your architecture looks like, I think you will appreciate what Serverless can offer. You can deploy/tear down CloudFormation stacks in a matter of seconds. You can also use serverless-offline to have a local build of your AWS Lambda architecture on your local machine.
All this has saved me hours of grunt work.
Long-time Stack Overflow lurker and first-time poster.
I've started a new project using AWS Lambdas and have found the learning curve particularly steep, coming from a background of developing desktop applications.
When developing desktop applications, it's easy to create a test environment locally. I know it's possible to test Lambda functions locally, and I've been able to do this for simple cases.
The Lambda functions I'm using interact a lot with other AWS services (S3, Aurora, etc.). Also, the final solution will include around 15 Lambda functions linked via a Step Function.
I want to know if it's possible to create a test environment, separate from the live production environment, for the entire Step Function. This would allow me to perform system tests before deploying to production.
I've looked into AWS CodePipeline as a possible solution, but I'm not sure if it would allow me to create a separate test environment before deploying to production.
Any help would be greatly appreciated.
Thanks!
I've got a piece of code that I need to make available over the 'Net. It's a perfect fit for an AWS Lambda with an HTTP API on top - a stateless, side-effect-free, rather CPU-intensive function, blob in, blob out. It's written in C#/.NET, but it's not pure .NET: it makes use of the UWP API and therefore requires Windows Server 2016.
AWS Lambdas only run on Linux hosts, even C# ones. Is there any way to deploy this piece in the Amazon cloud in a serverless manner - maybe something other than a Lambda? I know I can go with an EC2 VM, but this is the very kind of thing serverless architecture was invented for.
Lambda is the only option for serverless computing on AWS, and Lambda functions run only on Linux machines.
If you need to run serverless functions on a Windows machine, try Azure Functions. That's the Lambda equivalent in the Microsoft cloud. I'm not sure whether it runs on a Windows Server 2016 machine - I couldn't find any reference to the underlying platform - but I would expect that, as a brand-new service, they are using their own cutting-edge tech.
To confirm if the platform is what you need, try this function:
using System.Linq;
using System.Management;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
{
    // Get the OS friendly name via WMI
    // http://stackoverflow.com/questions/577634/how-to-get-the-friendly-os-version-name
    var caption = (from x in new ManagementObjectSearcher("SELECT Caption FROM Win32_OperatingSystem").Get().Cast<ManagementObject>()
                   select x.GetPropertyValue("Caption")).FirstOrDefault();
    string name = caption != null ? caption.ToString() : "Unknown";

    // Return the OS name as the function response
    return req.CreateResponse(HttpStatusCode.OK, name);
}
I think you can achieve this via a combination of the CodeDeploy service and AWS CodePipeline.
Refer to this article:
http://docs.aws.amazon.com/codedeploy/latest/userguide/getting-started-windows.html
to learn how to deploy code via CodeDeploy. Then see this article:
http://docs.aws.amazon.com/codepipeline/latest/userguide/getting-started-4.html
to learn how you can configure AWS CodePipeline to call CodeDeploy and then execute your batch job on the created Windows machine (note: you will probably want to use S3 instead of GitHub, which is possible with CodePipeline).
I would consider bootstrapping this whole configuration via a script using the AWS CLI - that way you can easily clean up your resources like this:
aws codepipeline delete-pipeline --name "MyJob"
Of course, you can configure the pipeline via the AWS web console and leave the pipeline configured to run your code on a regular basis.