I am exploring AWS, and I'd like to implement in Java EE an EC2 app like the Online Photo Processing Service example in Getting Started with Amazon EC2 and Amazon SQS (PDF). It has a web-based client that submits jobs asynchronously to a client-facing web server app that then queues jobs for one or more worker servers to pick up, run, then post back to a results queue. The web server app monitors the results queue and pushes them back to the client. The block diagram is here.
How would you implement an app like this using Java EE, i.e., what technologies would you use for the servers in the diagram? We're using AWS because our research algorithms will require some heavy computation, so we want it to scale. I am comfortable with AWS basics (e.g., most things you can do in their management console - launch instances, etc), I know Java, I understand the Java AWS APIs, but I have little experience on the server side.
There are many ways to solve this; go with the simplest one for you. Personally, I would build a simple Java EE 6 web application (based on Weld) with the Amazon SQS dependency. That web application would send messages to an SQS queue. A second application, possibly built on stateless EJBs and again with the SQS dependency, would read the incoming messages and process them. You can also expose stateless EJBs as web services to process data synchronously, and tune the EJB pool size on each server instance to match the processing load you need.
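For a concrete picture, here is a minimal sketch of the two SQS touch points described above, using the AWS SDK for Java (v1); the queue URL, message format, and class names are assumptions rather than anything from the original design.

```java
import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;
import com.amazonaws.services.sqs.model.Message;
import com.amazonaws.services.sqs.model.ReceiveMessageRequest;

public class JobQueueSketch {

    private static final AmazonSQS SQS = AmazonSQSClientBuilder.defaultClient();
    // Hypothetical queue URL -- substitute the URL of your own SQS queue.
    private static final String JOB_QUEUE_URL =
            "https://sqs.us-east-1.amazonaws.com/123456789012/photo-jobs";

    /** Called from the client-facing web app (servlet, CDI bean, or EJB) to enqueue a job. */
    public static void submitJob(String jobPayload) {
        SQS.sendMessage(JOB_QUEUE_URL, jobPayload);
    }

    /** Simple polling loop for the worker instance(s). */
    public static void workerLoop() {
        while (true) {
            ReceiveMessageRequest request = new ReceiveMessageRequest(JOB_QUEUE_URL)
                    .withMaxNumberOfMessages(10)
                    .withWaitTimeSeconds(20); // long polling cuts down on empty receives
            for (Message message : SQS.receiveMessage(request).getMessages()) {
                process(message.getBody());   // the compute-heavy algorithm goes here
                SQS.deleteMessage(JOB_QUEUE_URL, message.getReceiptHandle());
            }
        }
    }

    private static void process(String payload) {
        // placeholder for the actual processing step
    }
}
```

The web tier calls submitJob(...) when a request arrives; each worker instance runs workerLoop(), and you scale by adding worker instances (or EJB pool slots) as the load grows.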
Most of the functionality in J2EE is way over the top for the majority of tasks. Start by trying to implement this with basic servlets. Keep the code in them as stateless as possible to help with scaling. Only when servlets have some architectural flaw that prevents you from completing the task would I move on to something more complex.
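As a rough illustration of that advice, here is a minimal stateless servlet that only accepts a job and hands it to the queue; the URL pattern and parameter name are hypothetical, and it reuses the submitJob helper from the sketch above.

```java
import java.io.IOException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// A stateless servlet: it only validates the request and hands the job off to
// the queue, so any number of identical instances can sit behind a load balancer.
@WebServlet("/jobs")
public class JobSubmissionServlet extends HttpServlet {

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        String payload = req.getParameter("job");   // hypothetical parameter name
        if (payload == null || payload.isEmpty()) {
            resp.sendError(HttpServletResponse.SC_BAD_REQUEST, "missing job payload");
            return;
        }
        JobQueueSketch.submitJob(payload);          // enqueue via SQS (see sketch above)
        resp.setStatus(HttpServletResponse.SC_ACCEPTED);
    }
}
```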
I am a bit new to AWS Lambda, and I am not sure whether it is a deployment framework where we can host an ASP.NET Core MVC web application, whether it can call a .NET Core console application, or whether it is a hosting service. For example, we currently have the following two main components:
A .NET Core console application that runs on a schedule via Windows Task Scheduler.
An ASP.NET Core MVC web application hosted inside IIS that exposes some API endpoints.
So if we want to use AWS Lambda, which of the above two components would it replace? For the first, would it replace the console application or Windows Task Scheduler? For the second, would it replace the ASP.NET Core MVC web application or IIS? Or does AWS Lambda not replace anything, but rather integrate with the above components?
Thanks in advance for any help, and sorry if I do not have much knowledge of AWS Lambda.
In short, AWS Lambda and Azure Functions are a distinct application model for cloud-based solutions, not traditional console or web apps. Don't try to map them one-to-one onto each other, as they differ significantly.
Such cloud applications won't take the place of traditional app models any time soon, as not everyone is moving to the cloud.
For example, if you want scheduled jobs on AWS, then Lambda and CloudWatch can work together.
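As an illustration of that pairing (sketched here in Java, since the Lambda programming model has the same shape across runtimes, including .NET): a CloudWatch Events/EventBridge schedule rule invokes a handler like the one below, playing the role that Windows Task Scheduler plays today. The class name and batch logic are placeholders.

```java
import java.util.Map;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

// A scheduled job as a Lambda function: a CloudWatch Events (EventBridge) rule
// with a cron/rate expression invokes this handler instead of Windows Task
// Scheduler launching a console application.
public class NightlyJobHandler implements RequestHandler<Map<String, Object>, String> {

    @Override
    public String handleRequest(Map<String, Object> scheduledEvent, Context context) {
        context.getLogger().log("Scheduled run triggered: " + scheduledEvent);
        runBatchWork(); // the logic that currently lives in the console app
        return "done";
    }

    private void runBatchWork() {
        // placeholder for the actual batch logic
    }
}
```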
However, to host a REST API on AWS you do have multiple options, such as
Serverless API with Lambda
Traditional web app on Elastic Beanstalk
Edit: I was mistakenly under the impression that C# needed a custom Docker image. This is not the case; there are provided runtime images for C# in Lambda.
Theoretically, you could make Lambdas your controllers in an MVC application; in fact, it's actually a good design decision to do so. However, given ASP.NET's nature, this can be a little tricky the first time. The controllers would have to sit behind API endpoints, but with a little configuration work you can distribute them across Lambdas.
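For the general shape of a "controller action as a Lambda behind an API endpoint" (again sketched in Java rather than C#; the route, handler name, and path parameter are hypothetical):

```java
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.APIGatewayProxyRequestEvent;
import com.amazonaws.services.lambda.runtime.events.APIGatewayProxyResponseEvent;

// One "controller action" as a Lambda function behind an API Gateway route,
// e.g. GET /orders/{id} mapped to this handler.
public class GetOrderHandler
        implements RequestHandler<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent> {

    @Override
    public APIGatewayProxyResponseEvent handleRequest(APIGatewayProxyRequestEvent request,
                                                      Context context) {
        String orderId = request.getPathParameters() == null
                ? "unknown"
                : request.getPathParameters().get("id"); // hypothetical {id} path parameter
        // A fixed body stands in for the real lookup.
        return new APIGatewayProxyResponseEvent()
                .withStatusCode(200)
                .withBody("{\"orderId\": \"" + orderId + "\"}");
    }
}
```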
The scheduled tasks are a perfect fit for Lambda as well, but again, not C#-native.
There are no default Windows images in AWS. Honestly, if you are going to build a C# ASP.NET app you may want to consider Azure instead, as it natively supports C# and ASP.NET, whereas on AWS you have to piece the options together yourself.
Sorry, I'm new to web servers. I want to deploy a cloud server for user data:
Users can log in via the web, with a verification code sent to the user's phone.
A user can manipulate their data (add/modify/remove) when logged in.
Android/iPhone clients can manipulate user data when logged in.
The server should have a database for storage, SQLite or something else.
It would be good to use an Amazon/Ali-cloud cloud service, provided it can speed up my deployment. I'm not sure whether I need to dive into things such as H5, PHP/JSP, Node.js, or others. Can you provide a guide for me, a web link or a book?
Also, what's the most popular programming interface between an Android/iOS app and a cloud server? HTTP POST/GET or some other wrapper?
Surely you can speed up your deployment using Amazon Web Services. This is my recommendation:
For the web server:
Amazon EC2: launch an instance on which you can install Apache/Nginx. You will also want an RDS instance running alongside your server, which lowers the CPU/memory load on the web server but adds to the cost.
For the database, you have several options here:
Amazon RDS: launch an instance that hosts your database (MySQL, ...). It will give you a database name, hostname, users, and so on, which you can use to connect from your web server on EC2. Your Android/iOS application can use the same RDS information for its database connection.
Amazon DynamoDB: fast and flexible NoSQL (do you want a traditional database or NoSQL?): https://aws.amazon.com/amplify/
For mobile/website access control:
AWS Cognito: great for user accounts, designed for a real-time data model: https://aws.amazon.com/cognito/?nc1=f_ls
If you want serverless GET/PUT APIs in front of your web server for simplicity:
AWS Lambda: https://aws.amazon.com/lambda/?nc1=f_ls
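To make the RDS point above concrete, here is a minimal sketch of connecting to an RDS MySQL instance over JDBC; the endpoint, database name, credentials, and table are placeholders, and the MySQL Connector/J driver is assumed to be on the classpath.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class RdsConnectionSketch {

    public static void main(String[] args) throws Exception {
        // Endpoint, database name, and credentials are placeholders -- copy the
        // real values from the RDS console for your instance.
        String url = "jdbc:mysql://mydb.abc123xyz.us-east-1.rds.amazonaws.com:3306/userdata";
        try (Connection conn = DriverManager.getConnection(url, "appuser", "app-password");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM users")) {
            if (rs.next()) {
                System.out.println("users: " + rs.getLong(1));
            }
        }
    }
}
```

Your mobile clients would not talk to RDS directly; they would call your web server (typically over HTTP GET/POST/JSON), and the server would use a connection like this one.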
Taking into account that you are just starting with your application, I would suggest going with a serverless architecture, with AWS Lambda running your business logic.
Key benefits:
No server management = spend time on building your application vs on maintaining infrastructure
Flexible scaling = scale based on what you really need
Pay for value = don't pay for resources that you don't need
Automated high availability = serverless provides built-in availability and fault tolerance
To learn more on serverless, you may want to check Building Serverless Web Applications - 2017 AWS Online Tech Talks.
Now, when it comes to going deeper, I would suggest checking the online training available from acloud.guru, Cloud Academy, Udemy, or Linux Academy for serverless, and also for the development language you want to use (Node.js is often used for such scenarios).
I've been doing some server architecture design over the past few weeks and have run into an issue that I need outside help with. I'm creating a game server for a massively multiplayer game, so I need to receive constant updates on entity locations, then broadcast them out to relevant clients.
I've written servers with scale in mind before, but they were stateless, so it wasn't all that difficult. If I'm deploying this server on a cloud platform like Google Cloud or AWS, is it better to simply scale up the instance the server is running on, or should I go with the reverse-proxy approach and deploy the server across multiple instances?
Sorry if this is a vague question. I can provide more details if necessary.
You may want to start here -
https://aws.amazon.com/gaming/
https://aws.amazon.com/gaming/game-server/
You should also consider messaging solutions such as SNS and SQS. If the app can receive push notifications, then SNS might be your best option.
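For a sense of what the SNS side might look like from the server, here is a minimal publish sketch using the AWS SDK for Java (v1); the topic ARN and message format are assumptions, not something prescribed by the services above.

```java
import com.amazonaws.services.sns.AmazonSNS;
import com.amazonaws.services.sns.AmazonSNSClientBuilder;

public class EntityUpdatePublisher {

    private static final AmazonSNS SNS = AmazonSNSClientBuilder.defaultClient();
    // Hypothetical topic ARN -- subscribers (mobile push endpoints, queues,
    // HTTP endpoints) receive whatever is published here.
    private static final String TOPIC_ARN =
            "arn:aws:sns:us-east-1:123456789012:entity-updates";

    /** Fan an entity-position update out to every subscriber of the topic. */
    public static void publishUpdate(String entityId, double x, double y) {
        String message = String.format("{\"entity\":\"%s\",\"x\":%.2f,\"y\":%.2f}", entityId, x, y);
        SNS.publish(TOPIC_ARN, message);
    }
}
```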
I have to deploy a RESTful Web API 2 project to Azure and am expecting a lot of traffic. I am not sure which Azure service to select for the best performance.
In a Web App, Web API services run on top of the full IIS pipeline for HTTP handling, whereas a worker role needs its own HTTP handling implemented via OWIN. Any experiences?
I would highly recommend you use the Azure App Service (either Web App or API App) in lieu of Azure 'Cloud Services'. The benefits are bountiful, the drawbacks are scarce.
A few notable benefits the App Service brings are auto-scaling, WebJobs (think lightweight worker roles), a simpler and faster deployment mechanism, and seamless integration with Application Insights.
About the only thing Cloud Services does better is scale (both vertical and horizontal). But for most web/Web API scenarios these advantages are very much diminished by the new pricing tiers available for the App Service.
The App Service Environment (a newer feature of the App Service) lets you scale up to an effectively unlimited number of instances (the default is 50, but you can call Microsoft to increase the limit) and use beefier (yes, that is a technical term) instance sizes.
Before you go the route of an App Service Environment, I would recommend you evaluate the geo-distribution of your user population. Each App Service Plan can scale up to 10 or 25 instances for the Standard and Premium pricing tiers, respectively. You could place an App Service Plan in a few different data centers (US West, US East, US Central, or overseas depending on the scenario), front them with Traffic Manager, and now you have several App Service Plans, each with a max of 10 or 25 instances depending on the pricing tier. That can add up to a lot of metal and has the dual benefit of improving the end-user experience and increasing your system's availability and disaster recovery.
These days I would only recommend Cloud Services for really intense batch processing or where there are architectural limitations of your existing application that require the ability to have greater control over the underlying OS of the instance (Cloud Services support startup tasks that let you do all kinds of crazy things when a new instance is spawned that you just can't do with the App Service).
I would recommend that you use Azure API Apps (https://azure.microsoft.com/en-us/documentation/articles/app-service-api-apps-why-best-platform/), as that is a service intended to host Web API 2 services. You get load balancing, auto-scaling, monitoring, etc. when you use API Apps, so you can focus on building something that fulfills the business requirements.
You should always avoid having to do any plumbing on your own, as that can always come back to bite you later. API Apps is the right choice in this case!
I am designing my first Amazon AWS project and I could use some help with the queue processing.
This service accepts processing jobs, either via an ASP.NET Web API service or a GUI web site (which just calls the API). Each job has one or more files associated with it and some rules about the type of job. I want to queue each job as it comes in, presumably using AWS SQS. The jobs will then be processed by a "worker", which is a Python script with a .NET wrapper. The Python script is an existing batch processor that cannot be altered/customized for AWS, hence the .NET wrapper that manages the AWS portions and passes the correct parameters to Python.
The issue is that we will not have a huge number of jobs, but each job is somewhat compute-intensive. One of the reasons to go to AWS was to minimize infrastructure costs. I plan on having the frontend web site (Web API + ASP.NET MVC 4 site) run on Elastic Beanstalk. But I would prefer not to have a dedicated worker machine always online polling for jobs, since these workers need to be a somewhat "beefier" instance type (for processing) and it would cost us a lot for them to mostly sit doing nothing.
Is there a way to run only the web portion on Beanstalk and then have the worker process spin up only when there are items in the queue? I realize I could have a micro "controller" instance always online polling and have it control the compute spin-up, but even that seems like it shouldn't be needed. Can EC2 instances be started based on a non-zero SQS queue size? So basically: the Web API adds a job to the queue, something watches the queue and sees it's non-zero, this triggers the EC2 worker to start, it spins up and polls the queue on startup, it processes until the queue is empty, and then something triggers it to shut down.
You can use Auto Scaling in conjunction with SQS to dynamically start and stop EC2 instances. There is an AWS blog post that describes the architecture you are thinking of.
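As a sketch of the worker side of that architecture (the Auto Scaling policy driven by queue depth handles starting and stopping the instance), here is a drain-and-exit loop using the AWS SDK for Java (v1); the queue URL and the call out to the Python batch processor are placeholders.

```java
import java.util.List;
import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;
import com.amazonaws.services.sqs.model.Message;
import com.amazonaws.services.sqs.model.ReceiveMessageRequest;

public class BatchWorker {

    // Hypothetical queue URL -- the queue your Web API writes jobs into.
    private static final String QUEUE_URL =
            "https://sqs.us-east-1.amazonaws.com/123456789012/processing-jobs";

    public static void main(String[] args) {
        AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();
        int emptyReceives = 0;

        while (emptyReceives < 3) {               // a few empty long polls => queue drained
            ReceiveMessageRequest request = new ReceiveMessageRequest(QUEUE_URL)
                    .withMaxNumberOfMessages(1)
                    .withWaitTimeSeconds(20);     // long polling
            List<Message> messages = sqs.receiveMessage(request).getMessages();
            if (messages.isEmpty()) {
                emptyReceives++;
                continue;
            }
            emptyReceives = 0;
            for (Message message : messages) {
                runPythonJob(message.getBody());  // invoke the existing Python batch processor
                sqs.deleteMessage(QUEUE_URL, message.getReceiptHandle());
            }
        }
        // Exiting lets the scale-in side of the Auto Scaling policy (or a
        // shutdown-on-idle script) bring the beefy worker instance back down.
    }

    private static void runPythonJob(String jobPayload) {
        // placeholder: shell out to the Python script with the job's parameters
    }
}
```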