I have a web service that allows a client to provision information to the server.
Although each provisioning method of the service allows provisioning multiple pieces of information at once, I don't think it is appropriate for provisioning a huge amount of information.
Therefore, I'm looking for an efficient way to implement bulk provisioning through web services.
Assume no particular application protocol is used. I would therefore like to hear whether one technology is more appropriate than another for this kind of operation (e.g. whether REST is more appropriate than SOAP over HTTP).
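To make the question concrete, here is a hedged sketch of one common approach: splitting a large provisioning job into bounded batches on the client side. The `/items/batch` resource and payload shape are made up for illustration, not taken from any real service.

```python
import json
from urllib import request

def chunk(items, size):
    """Split a list into batches of at most `size` elements."""
    return [items[i:i + size] for i in range(0, len(items), size)]

def provision_bulk(base_url, items, batch_size=500):
    """POST each batch to a (hypothetical) /items/batch resource.

    Chunking keeps individual requests small enough to avoid
    timeouts and oversized payloads when provisioning many items.
    """
    for batch in chunk(items, batch_size):
        body = json.dumps({"items": batch}).encode("utf-8")
        req = request.Request(
            base_url + "/items/batch",
            data=body,
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        with request.urlopen(req) as resp:
            resp.read()  # in real code: check the status and retry failed batches
```

The batch size would be tuned against the server's payload limits and timeout budget.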
Thanks,
Mickael
What is the AWS recommendation for hosting multiple microservices? Can we host all of them on the same APIG/Lambda? With this approach, I see that when we want to update the API of one service, we end up deploying the APIs of all services, which means our regression testing will cover access to all services. On the other hand, creating a separate APIG/Lambda per service means we end up with multiple resources (and multiple accounts) to manage, which can become an operational burden later on.
A microservice is developed autonomously and should be built, tested, deployed, scaled, etc. independently. There are many ways to do it, and many ways you could split your product into multiple services. One example pattern is having the API Gateway as the front door to all services behind it, in which case it would be its own service.
A Lambda usually performs one task, and a service can be composed of multiple Lambdas. I can't see how multiple services could be executed from the same Lambda.
It can be a burden, especially if there is no proper tooling and process to manage all systems in an automated and scalable way. There are pros and cons to any architecture, but the complexity is definitely reduced for serverless applications, since all compute is managed by AWS.
AWS actually has a whitepaper that talks about microservices on AWS.
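As a toy illustration of the "API Gateway as front door" pattern, here is a sketch of path-based dispatch to per-service handlers. The service names and handler logic are invented; in practice API Gateway itself does this routing with one route/integration per service.

```python
# Hypothetical sketch: one front door dispatching, by path prefix,
# to per-service Lambda-style handlers. Names are made up.

def orders_handler(event):
    return {"statusCode": 200, "body": "orders service"}

def users_handler(event):
    return {"statusCode": 200, "body": "users service"}

ROUTES = {
    "/orders": orders_handler,
    "/users": users_handler,
}

def front_door(event):
    """Dispatch on the leading path segment, as API Gateway would
    with one route per service behind a shared gateway."""
    path = event.get("path", "")
    for prefix, handler in ROUTES.items():
        if path.startswith(prefix):
            return handler(event)
    return {"statusCode": 404, "body": "no such service"}
```

The point of the pattern is that each handler can then be deployed and versioned independently, while clients see one stable endpoint.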
I have to build a web application expecting a maximum of 10,000 concurrent users for 1 hour. The web server is NGINX.
The application is a simple landing page with an HTML5 player with streaming video from CDN WOWZA.
Can you suggest a suitable deployment on AWS?
A load balancer in front of 2 or more EC2 instances?
If so, which EC2 sizing do you recommend? Is it better to use Auto Scaling?
thanks
Thanks for your answer. The application is 2 PHP pages and the impact is minimal, because in the PHP code I write only 2 functions that check user/password and token.
The video is provided by the Wowza CDN because it is live streaming, not on-demand.
What tool or service do you suggest for stress testing the web server?
I have to build a web application expecting a maximum of 10,000 concurrent users for 1 hour.
That averages roughly 3 requests/s, which is not bad at all. Sizing is a complex topic, and without more details, constraints, testing, etc. you cannot get a reasonable answer. There are many options, and without more information it is not possible to say which one is best. You stated that you use NGINX, but not what it is doing (serving static sites, PHP, CGI, proxying to something else, etc.).
The application is a simple landing page with an HTML5 player with streaming video from CDN WOWZA.
I will just lay down a few common options:
Let's assume it is a single static web page (another assumption) referring to an external resource (the video). Then the simplest and most scalable solution would be an S3 bucket hosting it behind CloudFront (a CDN).
If you need some simple quick logic, a Lambda behind a load balancer could be good enough.
And you can of course host your solution on full compute (EC2, Beanstalk, ECS, Fargate, etc.) with different scaling options. But you will have to test out what your feasible scaling parameters and bottlenecks are (I/O, network, CPU, etc.). Please note that different instance types may have different network and storage throughput. AWS gives you the opportunity to test and find out what is good enough.
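The "roughly 3 requests/s" figure mentioned above can be reproduced with a back-of-envelope calculation. The assumption (one page request per user, spread evenly over the hour) is the simplest possible model, not a measurement:

```python
def avg_requests_per_second(concurrent_users, window_seconds, requests_per_user=1):
    """Back-of-envelope request rate: assumes each user issues
    `requests_per_user` requests spread evenly over the window."""
    return concurrent_users * requests_per_user / window_seconds

# 10,000 users over one hour, one page hit each:
rate = avg_requests_per_second(10_000, 3600)  # ~2.8 requests/second
```

Real traffic is bursty, so a load test (e.g. against a staging copy of the stack) is still needed to find the actual peak the servers must absorb.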
I'm trying to figure out which AWS services I need for the mobile application I'm working on with my startup. The application should go into the App Store/Play Store later this year, so we need a best-practice solution for our case. It must be highly scalable, so that if there are thousands of requests to the server it remains stable and fast. We may also want to deploy a website on it.
Currently we are using Uberspace (link) servers with a Node.js application and MongoDB running on them. Everything works fine, but for the release version we want to go with AWS. What we need is something we can run Node.js/MongoDB (or something similar to MongoDB) on, and something to store images like profile pictures that can be requested by users.
I have already read some information about AWS on their website, but that didn't help a lot. There are so many services, and we don't know which of them fit our needs.
A friend told me to just use AWS EC2 for the Node.js server + MongoDB, and S3 to store images, but on some websites I have read that it is better to use this architecture:
We would be glad if someone could share their knowledge with us!
To run code: you can use Lambda, but be careful: the benefit is that you don't have to worry about servers; the downside is that Lambda can sometimes be unreasonably slow. If you need it really fast, then you need EC2 with Auto Scaling. If you tune it properly, it works like a charm.
To store data: DynamoDB if you want it really fast (single-digit milliseconds regardless of load and DB size) and you follow best practices. It REQUIRES a proper schema or it will cost you a fortune; otherwise use MongoDB on EC2.
If you need an RDBMS, then RDS (benefits: scalability, availability, no maintenance headaches).
Cache: ElastiCache offers both Redis and Memcached.
S3: to store static assets.
I do not suggest CloudFront; there are other CDNs on the market with better prices/capabilities.
API Gateway: yes, if you have an API.
Depending on your app, you may need SQS.
Cognito is a good service if you want to authenticate your users via Google/Facebook, etc.
CloudWatch: if you're a metrics addict then it's not for you; perhaps standalone monitoring on EC2 will be better. But for most people CloudWatch is absolutely OK. Create all necessary alarms (CPU overload, etc.).
You should use IAM roles to allow access to your S3/DB from Lambda/AWS.
You should not use the root account but create a separate user instead.
Create a billing alarm: you'll know if you're going to break your budget.
Create Lambda functions to back up your EBS volumes (and whatever else you may need to back up). There's no problem if a backup starts a second late, so Lambda is OK here.
Run Trusted Advisor now and then.
It would be better for you to set this up using a CloudFormation stack: you'll be able to deploy the same infrastructure with ease in another region if/when needed, and infrastructure as code is easier to manage than infrastructure built manually.
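The EBS backup idea above can be sketched as a scheduled Lambda. This is a minimal illustration, assuming volumes to back up are marked with a `Backup=true` tag (the tag convention is made up); the IAM role would need `ec2:DescribeVolumes` and `ec2:CreateSnapshot`.

```python
def volumes_to_backup(volumes, tag_key="Backup", tag_value="true"):
    """Pure helper: pick volumes whose tags request a backup.
    `volumes` mirrors the shape of describe_volumes()['Volumes'].
    """
    selected = []
    for vol in volumes:
        tags = {t["Key"]: t["Value"] for t in vol.get("Tags", [])}
        if tags.get(tag_key) == tag_value:
            selected.append(vol["VolumeId"])
    return selected

def lambda_handler(event, context):
    """Lambda entry point, triggered on a schedule (e.g. a CloudWatch
    Events rule), that snapshots all tagged EBS volumes."""
    import boto3  # AWS SDK; imported here so the helper above works offline too
    ec2 = boto3.client("ec2")
    volumes = ec2.describe_volumes()["Volumes"]
    for volume_id in volumes_to_backup(volumes):
        ec2.create_snapshot(VolumeId=volume_id,
                            Description="automated backup of " + volume_id)
```

A real version would also prune old snapshots so storage costs do not grow without bound.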
If you want a very scalable application, you may want to use a serverless architecture with AWS Lambda.
There is a framework called Serverless that helps you manage and organize all your Lambda functions and put them behind AWS API Gateway.
For storage you can use AWS EC2 and install MongoDB, or you can go with Amazon DynamoDB as your NoSQL storage.
If you want a frontend, both web and mobile, you may want to look at React Native.
I hope I've been helpful.
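For a feel of what the Serverless Framework setup looks like, here is a minimal config sketch; the service, function, and handler names are invented:

```yaml
# Minimal serverless.yml sketch (names are made up).
service: my-app

provider:
  name: aws
  runtime: nodejs18.x

functions:
  getProfile:
    handler: handler.getProfile
    events:
      - http:            # exposed through API Gateway
          path: profile/{id}
          method: get
```

`serverless deploy` would then provision the function and its API Gateway route from this one file.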
I have to deploy a RESTful Web API 2 project to Azure, and I am expecting a lot of traffic. I am not sure which Azure service to select for the best performance.
Web API services run on top of the full IIS pipeline for HTTP handling, whereas a worker role needs an implementation of HTTP handling via OWIN. Any experiences?
I would highly recommend you use the Azure App Service (either Web App or API App) in lieu of Azure 'Cloud Services'. The benefits are bountiful, the drawbacks are scarce.
A few notable benefits the App Service brings are auto-scaling, web jobs (think light weight worker roles), simpler and faster deployment mechanism, and some seamless integration with Application Insights.
About the only thing Cloud Services does better is scale (both vertical and horizontal). But for most web/webAPI scenarios these advantages are very much diminished with the new pricing tiers available for the App Service.
The App Service Environment (a new feature of the App Service) lets you scale up to an effectively unlimited number of instances (the default is 50, but you can call Microsoft to increase the limit) and use beefier (yes, that is a technical term) instance sizes.
Before you go the route of the App Service Environment, I would recommend you evaluate the geo-distribution of your user population. Each App Service Plan can scale up to 10 and 25 instances for the Standard and Premium pricing tiers, respectively. You could plop an App Service Plan in a few different data centers (US-West, US-East, US-Central, or overseas depending on the scenario) and front them with a Traffic Manager, and now you have three App Service Plans, each with a max of 10 or 25 instances depending on the pricing tier. That can add up to a lot of metal and has the dual benefit of improving end-user experience and increasing your system's availability / disaster recovery.
These days I would only recommend Cloud Services for really intense batch processing or where there are architectural limitations of your existing application that require the ability to have greater control over the underlying OS of the instance (Cloud Services support startup tasks that let you do all kinds of crazy things when a new instance is spawned that you just can't do with the App Service).
I would recommend that you use Azure API Apps (https://azure.microsoft.com/en-us/documentation/articles/app-service-api-apps-why-best-platform/), as that is a service intended to host Web API 2 services. You get load balancing, auto-scaling, monitoring, etc. when you use API Apps, so you can focus on building something that fulfills the business requirements.
You should always avoid having to do any plumbing on your own, as that can come back and bite you later. API Apps are the right choice in this case!
I am exploring AWS, and I'd like to implement in Java EE an EC2 app like the Online Photo Processing Service example in Getting Started with Amazon EC2 and Amazon SQS (PDF). It has a web-based client that submits jobs asynchronously to a client-facing web server app that then queues jobs for one or more worker servers to pick up, run, then post back to a results queue. The web server app monitors the results queue and pushes them back to the client. The block diagram is here.
How would you implement an app like this using Java EE, i.e., what technologies would you use for the servers in the diagram? We're using AWS because our research algorithms will require some heavy computation, so we want it to scale. I am comfortable with AWS basics (e.g., most things you can do in their management console - launch instances, etc), I know Java, I understand the Java AWS APIs, but I have little experience on the server side.
There are many ways to solve your problem; go with the simplest one for you. Myself, I would build a simple Java EE 6 (Weld-based) web application with the Amazon SQS dependency; this web application would send messages to AWS-hosted SQS. Another instance (possibly based on stateless EJBs), again with the Amazon SQS dependency, would read incoming messages and process them. You can use stateless EJBs as web services to process data synchronously, set the EJB pool size for each server instance depending on the processing load you need, etc.
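The core web-tier/worker split described above is SDK-agnostic; here is a sketch in Python with boto3 for brevity (the Java SDK calls are analogous). The queue URL and message shape are made up for illustration:

```python
import json

def make_job_message(job_id, payload):
    """Serialize a job for the work queue."""
    return json.dumps({"job_id": job_id, "payload": payload})

def parse_job_message(body):
    """Deserialize a job message pulled from the queue."""
    msg = json.loads(body)
    return msg["job_id"], msg["payload"]

def submit_job(queue_url, job_id, payload):
    """Web-tier side: enqueue a processing job (queue URL is hypothetical)."""
    import boto3  # AWS SDK; local import keeps the helpers above testable offline
    sqs = boto3.client("sqs")
    sqs.send_message(QueueUrl=queue_url,
                     MessageBody=make_job_message(job_id, payload))

def poll_jobs(queue_url):
    """Worker side: long-poll for jobs, process them, then delete."""
    import boto3
    sqs = boto3.client("sqs")
    resp = sqs.receive_message(QueueUrl=queue_url, WaitTimeSeconds=20)
    for msg in resp.get("Messages", []):
        job_id, payload = parse_job_message(msg["Body"])
        # ... run the heavy processing here, post results to a results queue, then:
        sqs.delete_message(QueueUrl=queue_url,
                           ReceiptHandle=msg["ReceiptHandle"])
```

The results-queue leg from the diagram mirrors the same send/receive pattern in the opposite direction.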
Most of the functionality in Java EE is over the top for the majority of tasks. Start by trying to implement this using basic servlets. Keep the code in them as stateless as possible to help with scaling. Only when servlets have some architectural flaw that prevents you from completing the task would I move on to something more complex.