I have created a Puppeteer bot and put it inside a Docker container. Up to there, no problem!
But now I need to scale it (by duplication) when a new request comes in. In other words, if the bot is already working on a request, I want it to automatically scale out to a second instance.
Second question: in your opinion, is it better to host it in the cloud or on a dedicated server?
I tried to do something with Amazon ECS and Fargate, but I'm a newbie with those technologies and I couldn't get anything working.
If you have any suggestions, they are welcome.
Thank you very much for your responses, and sorry for my bad English ;)
I want to run a Docker application container that auto scales on requests, not on resource usage.
I tried to do it on Amazon ECS without success. I'm open to other hosting solutions.
If you just want to run one application container and scale depending on load, take a look at AWS App Runner. It allows you to set a limit on the number of concurrent requests a single container instance serves, and it scales out when that threshold is reached - https://docs.aws.amazon.com/apprunner/latest/dg/manage-autoscaling.html
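As a rough sketch of what that configuration looks like with the AWS SDK (boto3 here; the configuration name and limits are placeholder values you would tune for your bot):

import boto3

apprunner = boto3.client("apprunner")

# Create an autoscaling configuration that adds a second instance as soon as
# the first one is busy with a request (all values are illustrative).
config = apprunner.create_auto_scaling_configuration(
    AutoScalingConfigurationName="one-request-per-instance",
    MaxConcurrency=1,   # each instance serves at most one request at a time
    MinSize=1,          # always keep one warm instance
    MaxSize=3,          # never run more than three copies of the bot
)

# The returned ARN is then passed as AutoScalingConfigurationArn when you
# create or update the App Runner service.
print(config["AutoScalingConfiguration"]["AutoScalingConfigurationArn"])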
Alternatively if it doesn't have to be AWS, this sounds like a use case that would fit the Cloud Run service on Google Cloud.
I have a web app deployed on my own server, but I want to be able to know when that server is down. I know there are many monitoring tools on the internet I could use. What I want to know is whether it is possible, when my server is down, to automatically bring up another container hosted on AWS, or vice versa.
Thank you.
Well, yes.
The most straightforward solution is to have a Lambda function check the /health endpoint of your own server, say, every minute. If you need to check more often, you would have to either start a micro EC2 or Fargate instance, or loop inside the Lambda, which is not ideal. Then, if your server is down, you simply start a container using the AWS SDK (boto3 in Python, for example, or aws-sdk in JavaScript). The other way around could also be based on a Lambda, but the trick is that you need to expose some mechanism that starts your own server. If you use a Fargate service, however, your task will be recreated based on your policies, so there is no need to start an external server.
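A minimal sketch of that Lambda in Python with boto3 (the health URL, cluster, task definition and subnet are placeholders for your own resources):

import urllib.request

import boto3

ecs = boto3.client("ecs")

HEALTH_URL = "https://my-server.example.com/health"  # placeholder URL

def handler(event, context):
    try:
        # Probe the self-hosted server's health endpoint.
        with urllib.request.urlopen(HEALTH_URL, timeout=5) as resp:
            if resp.status == 200:
                return {"healthy": True}
    except Exception:
        pass

    # The server looks down: launch a standby task on Fargate.
    ecs.run_task(
        cluster="standby-cluster",            # placeholder cluster name
        taskDefinition="my-webapp-task",      # placeholder task definition
        launchType="FARGATE",
        count=1,
        networkConfiguration={
            "awsvpcConfiguration": {
                "subnets": ["subnet-0123456789abcdef0"],  # placeholder subnet
                "assignPublicIp": "ENABLED",
            }
        },
    )
    return {"healthy": False, "startedStandby": True}

You would then schedule this function with an EventBridge rule that invokes it every minute.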
I have a Docker image uploaded to ECR that provides a single-page web app. I would like to make an API Gateway endpoint that starts a new instance based on that image and then maps port 443 (or 80) to container port 7894 (specified by the app in the image).
Sketch of ideal architecture:
Running EC2 instance based off my ECR image
API Gateway with single GET endpoint mapping.
That is, I want something as close as possible to what I get by starting the image via
docker run -p 80:7894 my_app:latest
and navigating to http://localhost.
I only ever want at most one of these running (not a hard requirement, just saying I don't feel I need a load balancer), and I have a preference for "less" security given there's security in the app and I'm literally the only person who will be using this. That is, I'd be very happy if I could
start a new container
directly expose port 7894 of that to the internet
What is the absolute minimal set of resources I need to set up to make this happen?
I've tried a few approaches based on Fargate, but these all end up requiring a number of extra networking and load balancing steps (VPC/subnets, ALB/NLB/CloudMap, ...) that seem unnecessary.
AWS offers numerous ways to run containers. At the time of writing, probably the easiest way for your use case is AWS App Runner. App Runner lets you provide a Docker image and deploys it, including auto scaling and load balancing, without you needing to worry about all the heavy lifting. Instead, you directly get an HTTP endpoint you can reach your app at, and that's it.
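A minimal sketch of creating such a service from an ECR image with boto3 (the image URI, role ARN and names are placeholders; App Runner needs an access role that can pull from your private ECR repository):

import boto3

apprunner = boto3.client("apprunner")

apprunner.create_service(
    ServiceName="my-single-page-app",  # placeholder name
    SourceConfiguration={
        "ImageRepository": {
            # Placeholder image URI pointing at your ECR repository.
            "ImageIdentifier": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my_app:latest",
            "ImageRepositoryType": "ECR",
            # The port your app listens on inside the container.
            "ImageConfiguration": {"Port": "7894"},
        },
        "AutoDeploymentsEnabled": False,
        # Role that allows App Runner to pull the private image (placeholder ARN).
        "AuthenticationConfiguration": {
            "AccessRoleArn": "arn:aws:iam::123456789012:role/AppRunnerECRAccessRole"
        },
    },
    InstanceConfiguration={"Cpu": "1024", "Memory": "2048"},
)

App Runner then gives you an HTTPS URL that fronts port 7894, so there is no VPC, subnet or load balancer to configure yourself.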
I have to build a web application that will have a maximum of 10,000 concurrent users for 1 hour. The web server is NGINX.
The application is a simple landing page with an HTML5 player streaming video from the Wowza CDN.
Can you suggest a suitable deployment on AWS?
A load balancer in front of 2 or more EC2 instances?
If so, what EC2 sizing do you recommend? Is it better to use Auto Scaling?
Thanks.
Thanks for your answer. The application is 2 PHP pages, and the impact is minimal because the PHP code only has 2 functions that check user/password and a token.
The video is provided by the Wowza CDN because it is live streaming, not on-demand.
What tool or service do you suggest for stress testing the web server?
I have to build a web application that will have a maximum of 10,000 concurrent users for 1 hour.
That averages out to roughly 3 users per second (10,000 spread over 3,600 seconds), which is not so bad. Sizing is a complex topic, and without more details, constraints, testing, etc., you cannot get a reasonable answer. There are many options, and without more information it is not possible to say which one is best. You just stated NGINX, but not what it's doing (static sites, PHP, CGI, a proxy to something else, etc.).
The application is a simple landing page with an HTML5 player streaming video from the Wowza CDN.
I will just lay out a few common options:
Let's assume it is a single static (another assumption) web page referring to an external resource (the video). Then the simplest and most scalable solution would be an S3 bucket hosting the page behind CloudFront (CDN); see the sketch after this list.
If you need some quick, simple logic, maybe a Lambda behind a load balancer would be good enough.
And you can of course host your solution on full compute (EC2, Beanstalk, ECS, Fargate, etc.) with different scaling options, but you will have to test out what your feasible scaling parameters and bottlenecks are (I/O, network, CPU, etc.). Please note that different instance types have different network and storage throughput. AWS gives you the opportunity to test and find out what is good enough.
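For the first option (static page on S3 plus CloudFront), the S3 part is only a few calls; a minimal sketch with boto3, assuming a bucket name of your choosing (creating the CloudFront distribution in front of it is more verbose and left out here):

import boto3

s3 = boto3.client("s3")
bucket = "my-landing-page-bucket"  # placeholder bucket name

# Turn the bucket into a static website host.
s3.put_bucket_website(
    Bucket=bucket,
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)

# Upload the landing page; the live video keeps coming from the Wowza CDN,
# so S3/CloudFront only serve the HTML, CSS and player assets.
s3.upload_file(
    "index.html", bucket, "index.html",
    ExtraArgs={"ContentType": "text/html"},
)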
I have my services split into Cloud Run services; say, in a blog application, I have user, post and comment services.
For each service I get a separate HTTP endpoint on deploy, but what I want is to have api.mydomain.com act as a gateway for accessing all of them via their respective routes (/user*, /post*, etc.).
Is there a standard (i.e. GCP-managed and serverless-ey) way to do this?
Things I've tried/thought of and their issues:
Firebase Hosting with rewrites - this is the 'suggested' solution, but it's not very flexible and, more problematically, I think it leads to double-wrapping CDNs on every request. Correct me if I'm wrong, but Cloud Run endpoints already use a CDN, and then you have Firebase Hosting running through Fastly. It seems silly to needlessly add cost and latency like that.
nginx on a constantly running instance - works ok but not managed and not serverless; requires scaling interventions
nginx on Cloud Run - this seems like it would have highly variable performance since there are (a) two possible cold starts, and (b) again double wrapping CDN.
using Cloud LB/CDN directly - seemingly not supported with Cloud Run
Any ideas? For me this kind of makes Cloud Run unusable for microservices. Hopefully there's a way around it.
I'm developing a prototype IoT application which does the following
Receive/Store data from sensors.
Web application with a web-based IDE for users to deploy simple JavaScript/Python scripts, which get executed in Docker containers.
Data from the sensors gets streamed to these containers.
User programs can use this data to do analytics, monitoring etc.
The logs of these programs are output to the user in the web app.
Current Architecture and Services
Using one AWS EC2 instance. I chose EC2 because I was trying to figure out the architecture.
Stack is Node.js, RabbitMQ, Express, MySQL, MongoDB and Docker.
I'm not interested in using AWS IoT services like AWS IoT and Greengrass
I've ruled out Heroku since I'm using other AWS services.
Questions and Concerns
My goal is prototype development for a Beta release to a set of 50 users
(hopefully someone else will help/work on a production release)
As far as possible, I don't want to spend a lot of time migrating between services since developing the product is key. Should I stick with EC2 or move to Beanstalk?
If I stick with EC2, what is the best way to handle small-medium traffic? Use one large EC2 machine or many small micro instances?
What is a good way to manage containers? Is it worth it to use Swarm for container management? What if I have to use multiple instances?
I also have small scripts that hold status information about the sensors, which is needed by the web app and other services. If I move to multiple instances, how can I make these scripts available to multiple machines?
The above question also applies to servers, message buses, databases, etc.
My goal is certainly not a production release. I want to complete the product, show that I have users who are interested and, of course, show that the product works!
Any help in this regard will be really appreciated!
If you want to manage Docker containers with the least hassle in AWS, you can use the Amazon ECS service to deploy your containers, or else go with Beanstalk. Also, you don't need to use Swarm in AWS; ECS will do that job for you.
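A rough sketch of what deploying on ECS looks like with boto3 (the cluster, service and task definition names are placeholders, and the task definition is assumed to already reference your Docker image):

import boto3

ecs = boto3.client("ecs")

# Run the containerized web app as a long-running ECS service on an existing
# cluster; ECS keeps desiredCount copies running and replaces failed tasks.
ecs.create_service(
    cluster="iot-prototype-cluster",    # placeholder cluster name
    serviceName="iot-webapp",           # placeholder service name
    taskDefinition="iot-webapp-task",   # placeholder task definition
    desiredCount=2,
    launchType="EC2",                   # or "FARGATE" to avoid managing instances
)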
It's always better to scale out rather than scale up, using small to medium-sized EC2 instances. However, the challenge you will face here is managing and scaling the underlying EC2 instances as well as your Docker containers. This pushes you toward large EC2 instances so you can set EC2 scaling aside and focus on Docker scaling (which will add additional cost for you).
Another alternative for the web application part is an AWS Lambda and API Gateway stack with the Serverless Framework, which needs the least operational overhead and comes with DevOps tooling.
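For that route, the handler itself stays very small; a minimal sketch of a Python Lambda behind an API Gateway proxy integration (the function name, route parameter and payload are purely illustrative):

import json

def handler(event, context):
    # With the proxy integration, API Gateway passes path parameters in the event.
    sensor_id = (event.get("pathParameters") or {}).get("sensorId", "unknown")

    # Whatever you return here becomes the HTTP response.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"sensorId": sensor_id, "status": "ok"}),
    }

The Serverless Framework (or SAM) then wires the API Gateway route to this function and handles packaging and deployment for you.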
You could keep your web app on Heroku and run your IoT server on AWS EC2 or AWS Lambda. Heroku runs on AWS itself, so this split setup will not hurt performance. You can ease the inconvenience of 'sitting on two chairs' by writing a Terraform script that provisions both the EC2 instance and the Heroku app and ties them together.
Alternatively, you can use the Dockhero add-on to run your IoT server in a Docker container alongside your Heroku app.
PS: I'm a Dockhero maintainer.