I am trying to build a web application with Kubernetes and Amazon Web Services (AWS).
I believe there are many different approaches but I would like to ask for your opinions!
To keep it simple, the app would be a simple web page that displays information. A user logs in and, based on filters, gets specific information.
My question is about the internal k8s architecture:
Can I package my whole application as a single Pod? That way, cluster and node scalability would make the app available to each user by allocating 1 pod per user. Is that good practice?
Following that logic, every element of my app would be a container. For instance, in a simple setup: 1 container for the frontend of the app, 1 container for data access/management, 1 container for the backend/auth, etc.
So 1 user would "consume" 1 pod, whose containers talk to each other to produce the data the user requested. k8s would create a pod for every user, scaling the number of nodes up/down, etc.
But then, except for the data itself, everything would be dockerized and stored on ECR (Elastic Container Registry), right? So, in my opinion, there would be no need for any S3/EBS/EFS.
I am quite new to AWS and k8s, so please feel free to give honest opinions :) Feedback, good or bad, is always good to take.
Thanks in advance!
I'd recommend a layout where:
Any container can perform its unit of work for any user
One container per pod
Each component of your application has its own (stateless) Deployment (see the sketch right after this list)
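Purely as an illustration of that last point, here is a minimal sketch of one such per-component Deployment using the official Kubernetes Python client; the image URL, labels, and replica count are placeholders for your own frontend component:

```python
# Minimal sketch: one stateless Deployment per application component.
# The image, labels, and replica count below are placeholders.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when running inside the cluster

frontend = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="frontend"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # scale this component independently of the others
        selector=client.V1LabelSelector(match_labels={"app": "frontend"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "frontend"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="frontend",
                        image="<account>.dkr.ecr.<region>.amazonaws.com/frontend:latest",
                        ports=[client.V1ContainerPort(container_port=8080)],
                    )
                ]
            ),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=frontend)
```

The data-access and backend/auth components would get their own Deployments (and Services) in the same way, so each one can be scaled and updated independently.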
It probably won't work well to try to put your entire application into a single multi-container pod. This means the entire application would need to fit on a single node; some applications are larger than that, and even if it fits, it can lead to trouble with scheduling. It also means that, if you update the image for any single container, you need to delete and recreate all of the containers, which could be more disruptive than you want.
Trying to create a pod per user will also present some practical problems. You need to figure out how to route inbound requests to a particular user's pod and keep requests within that user's set of containers; Kubernetes doesn't have any native support for this. A per-user pod will also keep using resources overnight or over the weekend when that user isn't using the application. You would also need something with access to the Kubernetes API to create and destroy resources as new users join your platform.
In an AWS-specific context you might consider using RDS (hosted PostgreSQL/MySQL) or S3 (object storage) for data storage (and again, one database or S3 bucket shared across all customers). ECR is useful, but only as a place to store your Docker images; that is, your built code, not any of the persisted data or running containers.
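If you do go the S3 route, a single bucket shared by all customers with a per-user key prefix is usually enough; here is a rough boto3 sketch (the bucket name and key layout are made up for the example):

```python
# Rough sketch: one S3 bucket shared across all customers,
# with a per-user key prefix instead of per-user infrastructure.
import boto3

s3 = boto3.client("s3")
BUCKET = "my-app-data"  # hypothetical shared bucket

def save_report(user_id, report_name, body):
    s3.put_object(Bucket=BUCKET, Key="users/{}/{}".format(user_id, report_name), Body=body)

def list_reports(user_id):
    resp = s3.list_objects_v2(Bucket=BUCKET, Prefix="users/{}/".format(user_id))
    return [obj["Key"] for obj in resp.get("Contents", [])]
```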
So, I really like the idea of serverless. I came across Google Cloud Functions and Google Cloud Run.
Google Cloud Functions are individual functions, so from a broad perspective I assume Google must be securely running them on one huge Node.js server that contains all the functions of all of Google's customers and fulfils each request via a unique URL. Google takes care of the cost of this one big server and charges users for every hit their function gets. So it's pay-per-use, and that makes sense.
But when it comes to Cloud Run, I fail to understand how it works. Obviously the container must not always be running, because then they would simply charge monthly instead of per hit, just like a normal VM where a Docker image is deployed. But no, in reality they charge per hit, which means they spin up the container when a request arrives. So I don't understand how they spin it up so fast. Users have the flexibility of running any sort of environment, which means the Docker container could contain literally anything, maybe a full-fledged Linux OS. How does it load the environment so quickly and fulfil the request? Maybe it maintains the state of the machine and shuts it down when not in use, but even then it would take a decent amount of time to restore that state.
So how does Google really do it? How is it able to spin up a customer's container in virtually no time?
The idea of fast-starting sandboxed containers (that run on their own kernel for security reasons) has been around for a pretty long time. For example, Intel Clear Linux Containers and Firecracker provide fast startup through various optimizations.
As you can imagine, implementing something like this would require optimizations at many layers (scheduling, traffic serving, autoscaling, image caching...).
Without giving away Google's secrets, we can probably talk about image storage and caching: just like VMs use initramfs to pre-cache the state of the VM instead of reading all the files from the hard disk and following the boot sequence, we can do similar tricks with containers.
Google uses a similar solution for Cloud Run, called gVisor. It's a user-space virtualization technique (not an actual VMM or hypervisor). To run containers in a Linux-like environment, gVisor doesn't need to boot a Linux kernel from scratch (because gVisor reimplements the Linux kernel in Go!).
You'll find many optimizations on serverless platforms across most cloud providers (such as how long to keep a container instance around, or whether to predictively schedule inactive containers before the load arrives). I recommend reading the "Peeking Behind the Curtains of Serverless Platforms" paper to get an idea of what the problems in this space are and what cloud providers are trying to optimize for speed and cost.
You have to decouple the containers from the VMs. Dustin's second link is great because if you understand the principles of Kubernetes (and even more so if you have a look at Knative), it's easy to translate this to Cloud Run.
You have a pool of resources (nodes in Kubernetes; in fact VMs with CPU and memory), and on these resources you can run containers: 1, 2, maybe 1000 per VM; you don't know and you don't care. The power of the container is that it is packaged with all the dependencies it needs. Yes, I say packaged because your container isn't an OS; it contains the dependencies for interacting with the host OS.
To prevent any problems between containers from different projects/customers, each container runs in a sandbox (gVisor, Dustin's first link).
So there is no VM to start or stop, and no VM to create when you deploy a Cloud Run service; it's only a start of your container on existing resources. It's also for this reason that your container needs to be stateless, with no disks attached to it.
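To make the "stateless container" point concrete, here is a minimal sketch of the kind of process Cloud Run expects: an HTTP server that listens on the port passed in the PORT environment variable and keeps no state on local disk between requests (the handler body is just a placeholder):

```python
# Minimal sketch of a Cloud Run-style stateless service:
# listen on $PORT and keep no state on local disk between requests.
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Everything needed to answer comes from the request itself
        # (or from an external database), never from local files.
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"hello from a disposable container\n")

if __name__ == "__main__":
    port = int(os.environ.get("PORT", "8080"))  # Cloud Run injects PORT
    HTTPServer(("0.0.0.0", port), Handler).serve_forever()
```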
Do you want 3 "secrets"?
It's exactly the same thing with Cloud Functions! Your code is packaged into a container and deployed exactly as it's done with Cloud Run.
The underlying platform that manages Cloud Functions and Cloud Run is the same; that's why the behavior and the features are very similar! Cloud Functions takes longer to deploy because Google needs to build the container for you; with Cloud Run the container is already built.
Your Compute Engine instance is also managed as a container on Google's infrastructure! More generally, everything is a container at Google!
On Google Cloud Platform, I want to write an application that will take an HTTP request, hit APIs in a chain, and then show a template based on the responses received from those APIs, populated with the data they return. There are many templates.
What is the best way to design this on GCP, considering the points below?
1. The application will receive huge traffic.
2. Some APIs will return dynamic URLs that the template needs.
I was thinking of writing it in Java and putting that on Kubernetes, which will manage the traffic. But what should my choice of database be?
The data is mostly key-value pairs and should be highly available; in case it goes down, some backup should be there.
Yes, Kubernetes is one option. Something else you may want to consider to handle huge app traffic is Google App Engine (GAE); since you mentioned Java development, you can use the GAE standard environment, which is easy to build and deploy to, and runs reliably even under heavy load (it's fully managed).
You may want to consider using Cloud Datastore since, based on your description, it is the best fit for the application's needs (a NoSQL database that automatically handles sharding and replication). You can also use the storage-options decision diagram in the GCP documentation to choose the best storage option.
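As a rough illustration of how key-value style data sits in Datastore, here is a sketch using the google-cloud-datastore Python client; the kind and property names are invented for the example:

```python
# Sketch: storing and reading key-value style data in Cloud Datastore.
# The kind ("PageConfig") and its properties are hypothetical examples.
from google.cloud import datastore

client = datastore.Client()

# Write: one entity per key; Datastore handles sharding and replication.
key = client.key("PageConfig", "landing-page")
entity = datastore.Entity(key=key)
entity.update({"template": "promo_v2", "api_base_url": "https://example.com/api"})
client.put(entity)

# Read it back when rendering the template.
config = client.get(key)
print(config["template"], config["api_base_url"])
```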
My company is currently evaluating Hyperledger (Fabric) and we're using it for our POC. It looks very promising, and we're targeting rolling it out to production in the next few months.
We're targeting AWS as our production environment.
However, we're struggling to find good tutorials/practices/recommendations about operating a Hyperledger network in such an environment.
I'm aware that Cello aims to solve/ease deploying and monitoring a Hyperledger network, but I also read that it's not production-ready yet. The question is: should we even consider looking at Cello at this point?
If not, what are our alternatives? Docker Swarm, Kubernetes?
I also didn't find information about recommended instance types. I understand this is application- and AWS-specific, but what are the minimal system requirements (memory, CPU, network), for example for a 'peer' node? (Our application is not network-intensive, and not many transactions will be submitted; only a few per day.)
Another question is where to create those instances on AWS, from a geographical and decentralization point of view. Does it make sense for all of them to be created in the same region, or must we create instances running in different regions?
Thanks a lot.
Igor.
Yes, look at Cello; if nothing else it will help you see the AWS deployment model.
It's really nothing special: design the desired system (peers, orderer, gateways, etc.), then decide how many EC2 instances you need to support it.
As for WHERE (which region), it depends on where the connecting application is and what kind of fault tolerance you need for your business model.
One of the businesses I am working with wants a minimum of 99.99999% availability, so multi-region is critical. It's just another EC2 instance with sockets open from different hosts.
AWS doesn't provide much in terms of support for Hyperledger. They have some templates which allow you to set up the VMs initially, but that's stuff you can do yourself as well.
You are right, the documentation is very light and most of the time confusing. I got to the point where I can start from scratch with a brand-new VM, get everything ready, deploy my own network definition and chaincode, and I have the scripts to do that.
IBM Cloud has much better support for Hyperledger, however. You can design your network visually, download your connection profiles, deploy and instantiate chaincode, create and join channels, and handle certificates; pretty much everything you need to run and support such a network. It's light years ahead of AWS. They even have a full CI/CD pipeline that you could replicate for your own project. If you look at their Marbles demo, you'll see what I mean.
Cello is definitely worth looking at, with the caveat that it's in incubation, meaning it's not real yet, not production-ready, and not really useful until it becomes a fully-fledged product.
Lately I keep seeing this buzzword phrase in job postings, listed among the skills required of an applicant (knowledge of):
"Horizontally scalable RESTful services"...
What exactly is it? I cannot google anything that would really explain the notion.
I expect that horizontal scaling is adding more servers to handle more load, rather than adding more memory and CPUs to a single server, which is vertical scaling.
So you could have a Docker container that runs your REST service, which should be stateless. There are many ways to scale in production.
On each connection you could then create a new container, and once that service is done you remove it, so every connection has its own server.
If you are running something like Node.js, which is very light, then you could get away with this, but if you are using a heavier web server then you may want to look at something like AWS autoscaling: as the load on each container goes up, create a new one, so you don't overload any particular server.
You don't have to use Docker, but it wouldn't hurt for you to learn about it.
Running multiple instances of an application on multiple machines behind a load balancer is what we traditionally call web farming.
If your API is stateless, then you can go straight to a load balancer without Docker.
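To pin down what "stateless" means in both answers above, here is a small sketch: the service keeps nothing in process memory between requests, and any shared data lives in an external store (Redis in this made-up example), so you can run 1 or 100 identical copies behind a load balancer. The hostnames, route, and key names are placeholders:

```python
# Sketch of a horizontally scalable REST endpoint: no in-process state,
# so any replica behind the load balancer can serve any request.
# Flask and a shared Redis instance are assumed; names are placeholders.
import redis
from flask import Flask, jsonify

app = Flask(__name__)
store = redis.Redis(host="shared-cache.internal", port=6379)  # shared by all replicas

@app.route("/counters/<name>", methods=["POST"])
def increment(name):
    # State changes go to the shared store, never to a module-level variable,
    # so it doesn't matter which replica handles the next request.
    value = store.incr("counter:{}".format(name))
    return jsonify({"name": name, "value": int(value)})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```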
I am writing a webapp that runs on AWS. My app requires users to upload their PDF files. I will convert them into images using the "convert" utility on Linux.
Here is my setup on Ubuntu 12.04:
Django
Celery
Django Celery
Boto
I am using Apache as my web server.
The workflow is as follows:
There are three asynchronous tasks and two queues handling all the processing, plus S3 for storing input and output files.
A user uploads a PDF, then:
accept_file_task is called: this task takes the user-uploaded PDF, stores it in my S3 storage, and then inserts a message into the input_queue (SQS); see the sketch right after this workflow.
check_queue_and_launch_instance_task: a periodic task that keeps monitoring the number of messages in the input_queue and launches instances whenever the queue has more messages than the number of EC2 instances.
The instances have a bootstrap script which is a while True: loop. Any of the instances can pick a message from the input_queue, run subprocess.Popen(["convert", input_path, output_path]), write the processing status to the output_queue, and also upload the generated image to the S3 output bucket and make it available as a download link.
output_process_task: another periodic task that keeps polling the output_queue; whenever a message is available, it updates the status in the table mentioned below.
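Purely as an illustration of the first step, here is a rough sketch of what accept_file_task could look like. The question's stack uses the legacy boto library; this sketch uses boto3 instead, and the bucket name, queue URL, and task signature are all made up:

```python
# Sketch of accept_file_task: store the uploaded PDF in S3, then enqueue a
# message for the converter instances. boto3 is used here instead of the
# legacy boto library from the question; all names are placeholders.
import json
import uuid

import boto3
from celery import shared_task

s3 = boto3.client("s3")
sqs = boto3.client("sqs")

INPUT_BUCKET = "pdf-input-bucket"  # hypothetical
INPUT_QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/input_queue"  # hypothetical

@shared_task
def accept_file_task(document_id, filename, pdf_bytes):
    key = "uploads/{}/{}".format(uuid.uuid4(), filename)
    s3.put_object(Bucket=INPUT_BUCKET, Key=key, Body=pdf_bytes)
    sqs.send_message(
        QueueUrl=INPUT_QUEUE_URL,
        MessageBody=json.dumps({"document_id": document_id, "s3_key": key}),
    )
    return key
```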
I am using a model called Document to store all the status information. I also have users registering, and hence a table to store all the user information. Celery also created a lot of tables to store its task information. Right now I am using a single instance and the sqlite3 database (that comes with Python) on that instance.
I am unsure about the following things
How do I scale up the database? Should I go for RDS, SimpleDB, or AmazonDB? If not for Celery, I could have easily used SimpleDB. I am really stuck on this one.
How do I get rid of the two periodic tasks, check_queue_and_launch_instance_task and output_process_task? My idea is that Auto Scaling must be used in some way, so that if needed at a later stage an Elastic Load Balancer can be used.
If any of you have designed something similar, please help me figure out how to go about it.
How do I scale up the database? Should I go for RDS, SimpleDB, or AmazonDB? If not for Celery, I could have easily used SimpleDB. I am really stuck on this one.
Keep in mind that premature optimization is the root of all evil. The question of RDS (which is really just MySQL, Oracle, or MS SQL) vs. SimpleDB is more of an application design decision than one based on scalability. SimpleDB is just a simple key-value store. RDS, on the other hand, will give you full ACID functionality. If your data is relational, then you should be using a relational database. If you just need a place to store simple strings or integers, then something like SimpleDB would make more sense.
Right now I am using a single instance and the sqlite3 database (that comes with Python) on that instance.
Make sure that you understand the consequences of a) creating a single point-of-failure in your design and b) SQLite's limitations compared to using a standalone RDBMS in this application. (You can use it, but it's really intended for single-user applications).
How do I get rid of the two periodic tasks, check_queue_and_launch_instance_task and output_process_task? My idea is that Auto Scaling must be used in some way, so that if needed at a later stage an Elastic Load Balancer can be used.
If you're willing to replace Celery with SQS, you can tie together SQS + SNS + CloudWatch to simplify this portion of your app. That said, what you're doing doesn't sound like a bad choice, especially if it's already working well. Your time is probably better spent working on the problems in front of you rather than those that might occur down the road.
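As a concrete (and hypothetical) example of that SQS + SNS + CloudWatch wiring: SQS already publishes its queue-depth metrics to CloudWatch, so an alarm on ApproximateNumberOfMessagesVisible can notify an SNS topic (or drive an Auto Scaling policy) instead of a periodic check_queue_and_launch_instance_task. This sketch uses boto3; the queue name, topic ARN, and threshold are all made up:

```python
# Sketch: let CloudWatch watch the SQS backlog instead of a periodic Celery task.
# When the visible-message count exceeds the threshold, the alarm notifies an
# SNS topic, which could fan out to an Auto Scaling policy or to an operator.
# Queue name, topic ARN, and threshold are hypothetical.
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="input-queue-backlog",
    Namespace="AWS/SQS",
    MetricName="ApproximateNumberOfMessagesVisible",
    Dimensions=[{"Name": "QueueName", "Value": "input_queue"}],
    Statistic="Average",
    Period=300,                    # evaluate over 5-minute windows
    EvaluationPeriods=1,
    Threshold=10,                  # e.g. more than 10 pending PDFs on average
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:scale-converters"],
)
```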