Django deployment on GCP with Kubernetes

I finally got my first Django project, and I need help deploying it on GCP with Kubernetes.
I've never deployed a project before, so it's a bit stressful, especially with the client breathing down my neck.
It's an e-learning platform, so I want to use GCP with Kubernetes (for CI/CD, since there will be a lot of updates to the project) and Google Cloud Storage for storing media files.
I'd love some help concerning deployment, dos and don'ts, and some useful links to start with.
PS: this is my first question, so go easy on me.

Your question is too broad; try getting started first and then asking a specific question, as there is no universal way of deploying Django applications on GCP that we could explain here. I recommend starting by getting familiar with GCP services.
There is a really good (but paid) course on the Coursera platform ("Getting Started with Google Kubernetes Engine") with practical hands-on labs on how to use Kubernetes on GCP together with a CI/CD tool like Jenkins. You can also find more about Jenkins on GCP here.
You will also find there how to:
Use different deployment strategies with Kubernetes (Rolling Updates, Canary, and Blue-Green deployments) with a simple hello-world app.
Create a continuous delivery pipeline using Jenkins.
You can enroll in this course with a free trial account.
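On the Google Cloud Storage part of the question: the usual Django approach is the django-storages package (pip install django-storages[google]). A minimal, hedged sketch of the relevant settings; the bucket name below is a placeholder assumption:

    # settings.py (sketch) -- store uploaded media in GCS via django-storages
    INSTALLED_APPS = [
        # ... the project's other apps ...
        "storages",
    ]

    # Route all FileField/ImageField uploads to the bucket instead of local disk.
    DEFAULT_FILE_STORAGE = "storages.backends.gcloud.GoogleCloudStorage"
    GS_BUCKET_NAME = "my-elearning-media"  # placeholder bucket name

    # On GKE, credentials are normally picked up from the environment (e.g.
    # Workload Identity), so no service-account key file has to be baked into
    # the container image.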

Related

What is the correct way to deploy a single-tenant app on GCP?

We have an application with a FastAPI backend, a Next.js frontend, a PostgreSQL database, and cloud storage. There is a possibility that we will add some new service to this in the future.
For CI/CD we are using GitHub with GitHub Actions.
Our goal is to deploy one instance per client (organization), so that clients are isolated and each has its own subdomain, like client1.ourdomain.com, client2.ourdomain.com, etc.
We are using Google Cloud Platform. What would be the best way to deploy this and manage it later?
In the future we will be creating an app to manage our clients; will it be possible to turn those environments off, or create them, from that app?
For development purposes we have a virtual machine on GCP with docker-compose to run all of this.
I thought about using Cloud Run for FastAPI, Cloud Run for Next.js, Postgres on GCP, and Google Cloud Storage. Is that the correct approach?
Please be aware that your question is not easy to answer, because you are talking about how to handle tenancy.
Good tenancy management depends on your requirements, and the way you want to evolve and maintain your software has a big impact on that.
Based on the little information you shared, organizing your clients by project looks like the way you want to go. That approach allows you to isolate each client's resources (including accounts/billing and so on) by implementing the right IAM policies.
Also, you can later deploy a dedicated "admin" project to manage the others from an admin app.
If your team's skills are mainly in programming/development, I would suggest talking to a GCP architect/engineer to figure out the right architecture, and adopting an IaC (infrastructure-as-code) approach to creating your clients' projects and deployments, so that your app is easy to replicate and maintain across clients.
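On turning environments off or creating them from your future admin app: yes, that is possible, since everything on GCP can be driven programmatically. As a minimal, hedged sketch (not a recommended production design), an admin service could shell out to the gcloud CLI; the service names, region, and image below are illustrative placeholders, and in practice you would more likely drive this through your IaC tooling:

    import subprocess

    REGION = "europe-west1"  # placeholder region

    def deploy_client(client_id: str, image: str) -> None:
        """Create or update the isolated backend instance for one client."""
        subprocess.run(
            [
                "gcloud", "run", "deploy", f"backend-{client_id}",
                "--image", image,
                "--region", REGION,
                "--allow-unauthenticated",
            ],
            check=True,
        )

    def remove_client(client_id: str) -> None:
        """Tear down a client's instance (Cloud Run already scales to zero when idle)."""
        subprocess.run(
            [
                "gcloud", "run", "services", "delete", f"backend-{client_id}",
                "--region", REGION,
                "--quiet",
            ],
            check=True,
        )

    # Example: provision a tenant later mapped to client1.ourdomain.com
    # deploy_client("client1", "gcr.io/my-project/backend:latest")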

Kubeflow deployment on GCP

I have been reading for a few weeks about different approaches to ML in production. I decided to test Kubeflow, and I decided to test it on GCP. I started to deploy Kubeflow on GCP using the guideline on the official Kubeflow website (https://www.kubeflow.org/docs/gke/). I ran into a lot of issues, and it was quite hard to fix them. I started to look for a better approach and noticed that GCP's AI Platform now offers deploying Kubeflow Pipelines in just a few simple steps (https://cloud.google.com/ai-platform/pipelines/docs/connecting-with-sdk).
After easily setting this up, I had a few questions and doubts. If it is this easy to set up and deploy Kubeflow, why do we have to go through the cumbersome process suggested on the official Kubeflow website? Since creating a Kubeflow pipeline on GCP basically means I am deploying Kubeflow on GCP, does that mean I can access other Kubeflow services like Katib?
Elnaz
The official Kubeflow website provides the required information in a detailed, do-it-yourself way, whereas Google Cloud directly provides the same services as a ready-made solution.
Referring to Will Fuks' write-up: yes, you are able to access Katib on GCP.
The GCP-managed service of Kubeflow Pipelines is just that: only Pipelines. You won't have much access to the cluster to make changes. I've deployed a full Kubeflow cluster that can still reach AI Hub as well.
I believe they have plans to expand what can be deployed in AI Platform, but if you don't want to wait, self-deployment is possible (though not easy, IMO).
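For what it's worth, the SDK route the question links to boils down to a few lines. A minimal sketch using the kfp v1 SDK; the host URL is a placeholder you would copy from your AI Platform Pipelines instance's settings page:

    import kfp
    from kfp import dsl

    @dsl.pipeline(name="hello-pipeline", description="Trivial one-step pipeline.")
    def hello_pipeline():
        # In the kfp v1 SDK, ContainerOp is the basic building block of a step.
        dsl.ContainerOp(
            name="hello",
            image="alpine:3.18",
            command=["echo", "hello from kubeflow pipelines"],
        )

    # Placeholder endpoint copied from the Pipelines UI of your instance.
    client = kfp.Client(host="https://xxxx-dot-us-central1.pipelines.googleusercontent.com")
    run = client.create_run_from_pipeline_func(hello_pipeline, arguments={})
    print(run.run_id)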

Deploying a multi-service app to a cloud provider

There are plenty of tutorials on how to deploy a containerized service to the cloud: AWS, Google Cloud Platform, Heroku, and many others all have nice guides on how to do this.
However, most real-world apps are made of two or more services (for example, a database + a web server), rather than just one service.
Is it bad practice to deploy the various services of a multi-service app to different clusters (e.g. deploy the database to one GKE cluster and the web server to another)? I'm asking because I am finding it very difficult to deploy a simple web app to a single cluster. I was expecting that once I set up my Dockerfiles and docker-compose.yml, everything would work out of the box (as advertised by the documentation of Docker Compose and Kubernetes), and I would be able to have a small cluster with one container for my database and one container for my web server.
So my questions are:
Is it bad practice to deploy the various services of a multi-service app to different clusters?
What is, in general, the de facto standard way to deploy a web app with a database and a web server to the cloud? What are the easiest tools for achieving this?
Practically, what is the simplest way I can deploy a React + Express + MongoDB app to any cloud provider with a free-tier account?
Deploying multiple services (a.k.a. applications) that share some logic between them on the same cluster/namespace is actually the best practice. I am not sure why you find it difficult, but you can take a container orchestration platform such as Kubernetes and deploy as many applications as you want in the same project on the same cluster.
I would recommend picking a cloud platform that offers a managed container orchestrator, such as Google Kubernetes Engine (GKE) on Google Cloud Platform (or any other cloud platform you want), and starting to explore. You can also read about containers in general, or about Kubernetes.
So, practically speaking, I would create MongoDB and the Express app inside the same namespace (and run every other service or application related to the project in its own container within that same namespace).
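To make that concrete, here is a hedged sketch using the official Kubernetes Python client: one namespace holding both a MongoDB Deployment and an Express web Deployment. It assumes kubectl is already configured against a cluster, and the image names and ports are illustrative placeholders:

    from kubernetes import client, config

    config.load_kube_config()  # assumes kubectl already points at your cluster

    core = client.CoreV1Api()
    apps = client.AppsV1Api()

    # One namespace for the whole app.
    core.create_namespace(
        client.V1Namespace(metadata=client.V1ObjectMeta(name="my-web-app"))
    )

    def deployment(name, image, port):
        """Build a single-replica Deployment for one service of the app."""
        container = client.V1Container(
            name=name,
            image=image,
            ports=[client.V1ContainerPort(container_port=port)],
        )
        return client.V1Deployment(
            metadata=client.V1ObjectMeta(name=name),
            spec=client.V1DeploymentSpec(
                replicas=1,
                selector=client.V1LabelSelector(match_labels={"app": name}),
                template=client.V1PodTemplateSpec(
                    metadata=client.V1ObjectMeta(labels={"app": name}),
                    spec=client.V1PodSpec(containers=[container]),
                ),
            ),
        )

    # Database and web server live side by side in the same namespace.
    apps.create_namespaced_deployment("my-web-app", deployment("mongo", "mongo:6", 27017))
    apps.create_namespaced_deployment("my-web-app", deployment("web", "my-express-app:latest", 3000))

In a real setup you would also add Services (and probably a volume for MongoDB), but the point is simply that both workloads belong in one cluster and one namespace.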

Need to learn Spring Cloud microservices to be deployed on AWS

I need to learn about microservices developed with Spring Boot + Spring Cloud and deployed on AWS. Where do I start? As of now I know Spring Boot, but I know nothing about Spring Cloud and AWS.
Is there any specific API provided by AWS to write Spring microservices and deploy them on AWS, or is Spring Cloud enough?
Thanks,
Vasu
This is... not a great question, as it's super broad. Entire books have been written about this topic.
Having said that, I'm going to point you at some of the books and resources written about this topic: reading those should help you understand some of your currently unknown unknowns and be able to get this done (and give you the background to ask more specific questions in the future).
The Spring Cloud AWS documentation <-- I discovered this about 6 months ago for some other work; it's super good.
Manning's Spring Microservices in Action book. Cloud and microservices work is actually kind of hard and requires some infrastructure you may not have thought about.
Baeldung's Spring Cloud AWS - EC2 Introduction article. Their stuff is really good; you'll run into it a lot on the web when hunting for answers to Spring questions.
Good luck!
You can dockerize the Spring Boot/Cloud applications and then either:
1. Run them on an EC2 machine via docker-compose (as a group of microservices), or
2. Run them on Elastic Container Service (ECS); see the sketch below.
Hope this helps.
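A hedged sketch of option 2 using boto3: it registers a task definition for one dockerized Spring Boot service and runs it on an existing ECS cluster with Fargate. The cluster name, image URI, role ARN, and subnet ID are placeholder assumptions:

    import boto3

    ecs = boto3.client("ecs", region_name="us-east-1")

    # Describe the container for one Spring Boot microservice.
    task_def = ecs.register_task_definition(
        family="orders-service",
        requiresCompatibilities=["FARGATE"],
        networkMode="awsvpc",
        cpu="256",
        memory="512",
        executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",  # placeholder role
        containerDefinitions=[{
            "name": "orders-service",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/orders-service:latest",
            "portMappings": [{"containerPort": 8080}],  # typical Spring Boot port
            "essential": True,
        }],
    )

    # Keep one copy of the task running on an existing (placeholder) cluster.
    ecs.create_service(
        cluster="spring-microservices",
        serviceName="orders-service",
        taskDefinition=task_def["taskDefinition"]["taskDefinitionArn"],
        desiredCount=1,
        launchType="FARGATE",
        networkConfiguration={"awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],  # placeholder subnet
            "assignPublicIp": "ENABLED",
        }},
    )

You would repeat this per microservice (or, more realistically, let your CI pipeline do it on every release).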

Deploying a command-line Python program using Azure

My question is specifically related to Azure or AWS, i.e., a cloud provider. So please do not downvote.
I want to ask how I can deploy a command-line program like
https://github.com/rhiever/reddit-twitter-bot
(which is written in Python) to the cloud.
I want the program to just run indefinitely, i.e., it will keep posting data from Reddit to Twitter.
Can it be done with Azure? I know Azure provides website deployment, but is there a service for something like this?
Or, if I have to set up a virtual machine and deploy the code there, how do I configure the machine so that it posts data to Twitter (are there any networking issues involved)?
Sorry if this is a beginner question; I have just started using the cloud.
If you were to choose AWS, you could run this easily within a Docker container on Elastic Container Service (ECS). Look here for more information: AWS ECS Features
You can probably get what you want in the free tier.
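On the "run indefinitely" part: the usual pattern is to wrap one unit of the bot's work in a loop with a sleep and basic error handling, so transient failures don't kill the container or VM process. A minimal sketch, where post_reddit_to_twitter() is a hypothetical stand-in for the linked bot's real logic:

    import logging
    import time

    logging.basicConfig(level=logging.INFO)

    def post_reddit_to_twitter():
        """Placeholder for one polling/posting cycle of the bot."""
        ...

    def main():
        while True:
            try:
                post_reddit_to_twitter()
            except Exception:
                # Log and keep going; the ECS task / VM process stays up.
                logging.exception("cycle failed, retrying after the next sleep")
            time.sleep(60)  # poll interval in seconds (tune to API rate limits)

    if __name__ == "__main__":
        main()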