I need to deploy two Cloud Run services on GCP, one frontend and the other backend, so I want to ask:
Is it possible to connect two services like these?
If it is possible, what is the best way of connecting these two services so that they can communicate?
I searched the internet but didn't find a lot of useful info.
Please consider the official documentation:
Securing Cloud Run services tutorial
This tutorial walks through how to create a secure two-service
application running on Cloud Run. This application is a Markdown
editor which includes a public "frontend" service which anyone can use
to compose markdown text, and a private "backend" service which
renders Markdown text to HTML.
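To give a flavor of what that tutorial sets up: the backend is deployed so that only authenticated callers can reach it, and the frontend fetches an identity token from the metadata server to call it. A minimal sketch, where the service names, project ID, and backend URL are placeholders:

```bash
# Deploy the backend privately: only authenticated callers may invoke it.
gcloud run deploy backend --image gcr.io/PROJECT_ID/backend \
  --no-allow-unauthenticated

# Allow the frontend's service account to invoke the backend.
gcloud run services add-iam-policy-binding backend \
  --member serviceAccount:FRONTEND_SA@PROJECT_ID.iam.gserviceaccount.com \
  --role roles/run.invoker

# Inside the frontend container: fetch an ID token from the metadata
# server and send it as a Bearer token on each request to the backend.
TOKEN=$(curl -s -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/identity?audience=https://backend-xyz-uc.a.run.app")
curl -H "Authorization: Bearer $TOKEN" https://backend-xyz-uc.a.run.app/render
```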
Yes, it is possible. I'm not going to go into details, but I will give you a quick overview of the workflow.
Assuming you don't have the code in source control and you already have built Docker containers:
Load the Docker image via docker load (it accepts the Docker .tar image).
Tag that image.
Push the image to Container Registry.
Navigate to the Cloud Console and click Create Service, or run gcloud run deploy on the CLI.
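A minimal sketch of those steps, assuming the image arrives as a tarball named app.tar and using placeholder project and service names:

```bash
# Load the prebuilt image from the tarball, then tag it for Container Registry.
docker load -i app.tar
docker tag app:latest gcr.io/PROJECT_ID/frontend:latest

# Push the image, then deploy it to Cloud Run.
docker push gcr.io/PROJECT_ID/frontend:latest
gcloud run deploy frontend \
  --image gcr.io/PROJECT_ID/frontend:latest \
  --region us-central1 \
  --allow-unauthenticated
```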
If you need a database, it's much better to use Cloud SQL (assuming it's PostgreSQL). You want to create the instance beforehand in the same region.
During deployment you can open the Connections tab to attach your database instance, and set the container port to the port your app listens on.
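The same attachment can also be done from the CLI; a sketch, where the instance connection name and port are placeholders:

```bash
# Attach an existing Cloud SQL (PostgreSQL) instance at deploy time
# and point Cloud Run at the port the container listens on.
gcloud run deploy backend \
  --image gcr.io/PROJECT_ID/backend:latest \
  --add-cloudsql-instances PROJECT_ID:us-central1:my-postgres \
  --port 8080
```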
I would like to deploy the time series database QuestDB on GCP, but I do not see any instructions in the documentation. Could I get some steps?
This can be done in a few short steps on Compute Engine. When creating a new instance, choose the region and instance type, then:
In the "Container" section, enable "Deploy a container image to this VM instance"
Type questdb/questdb:latest for the "Container image"
This will pull the latest QuestDB Docker image and run it on your instance at launch. The rest of the setup is adding firewall rules to allow networking on the ports you require (see the sketch after this list):
port 9000 - web console & REST API
port 8812 - PostgreSQL wire protocol
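For reference, a CLI sketch of the same setup, with the instance and firewall rule names as placeholders:

```bash
# Launch a Compute Engine VM that runs the QuestDB container image.
gcloud compute instances create-with-container questdb-vm \
  --zone us-central1-a \
  --container-image questdb/questdb:latest

# Open the web console/REST API port and the PostgreSQL wire protocol port.
gcloud compute firewall-rules create allow-questdb \
  --allow tcp:9000,tcp:8812
```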
The source of this info is an ETL tutorial by Gabor Boros, which deploys QuestDB to GCP and uses Cloud Functions for loading and processing data from a storage bucket.
My question is specifically related to Azure or AWS, i.e. a cloud provider. So, please do not downvote.
I want to ask how I can deploy a command-line program like:
https://github.com/rhiever/reddit-twitter-bot
which is written in Python, to the cloud.
I want the program to just run indefinitely, i.e. it will post data from Reddit to Twitter.
Can it be done with Azure? I know Azure provides website deployment, but is there any service for this kind of long-running program?
Or, if I have to set up a virtual machine and deploy the code there, how do I configure the machine so that it posts data to Twitter (are there any networking issues involved)?
Sorry if this is a beginner question; I have just started using the cloud.
If you were to choose AWS, you could run this easily in a Docker container on Elastic Container Service (ECS). Look here for more information: AWS ECS Features
You can probably get what you want in the free tier.
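A rough sketch of the container route, assuming the bot repo has a Dockerfile that runs the script and the AWS CLI is configured (account ID, region, and names are placeholders):

```bash
# Build an image that runs the bot script.
docker build -t reddit-twitter-bot .

# Create an ECR repository and push the image to it.
aws ecr create-repository --repository-name reddit-twitter-bot
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
docker tag reddit-twitter-bot:latest \
  123456789012.dkr.ecr.us-east-1.amazonaws.com/reddit-twitter-bot:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/reddit-twitter-bot:latest

# Then create an ECS task definition and a service with desired count 1,
# which keeps one copy of the container running indefinitely.
```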
I need to develop a Spring Boot microservice and deploy it in Docker. I have developed a sample microservice. While learning Docker and container deployment I found plenty of documentation on installing Docker, building images, and running the application as a packaged container. I still have some doubts about the deployment procedure:
If I need to deploy 4 Spring Boot microservices in Docker, do I need to create a separate image for each? Or can I use the same Dockerfile for all my Spring Boot microservices?
I am using a PostgreSQL database. Can I include that connection in the Docker image, or do I need to manage it separately?
If you have four different Spring Boot applications, I suggest creating four different Dockerfiles, and building four different images from those files. Basically put one Dockerfile in each Spring application folder.
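The four Dockerfiles can still be nearly identical; a minimal sketch for one service, where the jar name and base image are assumptions:

```bash
# A typical Dockerfile for a Spring Boot fat jar, one per service folder.
cat > Dockerfile <<'EOF'
FROM eclipse-temurin:17-jre
COPY target/my-service.jar /app.jar
ENTRYPOINT ["java", "-jar", "/app.jar"]
EOF

# Build each service into its own image with its own name and tag.
docker build -t my-service:1.0 .
```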
You can build the PostgreSQL credentials (hostname, username & password) into the application by writing them in the code. This is the easiest approach.
If you use AWS and ECS (Elastic Container Service) or EC2 to run your Docker containers, you could store the credentials in the EC2 Systems Manager Parameter Store and have your application fetch them at startup. However, this takes a bit more AWS knowledge, and you have to use the AWS SDK to fetch the credentials from the application. Here is a StackOverflow question about exactly this: Accessing AWS parameter store values with custom KMS key
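To illustrate the Parameter Store side from the CLI (the parameter name is a placeholder; the application would make the equivalent get call through the AWS SDK at startup):

```bash
# Store the database password as an encrypted SecureString parameter.
aws ssm put-parameter --name /myapp/db-password \
  --type SecureString --value 'supersecret'

# Fetch and decrypt it, as the application would via the SDK.
aws ssm get-parameter --name /myapp/db-password \
  --with-decryption --query Parameter.Value --output text
```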
There could be one single image for all your microservices, but that is not good design and not suggested. Always try to decouple things from one another. In your case, create separate images (separate Dockerfiles) for each microservice.
The same applies to your second question: create a separate image (its own Dockerfile, or simply the official PostgreSQL image) for your database as well. For the credentials, you can follow Jonatan's suggestion.
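A sketch of that separation with plain Docker, using the official PostgreSQL image and a shared network (names and credentials are placeholders):

```bash
# Put the database and the services on one network so they can talk.
docker network create app-net

# Run PostgreSQL as its own container instead of baking it into a service image.
docker run -d --name postgres --network app-net \
  -e POSTGRES_PASSWORD=secret postgres:15

# Each microservice reaches the database by its container name.
docker run -d --name order-service --network app-net \
  -e DB_URL=jdbc:postgresql://postgres:5432/postgres my-service:1.0
```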
For a project we are trying to expand Google Cloud Datalab and deploy the modified version to the Google Cloud platform. As I understand it, the deploying process normally consists of the following steps:
Build the Docker image
Push it to the Container Registry
Use the container parameter with the Google Cloud deployer to specify the correct Docker image, as explained here.
Since the default container registry, i.e. gcr.io/cloud_datalab/datalab:<tag> is off-limits for non-Datalab contributors, we pushed the Docker image to our own container registry, i.e. to gcr.io/<project_id>/datalab:<tag>.
However, the Google Cloud deployer only pulls directly from gcr.io/cloud_datalab/datalab:<tag> (with the tag specified by the container parameter) and does not seem to allow specifying the source container registry. The deployer does not appear to be open source, leaving us with no way to deploy our image to Google Cloud.
We have looked into creating a custom deployment similar to the example listed here but this never starts Datalab, so we suspect the start script is more complicated.
Question: How can we deploy a Datalab image from our own container registry to Google Cloud?
Many thanks in advance.
The deployment parameters can be guessed, but it is easier to get the Google Cloud Datalab deployment script by SSHing to the temporary compute node that is responsible for deployment and browsing the /datalab folder. That folder contains a runtime configuration file for use with the App Engine Flexible Environment. Using this configuration file, the gcloud preview app deploy command (which accepts an --image parameter for Docker images) will deploy the image to App Engine correctly.
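Putting that together, the flow would look roughly like this; the instance name, config file name, and image tag are assumptions, and gcloud preview app deploy reflects the gcloud release current at the time:

```bash
# SSH to the temporary compute node used for deployment and inspect
# the /datalab folder for the App Engine Flexible runtime config.
gcloud compute ssh datalab-deploy-vm
ls /datalab

# Deploy the image from your own registry using that configuration.
gcloud preview app deploy app.yaml \
  --image gcr.io/PROJECT_ID/datalab:mytag
```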
I've been following the official Amazon documentation on deploying to Elastic Beanstalk:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_Python.html
and on customizing the environment:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers.html#customize-containers-format
However, I am stuck. I do not want to use the built-in RDS database; I want to use MongoDB, but still have my Django/Python application scale as a RESTful frontend, or rather an API endpoint, for my users.
Currently I am running one EC2 instance to test out my Django application.
Some problems that I have with Elastic Beanstalk:
1. I cannot figure out how to run commands such as
pip install git+https://github.com/django-nonrel/django#nonrel-1.5
Since I cannot install the MongoDB driver for Django to use, I cannot run my MongoDB commands.
I was wondering if I am just skipping over some concepts or just not understanding how deploying on Beanstalk works. I can see that Beanstalk just launches EC2 instances, and I possibly need to write custom scripts or something, I don't know.
I've searched around, but I don't exactly know what to ask in regards to this. The top Google results are always Amazon documents, which are less than helpful for customization outside of their RDS environment. I know that Django traditionally uses relational databases like those RDS provides, but again I don't want to use those, as they are not flexible enough for the web application I am writing.
You can create a custom AMI tailored to your specific needs; the steps are outlined in the AWS documentation below. Basically, you would create a custom AMI with the packages needed to host your application, and then update the Beanstalk config to use your custom AMI.
Using Custom AMIs
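A rough sketch of that flow, assuming you have already installed the needed packages on a running Beanstalk-launched instance; the instance ID and AMI name are placeholders:

```bash
# On the running instance: install the extra packages your app needs,
# e.g. the MongoDB-capable Django fork from the question.
pip install git+https://github.com/django-nonrel/django#nonrel-1.5

# From your workstation: snapshot that configured instance as a custom AMI.
aws ec2 create-image --instance-id i-0abc1234 --name my-beanstalk-ami

# Finally, point the Beanstalk environment at the new AMI ID, either in
# the console or via aws elasticbeanstalk update-environment.
```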