Separate URL for each git branch in Cloud Run - google-cloud-platform

I am looking into Cloud Run to host my new app, and I am wondering if it is possible to generate a separate URL for each git branch.
I use Netlify to host my other app. When it is connected to GitHub (or another VCS service), it builds the source code in a branch and deploys it to a URL that is specific to that branch.
Can this be done easily, or do I have to write some logic myself?
Or do you think AWS Amplify or some other service would be a better fit?

The URL scheme for Cloud Run services is quite simple:
https://<service-name>-<project hash>.<region>.run.app
If your project and region are the same for all the branches, you simply have to deploy a different service for each branch to get a different URL.
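For example, a per-branch deploy step could look roughly like this (the image path and the branch-name sanitization rule are assumptions to adapt to your project):

# Deploy one Cloud Run service per branch; each gets its own run.app URL.
branch="$(git rev-parse --abbrev-ref HEAD)"
# Service names (and image tags) can't contain '/', so sanitize the branch.
suffix="$(echo "$branch" | tr '/_.' '---' | tr '[:upper:]' '[:lower:]')"

gcloud run deploy "myapp-${suffix}" \
  --image "gcr.io/my-project/myapp:${suffix}" \
  --region us-central1 \
  --platform managed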
That was for Cloud Run. As for Netlify, I'm not sure it is compatible with Cloud Run; I found no documentation on this.

This answer won't be directly useful to you, but I think it's relevant and worth mentioning.
The open-source Knative API (and its implementation) actually exposes a "tag" feature while splitting traffic between multiple revisions: https://github.com/knative/docs/blob/master/docs/serving/spec/knative-api-specification-1.0.md#traffictarget
This feature is not currently supported on Cloud Run fully managed, but it will be.
By tagging releases this way, you could define tag: v1 and tag: v2 in your traffic configuration, and you would get new URLs like:
https://v1-SERVICE_NAME...run.app
https://v2-SERVICE_NAME...run.app
that directly go to these specific versions.
The interesting thing is that the revisions you specify in the traffic: block of the Service object do not have to receive any traffic (you can set their traffic percentage to 0), yet a domain name like the ones shown above is still created for these inactive revisions of your app.
So when Cloud Run fully managed supports the tag field, you can actually achieve this, although it will be a less out-of-the-box experience than Netlify's.
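To illustrate, a Knative Service manifest using tags could look roughly like this (a sketch against the Knative v1 API; the names, image, and revision names are placeholders):

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my-app
spec:
  template:
    spec:
      containers:
        - image: gcr.io/my-project/my-app:v2
  traffic:
    - tag: v1
      revisionName: my-app-00001   # older revision, kept addressable via its tag URL
      percent: 0                   # receives no live traffic
    - tag: v2
      revisionName: my-app-00002
      percent: 100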

Related

How do I add a description to a serverless deploy call?

I have a pretty complex backend project that I deploy to AWS using the Serverless Framework. The problem I'm facing is related to versioning. I have a React app on the FE, which has a version on it, but I didn't add a version to the BE for simplicity (it is the same app, and I'm not exposing any special API, so I didn't want to deal with versioning matrices between the FE and the BE, backward compatibility, etc.) --> Is this a mistake?
When I deploy my BE code, AWS does keep track of the deploy calls and adds versions in the Versions tab of the Lambdas page, and each version has a Description property. I'd like to access that Description to at least have an idea of which code is running at any given time.
I was looking at the serverless docs and couldn't find a way to send a Description up to AWS. I'm calling it like so:
serverless deploy -s integration
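The closest thing I've spotted is a per-function description in serverless.yml; this is just a sketch of what I mean, and whether it ever reaches the version's Description is exactly what I can't confirm:

# Sketch only: the description value and the GIT_SHA variable are my own
# idea, not something I've confirmed from the serverless docs.
service: my-backend
provider:
  name: aws
  runtime: nodejs12.x
  stage: ${opt:stage, 'dev'}
functions:
  api:
    handler: handler.main
    description: build ${env:GIT_SHA, 'local'}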
NOTE: I don't have CI/CD hooked up yet, but the idea would be that only check-ins to a specific branch (master or develop) would trigger a deploy to AWS (as opposed to doing it manually on a feature branch while developing). Is this something anyone is doing?
Any thoughts and/or ideas on versioning serverless backend are appreciated.

Using Cloud Functions vs Cloud Run as webhook for Dialogflow

I don't know much about web development and cloud computing. From what I've read, when using Cloud Functions as the webhook service for Dialogflow, you are limited to writing code in just one source file. I would like to create a really complex Dialogflow agent, so it would be handy to have an organized code structure to make development easier.
I've recently discovered Cloud Run, which seems like it can also handle webhook requests and makes it possible to develop a complex code structure.
I don't want to use Cloud Run just because it is inconvenient to write everything in one file, but on the other hand it would be strange to have a cloud function consisting of a single file with thousands of lines of code.
Is it possible to have multiple files in a single cloud function?
Is cloud run suitable for my problem? (create a complex dialogflow agent)
Is it possible to have multiple files in a single cloud function?
Yes. When you deploy to Google Cloud Functions, you create a bundle with all your source files or have it pull from a source repository.
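For instance, a multi-file deployment could look roughly like this (the layout, function name, and runtime are illustrative):

# my-function/
# ├── index.js        -- entry point; can require() the other files
# ├── lib/intents.js  -- an additional module
# └── package.json
gcloud functions deploy myWebhook \
  --runtime nodejs10 \
  --trigger-http \
  --source my-function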
But Dialogflow only allows index.js and package.json in the Built-In Editor
For simplicity, the built-in code editor only allows you to edit those two files, but it is mostly just meant for basic testing. If you're doing serious coding, you probably already have an environment you prefer for writing and deploying your code.
Is Cloud Run suitable?
Certainly. The biggest thing Cloud Run will get you is complete control over your runtime environment, since you're specifying the details of that environment in addition to the code.
The biggest downside, however, is that you also have to determine the details of that environment. Cloud Functions provide an HTTPS server without you having to worry about those details, as long as the rest of the environment is suitable.
What other options do I have?
Anywhere you want! Dialogflow only requires that your webhook
Be at a public address (i.e., one that Google can resolve and reach)
Run an HTTPS server at that address with a non-self-signed certificate
During testing, it is common to run it on your own machine via a tunnel such as ngrok, but this isn't a good idea in production. If you're already familiar with running an HTTPS server in another environment, and you wish to continue using that environment, you should be fine.
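For example, assuming your webhook listens locally on port 3000, a tunnel is a single command:

ngrok http 3000   # prints a public https:// URL you can paste into Dialogflow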

GCP Deployment Manager - What DevOps Tool To Use In Conjunction?

I'm presently looking into GCP's Deployment Manager to deploy new projects, VMs and Cloud Storage buckets.
We need a web front end that authenticated users can connect to in order to deploy the required infrastructure, though I'm not sure what DevOps tools are recommended to work with this system. We have an instance of Jenkins and Octopus Deploy, though I see on Google's Configuration Management page (https://cloud.google.com/solutions/configuration-management) they suggest other tools like Ansible, Chef, Puppet and Saltstack.
I'm supposing that through one of these I can update something simple like a name variable in the config.yaml file and deploy a project.
Could I also use one of these systems to ensure that a chosen name for a project, VM or Cloud Storage bucket fits a specific naming convention?
Which system do others use and why?
I use Deployment Manager, because all third-party tools rely on the presence of GCP APIs and on trusting that those APIs are in line with the actual functionality of the underlying GCP tech.
GCP is decidedly behind the curve on API development, which means that even if you wanted to use Terraform or whatever, at some point you're going to be stuck inside the SDK anyway. That's why I went with Deployment Manager, as much as I wanted my whole infra/app deployment to use other tools I was more comfortable with.
To specifically answer your question about validating naming schemas: what you would probably want to do is write a wrapper script around the gcloud deployment-manager subcommand. Do your validation in the wrapper script, then run the gcloud deployment-manager command.
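A minimal sketch of such a wrapper, assuming a hypothetical lowercase-and-hyphens convention:

#!/usr/bin/env bash
# Validate the deployment name before handing off to Deployment Manager.
set -euo pipefail

name="$1"
config="${2:-config.yaml}"

# Hypothetical convention: starts with a letter; lowercase letters,
# digits, and hyphens only; at most 30 characters.
if ! [[ "$name" =~ ^[a-z][a-z0-9-]{0,29}$ ]]; then
  echo "invalid name: $name" >&2
  exit 1
fi

gcloud deployment-manager deployments create "$name" --config "$config"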
A word of warning about Deployment Manager: it makes troubleshooting very difficult. Very often it will obscure the error that would help you establish the root cause of a problem. I can't tell you how many times somebody in my office has shouted "UGGH! Shut UP with your Error 400!" I hope that Google takes note of my pointed survey feedback and refactors DM to pass the original error through.
Anyway, hope this helps. GCP has come a long way, but they've still got work to do.

Best practice for Bitbucket Pipelines deployment to a live server on AWS

I am on a project which is about to release its first version. I want to set up a Bitbucket pipeline for deploying to AWS. When doing so, I am afraid that users on the website might be affected while we are deploying. What is the best practice for deploying a new feature to the live server without affecting users on the website?
One possible option might be to put up a maintenance page and deploy new code when not many users are using the website. Is there another way to deploy?
As mentioned in the comments, this depends on the underlying tools and technology, but I will focus on your last question.
One possible option might be to put up a maintenance page and deploy new code when not many users are using the website. Is there another way to deploy?
First, you should not deploy a new feature without proper testing; the pipeline must include automated testing, as such code can sometimes break the complete application.
You should not put the application under maintenance during deployment; that is why we have CI/CD pipelines. Design your pipeline so that you are confident the latest code and features will work in production as expected. Many AWS services support blue/green deployment, and the interesting part of blue/green deployment is rollback. You can explore further in the links below.
AWS_Blue_Green_Deployments
using-bitbucket-pipeline-for-aws-ecs-deployments
deploy-to-ec2-with-aws-codedeploy-from-bitbucket-pipelines
continuous-deployment-pipeline
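For a rough idea of the shape only, a bitbucket-pipelines.yml could look like this; the image, the pipe name/version, and the variables are assumptions to replace with your own (check the pipe's docs for the current version and required variables):

image: node:14

pipelines:
  branches:
    master:
      - step:
          name: Test                # never deploy untested code
          script:
            - npm ci
            - npm test
      - step:
          name: Deploy
          deployment: production
          script:
            # Hypothetical pipe usage for a CodeDeploy-based blue/green deploy.
            - pipe: atlassian/aws-code-deploy:0.2.5
              variables:
                AWS_DEFAULT_REGION: us-east-1
                APPLICATION_NAME: my-app
                DEPLOYMENT_GROUP: my-app-group
                COMMAND: deploy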

Monitoring the Beanstalk production environment for errors (code level)

We have our production site in Elastic Beanstalk. SNS notifications are a good feature to keep us updated about the environment status whenever it changes. But we want to watch the production environment logs more closely.
Our project is a Java web application. We want to check the status of the production environment from our other Beanstalk environments, i.e., the beta and staging environments, which are in the same region and within the same application.
Our goals are to
Use the AWS SDK or other AWS tools to fetch the production Beanstalk Tomcat logs and display them on a page in our beta site.
Run some tool periodically from the beta environment against the live environment, which basically tests the site, i.e., checks whether all code-level mappings are good, and emails any exceptions.
To break point 2 down further:
We have the Quartz scheduler to schedule jobs at particular times. We are planning to add a script that tests the complete environment periodically. Are there any built-in Beanstalk tools that test the complete site, accessing all URLs and testing the DB-to-Java-object mappings (Hibernate mappings), etc.?
We do use the Elastic Beanstalk S3 bucket to check Tomcat logs, but would like to implement steps 1 and 2 if possible.
For Item #1:
I don't recommend using beta and dev to watch production. Instead, here's what I'd do:
Set up Pingdom on all three environments, so you can keep a close eye on uptime.
Review the logging code. Do you have an explicit pattern/idiom for exception handling in place? Is your logging functioning?
Set up Papertrail with Logback. Why? You'll have real-time aggregate log tailing for each and every machine you set up a syslog receiver for. For beanstalk-maven-plugin, we are about to release an archetype (see an example 'blank' project created out of it). Even if you're not using it, it's worth a look to see how it works.
Set up log rollout to S3. As it is, the output is of limited use. I suggest you work it into something you can import for analysis (or better yet, export it for use from Hive, which is something Papertrail does).
Define your health-check code accordingly. Think about what could go wrong in terms of dependencies.
Look at / set up some CloudWatch metrics (see the sketch after this list). If your application is heavy and you're on a t1.micro, under which conditions would it spike? Use that to your advantage.
Those are just a few ideas.
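For the CloudWatch point, a sketch of a single alarm (every name, the threshold, and the SNS topic ARN are assumptions):

aws cloudwatch put-metric-alarm \
  --alarm-name prod-high-cpu \
  --namespace AWS/EC2 \
  --metric-name CPUUtilization \
  --dimensions Name=AutoScalingGroupName,Value=my-prod-asg \
  --statistic Average \
  --period 300 \
  --evaluation-periods 2 \
  --threshold 80 \
  --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:ops-alerts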
w/r/t Item #2:
I suggest you rethink your structure. I actually dislike the idea of using crontab on Elastic Beanstalk servers, since it's error-prone (leader_only? managing output?). Instead, I use my new favourite "crontab web app", Jenkins, and set up an integration-testing / smoke-testing artifact with only the relevant bits to remotely test the instance. Selenium might help, but if your services are critical, you might be happier relying on rest-assured, for instance.
Hope it helps.