Versioning in Spring Cloud Dataflow stream apps - cloud-foundry

Is there a way to deploy Spring Cloud Dataflow stream apps without versioning in CloudFoundry? Whenever we deploy a stream using SCDF 2.9.x that uses Skipper server, it adds a version number to the app deployed in CF.
For example, an app named scdf_processor_splitter automatically gets a version suffix appended, becoming scdf_processor_splitter_v1.
This causes issues because we always end up with a route that contains the version. Having a static app name and route helps us with monitoring and also with maintenance. We have a lot of apps in our stream, and adding a static route at deployment time for each of them is not a feasible option. Could you please let me know if there are any configurations to get the unversioned app name in Cloud Foundry?
Undeploying and redeploying the stream does not reset the version; it only increments it.

Related

What is the best way on AWS to set up CI/CD of a Django app from GitHub?

I have a Django web application which is not too large and uses the default database that comes with Django. It doesn't have a large volume of requests either, likely no more than 100 requests per second.
I wanted to figure out a method of continuous deployment on AWS from my source code residing in GitHub. I don't want to use the EB CLI to deploy to Elastic Beanstalk because it requires running commands on the command line and is not automated deployment. I tried setting up workflows for my app in GitHub Actions and had set up a web server environment in EB too, but it didn't seem to work. Also, I couldn't figure out the final URL to view my app from that EB environment. I am working on a Windows machine.
Please suggest the least expensive way of doing this, or share any videos/articles you may have that will get my app to finally be visible in the browser after deployment.
You can use AWS CodePipeline, a service that builds, tests, and deploys your code every time there is a code change, based on the release process models you define. Use CodePipeline to orchestrate each step in your release process. As part of your setup, you plug other AWS services into CodePipeline to complete your software delivery pipeline.
https://docs.aws.amazon.com/whitepapers/latest/cicd_for_5g_networks_on_aws/cicd-on-aws.html
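To make the pipeline concrete, one of its stages is typically an AWS CodeBuild step driven by a buildspec.yml in the repository root. Below is a minimal sketch for a small Django app; the Python version and commands are assumptions rather than anything from the question, so adapt them to your project:

```yaml
# buildspec.yml - hypothetical CodeBuild spec for a small Django app
version: 0.2

phases:
  install:
    runtime-versions:
      python: 3.11                 # assumed runtime; use the version your app targets
  pre_build:
    commands:
      - pip install -r requirements.txt
  build:
    commands:
      - python manage.py test                     # run the Django test suite
      - python manage.py collectstatic --noinput  # prepare static assets

artifacts:
  files:
    - '**/*'      # hand the whole project to the deploy stage (e.g. Elastic Beanstalk)
```

The deploy stage can then be an Elastic Beanstalk deploy action in CodePipeline; the EB environment's URL is shown on the environment page in the console once the deployment succeeds.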

Stop Server Side GTM

I am trying to stop server-side GTM. I set it up as a test to understand the process, but I am still getting billed. What are the steps to stop this?
So far I have:
Removed the transport URL from the GA tag
Paused the GA tag in the client side GTM
Removed the 4 A and 4 AAAA records from my DNS
Deleted the mapping from the Cloud account under App Engine > Settings
Disabled the application as well
You can find here how to stop it from serving and incurring billing charges related to serving your app:
https://cloud.google.com/appengine/docs/managing-costs#understanding_billing
However, you may continue to incur charges from other Google Cloud products.
Google Tag Manager has a dependency on App Engine and it requires the creation of a Google Cloud Platform project.
In order to stop charges from accruing to an App Engine application, you could disable the application (although some fees related to Cloud Logging, Cloud Storage, or Cloud Datastore might still be charged), disable billing, or, my recommendation, completely shut down the project associated with your tagging server. Keep in mind that after you shut down a project, all of its resources are fully deleted after around 30 days and you won't be able to recover them.
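If you go the project-shutdown route, a minimal sketch with the gcloud CLI (the project ID below is a placeholder for your tagging-server project):

```sh
# Optional: see which App Engine versions are still deployed
gcloud app versions list --project=my-tagging-server-project

# Shut down the whole project; its resources are permanently deleted after the ~30 day grace period
gcloud projects delete my-tagging-server-project
```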

Can I read external configuration from Redis or some AWS service in a Spring Boot application?

I am looking for a way to read external configuration in a Spring Boot application.
Currently I am using spring-config-server and read configuration from application.properties via @Value.
I want to move to AWS ECS and not run a config server. As a result, I want to remove the config server and have each Spring Boot application read its configuration properties directly from an external source.
I already checked AWS SSM Parameter Store, but the limit on the number of parameters (100,000) that I can store per account and region is too small.
Can I read configuration from Redis in a Spring Boot application and access it via @Value or some other simple way (not as a backend to config-server, but directly from the Spring Boot application)?
Or maybe there is another database or AWS service that I can use?
I would strongly suggest keeping the Spring Cloud Config Server. It can use several different backends for configuration, like AWS S3 or, as you mentioned, Redis.
Which backend you want to use doesn't change the fact that you use the Spring Cloud Config Server (and client). It really makes things easier, instead of trying to reinvent the wheel yourself.
That being said, instead of using a plain @Value you might want to look at Type-safe Configuration Properties to make it easier to work with properties from the Environment.
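As a rough sketch of what that looks like (the "app" prefix and the property names here are made up for illustration, not taken from your setup):

```java
import org.springframework.boot.context.properties.ConfigurationProperties;

// Binds properties such as app.queue-name and app.batch-size from the Environment,
// regardless of which Config Server backend (Git, S3, Redis, ...) supplied them.
@ConfigurationProperties(prefix = "app")
public class AppProperties {

    private String queueName;   // maps to app.queue-name
    private int batchSize = 10; // maps to app.batch-size, with a default

    public String getQueueName() { return queueName; }
    public void setQueueName(String queueName) { this.queueName = queueName; }

    public int getBatchSize() { return batchSize; }
    public void setBatchSize(int batchSize) { this.batchSize = batchSize; }
}
```

Register it with @EnableConfigurationProperties(AppProperties.class) (or @ConfigurationPropertiesScan) and inject AppProperties wherever you previously used @Value.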

How to set up a remote backend bucket with Terraform before creating the rest of my infrastructure? (GCP)

How would I go about initializing Terraform's backend state bucket on GCP first with GitLab's pipeline, and then the rest of my infrastructure? I found this, but I'm not sure what it implies for GitLab's pipeline.
This is always a difficult question. My post won't answer your question directly but will give my view on the subject (too long to be a comment).
It's a bit like asking to manage the server that hosts your CI tools with those same CI tools (for example: the GitLab server managing itself).
If you use GitLab CI to create your state bucket, you won't be able to keep the state, as you would have no remote backend to store it in for this specific task. This would mean you would have an inconsistent resource: a .tf definition but no state.
If you want to integrate it with your CI, I would recommend using the gcloud CLI inside your pipeline: check whether the GCS bucket exists and, if not, create it.
If you really want to use Terraform, maybe use the free tier of Terraform Cloud with a remote backend only for this specific resource. That way you have all resources managed by Terraform, each with a tfstate.
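As a sketch of the gcloud CLI approach mentioned above (bucket name, location, and project are placeholders):

```sh
# Create the state bucket only if it does not exist yet, then enable object versioning
gsutil ls -b gs://my-tf-state-bucket || gsutil mb -l europe-west1 -p my-gcp-project gs://my-tf-state-bucket
gsutil versioning set on gs://my-tf-state-bucket
```

The rest of your infrastructure can then reference that bucket through a regular "gcs" backend block in its Terraform configuration.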
You now have another option, which does not involve GCP, with GitLab 13.0 (May 2020)
GitLab HTTP Terraform state backend
Users of Terraform know the pain of setting up their state file (a map of your configuration to real-world resources that also keeps track of additional metadata).
The process includes starting a new Terraform project and setting up a third party backend to store the state file that is reliable, secure, and outside of your git repo.
Many users wanted a simpler way to set up their state file storage without involving additional services or setups.
Starting with GitLab 13.0, GitLab can be used as an HTTP backend for Terraform, eliminating the need to set up state storage separately for every new project.
The GitLab HTTP Terraform state backend allows for a seamless experience with minimal configuration, and the ability to store your state files in a location controlled by the GitLab instance.
They can be accessed using Terraform’s HTTP backend, leveraging GitLab for authentication.
Users can migrate to the GitLab HTTP Terraform backend easily, while also accessing it from their local terminals.
The GitLab HTTP Terraform state backend supports:
Multiple named state files per project
Locking
Object storage
Encryption at rest
It is available both for GitLab Self-Managed installations and on GitLab.com.
See documentation and issue.
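As a minimal sketch of how the HTTP backend is wired up (project ID, state name, username, and token below are placeholders; the address follows GitLab's documented /api/v4/projects/:id/terraform/state/:name endpoint):

```hcl
# backend.tf - keep the block empty; the address and credentials are supplied at init time
terraform {
  backend "http" {
  }
}
```

```sh
terraform init \
  -backend-config="address=https://gitlab.com/api/v4/projects/<PROJECT_ID>/terraform/state/my-state" \
  -backend-config="lock_address=https://gitlab.com/api/v4/projects/<PROJECT_ID>/terraform/state/my-state/lock" \
  -backend-config="unlock_address=https://gitlab.com/api/v4/projects/<PROJECT_ID>/terraform/state/my-state/lock" \
  -backend-config="username=<GITLAB_USERNAME>" \
  -backend-config="password=<GITLAB_ACCESS_TOKEN>" \
  -backend-config="lock_method=POST" \
  -backend-config="unlock_method=DELETE" \
  -backend-config="retry_wait_min=5"
```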
Furthermore, this provider will be supported for the foreseeable future, with GitLab 13.4 (September 2020):
Taking ownership of the GitLab Terraform provider
We’ve recently received maintainer rights to the GitLab Terraform provider and plan to enhance it in upcoming releases.
In the past month we’ve merged 21 pull requests and closed 31 issues, including some long outstanding bugs and missing features, like supporting instance clusters.
You can read more about the GitLab Terraform provider in the Terraform documentation.
See Documentation and Issue.

How to integrate on premise logs with GCP stackdriver

I am evaluating Stackdriver from GCP for logging across multiple microservices.
Some of these services are deployed on premises and some of them are on AWS/GCP.
Our services are either .NET or Node.js based apps, and we are invested in winston for Node.js and NLog for .NET.
I was looking at integrating our on-premises Node.js application with Stackdriver Logging. Looking at the documentation at https://cloud.google.com/logging/docs/setup/nodejs, it seems that we need to install the agent for any machine other than Google Compute instances. Is this correct?
If we need to install the agent, is there any way I can test the logging during development? The development environment is either Windows 10 or Mac.
There's a new option for ingesting logs (and metrics) with Stackdriver, as most of the non-Google environment agents look like they are being deprecated. https://cloud.google.com/stackdriver/docs/deprecations/third-party-apps
A Google post on logging on-prem resources with Stackdriver and Blue Medora:
https://cloud.google.com/solutions/logging-on-premises-resources-with-stackdriver-and-blue-medora
For logs, you still need to install an agent on each box to collect them; it's a BindPlane agent, not a Google agent.
For Node.js, you can use the @google-cloud/logging-winston and @google-cloud/logging-bunyan modules from anywhere (on-prem, AWS, GCP, etc.). You will need to provide projectId and auth credentials manually if not running on GCP. Instructions on how to set these up are available in the linked pages.
When running on GCP we figure out the exact environment (App Engine, Compute Engine, etc.) automatically, and the logs should show up under those resources in the Logging UI. If you are going to use the modules from your development machines, we will report the logs against the 'global' resource by default. You can customize this by passing a specific resource descriptor yourself.
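A small sketch of that setup with winston, run from an on-prem or development machine (the project ID and key file path are placeholders you supply yourself):

```js
// index.js - send winston logs to Cloud Logging (Stackdriver) from outside GCP
const winston = require('winston');
const {LoggingWinston} = require('@google-cloud/logging-winston');

// Outside GCP you point at a project and a service-account key explicitly
const stackdriver = new LoggingWinston({
  projectId: 'my-gcp-project',                   // placeholder
  keyFilename: '/path/to/service-account.json',  // placeholder
});

const logger = winston.createLogger({
  level: 'info',
  transports: [
    new winston.transports.Console(),
    stackdriver, // LoggingWinston is itself a winston transport
  ],
});

logger.info('hello from on-prem');
```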
Let us know if you run into any trouble.
I tried setting this up on my local k8s cluster by following this: https://kubernetes.io/docs/tasks/debug-application-cluster/logging-stackdriver/
But I couldn't get it to work; the fluentd-gcp-v2.0-qhqzt pod keeps crashing.
Also, the page mentions that there are multiple issues with Stackdriver logging if you DON'T use it on Google GKE. See the screenshot.
I think Google is trying to lock you into GKE.