Stop Server-Side GTM - google-cloud-platform

I am trying to stop server-side GTM. I set it up as a test to understand the process, but I am still getting billed. What are the steps to stop this?
So far I have:
Removed the transport URL from the GA tag
Paused the GA tag in the client side GTM
Removed the 4 A and 4 AAAA records from my DNS
Deleted the mapping from the Cloud account under App Engine > Settings
Disabled the application as well
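A quick way to confirm the DNS step took effect is to query the records directly (the tagging subdomain below is a placeholder):

```shell
# Hypothetical subdomain used for the tagging server.
# Empty output means the A/AAAA records no longer resolve.
dig +short A gtm.example.com
dig +short AAAA gtm.example.com
```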

You can find here how to stop the app from serving and incurring billing charges:
https://cloud.google.com/appengine/docs/managing-costs#understanding_billing
However, you may still incur charges from other Google Cloud products.

Server-side Google Tag Manager has a dependency on App Engine, and it requires the creation of a Google Cloud Platform project.
To stop charges from accruing to an App Engine application, you can either disable the application (although some fees related to Cloud Logging, Cloud Storage, or Cloud Datastore might still be charged), disable billing, or, my recommendation, completely shut down the project associated with your tagging server. Keep in mind that after you shut down a project, all of its resources are fully deleted after around 30 days and cannot be recovered.
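The disable-billing and shut-down-project options can be sketched from the CLI (the project ID is a placeholder; `gcloud beta billing` requires the beta component and billing permissions):

```shell
# Placeholder project ID for the tagging server.
PROJECT_ID=my-sgtm-project

# Option A: detach the billing account so no further charges accrue.
gcloud beta billing projects unlink "$PROJECT_ID"

# Option B (recommended): shut the whole project down.
# All resources are fully deleted after a ~30-day grace period
# and cannot be recovered afterwards.
gcloud projects delete "$PROJECT_ID"
```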

Related

Your project is being suspended for cryptocurrency mining in violation of our Terms of Service (GCP bug)

Today I got an email saying that my project was mining cryptocurrency and the instance was blocked, but no cryptocurrency has ever been mined in the project.
How does google cloud conclude that cryptocurrencies are mined in the project?
I am deploying an energy-sector project based on blockchain technology, but it is only a deployment of an Ethereum-based project, and I do not know why Google claims I am violating its terms of use.
Has anyone had a similar problem? The solution is almost in production, and changes at this stage would be costly.
Whatever network monitoring and heuristics Google Cloud applies, we cannot know; that is internal company information.
We also cannot know how you violated Google Cloud's rules, so this question and the matter are strictly between you and Google. We are not going to start guessing what you run on your servers.
If Google Cloud support is unhelpful, switch to a more customer-friendly cloud provider and close your Google Cloud account. Generally, your negotiating power in resolving issues like this with Google is zero, so there is nothing you can do.

AWS - Log aggregation and visualization

We have a couple of applications running on AWS. Currently we redirect all our logs to a single bucket. However, for ease of access for users, I am thinking of installing the ELK stack on an EC2 instance.
I want to check whether there is an alternative where I don't have to maintain this stack myself.
Scaling won't be an issue, as this is only for logs generated by applications running on AWS, so no heavy ingestion or processing is required; these are mostly log4j logs.
You can go for either the managed Elasticsearch service available on AWS or set up your own on an EC2 instance.
It usually comes down to the price involved and the amount of time you have for setting up and maintaining your own stack.
With your own setup you can do far more configuration than the managed service allows, and it can also help reduce costs.
You can find more info on this blog
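As a sketch, the managed option can be provisioned from the AWS CLI (domain name, version, and sizing below are assumptions; pick values that fit your log volume):

```shell
# Create a small managed Elasticsearch domain for application logs.
aws es create-elasticsearch-domain \
  --domain-name app-logs \
  --elasticsearch-version 6.8 \
  --elasticsearch-cluster-config \
      InstanceType=t2.small.elasticsearch,InstanceCount=1 \
  --ebs-options EBSEnabled=true,VolumeType=gp2,VolumeSize=10
```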

Google Cloud Platform design for a stateful application

Use case: our requirement is to run a service continuously every few minutes. The service reads a value from Datastore and hits a public URL using that value (stateful). It has no front end, and nobody would access it publicly. A new value is stored in Datastore from the URL's response. Exactly one server is required.
We need to decide between the options below for our use case.
Compute Engine (IaaS -> we don't want to maintain the infrastructure for this simple stateful application)
Kubernetes Engine (still feels like overkill)
App Engine (PaaS -> App Engine is usually used for mobile apps, gaming, and websites, and it serves a public web URL. Is it the right choice for our use case? If we choose App Engine, is it possible to disable the public App Engine URL? Also, as one instance would run continuously, which is more cost-effective: standard or flexible?)
Cloud Functions (event-driven -> looks unsuitable for our application)
Cloud Scheduler -> we thought we could use Cloud Scheduler + Cloud Functions, but during an outage jobs queue up. In our case, after an outage, only one server/instance/job should be up and running.
Thanks!
"after outage, only one server/instance/job could be up and running"
Is limiting Cloud Function concurrency enough? If so, you can do this:
gcloud functions deploy FUNCTION_NAME --max-instances 1 FLAGS...
https://cloud.google.com/functions/docs/max-instances
I also recommend taking a look at Google Cloud Run, a serverless container platform. It can be limited to a maximum of 1 instance handling a maximum of 1 concurrent request. It would also require Cloud Scheduler making regular HTTP requests to it.
With both services configured for a maximum concurrency of 1, only one server/instance/job will be up and running; but after an outage, queued jobs may start as soon as another finishes. If this is problematic, add a lastRun datetime field to the job's Datastore row and skip the run if it is too recent, or disable Cloud Scheduler retries, as described here:
Google Cloud Tasks HTTP trigger - how to disable retry
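The Cloud Run + Cloud Scheduler setup above can be sketched as follows (service name, image, schedule, and URL are placeholders):

```shell
# Deploy the worker with at most one instance handling at most
# one request at a time, so only a single job ever runs.
gcloud run deploy stateful-worker \
  --image gcr.io/my-project/worker \
  --concurrency 1 \
  --max-instances 1

# Trigger it every five minutes; replace the URI with the URL
# printed by the deploy command.
gcloud scheduler jobs create http worker-trigger \
  --schedule "*/5 * * * *" \
  --uri https://stateful-worker-example-uc.a.run.app/run
```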

How to see Cloud SQL Admin API's logs?

Since 15/10 I have noticed a sudden spike in Cloud SQL Admin API queries, but we haven't made any changes to our application or environment.
Is there any place where I can see which queries are these?
In Stackdriver I have only found Cloud SQL's logs, nothing about the Cloud SQL Admin API.
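For reference, if admin-activity audit logging is enabled for the project, Admin API calls can be listed from the audit log (a sketch; adjust the limit and format as needed):

```shell
# List recent Cloud SQL Admin API calls recorded in the
# admin-activity audit log.
gcloud logging read \
  'logName:"cloudaudit.googleapis.com%2Factivity" AND protoPayload.serviceName="cloudsql.googleapis.com"' \
  --limit 20 \
  --format json
```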
UPDATE:
According to this https://cloud.google.com/sql/docs/mysql/sql-proxy
" While the proxy is running, it makes two calls to the API per hour per connected instance."
And all requests would be registered as '[USER_NAME]'@'cloudsqlproxy~%'
My application runs 5 pods of sqlproxy on top of that, which doesn't justify this spike.
In contact with Google Cloud Support, the agent wasn't able to tell me the origin of these queries either.
When the volume of Cloud SQL Admin API queries increases, connection latency increases up to 5 seconds and my production application becomes unusable.
I'm very disappointed with Google Cloud Support, and I'm thinking of moving to another provider.

How to integrate on premise logs with GCP stackdriver

I am evaluating stackdriver from GCP for logging across multiple micro services.
Some of these services are deployed on premise and some of them are on AWS/GCP.
Our services are either .NET or Node.js apps, and we are invested in winston for Node.js and NLog for .NET.
I was looking at integrating our on-premise Node.js application with Stackdriver Logging. Looking at the documentation at https://cloud.google.com/logging/docs/setup/nodejs, it seems we need to install the agent on any machine other than Google Compute instances. Is this correct?
If we need to install the agent, is there any way I can test logging during development? The development environment is either Windows 10 or Mac.
There's a new option for ingesting logs (and metrics) into Stackdriver, as most of the non-Google environment agents appear to be deprecated: https://cloud.google.com/stackdriver/docs/deprecations/third-party-apps
A Google post on logging on-prem resources with stackdriver and Blue Medora
https://cloud.google.com/solutions/logging-on-premises-resources-with-stackdriver-and-blue-medora
For logs you still need to install an agent on each box to collect them; it's a BindPlane agent, not a Google agent.
For Node.js, you can use the @google-cloud/logging-winston and @google-cloud/logging-bunyan modules from anywhere (on-prem, AWS, GCP, etc.). You will need to provide a projectId and auth credentials manually if not running on GCP. Instructions on how to set these up are available in the linked pages.
When running on GCP we figure out the exact environment (App Engine, Compute Engine, etc.) automatically, and the logs should show up under those resources in the Logging UI. If you use the modules from your development machines, we will report the logs against the 'global' resource by default. You can customize this by passing a specific resource descriptor yourself.
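A minimal setup sketch for an on-prem machine (the service-account path and project ID are placeholders):

```shell
# Install winston plus the Stackdriver transport.
npm install winston @google-cloud/logging-winston

# Outside GCP, point the client at explicit credentials.
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account.json
export GCLOUD_PROJECT=my-project-id
```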
Let us know if you run into any trouble.
I tried setting this up on my local k8s cluster by following this: https://kubernetes.io/docs/tasks/debug-application-cluster/logging-stackdriver/
But I couldn't get it to work; the fluentd-gcp-v2.0-qhqzt pod keeps crashing.
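To see why the pod is crashing, it can help to inspect it directly (a sketch; the container name fluentd-gcp is an assumption about the daemonset):

```shell
# Show events and status for the crashing pod.
kubectl -n kube-system describe pod fluentd-gcp-v2.0-qhqzt

# Read logs from the previous (crashed) container run.
kubectl -n kube-system logs fluentd-gcp-v2.0-qhqzt \
  -c fluentd-gcp --previous
```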
Also, the page mentions that there are multiple issues with Stackdriver Logging if you DON'T use it on Google GKE. See the screenshot.
I think Google is trying to lock you into GKE.