I created a VM in GCP with a 2-core, 8 GB RAM configuration; later I noticed it had been changed to 4 cores and 16 GB RAM. I need to find out who from my team did this, and when.
I tried going through the activity dashboard, but it's quite difficult to understand from that. Can anyone provide a solution to this?
Changes to a Compute Engine instance's configuration are recorded in the Admin Activity audit logs, along with the IAM identity that made the change.
The following CLI command reads those logs. Replace PROJECT_ID with your project ID.
gcloud logging read "logName : projects/PROJECT_ID/logs/cloudaudit.googleapis.com" --project=PROJECT_ID
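If that broad query is too noisy, a narrower filter can surface just the resize and who made it. A sketch, assuming the change shows up as a setMachineType call (which is how machine-type changes are applied):

gcloud logging read 'resource.type="gce_instance" AND protoPayload.methodName:"setMachineType"' \
  --project=PROJECT_ID --limit=10 \
  --format='value(timestamp, protoPayload.authenticationInfo.principalEmail)'

The principalEmail field is the identity you're after, and the timestamp tells you when the change happened.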
See also: Understanding audit logs, Compute Engine audit logging information, and the gcloud logging read reference.
I wrote some code to automate the training procedure on our company's VM instances.
As you probably know, GCP sometimes can't provide a machine at the moment you request it and fails with an 'out of resource' error.
So I'd like to monitor which of my machines turned on successfully and which did not.
If there is some way to show it in BigQuery, that would be great.
Thanks.
Using the Cloud Monitoring (Stackdriver) functionality is a good way to monitor all your VMs.
Here is a detailed guide to implementing Monitoring on a Compute Engine instance.
Hope you find it useful.
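If the guide moves around, the agent install it describes boiled down to two commands at the time of writing (this is the legacy Stackdriver monitoring agent; the script URL may change):

curl -sSO https://dl.google.com/cloudagents/install-monitoring-agent.sh
sudo bash install-monitoring-agent.sh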
You can use Google Cloud's activity logs too:
Activity logging is enabled by default for all Compute Engine projects.
You can see your project's activity logs through the Logs Viewer in the Google Cloud Console: in the Cloud Console, go to the Logging page. When in the Logs Viewer, select and filter your resource type from the first drop-down list. From the All logs drop-down list, select compute.googleapis.com/activity_log to see Compute Engine activity logs.
Here is the official documentation.
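To get these events into BigQuery, as the question asks, you can route them with a log sink. A sketch, assuming a dataset named vm_activity_logs and a sink named vm-start-sink (both names are placeholders):

bq mk vm_activity_logs
gcloud logging sinks create vm-start-sink \
  bigquery.googleapis.com/projects/PROJECT_ID/datasets/vm_activity_logs \
  --log-filter='resource.type="gce_instance" AND protoPayload.methodName:"instances.start"'

The create command prints a writer identity; grant it edit access on the dataset or no rows will arrive. Failed starts should land as entries carrying an error status, which you can then query alongside the successful ones.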
I'm trying to execute a script on a Google VM through Terraform.
First I tried it via Google startup scripts. But since startup scripts count as metadata and metadata is visible in the Google Console, anybody with read access could see the script, which is not acceptable.
So I tried to fetch the script from a Cloud Storage bucket. But for that I need to attach a service account to the VM so the VM has the rights to access the bucket. Now people who have access to the VM also have access to my script for as long as the service account is attached. In order to "detach" the service account I would have to stop the VM, and if I don't want to keep the attachment permanent, I would have to attach the service account via a script, which requires another stop and start of the VM. This is probably not possible and also really ugly.
I also don't understand how the remote-exec provisioner works on GCP VMs, because I have to specify a user and password to connect to the VM and then execute the script, but the Windows password needs to be set manually via the Google Console, so I can't specify those at this point in time.
So does anybody know how I can execute a script via Terraform without everybody having access to it?
Greetings :) and thanks in advance.
I ended up just running a gcloud script in which I removed the metadata from the VM after terraform apply had finished. In my GitLab pipeline I call the script in the after_script section. Unfortunately the credentials remain visible for approximately 3 minutes.
I followed the steps provided in this documentation.
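For reference, the cleanup amounts to a single command. A sketch; VM_NAME and ZONE are placeholders, and the metadata key depends on the OS (e.g. windows-startup-script-ps1 on Windows, startup-script on Linux):

gcloud compute instances remove-metadata VM_NAME \
  --zone=ZONE --keys=windows-startup-script-ps1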
I was looking into better monitoring of our GKE cluster, so I thought I'd try out the beta Kubernetes Stackdriver monitoring. My cluster version is 1.11.7 (later than the suggested 1.11.2) and I created the cluster with the --enable-stackdriver-kubernetes flag.
In the cluster details, Stackdriver logging and monitoring are listed as 'Enabled v2(beta)'; however, in the Stackdriver resources menu the 'Kubernetes Beta' option simply does not appear, as shown here.
I have also confirmed the fluentd, heapster, and metadata-agent pods are running within the cluster, as suggested by the docs.
Any possible suggestions are much appreciated.
I managed to resolve this issue:
First, the 'Kubernetes Beta' option appeared in Stackdriver without me making any changes to the cluster (slightly annoying).
Second, I gave the cluster's service account the appropriate monitoring and logging roles.
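For anyone hitting the same thing, granting those roles looks roughly like this (a sketch; SA_EMAIL stands for the service account your cluster's nodes run as):

gcloud projects add-iam-policy-binding PROJECT_ID \
  --member='serviceAccount:SA_EMAIL' --role='roles/monitoring.metricWriter'
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member='serviceAccount:SA_EMAIL' --role='roles/logging.logWriter'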
I am evaluating Stackdriver from GCP for logging across multiple microservices.
Some of these services are deployed on-premises and some of them are on AWS/GCP.
Our services are either .NET or Node.js based apps, and we are invested in winston for Node.js and NLog in .NET.
I was looking at integrating our on-premises Node.js application with Stackdriver Logging. Looking at the documentation at https://cloud.google.com/logging/docs/setup/nodejs, it seems that we need to install the agent on any machine other than Google Compute Engine instances. Is this correct?
If we need to install the agent, is there any way I can test the logging during development? The development environment is either Windows 10 or a Mac.
There's a new option for ingesting logs (and metrics) into Stackdriver, as most of the non-Google environment agents look like they are being deprecated: https://cloud.google.com/stackdriver/docs/deprecations/third-party-apps
A Google post on logging on-prem resources with Stackdriver and Blue Medora:
https://cloud.google.com/solutions/logging-on-premises-resources-with-stackdriver-and-blue-medora
For logs you still need to install an agent on each box to collect them; it's a BindPlane agent, not a Google agent.
For Node.js, you can use the @google-cloud/logging-winston and @google-cloud/logging-bunyan modules from anywhere (on-prem, AWS, GCP, etc.). You will need to provide a projectId and auth credentials manually if not running on GCP. Instructions on how to set these up are available in the linked pages.
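As a minimal sketch of that setup when running off-GCP (the key-file path is a placeholder), the client picks up credentials from the standard environment variable:

npm install @google-cloud/logging-winston
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account-key.json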
When running on GCP we figure out the exact environment (App Engine, Compute Engine, etc.) automatically, and the logs should show up under those resources in the Logging UI. If you are going to use the modules from your development machines, we will report the logs against the 'global' resource by default. You can customize this by passing a specific resource descriptor yourself.
Let us know if you run into any trouble.
I tried setting this up on my local k8s cluster by following this: https://kubernetes.io/docs/tasks/debug-application-cluster/logging-stackdriver/
But I couldn't get it to work; the fluentd-gcp-v2.0-qhqzt pod keeps crashing.
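If anyone wants to reproduce this, the standard first steps for a crashing pod apply (the pod name is from my cluster, and I'm assuming the DaemonSet landed in kube-system; adjust the namespace to wherever yours deployed):

kubectl logs fluentd-gcp-v2.0-qhqzt --namespace=kube-system --previous
kubectl describe pod fluentd-gcp-v2.0-qhqzt --namespace=kube-system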
Also, the page mentions that there are multiple issues with Stackdriver logging if you DON'T use it on Google GKE. See the screenshot.
I think Google is trying to lock you into GKE.
How do I view stdout/stderr output logs for Cloud ML? I've tried using gcloud beta logging read and also gcloud beta ml jobs stream-logs, and nothing... all I see are the INFO-level logs generated by the system, i.e. "Tearing down TensorFlow".
Also, in the case where I have an error saying the Docker container exited with a non-zero code, it links me to a GUI page that shows the same stuff as gcloud beta ml jobs stream-logs. Nothing shows me the actual console output my job produced...
Help please?
It may be the case that the Cloud ML service account does not have permission to write to your project's Stackdriver Logs, or that the Logging API is not enabled on your project.
First, check whether the Stackdriver Logging API is enabled for the project by going to the API manager: https://console.cloud.google.com/apis/api/logging.googleapis.com/overview?project=[YOUR-PROJECT-ID]
The Cloud ML service account should be automatically added as an Editor to the project, which allows it to write to the project's logs, but if you have changed your project's permissions it may have lost that role. If so, check that you've manually given the Cloud ML service account Logs Writer permissions.
If you are unsure of the service account used by Cloud ML, this page has instructions on how to find it: https://cloud.google.com/ml/docs/how-tos/using-external-buckets
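A sketch of those checks from the CLI (SERVICE_ACCOUNT and JOB_ID are placeholders for the Cloud ML service account and your training job):

# Enable the Logging API if it isn't already
gcloud services enable logging.googleapis.com --project=PROJECT_ID

# Re-grant log-writing rights to the Cloud ML service account
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member='serviceAccount:SERVICE_ACCOUNT' --role='roles/logging.logWriter'

# Once entries are flowing, read a job's logs directly
gcloud logging read 'resource.type="ml_job" AND resource.labels.job_id="JOB_ID"' --project=PROJECT_ID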