I'm building a chat app with a Cloud Translation API feature. For each client I create a new API key so I can identify each client's usage. The problem is the following:
I want to see the consumption of all API keys inside a project, something like the Operations Logging:
but revealing the timestamp and the API key name used, so that I can track each client's usage of the service and determine how much I am going to bill them.
Update
Doing some additional research, I came across this article, which gives a walkthrough on gaining visibility into Service Account Keys (similar, but not what I needed). In this guide they create a log sink to push logs into BigQuery.
The problem now is that the filter used to extract the logs is the following:
logName:"projects/<PROJECT>/logs/cloudaudit.googleapis.com"
protoPayload.authenticationInfo.serviceAccountKeyName:"*"
The second line extracts the logs that belong to a service account key name. But, as stated at the beginning of the question, I'm looking for the API key logs, not the service account key logs.
You can use Cloud Audit Logs [1]. Cloud Audit Logs provides the following audit logs for each Cloud project, folder, and organization:
- Admin Activity audit logs
- Data Access audit logs
- System Event audit logs
- Policy Denied audit logs
Google Cloud services write audit log entries to these logs to help you answer the questions of "who did what, where, and when?" within your Google Cloud resources.
For this scenario, Data Access audit logs [2] could be helpful: they contain API calls that read the configuration or metadata of resources, as well as user-driven API calls that create, modify, or read user-provided resource data. Data Access audit logs do not record the data-access operations on resources that are publicly shared (available to All Users or All Authenticated Users) or that can be accessed without logging into Google Cloud.
As mentioned in the previous comment, these logs are disabled by default because they can be quite large; they must be explicitly enabled to be written.
However, the simplest way to view your API metrics is to use the Google Cloud Console's API Dashboard [3]. You can see an overview of all your API usage, or you can drill down to your usage of a specific API.
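If you do enable Data Access audit logs, one quick way to see what they record for the Translation API is to list the entries with the Cloud Logging Python client and inspect the authentication information on each one. This is only a minimal sketch: the project ID is a placeholder, and which protoPayload fields a given service writes (and whether they identify the API key at all) is something you should verify against your own entries.

from google.cloud import logging

client = logging.Client(project="my-project")  # assumed project ID

# Data Access audit log entries written by the Translation API.
log_filter = (
    'logName="projects/my-project/logs/cloudaudit.googleapis.com%2Fdata_access" '
    'AND protoPayload.serviceName="translate.googleapis.com"'
)

for entry in client.list_entries(filter_=log_filter, page_size=50):
    payload = entry.payload or {}  # protoPayload as a dict
    print(entry.timestamp, payload.get("authenticationInfo"))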
Related
I can see the following in one of the GCE instance startup scripts:
userdel -r userid
Due to this, the user is not able to SSH through the browser.
My question is: how do we find out who added this startup script to the VM, and when?
Can we use some logs?
Yes, you can check the Activity log for this.
You can use the URL below, replacing your project ID:
https://console.cloud.google.com/home/activity?project=
If you want to know who added a startup script to the VM, you can check the Admin Activity audit logs and the System Event audit logs.
Admin Activity audit logs contain log entries for API calls or other actions that modify the configuration or metadata of resources.
And, System Event audit logs contain log entries for Google Cloud actions that modify the configuration of resources.
Google Cloud services write audit logs that record administrative activities and accesses within your Google Cloud resources. Audit logs help you answer "who did what, where, and when?" within your Google Cloud resources with the same level of transparency as in on-premises environments. Cloud Audit Logs provides the following audit logs for each Cloud project, folder, and organization:
Admin Activity audit logs
Data Access audit logs
System Event audit logs
Policy Denied audit logs
The Data Access audit logs can be very useful too, but they are disabled by default. Data Access audit logs contain API calls that read the configuration or metadata of resources, as well as user-driven API calls that create, modify, or read user-provided resource data. If you want to enable them, please follow this link.
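If you prefer to enable them programmatically rather than through the console, one way (sketched below under stated assumptions, not the only one) is to add an auditConfigs section to the project's IAM policy via the Resource Manager API. The project ID, the allServices scope, and the log types are placeholders; setIamPolicy writes back the policy, so always start from the policy you just fetched.

from googleapiclient import discovery

crm = discovery.build("cloudresourcemanager", "v1")
project = "my-project"  # assumed project ID

# Fetch the current policy first so the etag (and bindings) are preserved.
policy = crm.projects().getIamPolicy(resource=project, body={}).execute()

# Turn on Data Access audit logs for all services in this project.
policy["auditConfigs"] = [{
    "service": "allServices",
    "auditLogConfigs": [
        {"logType": "ADMIN_READ"},
        {"logType": "DATA_READ"},
        {"logType": "DATA_WRITE"},
    ],
}]

crm.projects().setIamPolicy(
    resource=project,
    body={"policy": policy, "updateMask": "auditConfigs"},
).execute()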
To view the audit logs:
- In the Cloud console, go to the Logging > Logs Explorer page.
- Select an existing Cloud project, folder, or organization.
- In the Query builder pane, do the following:
  - In Resource type, select the Google Cloud resource whose audit logs you want to see.
  - In Log name, select the audit log type that you want to see:
    - For Admin Activity audit logs, select activity.
    - For Data Access audit logs, select data_access.
    - For System Event audit logs, select system_event.
    - For Policy Denied audit logs, select policy.
If you don't see these options, then there aren't any audit logs of that type available in the Cloud project, folder, or organization.
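For the specific question of who added the startup script: startup scripts live in instance metadata, so the edit should show up in the Admin Activity audit log as a setMetadata-style call. Below is a rough sketch of pulling those entries with the Python client; the project ID is a placeholder and the exact methodName value is an assumption you should check against your own log entries.

from google.cloud import logging

client = logging.Client(project="my-project")  # assumed project ID

# Admin Activity entries for metadata changes on Compute Engine instances.
log_filter = (
    'logName="projects/my-project/logs/cloudaudit.googleapis.com%2Factivity" '
    'AND resource.type="gce_instance" '
    'AND protoPayload.methodName:"setMetadata"'
)

for entry in client.list_entries(filter_=log_filter):
    payload = entry.payload or {}
    who = payload.get("authenticationInfo", {}).get("principalEmail")
    print(entry.timestamp, who, payload.get("methodName"))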
If you want to know more about Audit logs in GCP, please follow this link.
As the title says, there seem to be two ways to collect certain GCP audit logs.
Using the Google Workspace Admin SDK -- specifying the "gcp" application name in the call.
Using the Google Cloud Logging API
What is the difference in the logs collected? I'm still researching, but hopefully someone on the Google team sees this tag and knows exactly.
Do all of these logs get collected via the Workspace Admin SDK?
Admin Activity audit logs
Data Access audit logs
System Event audit logs
Policy Denied audit logs
The Admin Reports API only reports "OS login" events.
https://developers.google.com/admin-sdk/reports/v1/appendix/activity/gcp
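To see for yourself what the Admin SDK side returns, you can pull the "gcp" application's activity events with the Reports API and compare them with what Cloud Logging shows. A rough sketch, assuming a Workspace admin with domain-wide delegation; the key file name and admin address used here are placeholders.

from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/admin.reports.audit.readonly"]
creds = service_account.Credentials.from_service_account_file(
    "reports-sa.json", scopes=SCOPES, subject="admin@example.com"  # assumed key file and admin user
)

reports = build("admin", "reports_v1", credentials=creds)
resp = reports.activities().list(userKey="all", applicationName="gcp").execute()

for activity in resp.get("items", []):
    events = [e.get("name") for e in activity.get("events", [])]
    print(activity["id"]["time"], events)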
Background
I have a Google Cloud project running N applications. Each application has an exclusive IAM service account (N service accounts in total) with minimal permissions.
Scenario
Let's imagine that one of the service accounts is leaked. An attacker will try to take advantage of these credentials. Because they don't know exactly which permissions this account has, they will try to make calls and see whether they work.
Question
I want to "listen" to audit logs. Once I will see the log from kind "access denied", I will know that something is wrong with this service account.
Is this possible to write all those access denied incidents to Google Cloud Stackdriver?
How you recommend implementing it?
Thank you
Is it possible to write all those access-denied incidents to Google Cloud Stackdriver?
Most, but not all, Google Cloud services support this. Note, however, that successful access will also be logged, not just denials.
You will need to enable Data Access Audit Logs.
This could generate a massive amount of logging information.
Access logs for Org and Folder are only available via API and not the console.
Review pricing before enabling data access audit logs.
How do you recommend implementing it?
This question is not really suitable for Stack Overflow as it seeks recommendations and opinions. In general, you will export your logs to Google Cloud Pub/Sub to be processed by a service such as Cloud Functions, Cloud Run, etc. There are also commercial services such as DataDog designed for this type of monitoring. Exporting logs to Google BigQuery is another popular option.
Read this article published on DataDog's website on Data Access audit logging and their services. I am not recommending their service, just providing a link to more information. Their article is very good.
Best practices for monitoring GCP audit logs
To understand the fields that you need to process read this document:
AuthorizationInfo
This link will also help:
Understanding audit logs
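As a starting point for the Pub/Sub route, here is a rough sketch of creating the log sink with the Cloud Logging Python client. The project, topic, service account, and sink name are placeholders, and after creation you still need to grant the sink's writer identity the Pub/Sub Publisher role on the topic.

from google.cloud import logging

client = logging.Client(project="my-project")  # assumed project ID

# Route only denied calls made by one service account to a Pub/Sub topic.
sink = client.sink(
    "denied-access-sink",
    filter_='protoPayload.authorizationInfo.granted="false" '
            'AND protoPayload.authenticationInfo.principalEmail="app-1@my-project.iam.gserviceaccount.com"',
    destination="pubsub.googleapis.com/projects/my-project/topics/denied-access",
)
sink.create(unique_writer_identity=True)
print("Grant Pub/Sub Publisher on the topic to:", sink.writer_identity)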
Here is one way to go about it:
Create a new Cloud Pub/Sub topic
Create a new log routing sink whose destination is the Cloud Pub/Sub topic created in the previous step (set a filter to be something like
protoPayload.authenticationInfo.principalEmail="<service-account-name>@<project-name>.iam.gserviceaccount.com" AND protoPayload.authorizationInfo.granted="false" to only get messages about unsuccessful auth actions for your service account)
Create a Cloud Function that's triggered when a new message is published to the Pub/Sub topic; this function can do whatever you desire, like send a message to an email address, page you, or anything else you can come up with in the code (a minimal sketch follows below).
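A minimal sketch of that function, assuming a Pub/Sub trigger and that the sink delivers each LogEntry as JSON in the message data; the entry point name is a placeholder and the alerting here is just a print.

import base64
import json

def on_denied_access(event, context):
    """Pub/Sub-triggered entry point; event["data"] carries the exported LogEntry."""
    entry = json.loads(base64.b64decode(event["data"]).decode("utf-8"))
    proto = entry.get("protoPayload", {})
    principal = proto.get("authenticationInfo", {}).get("principalEmail", "unknown")
    method = proto.get("methodName", "unknown")
    # Replace this with email, paging, or any other alerting you prefer.
    print(f"Denied call by {principal}: {method} at {entry.get('timestamp')}")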
How can I fetch Cloud Storage bucket last-access details? As of now, I see that we can only find the last modified date for buckets and objects. Is there any way to fetch last-access details for buckets and objects? Do we need to enable logging for each object to fetch it, or are there other options available?
There are several types of logs you can enable to get this information.
Cloud Audit Logs is the recommended method for generating logs that track API operations performed in Cloud Storage:
Cloud Audit Logs tracks access on a continuous basis.
Cloud Audit Logs produces logs that are easier to work with.
Cloud Audit Logs can monitor many of your Google Cloud services, not just Cloud Storage.
Audit logs are written in "near" real time and available like any other logs in GCP. You can view a summary of the audit logs for your project in the Activity Stream in the Google Cloud Console. A more detailed version of the logs can be found in the Logs Viewer.
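Once Data Access audit logs are enabled for Cloud Storage, a quick way to see object reads is to filter on the bucket resource type and the read method. A sketch with the Python client follows; the project ID and the exact methodName value are assumptions to check against your own entries.

from google.cloud import logging

client = logging.Client(project="my-project")  # assumed project ID

# Object reads recorded in the Cloud Storage Data Access audit log.
log_filter = (
    'logName="projects/my-project/logs/cloudaudit.googleapis.com%2Fdata_access" '
    'AND resource.type="gcs_bucket" '
    'AND protoPayload.methodName="storage.objects.get"'
)

for entry in client.list_entries(filter_=log_filter, page_size=50):
    payload = entry.payload or {}
    print(entry.timestamp, payload.get("resourceName"),
          payload.get("authenticationInfo", {}).get("principalEmail"))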
In some cases, you may want to use Access Logs instead. You most likely want to use access logs if:
You want to track access to public objects, such as assets in a bucket that you've configured to be a static website.
You want to track access to objects when the access is exclusively granted because of the Access Control Lists (ACLs) set on the objects.
You want to track changes made by the Object Lifecycle Management feature.
You intend to use authenticated browser downloads to access objects in the bucket.
You want your logs to include latency information, or the request and response size of individual HTTP requests.
As opposed to audit logs, access logs aren't sent "real-time" to Stackdriver Logging but are offered in the form of CSV files, generated hourly when there is activity to report in the monitored bucket, that you can download and view.
The access logs can provide an overwhelming amount of information. You'll find here a table to help you identify all the information provided in these logs.
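If access logs are what you need, they have to be switched on per bucket and pointed at a second bucket that will receive the hourly CSV files. Below is a sketch with the Python client, assuming the bucket names used here; the same can be done with gsutil or the console, and the log bucket additionally needs write access for Cloud Storage's log delivery account (see the docs).

from google.cloud import storage

client = storage.Client()
bucket = client.bucket("my-data-bucket")  # assumed: bucket to monitor

# Write usage/access CSVs into a separate bucket, under the given prefix.
bucket.enable_logging("my-log-bucket", object_prefix="access-logs")
bucket.patch()

print(bucket.get_logging())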
Cloud Storage buckets are meant to serve high volumes of read requests through a variety of means. As such, reads don't also write any additional data - that would not scale well. If you want to record when an object gets read, you would need to have the client code reading the object to also write the current time to some persistent storage. Or, you could force all reads through some API endpoint that performs the update manually. In either case, you are writing code and using additional resources to store this data.
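If you go the client-side route described above, the sketch below is one way to do it with the Python client: read the object, then stamp the read time onto the object's custom metadata (any other persistent store works just as well). Bucket and object names are placeholders, and note that this turns every read into a read plus a write.

from datetime import datetime, timezone
from google.cloud import storage

client = storage.Client()
blob = client.bucket("example-bucket").blob("path/to/object")  # assumed names

data = blob.download_as_bytes()  # the actual read

# Record when we read it; metadata updates are an extra write per access.
blob.metadata = {"last-accessed": datetime.now(timezone.utc).isoformat()}
blob.patch()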
My library is a CLI utility, and people get it by running pip install [libname]. I would like to automatically record exceptions that occur when people use it and store these logs in the cloud. I have found services that should do just that: AWS CloudWatch, GCP Stackdriver.
However, while looking at their API it appears that I would have to ship my private key in order for the library to authenticate to my account. This doesn't sound right and I am warned by the cloud providers not to do this.
Example from GCP fails, requires credentials:
from google.cloud import logging
client = logging.Client()
logger = client.logger('log_name')
logger.log_text('A simple entry') # API call
Since the Python library ships as source, I understand that any kind of authentication I include would carry the risk of people sending fake logs, but this is OK with me, as I would just limit the spending on my account for the (unexpected) case that somebody does just that. Of course, the credentials that ship with the library should be restricted to logging only.
Any example of how to enable logging to a cloud service from user machines?
For Azure Application Insights' "Instrumentation Key" there is a very good article about that subject here: https://devblogs.microsoft.com/premier-developer/alternative-way-to-protect-your-application-insights-instrumentation-key-in-javascript/
While I'm not familiar with the offerings of AWS or GCP, I would assume similar points are valid.
Generally speaking: while the instrumentation key is a method of authentication, it is not considered a very secret key in most scenarios. The worst damage somebody can do is to send unwanted logs. They cannot read any data or overwrite anything with that key. And you already stated above that you are not really worried about the issue of unwanted logs in your case.
So, as long as you are using an App Insights instance only for one specific application / purpose, I would say you are fine. You can still further aggregate that data in the background with data from different sources.
To add a concrete example to this: this little tool from Microsoft (the specific use case does not matter here) also collects telemetry and sends it to Azure Application Insights - if the user does not opt out. I won't point to the exact line of code, but their instrumentation key is checked in to that public GitHub repo for anybody to find.
Alternatively, the most secure way would be to send data from the browser to your custom API on your server, then forward it to the Application Insights resource with the correct instrumentation key (see diagram below).
(Source: the link above)
App Insights SDK for python is here btw: https://github.com/microsoft/ApplicationInsights-Python
To write logs to Stackdriver requires credentials. Anonymous connections to Stackdriver are NOT supported.
Under no circumstances give non-privileged users logging read permissions. Stackdriver records sensitive information in Stackdriver Logs.
Google Cloud IAM provides the role roles/logging.logWriter. This role gives users just enough permissions to write logs. This role does not grant read permissions.
The role roles/logging.logWriter is fairly safe. A user can write logs, but cannot read, overwrite or delete logs. I say fairly safe as there is private information stored in the service account. I would create a separate project only for Stackdriver logging with no other services.
The second issue with providing external users access is cost. Stackdriver logs are $0.50 per GiB. You would not want someone uploading a ton of logfile entries. Make sure that you monitor external usage. Create an alert to monitor costs.
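Putting it together, this is roughly how the library would write with a key restricted to roles/logging.logWriter in a dedicated project; the key file name, project ID, and log name are placeholders, and shipping the key still carries the caveats above.

from google.cloud import logging
from google.oauth2 import service_account

# Key for a service account that only has roles/logging.logWriter.
creds = service_account.Credentials.from_service_account_file("logwriter-only.json")

client = logging.Client(project="my-logging-only-project", credentials=creds)
logger = client.logger("cli_exceptions")
logger.log_text("Unhandled exception: ...")  # write-only: this key cannot read logs back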
Creating and managing service accounts
Chargeable Stackdriver Products
Alert on Stackdriver usage
Stackdriver Permissions and Roles