Where are GKE logs stored in GCP? - google-cloud-platform

I can see the logs in Stackdriver Logging, but I want to know where they are actually stored (in some kind of container?). Can I apply rotation to them, since I only need 3 months of data? And where can I check how much storing these logs costs?

Every project has _Default and _Required log buckets, and there is no cost involved.
_Required
holds Admin Activity audit logs, System Event audit logs, and Access Transparency logs, and retains them for 400 days. You aren't charged for the logs stored in _Required, and the retention period of the logs stored here cannot be modified. You cannot delete this bucket.
_Default
holds all other ingested logs in a Google Cloud project except for the logs held in the _Required bucket. Standard Cloud Logging pricing applies to these logs. Log entries held in the _Default bucket are retained for 30 days, unless you apply custom retention rules. You can't delete this bucket, but you can disable the _Default log sink that routes logs to this bucket.
To answer your question about GKE pod logs: they are stored in the _Default bucket. Until now there has been no cost associated with storing them, but note that as of March 31, 2021, storage costs apply to all chargeable logs retained longer than the default retention periods, at $0.01 per GiB per month (or fraction thereof).
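To answer the 3-month retention part: you can apply a custom retention period to the _Default bucket with gcloud. A minimal sketch, assuming a 90-day (roughly 3 months) retention and the bucket's usual global location:

# Set custom retention on the _Default log bucket (90 days, roughly 3 months of logs).
gcloud logging buckets update _Default \
    --location=global \
    --retention-days=90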
Here's the gcloud command to read your pod logs from the log bucket:
gcloud logging read 'resource.type="k8s_pod"'
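If you want to narrow the output, the same command accepts a richer filter plus flags such as --limit, --freshness and --format; a sketch, where my-namespace is just a placeholder:

# Read the 20 most recent pod log entries from the last day for one namespace.
gcloud logging read \
    'resource.type="k8s_pod" AND resource.labels.namespace_name="my-namespace"' \
    --limit=20 \
    --freshness=1d \
    --format=json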

Related

AWS CloudWatch Logs Archive (not S3), how to use it

I am reading the AWS CloudWatch Logs documentation here. It says
Archive log data – You can use CloudWatch Logs to store your log data in highly durable storage. The CloudWatch Logs agent makes it easy to quickly send both rotated and non-rotated log data off of a host and into the log service. You can then access the raw log data when you need it.
And in the pricing page, they have
Store (Archival) $0.03 per GB
And in the Pricing Calculator, they mention
Log Storage/Archival (Standard and Vended Logs)
Log volume archived is estimated to be 15% of Log volume ingested (due to compression). Storage/Archival costs are estimated assuming customer chooses a retention period of one (1) month. Default retention setting is ‘never expire’.
Problem
I am trying to understand the behavior of this archive feature to decide if I need to move my log data to S3, but I cannot find any further details. I have tried exploring every button and link in the CloudWatch Logs pages but cannot find a way to archive the data; I can only delete the logs or edit their retention rules.
So how does it work? The remark in the Pricing Calculator says it is estimated to be 15% of ingested volume; does this mean it always archives 15% of the logs automatically? And why do they have to assume in the calculation that the retention period is set to 1 month? Does the archive feature behave differently otherwise?
The Archive log data feature refers to storing log data in CloudWatch Logs. You do not need to do anything additional to 'archive'; it is the regular storage you can see in the console.
Considering only storage pricing, storing logs in S3 is cheaper. It varies by region, but on average S3 Standard is about $0.025 per GB vs $0.03 per GB for CloudWatch Logs storage, and if you move the objects to other storage classes it becomes even cheaper.
About:
Log volume archived is estimated to be 15% of Log volume ingested (due to compression)
It means that if 100 GB of data is ingested into CloudWatch Logs, it shows up as only 15 GB (15%) in storage, due to the special compressed format in which these logs are stored.
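If you do decide to move older log data to S3, CloudWatch Logs can export a log group to an S3 bucket. A minimal sketch, where my-log-group and my-archive-bucket are placeholders and the bucket policy must already allow CloudWatch Logs to write to it:

# Export one day of log events (timestamps are epoch milliseconds) to S3.
aws logs create-export-task \
    --log-group-name my-log-group \
    --from 1609459200000 \
    --to 1609545600000 \
    --destination my-archive-bucket \
    --destination-prefix exported-logs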

How to make my log sink bucket persistent in GCP

I am storing GCP's Cloud Logging logs in a log bucket, but there is a limit on the storage period.
I would like to store the log bucket's contents permanently in another bucket as a backup. Is there a good way to do this?
You can keep the logs for up to 10 years in a custom log bucket.
If you need more than that, you can export the logs to Cloud Storage and archive them there.
If you need query capability beyond 10 years, you can export the logs to BigQuery.
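A minimal sketch of both options with gcloud, assuming hypothetical names my-archive-bucket (a Cloud Storage bucket), my-log-bucket (a custom log bucket) and an illustrative filter; the sink's writer service account still needs write access to the Cloud Storage bucket:

# Option 1: route matching logs to a Cloud Storage bucket for indefinite archival.
gcloud logging sinks create my-gcs-sink \
    storage.googleapis.com/my-archive-bucket \
    --log-filter='resource.type="gce_instance"'

# Option 2: raise retention on a custom log bucket to the 10-year maximum (3650 days).
gcloud logging buckets update my-log-bucket \
    --location=global \
    --retention-days=3650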

CloudWatch log storage costing vs S3 costing

I have an EC2 instance which is running an Apache application.
I have to store my Apache logs somewhere. For this, I have used two approaches:
CloudWatch Agent to push logs to CloudWatch
A cron job to push the log file to S3
I have used both of these methods and both work fine for me, but I am a little worried about the cost.
Which of these will have the minimum cost?
S3 pricing is basically based on three factors:
The amount of storage.
The amount of data transferred every month.
The number of requests made monthly.
The cost for data transfer between S3 and AWS resources within the same region is zero.
According to CloudWatch pricing for logs:
All log types: there is no Data Transfer IN charge for any of CloudWatch. Data Transfer OUT from CloudWatch Logs is priced.
Pricing details for CloudWatch Logs:
Collect (Data Ingestion): $0.50/GB
Store (Archival): $0.03/GB
Analyze (Logs Insights queries): $0.005/GB of data scanned
Refer to CloudWatch pricing for more details.
Similarly, according to AWS, S3 pricing differs by region.
e.g. for N. Virginia:
S3 Standard Storage
First 50 TB / Month: $0.023 per GB
Next 450 TB / Month: $0.022 per GB
Over 500 TB / Month: $0.021 per GB
Refer to S3 pricing for more details.
Hence, we can conclude that sending logs to S3 will be more cost-effective than sending them to CloudWatch.
They both have similar storage costs, but CloudWatch Logs has an additional ingest charge.
Therefore, it would be lower cost to send straight to Amazon S3.
See: Amazon CloudWatch Pricing – Amazon Web Services (AWS)
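As a rough illustration using the list prices above (not a quote): 100 GB of Apache logs per month costs about 100 x $0.50 = $50 just to ingest into CloudWatch Logs, plus storage, versus roughly 100 x $0.023 = $2.30 per month in S3 Standard. A minimal sketch of the cron-based approach, assuming a hypothetical bucket named my-apache-logs and the default Apache log directory:

# Run from cron, e.g. hourly: 0 * * * * /usr/local/bin/ship-apache-logs.sh
# Sync Apache log files to S3; only new or changed files are uploaded.
aws s3 sync /var/log/apache2/ s3://my-apache-logs/$(hostname)/ \
    --exclude "*" --include "*.log*"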

BigQuery dataset completely deleted/vanished

I've been storing analytics in a BigQuery dataset for over 1.5 years now, and have hooked up Data Studio and other tools to analyse the data. However, I very rarely look at this data. Now I logged in to check it, and it's just completely gone. No trace of the dataset, and no audit log anywhere showing what happened. I've tracked down when it disappeared via the billing history, and it seems that it was mysteriously deleted in November last year.
My question to the community is: Is there any hope that I can find out what happened? I'm thinking audit logs etc. Does BigQuery have any table-level logging? For how long does GCP store these things? I understand the data is probably deleted since it was last seen so long ago, I'm just trying to understand if we were hacked in some way.
I mean, ~1 TB of data can't just disappear without leaving any traces?
Usually, Cloud Audit Logging is used for this:
Cloud Audit Logging maintains two audit logs for each project and organization: Admin Activity and Data Access. Google Cloud Platform services write audit log entries to these logs to help you answer the questions of "who did what, where, and when?" within your Google Cloud Platform projects.
Admin Activity logs contain log entries for API calls or other administrative actions that modify the configuration or metadata of resources. They are always enabled. There is no charge for your Admin Activity audit logs
Data Access audit logs record API calls that create, modify, or read user-provided data. To view the logs, you must have the IAM roles Logging/Private Logs Viewer or Project/Owner. ... BigQuery Data Access logs are enabled by default and cannot be disabled. They do not count against your logs allotment and cannot result in extra logs charges.
The problem for you is the retention for Data Access logs - 30 days (Premium Tier) or 7 days (Basic Tier). Of course, for longer retention, you can export audit log entries and keep them for as long as you wish. So if you did not do this, you have lost these entries, and your only option is to contact Support, I think.
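For future reference, a hedged sketch of checking the Admin Activity audit logs for BigQuery changes with gcloud; the resource.type value below comes from the legacy BigQuery audit log format and is an assumption, and whether any entries still exist depends on the retention discussed above:

# List recent Admin Activity audit log entries that touch BigQuery resources.
gcloud logging read \
    'logName:"cloudaudit.googleapis.com%2Factivity" AND resource.type="bigquery_resource"' \
    --limit=50 \
    --freshness=30d \
    --format=json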

What is the default retention period for a LogGroup in CloudWatch?

Here is the documentation about creating a CloudWatch LogGroup via CloudFormation. It says:
RetentionInDays
The number of days log events are kept in CloudWatch Logs. When a log event expires, CloudWatch Logs automatically deletes it. For valid values, see PutRetentionPolicy in the Amazon CloudWatch Logs API Reference.
Required: No
So if I create a LogGroup without the RetentionInDays parameter, will CloudWatch keep those logs forever? Or what RetentionInDays value do they use by default?
By default, log data is stored in CloudWatch Logs indefinitely. However, you can configure how long to store log data in a log group. Any data older than the current retention setting is automatically deleted. You can change the log retention for each log group at any time.
Source: http://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/SettingLogRetention.html
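If you later want log events to expire, you can attach a retention policy to an existing log group with the AWS CLI; a minimal sketch, where my-log-group and the 90-day value are placeholders:

# Set (or change) the retention for a log group; without a policy, logs never expire.
aws logs put-retention-policy \
    --log-group-name my-log-group \
    --retention-in-days 90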