Does Here charge for storage usage of spreadsheets sent to geocode in batch? - geocoding

Here charges a transaction for each geocoded line. But one thing is not clear in the documentation: will I also be charged for the storage usage of the files I send?

There are no additional storage costs in that case; pricing is transaction-based only.
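Since billing is purely per-transaction, a batch-job cost estimate reduces to simple arithmetic. A minimal sketch, where the per-transaction rate is a hypothetical placeholder rather than an official HERE price:

```python
# Cost of a batch geocoding job is transaction-based only: one
# transaction per geocoded row, no storage charge for the uploaded file.
# RATE_PER_TRANSACTION is a placeholder; check your plan's rate card.

RATE_PER_TRANSACTION = 0.0005  # USD, assumed for illustration

def batch_geocode_cost(rows: int, rate: float = RATE_PER_TRANSACTION) -> float:
    return rows * rate

# A 10,000-row spreadsheet:
print(batch_geocode_cost(10_000))  # 5.0
```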

Related

AWS CloudWatch Logs Archive (not S3), how to use it

I am reading the AWS CloudWatch Logs documentation here. It says:
Archive log data – You can use CloudWatch Logs to store your log data in highly durable storage. The CloudWatch Logs agent makes it easy to quickly send both rotated and non-rotated log data off of a host and into the log service. You can then access the raw log data when you need it.
And in the pricing page, they have
Store (Archival) $0.03 per GB
And in the Pricing Calculator, they mention
Log Storage/Archival (Standard and Vended Logs)
Log volume archived is estimated to be 15% of Log volume ingested (due to compression). Storage/Archival costs are estimated assuming the customer chooses a retention period of one (1) month. Default retention setting is ‘never expire’.
Problem
I am trying to understand the behavior of this archive feature to decide whether I need to move my log data to S3, but I cannot find any further details. I have tried exploring every button and link in the CloudWatch Logs pages, but I cannot find a way to archive the data; I can only delete log groups or edit their retention rules.
So how does it work? The remark in the Pricing Calculator says archived volume is estimated to be 15% of ingested volume. Does this mean it always archives 15% of the logs automatically? And why do they have to assume in the calculation that the retention period is set to 1 month? Does the archive feature behave differently otherwise?
The "Archive log data" feature simply refers to storing log data in CloudWatch Logs. You do not need to do anything additional to "archive"; it is the regular storage you can see in the console.
Considering only storage pricing, storing logs in S3 is cheaper. It varies by region, but on average S3 Standard costs about $0.025 per GB versus $0.03 per GB for CloudWatch Logs storage. And if you move the objects to other storage classes, it becomes cheaper still.
About:
Log volume archived is estimated to be 15% of Log volume ingested (due
to compression)
It means that if 100 GB of data is ingested into CloudWatch Logs, it shows up as only 15 GB (15%) in storage, due to the compressed format in which these logs are stored.
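As a sketch of that estimate (the CloudWatch rate is the one quoted above; the S3 figure is an approximate average, so treat both as assumptions):

```python
# CloudWatch Logs archival estimate: stored volume ~= 15% of ingested
# volume due to compression, billed at $0.03 per GB-month.

COMPRESSION_RATIO = 0.15
CW_LOGS_PER_GB_MONTH = 0.03       # CloudWatch Logs storage rate
S3_STANDARD_PER_GB_MONTH = 0.025  # rough average, region-dependent

def stored_gb(ingested_gb: float) -> float:
    return ingested_gb * COMPRESSION_RATIO

def monthly_cost(ingested_gb: float, rate: float) -> float:
    return stored_gb(ingested_gb) * rate

# 100 GB ingested -> ~15 GB stored -> ~$0.45/month in CloudWatch Logs
print(round(monthly_cost(100, CW_LOGS_PER_GB_MONTH), 2))  # 0.45
```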

Is accessing xml files in Google Cloud Storage (Bucket) an operation?

I'd like to upload some XML files to Google Cloud Storage (a bucket) and make them publicly available with an HTTPS load balancer:
https://cloud.google.com/load-balancing/docs/https/ext-load-balancer-backend-buckets
The total size of these XMLs is about a GB, but I want to access them millions of times a day, and I'm not sure about the cost of this. I have to pay less than a dollar for the storage and nothing for the network usage, as ingress is free, but what about the cost of operations? Is accessing my XML files through a URL, like example.com/bucket/1.xml, a Google Cloud (Class A or Class B) operation? Do I have to pay the Class A or Class B fee for several million calls? Any idea?
https://cloud.google.com/storage/pricing
Getting an object is a Class B operation, per the operations table in the GCS pricing doc.
Note the very first row, storage.*.get, under Class B operations.
These operations just access an object. When doing this, you are not listing a bucket, listing its objects, or creating new objects; you are getting the object directly. This is why it is not a Class A operation.
Regarding the pricing itself, no worries: the first 50,000 Class B operations per month are free. After that, you are charged $0.004 per 10,000 Class B operations, as shown here.
That means each 10,000,000 Class B operations (after the first 50,000) will cost you just $4.
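That math can be sketched as (using the free tier and rate quoted above):

```python
# GCS Class B operation charges: first 50,000 per month free, then
# $0.004 per 10,000 operations.

FREE_OPS = 50_000
RATE_PER_10K = 0.004

def class_b_cost(ops: int) -> float:
    billable = max(0, ops - FREE_OPS)
    return billable / 10_000 * RATE_PER_10K

print(round(class_b_cost(10_050_000), 2))  # 4.0  (10M billable ops)
print(round(class_b_cost(40_000), 2))      # 0.0  (within free tier)
```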
You can also find the detailed example in the docs, where millions of Class A and B operations are performed; see how the pricing is calculated there.
Now, just to clarify: you may also be charged for network usage.
Ingress refers to data going into GCS; in other words, uploading files to your bucket.
NOTE: You would be charged for the Class A operation of uploading files (storage.*.insert), but not for the network ingress.
When an object is fetched, you might be charged for network egress, which is the content that goes from the bucket to a user. The following scenarios are shown in this doc section:
Data moving within the same location (from US-EAST1 to US-EAST1, or from EU to EU) is free.
Data moving between different locations on the same continent (from US-EAST1 to NORTHAMERICA-NORTHEAST1) costs $0.01 per GB.
Please find these and more examples of how and when you would get charged for Network Egress in the link above.
Some egress charges might also apply if the content is retrieved worldwide; these depend on how much data is retrieved during the month.
For instance:
If you send 1 TB or less during the month only to China, you are charged $0.23 per GB in network egress.
If you send between 1 and 10 TB only to Asia (excluding China), you are charged $0.11 per GB in network egress.
Find in this section more information about the Network usage.
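The egress scenarios above reduce to a per-GB lookup. The table below is a simplified assumption covering only the examples listed here, so consult the GCS network pricing page for the full, current matrix:

```python
# Simplified egress rate table (USD per GB) for the scenarios above.
EGRESS_RATES = {
    "same-location": 0.00,    # e.g. US-EAST1 -> US-EAST1
    "same-continent": 0.01,   # e.g. US-EAST1 -> NORTHAMERICA-NORTHEAST1
    "china-under-1tb": 0.23,  # worldwide tier example
    "asia-1-10tb": 0.11,      # worldwide tier example, excluding China
}

def egress_cost(gb: float, route: str) -> float:
    return gb * EGRESS_RATES[route]

print(egress_cost(500, "same-continent"))  # 5.0
print(egress_cost(500, "same-location"))   # 0.0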
Sorry for the long answer. I know some of the egress scenarios won't apply; I just wanted to make sure you were aware of all the possible charges when talking about Google Cloud Storage. Hope this is helpful! :)

Google Cloud Storage pricing for Archive Objects

I'm trying to use this new GCP feature: https://cloud.google.com/blog/products/storage-data-transfer/archive-storage-class-for-coldest-data-now-available
But I'm not able to find the minimum retention period or the cost of retrieval.
Can anyone help me understand the pricing for this service?
Thanks in advance
Check this link: https://cloud.google.com/storage/pricing. In this document, Google describes pricing for each storage class (Standard Storage, Nearline Storage, Coldline Storage, Archive Storage).
The key Archive figures are reproduced below for reference.
As of now the costs for Archive are:
Class A operations (per 10,000 operations): $0.50
Class B operations (per 10,000 operations): $0.50
Free operations: free
Data retrieval per GB: $0.05
The minimum storage duration is 365 days.
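Putting those figures together as a rough cost model. The storage rate itself is not listed above, so the value below is a region-dependent placeholder, not an official price:

```python
# Archive-class cost sketch: the 365-day minimum storage duration means
# an early delete is still billed for the full year; retrieval is $0.05/GB.

RETRIEVAL_PER_GB = 0.05
MIN_STORAGE_DAYS = 365
STORAGE_PER_GB_MONTH = 0.0012  # placeholder; check your region's rate

def archive_cost(gb: float, days_stored: int, retrieved_gb: float) -> float:
    billed_days = max(days_stored, MIN_STORAGE_DAYS)
    storage = gb * STORAGE_PER_GB_MONTH * billed_days / 30
    return storage + retrieved_gb * RETRIEVAL_PER_GB

# 100 GB kept only 90 days, then fully retrieved: still billed 365 days
print(round(archive_cost(100, 90, 100), 2))
```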

GCP bill calls to storage buckets from the same zone?

Does GCP bill requests to storage objects made from within the same zone? I didn't find this in the documentation.
For example: if I use GCS as key/value storage, internally within one zone, will I pay only for storage, or for requests too?
Per the Google Cloud Storage pricing, you are charged both for the data you store and for the operations (requests) you perform. This means you will be charged a monthly amount based on the storage class and bucket location, plus any operations performed against Google Cloud Storage. Google Cloud Platform has a Pricing Calculator to help you estimate the costs you would incur using its products; I recommend you check it out.
Google Cloud Storage (GCS) charges a per-operation rate regardless of whether you're making calls from within the same region. Per-operation costs vary by storage class and call type, ranging from fractions of a cent to tens of cents per 10,000 operations.
GCS separately charges for data egress; egress is mostly free within a region, but the operations themselves still have their cost.
If your goal is to frequently access/mutate a large key/value table with relatively small values, Cloud Bigtable is likely to be a cheaper and faster option. If your goal is to store comparatively large objects, GCS is likely to be a better choice. If you have a good idea of what sort of operations you're likely going to use, I highly suggest using the pricing calculator to estimate your costs.
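To put a number on the request side of the question, here is an illustration that operations are billed even when caller and bucket share a zone. The per-10,000 rate below is an assumed ballpark, not an official figure:

```python
# Request charges apply even within a single region/zone; only the
# egress may be free. RATE_PER_10K_OPS is an illustrative assumption
# (actual rates depend on storage class and operation class).

RATE_PER_10K_OPS = 0.05

def monthly_request_cost(requests_per_day: int, days: int = 30) -> float:
    return requests_per_day * days / 10_000 * RATE_PER_10K_OPS

# 1M key/value reads per day:
print(monthly_request_cost(1_000_000))  # 150.0
```

A sustained read-heavy key/value workload racks up request charges quickly, which is why Bigtable may come out cheaper for that access pattern.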

AWS S3 reduced redundancy costs the same as standard?

I typed the same parameters into the S3 calculator for RRS and for Standard, and the results seem the same. I put in 10M PUT requests and 10M GET requests for both, and they both came out to $5.38 in the us-east region.
Am I missing something here?
Thank you in advance.
Requests cost exactly the same for both storage types.
The pricing difference comes down to cost per GB. For example, storing 100 GB in S3 Standard storage will cost you $3, while storing the same amount of data in Reduced Redundancy will cost you $2.40.
Pricing for standard can be found here: https://aws.amazon.com/s3/pricing/.
While pricing for Reduced can be found here: https://aws.amazon.com/s3/reduced-redundancy/.
Update: As the comment below points out, the price for Standard storage is now $0.023 per GB, so 100 GB would cost $2.30, while Reduced Redundancy stayed at the same cost.
There is now another option called Standard - Infrequent Access, which has the same benefits as S3 Standard at a cost of $0.0125 per GB, so storing 100 GB on this tier would cost $1.25. But there are some caveats to watch for:
Minimum billable object size of 128 KB.
Minimum storage duration of 30 days (if you delete an object earlier, you are still billed for 30 days).
A per-GB retrieval fee ($0.01 per GB), similar to the one in AWS Glacier.
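Those trade-offs can be compared directly. The figures below are the per-GB rates quoted in this answer; treat them as a snapshot, since AWS prices change:

```python
# Monthly S3 cost per tier, using the per-GB figures from this answer.
RATES = {                     # USD per GB-month
    "standard": 0.023,
    "reduced_redundancy": 0.024,
    "standard_ia": 0.0125,
}
IA_RETRIEVAL_PER_GB = 0.01    # Standard-IA retrieval fee

def monthly_cost(tier: str, stored_gb: float, retrieved_gb: float = 0.0) -> float:
    cost = stored_gb * RATES[tier]
    if tier == "standard_ia":
        cost += retrieved_gb * IA_RETRIEVAL_PER_GB
    return cost

# 100 GB stored; IA wins unless you retrieve heavily each month:
print(round(monthly_cost("standard", 100), 2))         # 2.3
print(round(monthly_cost("standard_ia", 100, 50), 2))  # 1.75
```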
Refer to the document below:
https://aws.amazon.com/s3/pricing/
In any case, pricing varies by region, number of PUT/GET requests, and data transfer.