Where is input temporarily stored during "bq load .. localfile.csv"? - google-cloud-platform

The Google Cloud SDK command "bq load" can take a local file as input.
From the command's output, it looks like that file is first uploaded to Google Cloud Storage somewhere before the BigQuery load job is scheduled. Given that the BigQuery REST API endpoint for scheduling a load job also takes only "gs://" URLs, and that the load job needs the data to be reachable, I am fairly sure such an upload to Cloud Storage is taking place (though I can't find any documentation that explicitly describes "bq load" with local files).
My question then is: can someone tell me where the local file is temporarily uploaded to? Is it one of the project's Cloud Storage buckets, or somewhere else? Is it guaranteed to be deleted after the load job completes?
I have a requirement for data to be kept only in a specific geographical region, thus the location of the (presumed) temporary storage is significant.
I could upload the data explicitly to Cloud Storage, then run "bq load" with a reference to the uploaded object, but then I would need to arrange deletion of the data afterwards, which is a minor inconvenience. A dedicated bucket with a lifecycle rule could at least delete the data after one day, but the "bq load .. localfile" approach is cleaner.
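For reference, the explicit workaround would look roughly like this (the bucket name, region, dataset and table below are placeholders of my own, not anything required by the tools):
# Create a regional bucket in the required location with a 1-day lifecycle rule as a safety net
gsutil mb -l europe-west1 gs://my-bq-staging-bucket
echo '{"rule": [{"action": {"type": "Delete"}, "condition": {"age": 1}}]}' > lifecycle.json
gsutil lifecycle set lifecycle.json gs://my-bq-staging-bucket
# Stage the file, load it, then delete it right away
gsutil cp localfile.csv gs://my-bq-staging-bucket/localfile.csv
bq load --location=europe-west1 --autodetect mydataset.mytable gs://my-bq-staging-bucket/localfile.csv
gsutil rm gs://my-bq-staging-bucket/localfile.csv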

If you run bq --help, you can see that one of the global bq flags is --location. It is defined as follows:
--location: “Default geographic location to use when creating datasets or determining where jobs should run (Ignored when not applicable.)”
If you run:
bq load --location=eu {your-table} {your-source}
for a dataset located in the EU, the job should succeed and all related jobs should run in the EU.
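If you want to confirm afterwards where a load actually ran, you can inspect the job itself; the job ID below is a placeholder and the exact fields shown may vary by bq version:
# List recent jobs, then show the details (including the location) of a specific one
bq ls -j -n 5
bq show -j --location=eu bqjob_r1234567890abcdef_0000016789abcdef_1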

Related

Google Cloud Transfer Job is creating one extra folder

I have created a Transfer Job to import some of my website's static resources into Google Cloud Storage.
The job was supposed to import the data into a bucket named www.pretty-story.com.
It is importing from a TSV file located here.
For instance the first url is :
https://www.pretty-story.com/wp-includes/js/jquery/jquery.min.js
so I would have expected the job to create the folder structure starting with wp-includes.
But instead the job created this folder structure: www.pretty-story.com/wp-includes/js/jquery.
Therefore the complete path (including my bucket name) is:
www.pretty-story.com/www.pretty-story.com/wp-includes/js/jquery.
How can I tell the Transfer Job to use the bucket as the first folder, instead of creating a subfolder with the same name?
According to https://cloud.google.com/storage-transfer/docs/create-url-list:
When an object located at http(s)://[HOSTNAME]:[PORT]/[URL_PATH] is transferred to Cloud Storage, the name of the object in Cloud Storage is [HOSTNAME]/[URL_PATH].
You don't have an option to skip the [HOSTNAME]/ part of this, so what you are asking is not possible.
If the amount of data involved is reasonable, I recommend downloading it to a workstation and using gsutil to copy it into a bucket without the hostname prefix.
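For example, something along these lines should work (the wget flags and local layout are just one way to do it):
# Download the files, dropping the hostname from the local directory structure
wget -x -nH https://www.pretty-story.com/wp-includes/js/jquery/jquery.min.js
# Copy the local tree into the bucket, so object names start at wp-includes/... instead of www.pretty-story.com/...
gsutil -m cp -r ./wp-includes gs://www.pretty-story.com/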

Actual Count Of Objects Incorrect After Deletion Of Bucket

Two days ago I deleted a bucket that contained a backup of all log files for a site. It held about 30,000 tiny files totalling about 275 MB.
I noticed in the site's Monitoring panel that the object count is exactly the same. I decided to wait a couple of days and it still has not changed.
The bucket used the Standard storage class and a multi-region location, had no lifecycle rules, and used uniform bucket-level access.
I can verify that the bucket is gone in the UI as well as with the ls command in Cloud Shell.
[Chart: Cloud Storage Object Count]
The count of objects in the Monitoring panel reconciled about two days later.
It looks like the change ended up being applied retroactively, meaning the historical charts were rewritten to reflect the deleted objects.

Copying objects from one bucket directory folder to another bucket folder using transfer

I want to use Google Storage Transfer to copy all folders/files in a specific directory in Bucket-1 to the root directory of Bucket-2.
I have tried to use Transfer with the filter option, but it doesn't copy anything across.
Any pointers on getting this to work within transfer or step by step for functions would be really appreciated.
I reproduced your scenario and it worked for me using gsutil.
For example:
gsutil cp -r gs://SourceBucketName/example.txt gs://DestinationBucketName
Furthermore, I tried to copy using the Transfer option and it also worked. These are the steps I followed with the Transfer option:
1 - Create new Transfer Job
Panel: “Select Source”:
2 - Select your source for example Google Cloud Storage bucket
3 - Select your bucket with the data which you want to copy.
4 - On the field “Transfer files with these prefixes” add your data (I used “example.txt”)
Panel “Select destination”:
5 - Select your destination Bucket
Panel “Configure transfer”:
6 - Run now if you want to complete the transfer now.
7 - Press “Create”.
For more information about copying from one bucket to another, you can check the official documentation.
So, a few things to consider here:
You have to keep in mind that Google Cloud Storage buckets don’t treat subdirectories the way you would expect. To the bucket, a “subdirectory” is basically just part of the object name. You can find more information about that in the How Subdirectories Work documentation.
The previous is also the reason why you cannot transfer a file that is inside a “directory” and expect to see only the file’s name appear in the root of your targeted bucket. To give you an example:
If you have a file at gs://my-bucket/my-bucket-subdirectory/myfile.txt, once you transfer it to your second bucket it will still have the subdirectory in its name, so the result will be: gs://my-second-bucket/my-bucket-subdirectory/myfile.txt
This is why, if you are interested in automating this process, you should definitely give the Google Cloud Storage Client Libraries a try.
Additionally, you could also use the GCS Client with Google Cloud Functions. However, I would just suggest this if you really need the Event Triggers offered by GCF. If you just want the transfer to run regularly, for example on a cron job, you could still use the GCS Client somewhere other than a Cloud Function.
The Cloud Storage Tutorial might give you a good example of how to handle Storage events.
Also, in your future posts, try to provide as much relevant information as possible. For this post, as an example, it would’ve been nice to know what file structure you have in your buckets and what output you have been getting. And if you can state your use case straight away, it will also prevent other users from suggesting solutions that don’t apply to your needs.
Try this in Cloud Shell in the project:
gsutil cp -r gs://bucket1/foldername gs://bucket2
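Note that this recreates foldername inside bucket2. If the goal is to land the folder's contents in the root of bucket2 instead, something like the following should do it (bucket and folder names are placeholders):
# Copy everything under foldername/ directly into the root of bucket2
gsutil -m cp -r "gs://bucket1/foldername/*" gs://bucket2/
# Or keep the destination in sync with the folder, again without the foldername prefix
gsutil -m rsync -r gs://bucket1/foldername gs://bucket2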

Modifying image in Active Storage cloud

I'm using Rails 5.2 and GCS as cloud service.
I'd like to give an opportunity to users to crop and rotate user's image.
User has many Images, Image has one :image_file attached
In development I use such method:
class Image
  ...
  # Rotate the attached image in place; this relies on the local Disk service used in development
  def rotate(degree)
    # Resolve the blob's path on the local disk service
    path = ActiveStorage::Blob.service.send(:path_for, image_file.key)
    image = MiniMagick::Image.new(path)
    image.rotate(degree.to_s)
    # Write the rotated image back to the same path and refresh the blob's metadata
    image.write(path)
    image_file.blob.analyze
  end
  ...
end
But I can't figure out how to get at the image files in the cloud.
I've managed to download the file to local storage and make all the operations needed.
Now all that's left is to replace the file in the cloud (delete the current one and create a new one with the same name), without changing anything in the database records if possible, but I can't figure out how to do this with Active Storage.
At the very least I need to get the file's name in the cloud so I can use the bare google-cloud-ruby gem.
To list files stored in a Cloud Storage bucket using Ruby on Rails, see the code example defined here. You can also upload files to a Cloud Storage bucket and delete files from it using Ruby on Rails.
Also, since you are allowing your customers to modify their files in Cloud Storage buckets, you may consider using Object Versioning. This will incur additional cost but will provide reliability for your customers.
Here is the link to Ruby on Google Cloud Platform documentation which might be helpful to you.
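As a side note, Active Storage names the object in the bucket after the blob's key, so once you know image_file.key you can also replace the stored file from outside Rails. A rough sketch with gsutil (bucket name and key are placeholders):
# Optionally keep older revisions of modified images around
gsutil versioning set on gs://my-activestorage-bucket
# Overwrite the object in place; BLOB_KEY is whatever image_file.key returns in Rails
gsutil cp ./rotated-image.jpg "gs://my-activestorage-bucket/$BLOB_KEY"
Keep in mind that overwriting the object this way leaves the blob's stored metadata (checksum, byte size, analyzed dimensions) out of date until you re-analyze or update the blob record.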

Pointing multiple projects' log sinks to one bucket

I have a few GCP projects with log sinks to different storage buckets. I'd like to combine them into a single bucket. But the stackdriver export doesn't add any distinguishing information to the object names it creates; they all look like cloudaudit.googleapis.com/activity/2017/11/14/00:00:00_00:59:59_S0.json
What will happen if I start pushing them all to a single bucket? Will the different project sinks overwrite each other's objects? Is there any way to distinguish which project created the logs just from the object?
If not, I guess I should switch to pubsub sinks, and then write some code that produces objects with more desirable names. Are there any established patterns or examples for doing this?
Update: I filed https://issuetracker.google.com/issues/69371200 for this issue.
To enable this, just select custom destination on the sink and point to the bucket with this format: storage.googleapis.com/[BUCKET_ID].
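If you prefer the CLI, creating the sink should look roughly like this (sink name, bucket and filter are placeholders):
gcloud logging sinks create my-gcs-sink storage.googleapis.com/my-central-logs-bucket --log-filter='logName:"cloudaudit.googleapis.com"'
# The command prints a writer identity (a service account); remember to grant it the Storage Object Creator role on the bucket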
I've just enabled this in a couple of my projects, as I'm curious to see the results when exporting to a bucket. However, I have been using a single BQ sink for all my projects, and the tables created have all the logs mixed together, so no logs are lost when using a single BQ sink.
I'm assuming a GCS sink will work in the same way, but I'll tell you in a couple of days.
If a single bucket sink does not work, you can always use a single BQ sink (that will help in analyzing the logs), and when you no longer want to have them in BQ, export them and store the files wherever you want.
Also, since you'll be writing to your sink constantly, you can't use nearline or coldline, so the storage pricing is better in BQ than a regional bucket (0.02 USD/GB in BQ vs somewhere between 0.02 and 0.35 USD/GB for regional storage, depending on the region; BQ has 10GB free monthly, GCS 5GB).
I would generally recommend using a BQ sink, but I'll tell you what happens with my bucket logs.
Update:
A few hours later, and I've verified that shared bucket sinks work pretty much as you would expect. It concatenates logs chronologically regardless of the project origin, and only creates a single file for each time window. Hope this helps! (I still prefer BQ as a log sink...)
Update 2:
For the behavior you seek in the feature request, I would use BQ, but you could just as easily grep the project ID and separate the logs:
grep '"logName":"projects/<your-project-id>/' mixed-log.json > single-project-log.json
Or just get a cloud function triggered by bucket updates (so, every time you receive a log file in the sink) to run this for you.
Or namespace your buckets and have a Cloud Function move the files to wherever you need as soon as they are written.
The possibilities are endless!
If you have an organization or folder which includes all the projects that you want to collect logs from, then you can create a sink that collects from all projects in that org/folder.
Unfortunately, you cannot do this from the Cloud Console. Instead you must use gcloud with the --organization or --folder option, or the API.
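With gcloud, an aggregated sink would look something like this (the IDs, names and filter are placeholders):
# --include-children pulls in logs from every project under the organization
gcloud logging sinks create org-wide-gcs-sink storage.googleapis.com/my-central-logs-bucket --organization=123456789012 --include-children --log-filter='logName:"cloudaudit.googleapis.com"'
As above, grant the sink's writer identity permission to write to the destination bucket.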