I have a project on Google Cloud and I am trying to create a bucket to store the web files for my website. The only problem is that I have a CNAME pointing from my website to 'c.storage.googleapis.com', so my bucket name has to match my website's domain, which is 'plains.cc'. When I try to create the bucket, however, it says the name is already in use. I used this bucket name on a previous account but deleted it, so I don't understand why I can't reuse it.
Are you still unable to create it? As per the documentation, if you deleted the bucket itself from your previous project, this should just be a timing issue. But if you deleted the previous project directly, without first deleting the bucket contained within it, it can take a month or more for the associated data to eventually be deleted. Read the documentation on this here.
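In the meantime you can simply poll for the name from your current project. Below is a minimal sketch using the google-cloud-storage Python client (the project ID is a placeholder); note also that creating a domain-named bucket like plains.cc requires the creating account to have verified ownership of the domain, which is a separate reason creation can fail.

    from google.cloud import storage
    from google.api_core import exceptions

    client = storage.Client(project="my-project")  # placeholder project ID

    try:
        # Domain-named buckets (e.g. plains.cc) also require verified
        # ownership of the domain by the creating account.
        client.create_bucket("plains.cc")
        print("Bucket created")
    except exceptions.Conflict:
        print("Name still in use -- the old bucket has not been released yet")
    except exceptions.Forbidden as e:
        print("Permission problem (e.g. domain not verified):", e)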
I tried to import data from an external Amazon S3 bucket (the Dynamic Yield Daily Activity Stream, as it happens) into BigQuery by using the Data Transfer tab.
I created a new dataset in my project and created an empty table with no schema (since the S3 data is Parquet, am I right that I don't need to add a schema to the table?).
I then made a new data transfer with the S3 bucket credentials, selecting my new data set and table as the destination. I have tried multiple times but I get the same error, "Failed to obtain the location of the source S3 bucket. Additional details: Access Denied"
However, when checking with the owner of the bucket they have confirmed 100% that I do have the correct access, and on their end they have successfully pulled data from the bucket. I have been able to pull data from the bucket using Cloudberry Explorer myself too, with the same credentials.
So what have I done wrong? Is it because I didn't define the table schema? Or something else? Maybe the data set location is wrong? What else could be the problem?
Thanks
According to the BigQuery Documentation for Amazon S3 transfers, you do need the schema definition for the table.
Best of luck!
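For reference, here is a minimal sketch of creating the destination table with an explicit schema using the google-cloud-bigquery Python client; the project, dataset, table, and column names are placeholders and should be replaced with fields that match your Parquet files.

    from google.cloud import bigquery

    client = bigquery.Client(project="my-project")  # placeholder project ID

    # Placeholder columns -- make these match the fields in the Parquet files.
    schema = [
        bigquery.SchemaField("event_time", "TIMESTAMP"),
        bigquery.SchemaField("user_id", "STRING"),
        bigquery.SchemaField("event_name", "STRING"),
    ]

    table = bigquery.Table("my-project.my_dataset.daily_activity_stream", schema=schema)
    client.create_table(table, exists_ok=True)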
This is definitely an access issue. It can stem from two places though:
Your Access Key ID and Secret are incorrect
Your S3 URI is incorrect
What does your S3 URI look like? Sometimes access is given to an individual "folder" or object rather than a whole bucket.
I get the exact same error when accessing an incorrect S3 bucket with a valid ID and Key.
And to confirm, the table needs to be created ahead of time.
And finally, since you're using Parquet, it does work with an empty, schemaless table.
(I used star notation to grab the files: s3://mys3bucket/*)
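To check both possibilities (wrong keys vs. wrong URI) outside of BigQuery, a minimal boto3 sketch like the one below can confirm whether the key pair can resolve the bucket's location (the operation the transfer error points at) and list objects under your URI; the credentials, bucket, and prefix are placeholders.

    import boto3
    from botocore.exceptions import ClientError

    # Placeholder credentials -- use the same values you gave the transfer.
    s3 = boto3.client(
        "s3",
        aws_access_key_id="AKIA...",
        aws_secret_access_key="...",
    )

    bucket = "mys3bucket"
    prefix = ""  # or the "folder" your access is scoped to

    try:
        s3.get_bucket_location(Bucket=bucket)  # the call the error message refers to
        resp = s3.list_objects_v2(Bucket=bucket, Prefix=prefix, MaxKeys=5)
        for obj in resp.get("Contents", []):
            print(obj["Key"])
    except ClientError as e:
        print("Access problem:", e.response["Error"]["Code"])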
My GCP project name is Mobisium. I found out that there are 2 buckets auto-created in the Storage browser, named mobisium-bucket and mobisium-daisy-bkt-asia. I have never used buckets in this project. The mobisium-bucket bucket is empty and mobisium-daisy-bkt-asia contains one file called daisy.log. Both buckets are Location Type: Multi-region. I read in a Stack Overflow question's comments that if buckets are created automatically as multi-region, you will be charged.
My questions are:
Am I being charged for these buckets?
Are these buckets required? If not, should I delete them?
According to the documentation you are charged for:
data storage
network
operations
So you will be charged for them if they contain data. You can also view all charges associated with your billing account.
The bucket names suggest that some services created them, although it is hard to tell from the names alone which services. Sometimes when you turn on a service, it creates buckets for itself.
A newly created project shouldn't contain any buckets, so if this really is a new project (created from scratch) you could try to delete them.
If this happens again in another project (not only this one), it would be a good idea to contact support, because this is not normal behavior.
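If it helps, here is a minimal sketch with the google-cloud-storage Python client that lists every bucket in the project together with a small sample of its contents, so you can see what you would actually be storing (and therefore billed for); the project ID is a placeholder, and the delete call is left commented out so you only run it once you're sure a bucket is unused.

    from google.cloud import storage

    client = storage.Client(project="my-project")  # placeholder project ID

    # List each bucket with its location type and a small sample of objects.
    for bucket in client.list_buckets():
        blobs = list(client.list_blobs(bucket.name, max_results=10))
        total_bytes = sum(b.size or 0 for b in blobs)
        print(bucket.name, bucket.location_type, bucket.location,
              f"{len(blobs)} object(s) sampled, ~{total_bytes} bytes")

    # Delete a bucket you are sure is unused; force=True also removes any
    # objects still inside it, so use it with care.
    # client.bucket("mobisium-bucket").delete(force=True)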
I know namespaces for S3 have to be globally unique, but I have seen nothing on whether AWS has a process for recycling unused namespaces, which makes me wonder if they are unique in perpetuity.
What you're calling namespaces are S3 bucket names. They're globally unique. If you own an S3 bucket and you delete it, another AWS account can later create an S3 bucket with the name that you previously used.
A small experiment suggests:
the bucket name is immediately available for reuse by the same account
the bucket name is not immediately available for reuse by other accounts, but once AWS has cleaned up everything it needs to, it becomes available (in my test that process took 30+ minutes)
You can think of a bucket name like a domain name. If you create a bucket with any available name, it will be created; if you then delete that bucket, you can immediately turn around and create a bucket with the same name again.
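Here is a minimal boto3 sketch of the delete-and-recreate experiment described above; the bucket name and region are placeholders, and the retry loop is there only because another account (or even the same one, briefly) can see a conflict error until AWS has finished cleaning up the old bucket.

    import time
    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client("s3", region_name="us-east-1")
    name = "my-experiment-bucket-name-12345"  # placeholder, must be globally unique

    s3.create_bucket(Bucket=name)
    s3.delete_bucket(Bucket=name)  # a bucket must be empty before it can be deleted

    # Recreate it; retry while the name has not been released yet.
    while True:
        try:
            s3.create_bucket(Bucket=name)
            print("recreated", name)
            break
        except ClientError as e:
            code = e.response["Error"]["Code"]
            if code in ("BucketAlreadyExists", "OperationAborted"):
                print("name not released yet, retrying...")
                time.sleep(60)
            else:
                raise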
https://docs.aws.amazon.com/quickstart/latest/rd-gateway/step2.html#existing-standalone
https://s3.amazonaws.com/quickstart-reference/microsoft/rdgateway/latest/templates/rdgw-standalone.template
I'm referencing the template above to create my Remote Desktop Gateway (RDGW) in an existing VPC. It has QSS3BucketName and QSS3KeyPrefix in the Parameters section. The Resources section has RDGWLaunchConfiguration, which references the QSS3BucketName bucket again; for the setup files, it calls the following path:
https://${QSS3BucketName}.${QSS3Region}.amazonaws.com/${QSS3KeyPrefix}submodules/quickstart-microsoft-utilities/scripts/Unzip-Archive.ps1
For some reason, after PT30M (30 minutes) it says it didn't get the required signal and rolls back. My question to the community is: do I need to store these files in the S3 bucket myself, or will the template put them in S3 while it's creating the stack?
I also created a bucket in S3, copied these scripts from GitHub and placed them inside the bucket in the correct order, but it still does not work. Kind of frustrating.
The Quick Start template references nested templates and scripts placed in the aws-quickstart S3 bucket. If we plug the default values into the URL above, the exact URL we get is:
https://aws-quickstart.s3.amazonaws.com/quickstart-microsoft-rdgateway/submodules/quickstart-microsoft-utilities/scripts/Unzip-Archive.ps1
You can either create the stack with the default values, without changing the AWS Quick Start configuration, or download all of the referenced templates and scripts, modify them as per your requirements, and place them in your own bucket. Once that is done, replace the URL value in the main template with your bucket's URL.
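If you go the second route, a minimal boto3 sketch for mirroring the Quick Start objects into your own bucket, while preserving the key prefix the templates expect, might look like the following. The destination bucket is a placeholder, and this assumes the public aws-quickstart bucket lets you list and copy the objects; if it doesn't, upload the files you fetched from GitHub under the same keys instead.

    import boto3

    s3 = boto3.client("s3")

    SOURCE_BUCKET = "aws-quickstart"
    DEST_BUCKET = "my-quickstart-copy"            # placeholder: a bucket you own
    PREFIX = "quickstart-microsoft-rdgateway/"    # keep the same key prefix layout

    # Copy every object under the Quick Start prefix into your own bucket,
    # preserving the key structure the nested templates reference.
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=SOURCE_BUCKET, Prefix=PREFIX):
        for obj in page.get("Contents", []):
            s3.copy_object(
                Bucket=DEST_BUCKET,
                Key=obj["Key"],
                CopySource={"Bucket": SOURCE_BUCKET, "Key": obj["Key"]},
            )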
I'm trying to set up an Eventarc trigger on a Google Cloud Run project, to be run whenever a new file is uploaded to a particular bucket.
The problem is, I can only get it to work if I choose any resource, i.e files uploaded to any of my buckets would run the trigger. However, I only want it to run for files uploaded to a specific bucket.
It asks me to enter the 'full resource name' if I choose 'a specific resource'. But there seems to be no documentation on how to format the bucket name so that it will work. I've tried the bucket name, projects/_/buckets/my_bucket_name, but the trigger never runs unless I choose 'any resource'.
Any ideas how to give it the bucket name so this will work?
I think the answer may be buried in here: cloud.google.com/blog/topics/developers-practitioners/… If we read it closely, we see that event origination is based on audit records being created. We see that a record is created when a new object is created in your bucket. We then read that we can filter on resource name (the name of the object). However, it says that wildcards are not yet supported... so you could trigger on a specific object name, but not on a name that is merely prefixed by your bucket name.
Eventarc now supports wildcards (path patterns) for Cloud Audit Logs-based events (e.g., storage.objects.create).
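If you can't get the path pattern to match and need to keep the trigger on 'any resource' for now, one workaround is to filter inside the Cloud Run service itself. Here's a minimal sketch assuming a Flask-based service and an audit-log event body whose protoPayload.resourceName looks like projects/_/buckets/<bucket>/objects/<object> (adjust the parsing to whatever payload your service actually receives); the bucket name is a placeholder.

    import os
    from flask import Flask, request

    app = Flask(__name__)
    TARGET_BUCKET = "my_bucket_name"  # placeholder: the only bucket we care about

    @app.route("/", methods=["POST"])
    def handle_event():
        payload = request.get_json(silent=True) or {}
        # Audit-log events carry the object path in protoPayload.resourceName,
        # e.g. "projects/_/buckets/my_bucket_name/objects/uploads/file.txt".
        resource_name = payload.get("protoPayload", {}).get("resourceName", "")
        if f"/buckets/{TARGET_BUCKET}/" not in resource_name:
            return ("ignored", 200)
        # ... process the newly uploaded object here ...
        return ("ok", 200)

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))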