What is the reason for error "Resource in project is the subject of a conflict" while trying to recreate a cloudsql instance? - google-cloud-platform

I am trying to create a cloudsql instance with the following command:
gcloud beta sql instances create sql-instance-1 --tier=db-f1-micro --region=asia-south1 --network=default --storage-type=HDD --storage-size=10GB --authorized-networks=XX.XXX.XX.XX/XX
I don't need the instance sql-instance-1 running all the time, so I keep an SQL dump file and re-create the instance whenever I need the database. When I run this command, it fails with the following error:
ERROR: (gcloud.beta.sql.instances.create) Resource in project [my-project-id] is the subject of a conflict: The instance or operation is not in an appropriate state to handle the request.
From what I understand, gcloud is complaining that the instance name was used before, even though that instance has already been deleted. When I change the name to a new, unused one, the command works fine. The problem is that I then have to give a new name every time I re-create the instance from the dump.
My questions are:
Is this expected behavior, i.e. must the name of a Cloud SQL instance be unique within a project and not have been used before?
I also found that the --network option is not recognized by gcloud; it seems to work only with gcloud beta, as explained here. When is this expected to become GA?

This is indeed expected behaviour. From the documentation:
You cannot reuse an instance name for up to a week after you have
deleted an instance.
Regarding the --network flag and its schedule for GA, there is no ETA for its release outside of beta. However, its release will be listed in the Google Cloud SDK Release Notes, which you can get updates from by subscribing to the google-cloud-sdk-announce group.
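If you script the re-creation, one possible workaround (a sketch on my part, not something from the documentation) is to embed a timestamp in the instance name so each run uses a fresh, unused name:
gcloud beta sql instances create "sql-instance-$(date +%Y%m%d-%H%M)" --tier=db-f1-micro --region=asia-south1 --network=default --storage-type=HDD --storage-size=10GB --authorized-networks=XX.XXX.XX.XX/XX
The trade-off is that anything referencing the instance by name (connection strings, scripts) has to pick up the new name after each re-creation.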

Related

Could not find any metrics for MLD in the data. Did you load the right dataset

I have run the OSRM server successfully on an AWS EC2 instance, after performing these steps:
1. osrm-extract --profile ../profiles/car.lua all1.osm.pbf
2. osrm-partition all1.osrm
3. osrm-customize all1.osrm
4. osrm-routed --algorithm=MLD all1.osrm
Then I copied the volume of that instance (containing all generated OSRM files, e.g. all1.osrm) to another instance. On the new instance I only try to run step 4, osrm-routed --algorithm=MLD all1.osrm, but I get the following error:
terminate called after throwing an instance of 'osrm::util::exception'
what(): Could not find any metrics for MLD in the data. Did you load the right dataset?
What prevents me from running the same steps again on the new instance is that it has lower capacity, so processing the OSM files again would take much longer.
So, what did I do wrong here? And is there a way to move the OSRM data from one machine/instance to another without having to re-run these steps?
Thanks.
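One thing worth double-checking (my assumption, not something confirmed in the question): osrm-partition and osrm-customize write additional files next to all1.osrm (for example all1.osrm.partition and all1.osrm.cells from osrm-partition, plus the metric files written by osrm-customize), and osrm-routed with --algorithm=MLD needs those as well, so all1.osrm on its own is not enough. A hedged sketch of copying the full set, with host and paths as placeholders:
rsync -av /data/osrm/all1.osrm* ubuntu@NEW_INSTANCE:/data/osrm/
# on the new instance, run from the same directory:
ls -l all1.osrm*
osrm-routed --algorithm=MLD all1.osrm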

Druid can not see/read GOOGLE_APPLICATION_CREDENTIALS defined on env path

I installed apache-druid-0.22.1 as a cluster (master, data and query nodes) and enabled “druid-google-extensions” by adding it to the array druid.extensions.loadList in common.runtime.properties.
Finally, I defined GOOGLE_APPLICATION_CREDENTIALS (which has the value of the service account JSON, as described in https://cloud.google.com/docs/authentication/production) as an environment variable of the user that runs the Druid services.
However, I got the following error when I tried to ingest data from GCS buckets:
Error: Cannot construct instance of
org.apache.druid.data.input.google.GoogleCloudStorageInputSource,
problem: Unable to provision, see the following errors: 1) Error in
custom provider, java.io.IOException: The Application Default
Credentials are not available. They are available if running on Google
App Engine, Google Compute Engine, or Google Cloud Shell. Otherwise,
the environment variable GOOGLE_APPLICATION_CREDENTIALS must be
defined pointing to a file defining the credentials. See
https://developers.google.com/accounts/docs/application-default-credentials
for more information. at
org.apache.druid.common.gcp.GcpModule.getHttpRequestInitializer(GcpModule.java:60)
(via modules: com.google.inject.util.Modules$OverrideModule ->
org.apache.druid.common.gcp.GcpModule) at
org.apache.druid.common.gcp.GcpModule.getHttpRequestInitializer(GcpModule.java:60)
(via modules: com.google.inject.util.Modules$OverrideModule ->
org.apache.druid.common.gcp.GcpModule) while locating
com.google.api.client.http.HttpRequestInitializer for the 3rd
parameter of
org.apache.druid.storage.google.GoogleStorageDruidModule.getGoogleStorage(GoogleStorageDruidModule.java:114)
at
org.apache.druid.storage.google.GoogleStorageDruidModule.getGoogleStorage(GoogleStorageDruidModule.java:114)
(via modules: com.google.inject.util.Modules$OverrideModule ->
org.apache.druid.storage.google.GoogleStorageDruidModule) while
locating org.apache.druid.storage.google.GoogleStorage 1 error at
[Source: (org.eclipse.jetty.server.HttpInputOverHTTP); line: 1,
column: 180] (through reference chain:
org.apache.druid.indexing.overlord.sampler.IndexTaskSamplerSpec["spec"]->org.apache.druid.indexing.common.task.IndexTask$IndexIngestionSpec["ioConfig"]->org.apache.druid.indexing.common.task.IndexTask$IndexIOConfig["inputSource"])
A case reported on this matter caught my attention, but I cannot see any verified solution to it. Please help me.
We want to move data from GCP to an on-premises Druid; we don't want to run the cluster in GCP, which is why we want to solve this problem.
For future visitors:
If you run Druid via systemd, you need to add the required environment variables to the systemd service file, so that they are always delivered to Druid regardless of user or environment changes.
GOOGLE_APPLICATION_CREDENTIALS must point to a file path; it must not contain the file's content.
In a cluster (e.g. Kubernetes), it is usual to mount a volume containing the file and to set the environment variable to point at it.
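As a minimal sketch of the systemd approach (the unit name and the key path below are placeholders, not taken from the answer above):
# /etc/systemd/system/druid-middlemanager.service (excerpt)
[Service]
# Point at the key file itself, not at its JSON content
Environment=GOOGLE_APPLICATION_CREDENTIALS=/opt/druid/conf/gcs-key.json
Then reload and restart the service: sudo systemctl daemon-reload && sudo systemctl restart druid-middlemanager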

Gcloud command, can't specify "cloud-platform" scope when creating instance templates

So I have a command like this that should create the instance template and give it the "cloud-platform" scope (which should give full access, according to the docs):
gcloud compute instance-templates create "webserver-template" \
--source-instance=webserver --source-instance-zone=us-east4-c \
--configure-disk=instantiate-from=custom-image,custom-image=projects/myproject-dev/global/images/webserver-image,device-name=webserver \
--network=vpc-dev --scopes=cloud-platform
However, GCP seems to ignore that scope and assigns the default ones instead. Am I missing something here? I did go to an instance template in the GCP UI and created a new one based on it, and specified the option to "Allow full access to all Cloud APIs". When I then use gcloud to describe that template, the scope is "cloud-platform" as it should be. I just can't figure out how to do it all in one gcloud command.
EDIT: I also tried "--scopes=https://www.googleapis.com/auth/cloud-platform"
The problem is the scopes command-line option. Change to
--scopes=https://www.googleapis.com/auth/cloud-platform
I figured out what was going on. As you can see in my original question, I'm specifying the flag "--source-instance". And according to the docs:
The name of the source instance that the instance template will be created from.
You can override machine type and labels. Values of other flags will be ignored and values from the source instance will be used instead.
So the scopes flag was rightfully being ignored, and my source instance had the more limited scopes assigned to it.
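A possible way around it, sketched under the assumption that the source instance can be stopped briefly (scopes can only be changed while an instance is stopped): update the scopes on the source instance first, then re-run the instance-templates create command from the question.
gcloud compute instances stop webserver --zone=us-east4-c
gcloud compute instances set-service-account webserver --zone=us-east4-c --scopes=cloud-platform
gcloud compute instances start webserver --zone=us-east4-c
The template created afterwards should then inherit the cloud-platform scope from the source instance instead of the defaults.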

How to enable datasharing in Redshift cluster?

I am trying to create a datashare in Redshift by following this documentation. When I type this command:
CREATE DATASHARE datashare_name
I get this message:
ERROR: CREATE DATASHARE is not enabled.
I also tried to create it using the console, but hit the same issue.
So how do I enable data sharing in Redshift?
From the documents here:
Data sharing via datashare is only available for ra3 instance types
The document lists ra3.16xlarge, ra3.4xlarge, and ra3.xlplus instance types for producer and consumer clusters.
So, if I were in your place, I would first go back and check my instance type. If you are still not sure, raise a simple support ticket and ask whether anything has changed recently that the documentation does not yet reflect.
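For example, a quick way to confirm the node type from the AWS CLI (my-cluster is a placeholder for your cluster identifier):
aws redshift describe-clusters --cluster-identifier my-cluster --query 'Clusters[0].NodeType' --output text
If the output does not start with ra3, CREATE DATASHARE will not be available on that cluster.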

how to create a vm snapshot using pyvmomi

I have the task of implementing a basic backup and recovery system within a Django app. I have heard of pyvmomi, but never used it before.
My specific tasks at hand are:
1) make a call to a vCenter, pass the vm name, and request to make a snapshot
2) obtain the file location of the snapshot
3) and upload the snapshot file into an OpenStack Swift object store
What is the actual syntax of creating a vm snapshot using pyvmomi?
Also - what is the syntax to request the actual snapshot file from vCenter?
https://github.com/rreubenur/vmware-pyvmomi-examples/blob/master/create_and_remove_snapshot.py
This should be helpful
The snapshot task result itself contains the MoRef of the snapshot that was created,
so you can get a reference to the created snapshot.
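As a minimal sketch of the snapshot call (the vCenter host, credentials and VM name are placeholders, and error handling is omitted):
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

# Connect to vCenter; the unverified SSL context is only for lab/self-signed setups
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Locate the VM by name
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "my-vm")
view.DestroyView()

# Request the snapshot and wait for the task to complete
task = vm.CreateSnapshot_Task(name="backup", description="pre-backup snapshot",
                              memory=False, quiesce=True)
WaitForTask(task)

# As the answer notes, the task result is the MoRef of the new snapshot
snapshot_ref = task.info.result
print(snapshot_ref)

Disconnect(si)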