Cloud Foundry can't auto-create tables using MySQL - cloud-foundry

Solved
Found the problem: I had a schema defined in my Java class.
I have a Cloud Foundry app which uses a MySQL data service.
It works great but I want to add another database table.
When I re-deploy to Cloud Foundry with the new entity class, it does not create the table, and the log has the following error:
2012-08-12 20:42:23,699 [main] ERROR org.hibernate.tool.hbm2ddl.SchemaUpdate - CREATE command denied to user 'ulPKtgaPXgdtl'@'172.30.49.146' for table 'acl_class'

The schema is created dynamically by the service. All you need to do is bind the application to the service and use the cloud namespace. As you mentioned above, removing the schema name from your configuration resolves the issue.
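For reference, a minimal sketch of the kind of mapping change this implies, assuming a JPA/Hibernate entity; the table name acl_class comes from the error above, and the schema value shown is purely hypothetical:

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;

// Before the fix the mapping pinned a schema, e.g.
// @Table(name = "acl_class", schema = "mydb")   // "mydb" is a hypothetical schema name
// which makes Hibernate issue CREATE TABLE mydb.acl_class, denied for the service user.
// Dropping the schema attribute lets the table be created in the schema
// provisioned by the bound MySQL service.
@Entity
@Table(name = "acl_class")
public class AclClass {
    @Id
    private Long id;
}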

Related

BigQuery - How to instantiate BigQueryTemplate without env variables? Getting "The Application Default Credentials are not available"

I'm trying to instantiate BigQueryTemplate without the environment variable GOOGLE_APPLICATION_CREDENTIALS.
Steps tried:
1. Implemented CredentialsSupplier by instantiating Credentials and setting the location to the service account JSON file.
2. Instantiated a BigQuery bean using BigQueryOptions::newBuilder(), setting the credentials and project id (sketched below).
3. Instantiated a BigQueryTemplate bean using the BigQuery bean created in step 2.
spring-cloud-gcp-dependencies version 3.4.0 is used.
The application executes in a VM (non-GCP environment).
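A minimal sketch of the bean from step 2, assuming a hypothetical credentials path and project id. Note that the failure is in writeJsonStream, which goes through a separate BigQuery Storage Write API client created inside BigQueryTemplate, so that client may still fall back to ADC unless it is also given explicit credentials (how to wire that depends on the spring-cloud-gcp version):

import java.io.FileInputStream;
import java.io.IOException;

import com.google.auth.oauth2.GoogleCredentials;
import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class BigQueryConfig {

    // Path and project id below are placeholders.
    @Bean
    public BigQuery bigQuery() throws IOException {
        GoogleCredentials credentials = GoogleCredentials.fromStream(
                new FileInputStream("/path/to/service-account.json"));
        return BigQueryOptions.newBuilder()
                .setCredentials(credentials)
                .setProjectId("my-project-id")
                .build()
                .getService();
    }
}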
Another option I tried is adding the properties below:
spring.cloud.gcp.bigquery.dataset-name=datasetname
spring.cloud.gcp.bigquery.credentials.location=file:/path/to/json
spring.cloud.gcp.bigquery.project-id=project-id
I'm getting the error below:
com.google.cloud.spring.bigquery.core.BigQueryTemplate,
applog.mthd=lambda$writeJsonStream$0,
applog.line=299, applog.msg=Error:
The Application Default Credentials are not available.
They are available if running in Google Compute Engine.
Otherwise, the environment variable GOOGLE_APPLICATION_CREDENTIALS must be defined pointing to a file defining the credentials.
Please let me know if I have missed anything.
Thanks in advance.

Druid cannot see/read GOOGLE_APPLICATION_CREDENTIALS defined on the env path

I installed apache-druid-0.22.1 as a cluster (master, data and query nodes) and enabled “druid-google-extensions” by adding it to the array druid.extensions.loadList in common.runtime.properties.
Finally, I defined GOOGLE_APPLICATION_CREDENTIALS (which has the value of the service account JSON, as defined in https://cloud.google.com/docs/authentication/production) as an environment variable of the user that runs the Druid services.
However, I get the following error when I try to ingest data from GCS buckets:
Error: Cannot construct instance of
org.apache.druid.data.input.google.GoogleCloudStorageInputSource,
problem: Unable to provision, see the following errors: 1) Error in
custom provider, java.io.IOException: The Application Default
Credentials are not available. They are available if running on Google
App Engine, Google Compute Engine, or Google Cloud Shell. Otherwise,
the environment variable GOOGLE_APPLICATION_CREDENTIALS must be
defined pointing to a file defining the credentials. See
https://developers.google.com/accounts/docs/application-default-credentials
for more information. at
org.apache.druid.common.gcp.GcpModule.getHttpRequestInitializer(GcpModule.java:60)
(via modules: com.google.inject.util.Modules$OverrideModule ->
org.apache.druid.common.gcp.GcpModule) at
org.apache.druid.common.gcp.GcpModule.getHttpRequestInitializer(GcpModule.java:60)
(via modules: com.google.inject.util.Modules$OverrideModule ->
org.apache.druid.common.gcp.GcpModule) while locating
com.google.api.client.http.HttpRequestInitializer for the 3rd
parameter of
org.apache.druid.storage.google.GoogleStorageDruidModule.getGoogleStorage(GoogleStorageDruidModule.java:114)
at
org.apache.druid.storage.google.GoogleStorageDruidModule.getGoogleStorage(GoogleStorageDruidModule.java:114)
(via modules: com.google.inject.util.Modules$OverrideModule ->
org.apache.druid.storage.google.GoogleStorageDruidModule) while
locating org.apache.druid.storage.google.GoogleStorage 1 error at
[Source: (org.eclipse.jetty.server.HttpInputOverHTTP); line: 1,
column: 180] (through reference chain:
org.apache.druid.indexing.overlord.sampler.IndexTaskSamplerSpec["spec"]->org.apache.druid.indexing.common.task.IndexTask$IndexIngestionSpec["ioConfig"]->org.apache.druid.indexing.common.task.IndexTask$IndexIOConfig["inputSource"])
A case reported on this matter caught my attention, but I cannot see any verified solution to it. Please help me.
We want to take data from GCP into an on-prem Druid; we don't want to run the cluster in GCP, which is why we want to solve this problem.
For future visitors:
If you run Druid via systemctl, you need to add the required environment variables to the systemd service file, to ensure they are always delivered to Druid regardless of user or environment changes.
GOOGLE_APPLICATION_CREDENTIALS must point to a file path; it must not contain the file content itself.
In a cluster (like Kubernetes), it's usual to mount a volume with the file in it and to set the env var to point to that file.
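For the systemd case, a minimal sketch of the unit-file change (the path is a placeholder):

[Service]
# Delivered to the Druid process regardless of which user or shell starts it.
Environment="GOOGLE_APPLICATION_CREDENTIALS=/etc/druid/service-account.json"

After editing the unit file, run systemctl daemon-reload and restart the Druid services so the variable is picked up.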

Can I run a Dataflow job between projects?

I want to export data from Cloud Spanner in project A to GCS in project B as AVRO.
If my service account in project B is given spanner.read access in project A, can I run a Dataflow job from project B with the template Cloud_Spanner_to_GCS_Avro and write to GCS in project B?
I've tried both in the console and with the following command:
gcloud dataflow jobs run my_job_name \
--gcs-location='gs://dataflow-templates/latest/Cloud_Spanner_to_GCS_Avro' \
--region=my_region \
--parameters='instanceId=name_of_instance,databaseId=databaseid,outputDir=my_bucket_url' \
--service-account-email=my_serviceaccount_email
I'm not sure how to specify the projectId of the Spanner instance.
With this command from project B, it looks in project B's Spanner and cannot find the instance and database.
I've tried to set instanceId=projects/id_of_project_A/instances/name_of_instance but it's not a valid input.
Yes you can; you have to grant the correct authorization to the Dataflow service account.
I recommend you use a "user-managed service account". The default one is the Compute Engine default service account with the Editor role on the host project, which is far too many authorizations....
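As an illustration, granting a user-managed service account from project B read access to Spanner in project A could look like the following (project ids, account name, and role are placeholders; pick the role that matches what the template actually needs):

gcloud projects add-iam-policy-binding project-a-id \
  --member="serviceAccount:dataflow-runner@project-b-id.iam.gserviceaccount.com" \
  --role="roles/spanner.databaseReader"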
So the answer seems to be that it's possible for some templates, or if you write a custom one, but not with the template I want to use (batch export from Spanner to GCS Avro files).
It might be added in a future update to the template.

Cannot Upgrade Google Cloud SQL to Second Generation

I cannot upgrade a First Generation Google Cloud SQL instance to a Second Generation instance using the MySQL Second Generation upgrade wizard in the console.
On the check-configuration screen, I get a "Tables that use the MEMORY storage engine found" error, due to which I cannot proceed further, as shown in the screenshot.
Following the documentation at Upgrading a First Generation instance to Second Generation, I verified using the query mentioned there:
SELECT table_schema, table_name, table_type
FROM information_schema.tables
WHERE engine = 'MEMORY' AND
table_schema NOT IN
('mysql','information_schema','performance_schema');
but found no tables using the MEMORY storage engine, as shown below.
I managed to resolve the error and proceeded with the upgrade. I had to remove a table from the performance_schema database that did not use the PERFORMANCE_SCHEMA storage engine before starting the upgrade process. It seems the Google Cloud console threw a misleading error!
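For anyone hitting the same wall, a variation of the documented query that inspects performance_schema itself can help locate such a table (a sketch; adjust as needed):

SELECT table_schema, table_name, engine
FROM information_schema.tables
WHERE table_schema = 'performance_schema'
  AND engine <> 'PERFORMANCE_SCHEMA';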

CKAN: automatically delete datastore tables when a resource is removed

I have a CKAN instance with the filestore, datastore and datapusher plugins enabled.
When I create a new resource, the datapusher plugin correctly adds a new table to the datastore DB and populates it with the data.
If I update the resource, a new datapusher task is executed and everything updates correctly. On another CKAN instance with a resource linked to it, I have to run the task manually, but everything works OK.
The problem comes when I delete the resource: the datastore tables are still available, and even the link to the file is still active.
Is there some way to configure it to automatically remove every trace of the resource? I mean, remove the files from the filestore, the tables from the datastore, the API, the links, etc.
I partially confirmed this behaviour with http://demo.ckan.org, which is currently ckan_version: "2.4.1"
Create a resource
Query resource via data pusher
Delete resource
Query resource via datastore_search API -> still works, can query.
Attempt to access resource file -> 404 - not found.
Will file as bug.
Perhaps use this to delete? http://docs.ckan.org/en/latest/maintaining/datastore.html#ckanext.datastore.logic.action.datastore_delete
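A hedged sketch of calling that action over the API (the URL, resource id, and API key are placeholders; the force flag may be needed for resources that are not directly editable):

curl -X POST http://demo.ckan.org/api/3/action/datastore_delete \
  -H "Authorization: YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"resource_id": "RESOURCE_UUID", "force": true}'

Without a filters key this removes the whole datastore table for that resource; it does not remove the file from the filestore.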
This is possible through CLI:
sudo -u postgres psql datastore_default
(assumes the datastore was installed from package using these Datastore Extension settings, the database name is datastore_default, and postgres is the superuser).
Then (optional, to find all resource UUIDs):
\dt to list all tables
Then:
DROP TABLE "{RESOURCE ID}";
(Replace {RESOURCE ID} with resource UUID)