MigrationSchemaMissing("Unable to create the django_migrations table (%s)" % exc) - Django

Steps I took:
1. Deleted the migration files.
2. Created only one initial migration file.
3. Entered the psql command prompt, connected to the database, and ran: drop schema public cascade; create schema public;
4. Tried to migrate again.
I get a MigrationSchemaMissing("Unable to create the django_migrations table (%s)" % exc) error.

This answer and the comment on its question worked for me. In brief, you must grant the required privileges on the schema, as below:
grant usage on schema public to username;
grant create on schema public to username;
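These grants must be issued by a role that owns the schema (typically the postgres superuser). A minimal sketch of the full sequence, where "mydb" and "username" are placeholders for your database and Django DB user:
# run as a role that owns the public schema (typically the postgres superuser)
psql -U postgres -d mydb -c "grant usage on schema public to username;"
psql -U postgres -d mydb -c "grant create on schema public to username;"
After the grants, python manage.py migrate should be able to create the django_migrations table again.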


Can I export a `csv` file from a query in ClickHouse to S3? (just sharing)

I get a timeout error when I run a query over large data in ClickHouse, so I want to run the query, export the result as a CSV file, and upload it to S3.
Yes, you can do that.
First, ensure s3_create_new_file_on_insert = 1 in the current ClickHouse session, or the insert cannot run. You need permission to execute the statement below.
SET s3_create_new_file_on_insert = 1
Example:
INSERT INTO FUNCTION s3('https://...naws.com/my.csv', 'KEY', 'SECRET')
SELECT user_id, name
FROM db.users
WHERE application_id = 2
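If ClickHouse cannot infer the output format from the file extension, you can pass it explicitly as the fourth argument of the s3 table function. A sketch reusing the (elided) URL and credentials from above:
INSERT INTO FUNCTION s3('https://...naws.com/my.csv', 'KEY', 'SECRET', 'CSV')
SELECT user_id, name
FROM db.users
WHERE application_id = 2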
More info
https://medium.com/datadenys/working-with-s3-files-directly-from-clickhouse-7db330af7875
https://clickhouse.com/docs/en/sql-reference/table-functions/s3/

Permission bigquery.tables.updateData denied when querying INFORMATION_SCHEMA.COLUMNS

I'm querying BigQuery (via Databricks) with a service account that has the following roles:
BigQuery Data Viewer
BigQuery Job User
BigQuery Metadata Viewer
BigQuery Read Session User
The query is:
SELECT distinct(column_name) FROM `project.dataset.INFORMATION_SCHEMA.COLUMNS` where data_type = "TIMESTAMP" and is_partitioning_column = "YES"
I'm actually querying via Azure Databricks:
spark.read.format("bigquery")
  .option("materializationDataset", dataset)
  .option("parentProject", projectId)
  .option("query", query)
  .load()
  .collect()
But I'm getting:
"code" : 403,
"errors" : [ {
"domain" : "global",
"message" : "Access Denied: Table project:dataset._sbc_f67ac00fbd5f453b90....: Permission bigquery.tables.updateData denied on table project:dataset._sbc_f67ac00fbd5f453b90.... (or it may not exist).",
"reason" : "accessDenied"
} ],
After adding BigQuery Data Editor, the query works.
Why do I need write permissions just to view this metadata? Are there any lower permissions I can grant?
In the docs I see that only Data Viewer is required, so I'm not sure what I'm doing wrong.
BigQuery saves all query results to a temporary table if a specific destination table is not specified, and the Spark BigQuery connector additionally materializes your query into a table in materializationDataset, so the job writes as well as reads.
From the documentation, the following permissions are required:
bigquery.tables.create to create a new table
bigquery.tables.updateData to write data to a new table, overwrite a table, or append data to a table
bigquery.jobs.create to run a query job
Since the service account already has the BigQuery Job User role, it can run the query; it needs the BigQuery Data Editor role for the bigquery.tables.create and bigquery.tables.updateData permissions.
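If the full BigQuery Data Editor role is broader than you want, a custom role carrying only the two missing permissions should also work. A sketch using gcloud, where the role ID and project are placeholders:
# create a custom role with only the permissions needed for query materialization
gcloud iam roles create bqMaterializationWriter \
  --project=my-project \
  --permissions=bigquery.tables.create,bigquery.tables.updateData
Grant this custom role to the service account alongside its existing viewer and job roles.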

How to query a table that I don't own and don't have bigquery.jobs.create permission on

A BigQuery table was shared with me; I don't own it, and I don't have the bigquery.jobs.create permission on the dataset that contains it.
I successfully listed all the tables in the dataset, but when I tried to query the table using this code:
tables.map(async (table) => {
  const url = `https://bigquery.googleapis.com/bigquery/v2/projects/${process.env.PROJECT_ID}/queries`;
  const query = `SELECT * FROM \`${table.id}\` LIMIT 10`;
  const data = {
    query,
    maxResults: 10,
  };
  const reqRes = await oAuth2Client.request({
    method: "POST",
    url,
    data,
  });
  console.log(reqRes.data);
});
I got the following error:
Error: Access Denied: Project project_id: <project_id>
gaia_id: <gaia_id>
: User does not have bigquery.jobs.create permission in project <project_id>.
I can't ask for those permissions. What should I do in this situation?
IMPORTANT:
I tried running the same query in the GCP console and it ran successfully, but it seems like it created a temporary table clone and then queried that table, not the original one:
There are two projects here: your project, and the project that contains the table.
You currently create the job in the ${process.env.PROJECT_ID} that you use in the URL; specify your own project there instead, one where you are allowed to create jobs.
You'll also need to modify the query so that the table reference includes the table's project (the owner's, not yours), dataset, and table name, so that BigQuery can find it.
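A minimal sketch of the adjusted request. MY_PROJECT_ID and the table.projectId/datasetId/tableId fields are hypothetical names for illustration; substitute whatever your table listing actually returns:
// run the job in *your* project, where you have bigquery.jobs.create...
const url = `https://bigquery.googleapis.com/bigquery/v2/projects/${process.env.MY_PROJECT_ID}/queries`;
// ...but fully qualify the table with its owning project, dataset, and table name
const query = `SELECT * FROM \`${table.projectId}.${table.datasetId}.${table.tableId}\` LIMIT 10`;
const reqRes = await oAuth2Client.request({
  method: "POST",
  url,
  data: { query, maxResults: 10 },
});
console.log(reqRes.data);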

How to fetch the DBPROPERTIES, S3Location, and comment set while creating a database in AWS Athena?

As described in the AWS Athena documentation:
https://docs.aws.amazon.com/athena/latest/ug/create-database.html
we can specify DBPROPERTIES, an S3 location, and a comment while creating an Athena database:
CREATE (DATABASE|SCHEMA) [IF NOT EXISTS] database_name
[COMMENT 'database_comment']
[LOCATION 'S3_loc']
[WITH DBPROPERTIES ('property_name' = 'property_value') [, ...]]
For example:
CREATE DATABASE IF NOT EXISTS clickstreams
COMMENT 'Site Foo clickstream data aggregates'
LOCATION 's3://myS3location/clickstreams/'
WITH DBPROPERTIES ('creator'='Jane D.', 'Dept.'='Marketing analytics');
But once the properties are set, how can I fetch them back using a query?
Let's say I want to fetch the creator name from the above example.
You can get these using the Glue Data Catalog GetDatabase API call.
Databases and tables in Athena are stored in the Glue Data Catalog. When you run DDL statements in Athena, it translates them into Glue API calls. Not all operations you can do in Glue are available in Athena, for historical reasons.
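The same call is also available from the AWS CLI if you just want to inspect the properties; assuming the example database above:
aws glue get-database --name clickstreams
The returned JSON includes the Description, LocationUri, and Parameters (the DBPROPERTIES) set at creation time.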
I was able to fetch the AWS Athena database properties in JSON format using the following Glue Data Catalog code:
package com.amazonaws.samples;

import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.glue.AWSGlue;
import com.amazonaws.services.glue.AWSGlueClient;
import com.amazonaws.services.glue.model.GetDatabaseRequest;
import com.amazonaws.services.glue.model.GetDatabaseResult;

public class Glue {
    public static void main(String[] args) {
        // static credentials for the example; prefer a credentials provider chain in real code
        BasicAWSCredentials awsCreds = new BasicAWSCredentials("*access_key*", "*secret_key*");
        AWSGlue glue = AWSGlueClient.builder()
                .withRegion("*region*")
                .withCredentials(new AWSStaticCredentialsProvider(awsCreds))
                .build();
        // look up the Athena database in the Glue Data Catalog
        GetDatabaseRequest req = new GetDatabaseRequest();
        req.setName("*database_name*");
        GetDatabaseResult result = glue.getDatabase(req);
        System.out.println(result); // prints location, comment, and DBPROPERTIES
    }
}
Also, the following managed policies are required for the user:
AWSGlueServiceRole
AmazonS3FullAccess

Doctrine wants to create tables that already exist when running schema update

I have executed
bin/console d:s:u --force
Then the schema was created successfully. However, if I execute this command again, Symfony wants to re-create the schema. How can this be?
See full command line output:
$ bin/console d:s:u --force
Updating database schema...
Database schema updated successfully!
"7" queries were executed
$ bin/console d:s:u --force
Updating database schema...
[Doctrine\DBAL\Exception\TableExistsException]
An exception occurred while executing 'CREATE TABLE message (id INT AUTO_INCREMENT NOT NULL, user_id INT DEFAULT NULL, subject VARCHAR(255) NOT NULL, text VARCHAR(255) NOT NULL, INDEX IDX_B6BD307FA76ED395 (user_id), PRIMARY KEY(id)) DEFAULT CHARACTER SET utf8 COLLATE utf8_unicode_ci ENGINE = InnoDB':
SQLSTATE[42S01]: Base table or view already exists: 1050 Table 'message' already exists
[Doctrine\DBAL\Driver\PDOException]
SQLSTATE[42S01]: Base table or view already exists: 1050 Table 'message' already exists
[PDOException]
SQLSTATE[42S01]: Base table or view already exists: 1050 Table 'message' already exists
I had this config option, which caused the trouble:
schema_filter: "/user_field_data/"
With this filter in place, Doctrine's schema comparator only sees database tables matching /user_field_data/, so the already existing message table is invisible to it, and every d:s:u run tries to create that table again.
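A minimal sketch of the corrected config/packages/doctrine.yaml, assuming the intent was to exclude user_field_data from schema management rather than to allow-list only it:
doctrine:
    dbal:
        # negative lookahead: let Doctrine manage every table EXCEPT user_field_data
        schema_filter: ~^(?!user_field_data)~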