Refresh Materialized Views Redshift - amazon-web-services

I am running into the following error
SQL Error [XX000]: ERROR: Assert
Detail:
-----------------------------------------------
error: Assert
code: 1000
context: mv_query != "" - Expected non-empty MV CTAS
query: 0
location: refresh.cpp:1570
process: padbmaster [pid=14118]
whenever I execute the SQL statement REFRESH MATERIALIZED VIEW <view_name> in AWS Redshift. The views themselves are not empty, so I do not really know how to resolve this. I would appreciate some help here.

When you encounter this error, the best fix is to drop the affected views, recreate them, and then re-run the REFRESH MATERIALIZED VIEW statement.
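A sketch of the sequence, where <view_name> and the SELECT body stand in for your own view definition:

DROP MATERIALIZED VIEW <view_name>;
CREATE MATERIALIZED VIEW <view_name> AS
    SELECT ...;  -- recreate with the view's original definition
REFRESH MATERIALIZED VIEW <view_name>;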

Related

Athena query executed through boto3 python client gives smaller result compared to query executed through AWS cli

I want to execute a very simple query through Athena.
Query: select * from information_schema.tables
When I execute the query using the boto3 client with the following code:
import boto3

athena_client = boto3.client("athena")

def run_query(query_string):
    query_execution_context = {"Catalog": "awsdatacatalog", "Database": "information_schema"}
    response = athena_client.start_query_execution(
        QueryString=query_string,
        QueryExecutionContext=query_execution_context,
        WorkGroup="primary",
    )
    return response

query_string_get_tables = "select * from information_schema.tables"
response = run_query(query_string_get_tables)
I get back a result of 9 rows in 0.6s.
When I then go to the AWS console and rerun the same query I get back a result of 500 rows in 6s.
The result from the AWS console is correct. How can I get the same result using the boto3 client?
EDIT:
I downloaded the query history and compared the query strings. As you can see, they are exactly the same. I also removed the QueryExecutionContext from the boto3 client call, but that doesn't change anything. I have also tried all combinations of single and double quotes.
Query history:
37b72ac5-3223-496f-8293-79eab8a661a0,select * from information_schema.tables,2022-12-02T18:23:09.738-08:00,SUCCEEDED,6.503 sec,39.01 KB,Athena engine version 2,'-
9d3a274a-8109-4988-aaf8-bba9c8733208,select * from information_schema.tables,2022-12-02T18:14:11.385-08:00,SUCCEEDED,520 ms,0.67 KB,Athena engine version 2,'-
As mentioned in the comments, using boto3 takes a bit of effort: you must start_query_execution, wait for the query to complete, and then call get_query_results (https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/athena.html#Athena.Client.get_query_results).
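A minimal polling sketch of that workflow, reusing the athena_client from the question (the 0.5-second poll interval is an arbitrary choice):

import time

def run_query_blocking(query_string):
    # Start the query, then poll until Athena reports a terminal state.
    execution_id = athena_client.start_query_execution(
        QueryString=query_string,
        QueryExecutionContext={"Catalog": "awsdatacatalog", "Database": "information_schema"},
        WorkGroup="primary",
    )["QueryExecutionId"]
    state = "QUEUED"
    while state in ("QUEUED", "RUNNING"):
        time.sleep(0.5)
        status = athena_client.get_query_execution(QueryExecutionId=execution_id)
        state = status["QueryExecution"]["Status"]["State"]
    if state != "SUCCEEDED":
        raise RuntimeError("Query finished in state " + state)
    # get_query_results is paginated, so collect every page rather than only the first one.
    rows = []
    for page in athena_client.get_paginator("get_query_results").paginate(QueryExecutionId=execution_id):
        rows.extend(page["ResultSet"]["Rows"])
    return rows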
To make your life easier, you can use the open-source library awswrangler (now AWS SDK for pandas). With this library you can get the results in a blocking manner:
import awswrangler as wr

# Retrieving the data from Amazon Athena (blocks until the query has finished)
df = wr.athena.read_sql_query("SELECT * FROM my_table", database="my_db")

How to deal with django migration postgres deadlock?

So, I was deploying the Django app to production when the infamous Postgres deadlock situation happened. This happens during the Django migration.
Django version: 3.2
Postgres 13
Google Cloud SQL for Postgres.
OperationalError
deadlock detected
DETAIL: Process 112506 waits for AccessExclusiveLock on relation 29425 of database 27145; blocked by process 112926.
Process 112926 waits for RowShareLock on relation 29381 of database 27145; blocked by process 112506.
HINT: See server log for query details.
I ran this query to get the process info:
SELECT 29425::regclass,29381::regclass from pg_locks;
and result:
regclass            | regclass
"custom_requestlog" | "auth_user"
"custom_requestlog" | "auth_user"
I am not sure how to proceed, as pgaudit has been enabled but doesn't show anything, and Query Insights is not that helpful either. Attached is an image of Query Insights.
Any help would be appreciated!
Update:
The Log Explorer in Google Cloud gave this query just after the deadlock detected error:
2022-04-29 13:51:36.590 UTC [6445]: [732-1] db=xyz_prod,user=backend-prod DETAIL: Process 6445 waits for AccessExclusiveLock on relation 29425 of database 27145; blocked by process 9249.
Process 9249 waits for RowShareLock on relation 29381 of database 27145; blocked by process 6445.
Process 6445: SET CONSTRAINTS "custom_requestlog_user_id_3ff3f1cf_fk_some_user_id" IMMEDIATE; ALTER TABLE "custom_requestlog" DROP CONSTRAINT "custom_requestlog_user_id_3ff3f1cf_fk_some_user_id"
Process 9249: INSERT INTO "custom_requestlog" ("user_id", "ip_addr", "url", "session_key", "method", "headers", "query", "body", "cookies", "timestamp", "status_code", "response_snippet") VALUES (NULL, 'xx.xxx.xx.xxx'::inet, '/version/', NULL, 'GET', '{"HTTP_HOST": "api.some.com", "HTTP_ACCEPT": "*/*", "HTTP_ACCEPT_ENCODING": "deflate, gzip", "HTTP_USER_AGENT": "GoogleStackdriverMonitoring-UptimeChecks(https://cloud.google.com/monitoring)", "HTTP_X_CLOUD_TRACE_CONTEXT": "xxxxxx/9771676669485105781", "HTTP_VIA": "1.1 google", "HTTP_X_FORWARDED_FOR": "xx.xxx.xx.xxx, xx.xxx.xx.xxx", "HTTP_X_FORWARDED_PROTO": "https", "HTTP_CONNECTION": "Keep-Alive"}', '{}', '\x'::bytea, '{}', '2022-04-29T13:48:46.844830+00:00'::timestamptz, 200, NULL) RETURNING "custom_requestlog"."id"
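The log pins down the conflict: the migration's ALTER TABLE ... DROP CONSTRAINT needs an AccessExclusiveLock on custom_requestlog while a concurrent request-log INSERT holds locks it conflicts with. One common mitigation, sketched here as a general technique rather than a confirmed fix for this case, is to set a lock_timeout inside the migration so the ALTER TABLE aborts quickly and can be retried during a quiet window instead of deadlocking:

# Hypothetical migration: the app label and dependency name are placeholders.
from django.db import migrations

class Migration(migrations.Migration):
    atomic = True  # keeps SET LOCAL scoped to this migration's transaction
    dependencies = [("custom", "0001_initial")]
    operations = [
        # Abort the schema change after 5s of waiting instead of deadlocking.
        migrations.RunSQL("SET LOCAL lock_timeout = '5s';", migrations.RunSQL.noop),
        # ... the original DROP CONSTRAINT / field alteration operations follow ...
    ]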

Amazon Listing Data Import

I am trying to import the product titles and review ratings from a listing into a Google Spreadsheet. I tried the IMPORTXML function with an XPath query, but that does not work. So I tried the code mentioned below, and it worked. I have been able to get the listing data, but sometimes it gives me an error instead of displaying the data.
Error:
Request failed for https://www.amazon.co.uk returned code 503. Truncated server response: For information about migrating to ... (use muteHttpExceptions option to examine full response). (line 2).
When I rerun the code, or when I add/remove https:// from the URL, it works again, but when I refresh the sheet it sometimes fails and displays the error.
Question:
Is there any way to get rid of the error?
While trying to get the star rating displayed on the sheet: the rating is stored in a span element with a data-hook attribute, and I am unable to retrieve it. Is there any way to retrieve the star rating as well?
This is the function that I have created to get the product title and other data:
function productTitle(url) {
  // Fetch the listing page and extract the contents of the <span id="productTitle"> element.
  var content = UrlFetchApp.fetch(url).getContentText();
  var match = content.match(/<span id="productTitle".*>([^<]*)<\/span>/);
  return match && match[1] ? match[1].trim() : 'Title not found';
}
You are receiving an HTTP status code of 503, which means the service you are trying to reach is either under maintenance or overloaded.
This is on Amazon's side. You should use the Amazon API instead of the public endpoints for processing this data.

Querying Couchbase Bucket from Postman - Unrecognized parameter in request

Using the Postman tool, I'm trying to query a Couchbase bucket. I'm getting an error response 1065 that there is an "unrecognized parameter in request". The query will work fine within the Couchbase workbench, but I need to be able to do this from Postman.
I'm making a POST request with this body:
{
"statement" : "SELECT * from `myBucketName` where id = "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee""
}
The error message is:
"msg": "Unrecognized parameter in request: {\n\"statement\" : \"SELECT from `myBuckeyName` where _id "
I think this is just an issue with how my request body is formatted; I'm new to this and not sure how it should be formatted, based on the error I'm getting.
Here's how I did it:
Open Postman
Select POST
Use a URL of http://localhost:8093/query/service
Under "Authorization", use Basic Auth (with the username/password you've created for Couchbase)
Under "Body", select "x-www-form-urlencoded"
Add a KEY of statement and a value of your query
Click SEND
I got a 200 OK response with the results of the query.
You should definitely check out the Couchbase REST API documentation. It uses curl instead of Postman, but that's a translation you'll eventually get used to.
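As for the JSON body in the question, the "unrecognized parameter" comes from the unescaped inner double quotes, which truncate the statement. N1QL string literals can use single quotes instead, so they nest cleanly inside the JSON. A sketch using Python's requests library, where the URL, credentials, and bucket name are placeholders:

import requests

# A single-quoted N1QL string literal nests inside the JSON double quotes without escaping.
body = {"statement": "SELECT * FROM `myBucketName` WHERE id = 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee'"}
response = requests.post(
    "http://localhost:8093/query/service",
    json=body,  # sends Content-Type: application/json
    auth=("username", "password"),
)
print(response.json())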

error using django-admin with django-mssql

I am using Django 1.6 with django-mssql.
My django admin site works perfectly except for one part.
Whenever I click an "add user" link or otherwise navigate to /admin/auth/user/add, I get the following error:
(-2147352567, 'Exception occurred.', (0, u'Microsoft OLE DB Provider for SQL Server', u'Cannot issue SAVE TRANSACTION when there is no active transaction.', None, 0, -2147467259), None)
Command:
SAVE TRANSACTION [s3612_x1]
Parameters:
[]
I've tried various settings in my database config, such as "use_mars": true, but there has been no change.
I am able to create users in code by using the User model without issue.
I have not encountered this error anywhere else.
Fixed by setting 'ATOMIC_REQUESTS': True for my database connection, which I should have been doing anyway.
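For reference, a minimal sketch of where that flag lives; the engine matches django-mssql, but the database name, host, and credentials are placeholders:

# settings.py
DATABASES = {
    "default": {
        "ENGINE": "sqlserver_ado",      # django-mssql backend
        "NAME": "mydb",
        "HOST": r"localhost\SQLEXPRESS",
        "USER": "django_user",
        "PASSWORD": "secret",
        "ATOMIC_REQUESTS": True,        # wrap each request in a single transaction
    }
}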