Setting AnsiWarnings to 'OFF' is not supported on Azure Data Warehouse - azure-sqldw

I get the message below when executing SET ANSI_WARNINGS OFF.
Msg 104409, Level 16, State 1, Line 2
Setting AnsiWarnings to 'OFF' is not supported.
A similar message appears with SET ARITHABORT OFF. Setting the options to ON executes without error. Everything I can find on Microsoft's websites indicates this is supported on Azure SQL Data Warehouse.
My goal is to suppress division-by-zero errors without requiring users to change their SQL. We've used this option successfully on SQL Server databases for years.

SQL Data Warehouse only supports setting these values to ON. If you set them to ON (SET ANSI_WARNINGS ON) you aren't changing the behavior, since ON is already the only supported value. When you try to set them to OFF, you get the error above, as expected.
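For reference, this is easy to confirm against the data warehouse (T-SQL; the OFF statements are the ones that fail):

-- Supported: ON is already the only allowed value, so these are effectively no-ops
SET ANSI_WARNINGS ON;
SET ARITHABORT ON;

-- Not supported on Azure SQL Data Warehouse; raises Msg 104409
-- SET ANSI_WARNINGS OFF;
-- SET ARITHABORT OFF;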

Related

How to solve QuickSight SQL exception timeout error

I have created a VIEW in Amazon Athena using:
CREATE OR REPLACE VIEW sentiment_analysis AS
SELECT
    file,
    Sentiment,
    SentimentScore.Positive AS Positive,
    SentimentScore.Negative AS Negative,
    SentimentScore.Neutral AS Neutral,
    SentimentScore.Mixed AS Mixed
FROM "targeted_sentiment_output"."sentiment_results"
The VIEW works and populates with the data.
I am now trying to load this into Amazon QuickSight, but get the following error:
Your database generated a SQL exception. This can be caused by query timeouts, resource constraints, unexpected DDL alterations before or during a query, and other database errors. Check your database settings and your query, and try again.
I think it may be a timeout error. I have tried to find the Advanced Settings tab to increase the query timeout limit but can't find it. Could you please provide clear instructions on where to locate the timeout setting?
If it is a different error, then please explain...
Thanks!

GCP Cloud SQL denies permission for pre-aggregation

I am trying to use pre-aggregations over Cloud SQL on Google Cloud Platform, but the database denies access with the error "Statement violates GTID consistency".
Any help is appreciated.
Cube.js builds pre-aggregations with CREATE TABLE ... SELECT, but you are running MySQL on Google Cloud SQL with --enforce-gtid-consistency, which has limitations.
Since only transactionally safe statements can be logged, CREATE TABLE ... SELECT (and some other SQL) is not allowed, because that statement is actually logged as two separate events.
There are two ways to solve this issue:
1. Use pre-aggregations with an external database (recommended).
https://cube.dev/docs/pre-aggregations/#read-only-data-source-pre-aggregations
2. Use the undocumented flag loadPreAggregationWithoutMetaLock.
Attention: this flag is experimental and can be removed or changed in the future.
Take a look at the source code
You can pass it directly in the driver constructor. This will produce two SQL statements to work around the limitation:
CREATE TABLE
INSERT INTO
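As a rough illustration of why the split helps, here is a minimal sketch of the GTID-safe pattern those two statements follow (pre_agg, orders, and the columns are hypothetical names, not what Cube.js actually generates):

-- Rejected under --enforce-gtid-consistency, because it is logged as two separate events:
-- CREATE TABLE pre_agg AS SELECT customer_id, SUM(amount) AS total FROM orders GROUP BY customer_id;

-- GTID-safe equivalent: DDL and DML as separate statements
CREATE TABLE pre_agg (customer_id INT, total DECIMAL(10,2));
INSERT INTO pre_agg
SELECT customer_id, SUM(amount) AS total
FROM orders
GROUP BY customer_id;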
Thanks

Why do Google Cloud SQL statistics show usage even if I am not using it?

I have not used my Cloud SQL instance for the last 7 days and I am not firing any queries. Then why does this graph show I am making 10 queries per second? What does this graph signify?
That graph signifies the number of statements executed by the server. This variable includes statements executed within stored programs.
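If you can connect to the instance directly, you can inspect the underlying MySQL counters yourself (a quick check, not specific to Cloud SQL):

SHOW GLOBAL STATUS LIKE 'Queries';    -- all statements, including those run inside stored programs
SHOW GLOBAL STATUS LIKE 'Questions';  -- only statements sent by clients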
To get a wider view of which queries your Cloud SQL server is executing, you can:
1. Enable the Cloud SQL flags general_log=ON and log_output=FILE [1].
2. Go to [2], select your Cloud SQL instance, and set All logs to cloudsql.googleapis.com/mysql-general.log.
You will then see in the Stackdriver logs the queries your server is executing behind the scenes.
You can look up these queries in the MySQL reference manual.
If you have Cloud Audit Logs enabled, that could be the reason why you are seeing these results in the graph. To disable audit logs, go to [3] and uncheck the Cloud SQL row.
Important:
1. When you set, remove, or modify a flag for a database instance, the database might be restarted. The flag value is then persisted for the instance until you remove it. If the instance is the source of a replica, the replica will also restart to align with the current configuration of the instance [1].
After looking at cloudsql.googleapis.com/mysql-general.log, you may want to disable the general_log and log_output flags to stop further charges due to the log size. I would also recommend visiting the tips for general log flags [4].
[1] https://cloud.google.com/sql/docs/mysql/flags#config
[2] https://console.cloud.google.com/logs/viewer
[3] https://console.cloud.google.com/iam-admin/audit
[4] https://cloud.google.com/sql/docs/mysql/flags#tips

Auto Create statistics in Azure SQL DW

In Azure SQL Data Warehouse I just used the T-SQL below to enable automatic statistics creation. The command ran successfully, but when I checked the database properties under the Options tab, Auto Create Statistics is still set to False.
ALTER DATABASE MyDB SET AUTO_CREATE_STATISTICS ON;
Please let me know if I'm missing something here. I also have db_owner access for the database.
I'm guessing that you are using SQL Server Management Studio.
I was able to reproduce the symptom by turning auto_create_statistics off and on.
The issue appears to be that the database metadata is cached in SSMS. Right-click the database name and select "Refresh" before selecting "Properties". Using this method, I got the correct setting for auto_create_statistics showing up each time.
My tests were done using SSMS 17.7.
(The need to refresh the database metadata can also occur when adding or removing tables, columns, etc.)
You can also query the is_auto_create_stats_on column in sys.databases.
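For example, something along these lines (assuming the database is named MyDB as in the question):

SELECT name, is_auto_create_stats_on
FROM sys.databases
WHERE name = 'MyDB';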

Track all MySQL queries in WAMP

I want to track how much time my queries need to execute.
I referred to this post, but I only get the queries without the time.
Is it possible that, after using my web application for a while (running SELECT, UPDATE, and INSERT queries through real web-application execution, not from the console), I can get a summary like the output generated by the SHOW PROFILES; command?
I am working with WAMP, MySQL v5.5.24.
Many thanks
Edit: I used triggers to track the UPDATE and INSERT statements following this method.
I still have the problem of how to track the SELECT queries.
Any ideas, please?
This no longer works.
As of July 2013, you need:
general-log=1
general-log-file = "C:\wamp\logs\mysql_general.log"
Are you sure you are not getting execution times in your slow query log?
If you are just looking to optimize your queries (rather than checking the execution time of every single one), you can look at the MySQL server status in phpMyAdmin (assuming you kept it in your WAMP server) as covered here. The full tutorial is paid, but the preview will get you into the server status page, where phpMyAdmin will point out problem areas for you.
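If per-statement timings like the SHOW PROFILES output mentioned in the question are enough, session profiling also works in MySQL 5.5 (a minimal sketch; the table name is hypothetical, and profiling only covers statements run on that connection):

SET profiling = 1;        -- enable profiling for the current session
SELECT * FROM my_table;   -- run the statements you want to time
SHOW PROFILES;            -- lists each statement with its duration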
Finally I used the general log by setting up the WAMP server like this:
[mysqld]
port=3306
# log statements that take longer than 1 second to the slow query log
long_query_time = 1
slow_query_log = 1
slow_query_log_file = "E:/wamp/logs/slowquery.log"
# general query log (logs every statement; "log" is the older MySQL 5.5 option name)
log = "E:/wamp/logs/genquery.log"
After that I used this tool (trial version), dbForge Studio, where I can use its query profiler and get the complete execution time.