WSO2 DSS: maximum length of SQL statement / compound SQL statement (SQL Server)

I have created a compound SQL statement as the query for a data service. It works when the statement is about 600 characters long.
When I add more functionality to the SQL statement (bringing it to roughly 3500 characters), the whole service becomes invalid after saving.
Is there a maximum length for the SQL statement that can be used in a data service?
Kind regards
Tore

Related

Apache Superset [v2.0.0]: Row Level Security not working in SQL Lab

How do we get ROW LEVEL SECURITY working in Superset [v2.0.0]?
I have tried many things, but I am stuck getting it to work within SQL Lab. See this bug: https://github.com/apache/superset/issues/20774
If I set RLS_IN_SQLLAB = True in superset/config.py and try to run the query SELECT * FROM FLIGHTS, it picks up the WHERE clause "AIRLINE" = 'AS' but still fails because it prefixes the column as public.flights.AIRLINE, which is not a valid column name.
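Roughly sketched (the exact quoting Superset generates may differ), the query effectively becomes:
-- query typed in SQL Lab
SELECT * FROM FLIGHTS;
-- query after the RLS clause is injected; the prefixed identifier is what the database rejects
SELECT * FROM FLIGHTS WHERE public.flights.AIRLINE = 'AS';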
SQL Lab in Apache Superset does not support row-level security. SQL Lab is meant for superadmins who have access to all data.

GROUP BY incompatible with Google Cloud SQL

I experimented with creating several queries on Google Cloud SQL, as shown in the picture, but the result is:
ERROR 1055 (42000): Expression # 5 of SELECT list is not in GROUP BY
clause and contains nonaggregated column 'ipol.sales.name' which is
not functionally dependent on columns in GROUP BY clause; this is
incompatible with sql_mode = only_full_group_by
In the Cloud SQL instance, go to the Edit option and add the database flag sql_mode = TRADITIONAL.
You cannot SET GLOBAL unless you're logged in as the root (or equivalent) user.
Also, the GROUP BY clause itself is invalid; fix the statement before running it, whatever expression #5 may be (just count the items in the SELECT list).
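Since the original query is only visible in the picture, here is a hypothetical sketch of the two usual fixes, using the ipol.sales table and name column from the error message (customer_id and amount are made-up column names for illustration):
-- option 1: add the nonaggregated column to the GROUP BY clause
SELECT customer_id, name, SUM(amount)
FROM ipol.sales
GROUP BY customer_id, name;
-- option 2: wrap the column in ANY_VALUE() so ONLY_FULL_GROUP_BY accepts it (MySQL 5.7+)
SELECT customer_id, ANY_VALUE(name), SUM(amount)
FROM ipol.sales
GROUP BY customer_id;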
This comes from sql_mode = ONLY_FULL_GROUP_BY.
If you run this query in your database:
SELECT version(), @@sql_mode;
you will get:
version(): 8.0.26-google
@@sql_mode: ONLY_FULL_GROUP_BY,STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_ENGINE_SUBSTITUTION
The ONLY_FULL_GROUP_BY value in @@sql_mode is what causes the problem.
Edit your Cloud SQL instance via the GCP GUI, add the sql_mode flag, select all of the values currently in @@sql_mode except ONLY_FULL_GROUP_BY, and save.
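If you cannot or do not want to change the instance flag, you can also drop the mode per session (a sketch, listing the modes from the output above minus ONLY_FULL_GROUP_BY):
-- check the current mode
SELECT @@sql_mode;
-- drop ONLY_FULL_GROUP_BY for the current session only
SET SESSION sql_mode = 'STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_ENGINE_SUBSTITUTION';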

Bulk Update a million rows

Suppose I have a million rows in a table. I want to flip a flag in a column from true to false. How do I do that in Spanner with a single statement?
That is, I want to achieve the following DML statement.
Update mytable set myflag=true where 1=1;
Cloud Spanner doesn't currently support DML, but we are working on a Dataflow connector (Apache Beam) that would allow you to do bulk mutations.
You can use this open source JDBC driver in combination with a standard JDBC tool such as SQuirreL or SQL Workbench. Have a look here for a short tutorial on how to use the driver with these tools: http://www.googlecloudspanner.com/2017/10/using-standard-database-tools-with.html
The JDBC driver supports both DML and DDL statements, so this statement should work out of the box:
Update mytable set myflag=true
DML statements operating on a large number of rows are supported, but the underlying transaction quotas of Cloud Spanner continue to apply (max 20,000 mutations in one transaction). You can bypass this by setting the AllowExtendedMode=true connection property (see the wiki pages of the driver). This breaks a large update into several smaller updates and executes each of these in its own transaction. You can also do this batching yourself by dividing your update statement into several different parts.
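For example, batching the update yourself could look like the sketch below, assuming the table has a numeric primary key column named id (adjust the ranges to your own key distribution and keep each range small enough to stay under the mutation limit):
-- each statement runs in its own transaction
UPDATE mytable SET myflag=true WHERE id >= 1 AND id < 10000;
UPDATE mytable SET myflag=true WHERE id >= 10000 AND id < 20000;
-- ...and so on for the remaining key ranges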

Use a SQL expression as a calculated field in Amazon QuickSight

I am trying to integrate a simple SQL expression in Amazon QuickSight, but every time I use the calculated field I get an error stating that the methods used are not valid.
Amazon QuickSight does not let me use aggregate functions:
ROUND((SUM(CASE WHEN dyn_boolean THEN 1 ELSE 0 END) * 100.0) / COUNT(session_unique), 2)
I know that I can change the CASE into an ifelse, but that does not solve the entire problem.
If you want the full power of SQL while preparing data, use the custom SQL option when creating the data set:
'New Data Set' -> 'FROM EXISTING DATA SOURCES' -> 'Create Data Set' -> 'Edit and Preview Data' -> 'Switch to custom SQL tool'
You can write custom SQL of your choice there.
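As a sketch, the aggregate from the question could be computed in the custom SQL instead of in a calculated field (my_source_table, some_group_column and success_rate are illustrative names; dyn_boolean and session_unique come from the expression above):
SELECT some_group_column,
       ROUND((SUM(CASE WHEN dyn_boolean THEN 1 ELSE 0 END) * 100.0) / COUNT(session_unique), 2) AS success_rate
FROM my_source_table
GROUP BY some_group_column;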

WSO2 DSS and MS SQL Server: service works extremely slowly

I was faced with very strange behavior of Data Services Server (v3.5.0).
I prepared a quite simple service with some resources for an MS SQL Server RDBMS data source. When I call one of the resources with a simple SELECT query, I get the answer after 6 seconds (six seconds).
The same SELECT in MS SQL Server Management Studio returns data after 15-100 ms.
The same SELECT in the WSO2 DSS Database Explorer returns data after at most 15 ms.
The same SELECT in NetBeans returns data after at most 100 ms.
I tried the jTDS and Microsoft drivers. The result is the same.
Everywhere except the DSS service I get the answer in at most 100 milliseconds. The result of these queries is very small: 6-8 rows with about 10 columns.
What is the reason for such behavior?
Could somebody help me?
The reason is very, very strange.
I tried a simple query like:
SELECT a1,a2 .... FROM someView WHERE a1=:parameter
When the parameter is of type STRING (which I think is passed as a PreparedStatement parameter), the query returns results in about 4000 milliseconds (???).
If the parameter is of type QUERY_STRING, the query returns results in 10 milliseconds (!!!). The result is very small: about 10 rows with 5 columns.
Time was measured with net.sf.log4jdbc.DriverSpy, but I get the same difference with the pure Microsoft JDBC and jTDS drivers.
Why such a big difference? 400 times faster?
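As an illustration of what I understand the two parameter types to do (a sketch; 'someValue' is just a placeholder):
-- STRING parameter: the value is bound to a prepared-statement placeholder
SELECT a1, a2 FROM someView WHERE a1 = ?
-- QUERY_STRING parameter: the value is spliced into the SQL text before execution
SELECT a1, a2 FROM someView WHERE a1 = 'someValue'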
One question remains: what is to blame?
WSO2 DSS software?
JDBC driver?
SQL Server?