SQLite How to use LIKE on a column and add wild cards - c++

I'm trying to compare 2 similar columns from 2 tables in SQLite. I wrote this query, which doesn't throw an error but also doesn't return anything, and I know it should:
SELECT t1.* from t1, t2 Where t1.col1 Like '%'||t2.col1||'%';
I'm looking for when t2.col1 is embedded inside of t1.col1. Both columns are of type TEXT.
Note: I'm using the C-Interface in C++ with Visual Studio 2010.
Ideas?
Edit:
I've played around with pulling a value out of t2.col1 that matches something in t1.col1 and writing something like this,
SELECT t1.* from t1, t2 Where t1.col1 Like '%ValueInT1%';
which works and returns something.
Is there a bug in SQLite when concatenating the '%' character in a LIKE statement, or is there a different syntax I should be using? I'm at a loss as to why this isn't working.
I've also seen an SQL function called Locate which people use in different Databases to do this kind of check. Does SQLite have a Locate function?
EDIT 2:
I've run the first SQL statement in SQLite Administrator with some of my data and it DOES find it. Could there be a simpler problem? There are '_' characters in the data; could that be causing a problem with the LIKE?

(V.V) The answer was that I was joining on the wrong column in the real table. Two columns had very similar names and I was checking the wrong one.
Thanks for the help.
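For anyone landing here: the concatenated-wildcard query from the question is valid SQLite and does work. A quick sketch using Python's built-in sqlite3 module, with the table/column names from the question but made-up data values:

```python
import sqlite3

# Check that LIKE '%' || t2.col1 || '%' matches when t2.col1 is
# embedded inside t1.col1, exactly as the question's query intends.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t1 (col1 TEXT)")
con.execute("CREATE TABLE t2 (col1 TEXT)")
con.execute("INSERT INTO t1 VALUES ('abc_def'), ('xyz')")
con.execute("INSERT INTO t2 VALUES ('c_d')")

rows = con.execute(
    "SELECT t1.* FROM t1, t2 WHERE t1.col1 LIKE '%' || t2.col1 || '%'"
).fetchall()
print(rows)  # -> [('abc_def',)]
```

Note that '_' in a LIKE pattern is a single-character wildcard, so underscores in t2.col1 can only make the pattern match more rows, never fewer. Regarding LOCATE: SQLite ships an instr(X, Y) function (since 3.7.15) that returns the 1-based position of Y within X, which plays the same role.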

Related

Classic report issue with multiple inputs with IN statement

I am trying to refresh a report with a dynamic action and get the following errors:
{'dialogue': {'uv': true, 'line': [{'V': "failure of Widget
ORA-20876: stop the engine of the APEX.
classic_report"}]}}
I think it's an issue with the bind variable being a string, which means the SQL query can't handle ST.ID IN (:P11_ROW_PK).
Please suggest a workaround for the same.
This question builds on the context you've provided in https://stackoverflow.com/a/63627447/527513
If P11_ROW_PK is a delimited list of IDs, then you must structure your query accordingly, not expect the IN statement to deconstruct a bind variable containing a string.
Try this instead
select * from your_table st
where st.id in (select column_value from table(apex_string.split(:P11_ROW_PK, ':')))
(The second argument to apex_string.split is the delimiter; multi-value APEX page items are colon-separated, and the collection it returns is queried through the table() operator.)
where REGEXP_LIKE(CUSTOMER_ID, '^('|| REPLACE(:P4_SEARCH,',','|') ||')$')
The REGEXP_LIKE code above behaves the same as the APEX_STRING approach, and is an option if you are on an older APEX version (note it assumes a comma-delimited list).
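The underlying issue is not APEX-specific: a bind variable is a single scalar value, so IN (:P11_ROW_PK) compares each ID against the whole string. A sketch of both behaviours, using Python's built-in sqlite3 as a stand-in database (table name and values are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE your_table (id INTEGER)")
con.executemany("INSERT INTO your_table VALUES (?)", [(1,), (2,), (3,)])

bound = "1,2"  # a delimited list arriving as a single string, like :P11_ROW_PK

# The IN clause treats the bind as one value -> no rows match.
no_rows = con.execute(
    "SELECT id FROM your_table WHERE id IN (?)", (bound,)
).fetchall()
print(no_rows)  # -> []

# Splitting the string first (the job apex_string.split does) works.
ids = [int(x) for x in bound.split(",")]
placeholders = ",".join("?" * len(ids))
rows = con.execute(
    f"SELECT id FROM your_table WHERE id IN ({placeholders})", ids
).fetchall()
print(rows)  # -> [(1,), (2,)]
```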

Amazon Athena: no viable alternative at input

While creating a table in Athena, I get the following exception:
no viable alternative at input
Hyphens are not allowed in table names (though the wizard allows them). Just remove the hyphen and it works like a charm.
Unfortunately, at the moment the syntax validation error messages in Athena are not very descriptive; this error may mean almost any possible syntax error in the CREATE TABLE statement. Although this is annoying, for the moment you will need to check whether your syntax follows the CREATE TABLE documentation.
Some examples are:
Backticks not in place (as already pointed out)
Missing/extra commas (remember that the last column doesn't need a comma after its definition)
Missing spaces
More ..
This error generally occurs when the DDL syntax has some silly errors. There are several answers here that explain different errors depending on the situation. The simple solution to this problem is to patiently look through the DDL and verify the following points line by line:
Missing commas
Unbalanced backticks (`)
Data types not supported by Hive (see the Hive data types reference)
Unbalanced commas
Hyphens in the table name
In my case, it was because of a trailing comma after the last column in the table. For example:
CREATE EXTERNAL TABLE IF NOT EXISTS my_table (
one STRING,
two STRING,
) LOCATION 's3://my-bucket/some/path';
After I removed the comma at the end of two STRING, it worked fine.
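The trailing-comma mistake is easy to reproduce outside Athena, since most SQL parsers reject it the same way; here is a sketch using Python's built-in sqlite3 as a stand-in, with the column names from the example above:

```python
import sqlite3

con = sqlite3.connect(":memory:")

# A trailing comma after the last column is a syntax error in SQLite too.
try:
    con.execute("CREATE TABLE my_table (one TEXT, two TEXT,)")
    failed = False
except sqlite3.OperationalError:
    failed = True
print(failed)  # -> True

# Without the trailing comma the statement parses fine.
con.execute("CREATE TABLE my_table (one TEXT, two TEXT)")
```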
My case: it was an external table and the location had a typo (hence it didn't exist).
Couple of tips:
Click the "Format query" button so you can spot errors easily
Use the example at the bottom of the documentation - it works - and modify it with your parameters: https://docs.aws.amazon.com/athena/latest/ug/create-table.html
Slashes. Mine was slashes. I had the DDL from Athena, saved as a Python string.
WITH SERDEPROPERTIES (
'escapeChar'='\\',
'quoteChar'='\"',
'separatorChar'=',')
was changed to
WITH SERDEPROPERTIES (
'escapeChar'='\',
'quoteChar'='"',
'separatorChar'=',')
And everything fell apart.
Had to make it:
WITH SERDEPROPERTIES (
'escapeChar'='\\\\',
'quoteChar'='\\\"',
'separatorChar'=',')
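What happened above is ordinary Python escape processing: in a regular string literal \\ collapses to one backslash and \" to a quote, which silently rewrites the SERDEPROPERTIES before Athena ever sees them. A raw string sidesteps the problem without double-escaping; a minimal sketch:

```python
# In a normal string literal, \\ becomes a single backslash and \" a
# plain quote, so the DDL text is silently altered.
normal = "'escapeChar'='\\', 'quoteChar'='\"'"
print(normal)  # -> 'escapeChar'='\', 'quoteChar'='"'

# A raw string keeps the backslashes exactly as typed, so DDL copied
# from Athena can be pasted in unchanged, with no double-escaping.
raw = r"'escapeChar'='\\', 'quoteChar'='\"'"
print(raw)     # -> 'escapeChar'='\\', 'quoteChar'='\"'
```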
In my case, it was an extra comma in PARTITIONED BY section,
In my case, I was missing the singlequotes for the S3 URL
In my case, it was that one of the table column names was enclosed in single quotes, as per the AWS documentation :( ('bucket')
As other users have noted, the standard syntax validation error message that Athena provides is not particularly helpful. Thoroughly checking the required DDL syntax (see HIVE data types reference) that other users have mentioned can be pretty tedious since it is fairly extensive.
So, an additional troubleshooting trick is to let AWS's own data parsing engine (AWS Glue) give you a hint about where your DDL may be off. The idea here is to let AWS Glue parse the data using its own internal rules and then show you where you may have made your mistake.
Specifically, here are the steps that worked for me to troubleshoot my DDL statement, which was giving me lots of trouble:
create a data crawler in AWS Glue; AWS and lots of other places go through the very detailed steps this requires so I won't repeat it here
point the crawler to the same data that you wanted (but failed) to upload into Athena
set the crawler output to a table (in an Athena database you've already created)
run the crawler and wait for the table with populated data to be created
find the newly-created table in the Athena Query Editor tab, click on the three vertical dots (...), and select "Generate Create Table DDL":
this will make Athena create the DDL for this table that is guaranteed to be valid (since the table was already created using that DDL)
take a look at this DDL and see if/where/how it differs from the DDL that you originally wrote. Naturally, this automatically-generated DDL will not have the exact choices for the data types that you may find useful, but at least you will know that it is 100% valid
finally, update your DDL based on this new Glue/Athena-generated DDL, adjusting the column/field names and data types for your particular use case
After searching and following all the good answers here, my issue was that, working in Node.js, I needed to remove the optional
ESCAPED BY '\' used in the Row settings to get my query to work. Hope this helps others.
Something that wasn't obvious to me the first time I used the UI: if you get an error in the create table "wizard", you can cancel, and the failed query should appear in a new query window for you to edit and fix.
My database had a hyphen, so I added backticks in the query and reran it.
This happened to me due to having comments in the query.
I realized this was a possibility when I tried the "Format Query" button and it turned the entire thing into almost 1 line, mostly commented out. My guess is that the query parser runs this formatter before sending the query to Athena.
Removed the comments, ran the query, and an angel got its wings!

Can I disable QUERY OPTIMIZATION for MQT in DB2 LUW without recreating it?

UPDATE from 2021
This question is no longer relevant for me.
It was a short period when I worked with DB2, and I don't know how things are in recent versions.
The problem was: I could not test the effect of an MQT without rebuilding it,
which was not practical when dealing with multi-GB data.
I did not find a solution earlier, and I don't know why the question was downvoted.
SO recommends not deleting questions with answers, and who knows: maybe somebody will finally answer it.
I have an MQT in DB2 10.5 LUW:
CREATE TABLE MyMQT AS(
SELECT * FROM MyTable
WHERE
ServerName = 'COL'
AND LASTOCCURRENCE > TIMESTAMP '2015-12-21 00:00:00'
)
DATA INITIALLY DEFERRED REFRESH immediate
ENABLE QUERY OPTIMIZATION
MAINTAINED BY SYSTEM;
I want to DISABLE QUERY OPTIMIZATION without DROP/CREATE.
I found "Altering materialized query table properties" https://www-01.ibm.com/support/knowledgecenter/SSEPEK_10.0.0/com.ibm.db2z10.doc.admin/src/tpc/db2z_changemqtableattribs.html
but this is for z/OS.
If I try:
ALTER TABLE MyMQT DISABLE QUERY OPTIMIZATION;
I get:
DB21034E The command was processed as an SQL statement because it was not a
valid Command Line Processor command. During SQL processing it returned:
SQL0104N An unexpected token "TABLE" was found following "ALTER ". Expected
tokens may include: "VIEW". SQLSTATE=42601
The LUW documentation explains how to change an MQT into a regular table and vice versa.
Can I alter MQT options in DB2 LUW without recreating it?
Edit
It's quite strange, but it looks like this is impossible to achieve in DB2 LUW.
As data_henrik mentioned, it's possible to disable/enable optimization for all MQTs.
I accept his answer although it's not quite what I was looking for.
No personal experience with it, but you could:
SET CURRENT MAINTAINED TABLE TYPES FOR OPTIMIZATION = NONE
This would tell DB2 to not consider any MQT. Later on you would enable query optimization by setting that variable to "system" (the default) or something else. That statement is documented here.
Try this: the DISABLE QUERY OPTIMIZATION clause belongs to the refreshable-table-options syntax:
refreshable-table-options
|--●--DATA INITIALLY DEFERRED--●--REFRESH--+-DEFERRED--+--●----->
                                           '-IMMEDIATE-'

   .-ENABLE QUERY OPTIMIZATION--.
>--+----------------------------+--●---------------------------->
   '-DISABLE QUERY OPTIMIZATION-'

Executing a "long-length" Oracle command in C++

Problem: I have a C++ application that executes different Oracle commands. My application can execute this SQL statement the following way (I have omitted error checking and a few earlier steps):
strcpy(szProcName,"select grantee, granted_role from DBA_ROLE_PRIVS;");
rc=SQLPrepare(sqlc.g_hstmt,(SQLCHAR*)szProcName,(SQLINTEGER)strlen(szProcName));
rc = SQLExecute(sqlc.g_hstmt);
The select statement's data is placed/bound into an MFC List Control. This works without problem...
The issue comes when I try to execute long-length select statements.
I now wish to use the same method, but to run this long SQL statement:
SELECT a.GRANTEE, a.granted_role as "Connect", b.granted_role as "APPUSER" FROM
(SELECT GRANTEE, granted_role from DBA_ROLE_PRIVS where GRANTED_ROLE = 'CONNECT') a
FULL OUTER JOIN
(SELECT GRANTEE, granted_role from DBA_ROLE_PRIVS where GRANTED_ROLE = 'APPUSER') b
ON a.GRANTEE=b.GRANTEE;
Setting that entire statement into szProcName seems like the wrong way to go about things.
What I have tried: I tried to add all the SQL text into szProcName, but it does not fit and makes the code terribly messy. I also thought to create a Stored Procedure to call in C++. The Stored Procedure requires that I use an INTO clause and does not produce a table that I can use in C++. Is there a better way to do this?
Edit: I have found one working way. By increasing szProcName's size and using strcat(), I can add each line and then execute. I still wonder if there is a more appropriate way, especially if my statements become any larger (which they probably will).
I have found one working way. By increasing szProcName's size and using strcat(), I can add each line and then execute. I still wonder if there is a more appropriate way, especially if my statements become any larger (which they have).
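A side note on the string-building itself: C, C++, and Python all concatenate adjacent string literals at compile time, so a long statement can be kept one source line per SQL line with no runtime strcat() calls at all (C++11 raw string literals, R"(...)", go a step further and remove the per-line quoting too). A sketch in Python using the query from the question:

```python
# Adjacent string literals are merged at compile time, so this is a
# single string with no runtime concatenation, mirroring how the same
# query could be written as adjacent "..." literals in C or C++.
query = ("SELECT a.GRANTEE, a.granted_role AS \"Connect\", "
         "b.granted_role AS \"APPUSER\" "
         "FROM (SELECT GRANTEE, granted_role FROM DBA_ROLE_PRIVS "
         "WHERE GRANTED_ROLE = 'CONNECT') a "
         "FULL OUTER JOIN "
         "(SELECT GRANTEE, granted_role FROM DBA_ROLE_PRIVS "
         "WHERE GRANTED_ROLE = 'APPUSER') b "
         "ON a.GRANTEE = b.GRANTEE")

print("FULL OUTER JOIN" in query)  # -> True
```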

How to reindex or fix rowid after a row is deleted sqlite?

Alright, I am in a rather difficult situation, or at least I think so anyway. I have been doing some research on how to fix my problem but have really come up empty-handed.
I need to be able to reindex the rowid of my table after I delete a row. That way at any given time when I want to update or index a row by the rowid it is accessing the correct one.
Now, for those of you asking why: basically I am interfacing with a "homebrewed" DB that was programmed in C and is really just a bunch of memory locations all accessed as if they were a DB table. So what I'm trying to say is they can look up a row by searching for a value in the table, or by simply saying "I want row 6". Lastly, the table could consist of really anything, and any values, which means they don't create a column as an index, and ultimately the only way for me to index their rows by row number is the rowid, to my knowledge.
So I have found that VACUUM would do what I want or need, but it appears that the system the database is on isn't giving SQLite privileges to write, so when VACUUM is run it comes back with an error (error 14, "unable to open the database file"; I also know that my DB is open, so that isn't the issue, and not having write privileges is the only reason I can come up with). I have also read some stuff about AUTOINCREMENT, but didn't really understand it or think it was going to fix my problem.
Any suggestions or ideas from the SQLite or database geniuses out there would be appreciated.
Not sure if I have completely understood your problem, but if you can use SQL code, maybe you can write a query to update the IDs (so they end up in dense order).
You can use a query like this:
UPDATE t1
SET id = (SELECT rank
          FROM (SELECT id,
                       (SELECT count() + 1
                        FROM (SELECT DISTINCT id
                              FROM t1 AS t
                              WHERE t.id < t1.id)
                       ) AS rank
                FROM t1) AS sub
          WHERE sub.id = t1.id);
You can check my demo on SQLFiddle. In the demo you will see the result of the DELETE and UPDATE statements (simulating your case) if you run all the queries together.
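The delete-then-renumber flow can also be sketched end-to-end with Python's built-in sqlite3; the table and column names (t1, id) are the ones used in the answer above, and the data is made up:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t1 (id INTEGER, val TEXT)")
con.executemany("INSERT INTO t1 (id, val) VALUES (?, ?)",
                [(1, "a"), (2, "b"), (3, "c"), (4, "d")])

con.execute("DELETE FROM t1 WHERE id = 2")  # leaves a gap: 1, 3, 4

# Renumber the surviving rows into a dense 1..N sequence: each row's
# new id is the count of distinct smaller ids, plus one.
con.execute("""
    UPDATE t1
    SET id = (SELECT rank
              FROM (SELECT id,
                           (SELECT count() + 1
                            FROM (SELECT DISTINCT id
                                  FROM t1 AS t
                                  WHERE t.id < t1.id)
                           ) AS rank
                    FROM t1) AS sub
              WHERE sub.id = t1.id)
""")

print(con.execute("SELECT id, val FROM t1 ORDER BY id").fetchall())
# -> [(1, 'a'), (2, 'c'), (3, 'd')]
```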