Parameters on LIMIT and OFFSET not working - ColdFusion

I'm trying to implement pagination by parameterizing my LIMIT value from the request URL, but unfortunately I'm getting an error.
You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ''25'' at line 1
Here is the code for the query. I need your help badly.
queryExecute("SELECT * FROM TABLE_NAME LIMIT :limit",{limit:rc.limit});

The limit parameter is being passed as a string by default, whereas the database requires an integer value. Try specifying the type:
queryExecute("SELECT * FROM TABLE_NAME LIMIT :limit",{limit:{value:rc.limit,sqltype:"integer"}});

"We can’t parse this SQL syntax" shows up in AWS QuickSight after applying toString()

I have a calculated field which computes a total based on a particular type:
sumIf(amount, type = "sale")
Now I'm trying to convert the result to string and then concatenate some text to it, but doing toString(sumIf(amount, type = "sale")) gives the following message:
We can’t parse this SQL syntax. If you are using custom SQL, verify the syntax and try again. Otherwise, contact support.
Is there any way to make this work?
Did you try using the correct bracket type? i.e.
toString(sumIf({amount}, {type} = "sale"))
I tried an example like this and it worked fine; QuickSight can have issues when fields are referenced without the right brackets.

How do I insert a time value in Redshift in a time column?

Given the following query
ALTER TABLE public.alldatatypes ADD x_time time;
how do I insert a value into x_time?
Time appears to be a valid column type according to the documentation.
https://docs.aws.amazon.com/redshift/latest/dg/r_Datetime_types.html#r_Datetime_types-time
https://docs.aws.amazon.com/redshift/latest/dg/c_Supported_data_types.html
However, when I try to do an insert, I always get an error.
Insert query: insert into public.alldatatypes(x_time) values('08:00:00');
Error:
SQL Error [500310] [0A000]: Amazon Invalid operation:
Specified types or functions (one per INFO message) not supported on
Redshift tables.;
I do not want to use another column type.
I am testing all the column types defined in the documentation.
That cryptic error message is the one Redshift gives when you try to use a leader-node-only function as source data for a compute node. So I expect you aren't showing the exact code you ran to generate this error. I know it can seem like you didn't change anything important to the issue, but you likely have.
You see, select now(); works just fine, but insert into <table> select now(); will give the error you are showing. This is because now() is a leader-node-only function. However, insert into <table> select getdate(); works great, because getdate() is a function that runs on the compute nodes.
Now the following SQL runs just fine for me:
create table fred (ttt time);
insert into public.fred(ttt) values('01:23:00'); -- this is more correctly written values('01:23:00'::time)
insert into public.fred(ttt) select getdate()::time;
select * from fred;
While this throws the error you are getting:
insert into public.fred(ttt) select now()::time;
So if this doesn't help clear things up please post a complete test case that demonstrates the error.

Classic report issue with multiple inputs in an IN statement

I am trying to refresh a classic report with a dynamic action and get the following errors:
{"dialogue":{"uv":true,"line":[{"V":"failure of widget classic_report"}]}}
ORA-20876: Stop APEX Engine.
I think it's an issue with the string value, which the ST.ID IN (:P11_ROW_PK) condition in the SQL query can't handle.
Please suggest a workaround for the same.
This question builds on the context you provided in https://stackoverflow.com/a/63627447/527513.
If P11_ROW_PK is a delimited list of IDs, then you must structure your query accordingly; you can't expect the IN clause to deconstruct a bind variable containing a single string.
Try this instead
select * from your_table st
where st.id in (select column_value from apex_string.split(:P11_ROW_PK))
Alternatively:
where REGEXP_LIKE(CUSTOMER_ID, '^('|| REPLACE(:P4_SEARCH,',','|') ||')$')
The above behaves the same as the APEX_STRING approach and is useful if you are on an older APEX version that lacks APEX_STRING.
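For reference, a complete sketch of both approaches; your_table and st are placeholders, and I'm assuming the page item holds a colon-delimited list (the APEX default for multi-value items). Note that apex_string.split's separator defaults to a line feed, so pass it explicitly:
-- With APEX_STRING available: split the page item into rows
select *
  from your_table st
 where st.id in (select column_value
                   from table(apex_string.split(:P11_ROW_PK, ':')));
-- Without APEX_STRING: turn the delimited list into a regex alternation
select *
  from your_table st
 where regexp_like(st.id, '^(' || replace(:P11_ROW_PK, ':', '|') || ')$');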

Query hive table with Spark

I am a newbie to Apache Hive and Spark. I have some existing Hive tables sitting on my Hadoop server that I can run HQL commands against using hive or beeline, e.g. selecting the first 5 rows of my table. Instead of that, I want to use Spark to achieve the same goal. The Spark version on the server is 1.6.3.
Using the code below (where I have replaced my real database and table names with database and table):
sc = SparkContext(conf = config)
sqlContext = HiveContext(sc)
query = sqlContext.createDataFrame(sqlContext.sql("SELECT * from database.table LIMIT 5").collect())
df = query.toPandas()
df.show()
I get this error:
ValueError: Some of types cannot be determined after inferring.
Error:root: An unexpected error occurred while tokenizing input
The following traceback may be corrupted or invalid
The error message is: ('EOF in multi-line string', (1, 0))
However, I can use beeline with same query and see the results.
After a day of googling and searching I modified the code as:
table_ccx = sqlContext.table("database.table")
table_ccx.registerTempTable("temp")
sqlContext.sql("SELECT * FROM temp LIMIT 5").show()
Now the error is gone but all the row values are null except one or two dates and column names.
I also tried
table_ccx.refreshTable("database.table")
and it did not help. Is there a setting or configuration that I need to ask my IT team to do? I appreciate any help.
EDIT: Having said that, my Python code works for some of the tables on Hadoop. Could the problem be caused by some entries in the table? If so, how come the corresponding beeline/Hive command works?
As it came out in the comments, straightening up the code a little bit makes the thing work.
The problem lies on this line of code:
query = sqlContext.createDataFrame(sqlContext.sql("SELECT * from database.table LIMIT 5").collect())
What you are doing here is:
asking Spark to query the data source (which creates a DataFrame)
collecting everything on the driver as a local collection
re-parallelizing the local collection on Spark with createDataFrame
In general the approach should work, although it's evidently unnecessarily convoluted.
The following will do:
query = sqlContext.sql("SELECT * from database.table LIMIT 5")
I'm not entirely sure why the collect/createDataFrame round trip breaks your code, but it does (as came out in the comments), and removing it also improves the code.
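For completeness, a minimal end-to-end sketch for Spark 1.6; database.table is a placeholder as in the question, and the app name is made up:
from pyspark import SparkConf, SparkContext
from pyspark.sql import HiveContext

conf = SparkConf().setAppName("hive-query-example")  # hypothetical app name
sc = SparkContext(conf=conf)
sqlContext = HiveContext(sc)

# Run the query and keep the result distributed: no collect()/createDataFrame round trip
query = sqlContext.sql("SELECT * FROM database.table LIMIT 5")
query.show()

# Convert to pandas only if a local copy is really needed
df = query.toPandas()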

SQLite 'unrecognized token: ":"' in C++

I'm not sure what to do with this as I can't remove the colon from my SQL string.
Basically I am trying to execute an SQL string in SQLite using the code below.
string database_name = "C:/Programs_C++/Project/Databases/dbase.db";
string exec_string = "SELECT * FROM " + database_name + " WHERE type='table'";
dbase_return=sqlite3_open_v2(database_name.c_str(),&db_handle,SQLITE_OPEN_READWRITE,NULL);
dbase_return_tbl=sqlite3_get_table(db_handle,exec_string.c_str(),&result,&row,&column,&error_msg);
//But I get the error: unrecognized token: ":" ?
How do I get around this? Thanks
You can SELECT from a table, not from a database.
First open the database (using the filename), then execute a valid SQL statement like
SELECT * FROM myTable;
SELECT * FROM C:/Programs_C++/Project/Databases/dbase.db WHERE type = 'table' is not valid SQL. If you are trying to get a list of all tables, you cannot do it that way.
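If a list of all tables is what you are after, SQLite keeps its schema in the built-in sqlite_master catalog, which you can query like an ordinary table:
SELECT name FROM sqlite_master WHERE type = 'table';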
It looks like you have URI filenames switched on - this can be done at compile time or runtime (probably compile time for you if you didn't know about it).
If URI filenames are switched on, you need to change your filename to something like:
file:///C:/Programs_C++/Project/Databases/dbase.db
Edit: If you want to switch it off, I don't think you can do it for this one call (as the call takes a flag as part of the parameters which can only switch it on). Instead you can disable it globally by calling
sqlite3_config(SQLITE_CONFIG_URI, 0)
which tells SQLite to disable the URI filename convention globally. Note: you only need to call this once, and it is not thread-safe, so probably just put it at the start of your program.
However, it might be worth investigating if URI filenames are useful to you before switching them off entirely.
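Putting both answers together, here is a minimal sketch of the corrected flow: open the database by filename (the colon in the path is fine there, since it is a filename argument rather than SQL text), then list tables through the sqlite_master catalog. The path is the one from the question:
#include <sqlite3.h>
#include <cstdio>

int main() {
    sqlite3 *db = nullptr;
    const char *database_name = "C:/Programs_C++/Project/Databases/dbase.db";

    if (sqlite3_open_v2(database_name, &db, SQLITE_OPEN_READWRITE, nullptr) != SQLITE_OK) {
        std::printf("open failed: %s\n", sqlite3_errmsg(db));
        sqlite3_close(db);
        return 1;
    }

    // Query the schema catalog instead of selecting "from" the file path
    char **result = nullptr;
    char *error_msg = nullptr;
    int rows = 0, columns = 0;
    if (sqlite3_get_table(db, "SELECT name FROM sqlite_master WHERE type='table'",
                          &result, &rows, &columns, &error_msg) == SQLITE_OK) {
        for (int i = 1; i <= rows; ++i)       // row 0 holds the column header
            std::printf("%s\n", result[i * columns]);
        sqlite3_free_table(result);
    } else {
        std::printf("query failed: %s\n", error_msg);
        sqlite3_free(error_msg);
    }

    sqlite3_close(db);
    return 0;
}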