SQLite: update multiple columns with a values list

I have a simple table named "test" in an SQLite DB:
columns: id, foo, bar
row 1: 1, 5, 6
row 2: 2, 7, 8
I want to update the rows with the following SQLite statement:
UPDATE test SET (foo, bar) = ( 8, 9 )
According to https://sqlite.org/lang_update.html and the syntax diagram there, this should be possible. It is also recommended in "UPDATE syntax in SQLite".
Unfortunately I get:
near "(": syntax error
A syntax like:
UPDATE test SET foo=8, bar=9
works, but is not the solution here.
Can somebody explain why the list query is wrong?
Thank you!

The issue was with the SQLite version bundled with Python's sqlite3 module: even the latest Python release shipped a DLL reporting sqlite3.sqlite_version = 3.14.
The column-name-list form of UPDATE only became available with SQLite 3.15.
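For reference, a minimal sketch (using the table and values from the question) of checking which library version Python's sqlite3 module is linked against, and falling back to the column-by-column form when the list syntax is not available:

import sqlite3

# Version of the SQLite library the module is linked against (not the module version).
print(sqlite3.sqlite_version)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test (id INTEGER PRIMARY KEY, foo INTEGER, bar INTEGER)")
conn.executemany("INSERT INTO test VALUES (?, ?, ?)", [(1, 5, 6), (2, 7, 8)])

if sqlite3.sqlite_version_info >= (3, 15, 0):
    # column-name-list form, available since SQLite 3.15
    conn.execute("UPDATE test SET (foo, bar) = (8, 9)")
else:
    # fallback that works on older library versions
    conn.execute("UPDATE test SET foo = 8, bar = 9")

print(conn.execute("SELECT * FROM test").fetchall())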

Redshift: How to get the time difference between two columns in HH:MM:SS?

I've been trying to get the duration between two timestamps using
select timestamp1-timestamp2 as time_s
from table_name
but I'm getting results like:
00:00:06.070627
00:00:33.415313
00:00:51.293319
00:02:00.146453
00:02:25.600623
00:06:37.811005
00:28:27.698517
I don't want the values after the '.'
I tried using
split_part(time_s, '.',1)
but it gave
6070627
33415313
51293319
120146453
145600623
397811005
1707698517
Can somebody please help?
The difference between two timestamps is an INTERVAL.
The basic problem that you are experiencing is that there is no standard way to display or return an Interval. For example:
If you try your query in the web-based Redshift Query Editor, it will return NULL
If you run it in your particular SQL Client, it is returning 00:00:06.070627
If you run it in DbVisualizer, it is returning 0 years 0 mons 0 days 0 hours 0 mins 0.0 secs
If you run it in DataGrip, it is returning Invalid character data was found
When you attempted to extract the portion of the result to the left of the '.', Redshift got confused because it does not see the output the way it is displayed in your SQL client.
The best way to keep your SQL client from doing strange things to the result is to convert the answer to TEXT:
SELECT (timestamp1 - timestamp2)::TEXT
This gives an answer like: 24 days 13:57:40.561373
You can then manipulate it like a string:
SELECT SPLIT_PART((timestamp1 - timestamp2)::TEXT, '.', 1)
This gives: 24 days 13:57:40
Bottom line: When things look strange in SQL results, cast the result to TEXT or VARCHAR to see what the data really looks like.
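If you happen to be consuming the result from Python (an assumption about your client), the same truncation is easy to do client-side, since drivers such as psycopg2 return an INTERVAL as a datetime.timedelta:

from datetime import timedelta

# Pretend this value came back from the driver for the INTERVAL column.
diff = timedelta(minutes=2, seconds=25, microseconds=600623)

# str() of a timedelta looks like "0:02:25.600623"; drop everything after the dot.
print(str(diff).split(".")[0])   # 0:02:25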

How to store Array or Blob in SnappyData?

I'm trying to create a table with two columns like below:
CREATE TABLE test (col1 INT ,col2 Array<Decimal>) USING column options(BUCKETS '5');
The table is created successfully, but when I try to insert data into it, it does not accept any format of array. I've tried the following queries:
insert into test1 values(1,Array(Decimal("1"), Decimal("2")));
insert into test1 values(1,Array(1,2));
insert into test1 values(1,[1,2,1]);
insert into test1 values(1,"1,2,1");
insert into test1 values(1,<1,2,1>);
etc.
Please help!
There is an open ticket for this: https://jira.snappydata.io/browse/SNAP-1284. It will be addressed in the next release for VALUES strings (JSON strings and Spark-compatible strings).
The Spark Catalyst compatible format will work:
insert into test1 select 1, array(1, 2);
When selecting, the data is shipped in serialized form by default and shown as binary. For now you have to use the "complexTypeAsJson" hint to show it as JSON:
select * from test1 --+complexTypeAsJson(true);
Support for displaying it in a simpler string format by default will be added in the next release.
One other thing to note in your example is the prime value for BUCKETS. This was documented as preferred in previous releases, but as of the 1.0 release it is recommended to use a power of two or some other even number (e.g. the total number of cores in your cluster can be a good choice); perhaps some examples are still using the older recommendation.
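Because SnappyData's SQL layer follows Spark Catalyst, the array(1, 2) constructor above is the same one plain Spark SQL accepts. A small sketch with pyspark (an assumption that you have a local Spark available; it does not go through SnappyData itself) shows what that expression produces:

from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[1]").appName("array-demo").getOrCreate()

# array(1, 2) is the Catalyst constructor for an ARRAY column,
# the same form used in: insert into test1 select 1, array(1, 2);
df = spark.sql("SELECT 1 AS col1, array(1, 2) AS col2")
df.printSchema()   # col2 is array<int>
df.show()

spark.stop()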

UPDATE results in PostgreSQL 8.4 and 9.5 are different

In our software we are still using PostgreSQL 8.4.
On Arch Linux I could not find any way to install 8.4, so I started the task with 9.5 and wanted to translate it to 8.4 afterwards. Now I can run the UPDATE query on both versions, but I get different results.
Query:
UPDATE properties
SET propertyvalue = regexp_replace(propertyvalue, E'[\u0001-\u001f]', '', 'g')
WHERE properties_id in
(SELECT DISTINCT(properties_id)
FROM regexp_matches(propertyvalue, E'[\u0001-\u001f]'));
The workflow is that I reinitialize the database and load the same SQL dump into both versions.
On 8.4 the message says "UPDATE 2689816" and on 9.5 it says "UPDATE 241294".
When I run
SELECT count(*)
FROM properties
WHERE properties_id in
(SELECT DISTINCT(properties_id)
FROM regexp_matches(propertyvalue, E'[\\u0001-\\u001f]'));
I get the same result with both versions:
count
--------
241294
(1 row)
That's what confuses me the most.
Why does regexp_matches seem to interpret the pattern differently from regexp_replace?
Does anyone have any experience with the matter?
It was my own mistake: the script had removed a \ from the pattern, and I only noticed it when I saw it in my question here. Sorry for the question.
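For anyone who hits the same thing: the pattern passes through several escaping layers (the script that builds the query, PostgreSQL's E'' parser, the regex engine), and losing a single backslash changes which layer interprets \u0001. A small Python illustration of that effect (only an analogy for whatever your dump script does):

# In a normal Python string "\u0001" is already the control character itself,
# while a raw string keeps the literal six characters \u0001 for a later layer
# (PostgreSQL's E'' parser or the regex engine) to interpret.
plain = "E'[\u0001-\u001f]'"    # real control characters embedded in the SQL text
raw = r"E'[\u0001-\u001f]'"     # the backslashes survive into the SQL text

print(len(plain), len(raw))     # 8 vs 18: the server receives two different patterns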

Line chart type diagram with Bokeh: change the string format

I have a date and 3 other elements for each job, which Python reads from a txt file, and now I want to use this information to create a diagram with Bokeh.
How can I use a date format (x-axis: start and end time of each job) and string formats (y-axis: 3 elements for each job) in Bokeh?
Does anyone know of a working example for the step line chart type that shows how to build the necessary data structure?
EDIT: the original answer below is obsolete, as the ggplot compat layer was removed from Bokeh many years ago. However, Bokeh now has its own built-in Step glyph:
https://docs.bokeh.org/en/latest/docs/user_guide/plotting.html#step-lines
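A minimal sketch with the built-in step glyph (the data here is made up; replace it with the timestamps and values read from your txt file):

from datetime import datetime, timedelta

from bokeh.plotting import figure, show

# Made-up job data: one timestamp per sample and a numeric state per job.
times = [datetime(2023, 1, 1) + timedelta(hours=h) for h in range(6)]
values = [0, 1, 1, 0, 2, 1]

p = figure(x_axis_type="datetime", title="Job states over time",
           height=300, width=600)
p.step(times, values, mode="after", line_width=2)

show(p)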
OBSOLETE ANSWER:
I'm not sure if this is what you are asking for or not, but you can use ggplot.py to generate a step chart and then output to Bokeh:
http://docs.bokeh.org/docs/gallery/step.html
There will probably be a native step chart in Bokeh.charts later this year (or sooner, if an outside contributor pitches in).
You would need to add your own data to this part:
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "x": range(100),
    "y": np.random.choice([-1, 1], 100)
})

Prepared statement with "partition by" doesn't work against Sybase IQ?

I'm seeing a problem when querying Sybase IQ with a prepared statement. The query works fine when I type the entire query as text and then call prepareStatement on it with no parameters. But when I stick in one parameter, I get back errors, even though my SQL is correct. Any idea why?
This code works perfectly fine and runs my query:
errorquery<<"SELECT 1 as foobar \
, (SUM(1) over (partition by foobar) ) as myColumn \
FROM spgxCube.LPCache lpcache \
WHERE lpcache.CIG_OrigYear = 2001 ";
odbc::Connection* connQuery= SpgxDBConnectionPool::getInstance().getConnection("MyServer");
PreparedStatementPtr pPrepStatement(connQuery->prepareStatement(errorquery.str()));
pPrepStatement->executeQuery();
But this is the exact same thing except instead of typing "2001" directly in the code, I insert it with a parameter:
errorquery<<"SELECT 1 as foobar \
, (SUM(1) over (partition by foobar) ) as myColumn \
FROM spgxCube.LPCache lpcache \
WHERE lpcache.CIG_OrigYear = ? ";
odbc::Connection* connQuery = SpgxDBConnectionPool::getInstance().getConnection("MyServer");
PreparedStatementPtr pPrepStatement(connQuery->prepareStatement(errorquery.str()));
int intVal = 2001;
pPrepStatement->setInt(1, intVal);
pPrepStatement->executeQuery();
That yields this error:
[Sybase][ODBC Driver][Adaptive Server Anywhere]Invalid expression near '(SUM(1) over(partition by foobar)) as myColumn'
Any idea why the first one works while the second one fails? Are you not allowed to use "partition by" with bound SQL parameters, or something like that?
The Sybase Adaptive Server Anywhere (ASA) error is fine; there is a Sybase ASA instance included in the IQ DB, used for the SYSTEM space.
I do not know if partition by is supported, or fully supported, in versions prior to Sybase IQ v12.7. I recall having problems with it under v12.6. Under v12.7 or later it should be fine, and otherwise your command looks good to me.
I know very little about Adaptive Server Anywhere, but you're using the Adaptive Server Anywhere driver to query Sybase IQ.
Is that really what you want?