In HugSQL SQL, Chinese parameters are replaced with question marks - clojure

In HugSQL SQL, Chinese parameters are replaced with question marks.
-- :name save-message-1! :! :n
-- :doc creates a new message
INSERT INTO guestbook (name, message, timestamp)
VALUES ('姓名', '消息测试', '2022-03-21 04:19:56')
The query result in the database is as follows:
# id, name, message, timestamp
'1', '??', '???', '2022-03-21 04:19:56'
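The ?? rendering is the classic symptom of a non-UTF-8 character set somewhere between the JDBC connection and the column. The question doesn't say which database backs the guestbook table, so the following is only a sketch assuming MySQL/MariaDB; only the table name comes from the question:
-- inspect the character set the table and its columns were created with
SHOW CREATE TABLE guestbook;
-- if it is not UTF-8, convert it (utf8mb4 covers all of Unicode, including CJK)
ALTER TABLE guestbook CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
If the table is already utf8mb4, the next thing to check is the character-set options on the JDBC connection string used by the Clojure db-spec.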

Related

Error ORA-01858: a non-numeric character was found where a numeric was expected when running query

ORA-01858: a non-numeric character was found where a numeric was expected
select project_name, my_date, sum(records_number) as
from (
    select project_name,
           case
               when :P33_RG = 'Daily' then
                   to_char(date_sys, 'MM/DD/YYYY')
               when :P33_RG = 'Weekly' then
                   to_char(TRUNC(date_sys, 'IW'), 'MM/DD/YYYY')
           end as my_date,
           BATCH.RECORDS_NUMBER
    from BATCH
    where date_sys between :P33_START_DATE and :P33_END_DATE
) my_records
group by project_name, my_date
;
Any advice to fix the error would be appreciated! Thank you.
:P33_START_DATE and :P33_END_DATE are not dates. They're strings containing a date in the format of your APEX application settings (MM/DD/YYYY). All page items in APEX are strings when used as bind variables. What you defined them to be in APEX (number, date, text field) only impacts how they're displayed on the screen; it doesn't set a datatype for the bind variables.
Try changing the query to
where date_sys between TO_DATE(:P33_START_DATE,'MM/DD/YYYY') and TO_DATE(:P33_END_DATE,'MM/DD/YYYY')
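The underlying cause is Oracle's implicit string-to-date conversion: comparing the DATE column date_sys with a string makes Oracle apply TO_DATE using the session's NLS_DATE_FORMAT, which fails whenever the string doesn't match that format. A small illustration (the literal is an arbitrary example, not from the question):
-- relies on the session NLS_DATE_FORMAT and raises an ORA-018xx conversion
-- error (such as ORA-01858) when the string doesn't match that format
select to_date('03/21/2022') from dual;
-- always works, because the format mask is stated explicitly
select to_date('03/21/2022', 'MM/DD/YYYY') from dual;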

AWS Log insights, parse all occurrences in a log

I have a question concerning Logs Insights in AWS. How is it possible to fetch all the occurrences in a log? I tried with and without a regex, and parse only fetches the first occurrence.
I have a log like this (and multiple entries of this kind of log):
[ERROR] - [{'id': 'id1'}, {'id': 'id2'}, {'id': 'id3'}]
And I want to extract all the ids, so I tried:
parse #message "id': '*'" as id
which returns only id1 (the first occurrence) per log,
and I also tried a regex:
parse #message /id': '(?<id>\S*)'/
which returns only id1 (the first occurrence) per log as well.
I expect something like [id1, id2, id3] or multiple lines in the result (one per match).
I still haven't found a nice way to handle this; it seems we can't get more than one result from one log entry.
But maybe you can use the practice shared in the answer linked below to find how many items exist in each message (counting how many times 'id' occurs by comparing string lengths before and after removing it),
and you can also get the list of the values by string manipulation:
fields (strlen(#message)-strlen(replace(#message, "'id'", ""))) / strlen("'id'") as count,
replace(replace(replace(#message, "}", ""), "},", ","), "{'id': ", "") as list
# would return 3, ['id1', 'id2', 'id3']
https://stackoverflow.com/a/73254710/1762994

bulk insert data with Postgres into QuestDB

How does one bulk insert data with Postgres into QuestDB?
The following does not work
CREATE TABLE IF NOT EXISTS employees (employee_id INT, last_name STRING,first_name STRING);
INSERT INTO employees
(employee_id, last_name, first_name)
VALUES
(10, 'Anderson', 'Sarah'),(11, 'Johnson', 'Dale');
For inserting data in bulk, there are a few options. You can use CREATE AS SELECT to bulk insert from an existing table, which is closest to your example:
CREATE TABLE employees
AS (SELECT employee_id, last_name, first_name FROM existing_table)
Or you can use prepared statements; there are full working examples in a few languages in the QuestDB Postgres documentation. Here is a snippet from the Python example:
import datetime as dt  # needed for dt.datetime below; cursor/connection come from the psycopg2 setup earlier in the full example

# insert 10 records
for x in range(10):
    cursor.execute("""
        INSERT INTO example_table
        VALUES (%s, %s, %s);
        """, (dt.datetime.utcnow(), "python example", x))
# commit records
connection.commit()
Or you can bulk import from CSV, e.g.:
curl -F data=@data.csv http://localhost:9000/imp
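If you only need a handful of rows and want to stay in plain SQL, issuing one single-row INSERT per row should also work (a minimal sketch reusing the employees table from the question):
INSERT INTO employees (employee_id, last_name, first_name) VALUES (10, 'Anderson', 'Sarah');
INSERT INTO employees (employee_id, last_name, first_name) VALUES (11, 'Johnson', 'Dale');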

Apache Hive regEx serde: proper regex for a mixed format (json)

I'm trying to create an AWS Athena table using RegexSerDe; due to some export issues I cannot use JsonSerDe.
2019-04-11T09:05:16.775Z {"timestamp":"data0","level":"data1","thread":data2","logger":"data3","message":"data4","context":"data5"}
I was trying to obtain the JSON values with a regex, but without any luck.
CREATE EXTERNAL TABLE IF NOT EXISTS dsfsdfs.mecs3 (
  `timestamp` string,
  `level` string,
  `thread` string,
  `logger` string,
  `message` string,
  `context` string
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.RegexSerDe'
WITH SERDEPROPERTIES (
  "input.regex" = "[ :]+(\\"[^\"]*\\")"
)
LOCATION 's3://thisisates/'
Error: HIVE_CURSOR_ERROR: Number of matching groups doesn't match the number of columns
Some help would be great as I'm not an expert in regex.
Thanks and BR.
Getting this working will probably be very hard - even if you can write a regex that will capture the columns out of the JSON structure, can you guarantee that all JSON documents will be rendered with the properties in the same order? JSON itself considers {"a": 1, "b": 2} and {"b": 2, "a": 1} to be equivalent, so many JSON libraries don't guarantee, or even care about ordering.
Another approach to this is to create a table with two columns: timestamp and data, as a regex table with a regex with two capture groups, the timestamp and the rest of the line – or possibly as a CSV table if the character after the timestamp is a tab (if it's a space it won't work since the JSON will contain spaces):
CREATE EXTERNAL TABLE IF NOT EXISTS mecs3_raw (
`timestamp` string,
`data` string
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.RegexSerDe'
WITH SERDEPROPERTIES (
"input.regex" = "^(\\S+) (.+)$"
)
LOCATION 's3://thisisates/'
(the regex assumes that there is a space between the timestamp and the JSON structure, change it as needed).
That table will not be very usable by itself, but what you can do next is to create a view that extracts the properties from the JSON structure:
CREATE VIEW mecs3 AS
SELECT
"timestamp",
JSON_EXTRACT_SCALAR("data", '$.level') AS level,
JSON_EXTRACT_SCALAR("data", '$.thread') AS thread,
JSON_EXTRACT_SCALAR("data", '$.logger') AS logger,
JSON_EXTRACT_SCALAR("data", '$.message') AS message,
JSON_EXTRACT_SCALAR("data", '$.context') AS context
FROM mecs3_raw
(mecs3_raw is the table with timestamp and data columns)
This will give you what you want and will be much less error prone.
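Once the view exists you can query it like any other table; a quick sanity check (a sketch that only uses the names defined above):
SELECT "timestamp", level, message
FROM mecs3
LIMIT 10;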
Try Regex: (?<=")[^\"]*(?=\" *(?:,|}))

how to extract column parameters from sqlite create string?

In SQLite it is possible to get the string with which a table was created:
select sql from sqlite_master where type='table' and tbl_name='MyTable'
This could give:
CREATE TABLE "MyTable" (`id` PRIMARY KEY NOT NULL, [col1] NOT NULL,
"another_col" UNIQUE, '`and`,''another'',"one"' INTEGER, and_so_on);
Now I need to extract from this string any additional parameters that a given column name has been set with.
But this is very difficult, since the column name could be enclosed in special characters or written plain, and the column name itself may contain the very characters that are used for quoting, etc.
I don't know how to approach it. Given a column name, the function should return anything that is after this name and before the next comma, so given id it should return PRIMARY KEY NOT NULL.
Use the pragma table_info:
http://www.sqlite.org/pragma.html#pragma_table_info
sqlite> pragma table_info(MyTable);
cid|name|type|notnull|dflt_value|pk
0|id||1||1
1|col1||1||0
2|another_col||0||0
3|`and`,'another',"one"|INTEGER|0||0
4|and_so_on||0||0
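If you want to read a single column's attributes with a query instead of parsing the CREATE TABLE text, reasonably recent SQLite versions (3.16+) also expose the pragma as a table-valued function; a minimal sketch using the table and column names from the question:
select name, type, "notnull", dflt_value, pk
from pragma_table_info('MyTable')
where name = 'id';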