I would like to retrieve the ROWID of a row from an Oracle DB table and store it in memory as a character array for later use. For example, I run the following query:
SELECT ROWID, MARKS FROM MTB WHERE EID='123';
Then using Pro*C, I would like to store this ROWID as a character array rrr to use later as:
UPDATE MTB SET MARKS = 80 WHERE ROWID='<rrr>'
Please help, and point me to the appropriate Pro*C documentation for converting a ROWID to a character array.
You can use the ROWIDTOCHAR and CHARTOROWID functions:
SELECT ROWIDTOCHAR(ROWID), MARKS INTO :rrr, :marks FROM MTB WHERE EID='123';
And then
UPDATE MTB SET MARKS = 80 WHERE ROWID=CHARTOROWID(:rrr);
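A minimal Pro*C sketch of the above (the host variable declarations and sizes here are illustrative assumptions; see the host-variable chapters of the Pro*C/C++ Programmer's Guide for details):
EXEC SQL BEGIN DECLARE SECTION;
    char rrr[19];   /* ROWIDTOCHAR() yields an 18-character string, plus room for the terminator */
    int  marks;
EXEC SQL END DECLARE SECTION;

EXEC SQL SELECT ROWIDTOCHAR(ROWID), MARKS
         INTO :rrr, :marks
         FROM MTB
         WHERE EID = '123';

/* ... later, reuse the saved character array ... */
EXEC SQL UPDATE MTB
         SET MARKS = 80
         WHERE ROWID = CHARTOROWID(:rrr);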
I created a customers table with columns such as account_id.cust_id, account_id.ord_id, and so on.
My create external table query was as follows:
CREATE EXTERNAL TABLE spectrum.customers
(
"account_id.cust_id" numeric,
"account_id.ord_id" numeric
)
row format delimited
fields terminated by '^'
stored as textfile
location 's3://awsbucketname/test/';
SELECT "account_id.cust_id" FROM spectrum.customers limit 100
and I get an error as :
Invalid Operation: column account_id.cust_id does not exists in
customers.
Is there any way or syntax to write column names like account_id.cust_id (text.text) while creating the table or while writing the select query?
Please help.
PS: Single quotes and backticks don't work either.
I have a table "City" with more than 100k records.
The field "name" contains strings like "Roma", "La Valletta".
I receive a file with the city name, all in upper case as in "ROMA".
I need to get the id of the record that contains "Roma" when I search for "ROMA".
In SQL, I must do something like:
select id from city where upper(name) = upper(%name%)
How can I do this in kettle?
Note: if the city is not found, I use an Insert/Update step to create it, so I must avoid duplicates generated by case differences in the names.
You can make use of the String operations step in Pentaho Kettle: set its Lower/Upper option to upper case.
Pass the city name from the City table through the String operations step, which will upper-case the city name in your data stream. Then join/look up against the received file and get the required id.
More on the String operations step is in the Pentaho wiki.
You can use a 'Database join' step. There you can write the SQL:
select id from city where upper(name) = upper(?)
and specify the city field from the text file as the parameter. With 'Number of rows to return' and 'Outer join?' you can control the join behaviour.
This solution doesn't work well with a large number of rows, as it will execute one query per row. In those cases Rishu's solution is better.
This is how I did it:
First, a "Modified JavaScript value" step to create the query:
var queryDest="select coalesce( (select id as idcity from city where upper(name) = upper('"+replace(mycity,"'","\'\'")+"') and upper(cap) = upper('"+mycap+"') ), 0) as idcitydest";
Then I use this string as the query in a "Dynamic SQL row" step.
After that,
IF idcitydest == 0 then
insert new city;
else
use the found record
This approach runs one query per row of the file, but it uses little memory.
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production.
I have a table in the below format.
Name Department
Johny Dep1
Jacky Dep2
Ramu Dep1
I need an output in the below format.
Dep1 - Johny,Ramu
Dep2 - Jacky
I have tried the 'LISTAGG' function, but there is a hard limit of 4000 characters. Since my db table is huge, this cannot be used in the app. The other option is to use the
SELECT CAST(COLLECT(Name)
But my framework allows me to execute only SELECT queries and no PL/SQL scripts. Hence I don't see any way to create a type using the "CREATE TYPE" command, which is required for the COLLECT approach.
Is there any alternate way to achieve the above result using a SELECT query?
You should add GetClobVal, and you also need to RTRIM, as it will otherwise return a trailing delimiter at the end of the result.
SELECT RTRIM(XMLAGG(XMLELEMENT(E,colname,',').EXTRACT('//text()')
ORDER BY colname).GetClobVal(),',') from tablename;
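Applied to the table from the question, it would look like this (a sketch, assuming the same emp table with name and department columns used in the other answers):
SELECT department,
       RTRIM(XMLAGG(XMLELEMENT(E, name, ',').EXTRACT('//text()')
             ORDER BY name).GetClobVal(), ',') AS names
FROM emp
GROUP BY department;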
If you can't create types (you can't just use SQL*Plus to create one as a one-off?), but you're OK with COLLECT, then use a built-in array type. There are several knocking around in the RDBMS. Run this query:
select owner, type_name, coll_type, elem_type_name, upper_bound, length
from all_coll_types
where elem_type_name = 'VARCHAR2';
e.g. on my DB, I can use sys.DBMSOUTPUT_LINESARRAY, which is a VARRAY of considerable size.
select department,
cast(collect(name) as sys.DBMSOUTPUT_LINESARRAY)
from emp
group by department;
A derivative of @anuu_online's answer, but it handles unescaping the XML entities in the result.
dbms_xmlgen.convert(xmlagg(xmlelement(E, name||',')).extract('//text()').getclobval(),1)
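Wrapped in a full query against the same emp table (the outer RTRIM that drops the trailing comma is an assumption added here, mirroring the other answers):
SELECT department,
       RTRIM(dbms_xmlgen.convert(
                 XMLAGG(XMLELEMENT(E, name || ',')).EXTRACT('//text()').getclobval(), 1),
             ',') AS names
FROM emp
GROUP BY department;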
For IBM DB2, casting the result to a VARCHAR(10000) will allow more than 4000 characters.
select column1, listagg(CAST(column2 AS VARCHAR(10000)), x'0A') AS "Concat column"...
I ended up with another approach, using the XMLAGG function, which doesn't have the hard limit of 4000 characters.
select department,
XMLAGG(XMLELEMENT(E,name||',')).EXTRACT('//text()')
from emp
group by department;
You can use:
SELECT department
, REGEXP_REPLACE(XMLCAST(XMLAGG(XMLELEMENT(x, name, ',')) AS CLOB), ',$')
FROM emp
GROUP BY department
It will return a CLOB that has no size limit, handles XML entity escapes correctly, and deals with the separators.
Instead of REGEXP_REPLACE(..., ',$') you can use RTRIM(..., ','), which should be faster, but it will remove all separators from the end of the result (including any that appear at the end of a name, or earlier separators when the last names are empty).
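For example, the RTRIM variant of the same query:
SELECT department
     , RTRIM(XMLCAST(XMLAGG(XMLELEMENT(x, name, ',')) AS CLOB), ',') AS names
FROM emp
GROUP BY department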
I need to trim the date from a text string in the function call of my app.
The string comes out as text//date and I would like to trim or replace the date with blank space. The column name is overall_model and the value is ford//1911 or chevy//2011, but I need the date part removed so I can loop over the array or list to get an accurate count of all the models.
The problem is that if there is a chevy//2011 and a chevy//2010, I get two rows back because of the date. So if I can remove the date and loop over the results, I can get an accurate count, e.g. for chevy there are 23 chevy models.
I have not used Sybase in a while, but I remember its string functions are very similar to MS SQL Server.
If overall_model always contains "//", use CHARINDEX to return the position of the delimiter and SUBSTRING to retrieve the "text" before it, then combine that with a COUNT. (If the "//" is not always present, you will need to add a CASE statement as well; see the sketch after the query below.)
SELECT SUBSTRING(overall_model, 1, CHARINDEX('/', overall_model)-1) AS Model
, COUNT(*) AS NumberOfRecords
FROM YourTable
GROUP BY SUBSTRING(overall_model, 1, CHARINDEX('/', overall_model)-1)
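If the "//" delimiter is not always present, the CASE variant mentioned above could look like this (a sketch, assuming rows without "//" should simply be counted under the full string):
SELECT CASE WHEN CHARINDEX('/', overall_model) > 0
            THEN SUBSTRING(overall_model, 1, CHARINDEX('/', overall_model) - 1)
            ELSE overall_model
       END AS Model
     , COUNT(*) AS NumberOfRecords
FROM YourTable
GROUP BY CASE WHEN CHARINDEX('/', overall_model) > 0
              THEN SUBSTRING(overall_model, 1, CHARINDEX('/', overall_model) - 1)
              ELSE overall_model
         END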
However, ideally the "text" and "date" should be stored separately. That would offer greater flexibility and generally better performance.
In SQLite it is possible to get the string with which the table was created:
select sql from sqlite_master where type='table' and tbl_name='MyTable'
this could give:
CREATE TABLE "MyTable" (`id` PRIMARY KEY NOT NULL, [col1] NOT NULL,
"another_col" UNIQUE, '`and`,''another'',"one"' INTEGER, and_so_on);
Now I need to extract from this string any additional parameters that a given column name has been defined with.
But this is very difficult, since the column name may be enclosed in quoting characters or appear plain, and the name itself may contain the very characters that are used for quoting or separating, etc.
I don't know how to approach it. The result should be: given a column name, the function should return whatever appears after that name and before the next comma, so for id it should return PRIMARY KEY NOT NULL.
Use the pragma table_info:
http://www.sqlite.org/pragma.html#pragma_table_info
sqlite> pragma table_info(MyTable);
cid|name|type|notnull|dflt_value|pk
0|id||1||1
1|col1||1||0
2|another_col||0||0
3|`and`,'another',"one"|INTEGER|0||0
4|and_so_on||0||0
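On SQLite 3.16.0 and later the same data is also exposed through the table-valued pragma function, so it can be filtered like an ordinary query, e.g. to look at just the id column:
select name, type, "notnull", dflt_value, pk
from pragma_table_info('MyTable')
where name = 'id';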