Apex Data Load: select one of two number columns from a CSV file based on a condition and load it into a table - oracle-apex

I have an issue in Oracle Apex Data Load which I will try to explain in a simple way:
I want to copy CSV data (copy/paste) into the Data Load application, apply transformations and rules, and load the data into the table BIKE.
CSV columns: (type, amount-a, amount-b)
blue, 10, 100
green, 20, 200
Table BIKE columns: (type, amount)
I want to create a transformation that checks the type value: if it is 'blue', load amount-b into the amount column; otherwise load amount-a.
Can anyone help me on this?
Thanks

Create a staging table, e.g. like this (you will need to adjust the details of the columns to match your data model):
create table bike_staging
(
  apex_session number not null,
  bike_id      number not null,
  bike_type    varchar2(100) not null,
  amount_a     number,
  amount_b     number
);
Add a trigger to populate apex_session:
create or replace trigger bi_bike_staging
before insert on bike_staging for each row
begin
:new.apex_session := v('APP_SESSION');
end bi_bike_staging;
Add two processes to the third page of the data load wizard, one before and one after the "Prepare Uploaded Data" process.
The code for "delete staging table" will be like this:
delete bike_staging where apex_session = :APP_SESSION;
The code for "load bikes" may be something like this:
merge into bikes t
using (select bike_id,
              bike_type,
              case bike_type
                when 'BLUE' then amount_b
                else amount_a
              end as amount
         from bike_staging
        where apex_session = :APP_SESSION
      ) s
on (t.bike_id = s.bike_id)
when matched then update set
  t.bike_type = s.bike_type,
  t.amount    = s.amount
when not matched then insert (bike_id, bike_type, amount)
  values (s.bike_id, s.bike_type, s.amount);

delete bike_staging where apex_session = :APP_SESSION;
Alternatively, if you are only inserting new records you don't need the merge, you can use a simple insert statement.
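For example, a minimal sketch of that insert-only variant (an assumption built on the same bike_staging and bikes names used above, not tested against the asker's model):
-- insert-only variant: same CASE rule as the merge above
insert into bikes (bike_id, bike_type, amount)
select bike_id,
       bike_type,
       case bike_type when 'BLUE' then amount_b else amount_a end
  from bike_staging
 where apex_session = :APP_SESSION;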

Related

Adding label in AutoML for text classification

I am trying to create a text dataset in a pipeline for text classification, but I believe I am doing it the wrong way, or at least I don't fully understand it. The CSV I pass in only contains two columns, message and label, where label is true or false.
Inside my pipeline I create the dataset like this, but I am not sure how it recognizes that the label column is the target.
dataset = gcp_aip.TextDatasetCreateOp(
    project=project,            # my project id
    display_name=display_name,  # reference name
    gcs_source=src_uris,        # path to my data in gcs
    import_schema_uri=aiplatform.schema.dataset.ioformat.text.single_label_classification,
)
Once the dataset is created, I do the training like this within the pipeline:
# training
model = gcp_aip.AutoMLTextTrainingJobRunOp(
    project=project,
    display_name=display_name,
    prediction_type="classification",
    multi_label=False,
    dataset=dataset.outputs["dataset"],
)
I am not sure whether the dataset creation and training are set up correctly, since I never specified that label is my label column and that message should be used as the feature.
In Vertex AI the created dataset looks like this. But in the training section, the results from AutoML show a label with 0%, and I don't know why, which makes me doubt whether the data was imported correctly.
When preparing the CSV file, you don't need to specify which column is the feature and which is the label. Vertex AI AutoML automatically reads the first column as the feature and the second column as the label. You may refer to the documentation for more details on preparing CSV data.
Below is a sample CSV file: all values under the first column (column A) are detected as the feature, and all values under the second column (column B) are the labels.
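For illustration only (these rows are invented examples, not the asker's data), a two-column CSV in that format might look like:
"I am very happy with this product",True
"The delivery arrived late and damaged",False
"Great customer service, would buy again",True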
You might need to check your CSV file and search for the word "label" in your second column, replacing it with either "True" or "False", since based on your given data you only want two labels, "True" and "False". In addition, if you find the word "label" in the second column and that row has no value in the first column, just remove the word "label".
In your provided screenshot, there is a count of 1 for the word "label", which means a "label" value exists in the second column of your CSV data.

Can I dynamically derive DAX filters from "dissecting" a criteria field in my database?

I have a database table that contains records, where each record has a criteria attribute. This criteria attribute can hold anywhere between 1-n criteria that I'd like to apply as filters on a different table.
This looks something like:
message.status:::eq:::submitted;;;message.count:::ge:::5
but could also be only
message.count:::ge:::5
What I'd like to do in DAX is take that string and translate it into dynamic filter attributes. So I somehow need to split the string based on ;;;, and then dissect each section into the target (e.g. message[count]), the operator (e.g. ge --> >=) and the value (e.g. 5).
So in the end, the following DAX snippet should be added to my CALCULATE one or more times:
example measure = CALCULATE(
    COUNTROWS('message'),
    FILTER (
        ALL('message'),
        -- line below should be dynamically injected
        message[count] >= 5
    )
)
I'm struggling with how to create a loop (is this even possible in Power BI?), and then, even with a single string, how to filter based on this.
Thanks
You can try to build a new table for the split message:
Table 2 =
var _splitby = ";;;"
var _string = SELECTCOLUMNS(ADDCOLUMNS(VALUES('Table'[message]),"pathx", SUBSTITUTE([message],_splitby,"|")),"pathx",[pathx])
var _generate = GENERATE(_string, GENERATESERIES(1, PATHLENGTH([pathx])))
var _GetVal = SELECTCOLUMNS(_generate, "Msg", PATHITEM([pathx], [Value]))
return
_GetVal
If you always have a string like message.count:::ge::: at the end, you can follow these steps in Power Query:
Step 1: Duplicate your message column.
Step 2: Apply a split on the new duplicated column using the string message.count:::ge:::; this gives you a new column containing the last numeric value from your original text.
Step 3: You can now apply a filter on the new column.
Sample Output-

Fetch Returns only 1 Row - Interactive Grid - Oracle Apex

I have an interactive grid with a dynamic action that fetches values from another table:
begin
  for c in (select REW_SIZE,
                   IN_STOCK,
                   DESCRIPTION,
                   FINANCIAL_YEAR_ID
              into :REW_SIZE,
                   :IN_STOCK,
                   :DESCRIPTION,
                   :FINANCIAL_YEAR_ID
              from T_SORDER_PROFOMA_REWINDING
             where so_id = :so_id)
  loop
    :REW_SIZE          := c.REW_SIZE;
    :IN_STOCK          := c.IN_STOCK;
    :DESCRIPTION       := c.DESCRIPTION;
    :FINANCIAL_YEAR_ID := c.FINANCIAL_YEAR_ID;
  end loop;
end;
When I had a simple select ... into query, it gave "exact fetch returns more than requested number of rows", then I applied the above code but it returns only the 2nd row. I have 2 rows in the table for this ID.
From what I understand, you want to get some extra data columns. Your best shot is to create a view that joins these tables and delivers the columns you need. If you need editing, you will have to create an INSTEAD OF trigger on the view as well, to do the correct DML on one or both tables.
create a view which joins both tables (a sketch follows below)
create an instead of trigger on it (if you need editing)
base the query of the Interactive Grid on the view
This solution adds the extra data columns to the source of the IG, which is better and easier in my opinion.
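A minimal sketch of the first two steps, purely as an assumption: the parent table name (t_sorder), its columns, and the rewinding table's primary key (rewinding_id) are invented here and must be adjusted to the real data model:
create or replace view v_sorder_rewinding as
select r.rewinding_id,                    -- assumed PK of the rewinding table
       s.so_id,
       r.rew_size,
       r.in_stock,
       r.description,
       r.financial_year_id
  from t_sorder s                         -- assumed parent table of the IG
  join t_sorder_profoma_rewinding r on r.so_id = s.so_id;

-- only needed if the IG must be editable
create or replace trigger v_sorder_rewinding_iou
instead of update on v_sorder_rewinding
for each row
begin
  update t_sorder_profoma_rewinding
     set rew_size          = :new.rew_size,
         in_stock          = :new.in_stock,
         description       = :new.description,
         financial_year_id = :new.financial_year_id
   where rewinding_id      = :new.rewinding_id;
end;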

Kettle database lookup case insensitive

I've a table "City" with more than 100k records.
The field "name" contains strings like "Roma", "La Valletta".
I receive a file with the city name, all in upper case as in "ROMA".
I need to get the id of the record that contains "Roma" when I search for "ROMA".
In SQL, I must do something like:
select id from city where upper(name) = upper(%name%)
How can I do this in Kettle?
Note: if the city is not found, I use an Insert/Update step to create it, so I must avoid duplicates generated by case-sensitive names.
You can make use of the String operations step in Pentaho Kettle; set its Lower/Upper option to upper case.
Pass the city name from the City table to the String operations step, which upper-cases the value in your data stream, then join/lookup with the received file and get the required id.
More on the String operations step can be found in the Pentaho wiki.
You can use a 'Database join' step. Here you can write the SQL:
select id from city where upper(name) = upper(?)
and specify the city field name from the text file as a parameter. With 'Number of rows to return' and 'Outer join?' you can control the join behaviour.
This solution doesn't work well with a large number of rows, as it will execute one query per row. In those cases Rishu's solution is better.
This is how I did it:
First, a "Modified JavaScript value" step to create the query:
var queryDest="select coalesce( (select id as idcity from city where upper(name) = upper('"+replace(mycity,"'","\'\'")+"') and upper(cap) = upper('"+mycap+"') ), 0) as idcitydest";
Then I use this string as the query in a Dynamic SQL row step.
After that:
if idcitydest == 0 then
  insert a new city;
else
  use the found record;
This approach runs one query per row of the file, but it uses little memory for caching.

SyncFramework: How to sync all columns from a table?

I created a program to sync tables between 2 databases.
I use this common code:
DbSyncScopeDescription myScope = new DbSyncScopeDescription("myscope");
DbSyncTableDescription tblDesc = SqlSyncDescriptionBuilder.GetDescriptionForTable("Table", onPremiseConn);
myScope.Tables.Add(tblDesc);
My program creates the tracking table with only the primary key (the id column).
The sync works fine for deleting and inserting rows, but updates don't. I need all the columns to be updated, and they are not (for example, a telephone column).
I read that I need to add the columns I want to sync MANUALLY with this code:
Collection<string> includeColumns = new Collection<string>();
includeColumns.Add("telephone");
...
includeColumns.Add(Last column);
And changing the table description in this way:
DbSyncTableDescription tblDesc = SqlSyncDescriptionBuilder.GetDescriptionForTable("Table", includeColumns, onPremiseConn);
Is there a way to add all the columns of the table automatically?
Something like:
Collection<string> includeColumns = GetAllColums("Table");
Thanks,
SqlSyncDescriptionBuilder.GetDescriptionForTable("Table", onPremiseConn) will already include all the columns of the table.
The tracking table only stores the PK, the filter columns, and some Sync Framework specific columns.
The tracking is at row level, not column level.
During sync, the tracking table and its base table are joined to get the rows to be synched.
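Purely for reference, if you did want to enumerate a table's columns yourself (not needed here, as explained above), a query against the SQL Server data dictionary would return them; 'Table' is just the asker's placeholder name:
-- lists the columns of a SQL Server table, in their defined order
select COLUMN_NAME
  from INFORMATION_SCHEMA.COLUMNS
 where TABLE_NAME = 'Table'
 order by ORDINAL_POSITION;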