wwv_flow_files no longer available to use its fields in APEX 19.1 - oracle-apex

We are migrating applications from APEX 4.2 to APEX 19.1. On one page we used the temp table (wwv_flow_files) to upload a spreadsheet and then run a PL/SQL process. Since wwv_flow_files is now deprecated, we have to use the APEX_APPLICATION_TEMP_FILES temp table, but unfortunately the columns we used do not exist in the new temp table in APEX 19.1:
select blob_content
  into v_blob_data
  from wwv_flow_files
 where last_updated = (select max(last_updated)
                         from wwv_flow_files
                        where upper(updated_by) = upper(:APP_USER))
   and id = (select max(id)
               from wwv_flow_files
              where upper(updated_by) = upper(:APP_USER));
A brief note on the PL/SQL process: the statement above is part of a block in which the spreadsheet is uploaded and, after several validations, its values are loaded into a physical Oracle table.
We are performing the migration and have to make sure, with minimal effort, that the functionality keeps working as is.
Please help. Thanks in advance.

In the where clause you only need the column "name" from apex_application_temp_files!
https://docs.oracle.com/cd/E11882_01/appdev.112/e11945/up_dn_files.htm#CIHDDJGF
Example:
IF ( :P1_FILE_NAME IS NOT NULL ) THEN
  INSERT INTO oehr_file_subject (id, name, subject, blob_content, mime_type)
  SELECT id, :P1_FILE_NAME, :P1_SUBJECT, blob_content, mime_type
    FROM apex_application_temp_files
   WHERE name = :P1_FILE_NAME;
END IF;
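Applied to the query from the question, a minimal sketch could look like the one below. It assumes the page's file browse item is named P1_FILE (a hypothetical item name) and that the process runs right after page submit:
select blob_content
  into v_blob_data
  from apex_application_temp_files
 where name = :P1_FILE;
Depending on the item's "Purge File at" setting, the temp table only keeps files for the current request or session, so the max(last_updated)/max(id) filtering by user from the old query is usually no longer needed.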

Related

Oracle Apex error ORA-01776: cannot modify more than one base table through a join view

I have an app in Oracle Apex 22.21. There are multiple tables (ORDERS, ORDER_ITEMS, STORES, and PRODUCTS).
(screenshot: ORDERS table)
I have a Master Detail report that is editable. The main report shows the ORDERS table and the detail shows the ORDER_ITEMS table.
(screenshot of the report)
In the ORDERS table, there is a column STORE_ID which is a foreign key to the STORES table. The STORES table has a column STORE_NAME. I am able to edit the report (change the STORE_ID to another id, e.g. 1, 2, 3) when the Source is set to the ORDERS table.
(screenshots: STORES table and its data)
I want the ORDERS report to include the STORE_NAME column from the STORES table, since it does not make sense for the user to type a STORE_ID to edit a row. I want the user to be able to edit the STORE_ID by entering the STORE_NAME or by choosing it from an LOV. I changed the report Source Type to SQL Query and ran the code below.
select
ORDERS_LOCAL.*,
STORES.STORE_NAME
from ORDERS_LOCAL
inner join STORES
ON ORDERS_LOCAL.STORE_ID=STORES.STORE_ID
However, when I try to edit a cell, I encounter an error ORA-01776: cannot modify more than one base table through a join view
I've found a post/solution regarding this error and tried to follow the instructions. The first solution does not work in my case because I actually want the user to be able to edit the STORE_ID column by showing STORE_NAME.
I've tried changing and running the PL/SQL code exactly as instructed, but nothing is saved when I change a cell value and click Save, and I don't receive any error.
BEGIN
  CASE :apex$row_status
    WHEN 'C' THEN
      INSERT INTO stores (store_id, store_name)
      VALUES (:p10_store_id, :p10_store_name);

      INSERT INTO orders_local (order_id,
                                order_number,
                                order_date,
                                store_id,
                                full_name,
                                email,
                                city,
                                state,
                                zip_code,
                                credit_card,
                                order_items)
      VALUES (:p10_order_id,
              :p10_order_number,
              :p10_order_date,
              :p10_store_id,
              :p10_full_name,
              :p10_email,
              :p10_city,
              :p10_state,
              :p10_zip_code,
              :p10_credit_card,
              :p10_order_items);
    WHEN 'U' THEN
      UPDATE orders_local
         SET order_id     = :p10_order_id,
             order_number = :p10_order_number,
             order_date   = :p10_order_date,
             store_id     = :p10_store_id,
             full_name    = :p10_full_name,
             email        = :p10_email,
             city         = :p10_city,
             state        = :p10_state,
             zip_code     = :p10_zip_code,
             credit_card  = :p10_credit_card,
             order_items  = :p10_order_items
       WHERE order_id = :p10_order_id;

      UPDATE stores
         SET store_name = :p10_store_name
       WHERE store_id = :p10_store_id;
    WHEN 'D' THEN
      DELETE FROM orders_local
       WHERE order_id = :p10_order_id;

      DELETE FROM stores
       WHERE store_id = :p10_store_id;
  END CASE;
END;
Take a step back. The "report that is editable" is an interactive grid. If the report is display only, then you can use any SQL to display data. However, if it is editable then the SQL statement is used to update the rows as well. The statement
select
ORDERS_LOCAL.*,
STORES.STORE_NAME
from ORDERS_LOCAL
inner join STORES
ON ORDERS_LOCAL.STORE_ID=STORES.STORE_ID
cannot be used to update the store_id in the orders_local table. Currently you're trying to work around this with custom code for the update, but that is overcomplicating things. So take a step back and restart.
The query for the interactive grid should be
select
*
from ORDERS_LOCAL
Define a List of Values to display the select list for Stores. The query for that list of values is
select
store_id as return_value,
store_name as display_value
from stores
In the interactive grid, use this list of values for the store_id column.
That is all there is to it. This will allow you to use the native process for handling the IG updates.

Query for listing Datasets and Number of tables in Bigquery

So I'd like to make a query that shows all the datasets in a project and the number of tables in each one. My problem is with the number of tables.
Here is what I'm stuck with:
SELECT
smt.catalog_name as `Project`,
smt.schema_name as `DataSet`,
( SELECT
COUNT(*)
FROM ***DataSet***.INFORMATION_SCHEMA.TABLES
) as `nbTable`,
smt.creation_time,
smt.location
FROM
INFORMATION_SCHEMA.SCHEMATA smt
ORDER BY DataSet
The view INFORMATION_SCHEMA.SCHEMATA lists all the datasets of the project the query is executed in, and the view INFORMATION_SCHEMA.TABLES lists all the tables of a given dataset.
The thing is that INFORMATION_SCHEMA.TABLES needs the dataset to be specified in order to give table information: dataset.INFORMATION_SCHEMA.TABLES
So what I need is to replace the ***DataSet*** placeholder with the value I get from the query itself (smt.schema_name).
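For illustration, this is what the per-dataset count looks like when the dataset is hard-coded (my_dataset is just a placeholder name used here for the example):
SELECT
  table_schema AS dataset_id,
  COUNT(*) AS table_count
FROM my_dataset.INFORMATION_SCHEMA.TABLES
GROUP BY table_schema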
I am not sure whether I can do it with a subquery, and I don't really know how to manage it.
I hope I'm clear enough; thanks in advance if you can help.
You can do this using BigQuery's procedural language (scripting) as follows:
CREATE TEMP TABLE table_counts (dataset_id STRING, table_count INT64);
FOR record IN
(
SELECT
catalog_name as project_id,
schema_name as dataset_id
FROM `elzagales.INFORMATION_SCHEMA.SCHEMATA`
)
DO
EXECUTE IMMEDIATE
CONCAT("INSERT table_counts (dataset_id, table_count) SELECT table_schema as dataset_id, count(table_name) from ", record.dataset_id,".INFORMATION_SCHEMA.TABLES GROUP BY dataset_id");
END FOR;
SELECT * FROM table_counts;
This will return one row per dataset with the number of tables it contains.

Oracle Apex IG force user to have filter on column

I need to force the user to have a filter on a date column. The dataset is really big, so this is a must-have. I know how to force the user to have any filter at all; just add this to the "Where Clause":
:apex$f1 is not null
But I need to find out how to force the user to have a filter on one specific column.
As I wrote in a comment, this is the solution (I couldn't find a better one).
In the 'Where Clause' I added this:
EXISTS (SELECT /*+ NO_UNNEST */ F.*, R.*, C.*
          FROM apex_appl_page_ig_rpt_filters F,
               apex_appl_page_ig_rpts R,
               apex_appl_page_ig_columns C
         WHERE F.report_id = R.report_id
           AND C.column_id = F.column_id
           AND F.application_id = 100
           AND F.page_id = 100
           AND C.name = 'MY_COLUMN_NAME'
           AND C.region_name = 'MY_REGION_NAME'
           AND R.session_id = :APP_SESSION)
This checks the IG settings in the APEX dictionary views. In addition, you just need to write something in the "When No Data Found" message about that column being required, to inform users.
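As a small variant (my suggestion, not part of the original answer), the two hard-coded 100 literals can be replaced with the standard :APP_ID and :APP_PAGE_ID bind variables, so the clause keeps working if the application or page ID changes:
EXISTS (SELECT /*+ NO_UNNEST */ F.*, R.*, C.*
          FROM apex_appl_page_ig_rpt_filters F,
               apex_appl_page_ig_rpts R,
               apex_appl_page_ig_columns C
         WHERE F.report_id = R.report_id
           AND C.column_id = F.column_id
           AND F.application_id = :APP_ID
           AND F.page_id = :APP_PAGE_ID
           AND C.name = 'MY_COLUMN_NAME'
           AND C.region_name = 'MY_REGION_NAME'
           AND R.session_id = :APP_SESSION)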

Multiple files upload through wwv_flow_files in Oracle Apex 4.1

I have selected the Storage Type "APEX dynamic table: wwv_flow_files".
But the DML statement is not letting the BLOB save into the dynamic table. Once it saves, my plan is to move the data from wwv_flow_files to my own table with the help of a process or dynamic action.
Your help is valuable.
Regards,
Anshul Ayushya
Adding a process for each file browse item will work with the PL/SQL code below:
insert into case_document (case_id,
                           document_name,
                           report_type,
                           document_date,
                           document_category,
                           description,
                           document,
                           file_type,
                           filename)
select :P59_CASE_ID,
       :P59_DOCNAME,
       :P59_REPORT_TYPE,
       :P59_DOCUMENT_DATE,
       :P59_DOCUMENT_CATEGORY,
       :P59_DOCDESCRIPTION,
       blob_content,
       mime_type,
       filename
  from wwv_flow_files
 where name = :P59_DOCUMENT;

delete from wwv_flow_files where name = :P59_DOCUMENT;
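If several file browse items are on the same page, a hedged sketch of the same idea in a single process could look like this. The item names P59_DOCUMENT1 through P59_DOCUMENT3 are hypothetical placeholders, and only a few of the CASE_DOCUMENT columns are shown:
begin
  for f in (select name, blob_content, mime_type, filename
              from wwv_flow_files
             where name in (:P59_DOCUMENT1, :P59_DOCUMENT2, :P59_DOCUMENT3))
  loop
    -- copy each uploaded file into the application table
    insert into case_document (case_id, document_name, document, file_type, filename)
    values (:P59_CASE_ID, :P59_DOCNAME, f.blob_content, f.mime_type, f.filename);

    -- then remove it from the temp table
    delete from wwv_flow_files where name = f.name;
  end loop;
end;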

Django: Distinct foreign keys

class Log(models.Model):
    project = models.ForeignKey(Project)
    msg = models.CharField(...)
    date = models.DateField(...)
I want to select the four most recent Log entries where each Log entry must have a unique project foreign key. I've tried the solutions from a Google search, but none of them works, and the Django documentation isn't very good for this kind of lookup.
I tried stuff like:
Log.objects.all().distinct('project')[:4]
Log.objects.values('project').distinct()[:4]
Log.objects.values_list('project').distinct('project')[:4]
But these either return nothing or return Log entries of the same project.
Any help would be appreciated!
Queries don't work like that - either in Django's ORM or in the underlying SQL. If you want to get unique IDs, you can only query for the ID. So you'll need to do two queries to get the actual Log entries. Something like:
id_list = Log.objects.order_by('-date').values_list('project_id', flat=True).distinct()[:4]
entries = Log.objects.filter(project_id__in=id_list)
Actually, you can get the project_ids in SQL. Assuming that you want the unique project ids for the four projects with the latest log entries, the SQL would look like this:
SELECT project_id, max(logs.date) as max_date
FROM logs
GROUP BY project_id
ORDER BY max_date DESC LIMIT 4;
Now, you actually want all of the log information. In PostgreSQL 8.4 and later you can use windowing functions, but that doesn't work on other versions/databases, so I'll do it the more complex way:
SELECT logs.*
FROM logs JOIN (
SELECT project_id, max(logs.date) as max_date
FROM logs
GROUP BY project_id
ORDER BY max_date DESC LIMIT 4 ) as latest
ON logs.project_id = latest.project_id
AND logs.date = latest.max_date;
Now, if you have access to windowing functions, it's a bit neater (I think anyway), and certainly faster to execute:
SELECT * FROM (
    SELECT logs.field1, logs.field2, logs.field3, logs.date,
           rank() OVER ( PARTITION BY project_id
                         ORDER BY "date" DESC ) AS dateorder
    FROM logs ) AS logsort
WHERE dateorder = 1
ORDER BY logsort.date DESC LIMIT 4;
OK, maybe it's not easier to understand, but take my word for it, it runs worlds faster on a large database.
I'm not entirely sure how that translates to object syntax, though, or even if it does. Also, if you wanted to get other project data, you'd need to join against the projects table.
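For example, a sketch of that join (the projects table's id and name columns are assumptions here, not something given in the question):
SELECT logsort.*, projects.name
FROM (
    SELECT logs.*,
           rank() OVER ( PARTITION BY project_id
                         ORDER BY "date" DESC ) AS dateorder
    FROM logs ) AS logsort
JOIN projects ON projects.id = logsort.project_id
WHERE dateorder = 1
ORDER BY logsort.date DESC LIMIT 4;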
I know this is an old post, but in Django 2.0, I think you could just use:
Log.objects.values('project').distinct().order_by('project')[:4]
You need two querysets. The good thing is it still results in a single trip to the database (though there is a subquery involved).
from django.db.models import Max

latest_ids_per_project = Log.objects.values_list(
    'project'
).annotate(
    latest=Max('date')
).order_by('-latest').values_list('project')

log_objects = Log.objects.filter(
    id__in=latest_ids_per_project[:4]
).order_by('-date')
This looks a bit convoluted, but it actually results in a surprisingly compact query:
SELECT "log"."id",
"log"."project_id",
"log"."msg"
"log"."date"
FROM "log"
WHERE "log"."id" IN
(SELECT U0."id"
FROM "log" U0
GROUP BY U0."project_id"
ORDER BY MAX(U0."date") DESC
LIMIT 4)
ORDER BY "log"."date" DESC