How to get Region ID or Region Name in Oracle APEX 5

I want to implement an authorization scheme using my own tables. For this purpose I need to get the current region name. How can I get it?

First, you'll need to assign a static ID to your region. Then you can query the apex_application_page_regions view.
SELECT region_name
FROM apex_application_page_regions
WHERE static_id = 'SOME_STATIC_ID'
AND page_id = :APP_PAGE_ID
AND application_id = :APP_ID;

You don't actually need to assign a static ID for this purpose. There is a REGION_ID column in the apex_application_page_regions view, so you can filter on REGION_NAME instead. Keep in mind that region names are not guaranteed to be unique, so you may also want to filter on application_id and page_id.
select region_id from apex_application_page_regions
where region_name = 'your_region_name';

You can easily turn this into a function for future use:
function get_region_id (p_app_id number, p_app_page_id number, p_ir_static_id varchar2) return number is
  l_region_id number;
begin
  -- look up the region by its static ID within the given application and page
  select region_id
    into l_region_id
    from apex_application_page_regions
   where application_id = p_app_id
     and page_id = p_app_page_id
     and upper(static_id) = upper(p_ir_static_id)
     and upper(template) = upper('Interactive Report');
  return l_region_id;
end get_region_id;
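For example, you can call it straight from SQL in a page process or computation (the static ID 'EMP_IR' here is a hypothetical placeholder):
select get_region_id(:APP_ID, :APP_PAGE_ID, 'EMP_IR') as region_id
from dual;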

Related

Custom AWS metrics to display how many entries are newly added/updated in DynamoDB every day

It looks like we need to create a new metric to fulfill this query. If so, what should it be?
SELECT COUNT(#new entries#)
FROM SCHEMA("AWS/DynamoDB", Operation, TableName)
WHERE TableName = 'table1' AND TableName = 'table2'
  AND TableName = 'table3' AND TableName = 'table4'
GROUP BY TableName
ORDER BY COUNT() DESC

How to update a table from another table using FROM and its ID

I want to update the country value in my play_log_new table by its IP, using an ip_range table.
Here is my query in MySQL:
update play_log_new pln
set country = ipr.country_name
from ip_range ipr where (INET_ATON(pln.ip) BETWEEN ipr.ip_start_digit AND ipr.ip_end_digit)
and pln.ip is not null and pln.country is null;
Try the SQL below. MySQL doesn't support the UPDATE ... FROM syntax, but you can join the two tables directly in the UPDATE (it would be better if you could provide the details of both tables):
UPDATE
  `play_log_new` AS `pln`
JOIN
  `ip_range` AS `ipr`
ON
  INET_ATON(`pln`.`ip`) BETWEEN `ipr`.`ip_start_digit` AND `ipr`.`ip_end_digit`
SET
  `pln`.`country` = `ipr`.`country_name`
WHERE
  `pln`.`ip` IS NOT NULL AND `pln`.`country` IS NULL
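Before running the update, you can sanity-check the join with a plain SELECT (a sketch over the same hypothetical columns):
SELECT `pln`.`ip`, `ipr`.`country_name`
FROM `play_log_new` AS `pln`
JOIN `ip_range` AS `ipr`
  ON INET_ATON(`pln`.`ip`) BETWEEN `ipr`.`ip_start_digit` AND `ipr`.`ip_end_digit`
WHERE `pln`.`ip` IS NOT NULL AND `pln`.`country` IS NULL
LIMIT 10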

GCP BigQuery: how to set an expiration date on a table via the Python API

I am using the BigQuery Python API to create a table, and I would like to set an expiration date on the table so that it is automatically dropped after a certain number of days.
Here is my code:
from datetime import datetime, timedelta
from google.cloud import bigquery as bq

client = bq.Client()
job_config = bq.QueryJobConfig()
dataset_id = dataset
table_ref = client.dataset(dataset_id).table(filename)
job_config.destination = table_ref
job_config.write_disposition = 'WRITE_TRUNCATE'
dt = datetime.now() + timedelta(seconds=259200)  # 3 days from now
unixtime = (dt - datetime(1970, 1, 1)).total_seconds()
expiration_time = unixtime
job_config.expires = expiration_time
query_job = client.query(query, job_config=job_config)
query_job.result()
The problem is that the expiration parameter doesn't seem to work: when I check the table details in the UI, the expiration date is still Never.
To answer a slightly different question, instead of specifying the expiration as part of the request options, you can use a CREATE TABLE statement instead, where the relevant option is expiration_timestamp. For example:
CREATE OR REPLACE TABLE my_dataset.MyTable
(
x INT64,
y FLOAT64
)
OPTIONS (
expiration_timestamp=TIMESTAMP_ADD(CURRENT_TIMESTAMP(), INTERVAL 3 DAY)
);
This creates a table with two columns that will expire three days from now. CREATE TABLE supports an optional AS SELECT clause, too, if you want to create the table from the result of a query (the documentation goes into more detail).
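For instance, here's a minimal sketch combining an expiration with AS SELECT (my_dataset.SourceTable is a hypothetical source table):
CREATE OR REPLACE TABLE my_dataset.MyTable
OPTIONS (
  expiration_timestamp=TIMESTAMP_ADD(CURRENT_TIMESTAMP(), INTERVAL 3 DAY)
) AS
SELECT x, y
FROM my_dataset.SourceTable;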
To update an existing table expiration time with Python:
import datetime
from google.cloud import bigquery
client = bigquery.Client()
table = client.get_table("project.dataset.table")
table.expires = datetime.datetime.now() + datetime.timedelta(days=1)
client.update_table(table, ['expires'])
Credits: /u/ApproximateIdentity
Looking at the docs for the query method, we can see that it's not possible to set an expiration time in the query job config.
The proper way of doing it is to set the expiration on the Table resource, something like:
from datetime import datetime, timedelta
from google.cloud import bigquery as bq

client = bq.Client()
job_config = bq.QueryJobConfig()
dataset_id = dataset
table_ref = client.dataset(dataset_id).table(filename)
# create the destination table up front, with the expiration set on the Table resource
table = bq.Table(table_ref)
dt = datetime.now() + timedelta(seconds=259200)  # 3 days from now
table.expires = dt
client.create_table(table)
# point the query at the pre-created table so the results land there
job_config.destination = table_ref
job_config.write_disposition = 'WRITE_APPEND'
query_job = client.query(query, job_config=job_config)
query_job.result()

Understanding Secondary Indexes

If I have a table:
CREATE TABLE Users (
  userId STRING(36) NOT NULL,
  contactName STRING(300) NOT NULL,
  eMail STRING(100) NOT NULL,
  ....
) PRIMARY KEY (userId)
and a secondary index:
CREATE NULL_FILTERED INDEX ActiveUsersByEMail
ON Users (
  eMail,
  isActive
)
and I select a record with:
SELECT * FROM Users WHERE eMail = 'test@test.com' AND isActive = TRUE
will Spanner automatically use the index, take the userId, and give me the record?
Or do I need to create:
CREATE NULL_FILTERED INDEX ActiveUsersByEMail_01
ON Users (
  eMail,
  isActive,
  userId
)
and first fetch the userId with:
SELECT userId FROM Users@{FORCE_INDEX=ActiveUsersByEMail_01} WHERE eMail = 'test@test.com' AND isActive = TRUE
and then fetch the record with:
SELECT * FROM Users WHERE userId = '${userId}'
The question is: does Spanner automatically use secondary indexes for a standard SELECT when the conditions match the index keys?
You should use FORCE_INDEX, as Cloud Spanner will only choose an index in rare circumstances, as stated in the documentation. You can use the STORING clause to add data directly to the index, allowing you to read it straight from the index and avoid the second call. This is suggested for common query patterns in your application.
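For example, a sketch of the same index with a STORING clause (storing contactName is just an illustrative choice):
CREATE NULL_FILTERED INDEX ActiveUsersByEMail
ON Users (
  eMail,
  isActive
) STORING (contactName)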
I asked the same question on GitHub, and it turned out that this is easily done (without creating an additional index) by:
SELECT * FROM Users@{FORCE_INDEX=ActiveUsersByEMail} WHERE eMail = 'test@test.com' AND isActive = TRUE
The search then goes through the index, and the row comes back with all its fields.

Select some columns instead of select * with SubSonic 3

I'm using SubSonic 3 as my O/R mapper in the project. My problem is that the query SubSonic generates for select and other operations is like:
var repo = GetRepo();
var results = repo.GetAll();
And this does a select * on the entity, but I need to select only Id and Title from the table; I don't have permission to select * on the table.
There isn't much to go off of, and you've probably solved this problem already, but in case someone else has the same question, here's my two cents.
If you just wanted to select Id and Title, you could do one of the following.
I'm guessing you're using ActiveRecord.
//results in a list of anonymous objects with Id and Title
var results = (from r in YourTable.All() select new { Id = r.Id, Title = r.Title }).ToList();
If you're using AdvancedTemplate, then you could do:
//create an instance of the db schema
var repo = new SubSonic.AdvancedTemplate.YourDatabase();
//results in a list of anonymous objects with Id and Title
var results = (from r in repo.YourTable select new { Id = r.Id, Title = r.Title }).ToList();