Routine for bulk deletion of users in Toad

Can anyone help me with a routine for bulk deletion of users, based on some criteria, from the database using Toad?

If you are on an Oracle database, this example deletes all users whose username starts with DELETEMEUSER. You can write a query that fetches the users matching your criteria and drop them using dynamic SQL.
DECLARE
   CURSOR c1 IS
      SELECT username
        FROM all_users
       WHERE username LIKE 'DELETEMEUSER%';
BEGIN
   FOR c1_rec IN c1
   LOOP
      -- DROP USER is DDL, so it has to go through dynamic SQL
      EXECUTE IMMEDIATE 'DROP USER ' || c1_rec.username || ' CASCADE';
   END LOOP;
END;
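If you want to review which accounts would be affected before dropping anything, a minimal sketch (assuming the same DELETEMEUSER% pattern) is to generate the DROP statements first and inspect or spool them:
-- generate DROP statements for review; run them manually once verified
SELECT 'DROP USER ' || username || ' CASCADE;' AS drop_stmt
  FROM all_users
 WHERE username LIKE 'DELETEMEUSER%';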

Custom pl/sql code to process the IG data on submit instead of the native process

How can I use custom PL/SQL code to process (insert/update) the Interactive Grid data, which is based on an inner join, on submit instead of the native process in Oracle APEX? If anyone has a solution, please reply.
SELECT dept.dname,
       emp.empno,
       emp.ename,
       emp.job,
       emp.mgr,
       emp.hiredate
  FROM emp
       INNER JOIN dept ON emp.deptno = dept.deptno
To update just the emp record (it should be trivial to add code for the dept table; see the sketch after this block), you could use this block.
begin
   if :APEX$ROW_STATUS = 'D' then
      -- delete the record
      delete from emp
       where empno = :EMPNO;
   elsif :APEX$ROW_STATUS = 'U' then
      -- update the record
      update emp
         set ename    = :ENAME,
             job      = :JOB,
             mgr      = :MGR,
             hiredate = :HIREDATE
       where empno = :EMPNO;
   elsif :APEX$ROW_STATUS = 'C' then
      -- create the record
      insert into emp (ename, job, mgr, hiredate)
      values (:ENAME, :JOB, :MGR, :HIREDATE);
   end if;
end;
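For completeness, here is a minimal sketch of the dept side. It assumes DEPTNO is also selected in the grid query (for example as a hidden column), which the query shown above does not currently do, and that only the department name is edited:
begin
   -- sketch only: :DEPTNO and :DNAME assume those columns exist in the grid
   if :APEX$ROW_STATUS = 'U' then
      update dept
         set dname = :DNAME
       where deptno = :DEPTNO;
   end if;
end;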
There are a couple of very thorough blog posts on this topic, for example here:
https://mikesmithers.wordpress.com/2019/07/23/customizing-dml-in-an-apex-interactive-grid/. Make sure to also check the references section at the bottom.

wwv_flow_files fields no longer available in APEX 19.1

We are migrating applications from APEX 4.2 to APEX 19.1. On one page we used the temp table wwv_flow_files to upload a spreadsheet and then run a PL/SQL process. Since wwv_flow_files is now deprecated, we have to use the APEX_APPLICATION_TEMP_FILES temp table, but unfortunately the columns we used do not exist in the new temp table in APEX 19.1.
select blob_content
  into v_blob_data
  from wwv_flow_files
 where last_updated = (select max(last_updated)
                         from wwv_flow_files
                        where upper(updated_by) = upper(:APP_USER))
   and id = (select max(id)
               from wwv_flow_files
              where upper(updated_by) = upper(:APP_USER));
A brief note about the PL/SQL process: the PL/SQL above is part of a block in which the spreadsheet is uploaded and, after multiple validations, its values are loaded into a physical Oracle table.
We are performing the migration and have to make sure, with minimal effort, that the functionality keeps working as is.
Please help. Thanks in advance.
In the WHERE clause you only need the column "name" from APEX_APPLICATION_TEMP_FILES!
https://docs.oracle.com/cd/E11882_01/appdev.112/e11945/up_dn_files.htm#CIHDDJGF
Example:
IF :P1_FILE_NAME IS NOT NULL THEN
   INSERT INTO oehr_file_subject (id, name, subject, blob_content, mime_type)
   SELECT id, :P1_FILE_NAME, :P1_SUBJECT, blob_content, mime_type
     FROM APEX_APPLICATION_TEMP_FILES
    WHERE name = :P1_FILE_NAME;
END IF;
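Applied to the block from the question, a minimal sketch could look like the following. It assumes the page has a File Browse item (here hypothetically named P1_FILE) whose storage type is APEX_APPLICATION_TEMP_FILES, so the item value holds the uploaded file's name:
-- :P1_FILE is a hypothetical File Browse item stored in APEX_APPLICATION_TEMP_FILES
select blob_content
  into v_blob_data
  from apex_application_temp_files
 where name = :P1_FILE;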

redshift - Not able to apply listagg function

I am getting error when trying to use listagg function.
Query
select
    a.user_name,
    listagg(a.group_name::text)
        within group (order by a.group_name) as group_name
from (
    select
        usename as user_name,
        groname as group_name
    from pg_user
    join pg_group
        on pg_user.usesysid = any(pg_group.grolist)
        and pg_group.groname in (select distinct pg_group.groname from pg_group)
) a
group by user_name
Error
[Code: 500310, SQL State: XX000] Amazon Invalid operation: One or more of the used functions must be applied on at least one user created tables. Examples of user table only functions are LISTAGG, MEDIAN, PERCENTILE_CONT, etc;
None of the values are null.
Just like there are some functions that can only be run on the leader node, there are some that can only be run on the compute nodes - listagg() is one of these. If you need to run listagg() on leader-node data there are a few approaches you can use. (Sorry, I'm not on a cluster right now so I cannot test these directly - I saw your question was aging and thought I'd get you started. Grain of salt, as I also cannot directly observe your issue, but I think I know what is going on.)
1. You can use a cursor to save the data from the leader node and use this as the source for listagg(). A stored procedure can streamline this. There are examples of this on Stack Overflow.
2. You can make a temp table out of the leader-node data and use this in listagg(), but I expect you will need to exit (UNLOAD) and re-enter (COPY) the cluster to do this.
There just isn't a direct path from leader-node-only results to the compute nodes without some kind of push-up like this; it's a consequence of the large networked cluster architecture of Redshift.
UPDATE
I got some cluster time and there are several unexpected issues with this one. The key ones are that grolist is an array type that isn't generally supported cluster-wide, and that pg_group has to be used as the source. So this is going to require both #1 and #2 from above.
The process goes like this:
1. Define a cursor to hold the result of the pg_user / pg_group join select statement.
2. Move the cursor results into a temp table.
3. Use the temp table as the source for the outer (listagg()) select.
A stored procedure can be written to do #1 and #2, which streamlines things. So you end up with the following SQL:
CREATE OR REPLACE PROCEDURE make_user_group()
AS $$
DECLARE
    row record;
BEGIN
    -- temp table that will hold the leader-node results
    create temp table user_group (user_name varchar(256), group_name varchar(256));
    -- loop over the leader-node-only catalog query and push each row into the temp table
    for row in SELECT
                   usename::text as user_name,
                   groname::text as group_name
               FROM pg_user
               JOIN pg_group
                 ON pg_user.usesysid = ANY(pg_group.grolist)
                AND pg_group.groname in (SELECT DISTINCT pg_group.groname from pg_group)
    LOOP
        INSERT INTO user_group (user_name, group_name) VALUES (row.user_name, row.group_name);
    END LOOP;
END;
$$ LANGUAGE plpgsql;
call make_user_group();
select
user_name,
listagg(group_name::text, ', ')
within group (order by group_name) as group_name
from user_group
group by user_name;
Clearly the stored procedure only needs to be created once but called every time the temp table needs to be created.
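One practical note (my own assumption, not tested in the original answer): if you call the procedure a second time in the same session, the CREATE TEMP TABLE statement will fail because user_group already exists, so drop it first:
-- remove any temp table left over from a previous call in this session
drop table if exists user_group;
call make_user_group();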

Saving User Role Assignments

Is it possible to save User Role Assignments? When I replace an application, or remove it and add a new one with the same application id, APEX removes the User Role Assignments from the APEX_APPL_ACL_USERS table. I could create a custom table to store the User Role Assignments and later use it to insert into APEX_APPL_ACL_USERS, but I don't think that is a good solution.
There currently isn't a built-in way. The safest approach is to back up your user roles before replacing the app. I found that when I do an import through the UI the users are preserved; when you delete the app, the users are gone. There is a great blog post about how to generate a script to restore user roles.
For a single app, here is a script (mostly taken from the above blog). Replace the 12345 with your application id. It generates a script that you have to execute after you have recreated your application.
WITH application_id (app_id) AS
(SELECT 12345 FROM dual)
SELECT txt
FROM (SELECT 1 x, 'BEGIN' txt FROM DUAL
UNION ALL
SELECT 2,
RPAD(' ', 3)
|| CASE
WHEN rn = 1 THEN REPLACE(q'[APEX_UTIL.set_workspace('~workspace~');]', '~workspace~', workspace)
END
|| REPLACE(
REPLACE(
REPLACE(
q'[APEX_ACL.ADD_USER_ROLE(p_application_id=>~app~,p_user_name=>'~user~',p_role_static_id=>'~role~');]',
'~app~',
application_id),
'~user~',
user_name),
'~role~',
role_static_id) txt
FROM (SELECT workspace,
ROW_NUMBER() OVER(PARTITION BY workspace ORDER BY application_id, user_name, role_static_id) rn,
application_id,
user_name,
role_static_id
FROM apex_appl_acl_user_roles
JOIN application_id a ON a.app_id = application_id)
UNION ALL
SELECT 3, ' COMMIT;' FROM DUAL
UNION ALL
SELECT 4, 'END;' FROM DUAL)
ORDER BY x;
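For illustration, the generated script (with made-up workspace, user, and role values; note that the first generated line carries both the set_workspace call and the first ADD_USER_ROLE call) looks roughly like this:
-- example output with hypothetical workspace, user, and role values
BEGIN
   APEX_UTIL.set_workspace('MY_WORKSPACE');APEX_ACL.ADD_USER_ROLE(p_application_id=>12345,p_user_name=>'JSMITH',p_role_static_id=>'ADMINISTRATOR');
   APEX_ACL.ADD_USER_ROLE(p_application_id=>12345,p_user_name=>'JDOE',p_role_static_id=>'CONTRIBUTOR');
 COMMIT;
END;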

How to search through rows and assign column value based on search in Postgres?

I'm creating an application similar to Twitter, and I'm writing a query for its profile page. When a user visits another user's profile, they can view the tweets liked by that particular user. For that, my query retrieves all tweets liked by that user, along with the total likes and comments on each tweet.
An additional piece of information I need is whether the current user has liked any of those tweets; if so, I want the query to return it as boolean true so I can display the tweet as liked in the UI.
I don't know how to achieve this part. The following is a sub-query from my main query:
select l.tweet_id, count(*) as total_likes,
<insert here> as current_user_liked
from api_likes as l
INNER JOIN accounts_user ON l.liked_by_id = accounts_user.id
group by tweet_id
Is there a built-in function in Postgres that can scan through the filtered rows and check whether the current user's id is present in liked_by_id, and if so mark current_user_liked as true, else false?
You want to left outer join back into the api_likes table.
select l.tweet_id, count(*) as total_likes,
       case
           when count(lu.tweet_id) = 0 then false
           else true
       end as current_user_liked
from api_likes as l
inner join accounts_user on l.liked_by_id = accounts_user.id
left join api_likes as lu on lu.tweet_id = l.tweet_id
                         and lu.liked_by_id = <current user id>
group by l.tweet_id
This will continue to bring in the rows you are seeing and will add the matching row from the lu alias on api_likes. If no such row exists matching l.tweet_id and the current user's id, the columns from the lu alias will be null; since count() ignores nulls, count(lu.tweet_id) is zero exactly when the current user has not liked the tweet.
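As a more compact alternative (a sketch, assuming the same table and column names, with :current_user_id standing in for however your application binds the current user's id), Postgres's bool_or aggregate can produce the flag directly without the second join:
select l.tweet_id,
       count(*) as total_likes,
       -- true if any of the grouped likes belongs to the current user
       bool_or(l.liked_by_id = :current_user_id) as current_user_liked
from api_likes as l
inner join accounts_user on l.liked_by_id = accounts_user.id
group by l.tweet_id;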