I have a SAS dataset called list that contains the path and filename of every file in a directory.
Sample dataset:
I want to create a new column based on the suffix of the column the_name, adding 1 to it, so 01 becomes 02 and 02 becomes 03.
For example:
the_name: FOR_PROCESSING_1234562020042002
new_name: FOR_PROCESSING_1234562020042003
the_name: FOR_PROCESSING_1234562020042101
new_name: FOR_PROCESSING_1234562020042102
Thanks for your help.
Recy:
A safer increment would be to scan the entire number off the end of the_name, rather than relying on incrementing just the tail-end digits.
data _null_;
   the_name = 'FOR_PROCESSING_1234562020042002';
   suffix   = scan(the_name, -1, '_');            /* numeric tail after the last underscore */
   nextnum  = input(suffix, best20.) + 1;         /* convert to numeric and increment */
   new_name = cats(transtrn(the_name, trim(suffix), ''), nextnum);  /* swap the old tail for the incremented value */
   put the_name= / new_name= ;
run;
--- LOG ---
the_name=FOR_PROCESSING_1234562020042002
new_name=FOR_PROCESSING_1234562020042003
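Applied to the list dataset itself, the same logic might look like the sketch below (the output dataset name list_new and the variable lengths are assumptions; the_name and list come from the question):
data list_new;                                    /* hypothetical output dataset name */
   set list;
   length suffix $32 new_name $256;               /* assumed lengths */
   suffix   = scan(the_name, -1, '_');
   new_name = cats(transtrn(the_name, trim(suffix), ''), input(suffix, best20.) + 1);
   drop suffix;
run;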
I have tried a lot to do a VLOOKUP between two workbooks; one is already open (book1) and the other needs to be opened with the GetOpenFilename() function, let us call it (book2).
(book1) contains Sheet1, where I want to do the VLOOKUP.
(book2) contains Sheet2, where I want to match the id (column 1) and get the names (column 2) by VLOOKUP.
Here is my code. Notice that the number of rows changes every month, so I need to loop until the last row in Sheet1 (in book1).
Sub status_first_Step()
    Range("C:C").Insert
    Set book1 = Sheets("Sheet1")
    book2 = Application.GetOpenFilename()
    Workbooks.Open book2
    Set book2 = ActiveWorkbook
    For i = 1 To Cells(Rows.Count, "A").End(xlUp).Row
        ws1.Cells(i, "C").Value = Application.VLookup(book1.Cells(i, 1).Value, book2.Sheets("sheet2").Columns("A:Y"), 2, 0)
    Next i
End Sub
Thank You.
I hope someone can help!
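A minimal sketch of one way this might be written, fully qualifying every range so the loop always reads Sheet1 of book1 and the lookup table of book2 (the variable names, the A:B lookup range with ids in column A and names in column B, and the cancel handling are assumptions, not the original code):
Sub status_first_Step()
    Dim wb1 As Workbook, wb2 As Workbook
    Dim ws1 As Worksheet, ws2 As Worksheet
    Dim fileName As Variant
    Dim lastRow As Long, i As Long
    Dim result As Variant

    Set wb1 = ThisWorkbook                            ' workbook that holds Sheet1 (assumption)
    Set ws1 = wb1.Sheets("Sheet1")
    ws1.Range("C:C").Insert                           ' insert the result column

    fileName = Application.GetOpenFilename()          ' ask the user to pick book2
    If TypeName(fileName) = "Boolean" Then Exit Sub   ' user pressed Cancel
    Set wb2 = Workbooks.Open(fileName)
    Set ws2 = wb2.Sheets("Sheet2")

    lastRow = ws1.Cells(ws1.Rows.Count, "A").End(xlUp).Row   ' last used row in column A of Sheet1
    For i = 1 To lastRow
        result = Application.VLookup(ws1.Cells(i, 1).Value, ws2.Columns("A:B"), 2, 0)
        If Not IsError(result) Then ws1.Cells(i, "C").Value = result
    Next i
End Sub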
I am fairly new to Apache Beam and Cloud Dataflow, and what I would like to do is:
Read the GZIP file contents
The file is fixed width
Filter said contents based on the first two characters into another PCollection
What I have so far:
The ParDo function:
class FilterHeader(beam.DoFn):
    def process(self, element):
        if element[:2] == '01':
            yield element
        else:
            return 'Header Not found'  # effectively returns nothing; I was just trying to see whether anything came back when the row did not match
And my pipeline is as follows
with beam.Pipeline(options=PipelineOptions(pipeline_args)) as p:
    # Initial PCollection - full file
    rows = (
        p | 'Read daily Spot File' >> beam.io.ReadFromText(
                file_pattern='gs://<my bucket>/filename.gz',
                compression_type='gzip',
                coder=coders.BytesCoder(),
                skip_header_lines=0))

    # Header collection - filtered on first two characters = 01
    header_collection = (
        rows | 'Filter Record Type 01 to our HEADER COLLECTION' >> beam.ParDo(FilterHeader())
             | 'Output Header Rows' >> beam.io.WriteToText('gs://<destination bucket>/new_fileName.txt'))
When I remove the filter, I can output all the rows, so there isn't anything wrong with the file or the initial PCollection. Once I add the filter, the rows I am after do not come out. And yes, the data is present in the file, i.e. there is one row whose first two characters are 01.
Is there something simple I am missing?
Any direction greatly appreciated.
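One possible cause, assuming the job runs on Python 3: with coder=coders.BytesCoder() the read step emits bytes objects, and bytes[:2] == '01' is never true, so the filter drops everything. A minimal sketch of a DoFn that tolerates bytes input (the utf-8 decoding is an assumption about the file's encoding):
import apache_beam as beam


class FilterHeader(beam.DoFn):
    """Yield only records whose first two characters are '01'."""

    def process(self, element):
        if isinstance(element, bytes):
            element = element.decode('utf-8')  # assumption: the file is utf-8/ascii text
        if element[:2] == '01':
            yield element
Alternatively, leaving coder at its default (StrUtf8Coder) in ReadFromText yields str elements, and the original comparison against '01' would then match.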
I am looking to speed up the following PL/SQL block. Right now it has run for over 2 hours with no sign of completing. We aborted that one and are attempting it again with an EXIT WHEN of 20, and it still shows no sign of actually completing.
We are running these through SQL Developer 17.3, and each of the (4) tables has about 15k rows.
The goal is to grab all of the SSNs in our database, change the first character to an illegal character and the last 2 characters to a random A-Z combination, and then update that SSN in every table that uses it (4 tables).
declare
    v_random         varchar2(2);
    v_origin_ssn     varchar2(100);
    v_working_start  varchar2(100);
    v_working_middle varchar2(100);
    v_new_ssn        varchar2(100);
begin
    for o in (
        select distinct ssn                    --loop all rows in tbl_customer
        from program_one.tbl_customer
    )
    loop
        if regexp_like(o.ssn, '^[A-Za-z9].*[A-Z]$') then
            continue;                          --if this is already scrambled, skip
        else
            select dbms_random.string('U', 2)  --create 2 random capital letters
            into v_random
            from dual;

            v_origin_ssn := o.ssn;             --set origin ssn with the existing ssn

            if regexp_like(o.ssn, '^[A-Za-z]') then  --if first char is already alpha, leave it alone, otherwise 9
                v_working_start := substr(o.ssn, 1, 1);
            else
                v_working_start := 9;
            end if;

            v_working_middle := substr(o.ssn, 2, 6);                       --set middle ssn with the unchanged numbers
            v_new_ssn := v_working_start || v_working_middle || v_random;  --create new sanitized ssn

            update program_one.tbl_customer    --update if exists in tbl_customer
            set ssn = v_new_ssn
            where ssn = v_origin_ssn;
            commit;

            update program_one.tbl_mhc_backup  --update if exists in tbl_mhc_backup
            set ssn = v_new_ssn
            where ssn = v_origin_ssn;
            commit;

            update program_two.tbl_waiver      --update if exists in tbl_waiver
            set ssn = v_new_ssn
            where ssn = v_origin_ssn;
            commit;

            update program_two.tbl_pers        --update if exists in tbl_pers
            set ssan = v_new_ssn
            where ssan = v_origin_ssn;
            commit;
        end if;
        --dbms_output.put_line(v_origin_ssn||' : '||v_new_ssn);  --output test string to verify working correctly
    end loop;
end;
I'd do it without a function in plain SQL:
Create a table with old and new ssn:
CREATE TABLE tmp_ssn AS
SELECT ssn, '9'||substr(ssn,2,6)||dbms_random.string('U',2) as new_ssn
FROM (SELECT distinct ssn FROM program_one.tbl_customer);
CREATE UNIQUE INDEX ui_tmp_ssn ON tmp_ssn(ssn, new_ssn);
EXEC DBMS_STATS.GATHER_TABLE_STATS(null,'tmp_ssn');
... and then update the tables one by one:
UPDATE program_one.tbl_customer z
   SET z.ssn = (SELECT q.new_ssn FROM tmp_ssn q WHERE q.ssn = z.ssn)
 WHERE EXISTS (SELECT 1 FROM tmp_ssn q WHERE q.ssn = z.ssn);
COMMIT;
UPDATE program_one.tbl_mhc_backup z
   SET z.ssn = (SELECT q.new_ssn FROM tmp_ssn q WHERE q.ssn = z.ssn)
 WHERE EXISTS (SELECT 1 FROM tmp_ssn q WHERE q.ssn = z.ssn);
COMMIT;
etc
If that is still too slow, I'd do:
RENAME tbl_customer to tbl_customer_old;
CREATE TABLE tbl_customer as
SELECT s.new_ssn as ssn, t.col1, t.col2, ... , t.coln
FROM tbl_customer_old t JOIN tmp_ssn s USING(ssn);
DROP TABLE tbl_customer_old;
I have a table temp that has a column named "REMARKS".
Create script:
Create table temp (id number,remarks varchar2(2000));
Insert script:
Insert into temp values (1,'NAME =GAURAV Amount=981 Phone_number =98932324 Active Flag =Y');
Insert into temp values (2,'NAME =ROHAN Amount=984 Phone_number =98932333 Active Flag =N');
Now, I want to fetch the corresponding values of NAME, Amount, Phone_number, and Active Flag from the REMARKS column of the table.
I thought of using regular expressions, but I am not comfortable using them.
I tried with substr and instr to fetch the name from the remarks column, but if I want to fetch all four, I would need to write PL/SQL. Can we achieve this using a regular expression?
Can I get output (a cursor) like:
id  Name    Amount  Phone_number  Active_flag
---------------------------------------------
1   Gaurav  981     98932324      Y
2   Rohan   984     98932333      N
---------------------------------------------
Thanks for your help
You can use something like:
select regexp_replace(remarks, '.*NAME *=([^ ]*).*', '\1') name,
       regexp_replace(remarks, '.*Amount *=([^ ]*).*', '\1') amount,
       regexp_replace(remarks, '.*Phone_number *=([^ ]*).*', '\1') ph_number,
       regexp_replace(remarks, '.*Active Flag *=([^ ]*).*', '\1') flag
from temp;
NAME                 AMOUNT               PH_NUMBER            FLAG
-------------------- -------------------- -------------------- --------------------
GAURAV               981                  98932324             Y
ROHAN                984                  98932333             N
I want to add a header row grouping the column headers.
Departure                   Arrival                     <-- This row is what I want to add
Airport   Gate   Date       Airport   Gate   Date
--------  -----  -----      --------  -----  -------
O'Hare    A10    10Mar      Atlanta   G19    10Mar
DFW       K98    11Mar      Denver    Z76    11Mar
Note that I'm using an ALV List, not an ALV Grid. I've looked at the sample program BALVBT01, which has a 2-level header, but it turns out that's because it displays parent/child data. My data has only one level; I just want to group the columns.
Found my solution here.
Use the top_of_list event to add custom header info before the standard header is printed. If you want to replace the standard header with your own, you can turn it off by passing is_layout-no_colhead = 'X' in the layout structure.
* Get Event table
CALL FUNCTION 'REUSE_ALV_EVENTS_GET'
IMPORTING
et_events = it_evt.
* Add pointer to custom top_of_list event handler
READ TABLE it_evt INTO wa_evt
WITH KEY name = slis_ev_top_of_list .
wa_evt-form = 'MY_TOP_OF_LIST' .
MODIFY it_evt FROM wa_evt INDEX sy-tabix .
* Pass event table when printing ALV list
CALL FUNCTION 'REUSE_ALV_LIST_DISPLAY'
EXPORTING
i_callback_program = w_prog
is_layout = fs_layout
it_fieldcat = t_fieldcat
it_events = it_evt
TABLES
t_outtab = t_spfli.
************************************
* Custom event handler to write group-level header
FORM MY_TOP_OF_LIST .
ULINE AT 1(43) .
FORMAT COLOR COL_HEADING .
WRITE: / sy-vline ,
10 'SAP' ,
22 sy-vline ,
31 'VPPA' ,
43 sy-vline .
ENDFORM.