Why can't BigQuery cast this number as an integer?

In my query, I have a value formatted as a dollar amount, like this:
Coverage_Amount
$10,000
$15,000
null
$2,000
So I remove the extra characters and map the null to 0. I get a column back like this:
Coverage_Amount
10000
15000
0
2000
However, these values are stored as strings, and when I try something like this:
CASE
WHEN Coverage_Amount IS NOT NULL THEN INTEGER(REGEXP_REPLACE(query.Coverage_Amount, r'\$|,', ''))
ELSE 0
END AS Coverage_Amount
I get back
Coverage_Amount
null
null
0
null
The documentation for the INTEGER() function says:
Casts expr to a 64-bit integer. Returns NULL if expr is a string that doesn't correspond to an integer value.
Is there anything I can do to make BigQuery recognize that these are in fact integers?

Both versions below for BigQuery (Legacy SQL and Standard SQL, respectively) work and return the result below:
Coverage_Amount val
10000 10000
15000 15000
2000 2000
Legacy SQL
#legacySQL
SELECT
Coverage_Amount,
IFNULL(INTEGER(REGEXP_REPLACE(Coverage_Amount, r'\$|,', '')), 0) AS val
FROM
(SELECT '10000' Coverage_Amount),
(SELECT '15000' Coverage_Amount),
(SELECT '2000' Coverage_Amount)
Standard SQL
#standardSQL
WITH `project.dataset.table` AS (
SELECT '10000' Coverage_Amount UNION ALL
SELECT '15000' UNION ALL
SELECT '2000'
)
SELECT
Coverage_Amount,
IFNULL(CAST(REGEXP_REPLACE(Coverage_Amount, r'\$|,', '') AS INT64), 0) AS val
FROM `project.dataset.table`
Obviously, the same works for '$15,000', '$10,000', '$2,000', and so on.
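For example, here is a minimal Standard SQL sketch with the dollar-formatted strings (and a NULL row, as in the question):
#standardSQL
WITH `project.dataset.table` AS (
SELECT '$10,000' Coverage_Amount UNION ALL
SELECT '$15,000' UNION ALL
SELECT '$2,000' UNION ALL
SELECT CAST(NULL AS STRING)
)
SELECT
Coverage_Amount,
-- strip '$' and ',' before casting; IFNULL maps the NULL row to 0
IFNULL(CAST(REGEXP_REPLACE(Coverage_Amount, r'\$|,', '') AS INT64), 0) AS val
FROM `project.dataset.table`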

It could be because you have spaces at the end of the string, e.g. '$10,000 '. So you can try using RTRIM(value, ' ')
SELECT
Coverage_Amount,
IFNULL(INTEGER(REGEXP_REPLACE(RTRIM(Coverage_Amount, ' '), r'\$|,', '')),0) AS val
FROM
(SELECT '$10,000 ' Coverage_Amount)
to delete all spaces from the end of the string.
Then the output will be:
Row Coverage_Amount val
1 $10,000 10000

Are you using Standard SQL? This worked for me (notice I use the CAST operator):
WITH data as(
select "$10,000" d UNION ALL
select "$15,000" UNION ALL
select "$2,000")
SELECT
d,
CAST(REGEXP_REPLACE(d, r'\$|,', '') AS INT64) AS Coverage_Amount
FROM data

Related

REGEX conversion of VARCHAR value to DATE in Snowflake stored procedure using RLIKE not consistent

I am trying to convert a column that has mixed date formats - 2017/12/10, 2018-02-27, 8/18/2017 to YYYY-MM-DD format through a Snowflake stored procedure. When executing through CALL statement, the order in which it executes the case statement doesn't seem to be consistent.
TableA:
CREATE TABLE TABLE_A
(
START_DATE VARCHAR,
END_DATE VARCHAR,
RECORDED_DATE VARCHAR);
INSERT INTO TABLE_A VALUES ('2021-11-09', '2021-11-09','2018/03/29');
INSERT INTO TABLE_A VALUES ('2021-11-09', '2021-11-09','2018-02-27');
INSERT INTO TABLE_A VALUES ('2021-11-09', '2021-11-09','8/18/2017');
Stored procedure:
CREATE OR REPLACE PROCEDURE LOAD_TABLE_B(LD VARCHAR)
RETURNS STRING
LANGUAGE JAVASCRIPT
EXECUTE AS CALLER
AS
$$
var insert_command =`INSERT INTO TABLE_B
SELECT START_DATE
,END_DATE
,CASE WHEN RECORDED_DATE RLIKE '\\d{4}/\\d{2}/\\d{2}' THEN TO_DATE(RECORDED_DATE, 'YYYY/MM/DD')
ELSE TO_DATE(RECORDED_DATE)
END AS RECORDED_DATE
,HASH(S.$1,S.$2,S.$3) AS CHECKSUM_HASH
FROM TABLE_A S;
`;
try {
snowflake.execute({sqlText:insert_command});
return "Success";
}
catch (err) {
throw err;
}
$$ ;
CALL LOAD_TABLE_B(1);
Error message:
Execution error in store procedure LOAD_TABLE_B: Date '2018/03/29' is not recognized At Snowflake.execute, line 18 position 11
Because you're running this in a stored procedure, the query itself goes through an extra round of parsing and character escaping before it is executed, meaning you need extra backslashes. The syntax gets borderline silly, but this is what you need:
var insert_command =`CREATE OR REPLACE TABLE TABLE_B AS
SELECT START_DATE
,END_DATE
,CASE WHEN RECORDED_DATE RLIKE '\\\\d{4}/\\\\d{2}/\\\\d{2}' THEN TO_DATE(RECORDED_DATE, 'YYYY/MM/DD')
ELSE TO_DATE(RECORDED_DATE)
END AS RECORDED_DATE
,HASH(S.$1,S.$2,S.$3) AS CHECKSUM_HASH
FROM TABLE_A S;
`;
Another solution, instead of using RLIKE in a CASE, is to just nest the TRY_TO_DATE formats in a COALESCE:
COALESCE(TRY_TO_DATE(recorded_date), TRY_TO_DATE(recorded_date, 'YYYY/MM/DD')) AS recorded_date
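Inside the stored procedure, that could look something like the sketch below; since TRY_TO_DATE needs no regular expression, the extra backslash escaping is avoided entirely (only the CASE expression from the original INSERT is swapped out):
var insert_command =`INSERT INTO TABLE_B
SELECT START_DATE
,END_DATE
-- no RLIKE here, so no double/quadruple backslash escaping is needed
,COALESCE(TRY_TO_DATE(RECORDED_DATE), TRY_TO_DATE(RECORDED_DATE, 'YYYY/MM/DD')) AS RECORDED_DATE
,HASH(S.$1,S.$2,S.$3) AS CHECKSUM_HASH
FROM TABLE_A S;
`;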

Convert two datetimes to timestamps and keep the larger one

I work with Oracle Database 19c and I would like to convert two datetimes (DD/MM/YY HH24:MM:SS) into timestamps and only keep the larger one.
I tried several scripts for the conversion, like this one:
SELECT
CODE_ACT_PROD,
LIB,
CAST (DAT_CRE AS TIMESTAMP) AS DATE_CRE_TIMESTAMP,
CAST (DAT_MOD AS TIMESTAMP) AS DATE_MOD_TIMESTAMP
FROM ACTI
WHERE CODE_ACT_PROD
IN (
SELECT CODE_ACT_PROD
FROM ART_COM
WHERE ETAT = 0
)
but the result is not what I want; the datetimes are not converted, and I don't know how to keep the larger one.
Use GREATEST:
SELECT CODE_ACT_PROD,
LIB,
CAST (DAT_CRE AS TIMESTAMP) AS DATE_CRE_TIMESTAMP,
CAST (DAT_MOD AS TIMESTAMP) AS DATE_MOD_TIMESTAMP,
CAST(GREATEST(DAT_CRE, DAT_MOD) AS TIMESTAMP) AS greatest_timestamp
FROM ACTI
WHERE CODE_ACT_PROD IN (
SELECT CODE_ACT_PROD
FROM ART_COM
WHERE ETAT = 0
)
Which, for the sample data:
CREATE TABLE acti (
code_act_prod INT,
lib INT,
dat_cre DATE,
dat_mod DATE
);
CREATE TABLE art_com (
code_act_prod INT,
etat INT
);
INSERT INTO acti (code_act_prod, lib, dat_cre, dat_mod)
SELECT 1, 2, SYSDATE - 1, SYSDATE FROM DUAL UNION ALL
SELECT 3, 4, TRUNC(SYSDATE), SYSDATE - 2 FROM DUAL;
INSERT INTO art_com (code_act_prod, etat)
SELECT 1, 0 FROM DUAL UNION ALL
SELECT 3, 0 FROM DUAL;
Outputs:
CODE_ACT_PROD | LIB | DATE_CRE_TIMESTAMP         | DATE_MOD_TIMESTAMP         | GREATEST_TIMESTAMP
1             | 2   | 2021-09-01 08:38:21.000000 | 2021-09-02 08:38:21.000000 | 2021-09-02 08:38:21.000000
3             | 4   | 2021-09-02 00:00:00.000000 | 2021-08-31 08:38:21.000000 | 2021-09-02 00:00:00.000000
Oracle does not have a datetime data type. It has DATE, which holds a day and a time to the second, and it has TIMESTAMP, which also holds a day and a time to the second, with optional fractional seconds and time zone. Converting a DATE to a TIMESTAMP would just add fractional seconds that are always 0. Neither the DATE nor the TIMESTAMP data type has a format; a VARCHAR2 would have a format. If the columns are DATE data types, your code is syntactically valid. I'm not sure how the results you are getting differ from the results you want, since you're not showing us your sample data or expected results, and you're not telling us what you mean when you say that something isn't converted.
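For example (a minimal sketch), the same DATE value can be rendered with whichever format mask you like; the format belongs to the string produced by TO_CHAR, not to the column:
-- two different string renderings of the same DATE value
SELECT TO_CHAR(SYSDATE, 'DD/MM/YY HH24:MI:SS') AS formatted_one_way,
       TO_CHAR(SYSDATE, 'YYYY-MM-DD HH24:MI:SS') AS formatted_another_way
FROM dual;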
Assuming the two columns are actually of type DATE, your code appears to be fine, and you just want to use the GREATEST function to get the latest date:
with cte as (
select sysdate dat_cr, sysdate + 1 dat_mod
from dual
)
select cast(dat_cr as timestamp) ts_cr,
cast(dat_mod as timestamp) ts_mod,
cast( greatest( dat_cr, dat_mod ) as timestamp ) ts_greatest
from cte;
TS_CR TS_MOD TS_GREATEST
02-SEP-21 08.25.38.000000 AM 03-SEP-21 08.25.38.000000 AM 03-SEP-21 08.25.38.000000 AM
Note that the conversion of the three timestamps to strings to be displayed to humans is controlled by your session's nls_timestamp_format.
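For instance, a sketch of changing that session parameter before running the query:
-- affects only how TIMESTAMP values are displayed in this session
ALTER SESSION SET NLS_TIMESTAMP_FORMAT = 'YYYY-MM-DD HH24:MI:SS.FF6';
SELECT CAST(SYSDATE AS TIMESTAMP) AS ts_now FROM dual;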
If you want to handle null dates by returning whichever date is not null, you can use a COALESCE and a CASE statement:
with cte as (
select sysdate dat_cr, sysdate + 1 dat_mod
from dual
union all
select null, sysdate from dual
union all
select sysdate, null from dual
)
select cast(dat_cr as timestamp) ts_cr,
cast(dat_mod as timestamp) ts_mod,
cast( case when dat_cr is null or dat_mod is null
then coalesce( dat_mod, dat_cr )
else greatest( dat_cr, dat_mod )
end
as timestamp ) ts_greatest
from cte;

How to find missing dates in BigQuery table using sql

How can I get a list of missing dates from a BigQuery table? For example, a table (test_table) is populated every day by some job, but on a few days the job fails and data isn't written into the table.
Use Case:
We have a table (test_table) which is populated every day by some job (a scheduled query or cloud function). Sometimes those jobs fail and data isn't available for those particular dates in the table.
How do we find those dates rather than scrolling through thousands of rows?
The query below will return a list of dates and ad_ids where data wasn't uploaded (null).
Note: I have used MIN(DATE)/MAX(DATE) because I knew the missing dates were in between my boundary dates. To be safe, you can also hard-code the starting_date and ending_date, in case data hasn't been populated in the last few days at all.
WITH Date_Range AS
-- anchor for date range
(
SELECT MIN(DATE) as starting_date,
MAX(DATE) AS ending_date
FROM `project_name.dataset_name.test_table`
),
day_series AS
-- anchor to get all the dates within the range
(
SELECT *
FROM Date_Range
,UNNEST(GENERATE_TIMESTAMP_ARRAY(starting_date, ending_date, INTERVAL 1 DAY)) AS days
-- other options depending on your date type ( mine was timestamp)
-- GENERATE_DATETIME_ARRAY or GENERATE_DATE_ARRAY
)
SELECT
day_series.days,
original_table.ad_id
FROM day_series
-- do a left join on the source table
LEFT JOIN `project_name.dataset_name.test_table` AS original_table ON (original_table.date)= day_series.days
-- I only want the records where data is not available or in other words empty/missing
WHERE original_table.ad_id IS NULL
GROUP BY 1,2
ORDER BY 1
The final output will be the list of dates (and ad_ids) for which no data was uploaded.
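As mentioned in the note above, if data may also be missing at the very start or end of the range, the Date_Range anchor can use hard-coded boundaries instead of MIN/MAX. A sketch (the dates are placeholders; the day_series CTE and the rest of the query stay the same):
WITH Date_Range AS
(
-- hard-coded boundaries (placeholder dates) instead of MIN/MAX over the table
SELECT TIMESTAMP('2021-01-01') AS starting_date,
TIMESTAMP('2021-12-31') AS ending_date
)
SELECT * FROM Date_Range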
An alternate solution: you can try the following query to get the desired output:
with t as (select 1 as id, cast ('2020-12-25' as timestamp) Days union all
select 1 as id, cast ('2020-12-26' as timestamp) Days union all
select 1 as id, cast ('2020-12-27' as timestamp) Days union all
select 1 as id, cast ('2020-12-31' as timestamp) Days union all
select 1 as id, cast ('2021-01-01' as timestamp) Days union all
select 1 as id, cast ('2021-01-04' as timestamp) Days)
SELECT *
FROM (
select TIMESTAMP_ADD(Days, INTERVAL 1 DAY) AS Days, TIMESTAMP_SUB(next_days, INTERVAL 1 DAY) AS next_days from (
select t.Days,
(case when lag(Days) over (partition by id order by Days) = Days
then NULL
when lag(Days) over (partition by id order by Days) is null
then Null
else Lead(Days) over (partition by id order by Days)
end) as next_days
from t) where next_days is not null
and Days <> TIMESTAMP_SUB(next_days, INTERVAL 1 DAY)),
UNNEST(GENERATE_TIMESTAMP_ARRAY(Days, next_days, INTERVAL 1 DAY)) AS days
The output will be the missing dates, here 2020-12-28 through 2020-12-30 and 2021-01-02 through 2021-01-03, in the unnested days column.
I used the code above but had to restructure it for BigQuery:
-- anchor for date range - this will select dates from the source table (i.e. the table your query runs off of)
WITH day_series AS(
SELECT *
FROM (
SELECT MIN(DATE) as starting_date,
MAX(DATE) AS ending_date
FROM --enter source table here--
---OPTIONAL: filter for a specific date range
WHERE DATE BETWEEN 'YYYY-MM-DD' AND 'YYYY-MM-DD'
),UNNEST(GENERATE_DATE_ARRAY(starting_date, ending_date, INTERVAL 1 DAY)) as days
-- other options depending on your date type ( mine was timestamp)
-- GENERATE_DATETIME_ARRAY or GENERATE_DATE_ARRAY
)
SELECT
day_series.days,
output_table.date
FROM day_series
-- do a left join on the output table (i.e. the table you are searching the missing dates for)
LEFT JOIN `project_name.dataset_name.test_table` AS output_table
ON (output_table.date)= day_series.days
-- I only want the records where data is not available or in other words empty/missing
WHERE output_table.date IS NULL
GROUP BY 1,2
ORDER BY 1

Redshift. Convert comma delimited values into rows

I am wondering how to convert comma-delimited values into rows in Redshift. I am afraid that my own solution isn't optimal. Please advise. I have a table where one of the columns contains comma-separated values. For example:
I have:
user_id|user_name|user_action
-----------------------------
1 | Shone | start,stop,cancell...
I would like to see
user_id|user_name|parsed_action
-------------------------------
1 | Shone | start
1 | Shone | stop
1 | Shone | cancell
....
A slight improvement over the existing answer is to use a second "numbers" table that enumerates all of the possible list lengths and then use a cross join to make the query more compact.
Redshift does not have a straightforward method for creating a numbers table that I am aware of, but we can use a bit of a hack from https://www.periscope.io/blog/generate-series-in-redshift-and-mysql.html to create one using row numbers.
Specifically, if we assume the number of rows in cmd_logs is larger than the maximum number of commas in the user_action column, we can create a numbers table by counting rows. To start, let's assume there are at most 99 commas in the user_action column:
select
(row_number() over (order by true))::int as n
into numbers
from cmd_logs
limit 100;
If we want to get fancy, we can compute the number of commas from the cmd_logs table to create a more precise set of rows in numbers:
select
n::int
into numbers
from
(select
row_number() over (order by true) as n
from cmd_logs)
cross join
(select
max(regexp_count(user_action, '[,]')) as max_num
from cmd_logs)
where
n <= max_num + 1;
Once there is a numbers table, we can do:
select
user_id,
user_name,
split_part(user_action,',',n) as parsed_action
from
cmd_logs
cross join
numbers
where
split_part(user_action,',',n) is not null
and split_part(user_action,',',n) != '';
Another idea is to transform your CSV string into JSON first, followed by JSON extract, along the following lines:
... '["' || replace( user_action, '.', '", "' ) || '"]' AS replaced
... JSON_EXTRACT_ARRAY_ELEMENT_TEXT(replaced, numbers.i) AS parsed_action
Where "numbers" is the table from the first answer. The advantage of this approach is the ability to use built-in JSON functionality.
If you know that there are not many actions in your user_action column, you can use recursive sub-querying with UNION ALL, thereby avoiding the auxiliary numbers table.
But it requires you to know the number of actions for each user, so either adjust the initial table or make a view or a temporary table for it.
Data preparation
Assuming you have something like this as a table:
create temporary table actions
(
user_id varchar,
user_name varchar,
user_action varchar
);
I'll insert some values in it:
insert into actions
values (1, 'Shone', 'start,stop,cancel'),
(2, 'Gregory', 'find,diagnose,taunt'),
(3, 'Robot', 'kill,destroy');
Here's an additional table with the temporary counts:
create temporary table actions_with_counts
(
id varchar,
name varchar,
num_actions integer,
actions varchar
);
insert into actions_with_counts (
select user_id,
user_name,
regexp_count(user_action, ',') + 1 as num_actions,
user_action
from actions
);
This would be our "input table", and it looks just as you expected:
select * from actions_with_counts;
id | name    | num_actions | actions
2  | Gregory | 3           | find,diagnose,taunt
3  | Robot   | 2           | kill,destroy
1  | Shone   | 3           | start,stop,cancel
Again, you can adjust the initial table and thereby skip adding the counts as a separate table (see the sketch after the output below).
Sub-query to flatten the actions
Here's the unnesting query:
with recursive tmp (user_id, user_name, idx, user_action) as
(
select id,
name,
1 as idx,
split_part(actions, ',', 1) as user_action
from actions_with_counts
union all
select user_id,
user_name,
idx + 1 as idx,
split_part(actions, ',', idx + 1)
from actions_with_counts
join tmp on actions_with_counts.id = tmp.user_id
where idx < num_actions
)
select user_id, user_name, user_action as parsed_action
from tmp
order by user_id;
This will create a new row for each action, and the output would look like this:
user_id | user_name | parsed_action
1       | Shone     | start
1       | Shone     | stop
1       | Shone     | cancel
2       | Gregory   | find
2       | Gregory   | diagnose
2       | Gregory   | taunt
3       | Robot     | kill
3       | Robot     | destroy
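As noted above, a sketch of the same recursion against the original actions table, computing the count inline and carrying it through the recursion instead of materializing actions_with_counts:
with recursive tmp (user_id, user_name, idx, num_actions, user_action) as
(
-- anchor member: first action, plus the total count carried along
select user_id,
user_name,
1 as idx,
regexp_count(user_action, ',') + 1 as num_actions,
split_part(user_action, ',', 1) as user_action
from actions
union all
-- recursive member: next action for each user until the count is reached
select a.user_id,
a.user_name,
tmp.idx + 1,
tmp.num_actions,
split_part(a.user_action, ',', tmp.idx + 1)
from actions a
join tmp on a.user_id = tmp.user_id
where tmp.idx < tmp.num_actions
)
select user_id, user_name, user_action as parsed_action
from tmp
order by user_id;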
Here are two ways to achieve this.
In my example, I'm assuming that I am accepting a comma-separated list of values. My values look like schema.table.column.
The first involves using a recursive CTE.
drop table if exists #dep_tbl;
create table #dep_tbl as
select 'schema.foobar.insert_ts,schema.baz.load_ts' as dep
;
with recursive tmp (level, dep_split, to_split) as
(
select 1 as level
, split_part(dep, ',', 1) as dep_split
, regexp_count(dep, ',') as to_split
from #dep_tbl
union all
select tmp.level + 1 as level
, split_part(a.dep, ',', tmp.level + 1) as dep_split_u
, tmp.to_split
from #dep_tbl a
inner join tmp on tmp.dep_split is not null
and tmp.level <= tmp.to_split
)
select dep_split from tmp;
the above yields:
|dep_split|
|schema.foobar.insert_ts|
|schema.baz.load_ts|
The second involves a stored procedure.
CREATE OR REPLACE PROCEDURE so_test(dependencies_csv varchar(max))
LANGUAGE plpgsql
AS $$
DECLARE
dependencies_csv_vals varchar(max);
BEGIN
drop table if exists #dep_holder;
create table #dep_holder
(
avoid varchar(60000)
);
IF dependencies_csv is not null THEN
dependencies_csv_vals:='('||replace(quote_literal(regexp_replace(dependencies_csv,'\\s','')),',', '\'),(\'') ||')';
execute 'insert into #dep_holder values '||dependencies_csv_vals||';';
END IF;
END;
$$
;
call so_test('schema.foobar.insert_ts,schema.baz.load_ts')
select
*
from
#dep_holder;
the above yields:
|dep_split|
|schema.foobar.insert_ts|
|schema.baz.load_ts|
In conclusion:
If you only care about one single column in your input (the X-delimited values), then I think the stored procedure is easier/faster.
However, if you have other columns you care about and want to keep those columns along with your comma-separated-value column now transformed into rows, OR if you want to know the argument (the original list of delimited values), I think the recursive query is the way to go. In that case, you can just add those other columns to the columns selected in the recursive query.
You can get the expected result with the following query. I'm using "UNION ALL" to convert a column into rows.
select user_id, user_name, split_part(user_action,',',1) as parsed_action from cmd_logs
union all
select user_id, user_name, split_part(user_action,',',2) as parsed_action from cmd_logs
union all
select user_id, user_name, split_part(user_action,',',3) as parsed_action from cmd_logs
Here's my equally-terrible answer.
I have a users table, and then an events table with a column that is just a comma-delimited string of users at said event, e.g.
event_id | user_ids
1 | 5,18,25,99,105
In this case, I used the LIKE and wildcard functions to build a new table that represents each event-user edge.
SELECT e.event_id, u.id as user_id
FROM events e
LEFT JOIN users u ON e.user_ids like '%' || u.id || '%'
It's not pretty, but I throw it in a WITH clause so that I don't have to run it more than once per query. I'll likely just build an ETL to create that table every night anyway.
Also, this only works if you have a second table that does have one row per unique possibility. If not, you could do LISTAGG to get a single cell with all your values, export that to a CSV and reupload that as a table to help.
Like I said: a terrible, no-good solution.
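For the LISTAGG idea, a minimal sketch against the users table from above:
-- one cell with every distinct user id, comma-delimited; export it to CSV and re-upload as a lookup table
select listagg(distinct id::varchar, ',') as all_user_ids
from users;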
Late to the party, but I got something working (albeit very slowly):
with nums as (select n::int n
from
(select
row_number() over (order by true) as n
from table_with_enough_rows_to_cover_range)
cross join
(select
max(json_array_length(json_column)) as max_num
from table_with_json_column )
where
n <= max_num + 1)
select *, json_extract_array_element_text(json_column,nums.n-1) parsed_json
from nums, table_with_json_column
where json_extract_array_element_text(json_column,nums.n-1) != ''
and nums.n <= json_array_length(json_column)
Thanks to the answer by Bob Baxley for the inspiration.
Just an improvement on the answer above (https://stackoverflow.com/a/31998832/1265306): generate the numbers table using the following SQL, taken from https://discourse.looker.com/t/generating-a-numbers-table-in-mysql-and-redshift/482:
SELECT
p0.n
+ p1.n*2
+ p2.n * POWER(2,2)
+ p3.n * POWER(2,3)
+ p4.n * POWER(2,4)
+ p5.n * POWER(2,5)
+ p6.n * POWER(2,6)
+ p7.n * POWER(2,7)
as number
INTO numbers
FROM
(SELECT 0 as n UNION SELECT 1) p0,
(SELECT 0 as n UNION SELECT 1) p1,
(SELECT 0 as n UNION SELECT 1) p2,
(SELECT 0 as n UNION SELECT 1) p3,
(SELECT 0 as n UNION SELECT 1) p4,
(SELECT 0 as n UNION SELECT 1) p5,
(SELECT 0 as n UNION SELECT 1) p6,
(SELECT 0 as n UNION SELECT 1) p7
ORDER BY 1
LIMIT 100
"ORDER BY" is there only in case you want paste it without the INTO clause and see the results
Create a stored procedure that will parse the string dynamically and populate a temp table, then select from the temp table.
Here is the magic code:
CREATE OR REPLACE PROCEDURE public.sp_string_split( "string" character varying )
AS $$
DECLARE
cnt INTEGER := 1;
no_of_parts INTEGER := (select REGEXP_COUNT ( string , ',' ));
sql VARCHAR(MAX) := '';
item character varying := '';
BEGIN
-- Create table
sql := 'CREATE TEMPORARY TABLE IF NOT EXISTS split_table (part VARCHAR(255)) ';
RAISE NOTICE 'executing sql %', sql ;
EXECUTE sql;
<<simple_loop_exit_continue>>
LOOP
item = (select split_part("string",',',cnt));
RAISE NOTICE 'item %', item ;
sql := 'INSERT INTO split_table SELECT '''||item||''' ';
EXECUTE sql;
cnt = cnt + 1;
EXIT simple_loop_exit_continue WHEN (cnt >= no_of_parts + 2);
END LOOP;
END ;
$$ LANGUAGE plpgsql;
Usage example:
call public.sp_string_split('john,smith,jones');
select *
from split_table
You can try the COPY command to copy your file into Redshift tables:
copy table_name from 's3://mybucket/myfolder/my.csv' CREDENTIALS 'aws_access_key_id=my_aws_acc_key;aws_secret_access_key=my_aws_sec_key' delimiter ','
You can use the delimiter ',' option.
For more details on COPY command options, you can visit this page:
http://docs.aws.amazon.com/redshift/latest/dg/r_COPY.html

Regular expression on Dates in Oracle

I have date formats in all the possible permutations: MM/DD/YYYY, M/D/YYYY, MM/D/YYYY, M/DD/YYYY.
Now I need to write a regular expression in Oracle DB to fetch the different date formats from one column as is.
Try this one:
with t(date_col) as (
select '01/01/2014' from dual
union all
select '1/2/2014' from dual
union all
select '01/3/2014' from dual
union all
select '1/04/2014' from dual
union all
select '11/1/14' from dual)
select date_col,
case
when regexp_instr(date_col, '^\d/\d/\d{4}$') = 1 then
'd/m/yyyy'
when regexp_instr(date_col, '^\d{2}/\d/\d{4}$') = 1 then
'dd/m/yyyy'
when regexp_instr(date_col, '^\d/\d{2}/\d{4}$') = 1 then
'd/mm/yyyy'
when regexp_instr(date_col, '^\d{2}/\d{2}/\d{4}$') = 1 then
'dd/mm/yyyy'
else
'Unknown format'
end date_format
from t;
DATE_COL DATE_FORMAT
---------- --------------
01/01/2014 dd/mm/yyyy
1/2/2014 d/m/yyyy
01/3/2014 dd/m/yyyy
1/04/2014 d/mm/yyyy
11/1/14 Unknown format
I am not sure what your goal is, but since the month always comes first, followed by the day, you can use the following expression to get a date regardless of the input format:
select to_date( column, 'mm/dd/yyyy') from ...
You can select all records for which the following is true:
where [column_value] != to_char(to_date([column_value],'MM/DD/YYYY'),'MM/DD/YYYY')
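For example, a sketch with a hypothetical table t and column date_col; it flags the values that are not already in zero-padded MM/DD/YYYY form:
-- rows like '1/2/2014' or '11/1/14' are selected; '01/01/2014' is not
SELECT date_col
FROM t
WHERE date_col != TO_CHAR(TO_DATE(date_col, 'MM/DD/YYYY'), 'MM/DD/YYYY');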