Informatica time error, source and target data not matching - informatica

One of my source table (Oracle) date columns has the value 5/3/2013 6:00:51.134000000 AM. I am trying to pull the same into my target (Oracle), but the target converts the microseconds to zeros and loads the value 5/3/2013 06.00.51.000000000 AM. Both my source and target columns are declared as timestamp. I have set the date format to MM/DD/YYYY HH24:MI:SS.US in the session properties.
Can anyone help me get the date with microseconds? I am using Informatica 10.2.0. Thanks.

You can try the workaround suggested at the link below to process high-precision dates. You will need to modify the source and target definition field lengths to (29,9).
Link

The way to resolve this is to increase the precision of the source or target definition to 29 with a scale of 9 after the source/target is imported into Informatica. This will preserve the millisecond digits instead of converting them all to zeros.

Related

extract local time in a specific format using boost library

I am trying to extract the local time, including the time zone, and format it with millisecond precision, similar to the following:
2019-11-06T00:39:21.25+02:00
Any idea how to do this?

AWS Forecast cannot train the predictor due to missing data

This question is close, but doesn't quite help me with a similar issue as I am using a single data set and no related time series.
I am using AWS Forecast with a single time series dataset (no related data, just the main DS). It is a daily data set with about 10 years of data ranging from 2010-2020.
I have 3572 data points in the original data set; I manually filled missing data to ensure there were no missing days in the date range, for a total of 3739 data points. I lopped off everything in 2020 to create a validation dataset and then configured the predictor for a 180-day forecast. I keep getting the following error:
Unable to evaluate this dataset because there is missing data in the evaluation window for all items. Ensure that there is complete data for at least one item in the evaluation window starting from 2019-03-07T00:00:00 up to 2020-01-01T00:00.
There is definitely no missing data. I've double and triple checked the date range and data fill, and every day between the start and end dates has a data point. I also tried adding a data point for 1/1/2020 (it ended at 12/31/2019) and I continue to get this error. I can't figure out what it's asking me for, except that maybe I'm missing something in my math about the forecast horizon and backtest window offset?
Dataset example:
Brief model parameters (can share more if I'm missing something pertinent):
Total data points in training data: 3479
forecastHorizon = 180
create_predictor_response = forecast.create_predictor(
    PredictorName=predictorName,
    ForecastHorizon=forecastHorizon,
    PerformAutoML=True,
    PerformHPO=False,
    EvaluationParameters={"NumberOfBacktestWindows": 1,
                          "BackTestWindowOffset": 180},
    InputDataConfig={"DatasetGroupArn": datasetGroupArn},
    FeaturizationConfig={"ForecastFrequency": 'D'})
I noticed you don't have an entry for 6/24/10 (this American date format is the worst, btw).
I faced a similar problem when leaving out days just like that (assuming you're modelling at daily frequency) and letting Forecast's automatic gap filling set them to nan values (as opposed to zero, which is the default). I suggest you:
pre-fill literally every date within the range of the training data (and of the forecast window, if using related data)
choose zero as the option for automatically filling missing values; I think mean or any other float value would also work, for that matter (see the sketch below)
Let me know if that works! I am also using Forecast and it's good to keep track of possible problems and solutions.
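For reference, here is roughly what the zero-filling option looks like if the predictor is created through boto3 rather than the console. Treat it as a minimal sketch based on the create_predictor call in the question: the attribute name "target_value" assumes the default target field of the TARGET_TIME_SERIES schema, and the predictor name and dataset group ARN are placeholders.

import boto3  # assumes AWS credentials and region are already configured

forecast = boto3.client("forecast")

predictorName = "daily_autoML_predictor"          # hypothetical name
forecastHorizon = 180
datasetGroupArn = "arn:aws:forecast:..."          # your dataset group ARN

create_predictor_response = forecast.create_predictor(
    PredictorName=predictorName,
    ForecastHorizon=forecastHorizon,
    PerformAutoML=True,
    PerformHPO=False,
    EvaluationParameters={"NumberOfBacktestWindows": 1,
                          "BackTestWindowOffset": 180},
    InputDataConfig={"DatasetGroupArn": datasetGroupArn},
    FeaturizationConfig={
        "ForecastFrequency": "D",
        "Featurizations": [{
            "AttributeName": "target_value",      # the target field in your schema
            "FeaturizationPipeline": [{
                "FeaturizationMethodName": "filling",
                "FeaturizationMethodParameters": {
                    "frontfill": "none",          # leading gaps cannot be filled
                    "middlefill": "zero",         # fill gaps inside the series with 0
                    "backfill": "zero"            # fill trailing gaps with 0
                }
            }]
        }]
    })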

AWS IoT Analytics Delta Window

I am having real problems getting the AWS IoT Analytics Delta Window (docs) to work.
I am trying to set it up so that every day a query is run to get the last 1 hour of data only. According to the docs the schedule feature can be used to run the query using a cron expression (in my case every hour) and the delta window should restrict my query to only include records that are in the specified time window (in my case the last hour).
The SQL query I am running is simply SELECT * FROM dev_iot_analytics_datastore and if I don't include any delta window I get the records as expected. Unfortunately when I include a delta expression I get nothing (ever). I have left the data accumulating for about 10 days now, so there are a couple of million records in the database. Given that I was unsure what the optimal format would be I have included the following temporal fields in the entries:
datetime : 2019-05-15T01:29:26.509
(A string formatted using ISO Local Date Time)
timestamp_sec : 1557883766
(A unix epoch expressed in seconds)
timestamp_milli : 1557883766509
(A unix epoch expressed in milliseconds)
There is also a value automatically added by AWS called __dt which uses the same format as my datetime, except it seems to be accurate only to within 1 day, i.e. all values entered within a given day have the same value (e.g. 2019-05-15 00:00:00.00).
I have tried a range of expressions (including the suggested AWS expression) from both standard SQL and Presto, as I'm not sure which one is being used for this query. I know they use a subset of Presto for the analytics, so it makes sense that they would use it for the delta, but the docs simply say '... any valid SQL expression'.
Expressions I have tried so far with no luck:
from_unixtime(timestamp_sec)
from_unixtime(timestamp_milli)
cast(from_unixtime(unixtime_sec) as date)
cast(from_unixtime(unixtime_milli) as date)
date_format(from_unixtime(timestamp_sec), '%Y-%m-%dT%h:%i:%s')
date_format(from_unixtime(timestamp_milli), '%Y-%m-%dT%h:%i:%s')
from_iso8601_timestamp(datetime)
What are the offset and time expression parameters that you are using?
Since delta windows are effectively filters inserted into your SQL, you can troubleshoot them by manually inserting the filter expression into your data set's query.
Namely, applying a delta window filter with -3 minute (negative) offset and 'from_unixtime(my_timestamp)' time expression to a 'SELECT my_field FROM my_datastore' query translates to an equivalent query:
SELECT my_field FROM
(SELECT * FROM "my_datastore" WHERE
(__dt between date_trunc('day', iota_latest_succeeded_schedule_time() - interval '1' day)
and date_trunc('day', iota_current_schedule_time() + interval '1' day)) AND
iota_latest_succeeded_schedule_time() - interval '3' minute < from_unixtime(my_timestamp) AND
from_unixtime(my_timestamp) <= iota_current_schedule_time() - interval '3' minute)
Try using a similar query (with no delta time filter) with the correct values for offset and time expression and see what you get. The (__dt between ...) clause is just an optimization for limiting the scanned partitions; you can remove it for the purposes of troubleshooting.
Please try the following (a boto3 sketch of the same configuration follows this list):
Set query to SELECT * FROM dev_iot_analytics_datastore
Data selection filter:
Data selection window: Delta time
Offset: -1 Hours
Timestamp expression: from_unixtime(timestamp_sec)
Wait for dataset content to run for a bit, say 15 minutes or more.
Check contents
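If you prefer creating the dataset with boto3 instead of the console, the same settings look roughly like the sketch below. The dataset and action names are hypothetical; the -3600 second offset corresponds to the -1 hour suggestion above, and the cron trigger runs the query at the top of every hour.

import boto3  # assumes AWS credentials and region are already configured

iota = boto3.client("iotanalytics")

response = iota.create_dataset(
    datasetName="dev_iot_analytics_dataset",       # hypothetical dataset name
    actions=[{
        "actionName": "hourly_delta_query",        # hypothetical action name
        "queryAction": {
            "sqlQuery": "SELECT * FROM dev_iot_analytics_datastore",
            "filters": [{
                "deltaTime": {
                    "offsetSeconds": -3600,        # -1 hour offset
                    "timeExpression": "from_unixtime(timestamp_sec)"
                }
            }]
        }
    }],
    triggers=[{
        "schedule": {"expression": "cron(0 * * * ? *)"}   # run at the top of every hour
    }]
)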
After several weeks of testing and trying all the suggestions in this post, along with many more, it appears that the extremely technical answer was to 'switch it off and back on'. I deleted the whole analytics stack and rebuilt everything with different names, and it now seems to be working!
It's important to note that even though I have flagged this as the correct answer because it was the actual resolution, both the answers provided by #Populus and #Roger would have been correct had my deployment been functioning as expected.
I found by chance that changing SELECT * FROM datastore to SELECT id1, id2, ... FROM datastore solved the problem.

What is the best practice for storing date and time in a class/object?

I'm going to connect to PostgreSQL, and I need to store date/time in my object to pass to the queries for inserting into and updating some tables.
But there is no clear way in C++ to store and retrieve date/time.
Any comments?
PostgreSQL time format (version 9+): https://www.postgresql.org/docs/9.1/datatype-datetime.html
Timestamps are stored as an 8-byte (64-bit) integer with microsecond precision, UTC without a time zone (from the top of the table in the docs).
When you create a table you can either timestamp the record with PostgreSQL's current_timestamp, or insert into the table an integer in 64-bit microsecond format. Since PostgreSQL has multiple time formats, you should decide which format you want to read back from the table.
PostgreSQL approach (CREATE, INSERT, RETRIEVE):
"CREATE TABLE example_table(update_time_column TIMESTAMP DEFAULT CURRENT_TIMESTAMP)"
"INSERT INTO example_table(update_time_column) VALUES (CURRENT_TIMESTAMP)"
"SELECT (EXTRACT(epoch FROM update_time_column)*1000000) FROM example_table"
C++ approach:
int64_t cppTime = std::chrono::duration_cast<std::chrono::microseconds>(
                      std::chrono::system_clock::now().time_since_epoch()).count();  // microseconds since the Unix epoch, needs <chrono> and <cstdint>
Something similar to this answer: Getting an accurate execution time in C++ (micro seconds)
Then push your object/record to PostgreSQL; when you retrieve the value in microseconds, adjust the precision as needed (/1000 for milliseconds, etc.).
Just don't forget to keep the PostgreSQL and C++ timestamp widths in sync (e.g. 8 bytes on each side); otherwise the time will be truncated on one side and you will lose precision or get unexpected times.
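The thread is about C++, but the SQL round trip is identical from any client. Purely as an illustration of the statements above, here is a sketch in Python with psycopg2 (the connection details are hypothetical); the value read back is a 64-bit microsecond count matching the int64_t used on the C++ side.

import psycopg2  # assumes a reachable PostgreSQL instance and the psycopg2 package

conn = psycopg2.connect(dbname="example_db", user="example_user")  # hypothetical connection details
cur = conn.cursor()

# Timestamp column defaults to the server-side CURRENT_TIMESTAMP
cur.execute("CREATE TABLE IF NOT EXISTS example_table("
            "update_time_column TIMESTAMP DEFAULT CURRENT_TIMESTAMP)")

# Let the column default supply the timestamp
cur.execute("INSERT INTO example_table DEFAULT VALUES")

# Read the value back as microseconds since the Unix epoch (a 64-bit integer)
cur.execute("SELECT (EXTRACT(epoch FROM update_time_column) * 1000000)::bigint "
            "FROM example_table")
micros = cur.fetchone()[0]

conn.commit()
cur.close()
conn.close()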

Informatica: value larger than specified precision allowed for this column

I tried to load a table ADuplicate, which is a duplicate of table A, using a direct one-to-one mapping in Informatica.
But I got the following error:
"Value larger than specified precision allowed for this column"
I noticed that the C4 column, which is number(15) in both tables, has the problem while loading.
The data which errors out during loading is 200000300123 and -1000000000000000000000000000000000000000000
My doubts are:
This value is available in the source with the same precision. Why doesn't it get into the target?
I changed the target column C4 to just a Number field and I could insert this value manually using TOAD, but why couldn't I do the same using Informatica?
Please help me out.
Thanks in advance
Shanmugam
Do you have some transformation between source and target that sets a different precision for this port? Especially the one before the target?
The data written to the target has higher precision - possibly set higher in some transformation(s) in the middle. You may test with an expression transformation in the middle to reduce the precision.
Try checking "Enable high precision", which is available in the "Properties" tab of the session properties.