Splice Machine: TIMESTAMPADD returns a value that is 1 hour off 10-15% of the time

I'm running into a weird bug when executing TIMESTAMPADD queries: the result is not always accurate.
Example #1 (incorrect):
TIMESTAMPADD(SQL_TSI_SECOND, 1214870399, TIMESTAMP('1970-01-01 00:00:00.000Z'))
Returns: 2008-07-01 00:59:59.0
It should be: 2008-06-30 23:59:59.0
Example #2 (correct):
TIMESTAMPADD(SQL_TSI_SECOND, 1167609600, TIMESTAMP('1970-01-01 00:00:00.000Z'))
Returns: 2007-01-01 00:00:00.0 which is correct.
It happens with roughly 10-15% of my queries (I do a lot of unixtime-to-timestamp conversion when querying my tables). The result is always off by exactly 1 hour.
Thanks
Edit with additional information:
Other example unixtimes that convert incorrectly:
1270508410 to 2010-04-06 00:00:10.0 which should be 2010-04-05 23:00:10.0
1304722810 to 2011-05-07 00:00:10.0 which should be 2011-05-06 23:00:10.0
1340221507 to 2012-06-20 20:45:07.0 which should be 2012-06-20 19:45:07.0
This last one is just to show it's not limited to timestamps near midnight.
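For reference, the expected values above are plain UTC conversions of the epoch seconds; here is a minimal C++ sketch to cross-check them outside the database (gmtime_r is POSIX; any UTC-based converter gives the same output):

#include <ctime>
#include <cstdio>

int main()
{
    // Epoch values from the examples above; gmtime_r converts in UTC,
    // so the output should match the "should be" timestamps exactly.
    const std::time_t epochs[] = {1214870399, 1270508410, 1304722810, 1340221507};
    for (std::time_t t : epochs) {
        std::tm tm{};
        gmtime_r(&t, &tm);  // POSIX; use gmtime_s on Windows
        char buf[32];
        std::strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S", &tm);
        std::printf("%lld -> %s\n", static_cast<long long>(t), buf);
    }
    return 0;
}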

It turns out Splice Machine has an open issue about this problem.
For reference when reaching out to Splice Machine support: Ticket number DB-4937

Related

How to fix the epoch not increasing in the boardroom (in a fork of tomb.finance)?

I need your help to fix this problem.
I made http://mercuryfinance.io/ by forking tomb.finance.
In the boardroom, I can't get the current epoch to increase from 0 to 1.
The relevant addresses are as follows:
Masonry address: 0x3a22E957d70091F9F559F7390f972519C61fC4f9
Treasury address: 0x438FAB186911eDa3b340e260DFD26448861f7162
I tried to call allocateSeigniorage() by clicking the "Write" button on https://ftmscan.com/address/0x438FAB186911eDa3b340e260DFD26448861f7162#writeContract.
But it failed: MetaMask shows the error "This transaction is expected to fail. Trying to execute it is expected to be expensive but fail, and is not recommended."
So how can I fix this problem?
If you have experience with tomb.finance, I hope you can help me.
Thanks.

Number of seconds between two dates including leap seconds

I'm fiddling around with time representation in C++.
I would like to have a strictly monotonic representation of time that handles leap seconds well. The utc_clock in C++20 should be able to do that, but since my compiler doesn't support this version yet, I'm using HowardHinnant/date.
To understand the library better I have started making small test cases, but got stuck on one.
I take two dates, one before and one after the insertion of a leap second, and check that the duration between those two dates actually includes the extra second.
This is the test case:
TEST(DateTime, TimeLeap)
{
    using namespace std::chrono;
    using namespace date;
    // Two dates with a leap second in between
    // https://en.wikipedia.org/wiki/Leap_second
    auto t1 = clock_cast<utc_clock>(static_cast<sys_days>(2016_y/December/31));
    auto t2 = clock_cast<utc_clock>(static_cast<sys_days>(2017_y/January/1));
    EXPECT_EQ(duration_cast<seconds>(t2 - t1).count(), 24 * 3600 + 1);
}
but it fails for me:
common/tests/datetime.cpp:39: Failure
Expected: duration_cast<seconds>(t2 - t1).count()
Which is: 86400
To be equal to: 24 * 3600 + 1
Which is: 86401
It seems that the conversion between sys_clock and utc_clock doesn't add the leap second.
Suspecting that the problem is the resolution of sys_days, I've also tried doing a time_point_cast<seconds>(...) before the clock_cast<utc_clock>, but the result didn't change.
I've also tried using 2017-01-02 as the second date, in case there was an issue with the distinction between 2016-12-31 23:59:60 and 2017-01-01 00:00 -- the leap second didn't appear there either.
It looks like you're using the OS-supplied timezone database (USE_OS_TZDB=1), and that the leap seconds aren't being read. This can be confirmed with:
cout << get_tzdb().leap_seconds.size() << '\n';
This should output 27 (currently), but for you I imagine it is outputting 0. This means leapsecond data is missing.
With a recent (2020-09-11) commit: https://github.com/HowardHinnant/date/commit/ba99134b8a7c4a6e7d28d738a0234a85dc6bd827, the leapsecond data is read from either one of these files:
zoneinfo/leapseconds
zoneinfo/leap-seconds.list
Both of these files are IANA-supplied, but have slightly different formats. Either file will do, as they contain duplicate information; tz.cpp will search for both. If your platform doesn't ship either of these files, you can download one from the IANA data download and copy it into place manually.
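Once one of those files is in place, a minimal standalone check (assuming the same setup as the question: HowardHinnant/date built with USE_OS_TZDB=1) should print 27 and then 86401:

#include <chrono>
#include <iostream>
#include "date/tz.h"

int main()
{
    using namespace std::chrono;
    using namespace date;

    // Should print 27 (currently); 0 means the leapsecond file is still missing.
    std::cout << get_tzdb().leap_seconds.size() << '\n';

    // The same computation as the failing test: expect 86401, not 86400.
    auto t1 = clock_cast<utc_clock>(static_cast<sys_days>(2016_y/December/31));
    auto t2 = clock_cast<utc_clock>(static_cast<sys_days>(2017_y/January/1));
    std::cout << duration_cast<seconds>(t2 - t1).count() << '\n';
    return 0;
}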

What really are the options of the "read_format" attribute of the "perf_event_attr" structure?

I'm currently using the perf_event_open syscall (on Linux systems), and I'm trying to understand a configuration parameter of this syscall, which is given in the struct perf_event_attr structure.
It's about the read_format option.
As anyone can see on the man page of this syscall, this parameter is related to the output of the call.
But I don't understand what each possible value does.
Especially these two possibilities:
PERF_FORMAT_TOTAL_TIME_ENABLED
PERF_FORMAT_TOTAL_TIME_RUNNING
Can anyone with that information give me a straight answer?
Ok.
I've looked a little further, and I think I have found an answer.
PERF_FORMAT_TOTAL_TIME_ENABLED: It seems that the "enabled time" refers to the difference between the time at which the event stops being observed and the time at which the event is registered as "to be observed".
PERF_FORMAT_TOTAL_TIME_RUNNING: It seems that the "running time" refers to the sum of the time the event is actually observed by the kernel. It is smaller than or equal to the enabled time.
For example:
You tell your kernel at 1:13:05 PM that you want to observe event X. The kernel creates a "probe" on X and starts recording the activity.
Then, for whatever reason, you pause the recording at 1:14:05 PM.
You resume the recording at 1:15:05 PM.
Finally, you stop the recording at 1:15:35 PM.
You have 00:02:30 of enabled time (1:15:35 PM - 1:13:05 PM = 00:02:30)
and 00:01:30 of running time ((1:14:05 PM - 1:13:05 PM) + (1:15:35 PM - 1:15:05 PM) = 00:01:30).
The read_format attribute can combine both values using a bitmask. In C++, it looks like this:
event_configuration.read_format = PERF_FORMAT_TOTAL_TIME_ENABLED | PERF_FORMAT_TOTAL_TIME_RUNNING;
where event_configuration is an instance of struct perf_event_attr.
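For completeness, here is a small self-contained sketch following the perf_event_open man page (the hardware event and the busy loop are arbitrary choices, not from the original question); it reads both times back and scales the raw count when the event was multiplexed:

#include <linux/perf_event.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <cstdint>
#include <cstdio>
#include <cstring>

int main()
{
    struct perf_event_attr event_configuration;
    std::memset(&event_configuration, 0, sizeof(event_configuration));
    event_configuration.size = sizeof(event_configuration);
    event_configuration.type = PERF_TYPE_HARDWARE;
    event_configuration.config = PERF_COUNT_HW_INSTRUCTIONS;
    event_configuration.disabled = 1;
    event_configuration.exclude_kernel = 1;
    event_configuration.read_format =
        PERF_FORMAT_TOTAL_TIME_ENABLED | PERF_FORMAT_TOTAL_TIME_RUNNING;

    // Measure the calling process on any CPU; there is no glibc wrapper.
    int fd = static_cast<int>(
        syscall(__NR_perf_event_open, &event_configuration, 0, -1, -1, 0));
    if (fd == -1) { std::perror("perf_event_open"); return 1; }

    ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
    for (volatile long i = 0; i < 1000000; ++i) {}  // arbitrary workload
    ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

    // With these two flags (and no PERF_FORMAT_GROUP), read() returns three
    // u64 values: the raw count, the enabled time and the running time,
    // the times being in nanoseconds.
    struct { std::uint64_t value, time_enabled, time_running; } result;
    if (read(fd, &result, sizeof(result)) != sizeof(result)) {
        std::perror("read");
        return 1;
    }

    // If the PMU was multiplexed, running < enabled; extrapolate the count.
    double scaled = result.time_running
        ? static_cast<double>(result.value) * result.time_enabled / result.time_running
        : 0.0;
    std::printf("count=%llu enabled=%llu ns running=%llu ns scaled=%.0f\n",
                (unsigned long long)result.value,
                (unsigned long long)result.time_enabled,
                (unsigned long long)result.time_running,
                scaled);
    close(fd);
    return 0;
}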

Score-P callpath depth limitation of 30 exceeded

I am profiling a code with Scalasca 2.0 that uses some recursion.
When I run the analyzer with scalasca -analyze myexec, it does not raise any error until the end, where it says:
Score-P callpath depth limitation of 30 exceeded.
Reached callpath depth was 34
At this point, the scalasca results are corrupted and I cannot run cube over the produced output files.
I know for sure that the recursion depth won't be greater than 34.
I have read that there is a variable taking into account the number of "measured call-paths" (see https://www.dkrz.de/Nutzerportal-en/doku/blizzard/program-analysis/profiling), so I also tried running scalasca with export ESD_FRAMES=40, but scalasca still says the limit is 30.
So, is there a way to raise this Scalasca limit to a higher value?
I'm writing this answer 2 months after you posted the question, so chances are you have already found a solution.
In Score-P 1.4+ it can be fixed with:
export SCOREP_PROFILING_MAX_CALLPATH_DEPTH=128
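For example, set it before rerunning the analysis (reusing the command from the question):
export SCOREP_PROFILING_MAX_CALLPATH_DEPTH=128
scalasca -analyze myexec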

measuring concurrent loop times in erlang

I create a ring of processes in Erlang and wish to measure the time it takes for the first message to pass through the network, as well as the time for the entire message series; each time the first node gets the message back, it sends another one.
Right now, in the first node, I have the following code:
receive
    stop ->
        io:format("all processes stopped!~n"),
        true;
    start ->
        statistics(runtime),
        Son ! {number, 1},
        msg(PID, Son, M, 1);
    {_, M} ->
        {Time1, _} = statistics(runtime),
        io:format("The last message has arrived after ~p! ~n", [Time1*1000]),
        Son ! stop;
Of course, I start the statistics when sending the first message.
As you can see, I use the Time_Since_Last_Call for the first message loop and wish to use the Total_Run_Time for the entire run; the problem is that Total_Run_Time is cumulative from the first time I start the statistics.
The second thought I had in mind was using another process with 2 receive loops, getting the times for each one, adding them, and printing, but I'm sure that Erlang can do better than this.
I guess the best method to solve this is to somehow flush the Total_Run_Time, but I couldn't find how this could be done. Any ideas how this can be tackled?
One way to measure round-trip times would be to send a timestamp along with each message. When the first node receives the message back, it can measure the round-trip time by subtracting that timestamp from the current time.
To calculate the total run time, I would memorize the first timestamp in the process state (or the process dictionary) and compute the difference when stopping the test.
Besides, given that you mention the network, are you sure that CPU time (which is what statistics(runtime) calculates) is what you're after? Perhaps wall clock time would be more appropriate.
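A minimal sketch of that idea (the Son variable and the {number, N} message shape mirror the question's code; erlang:monotonic_time/1 requires OTP 18 or later, otherwise os:timestamp/0 with timer:now_diff/2 works the same way):

%% First node: remember the start time in the loop state and stamp
%% every message it sends with the current monotonic time.
first_node(Son, Rounds) ->
    Start = erlang:monotonic_time(millisecond),
    Son ! {number, 1, Start},           % stamp the first message
    loop(Son, Rounds, Start).

loop(Son, Rounds, Start) ->
    receive
        {number, N, SentAt} when N < Rounds ->
            Now = erlang:monotonic_time(millisecond),
            io:format("Round trip ~p: ~p ms~n", [N, Now - SentAt]),
            Son ! {number, N + 1, Now}, % stamp the next message
            loop(Son, Rounds, Start);
        {number, Rounds, SentAt} ->     % last round: Rounds is already bound
            Now = erlang:monotonic_time(millisecond),
            io:format("Round trip ~p: ~p ms~n", [Rounds, Now - SentAt]),
            io:format("Total run time: ~p ms~n", [Now - Start]),
            Son ! stop
    end.

Carrying the timestamp inside the message keeps the measurement local to the first node, so no state needs to be flushed and no clock has to be shared between processes.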