I see that dt is often used in function arguments, e.g., CCScheduler.update(dt).
Does anyone know what it's supposed to represent?
Thanks
Ah yes. Some programmers' favorite pastime: abbreviating every variable until only they understand the meaning through conditioned memory.
dt stands for "delta time". It means the time passed since the last call to the same method. In layman's terms "delta" simply means "difference".
In game engines delta time usually refers to the time passed since the last frame was rendered. It is defined mathematically as:
deltaTime = timeNow - timeOfPreviousCallToMethod;
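For example, a typical frame-rate-independent update loop looks roughly like this (a minimal C++ sketch using std::chrono; position and speed are just placeholder names):

#include <chrono>

// Move something at `speed` units per second, regardless of frame rate.
void update(float dt, float& position, float speed) {
    position += speed * dt;   // scale the per-second speed by this frame's duration
}

int main() {
    using clock = std::chrono::steady_clock;
    auto previous = clock::now();
    float position = 0.0f;

    for (int frame = 0; frame < 3; ++frame) {   // stand-in for the real game loop
        auto now = clock::now();
        float dt = std::chrono::duration<float>(now - previous).count();  // seconds since last frame
        previous = now;
        update(dt, position, 10.0f);            // 10 units per second
    }
}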
To understand why this is used, how to use it, and how probably not to use it in iOS games, read my blog post about delta time.
This documentation page of boost::math::tools::brent_find_minima says about its first argument:
The function to minimise: a function object (or C++ lambda) ... with no maxima occurring in that interval.
But what happens if this is not the case? (After all, this condition is rather difficult to ensure up front, especially since the function is usually expensive to evaluate at many points.) Ideally, violations of this condition would be detected on the fly.
If this condition is violated, does boost throw an exception, or does it exhibit undefined behavior?
A workaround I am thinking of is to build the checking into the lambda ("function to minimize"), by capturing and maintaining a std::map<double,double> holding all the points that have been evaluated, and comparing each new evaluation with its nearest neighbor in each direction, to check whether there may be a local maximum. But I don't want to do all that if it isn't necessary.
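For concreteness, here is roughly what I have in mind (an untested sketch; std::cos is just a cheap stand-in for my real, expensive function):

#include <boost/math/tools/minima.hpp>
#include <cmath>
#include <iostream>
#include <iterator>
#include <map>
#include <stdexcept>

int main() {
    std::map<double, double> seen;   // every point evaluated so far

    auto f = [&seen](double x) {
        double y = std::cos(x);      // stand-in for the expensive function
        auto [it, inserted] = seen.emplace(x, y);
        if (!inserted) return it->second;

        // Compare with the nearest already-evaluated neighbour on each side.
        if (it != seen.begin() && std::next(it) != seen.end()) {
            double left  = std::prev(it)->second;
            double right = std::next(it)->second;
            if (y > left && y > right)   // higher than both neighbours
                throw std::runtime_error("possible local maximum detected");
        }
        return y;
    };

    auto r = boost::math::tools::brent_find_minima(f, 0.0, 6.0, 20);
    std::cout << "minimum near x = " << r.first << ", f(x) = " << r.second << "\n";
}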
There is no way for this to be done. If you read Corless's A Graduate Introduction to Numerical Methods, you'll read a very interesting point: All numerically defined functions are discontinuous halfway between representables, and have zero derivatives between representables. Basically they can be thought of as a sum of Heaviside functions.
So none of them are differentiable in the mathematical sense. Ok, maybe you think this is a bit unfair: the scale should be zoomed out. But by how much? We know that |x-1| isn't differentiable at x=1, but how could a computer tell that? How does it know that there isn't some locally smooth mollifier that makes it differentiable between x=1-eps and x=1+eps? I don't think there's a good answer to this question.
One of the most difficult problems in this class arises in quadrature. Some of these methods work fast when the complex extension of the function has poles far from the real axis. Try to numerically determine that.
Function spaces are impossible to determine numerically. Users just have to get it right.
Both GL_TIME_ELAPSED and GL_TIMESTAMP can be used to get time elapsed in nanoseconds.
The former uses scoped glBeginQuery/glEndQuery. Is that the difference?
"Is that the difference?"
You say that as though it were some minor difference.
GL_TIME_ELAPSED delivers the GPU time it took to process the commands in the query's scope (ie: glBegin/EndQuery). GL_TIMESTAMP is not a count of anything. It merely gets the GPU time, in nanoseconds, since... well, something. The start time is implementation defined, but it is always increasing (unless it overflows).
To put it another way, GL_TIME_ELAPSED is like a stopwatch: the time between when you start and stop. It's a delta. GL_TIMESTAMP is like looking at a clock: it's always increasing. It's an absolute time, but it's relative to something implementation dependent.
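For illustration, a minimal sketch contrasting the two (untested; it assumes a current GL 3.3+ context and that the query functions are already loaded, e.g. via GLEW):

#include <GL/glew.h>   // or whatever loader you use

void profileFrame()
{
    GLuint q[3];
    glGenQueries(3, q);

    // Stopwatch style: GPU time spent executing the commands inside the scope.
    glBeginQuery(GL_TIME_ELAPSED, q[0]);
    // ... issue draw calls here ...
    glEndQuery(GL_TIME_ELAPSED);

    // Clock style: read the GPU's current time before and after some commands.
    glQueryCounter(q[1], GL_TIMESTAMP);
    // ... issue draw calls here ...
    glQueryCounter(q[2], GL_TIMESTAMP);

    GLuint64 elapsed = 0, t0 = 0, t1 = 0;
    glGetQueryObjectui64v(q[0], GL_QUERY_RESULT, &elapsed);  // ns spent inside the scope
    glGetQueryObjectui64v(q[1], GL_QUERY_RESULT, &t0);       // absolute GPU time (ns)
    glGetQueryObjectui64v(q[2], GL_QUERY_RESULT, &t1);
    // t1 - t0 covers everything the GPU did between the two counters,
    // not just your own commands.

    glDeleteQueries(3, q);
}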
They are functionally identical, except for the difference between using glBeginQuery()/glEndQuery() and glQueryCounter(), as you pointed out.
See: the examples portion of the ARB_timer_query specification.
I'm struggling a little with reading the textual output that GPerfTools generates. I think part of the problem is that I don't fully understand how the sampling method operates.
From Wikipedia I gather that sampling profilers usually work by having the OS interrupt the program and querying the program's current instruction pointer. Now my knowledge of assembly is a little rusty, so I'm wondering what it means if the instruction pointer points to method m at any given time. Does it mean that the function is about to be called, that it is currently being executed, or both?
There's a difference, if I'm not mistaken, because in the first case the sample count (i.e. the number of times m is seen when a sample is taken) translates to the absolute call count of m, while in the latter case it merely translates to the number of times m was seen, i.e. an indication of the relative time spent in this method.
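(For context, my rough mental model of such a sampler is something like the following Linux/x86-64-specific sketch; it may well be off, which is partly why I'm asking.)

#include <signal.h>
#include <sys/time.h>
#include <ucontext.h>
#include <cstdio>

static void *samples[1024];
static volatile int nsamples = 0;

// SIGPROF handler: record the instruction pointer that was executing
// when the profiling timer fired.
static void on_prof(int, siginfo_t *, void *ucontext) {
    ucontext_t *uc = static_cast<ucontext_t *>(ucontext);
    if (nsamples < 1024)
        samples[nsamples++] = (void *)uc->uc_mcontext.gregs[REG_RIP];
}

int main() {
    struct sigaction sa = {};
    sa.sa_sigaction = on_prof;
    sa.sa_flags = SA_SIGINFO | SA_RESTART;
    sigaction(SIGPROF, &sa, nullptr);

    itimerval tv = {{0, 10000}, {0, 10000}};   // roughly 100 samples per second
    setitimer(ITIMER_PROF, &tv, nullptr);

    volatile double x = 0;                     // something to sample
    for (long i = 0; i < 200000000L; ++i) x += i;

    // A real profiler would map each sampled address back to the function
    // containing it and report per-function sample counts.
    std::printf("collected %d samples\n", (int)nsamples);
}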
Can someone clarify?
My program runs on a Windows computer whose time zone is not PST/PDT, but it needs to operate according to PST/PDT rules.
With respect to summer/winter time, the program needs to know
the next date when PDT changes to PST or vice versa.
How can I find the next summer time <-> winter time switch in C++?
Since the start and end of Daylight Saving Time have changed over the years due to various acts of Congress, the date of the next transition is not fixed. I don't know whether you need to reboot to apply DST rule changes, but either way you might want to update your estimate of the next transition more than once.
The native API to get this information is GetTimeZoneInformationForYear. You can pass in a specific time zone and year. That function fills out a TIME_ZONE_INFORMATION struct; the relevant fields are TIME_ZONE_INFORMATION::DaylightDate and TIME_ZONE_INFORMATION::StandardDate.
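A rough sketch of how that might look (untested; it assumes the Pacific zone is identified by its registry key name "Pacific Standard Time", uses a hard-coded year purely for illustration, and needs Advapi32 for the enumeration call):

#include <windows.h>
#include <cwchar>
#include <iostream>

int main() {
    // Find the Pacific time zone among the zones installed on this machine.
    DYNAMIC_TIME_ZONE_INFORMATION dtzi = {};
    bool found = false;
    for (DWORD i = 0; EnumDynamicTimeZoneInformation(i, &dtzi) == ERROR_SUCCESS; ++i) {
        if (std::wcscmp(dtzi.TimeZoneKeyName, L"Pacific Standard Time") == 0) {
            found = true;
            break;
        }
    }
    if (!found) return 1;

    TIME_ZONE_INFORMATION tzi = {};
    if (GetTimeZoneInformationForYear(2024, &dtzi, &tzi)) {   // year hard-coded for the example
        // DaylightDate/StandardDate are SYSTEMTIMEs in "month / weekday /
        // nth occurrence" form (wDay == 5 means "the last occurrence").
        std::wcout << L"To daylight time: month " << tzi.DaylightDate.wMonth
                   << L", occurrence " << tzi.DaylightDate.wDay
                   << L" of weekday " << tzi.DaylightDate.wDayOfWeek << L"\n";
        std::wcout << L"To standard time: month " << tzi.StandardDate.wMonth
                   << L", occurrence " << tzi.StandardDate.wDay
                   << L" of weekday " << tzi.StandardDate.wDayOfWeek << L"\n";
    }
}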
If you are on Windows, use a C# class to do this and return the results to your C++ program via your interop of choice. Otherwise, you'll likely wind up rebuilding the .NET code that does this in C++, and in the process miss all the edge cases that .NET handles for you.
You can use TimeZoneInfo.FindSystemTimeZoneById("Pacific Standard Time") (rather than TimeZoneInfo.Local, since the machine isn't in the Pacific zone) and then get the adjustment rules for it.
Ugly brute force method:
Call time(NULL) to get the current time as a time_t value.
Use localtime() to convert this value to a struct tm. (Consider adjusting the tm_hour member to 12, so you're checking noon every day.)
Repeatedly add 1 day to the tm_mday member of your struct tm, then use mktime() to convert back to time_t.
Use difftime() to compare each incremented time_t value to the previous one. When difftime() gives you a value that's not close to 86400.0 (the number of seconds in 1 day), you've found a DST transition. If you do this 365 times without finding a transition, something is wrong.
You can probably take some shortcuts if you're willing to make some assumptions about the representation of time_t.
Obviously this is only an outline of a solution -- and I haven't tried it myself.
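If it helps, here is roughly what that outline might look like in code (equally untested; note that it uses the process's own local time zone, which, as I realize below, may not be the one you need):

#include <cmath>
#include <cstdio>
#include <ctime>

int main() {
    std::time_t t = std::time(nullptr);
    std::tm day = *std::localtime(&t);
    day.tm_hour = 12;              // check noon each day, well away from the switch itself
    day.tm_min = 0;
    day.tm_sec = 0;
    day.tm_isdst = -1;             // let mktime() decide whether DST is in effect
    std::time_t prev = std::mktime(&day);

    for (int i = 0; i < 365; ++i) {
        day.tm_mday += 1;          // advance one calendar day; mktime() normalizes overflow
        day.tm_isdst = -1;
        std::time_t next = std::mktime(&day);

        // Consecutive local noons are 24 hours apart except across a DST switch
        // (23 or 25 hours), so anything far from 86400 seconds marks a transition.
        if (std::fabs(std::difftime(next, prev) - 86400.0) > 1.0) {
            char buf[32];
            std::strftime(buf, sizeof buf, "%Y-%m-%d", &day);
            std::printf("DST transition just before %s\n", buf);
            return 0;
        }
        prev = next;
    }
    std::printf("No transition found within a year -- something is wrong.\n");
}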
And I've just re-read the question and realized that I've completely ignored the part where you said that the computer isn't on PST or PDT. (Can you set the timezone for the program?)
I'm trying to measure the clock cycles needed to execute a piece of code on the TMS320C64x+ DSP that comes with the OMAP ZOOM 3430 MDK. I looked at the "Programmer's Guide" of the DSP chip and it says that the DSP supports the clock() function.
What I do is really simple, I just do
start = clock();
for (i = 0; i < 100; i++) {
    /* do something here */
}
stop = clock();
total = stop - start;
and then write the values of "start", "stop" and "total" to a shared memory region previously allocated between the DSP and the ARM processor. Then I simply print them to the screen on the ARM side.
The problem is, in my first runs I always get the same "total" value, and then in my next runs I always get 0! The "start" and "stop" values are consistent with the "total" value.
The strangest thing is that they seem to follow a pattern! I put the output below:
# ./sampleapp
Total = 63744
Start clock() value = 0x000000f9
Stop clock() value = 0x0000f9f9
# ./sampleapp
Total = 4177526784
Start clock() value = 0x00f9f9f9
Stop clock() value = 0xf9f9f9f9
# ./sampleapp
Total clock cyles = 0
Start clock() value = 0xf9f9f9f9
Stop clock() value = 0xf9f9f9f9
Apparently, clock() is not functioning well, but I'm not sure if this is because of something I'm doing wrong or because this sort of thing is not supported by the hardware I have. Any ideas why this might be happening?
From reading the questions so far, I'd say the Original Poster has substantially more knowledge of this matter than the contributors so far, and that the suspicion that the clock() is broken (or not supported, and returns an undefined result) on the DSP seems quite likely.
Curiously, why do you require previously allocated shared memory? Why don't you try with a normal stack variable? Is there anything I am missing?
Maybe you need to initialize the clock first.
How are you printing it out? Maybe the issue is actually with displaying the result.
On most platforms clock_t is a long long. If you're using printf with %d you might get variable results, which is what you're seeing.
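For example, casting before printing sidesteps the format-specifier mismatch (a small sketch):

#include <cstdio>
#include <ctime>

int main() {
    std::clock_t start = std::clock();
    /* do something here */
    std::clock_t stop = std::clock();

    // Cast explicitly so the format specifier always matches,
    // whatever the underlying type of clock_t happens to be.
    std::printf("total = %lld\n", (long long)(stop - start));
}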
Assuming the start and end variables are of type clock_t, and the code on the other end of the shared memory interprets the numbers the same way, then your problem is not with the call to clock() or with your handling of the difference between the start and end times.
I believe your problem is in the shared memory between the two. Can you please post code to show how you're sharing memory between two separate processors?
Perhaps you could use some inline assembly to access the CPU's counter registers directly.
The TMS320C64x+ has a 64-bit timestamp register in TSCL, TSCH. The counter is not enabled on reset, you must first write to the register to start the counter (maybe this is the problem with clock?). Reading from the register is not quite trivial as each half must be read with a separate instruction (and you can get interrupts...).
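A rough sketch of what that might look like with TI's compiler (untested; it assumes TSCL/TSCH are exposed through c6x.h and that the _itoll() intrinsic is available, as in recent codegen tools):

#include <c6x.h>   /* declares the TSCL/TSCH control registers */

unsigned long long cycles_for_loop(void)
{
    unsigned int lo0, hi0, lo1, hi1;
    volatile int i;

    TSCL = 0;          /* any write to TSCL starts the counter (it is disabled at reset) */

    lo0 = TSCL;        /* read TSCL first; this snapshots TSCH internally */
    hi0 = TSCH;

    for (i = 0; i < 100; i++) {
        /* do something here */
    }

    lo1 = TSCL;
    hi1 = TSCH;

    return _itoll(hi1, lo1) - _itoll(hi0, lo0);
}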