ConsolFun.LAST does not return last values - rrdtool

The purpose is to do statistics on user count of my website. I want to display current user count and previous 10 days user count.
Here is a test I did:
RrdDef rrdDef = new RrdDef("test", 60*60*24);
rrdDef.setStartTime(Util.getTime() - 60*60*24*11);
rrdDef.addDatasource(Stats.USER_COUNT.name(), DsType.GAUGE, 60*60*24, 0.0, Double.NaN);
rrdDef.addArchive(ConsolFun.LAST, 0.5, 1, 10);
RrdDb rrdDb = new RrdDb(rrdDef);
rrdDb.close();
rrdDb = new RrdDb("test");
Calendar cal = Calendar.getInstance();
cal.add(Calendar.DATE, -7);
rrdDb.createSample().setAndUpdate(String.format("%d:%d", (Util.getTimestamp(cal)), 1));
cal.add(Calendar.DATE, 1);
rrdDb.createSample().setAndUpdate(String.format("%d:%d", (Util.getTimestamp(cal)), 2));
cal.add(Calendar.DATE, 1);
rrdDb.createSample().setAndUpdate(String.format("%d:%d", (Util.getTimestamp(cal)), 3));
cal.add(Calendar.DATE, 1);
rrdDb.createSample().setAndUpdate(String.format("%d:%d", (Util.getTimestamp(cal)), 4));
cal.add(Calendar.DATE, 1);
rrdDb.createSample().setAndUpdate(String.format("%d:%d", (Util.getTimestamp(cal)), 5));
cal.add(Calendar.DATE, 1);
rrdDb.createSample().setAndUpdate(String.format("%d:%d", (Util.getTimestamp(cal)), 6));
cal.add(Calendar.DATE, 1);
rrdDb.createSample().setAndUpdate(String.format("%d:%d", (Util.getTimestamp(cal)), 7));
rrdDb.close();
rrdDb = new RrdDb("test");
FetchRequest fetchRequest = rrdDb.createFetchRequest(ConsolFun.LAST, Util.getTime() - 60*60*24*7, Util.getTime());
FetchData fetchData = fetchRequest.fetchData();
System.out.println(fetchData.dump());
rrdDb.close();
Here is the output
1404345600: NaN
1404432000: +2.0000000000E00
1404518400: +2.4654861111E00
1404604800: +3.4654861111E00
1404691200: +4.4654861111E00
1404777600: +5.4654861111E00
1404864000: +6.4654861111E00
1404950400: NaN
1405036800: NaN
Here is what I was expecting
1404345600: NaN
1404432000: +1.0000000000E00
1404518400: +2.0000000000E00
1404604800: +3.0000000000E00
1404691200: +4.0000000000E00
1404777600: +5.0000000000E00
1404864000: +6.0000000000E00
1404950400: +7.0000000000E00
1405036800: NaN
Where am I wrong?

You are falling afoul of Data Normalisation.
While a LAST-type RRA does hold the value of the last of the PDPs (primary data points) that make up each CDP (consolidated data point), you have overlooked two things.
First, since your RRA is set up so that 1 CDP = 1 PDP, there is in fact no consolidation going on at all (with a single PDP to consolidate, LAST, MAX, MIN and AVERAGE all produce the same result).
Secondly, your incoming samples do not arrive on interval boundaries, and so they are being normalised to fit those boundaries.
The internal intervals in RRDtool are always based on UTC (GMT) midnight. This doesn't matter much if you're dealing with intervals measured in seconds or minutes, but your interval is a whole day. You are using the Calendar object to create your base date/time as 'now' and then incrementing it a day at a time. However, your base date and time do not fall on an interval boundary, so each value ends up being split between two adjacent intervals, which causes the fractional numbers you see when fetching the data; in addition, your local timezone is probably not UTC, so your midnight is not RRDtool's midnight.
See Alex van den Bogaerdt's tutorial on Data Normalisation for more technical details on how this works.
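To make the boundary issue concrete, here is a minimal sketch of snapping a Unix timestamp down to the start of its one-day interval (UTC midnight). The question's code is Java/rrd4j, but the arithmetic is language-neutral; the function and names below are mine, purely for illustration.
long long alignToStep(long long unixSeconds, long long stepSeconds)
{
    // Floor the timestamp to the start of its interval. For a step of one
    // day this is UTC midnight, which is where RRDtool's boundaries sit.
    return unixSeconds - (unixSeconds % stepSeconds);
}
// Example: alignToStep(1404700000, 60*60*24) == 1404691200 (a UTC midnight),
// so updating with aligned timestamps keeps each sample in exactly one PDP.
Feeding samples whose timestamps are aligned like this (and setting the RRD start time to an aligned value) is what produces the clean 1, 2, 3, ... sequence you were expecting.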

Related

How to get the value from a table within a table with Lua?

I'm writing a material system for a game engine I'm working on, where a Lua script is basically used as a config file for the material.
I'm storing values in a table, but for vector values (vec2, vec3, etc.) I am embedding a table inside the main table to hold the multiple values, like so:
material = {
color = {0.2, 0.3, 1}
}
I want to get the individual values of color, and this is what I've tried to get the first value:
lua_getglobal(L, "material");
if (!lua_istable(L, -1)) {return;};
lua_pushstring(L, "color");
lua_gettable(L, -2);
if (lua_istable(L, -1)) {
lua_rawgeti(L, -1, 0);
printf("%f\n", lua_tonumber(L, -1));
}
lua_pop(L, 1);
But it only ever prints 0.0, no matter the first value in the color table. What am I doing wrong?
Lua tables used as arrays are 1-based, so the first index should be 1 and not 0:
lua_rawgeti(L, -1, 1);
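For completeness, a sketch of reading all three colour components with 1-based indices and a balanced stack. The lua_getfield call and the extra lua_pop calls are equivalent to (and additions to) what the original snippet does, not part of it:
lua_getglobal(L, "material");
if (!lua_istable(L, -1)) { lua_pop(L, 1); return; }

lua_getfield(L, -1, "color");          /* same as pushstring + gettable  */
if (lua_istable(L, -1)) {
    for (int i = 1; i <= 3; i++) {     /* Lua sequences start at index 1 */
        lua_rawgeti(L, -1, i);         /* pushes color[i]                */
        printf("%f\n", lua_tonumber(L, -1));
        lua_pop(L, 1);                 /* pop color[i]                   */
    }
}
lua_pop(L, 2);                         /* pop color table and material   */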

Cocos2d-x 4.0 Lens3D and Waves3D Animations

I used below code to make water like animation for background image
auto background = Sprite::create(TEX_MM_BG);
background->setPosition(Vec2(SW*0.5f, SH*0.5f));
auto nodeGrid = NodeGrid::create();
nodeGrid->addChild(background);
this->addChild(nodeGrid, 0);
ActionInterval* lens = Lens3D::create(10, Size(32, 24), Vec2(100, 180), 150);
ActionInterval* waves = Waves3D::create(10, Size(15, 10), 18, 15);
nodeGrid->runAction(RepeatForever::create(Sequence::create(waves,lens, NULL)));
The animation looks good, but it stops for 10 seconds, then plays for 10 seconds, then stops again for 10 seconds, and so on. How do I avoid the pauses in the middle?
It isn't actually stopping: the Sequence applies the waves effect followed by the lens effect, and while the lens effect is running the waves animation is paused.
The correct way to code this is to use a Spawn:
ActionInterval* lens = Lens3D::create(10, Size(32, 24), Vec2(100, 180), 150);
ActionInterval* waves = Waves3D::create(10, Size(15, 10), 18, 15);
// Spawn will run both effects at the same time.
auto lensWaveSpawn = Spawn::createWithTwoActions(lens, waves);
auto seq = Sequence::create(lensWaveSpawn, nullptr);
nodeGrid->runAction(RepeatForever::create(seq));

Updating just parts of a large OpenGL VBO at run-time without latency

I am trying to update a large VBO in OpenGL which has about 4,000,000 floats in it, but I only need to update the elements which are changing (<1%), and this needs to happen at run time.
I have pre-computed the indices which need changing, but because they are fragmented throughout the VBO I have to send 1000 individual glBufferSubDataARB calls with the appropriate offsets (I'm not sure whether this is a problem or not).
I have set the VBO to use STREAM_DRAW_ARB because the update to the VBO occurs every 5 seconds.
Even if I update just 1000 of the objects in the VBO (so about 16,000 floats spread over 1000 calls) I notice a small but noticeable latency.
I believe this may be due to the VBO being used for drawing whilst it is being updated, as I've heard this can cause stalls. I only know of solutions to this problem when you are updating the entire VBO - for example: OpenGL VBO updating data
However, because my VBO is so large, I would think sending all 4,000,000 data elements every 5 seconds would be much slower and use up a lot of CPU-GPU bandwidth. So I was wondering if anybody knows how to avoid the update having to wait for the GPU to finish with the VBO when doing it the way I am - fragmented over the VBO, updated over about a thousand calls.
Anyway, the following is a section of my code which updates the buffer every 5 seconds with usually around 16,000 floats of the 4,000,000 present (but as I say, using about 1000 calls).
for(unsigned int kkkk = 0;kkkk < surf_props.quadrant_indices[0].size();kkkk++)
{
temp_surf_base_colour[0] = surf_props.quadrant_brightness[0][kkkk];
temp_surf_base_colour[1] = 1.0;
temp_surf_base_colour[2] = surf_props.quadrant_brightness[0][kkkk];
temp_surf_base_colour[3] = 1.0;
temp_surf_base_colour[4] = surf_props.quadrant_brightness[0][kkkk];
temp_surf_base_colour[5] = 1.0;
temp_surf_base_colour[6] = surf_props.quadrant_brightness[0][kkkk];
temp_surf_base_colour[7] = 1.0;
temp_surf_base_colour[8] = surf_props.quadrant_brightness[0][kkkk];
temp_surf_base_colour[9] = 1.0;
temp_surf_base_colour[10] = surf_props.quadrant_brightness[0][kkkk];
temp_surf_base_colour[11] = 1.0;
temp_surf_base_colour[12] = surf_props.quadrant_brightness[0][kkkk];
temp_surf_base_colour[13] = 1.0;
temp_surf_base_colour[14] = surf_props.quadrant_brightness[0][kkkk];
temp_surf_base_colour[15] = 1.0;
glBindBufferARB(GL_ARRAY_BUFFER_ARB, vb_colour_surf);
glBufferSubDataARB(GL_ARRAY_BUFFER_ARB, sizeof(GLfloat) * ((numb_surf_prims * 4) + surf_props.quadrant_indices[0][kkkk] * 16), sizeof(GLfloat) * 16, temp_surf_base_colour);
glBindBufferARB(GL_ARRAY_BUFFER_ARB, 0);
}
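For context, the whole-VBO solution the question refers to (OpenGL VBO updating data) is usually buffer orphaning: re-specify the data store with a null pointer so the driver can hand back fresh storage while the GPU keeps drawing from the old copy, which avoids the synchronisation stall. A rough sketch only, assuming a full-size staging array fullData and its size bufferSizeBytes (neither appears in the original code):
glBindBufferARB(GL_ARRAY_BUFFER_ARB, vb_colour_surf);
// Orphan the old storage; the GPU can keep reading the previous copy.
glBufferDataARB(GL_ARRAY_BUFFER_ARB, bufferSizeBytes, NULL, GL_STREAM_DRAW_ARB);
// Upload the complete, freshly assembled data set in one call.
glBufferDataARB(GL_ARRAY_BUFFER_ARB, bufferSizeBytes, fullData, GL_STREAM_DRAW_ARB);
glBindBufferARB(GL_ARRAY_BUFFER_ARB, 0);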

C++: Calculating Moving FPS

I would like to calculate the FPS of the last 2-4 seconds of a game. What would be the best way to do this?
Thanks.
Edit: To be more specific, I only have access to a timer with one second increments.
This is a near miss of a very recent posting; see my response there on using exponentially weighted moving averages:
C++: Counting total frames in a game
Here's sample code.
Initially:
avgFps = 1.0; // Initial value should be an estimate, but doesn't matter much.
Every second (assuming the total number of frames in the last second is in framesThisSecond):
// Choose alpha depending on how fast or slow you want old averages to decay.
// 0.9 is usually a good choice.
avgFps = alpha * avgFps + (1.0 - alpha) * framesThisSecond;
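A minimal self-contained C++ sketch of that idea (the struct and names below are mine, not from the linked answer):
// Exponentially weighted moving average of frames per second.
// alpha close to 1.0 makes old values decay slowly; 0.9 is a reasonable default.
struct FpsAverager
{
    double avgFps = 30.0;   // initial estimate; it converges quickly regardless
    double alpha  = 0.9;

    // Call once per second with the number of frames rendered in that second.
    void addSecond(int framesThisSecond)
    {
        avgFps = alpha * avgFps + (1.0 - alpha) * framesThisSecond;
    }
};
This only needs the one-second timer the question mentions: count frames, and once per second feed the count into addSecond().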
Here's a solution that might work for you. I'll write this in pseudo/C, but you can adapt the idea to your game engine.
const int trackedTime = 3000; // 3 seconds
int frameStartTime; // in milliseconds
int queueAggregate = 0;
queue<int> frameLengths;
void onFrameStart()
{
frameStartTime = getCurrentTime();
}
void onFrameEnd()
{
int frameLength = getCurrentTime() - frameStartTime;
frameLengths.enqueue(frameLength);
queueAggregate += frameLength;
while (queueAggregate > trackedTime)
{
int oldFrame = frameLengths.dequeue();
queueAggregate -= oldFrame;
}
setAverageFps(frameLengths.count() / 3); // 3 seconds of tracked frame lengths
}
You could keep a circular buffer of the frame times for the last 100 frames and average them; that gives you "FPS over the last 100 frames". (Or, rather, over 99, since you won't diff the newest time and the oldest.)
Use some accurate system clock for the timestamps, with millisecond resolution or better.
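A rough sketch of that circular-buffer idea (the clock choice and names are my own, not the answerer's):
#include <chrono>

// FPS over the last N frames: keep each frame's timestamp in a ring buffer
// and divide the number of intervals by the span between oldest and newest.
constexpr int N = 100;
long long frameTimesMs[N] = {0};
int frameIndex = 0;   // next slot to write
int framesSeen = 0;   // how many slots are filled (caps at N)

long long nowMs()
{
    using namespace std::chrono;
    return duration_cast<milliseconds>(steady_clock::now().time_since_epoch()).count();
}

// Call once per frame; returns the average FPS over the buffered frames.
double onFrame()
{
    int newest = frameIndex;
    frameTimesMs[newest] = nowMs();
    frameIndex = (frameIndex + 1) % N;
    if (framesSeen < N) ++framesSeen;

    int oldest = (framesSeen < N) ? 0 : frameIndex;   // slot holding the oldest timestamp
    long long spanMs = frameTimesMs[newest] - frameTimesMs[oldest];
    if (spanMs <= 0) return 0.0;                      // not enough data yet
    return (framesSeen - 1) * 1000.0 / spanMs;        // N timestamps -> N-1 intervals
}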
What you actually want is something like this (in your mainLoop):
frames++;
if(time<secondsTimer()){
time = secondsTimer();
printf("Average FPS from the last 2 seconds: %d",(frames+lastFrames)/2);
lastFrames = frames;
frames = 0;
}
If you know how to deal with structures/arrays, it should be easy for you to extend this example to, say, 4 seconds instead of 2. But if you want more detailed help, you should really mention WHY you don't have access to a precise timer (which architecture, which language) - otherwise everything is just guessing...
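One way that extension might look, keeping the per-second counts in a small ring buffer (this is only an illustration; perSecond, slot and WINDOW are made-up names, and secondsTimer() is the same one-second-resolution timer used in the snippet above):
const int WINDOW = 4;          // average over the last 4 completed seconds
int perSecond[WINDOW] = {0};   // frame counts of the last WINDOW seconds
int slot = 0;                  // which second we are currently counting into
int frames = 0;                // frames counted so far in the current second
int lastSecond = 0;

// In the main loop:
frames++;
if (lastSecond < secondsTimer())
{
    lastSecond = secondsTimer();
    perSecond[slot] = frames;            // close out the finished second
    slot = (slot + 1) % WINDOW;
    frames = 0;

    int total = 0;
    for (int i = 0; i < WINDOW; i++)
        total += perSecond[i];
    printf("Average FPS over the last %d seconds: %d\n", WINDOW, total / WINDOW);
}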

Adjust bitmap image brightness/contrast using C++

How do I adjust image brightness/contrast using C++, without using any 3rd-party library or dependency?
Image brightness is covered here - use the mean of the RGB values and shift them.
Contrast is covered here, with solutions in other languages available as well.
Edit in case the above links die:
The answer given by Jerry Coffin below covers the same topic and has links that still live.
But, to adjust brightness, you add a constant value to each of the R, G, B fields of an image. Make sure to use saturating math - don't allow values to go below 0 or above the maximum allowed by your bit depth (8 bits per channel for 24-bit color).
RGB_struct color = GetPixelColor(x, y);
// Use a signed type here: brightAdjust can be negative, and an unsigned
// type like size_t would wrap instead of letting truncate() clamp to 0.
int newRed   = truncate(color.red   + brightAdjust);
int newGreen = truncate(color.green + brightAdjust);
int newBlue  = truncate(color.blue  + brightAdjust);
For contrast, I have taken and slightly modified code from this website:
float factor = (259.0 * (contrast + 255.0)) / (255.0 * (259.0 - contrast));
RGB_struct color = GetPixelColor(x, y);
int newRed   = truncate((int)(factor * (color.red   - 128) + 128));
int newGreen = truncate((int)(factor * (color.green - 128) + 128));
int newBlue  = truncate((int)(factor * (color.blue  - 128) + 128));
Where truncate(int value) makes sure the value stays between 0 and 255 for 8-bit color. Note that many CPUs have intrinsic functions to do this in a single cycle.
int truncate(int value)
{
    if(value < 0) return 0;
    if(value > 255) return 255;
    return value;
}
Read in the image with a library such as the Independent JPEG Group's library (libjpeg). When you have the raw data, you can convert it from RGB to HSL or (preferably) CIE L*a*b*. Both contrast and brightness will then basically just involve adjustments to the L channel - to adjust brightness, just shift all the L values up or down by an appropriate amount. To adjust contrast, you adjust the difference between a particular value and the center value. You'll generally want to do this non-linearly, so values near the middle of the range are adjusted quite a bit, but values close to the ends of the range aren't affected nearly as much (and any that are at the very ends aren't changed at all).
Once you've done that, you can convert back to RGB, and then back to a normal format such as JPEG.
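As a sketch of what the L-channel adjustment itself might look like - assuming you already have (or have written) rgbToLab()/labToRgb() conversion helpers, which are hypothetical names here and not part of any particular library:
struct RGB { unsigned char r, g, b; };
struct Lab { float L, a, b; };      // CIE L*a*b*, with L in the range 0..100

Lab rgbToLab(RGB in);   // assumed to exist; conversion not shown
RGB labToRgb(Lab in);   // assumed to exist; conversion not shown

void adjustImage(RGB* pixels, int count, float brightShift, float contrastGain)
{
    for (int i = 0; i < count; ++i)
    {
        Lab p = rgbToLab(pixels[i]);

        // Brightness: shift every L value by a constant amount.
        p.L += brightShift;

        // Contrast: scale the distance from mid-grey (L = 50).
        // A non-linear curve could be substituted here, as described above.
        p.L = 50.0f + (p.L - 50.0f) * contrastGain;

        // Clamp back into the valid L range before converting back to RGB.
        if (p.L < 0.0f)   p.L = 0.0f;
        if (p.L > 100.0f) p.L = 100.0f;

        pixels[i] = labToRgb(p);
    }
}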