Place values above line graph dots - chart.js

I feel this should be a very simple parameter, but I have looked across a large number of SO questions (and the docs) and simply not found it.
I have a line graph and would like the value of each point to be displayed permanently above it, like so:
I'm not worried about the shading (although if it's possible, great), but is there a parameter I can enable to place the values above the points? I don't want them to appear only on hover, because I'm actually converting the graph to an image.

Related

Can one normalize a PCA for specific features?

When dealing with data sets that have hundreds of dimensions, some phenotypic and some metadata, I would like to "normalize" out the effect of specific (multiple) features on the PCA.
I can see the contribution of specific features by plotting biplots; however, I would like to present the data with the effect of these features accounted for and normalized out.
I tried normalizing by specific columns, but I'm not sure this is the right way to go about it, or whether I can do it for multiple columns at once. I'm more of a newbie to dimension reduction; I feel like I'm missing something fundamental here. I'd appreciate any input. Thanks!
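For what it's worth, here is a minimal Python (scikit-learn) sketch of the column-wise normalization described above: standardize only the selected columns before running the PCA. The column names and data are placeholders, and whether this is the right approach for a given data set is a separate question.

import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

# Placeholder data: a few phenotypic columns plus some metadata columns.
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(100, 4)),
                  columns=["pheno_a", "pheno_b", "meta_batch", "meta_age"])

# Standardize only the columns whose effect you want to put on a common scale.
cols_to_normalize = ["meta_batch", "meta_age"]
df[cols_to_normalize] = StandardScaler().fit_transform(df[cols_to_normalize])

pca = PCA(n_components=2)
scores = pca.fit_transform(df.values)
print(pca.explained_variance_ratio_)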

How can you graph error as a shaded region?

When using TGraphErrors, the error bars appear as crosses. In the absence of significant X errors, and with many, many data points (such as an MCA spectrum with 16k bins or so), I'd like to remove the individual points and error bars and instead draw the error as a shaded region bounding the curve from above and below.
But I'm still a rank beginner at using ROOT, and I cannot figure out how to leverage TGraphErrors to do what I want. Will I need to use a TMultiGraph instead (and calculate the upper and lower bounding curves), and if so, how can I control the shaded region?
Something like the example below would be along the lines of what I'm looking for.
Take a look at the TGraphPainter documentation, which gives a few examples. One way is to draw the TGraphErrors using option 4:
A smoothed filled area is drawn through the end points of the vertical error bars.
You will probably find that to get the final plot to look as you want, you have to draw the same graph multiple times - once to get the shaded region, then again on top to get the central curve.
This blog post gives a working example. It's written in PyROOT, but can be easily adapted to C++.
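For reference, a minimal PyROOT sketch of that two-pass drawing; the data arrays are placeholders, and SetFillColorAlpha needs ROOT 6 (SetFillColor/SetFillStyle work on older versions):

import ROOT
from array import array

n  = 5
x  = array('d', [1.0, 2.0, 3.0, 4.0, 5.0])
y  = array('d', [2.1, 2.9, 3.8, 3.1, 2.2])
ex = array('d', [0.0] * n)                  # negligible X errors
ey = array('d', [0.3, 0.2, 0.4, 0.3, 0.2])

c = ROOT.TCanvas("c", "error band", 800, 600)
g = ROOT.TGraphErrors(n, x, y, ex, ey)
g.SetFillColorAlpha(ROOT.kAzure + 1, 0.35)
g.SetLineColor(ROOT.kBlue + 1)
g.SetLineWidth(2)

g.Draw("a4")   # axes + smoothed filled band through the error-bar end points
g.Draw("cx")   # smooth central curve on top; "x" suppresses the error bars
c.SaveAs("band.png")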

Pattern matching/recognition library for vectors (like OpenCV for image input)

Does anyone know a good pattern matching/recognition library in C++ (OSS preferred) that is able to detect whether a list of vectors is an arrow or some other class?
I already know OpenCV, but that is meant to be used for raster graphics (or did I miss something?)... and I already have vector geometry, so it seems strange to convert it back into a raster graphic where you have to detect the edges again.
So what I need is a library that takes a list of vectors as input instead of a raster graphic and can recognize whether the vectors form an arrow (independent of direction) and extract the parts of the arrow (head/tip/tail etc.).
Does anyone know of such a library, or have a hint where to look for this kind of problem (algorithms etc.)?
I'm trying to change the way a UI is used. I have already tried the Protractor algorithm and divided the recognition step into different parts, e.g. for the arrow example:
draw, stop drawing and take the result
treat the first line as the body (route line, arrow shaft)
wait for accept (=> result is recognised as a simple line; replace the hand-drawn graphic with the route graphic) or for the next draw step
draw the arrow head and take the result coordinates
wait for the accept/finish button (=> result is recognised as an arrow and not a simple route), then
a) replace the hand-drawn vectors with the correct arrow graphic
b) or go on with any fletchings, and so on
But I want to do this in a single step for all vector lines (regardless of the order and direction). Are there any recommendations?
And what if the first stroke is a polyline with an angle that could also be recognised as a caret, but the follow-up symbology has to decide between the two?
I want to draw commands instead of searching for them in an overloaded menu. But it is also important to detect the parts of a graphic (e.g. center line, left line, ...) and to keep the aspect ratio (dimensions) as far as possible, which means key coordinates (e.g. the arrow tip) should be kept too. This is important for replacing the hand-drawn vectors with the corrected standard graphic.
Is this possible with a library in a single step, or should I stick with the current concept of recognising each polyline separately and relying on the input order (e.g. the first line must be the direction)?
You can look here to get an idea: http://depts.washington.edu/aimgroup/proj/dollar/
There is the $1 Recognizer algorithm and some derived ones and you can try them online.
The problem is that my "commands" consist of multiple lines, and every line might have a different special meaning in the context of the complete graphic. The algorithms and libraries I already know (like the $1 Recognizer above) are geared towards single gestures rather than a complex sequence of multiple gesture inputs, which only gets its precise meaning when interpreted as a whole sketch.
I think continuing to interpret each line separately, without putting it into the whole context (recognising the whole sketch), could lead to a dead end. But maybe a mixed approach might work.
Real-life comparison: it is like when somebody draws a horse. You wouldn't say it is a horse if they had just drawn the first line - you need some more input, e.g. four legs etc.
(Well, I know not everyone is good at drawing and some horses could look like cows... but anyway, this should give you an idea of what I mean.)
Any hints?
Update: I've found a video here that is close to the problem. The missing link is how the parts of the structure are accessible after the recognition, but this can be done in a separate step too (after knowing what the drawing shows).
In my humble opinion, I don't think there's a library in the wild that fulfils such specific needs. In the end you'll end up writing custom code.
Either way, the first thing you'll have to do is extract classification features from every gesture you detect. You'll then have to put the acquired feature vectors into a feature space. Once you do this, there are literally a million things you can do in order to classify the feature vectors into one of the available classes (e.g., arrow, triangle, etc.). For example, the guys from the University of Washington in the link you've supplied do their feature extraction in steps 1, 2 and 3 and classify the acquired feature vector in step 4.
The idea of breaking the gesture into sub-gestures sounds tempting, though I suspect it will introduce problems in a number of ways (e.g., how to detect the end of one sub-gesture and the beginning of the next) and it will also introduce significant overhead, since you will end up with additional steps and a sort of decision-tree structure.
One other thing that I forgot to mention above is that you will also need to create a training data-set of a reasonable size in order to train your classifiers.
I won't go to the trouble of suggesting libraries, classifiers, linear algebra packages, etc., since that is out of scope here (I would kindly suggest searching the web for the specific components that will help you build your application).
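To make the feature-extraction idea concrete, here is a minimal Python sketch (the strokes and templates are made up for illustration): each stroke is resampled to a fixed number of points, much as the $1 Recognizer does, translated and scaled to a canonical frame, flattened into a feature vector, and then classified by nearest neighbour against labelled templates.

import numpy as np

def resample(points, n=32):
    """Resample a polyline to n points equally spaced along its arc length."""
    pts = np.asarray(points, dtype=float)
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    cum = np.concatenate([[0.0], np.cumsum(seg)])
    targets = np.linspace(0.0, cum[-1], n)
    xs = np.interp(targets, cum, pts[:, 0])
    ys = np.interp(targets, cum, pts[:, 1])
    return np.column_stack([xs, ys])

def features(points, n=32):
    """Translate to the centroid, scale to unit size, flatten to a vector."""
    p = resample(points, n)
    p -= p.mean(axis=0)
    scale = np.abs(p).max()
    if scale > 0:
        p /= scale
    return p.ravel()

def classify(stroke, templates):
    """Nearest-neighbour match against a list of (label, stroke) templates."""
    f = features(stroke)
    label, _ = min(((lbl, np.linalg.norm(f - features(tpl))) for lbl, tpl in templates),
                   key=lambda pair: pair[1])
    return label

# Hypothetical usage with made-up templates and a made-up input stroke.
templates = [
    ("line",  [(0, 0), (10, 0)]),
    ("caret", [(0, 0), (5, 5), (10, 0)]),
]
print(classify([(0, 1), (4, 6), (9, 0)], templates))   # -> "caret"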

How to project/unproject when in an OpenGL display list

I have OpenGL code that renders some objects and displays text labels for some of them. Displaying a label is done by projecting the appropriate vertex to the screen using gluProject, and then adding a small offset so the label is beside the vertex. This way each label is the same distance from its vertex on the screen.
I didn't originally use a display list for this (apart from the display lists for the glyphs), and it worked correctly (if somewhat slowly). Now I build a display list for the entire scene, and find that the labels are placed incorrectly.
It took me a while, but I think I have basically found the problem: gluProject takes as parameters the projection matrix, model-view matrix, and the viewport. I see no way to provide them other than calling glGetDoublev(GL_MODELVIEW_MATRIX, ...), etc. But glGet functions are "not allowed" in a display list, which - empirically - seems to mean that they don't cause an error, but rather execute immediately. So the matrix data being compiled into the display list is from list compilation time instead of list execution time (which is a problem because I need to precompile the list, not execute it immediately). At least this is my current theory.
Can anyone confirm or deny that this would cause the problem?
How does one solve this? I just want to do what gluProject does, but using the list's current matrices.
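For reference, the arithmetic gluProject performs is simple enough to reproduce by hand. Here is a minimal numpy sketch of it, assuming the matrices are given in the column-major layout that glGetDoublev returns and the viewport as (x, y, width, height):

import numpy as np

def glu_project(obj_xyz, modelview, projection, viewport):
    """Sketch of what gluProject computes.

    modelview / projection: 16 doubles in the column-major order returned by
    glGetDoublev(GL_MODELVIEW_MATRIX / GL_PROJECTION_MATRIX, ...).
    viewport: (x, y, width, height) as returned by glGetIntegerv(GL_VIEWPORT, ...).
    """
    mv = np.asarray(modelview, dtype=float).reshape(4, 4).T   # column-major -> row-major
    pr = np.asarray(projection, dtype=float).reshape(4, 4).T
    v = pr @ mv @ np.append(np.asarray(obj_xyz, dtype=float), 1.0)
    if v[3] == 0.0:
        return None                      # point cannot be projected
    v /= v[3]                            # perspective divide -> normalized device coords
    x, y, w, h = viewport
    win_x = x + w * (v[0] + 1.0) / 2.0
    win_y = y + h * (v[1] + 1.0) / 2.0
    win_z = (v[2] + 1.0) / 2.0           # depth in [0, 1]
    return win_x, win_y, win_z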
Note: I'm aware that various functions/approaches are deprecated in recent versions of OpenGL; please spare me answers along the lines of "you shouldn't be doing that" ;-)
Think about it: glGet… places some data in your process memory, possibly on the stack. There is absolutely no way a display list could reproduce calculations performed on data that is not even within its reach. Add to this that GLU (note the U) functions are not part of OpenGL and hence don't make it into the display list. GLU functions are also not GPU accelerated; all the calculations happen on the CPU, and due to the API design the data transfer is rather inefficient.
Quirks like these, which, as you have found out, make display lists rather impractical, are among the reasons why they have been stripped from later versions of OpenGL. Or in other words: don't use them.
Instead, use Vertex Buffer Objects and Index Buffers. A labeling system like yours can be implemented using instancing, fed by a list of the target positions. If instancing is not available, you need to supply redundant position attributes to the label's vertex attribute vector.
Anyway: in your case, making proper use of shaders and VBOs will easily outperform any display-list-based solution (because you can't put everything into a display list).
Rather odd, but workable, would be putting the glRasterPos and glBitmap calls (and hence the glutBitmap text calls) in a display list, and applying the offset to the projection matrix before the actual projection mapping, i.e.
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
scene_projection();    /* the regular projection setup */
draw_scene();

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glTranslatef(...);     /* for the label offset */
scene_projection();
draw_labels();
Though this is how I'd have done it 12 years ago. Definitely not today.

Counting objects on a grid with OpenCV

I'm relatively new to OpenCV, and I'm working on a project where I need to count the number of objects on a grid. The grid is the background of the image, and in each space there either is an object or there isn't; I need to count the number present, and I don't really know where to start. I've searched here and other places, but can't seem to find what I'm looking for. I will need to be tracking the space numbers of the grid in the future, so I will also eventually need to know whether each grid space is occupied or empty. I'm not going so far as to ask for a coded example, but does anybody know of any sources or tutorials to accomplish this task or one similar to it? Thanks for your help!
Further details: the images will come from a stably mounted camera; the objects are of relatively uniform shape but of varying size and color.
I would first answer a few questions:
Will an object be completely enclosed in a grid cell? Or can it be placed on top of a grid line? (In other words, will the object hide a line from the camera?)
Will more than one object be in one cell?
Can an object occupy more than one cell? (closely related to question 1)
Given reasonable answers to those questions, I believe the problem can be broken into two parts: first, identify the center of each grid space. To count objects, you can then sample each such region to see if anything "not background" is there.
You can then assume that a grid space is defined by four strong, regularly placed corner features. (For the sake of discussion, I'll assume you've performed the initial image preparation as needed: histogram equalization, Gaussian blur for noise reduction, etc.) From there, you might try some of OpenCV's methods for finding corners (the Harris corner detector, cvGoodFeaturesToTrack, etc.). It's likely that you can borrow some of the techniques found in OpenCV's square-finding example (samples/c/square.c). For this task, it's probably sufficient to assume that a grid center is just the centroid of each set of "adjacent" (or sufficiently near) corners.
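A minimal sketch of that corner-based idea using OpenCV's Python bindings; the file name and parameter values are assumptions you would tune for your images:

import cv2

img = cv2.imread("grid.png")                       # placeholder file name
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.equalizeHist(gray)
gray = cv2.GaussianBlur(gray, (5, 5), 0)

# Strong, regularly placed corners should pick up the grid intersections.
corners = cv2.goodFeaturesToTrack(gray, maxCorners=200,
                                  qualityLevel=0.05, minDistance=20)
corners = corners.reshape(-1, 2)

# A grid-cell center is roughly the centroid of its four nearest corners;
# as a first step, just visualize the detected corners.
for cx, cy in corners:
    cv2.circle(img, (int(cx), int(cy)), 4, (0, 0, 255), -1)
cv2.imwrite("corners.png", img)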
Alternatively, you might use the Hough transform to identify the principal horizontal and vertical lines in the image. You can then determine the intersection points to identify the extents of each grid cell. This implementation might be more challenging since inferring structure (or adjacency) from "nearby" vertices in order to find a grid center seems more difficult.
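And a sketch of the Hough-based alternative (again with a placeholder file name and guessed thresholds): detect the principal horizontal and vertical lines, use them as row and column boundaries, then sample the interior of each cell for "not background" content.

import cv2
import numpy as np

img = cv2.imread("grid.png", cv2.IMREAD_GRAYSCALE)   # placeholder file name
edges = cv2.Canny(img, 50, 150)

# Probabilistic Hough transform; keep only (roughly) horizontal and vertical lines.
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=100,
                        minLineLength=min(img.shape) // 2, maxLineGap=10)
rows, cols = [], []
for x1, y1, x2, y2 in lines.reshape(-1, 4):
    if abs(y2 - y1) < abs(x2 - x1):      # horizontal line -> row boundary
        rows.append((y1 + y2) // 2)
    else:                                # vertical line -> column boundary
        cols.append((x1 + x2) // 2)
rows, cols = sorted(set(rows)), sorted(set(cols))

# Sample the interior of each cell; a cell whose mean intensity differs a lot
# from the overall background is counted as occupied (the threshold is a guess).
background = np.median(img)
occupied = 0
for r0, r1 in zip(rows, rows[1:]):
    for c0, c1 in zip(cols, cols[1:]):
        cell = img[r0 + 5:r1 - 5, c0 + 5:c1 - 5]
        if cell.size and abs(cell.mean() - background) > 30:
            occupied += 1
print("occupied cells:", occupied)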