I have a problem drawing a curve in Qt with the Qwt library.
The curve draws fine, except that the last point and the first point are connected: the QwtPlot actually draws a polygon with the data I provide...
I've looked into the QwtPlotCurve settings and parameters, but couldn't find anything relevant that could fix that.
The only other hint I have, from this thread (which sadly is unanswered), is that it has to do with the data I provide.
Currently, I'm using a circular fixed-size buffer (an std::array) to store values I poll continuously. Then, I use the member function
setRawSamples(const double* xData, const double* yData, int size);
to set my curve's data (with the pointers I get from std::array::data()). This means the list of points to draw isn't ordered (i.e. the point with the smallest abscissa isn't necessarily the first point in the double*).
Could this be the source of the problem? How can I fix it so that a curve is drawn rather than a polygon?
A curve is drawn as a non-closed polyline by default. Each sample point is connected to the previous sample point in the provided array (but the first sample isn't connected to the last one). Qwt doesn't care about ordering your points by abscissa. For example, if you provide the samples (0, 0), (0, 1), (1, 1), (1, 0), (0, 0), it will draw a rectangle. If you want to avoid self-intersections in the curve, you need to provide samples ordered by abscissa.
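So in your case the likely fix is to linearize the circular buffer before handing it to the curve, so the samples come out oldest first (and hence in increasing-abscissa order if your abscissa grows monotonically, e.g. a timestamp). A minimal sketch; the member names (xs_, ys_, head_, xlin_, ylin_, curve_) are hypothetical, not from your code:

// Hypothetical members: circular buffers xs_, ys_ with head_ pointing at
// the oldest sample, plus linearized copies xlin_, ylin_ kept as members
// because setRawSamples() stores the pointers without copying the data.
void MyPlot::updateCurve()
{
    for (std::size_t i = 0; i < xs_.size(); ++i) {
        const std::size_t j = (head_ + i) % xs_.size(); // oldest first
        xlin_[i] = xs_[j];
        ylin_[i] = ys_[j];
    }
    // xlin_ is now in increasing-abscissa order, so the plotted polyline
    // no longer jumps back from the newest sample to the oldest one.
    curve_->setRawSamples(xlin_.data(), ylin_.data(),
                          static_cast<int>(xs_.size()));
}

If your abscissa values aren't monotonic, sort the (x, y) pairs by x instead of just rotating the buffer.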
I need to detect this ball and find its position and radius using OpenCV. I have downloaded many code samples, but none of them works. Any help is highly appreciated.
I see you have quite a setup installed. As mentioned in the comments, please make sure that you have appropriate lighting to capture the ball, and make the ball distinguishable from its surroundings by painting it a different colour.
Once your setup is optimized for detection, you may proceed via different ways to track your ball (stationary or not). A few ways may be:
Feature detection: via Hough circles, detect 2D circles (and their radii) that lie within a certain colour range, as explained below.
There are many more ways to detect objects via feature detection, as this clever blog post points out.
Object detection: via SURF, SIFT and many other methods, you may detect your ball, calculate its radius and even predict its motion.
This code uses Hough circles to compute the ball's position, display it in real time and calculate its radius in real time. I am using Qt 5.4 with OpenCV version 2.4.12.
void Dialog::TrackMe() {
    webcam.read(cim); // read a live frame from the webcam into the cv::Mat 'cim'
    if (cim.empty() == false) { // the webcam is running and delivered a frame
        cv::inRange(cim, cv::Scalar(0,0,175), cv::Scalar(100,100,256), cproc);
        // keep only the pixels of cim whose BGR values lie between
        // (0,0,175) and (100,100,256), i.e. predominantly red; store in cproc
        cv::HoughCircles(cproc, veccircles, CV_HOUGH_GRADIENT, 2, cproc.rows/4, 100, 50, 10, 100);
        // detect circles in cproc with the CV_HOUGH_GRADIENT method
        // and store them in veccircles
        for (itcircles = veccircles.begin(); itcircles != veccircles.end(); itcircles++)
        {
            cv::circle(cim, cv::Point((int)(*itcircles)[0], (int)(*itcircles)[1]), 3, cv::Scalar(0,255,0), CV_FILLED); // draw the center point
            cv::circle(cim, cv::Point((int)(*itcircles)[0], (int)(*itcircles)[1]), (int)(*itcircles)[2], cv::Scalar(0,0,255), 3); // draw the circle outline
        }
        QImage qimgprocess((uchar*)cproc.data, cproc.cols, cproc.rows, cproc.step, QImage::Format_Indexed8); // convert cv::Mat to QImage
        ui->output->setPixmap(QPixmap::fromImage(qimgprocess)); // render the QImage to screen
    }
    else
        return; // no input, return to calling function
}
How does the processing take place?
Once you start taking in live input of your ball, the captured frame should be able to show where the ball is. To do so, the captured frame is divided into buckets, which are further divided into grids. Within each grid an edge is detected (if one exists), and thus a circle is detected. However, only those circles passing through grids that lie within the colour range mentioned above (in cv::Scalar) are considered. For every circle that passes through such a grid, a number corresponding to that grid is incremented. This is known as voting.
Each grid then stores its votes in an accumulator grid. Here, 2 is the accumulator ratio, which means the accumulator matrix has half the resolution of the input image cproc. After voting, we can find local maxima in the accumulator matrix; their positions correspond to the circle centers in the original space.
cproc.rows/4 is the minimum distance between the centers of detected circles.
100 and 50 are respectively the higher and lower thresholds passed to the Canny edge detector, which detects edges only between the given thresholds.
10 and 100 are the minimum and maximum radius to be detected; anything outside these values will not be detected.
Now, the for loop processes each circle detected in the frame and stored in veccircles. It draws the detected center point and a circle of the detected radius.
For the above, you may visit this link
I want to draw a ring (a circle with a thick border) with the ShapeRenderer.
I tried two different solutions:
First solution: draw n circles, each 1 pixel wide and 1 pixel bigger than the previous one. Problem: it produces a graphic glitch (also with different multisample anti-aliasing values).
Second solution: draw one big filled circle and then a smaller one in the background colour. Problem: I can't realize overlapping ring shapes. Everything else works fine.
I can't use a ring texture, because I have to increase/decrease the ring radius dynamically. The border width should always stay the same.
How can I draw smooth rings with the ShapeRenderer?
EDIT:
Increasing the line width doesn't help:
MeshBuilder has the option to create a ring using its ellipse method, which lets you specify the inner and outer size of the ring. Normally this results in a Mesh, which you would need to render yourself. But because of a recent change it is also possible to use it in conjunction with PolygonSpriteBatch (an implementation of Batch that allows more flexible shapes, whereas SpriteBatch only allows quads). You can use PolygonSpriteBatch wherever you would normally use a SpriteBatch (e.g. for your Stage or Sprite class).
Here is an example how to use it: https://gist.github.com/xoppa/2978633678fa1c19cc47, but keep in mind that you do need the latest nightly (or at least release 1.6.4) for this.
Maybe you can try making a ring some other way, such as with triangles. I'm not familiar with LibGDX, so here's some pseudocode.
// number of sectors in the ring; you may need
// to adapt this value based on the desired size
// of the ring
int sectors = 32;
float inner = 0.8;  // distance to inner edge
float outer = 1.2;  // distance to outer edge
glBegin(GL_TRIANGLES)
glNormal3f(0,0,1)
for(int i = 0; i < sectors; i++){
    // define each section of the ring; cast to float so the
    // divisions below aren't integer divisions
    float angle = ((float)i / sectors) * Math.PI * 2
    float nextangle = ((float)(i + 1) / sectors) * Math.PI * 2
    float s = Math.sin(angle)
    float c = Math.cos(angle)
    float sn = Math.sin(nextangle)
    float cn = Math.cos(nextangle)
    // two triangles per sector form one quad of the ring
    glVertex3f(inner*c, inner*s, 0)
    glVertex3f(outer*cn, outer*sn, 0)
    glVertex3f(outer*c, outer*s, 0)
    glVertex3f(inner*c, inner*s, 0)
    glVertex3f(inner*cn, inner*sn, 0)
    glVertex3f(outer*cn, outer*sn, 0)
}
glEnd()
Alternatively, divide the ring into four polygons, each of which consists of one quarter of the whole ring. Then use ShapeRenderer to fill each of these polygons.
Here's an illustration of how you would divide the ring:
If I understand your question, maybe using glLineWidth() will help you.
Example pseudocode:
size = 5;
Gdx.gl.glLineWidth(size);
mShapeRenderer.begin(....);
// ...
mShapeRenderer.end();
I have a Qt app where I have to find the HSV range of a couple of pixels around click coordinates, to track later on. This is how I do it:
cv::Mat temp;
cv::cvtColor(frame, temp, CV_BGR2HSV); //frame is pulled from a video or jpeg
cv::Vec3b hsv=temp.at<cv::Vec3b>(frameX,frameY); //sometimes SIGSEGV?
qDebug() << hsv.val[0]; //look up H
qDebug() << hsv.val[1]; //look up S
qDebug() << hsv.val[2]; //look up V
//just base values so far, will work on range later
emit hsvDownloaded(hsv.val[0], hsv.val[0]+5, hsv.val[1], 255, hsv.val[2], 255); //send to GUI which automatically updates the worker thread
Now, things are odd. These are the results (the red circle indicates the click location):
With red it's weird: the upper half of the shape is detected correctly, the lower half is not, despite it being a solid mass of the same colour.
And for an actual test
It detects HSV {95,196,248}, which is frankly absurd (the base values are way too high). The detected pixel isn't even the one that was clicked. The best values to detect that ball 100% of the time are H:35-141, S:0-238, V:65-255. I wanted to derive an HSV range from a normalized histogram, but I can't even get the base values right. What's up? When OpenCV pulls a frame using kalibrowanyPlik.read(frame);, the default colour scheme is BGR, right?
Why would the colour detection work so randomly?
As berak has mentioned, it looks like your code uses the indices to access the pixel in the wrong order.
That means your pixel locations are wrong, except for pixels that lie on the diagonal, so clicked objects near the diagonal will be detected correctly while all the others won't.
So that you don't get confused again and again, here is why OpenCV uses (row,col) ordering for indices:
OpenCV uses matrices to represent images. In mathematics, 2D matrices use (row,col) indexing; have a look at http://en.wikipedia.org/wiki/Index_notation#Two-dimensional_arrays and look at the indices. So for matrices, it is typical to use the row index first, followed by the column index.
Unfortunately, images and pixels typically use (x,y) indexing, which corresponds to the x/y axes in mathematical graphs and coordinate systems. So here the x position comes first, followed by the y position.
Luckily, OpenCV provides two different versions of .at method, one to access pixel-positions and one to access matrix elements (which are exactly the same elements in the end).
matrix.at<type>(row,column) // matrix indexing to access elements
// which equals
matrix.at<type>(y,x)
and
matrix.at<type>(cv::Point(x,y)) // pixel/position indexing to access elements
Since the first version should be slightly more efficient, it should be preferred if the positions aren't already given as cv::Point objects. So the best approach is often to remember that OpenCV uses matrices to represent images, and that it uses matrix index notation to access elements.
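Applied to the code in the question, the fix is just to swap the two indices (or use the cv::Point overload); frameX and frameY are the click coordinates from the question:

// row = y, column = x: swap the indices from the question's version
cv::Vec3b hsv = temp.at<cv::Vec3b>(frameY, frameX);
// or, equivalently, use the position-based overload:
cv::Vec3b hsv2 = temp.at<cv::Vec3b>(cv::Point(frameX, frameY));

This is most likely also the cause of the occasional SIGSEGV: with swapped indices, a click far from the diagonal on a non-square image reads outside the matrix.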
btw, I've seen people wonder why matrix.at<type>(cv::Point(y,x)) doesn't work as intended after they've learned that OpenCV images use the "wrong ordering". I hope this question doesn't come up after my explanation.
One more btw: in school I already wondered why matrices index rows first while graphs of functions index the x axis first. I found it stupid not to use the "same" ordering for both, but I still had to live with it :D (and in the end, the two don't have much to do with each other)
I'm trying to implement a 'raypicker' for selecting objects within my project. I do not fully understand how to implement this, but I understand conceptually how it should work. I've been trying to learn how to do this, but most tutorials I find go way over my head. My current code is based on one of the recent tutorials I found, here.
After several hours of revisions, I believe the problem I'm having with my raypicker is actually the creation of the ray in the first place. If I substitute/hardcode my near/far planes with a coordinate that would indisputably be located within the region of a triangle, the picker identifies it correctly.
My problem is this: my ray creation doesn't seem to fully take my current "camera" or perspective into account, so camera rotation won't affect where my mouse is.
I believe that to remedy this I need something like gluUnProject(), but whenever I used it, the x, y, z coordinates returned were incredibly small.
My current ray creation is a mess. I tried to use methods that others proposed initially, but it seemed like whatever method I tried it never worked with my picker/intersection function.
Here's the code for my ray creation:
void oglWidget::mousePressEvent(QMouseEvent *event)
{
    QVector3D nearP = QVector3D(event->x()+camX, -event->y()-camY, -1.0);
    QVector3D farP = QVector3D(event->x()+camX, -event->y()-camY, 1.0);
    int i = -1;
    for (int x = 0; x < tileCount; x++)
    {
        bool rayInter = intersect(nearP, farP, tiles[x]->vertices);
        if (rayInter == true)
            i = x;
    }
    if (i != -1)
    {
        tiles[i]->showSelection();
    }
    else
    {
        for (int x = 0; x < tileCount; x++)
            tiles[x]->hideSelection();
    }
    //tiles[0]->showSelection();
}
To repeat: I used to load the viewport, model and projection matrices and unproject the mouse coordinates, but within a 1920x1080 window all I got were x, y and z values in the range of -2 to 2 for each mouse event, which is why I'm trying this method. But this method doesn't work with camera rotation and zoom.
I don't want to do pixel colour picking, because who knows, I may need this technique later on, and I'd rather not give up after the amount of effort I've put in so far.
As you seem to have problems constructing your rays, here's how I would do it (this has not been tested directly). Make sure that all vectors are in the same space. If you use multiple model matrices (or stacks thereof), the calculation needs to be repeated separately for each of them.
use pos = gluUnproject(winx, winy, near, ...) to get the position of the mouse coordinate on the near plane in model space; near being the value given to glFrustum() or gluPerspective()
origin of the ray is the camera position in model space: rayorig = inv(modelmat) * camera_in_worldspace
the direction of the ray is the normalized vector from the ray origin to the position from step 1: raydir = normalize(pos - rayorig)
On the website linked they use two points for the ray and they don't seem to normalize the ray direction vector, so normalizing is optional. A sketch of these steps follows below.
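For illustration, here's an untested sketch along those lines using the classic gluUnProject() approach; it unprojects both the near- and far-plane points, so the camera position isn't needed explicitly. It assumes the matrices live in the fixed-function GL state (glGetDoublev); if you build your matrices by hand, pass those arrays in directly instead:

#include <GL/glu.h>
#include <cmath>

void buildPickRay(int mouseX, int mouseY, double rayOrig[3], double rayDir[3])
{
    GLdouble model[16], proj[16];
    GLint view[4];
    glGetDoublev(GL_MODELVIEW_MATRIX, model);
    glGetDoublev(GL_PROJECTION_MATRIX, proj);
    glGetIntegerv(GL_VIEWPORT, view);

    // window y runs top-down, OpenGL bottom-up, so flip it
    GLdouble winX = mouseX;
    GLdouble winY = view[3] - mouseY;

    // unproject a point on the near plane (winZ = 0) and far plane (winZ = 1)
    GLdouble nx, ny, nz, fx, fy, fz;
    gluUnProject(winX, winY, 0.0, model, proj, view, &nx, &ny, &nz);
    gluUnProject(winX, winY, 1.0, model, proj, view, &fx, &fy, &fz);

    // ray origin = near-plane point; direction = normalized near->far
    double dx = fx - nx, dy = fy - ny, dz = fz - nz;
    const double len = std::sqrt(dx*dx + dy*dy + dz*dz);
    rayOrig[0] = nx; rayOrig[1] = ny; rayOrig[2] = nz;
    rayDir[0] = dx/len; rayDir[1] = dy/len; rayDir[2] = dz/len;
}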
Ok, so this is the beginning of my trail of breadcrumbs.
I was somehow having issues with the Qt data types for the matrices and with the logic of the matrix transformations.
The particular problem in this question resulted from not actually performing any transformations whatsoever.
Steps to solving this problem were:
Converting the mouse coordinates into NDC space (the range -1 to 1): ndcX = x / width * 2 - 1, ndcY = (height - y) / height * 2 - 1
Grabbing the 4x4 matrix for my view matrix (this can be the one used when rendering, or recalculated)
In a new matrix, storing the inverse view matrix multiplied by the inverse projection matrix
In order to build the ray, I had to do the following:
Take the previously calculated matrix product and multiply it by a 4-component vector holding the previously calculated x and y coordinates, then -1 for z, and 1 for w.
Then divide this vector by its last component (w).
Create another 4-component vector just like the last, but with 1 instead of -1 for z.
Once again, divide it by its last component.
Now the coordinates for the ray have been created at the far and near planes, so it can intersect with anything along it in the scene. (A sketch of these steps with Qt types follows after this list.)
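Here's a minimal untested sketch of those steps using Qt types; projection and view are assumed to be the same QMatrix4x4 objects used for rendering, and all names are illustrative rather than taken from the original code:

#include <QMatrix4x4>
#include <QVector4D>
#include <QVector3D>

void makeRay(const QMatrix4x4 &projection, const QMatrix4x4 &view,
             float mouseX, float mouseY, float screenW, float screenH,
             QVector3D &nearP, QVector3D &farP)
{
    // 1. mouse position to NDC; window y is flipped
    const float ndcX = mouseX / screenW * 2.0f - 1.0f;
    const float ndcY = (screenH - mouseY) / screenH * 2.0f - 1.0f;

    // 2. inverse of the combined projection*view matrix
    const QMatrix4x4 invPV = (projection * view).inverted();

    // 3. unproject the near-plane (z = -1) and far-plane (z = +1) points,
    //    dividing by w after each multiplication (perspective divide)
    QVector4D near4 = invPV * QVector4D(ndcX, ndcY, -1.0f, 1.0f);
    near4 /= near4.w();
    QVector4D far4 = invPV * QVector4D(ndcX, ndcY, 1.0f, 1.0f);
    far4 /= far4.w();

    nearP = near4.toVector3D();
    farP = far4.toVector3D();
}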
I opened a series of questions (because of great uncertainty about my series of problems), so parts of my problem overlap with them too.
In here, I learned that I needed to take the screen height into account to flip the origin of the y axis for a Cartesian system, since window coordinates start at the top left. Additionally, retrieving the matrices was redundant, but also wrong, since they were never declared properly.
In here, I learned that unProject wasn't working because I was trying to pull the model and view matrices using OpenGL functions, but I never actually set them in the first place, because I had built the matrices by hand. I solved that problem in two ways: I did the math manually, and I made all the matrices the same data type (they were mixed data types earlier, which led to issues as well).
And lastly, in here, I learned that my order of operations was slightly off (the matrix needs to be multiplied by the vector, not the reverse), that my near-plane z needed to be -1, not 0, and that the last component of the vector to be multiplied with the matrices (the "w" value) needed to be 1.
Credit goes to the individuals who helped me solve these problems:
srobins of facepunch, in this thread
derhass from here, in this question, and this discussion
Take a look at
http://www.realtimerendering.com/intersections.html
It's a lot of help in determining intersections between various kinds of geometry.
http://geomalgorithms.com/code.html also has some C++ functions, one of which serves your purpose.
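If you'd rather keep the triangle test self-contained, a common choice is the Möller-Trumbore algorithm. Here's an untested sketch using Qt's QVector3D to match the question's code; the two-point nearP/farP ray convention from the question maps onto it as shown at the end:

#include <QVector3D>
#include <cmath>

// Möller-Trumbore ray/triangle intersection: returns true if the ray
// from 'orig' along (normalized) 'dir' hits triangle (v0, v1, v2),
// writing the hit distance along 'dir' to tOut.
bool rayTriangle(const QVector3D &orig, const QVector3D &dir,
                 const QVector3D &v0, const QVector3D &v1,
                 const QVector3D &v2, float &tOut)
{
    const float EPS = 1e-6f;
    const QVector3D e1 = v1 - v0;
    const QVector3D e2 = v2 - v0;
    const QVector3D p = QVector3D::crossProduct(dir, e2);
    const float det = QVector3D::dotProduct(e1, p);
    if (std::fabs(det) < EPS)              // ray parallel to triangle plane
        return false;
    const float inv = 1.0f / det;
    const QVector3D t = orig - v0;
    const float u = QVector3D::dotProduct(t, p) * inv;
    if (u < 0.0f || u > 1.0f)              // outside first barycentric bound
        return false;
    const QVector3D q = QVector3D::crossProduct(t, e1);
    const float v = QVector3D::dotProduct(dir, q) * inv;
    if (v < 0.0f || u + v > 1.0f)          // outside second barycentric bound
        return false;
    tOut = QVector3D::dotProduct(e2, q) * inv;
    return tOut >= 0.0f;                   // hit must lie in front of the origin
}

// usage with the two-point ray from the question:
//   float t;
//   bool hit = rayTriangle(nearP, (farP - nearP).normalized(), a, b, c, t);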
I have a lot of lines and planes which lie around, for example, the point (0.5, 0.5, 0.5). I also have an area where they matter: a cube. The lines and planes may intersect this area or lie outside of it. Can I hide all elements, and parts of elements, that are not included in my area? Does VTK offer a simple way to do this, or do I need to do it myself? I want to write, for example, SetBounds(bounds), and after that everything not included in the cube disappears.
Try using vtkClipDataSet with the clip function set to a vtkBox. Finally, render the output of the vtkClipDataSet filter.
vtkNew<vtkBox> box;
box->SetBounds(.....); // set the bounds of interest.
vtkNew<vtkClipDataSet> clipper;
clipper->SetInputConnection(....); // set to your data producer
clipper->SetClipFunction(box.GetPointer());
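// note (assumption, verify for your VTK version): vtkBox evaluates negative
// inside the bounds and the clipper keeps the positive side by default, so
// you may need the following to keep what's inside the box:
clipper->InsideOutOn();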
// since clipper will produce an unstructured grid, apply the following to
// extract a polydata from it.
vtkNew<vtkGeometryFilter> geomFilter;
geomFilter->SetInputConnection(clipper->GetOutputPort());
// now, this can be connected to the mapper.
vtkNew<vtkPolyDataMapper> mapper;
mapper->SetInputConnection(geomFilter->GetOutputPort());