I'm using the article nonlingr as a source to understand non-linear transformations. In the section GLYPHS ALONG A PATH the author explains how to use a parametric curve to transform an image. I'm trying to apply a cubic Bezier to an image, but I have been unsuccessful. This is my code:
OUT.aloc(IN.width(), IN.height());
//get the control points...
wVector p0(values[vindex], values[vindex+1], 1);
wVector p1(values[vindex+2], values[vindex+3], 1);
wVector p2(values[vindex+4], values[vindex+5], 1);
wVector p3(values[vindex+6], values[vindex+7], 1);
//this is to calculate t based on x
double trange = 1.0 / (OUT.width() - 1); //1.0, not 1: integer division here would make trange 0
//curve coefficients
double A = (-p0[0] + 3*p1[0] - 3*p2[0] + p3[0]);
double B = (3*p0[0] - 6*p1[0] + 3*p2[0]);
double C = (-3*p0[0] + 3*p1[0]);
double D = p0[0];
double E = (-p0[1] + 3*p1[1] - 3*p2[1] + p3[1]);
double F = (3*p0[1] - 6*p1[1] + 3*p2[1]);
double G = (-3*p0[1] + 3*p1[1]);
double H = p0[1];
//apply the transformation
for(long i = 0; i < OUT.height(); i++){
    for(long j = 0; j < OUT.width(); j++){
        //t = x / width
        double t = trange * j;
        //derivative of the path (the article's formulas), used for the normal direction
        double x_path_d = 3*t*t*A + 2*t*B + C;
        double y_path_d = 3*t*t*E + 2*t*F + G;
        //atan2 instead of atan(y/x): it copes with a vertical tangent (x_path_d == 0)
        double angle = 3.14159265/2.0 + std::atan2(y_path_d, x_path_d);
        //point on the curve, offset i pixels along the normal
        mapped_point.Set((t*t*t)*A + (t*t)*B + t*C + D + i*std::cos(angle),
                         (t*t*t)*E + (t*t)*F + t*G + H + i*std::sin(angle),
                         1);
        //test if the point is inside the image (OUT bounds on both axes)
        if(mapped_point[0] < 0 ||
           mapped_point[0] >= OUT.width() ||
           mapped_point[1] < 0 ||
           mapped_point[1] >= OUT.height())
            continue;
        OUT.setPixel(
            long(mapped_point[0]),
            long(mapped_point[1]),
            IN.getPixel(j, i));
    }
}
Applying this code to a 300x196 RGB image, all I get is a black image no matter what control points I use. It's hard to find information about this kind of transformation: searching for parametric curves, all I find is how to draw them, not how to apply them to an image. Can someone help me transform an image with a Bezier curve?
IMHO, applying a curve to an image sounds like using a LUT. You look up the value of the curve for each possible image value and then replace the image value with the one from the curve. So, create a look-up table covering every possible value in the image (e.g. 0, 1, ..., 255 for an 8-bit grayscale image): that is a 2x256 matrix, the first column holding the values 0 to 255 and the second the corresponding curve values.
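A minimal sketch of that idea, assuming an 8-bit grayscale buffer and a hypothetical curve(t) evaluator returning the curve value in [0, 255] for t in [0, 1] (both names are placeholders, not from any particular library):

#include <array>
#include <cmath>

// Hypothetical curve evaluator: maps t in [0, 1] to a value in [0, 255].
double curve(double t);

// Build a 256-entry look-up table: one output value per input gray level.
std::array<unsigned char, 256> buildLUT()
{
    std::array<unsigned char, 256> lut;
    for (int v = 0; v < 256; ++v)
        lut[v] = static_cast<unsigned char>(std::round(curve(v / 255.0)));
    return lut;
}

// Apply the LUT: replace every pixel value with its table entry.
// 'pixels' and 'count' stand in for whatever pixel access the image class offers.
void applyLUT(unsigned char* pixels, long count, const std::array<unsigned char, 256>& lut)
{
    for (long i = 0; i < count; ++i)
        pixels[i] = lut[pixels[i]];
}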
Yes, I know this is a popular problem. But I could find nowhere the full, clear implementation code that doesn't use OpenGL classes or a lot of header files.
OK, the mathematical solution is to transform the ellipsoid into a sphere, then find the intersection points (if they exist, of course) and apply the inverse transformation, because affine transformations preserve intersections.
But I have difficulties implementing this.
I tried something for the sphere, but it is completely incorrect.
// Returns the discriminant of the quadratic a t^2 + b t + c = 0
// for the line through X and Y against sphere S.
double CountDelta(Point X, Point Y, Sphere S)
{
    // a = |Y - X|^2
    double a = 0.0;
    for(int i = 0; i < 3; i++)
        a += (Y._coordinates[i] - X._coordinates[i]) * (Y._coordinates[i] - X._coordinates[i]);
    // b = 2 (Y - X) . (X - S)
    double b = 0.0;
    for(int i = 0; i < 3; i++)
        b += (Y._coordinates[i] - X._coordinates[i]) * (X._coordinates[i] - S._coordinates[i]);
    b *= 2;
    // c = |X - S|^2 - r^2
    double c = -S.r * S.r;
    for(int i = 0; i < 3; i++)
        c += (X._coordinates[i] - S._coordinates[i]) * (X._coordinates[i] - S._coordinates[i]);
    return b * b - 4 * a * c;
}
Say I have a start point P = (Px, Py, Pz), a direction V = (Vx, Vy, Vz), and an ellipsoid with center (Ex, Ey, Ez) and semi-axes (a, b, c). How do I construct clean code for this?
Let a line from P to P + D intersect a sphere of center C and radius R.
WLOG, C can be the origin and R unity (otherwise translate by -C and scale by 1/R). Now, using the parametric equation of the line and the implicit equation of the sphere,
(Px + t Dx)² + (Py + t Dy)² + (Pz + t Dz)² = 1
or
(Dx² + Dy² + Dz²) t² + 2 (Dx Px + Dy Py + Dz Pz) t + Px² + Py² + Pz² - 1 = 0
(Vectorially, D² t² + 2 D P t + P² - 1 = 0 and t = (- D P ±√((D P)² - D²(P² - 1))) / D².)
Solve this quadratic equation for t and get the two intersections as P + t D. (Don't forget to invert the initial transformations.)
For the ellipsoid, you can either plug the parametric equation of the line directly into the implicit equation of the conic, or reduce the conic (and the points simultaneously) and plug in the reduced equation.
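Not tested against the poster's types, but a self-contained sketch of the sphere case above, with plain structs standing in for whatever vector class you use:

#include <cmath>
#include <optional>
#include <utility>

struct Vec3 { double x, y, z; };

static double dot(const Vec3 &a, const Vec3 &b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Intersect the line P + t D with the sphere of center C and radius R.
// Returns the two parameter values t (equal at tangency), or nothing
// if the line misses the sphere.
std::optional<std::pair<double, double>> lineSphere(Vec3 P, Vec3 D, Vec3 C, double R)
{
    // Translate by -C and scale by 1/R: the sphere becomes the unit sphere
    // and the line becomes p + t d with the same parameter t.
    Vec3 p{ (P.x - C.x) / R, (P.y - C.y) / R, (P.z - C.z) / R };
    Vec3 d{ D.x / R, D.y / R, D.z / R };

    // D^2 t^2 + 2 (D P) t + P^2 - 1 = 0, solved via the reduced discriminant.
    double a = dot(d, d);
    double b = dot(d, p);
    double c = dot(p, p) - 1.0;
    double disc = b * b - a * c;
    if (disc < 0.0) return std::nullopt;   // no intersection

    double s = std::sqrt(disc);
    return std::make_pair((-b - s) / a, (-b + s) / a);
}

The intersection points in the original frame are then P + t D for each returned t.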
I'm writing a program that draws a line between two points using filled circles. The circles:
- shouldn't overlap each other
- should be as close together as possible
- and the centre of each circle should be on the line.
I've written a function to produce the circles, but I'm having trouble calculating the position of each circle so that they line up correctly:
void addCircles(scrPt endPt1, scrPt endPt2)
{
    float xLength, yLength, length, cSquare, slope;
    int numberOfCircles;
    // Get the x distance between the two points
    xLength = abs(endPt1.x - endPt2.x);
    // Get the y distance between the two points
    yLength = abs(endPt1.y - endPt2.y);
    // Get the length between the points
    cSquare = pow(xLength, 2) + pow(yLength, 2);
    length = sqrt(cSquare);
    // calculate the slope
    slope = (endPt2.y - endPt1.y) / (endPt2.x - endPt1.x);
    // Find how many circles fit inside the length
    numberOfCircles = round(length / (radius * 2) - 1);
    // set the position of each circle
    for (int i = 0; i < numberOfCircles; i++)
    {
        scrPt circPt;
        circPt.x = endPt1.x + ((radius * 2) * i);
        circPt.y = endPt1.y + (((radius * 2) * i) * slope);
        changeColor();
        drawCircle(circPt.x, circPt.y);
    }
}
This is what the above code produces:
I'm quite certain that the issue lies with this line, which sets the y value of the circle:
circPt.y = endPt1.y + (((radius * 2) * i) * slope);
Any help would be greatly appreciated
I recommend calculating the direction of the line as a unit vector:
float xDist = endPt2.x - endPt1.x;
float yDist = endPt2.y - endPt1.y;
float length = sqrt(xDist*xDist + yDist *yDist);
float xDir = xDist / length;
float yDir = yDist / length;
Calculate the distance from one center point to the next; numberOfSegments is the number of sections, not the number of circles:
int numberOfSegments = (int)trunc( length / (radius * 2) );
float distCpt = numberOfSegments == 0 ? 0.0f : length / (float)numberOfSegments;
A center point of a circle is calculated by adding a vector to the start point of the line. The vector points in the direction of the line, and its length is the distance between two circles multiplied by the index of the circle:
for (int i = 0; i <= numberOfSegments; i++)
{
    float cpt_x = endPt1.x + xDir * distCpt * (float)i;
    float cpt_y = endPt1.y + yDir * distCpt * (float)i;
    changeColor();
    drawCircle(cpt_x, cpt_y);
}
Note, the last circle on a line may be redrawn by the first circle of the next line. You can change this by changing the iteration condition of the for loop from <= to <:
for (int i = 0; i < numberOfSegments; i++)
In this case, no circle at all will be drawn at the end of the line.
What I am trying to do:
Make an empty 3D image (.dcm in this case) with image direction as
[1,0,0;
0,1,0;
0,0,1].
In this image, I insert an oblique trajectory, which essentially represents a cuboid. Now I wish to insert a hollow hemisphere into this cuboid (the cuboid is all white pixels, a constant value; the hemisphere can be anything, as long as it is distinguishable), so that it is aligned along the axis of the trajectory.
What I am getting
So I used the general formula for a sphere:
x = x0 + r*cos(theta)*sin(alpha)
y = y0 + r*sin(theta)*sin(alpha)
z = z0 + r*cos(alpha)
where, 0 <= theta <= 2 * pi, 0 <= alpha <= pi / 2, for hemisphere.
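For concreteness, a minimal sketch of sampling these formulas on a theta/alpha grid; the grid resolution is an arbitrary choice, not a value from my code:

#include <cmath>
#include <vector>

struct Point3 { double x, y, z; };

// Sample the hemisphere formulas above: theta in [0, 2*pi), alpha in [0, pi/2].
std::vector<Point3> hemispherePoints(Point3 c, double r, int nTheta, int nAlpha)
{
    const double pi = std::acos(-1.0);
    std::vector<Point3> pts;
    for (int i = 0; i < nTheta; ++i) {
        double theta = 2.0 * pi * i / nTheta;
        for (int j = 0; j <= nAlpha; ++j) {
            double alpha = (pi / 2.0) * j / nAlpha;
            pts.push_back({ c.x + r * std::cos(theta) * std::sin(alpha),
                            c.y + r * std::sin(theta) * std::sin(alpha),
                            c.z + r * std::cos(alpha) });
        }
    }
    return pts;
}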
What I tried to achieve this
So first I thought I would just get the rotation matrix between the image coordinate system and the trajectory coordinate system and multiply all points on the sphere by it. This didn't give me the desired results: the rotated sphere was scaled and translated. I don't understand why this happened, as I checked the points myself.
Then I thought: why not make the hemisphere out of a sphere cut by a plane parallel to the y,z plane of the trajectory coordinate system? For this, I calculated the angles between the x, y, and z axes of the image and those of the trajectory, and started generating hemisphere coordinates for theta_rotated and alpha_rotated. This didn't work either: instead of a hemisphere, I got a rather weird sphere.
This is without any transformations
This is with the angle transformation (second try)
For reference,
The trajectory coordinate system :
[-0.4744, -0.0358506, -0.8553;
-0.7049, 0.613244, 0.3892;
-0.5273, -0.787537, 0.342;];
which gives angles:
x_axis angle 2.06508 pi
y_axis angle 2.2319 pi
z_axis angle 1.22175 pi
Code to generate the cuboid
// z_vector and y_vector are assumed to be defined elsewhere (members/globals);
// the function fills trajectoryPoints rather than returning a value.
void getTrajectoryPoints(std::vector<Vector3d> &trajectoryPoints, Vector3d &target, Vector3d &tangent){
    double distanceFromTarget = 10;
    int targetShift = 4;
    // shift the target back along the tangent and down along z
    target -= z_vector;
    target -= (tangent * targetShift);
    // build the cuboid's local frame from the tangent
    Vector3d vector_x = -tangent;
    y_vector = z_vector.cross(vector_x);
    target -= y_vector;
    Vector3d start = target - vector_x * distanceFromTarget;
    std::cout << "target = " << target << "start = " << start << std::endl;
    std::cout << "x " << vector_x << " y " << y_vector << " z " << z_vector << std::endl;
    // sweep a height x width grid of lines along the tangent to fill the cuboid
    double height = 0.4;
    while (height <= 1.6)
    {
        double width = 0.4;
        while (width <= 1.6){
            distanceFromTarget = 10;
            while (distanceFromTarget >= 0){
                Vector3d point = target + tangent * distanceFromTarget;
                trajectoryPoints.push_back(point + (z_vector * height) + (y_vector * width));
                distanceFromTarget -= 0.09;
            }
            width += 0.09;
        }
        height += 0.09;
    }
}
The height and width are incremented with respect to the voxel spacing.
Do you guys know how to achieve this, and what am I doing wrong? Kindly let me know if you need any other info.
EDIT 1
After the answer from #Dzenan, I tried the following:
target = { -14.0783, -109.8260, -136.2490 }, tangent = { 0.4744, 0.7049, 0.5273 };
typedef itk::Euler3DTransform<double> TransformType;
TransformType::Pointer transform = TransformType::New();
double centerForTransformation[3];
const double pi = std::acos(-1);
try{
    transform->SetRotation(2.0658*pi, 1.22175*pi, 2.2319*pi);
    // transform->SetMatrix(transformMatrix);
}
catch (itk::ExceptionObject &excp){
    std::cout << "Exception caught ! " << excp << std::endl;
    transform->SetIdentity();
}
transform->SetCenter(centerForTransformation);
Then I loop over all the points in the hemisphere and transform them using,
point = transform->TransformPoint(point);
I'd prefer to pass the matrix equal to the trajectory coordinate system (mentioned above), but that matrix isn't orthogonal and ITK won't accept it. It must be said that I used the same matrix for resampling this image and extracting the cuboid, and that was fine. So instead I found the angles between x_image and x_trajectory, y_image and y_trajectory, and z_image and z_trajectory, and used SetRotation, which gives me the following (still incorrect) result:
EDIT 2
I tried to get the sphere coordinates without actually using the polar coordinates. Following discussion with #jodag, this is what I came up with:
Vector3d center = { -14.0783, -109.8260, -136.2490 };
double height = 0.4;
while (height <= 1.6)
{
    double width = 0.4;
    while (width <= 1.6){
        distanceFromTarget = 5;
        while (distanceFromTarget >= 0){
            // Make sure the point lies along the cuboid direction vectors
            Vector3d point = center + tangent * distanceFromTarget + (z_vector * height) + (y_vector * width);
            double x = std::sqrt((point[0] - center[0]) * (point[0] - center[0]) + (point[1] - center[1]) * (point[1] - center[1]) + (point[2] - center[2]) * (point[2] - center[2]));
            if ((x <= 0.5) && (point[2] >= -136.2490))
                orientation.push_back(point);
            distanceFromTarget -= 0.09;
        }
        width += 0.09;
    }
    height += 0.09;
}
But this doesn't seem to work either.
This is the output
I'm a little confused about your first plot because it appears that the points being displayed are not defined in the image coordinates. The example I'm posting below assumes that voxels must be part of the image coordinate system.
The code below transforms the voxel coordinates in image space into trajectory space using an inverse transformation. It then rasterizes a 2x2x2 cube centered at the origin and a radius-0.9 hemisphere sliced along the xy plane.
Rather than continuing a long discussion in the comments I've decided to post this. Please comment if you're looking for something different.
% define trajectory coordinate matrix
R = [-0.4744, -0.0358506, -0.8553;
-0.7049, 0.613244, 0.3892;
-0.5273, -0.787537, 0.342]
% initialize 50x50x50 3d image
[x,y,z] = meshgrid(linspace(-2,2,50));
sz = size(x);
x = reshape(x,1,[]);
y = reshape(y,1,[]);
z = reshape(z,1,[]);
r = ones(size(x));
g = ones(size(x));
b = ones(size(x));
% RGB triplets ([0,0,1] is blue, [0,1,0] is green)
blue = [0,0,1];
green = [0,1,0];
% transform image coordinates to trajectory coordinates
vtraj = R\[x;y;z];
xtraj = vtraj(1,:);
ytraj = vtraj(2,:);
ztraj = vtraj(3,:);
% rasterize 2x2x2 cube in trajectory coordinates
idx = (xtraj <= 1 & xtraj >= -1 & ytraj <= 1 & ytraj >= -1 & ztraj <= 1 & ztraj >= -1);
r(idx) = blue(1);
g(idx) = blue(2);
b(idx) = blue(3);
% rasterize radius 0.9 hemisphere in trajectory coordinates
idx = (sqrt(xtraj.^2 + ytraj.^2 + ztraj.^2) <= 0.9) & (ztraj >= 0);
r(idx) = green(1);
g(idx) = green(2);
b(idx) = green(3);
% plot just the blue and green voxels
green_idx = (r == green(1) & g == green(2) & b == green(3));
blue_idx = (r == blue(1) & g == blue(2) & b == blue(3));
figure(1); clf(1);
plot3(x(green_idx),y(green_idx),z(green_idx),' *g')
hold('on');
plot3(x(blue_idx),y(blue_idx),z(blue_idx),' *b')
axis([-2,2,-2,2,-2,2]); % match the meshgrid range
axis('equal');
axis('vis3d');
You can generate your hemisphere in some physical space, then transform it (translate and rotate) using e.g. a rigid transform's TransformPoint method. Then use the TransformPhysicalPointToIndex method of itk::Image, and finally the SetPixel method to set the intensity. With this approach you will have to control the resolution of your hemisphere so that it fully covers all the relevant voxels in the image.
An alternative approach is to construct a new image in which you create your hemisphere, then use a resample filter to create a transformed version of the hemisphere in an arbitrary image.
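A rough sketch of the first approach, using only the ITK methods named above; the pixel type, the transform setup, and the point container are assumptions, not the asker's actual code:

#include "itkImage.h"
#include "itkEuler3DTransform.h"
#include <vector>

using ImageType = itk::Image<unsigned char, 3>;

// Stamp transformed hemisphere samples into an existing image.
// 'points' holds hemisphere samples in the hemisphere's own physical space.
void stampHemisphere(ImageType::Pointer image,
                     const std::vector<ImageType::PointType> &points,
                     itk::Euler3DTransform<double>::Pointer transform,
                     unsigned char value)
{
    for (auto point : points)
    {
        // Rotate/translate the sample into the image's physical space.
        point = transform->TransformPoint(point);

        // Map the physical point to a voxel index; skip points outside the image.
        ImageType::IndexType index;
        if (image->TransformPhysicalPointToIndex(point, index))
            image->SetPixel(index, value);
    }
}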
I'm trying to play around with some OpenCV and thought up an interesting little scenario to work on.
Basically, I want to take a pixel, add the colour values of the 3 neighbouring pixels (so (x, y), (x+1, y), (x, y+1) and (x+1, y+1)) and divide the result by 4 to get an average colour value. The next set of pixels I process is then (x+2, y+2) with its 3 neighbours.
I then also want to be able to do a similar thing, but with 9 pixels (with the chosen co-ordinate to work from being the centre).
Initially I started with a Gaussian-blur type of masking, but that's not the result I want to achieve: from those calculations I just want one pixel value, so the output image will be 1/4 or 1/9 of the size. For now I've got it working by literally writing out the calculation in a for loop:
for (int i = 1; i < myImage.rows - 1; i++)
{
    b = 0;
    for (int k = 1; k < myImage.cols - 1; k++)
    {
        // 9 pixel neighbourhood, averaged per channel
        Result.at<Vec3b>(a, b)[1] = (myImage.at<Vec3b>(i-1, k-1)[1] + myImage.at<Vec3b>(i-1, k)[1] + myImage.at<Vec3b>(i-1, k+1)[1]
                                   + myImage.at<Vec3b>(i,   k-1)[1] + myImage.at<Vec3b>(i,   k)[1] + myImage.at<Vec3b>(i,   k+1)[1]
                                   + myImage.at<Vec3b>(i+1, k-1)[1] + myImage.at<Vec3b>(i+1, k)[1] + myImage.at<Vec3b>(i+1, k+1)[1]) / 9;
        Result.at<Vec3b>(a, b)[2] = (myImage.at<Vec3b>(i-1, k-1)[2] + myImage.at<Vec3b>(i-1, k)[2] + myImage.at<Vec3b>(i-1, k+1)[2]
                                   + myImage.at<Vec3b>(i,   k-1)[2] + myImage.at<Vec3b>(i,   k)[2] + myImage.at<Vec3b>(i,   k+1)[2]
                                   + myImage.at<Vec3b>(i+1, k-1)[2] + myImage.at<Vec3b>(i+1, k)[2] + myImage.at<Vec3b>(i+1, k+1)[2]) / 9;
        Result.at<Vec3b>(a, b)[0] = (myImage.at<Vec3b>(i-1, k-1)[0] + myImage.at<Vec3b>(i-1, k)[0] + myImage.at<Vec3b>(i-1, k+1)[0]
                                   + myImage.at<Vec3b>(i,   k-1)[0] + myImage.at<Vec3b>(i,   k)[0] + myImage.at<Vec3b>(i,   k+1)[0]
                                   + myImage.at<Vec3b>(i+1, k-1)[0] + myImage.at<Vec3b>(i+1, k)[0] + myImage.at<Vec3b>(i+1, k+1)[0]) / 9;
        // 4 pixel radius (averages the centre pixel and its 4 neighbours)
        // Result.at<Vec3b>(a, b)[1] = (myImage.at<Vec3b>(i, k)[1] + myImage.at<Vec3b>(i + 1, k)[1] + myImage.at<Vec3b>(i, k + 1)[1] + myImage.at<Vec3b>(i, k - 1)[1] + myImage.at<Vec3b>(i - 1, k)[1]) / 5;
        // Result.at<Vec3b>(a, b)[2] = (myImage.at<Vec3b>(i, k)[2] + myImage.at<Vec3b>(i + 1, k)[2] + myImage.at<Vec3b>(i, k + 1)[2] + myImage.at<Vec3b>(i, k - 1)[2] + myImage.at<Vec3b>(i - 1, k)[2]) / 5;
        // Result.at<Vec3b>(a, b)[0] = (myImage.at<Vec3b>(i, k)[0] + myImage.at<Vec3b>(i + 1, k)[0] + myImage.at<Vec3b>(i, k + 1)[0] + myImage.at<Vec3b>(i, k - 1)[0] + myImage.at<Vec3b>(i - 1, k)[0]) / 5;
        b++;
    }
    a++;
}
Obviously it's possible to set up the two options as different functions to call, but I'm just wondering if there's a more efficient way of achieving this that would let the size of the mask be changed.
Thanks for any help!
I'm assuming that you want to do this without built-in functions (like resize, mean, or filter2D) and just want to directly address the image using at. There are further optimizations that can be made, but this is intended as a reasonable and understandable improvement on the original code.
Also, it should be noted that I ignore any extra rows/columns when the image size is not exactly divisible by the scale factor. You'll need to specify the expected behavior if you want something different.
The first thing I'd do is change what you think of as the target pixel. Assume you have a 3x3 neighborhood like so:
1 2 3
4 5 6
7 8 9
We're going to take the mean value of all of these pixels anyway, so whether we call pixel 5 the target or pixel 1 makes no difference to the resulting image. I'm going to call pixel 1 the target because it makes the math cleaner.
The 1 pixel will always be on coordinates divisible by the scaling factor. If the scaling factor is 2, the coordinates of 1 will always be even.
Second, rather than loop over the original image dimensions, which actually results in recalculating the same pixel in Result numerous times, I'm going to loop over the dimensions of Result and figure out which pixels in the original image contribute to each pixel in the result.
So to find the neighborhood in the original image that corresponds to pixel (x, y) in the result image, we just have to look for pixel 1 of that neighborhood. Since it's on a multiple of the scaling factor, it's just
(x * scaleFactor, y * scaleFactor)
Finally, we need two more nested loops to loop over the scaleFactor x scaleFactor window. This is the part that avoids having to type out those long calculations.
In the 3x3 example above, for example, pixel 9 in the neighborhood of (x, y) will be:
(x * scaleFactor + 2, y * scaleFactor + 2)
I also do the mean calculation directly in a vector rather than doing each channel individually. This means that our sums will overflow a uchar, so I use Vec3i and cast back to a Vec3b after the division. This is one place where you should consider using the built-in function mean to calculate the average over the window, as it removes the need for these extra loops (see the note after the code).
So, if our original image is myImage, we have:
int scaleFactor = 3;
Mat Result(myImage.rows/scaleFactor, myImage.cols/scaleFactor,  // cols, not rows, for the width
           myImage.type(), Scalar::all(0));
for (int i = 0; i < Result.rows; i++)
{
    for (int k = 0; k < Result.cols; k++)
    {
        // make sum an int vector so it can hold
        // value = scaleFactor x scaleFactor x 255
        Vec3i areaSum = Vec3i(0,0,0);
        for (int m = 0; m < scaleFactor; m++)
        {
            for (int n = 0; n < scaleFactor; n++)
            {
                areaSum += myImage.at<Vec3b>(i*scaleFactor+m, k*scaleFactor+n);
            }
        }
        Result.at<Vec3b>(i,k) = Vec3b(areaSum/(scaleFactor*scaleFactor));
    }
}
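As mentioned above, the built-in mean can replace the two inner loops; a sketch of that loop body under the same assumptions (Rect takes x, y, width, height, so the column index k comes first):

// Average each scaleFactor x scaleFactor window with OpenCV's mean().
Scalar s = mean(myImage(Rect(k*scaleFactor, i*scaleFactor, scaleFactor, scaleFactor)));
Result.at<Vec3b>(i, k) = Vec3b((uchar)s[0], (uchar)s[1], (uchar)s[2]);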
Here are a couple of samples...
Original:
scaleFactor = 2:
scaleFactor = 3:
scaleFactor = 5:
I am working on a form of autocalibration for an optics device which is currently performed manually. The first part of the calibration is to determine whether a light beam has illuminated the set of 'calibration' points.
I am using OpenCV and have thresholded and cropped the image to leave only the possibly relevant points. I now want to determine whether these points lie along a straight (horizontal) line; if a sufficient number do, the beam is in the correct position! (The points lie on a straight line, but the beam is often bent, so hitting most of the points suffices; there are 21 points, which show up as white circles when thresholded.)
I have tried using a histogram, but on the thresholded image the results are not correct, and I am now looking at Hough lines. However, that detects straight lines from edges, whereas I want to establish whether detected points lie on a line.
This is the threshold code I use:
cvThreshold(output, output, 150, 256, CV_THRESH_BINARY);
The histogram results, with anywhere from 1 to 640 bins (the image width), are two spikes of near-maximum value, one at 0 and one about two-thirds of the way across; not the distribution expected or obtained without thresholding.
Some pictures to try to illustrate the point (note the 'noisy' light spots, which are a feature of the system setup and cannot be overcome):
12 points in a straight line next to one another (beam in correct position)
The sort of output wanted (for illustration; if the points are on the line, this is all I need to know!)
Any help would be greatly appreciated. One thought was to extract the coordinates of the points and compare them, but I don't know how to do that.
In case it helps anyone, here is a very basic first draft of the simple linear regression code I used.
// Calculate the averages of arrays x and y
double xa = 0, ya = 0;
for(int i = 0; i < n; i++)
{
    xa += x[i];
    ya += y[i];
}
xa /= n;
ya /= n;
// Summation of all X and Y values
double sumX = 0;
double sumY = 0;
// Summation of all X*Y values
double sumXY = 0;
// Summation of all X^2 and Y^2 values
double sumXs = 0;
double sumYs = 0;
for(int i = 0; i < n; i++)
{
    sumX = sumX + x[i];
    sumY = sumY + y[i];
    sumXY = sumXY + (x[i] * y[i]);
    sumXs = sumXs + (x[i] * x[i]);
    sumYs = sumYs + (y[i] * y[i]);
}
// Squares of the sums: (sumX)^2 and (sumY)^2
double Xs = sumX * sumX;
double Ys = sumY * sumY;
// Calculate slope, m
slope = (n * sumXY - sumX * sumY) / (n* sumXs - Xs);
// Calculate intercept
intercept = ceil((sumY - slope * sumX) / n);
// Calculate regression index, r^2
double r_top = (n * sumXY - sumX * sumY);
double r_bottom = sqrt((n* sumXs - Xs) * (n* sumYs - Ys));
double r = 0;
// Avoid division by zero (e.g. perfectly vertical or horizontal data)
if(r_top == 0 || r_bottom == 0)
r = 0;
else
r = r_top/ r_bottom;
There are more efficient ways of doing this (see CodeCogs or ALGLIB), but as a quick fix this code seems to work.
To detect circles in OpenCV I dropped the Hough transform and adapted code from this post:
Detection of coins (and fit ellipses) on an image
It is then a case of refining the coordinates (removing any outliers, etc.) to determine from the slope and intercept values of the regression whether the circles lie on a horizontal line.
Obtain the x,y coordinates of the thresholded points, then perform a linear regression to find a best-fit line. From that line you can determine the r^2 value, which effectively gives you the quality of the fit. Based on that fitness measure, you can decide whether the calibration succeeded.
Here is a good discussion.
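With the slope and r from a fit like the one above, the acceptance test itself reduces to a couple of thresholds; the cutoff values below are placeholders you would tune for your setup:

#include <cmath>

// Hypothetical acceptance test: the points must fit a line well (r^2 high)
// and the fitted line must be close to horizontal (slope near zero).
bool beamAligned(double slope, double r, double maxSlope, double minR2)
{
    return (r * r >= minR2) && (std::fabs(slope) <= maxSlope);
}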
You could do something like this, although it is an approximation:
var dw = decide a medium dot width in pixels
maxdots = 0;
for each line of the image {
    var dots = 0;
    scan by incrementing x by dw {
        if (color == dotcolor) dots++;
    }
    if (dots > maxdots) maxdots = dots;
}
maxdots would be the best result...
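A minimal C++ rendering of that pseudocode, assuming a thresholded single-channel cv::Mat where dots are white, and treating the dot width dw as a value you would tune:

#include <opencv2/core.hpp>

// Count white samples per row, stepping by the assumed dot width dw,
// and keep the best row: an approximation of how many dots share one line.
int maxDotsOnARow(const cv::Mat &binary, int dw)
{
    int maxdots = 0;
    for (int y = 0; y < binary.rows; ++y)
    {
        int dots = 0;
        for (int x = 0; x < binary.cols; x += dw)
            if (binary.at<uchar>(y, x) == 255) ++dots;
        if (dots > maxdots) maxdots = dots;
    }
    return maxdots;
}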