Parsing coordinates on an image map (coords)

I have a clickable HTML image map of the US, but I would like to resize it to half its current size. That means I also need to divide all the coords values in half so that the click areas stay accurate. Instead of doing this manually, is there an easy way to traverse the DOM and automatically divide all the coordinates by 2? Here's the HTML:
<div id="map">
<img class="map" src="images/us_map.jpg" width="960" height="593" usemap="#usa">
<map name="usa">
<area href="#" title="SC" shape="poly" coords="735,418, 734,419, 731,418, 731,416, 729,413, 727,411, 725,410, 723,405, 720,399, 716,398, 714,396, 713,393, 711,391, 709,390, 707,387, 704,385, 699,383, 699,382, 697,379, 696,378, 693,373, 690,373, 686,371, 684,369, 684,368, 685,366, 687,365, 687,363, 693,360, 701,356, 708,355, 724,355, 727,356, 728,360, 732,359, 745,358, 747,358, 760,366, 769,374, 764,379, 762,385, 761,391, 759,392, 758,394, 756,395, 754,398, 751,401, 749,404, 748,405, 744,408, 741,409, 742,412, 737,417, 735,418"></area>
<area href="#" title="HI" shape="poly" coords="225,521, 227,518, 229,517, 229,518, 227,521, 225,521"></area>
<area href="#" title="HI" shape="poly" coords="235,518, 241,520, 243,520, 244,516, 244,513, 240,512, 236,514, 235,518"></area>
</map>
</div>

I have something like:
var rects = document.getElementsByTagName('area');
var ratio = 1 / 2;
for (var i = 0; i < rects.length; i++) {
  var coords = rects[i].coords.split(',');
  var coord = '';
  for (var j = 0; j < coords.length; j++) {
    // parse the value explicitly rather than relying on implicit coercion
    var newCoord = parseFloat(coords[j]) * ratio;
    coord += newCoord;
    if (j + 1 < coords.length) {
      coord += ', ';
    }
  }
  rects[i].coords = coord;
}
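For what it's worth, here is a more compact version of the same idea that also halves the image's width and height attributes so the picture and the map stay in sync. A sketch only; it assumes the markup above and rounds to whole pixels:
var ratio = 0.5;
// Shrink the image itself so it matches the scaled map.
var img = document.querySelector('img.map');
img.width = Math.round(img.width * ratio);
img.height = Math.round(img.height * ratio);
// Scale every area's coords by the same ratio.
var areas = document.getElementsByTagName('area');
for (var i = 0; i < areas.length; i++) {
  areas[i].coords = areas[i].coords
    .split(',')
    .map(function (c) { return Math.round(parseFloat(c) * ratio); })
    .join(',');
}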


Creating cursor trail with fragment shader

I wish to draw a simple mouse trail using fragment shaders, similar in appearance to drawing the following in Processing (omitting the step of clearing the canvas). I cannot wrap my head around the setup necessary to achieve this.
// Processing reference using the cursor as a paintbrush
void setup() {
  size(400, 400);
  background(255);
  fill(0);
}

void draw() {
  ellipse(mouseX, mouseY, 20, 20);
}
Here's my naive approach, based on this Shadertoy example. First, I draw a simple shape at the cursor position:
void main(void) {
  float pct = 0.0;
  pct = distance(inData.v_texcoord.xy, vec2(mouse.x, 1. - mouse.y)) * SIZE;
  pct = 1.0 - pct - BRIGHTNESS;
  vec3 blob = vec3(pct);
  fragColor = vec4(blob, 1.0);
}
Then my confusion begins. My thinking is that I'd need to mix the output above with a texture containing my previous pass. This at least creates a solid trail, albeit one that copies the previous pass only within a set distance of the mouse position.
#shader pass 1
void main(void) {
  float pct = 0.0;
  pct = distance(inData.v_texcoord.xy, vec2(mouse.x, 1. - mouse.y)) * SIZE;
  pct = 1.0 - pct - BRIGHTNESS;
  vec3 blob = vec3(pct);
  vec3 stack = texture(prevPass, inData.v_texcoord.xy).xyz;
  fragColor = vec4(blob * .1 + (stack * 2.), 1.0);
}

#shader pass 2
void main(void) {
  fragColor = texture(prevPass, inData.v_texcoord);
}
Frankly, I'm a little bit in the dark about how to draw without data and "stack" previous draw calls in WebGL on a conceptual level, and I'm having a hard time finding beginner documentation.
I would be grateful if someone could point me towards where my code and thinking becomes faulty, or point me towards some resources.
What you need to do is:
After rendering your first pass (i.e. drawing the ellipse at the cursor position), copy the contents of the framebuffer to a different image (texture).
Then pass this image as a sampler input to the next pass. Notice how that Shadertoy example has 2 images.
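A minimal sketch of that "ping-pong" setup in raw WebGL. The helpers here are hypothetical stand-ins, not from the original code: gl is your WebGL context, and drawFullscreenQuad, trailProgram, and displayProgram represent your own pass-1/pass-2 plumbing:
// Two texture + framebuffer pairs that swap roles each frame.
function createTarget(gl, w, h) {
  var tex = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, tex);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, w, h, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
  var fbo = gl.createFramebuffer();
  gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
  gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, tex, 0);
  return { tex: tex, fbo: fbo };
}

var targets = [createTarget(gl, w, h), createTarget(gl, w, h)];
var current = 0;

function frame() {
  var src = targets[current];      // last frame's accumulated trail
  var dst = targets[1 - current];  // render target for this frame
  // Pass 1: blob at the mouse + previous trail -> dst.
  gl.bindFramebuffer(gl.FRAMEBUFFER, dst.fbo);
  gl.bindTexture(gl.TEXTURE_2D, src.tex);  // bound as the prevPass sampler
  drawFullscreenQuad(trailProgram);
  // Pass 2: draw dst to the screen.
  gl.bindFramebuffer(gl.FRAMEBUFFER, null);
  gl.bindTexture(gl.TEXTURE_2D, dst.tex);
  drawFullscreenQuad(displayProgram);
  current = 1 - current;  // swap: dst becomes next frame's prevPass
  requestAnimationFrame(frame);
}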
If you don't need shaders at all, you can make a simple HTML/JavaScript trail with this code:
<!DOCTYPE html>
<style>
  .trail { /* className for trail elements */
    position: absolute;
    height: 6px; width: 6px;
    border-radius: 3px;
    background: teal;
  }
  body {
    height: 300px;
  }
</style>
<body>
<script>
  document.body.addEventListener("mousemove", moved);
  // create, style, append trail elements
  var trailElements = [];
  var numOfTrailElements = 10;
  for (var i = 0; i < numOfTrailElements; i++) {
    var element = document.createElement('div');
    element.className = 'trail';
    document.body.appendChild(element);
    trailElements.push(element);
  }
  // when mouse moves, display trail elements in wake of mouse pointer
  var counter = 0; // current trail element index
  function moved(event) {
    trailElements[counter].style.left = event.clientX + 'px';
    trailElements[counter].style.top = event.clientY + 'px';
    counter = (counter + 1) % numOfTrailElements; // wrap around
  }
</script>
</body>
A slightly different variant of the same idea, which centers each dot on the pointer and uses page coordinates:
<!doctype html>
<style>
  .trail { /* className for the trail elements */
    position: absolute;
    height: 6px; width: 6px;
    border-radius: 3px;
    background: black;
  }
  body {
    height: 300px;
  }
</style>
<body>
<script>
  var dots = [];
  for (var i = 0; i < 12; i++) {
    var node = document.createElement("div");
    node.className = "trail";
    document.body.appendChild(node);
    dots.push(node);
  }
  var currentDot = 0;
  addEventListener("mousemove", function(event) {
    var dot = dots[currentDot];
    // offset by half the dot size so it sits centered on the cursor
    dot.style.left = (event.pageX - 3) + "px";
    dot.style.top = (event.pageY - 3) + "px";
    currentDot = (currentDot + 1) % dots.length;
  });
</script>
</body>

Outline of pixels after detecting object (without convex hull)

The idea is to use GrabCut (OpenCV) to detect the object inside a rectangle and create a geometry with Direct2D.
My test image is this:
After performing the GrabCut I get this image:
The idea is to outline it. I can use an opacity brush to exclude it from the background, but I want to use a geometric brush in order to be able to append/widen/combine geometries on it like all the other selections in my editor (polygon, lasso, rectangle, etc.).
If I apply the convex hull algorithm to the points, I get this:
Which, of course, is not desired in my case. How do I outline the image?
After getting the image from GrabCut, I keep the points whose luminance is above a threshold:
DWORD* pixels = ...
for (UINT y = 0; y < he; y++)
{
    for (UINT x = 0; x < wi; x++)
    {
        DWORD& col = pixels[y * wi + x];
        auto lumthis = lum(col);
        if (lumthis > Lum_Threshold)
        {
            points.push_back({ x, y });
        }
    }
}
Then I sort the points on Y and X:
std::sort(points.begin(), points.end(), [](D2D1_POINT_2F p1, D2D1_POINT_2F p2) -> bool
{
    if (p1.y < p2.y)
        return true;
    if ((int)p1.y == (int)p2.y && p1.x < p2.x)
        return true;
    return false;
});
Then, traversing the sorted point array from top to bottom, I create "groups" (runs of horizontally adjacent points) for each scanline:
struct SECTION
{
    float left = 0, right = 0;
};

auto findgaps = [](D2D1_POINT_2F* p, size_t n) -> std::vector<SECTION>
{
    std::vector<SECTION> sections;
    for (size_t i = 0; i < n; i++)
    {
        // Start a new section on the first point or after a horizontal gap.
        if (i == 0 || (p[i].x - sections.back().right) >= 1.5f)
        {
            SECTION jp;
            jp.left = p[i].x;
            jp.right = p[i].x;
            sections.push_back(jp);
        }
        else
        {
            sections.back().right = p[i].x; // extend the current run
        }
    }
    return sections;
};
I'm stuck at this point. I know that many polygons are possible from an arbitrary set of points, but in my case the points already define what's "left" and what's "right". How would I proceed from here?
For anyone interested, the solution is OpenCV contours (cv::findContours). Working example here.
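A minimal sketch of that approach, assuming mask is the single-channel binary output of the GrabCut step (the names here are illustrative, not taken from the working example):
#include <opencv2/imgproc.hpp>
#include <vector>

// mask: 8-bit, single-channel image, non-zero where the object was grabbed
std::vector<std::vector<cv::Point>> outlineObject(const cv::Mat& mask)
{
    std::vector<std::vector<cv::Point>> contours;
    // RETR_EXTERNAL keeps only the outermost outline(s);
    // CHAIN_APPROX_SIMPLE collapses straight runs into their end points.
    cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    return contours;
}
Each returned contour is an ordered polygon, so its points can be fed straight into an ID2D1GeometrySink figure to build the Direct2D path geometry.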

How to account for spacing between tiles in a tile sheet

My tile sheet has tiles that are 64x64, but between each tile there is a 10px gap, and I need to account for that gap when setting the texture rectangle so that the correct tile is drawn.
I tried simply adding the space when setting the texture rectangle, but the image still looks distorted:
for (auto y = 0u; y < map.getTileCount().y; ++y)
{
    for (auto x = 0u; x < map.getTileCount().x; ++x)
    {
        auto posX = static_cast<float>(x * map.getTileSize().x);
        auto posY = static_cast<float>(y * map.getTileSize().y);
        sf::Vector2f position(posX, posY);
        tileSprite.setPosition(position);
        auto tileID = tiles[y * map.getTileCount().x + x].ID; // the id of the current tile
        if (tileID == 0)
        {
            continue; // empty tile
        }
        auto i = 0;
        while (tileID < tileSets[i].getFirstGID())
        {
            ++i;
        }
        auto relativeID = tileID - tileSets[i].getFirstGID();
        auto tileX = relativeID % tileSets[i].getColumnCount();
        auto tileY = relativeID / tileSets[i].getColumnCount();
        textureRect.left = tileX * tileSets[i].getTileSize().x; // I am guessing this is where
        textureRect.top = tileY * tileSets[i].getTileSize().y;  // I should account for the spacing
        tileSprite.setTexture(mTextureHolder.get(Textures::SpriteSheet));
        tileSprite.setTextureRect(textureRect);
        mMapTexture.draw(tileSprite);
    }
}
The code itself is working and it's drawing the tiles at the correct sizes; if I use a normal 64x64 tile sheet without any spacing, the final image looks right, but with spacing included the tiles are cut off.
How do I add the gap between the tiles when setting the texture rectangle?
This is how it looks:
This is how it should look:
(NOTE: the "how it should look" image is from the Tiled editor)
Removing the spaces with a Python script I found and GIMP fixed the problem; however, if anyone knows how to account for the spacing, feel free to answer, as I might need it someday.
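For reference, the usual formula for a sheet with spacing (and, in general, an outer margin) offsets each tile by the tile size plus the gap. A sketch, assuming a 10px gap and no margin; getSpacing()/getMargin() style accessors are not guaranteed by the tileSets API used above, so the values are hard-coded here:
// Each step to the next tile advances by tile size plus the gap;
// an outer margin (a border around the whole sheet) shifts everything once.
const int spacing = 10; // gap between tiles, in pixels (assumed)
const int margin = 0;   // border around the sheet, if any (assumed)
textureRect.left = margin + tileX * (tileSets[i].getTileSize().x + spacing);
textureRect.top  = margin + tileY * (tileSets[i].getTileSize().y + spacing);
textureRect.width  = tileSets[i].getTileSize().x;
textureRect.height = tileSets[i].getTileSize().y;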

Zooming towards center of Camera on 2d Plane

Once again, camera zooming on a 2D plane. I searched a lot and know that there are similar questions, but I have obviously been unable to apply what I found.
Basically, I multiply the distance of all elements to the origin by mouseDelta, which is a double between 0.5 and 1. This works fine for all elements, but since the anchor of the camera (camX, camY) is the upper-left corner of the camera, the objects in the focus of the cam change their position relative to the focus. I want to zoom "towards" the focus. Here is what I have, but it behaves really weird:
camX and camY, as mentioned, are the coordinates of the upper left of the cam.
mouseDelta is the zoom level that's stored globally and changed by each wheel event.
screenX is the width of the screen/window (fullscreen anyway).
screenY is the height of the screen/window.
if (newEvent.type == sf::Event::MouseWheelMoved) // zoom
{
    mouseDelta += ((double)newEvent.mouseWheel.delta) / 20;
    if (mouseDelta > 1) { mouseDelta = 1; }
    else if (mouseDelta < 0.5) { mouseDelta = 0.5; }
    // resize graphics
    for (int i = 0; i < core->universe->world->nodes.size(); i++) {
        core->universe->world->nodes.at(i).pic->setSize(mouseDelta);
    }
    for (int i = 0; i < core->universe->world->links.size(); i++) {
        core->universe->world->links.at(i).pic->setSize(mouseDelta);
    }
    camX = (camX + screenX / 2) - (camX + screenX / 2) * mouseDelta;
    camY = (camY + screenY / 2) - (camY + screenY / 2) * mouseDelta;
}
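For reference, a common way to zoom towards a fixed focus is to scale the camera offset by the ratio of the new zoom to the old zoom, rather than by the absolute zoom level on every wheel event. A sketch, assuming the view transform screen = (world - cam) * zoom and that the focus is the screen center; oldZoom/newZoom are hypothetical names for mouseDelta before and after the wheel event:
// World point currently shown at the center of the screen.
double focusX = camX + (screenX / 2.0) / oldZoom;
double focusY = camY + (screenY / 2.0) / oldZoom;
// Re-anchor the camera so the same world point stays at the
// screen center after the zoom level changes.
camX = focusX - (screenX / 2.0) / newZoom;
camY = focusY - (screenY / 2.0) / newZoom;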

vtk 6.x, Qt: 3D (line, surface, scatter) plotting

I am working on a Qt (4.7.4) project and need to plot data in 2D and 3D coordinate systems. I've been looking into vtk 6.1 because it seems very powerful overall and I will also need to visualize image data at a later point. I basically got 2D plots working but am stuck plotting data in 3D.
Here's what I tried: I'm using the following piece of code, which I took from one of vtk's tests (Charts/Core/Testing/Cxx/TestSurfacePlot.cxx). The only thing I added is the QVTKWidget that I use in my GUI, and its interactor:
QVTKWidget vtkWidget;
vtkNew<vtkChartXYZ> chart;
vtkNew<vtkPlotSurface> plot;
vtkNew<vtkContextView> view;

view->GetRenderWindow()->SetSize(400, 300);
vtkWidget.SetRenderWindow(view->GetRenderWindow());
view->GetScene()->AddItem(chart.GetPointer());
chart->SetGeometry(vtkRectf(75.0, 20.0, 250, 260));

// Create a surface
vtkNew<vtkTable> table;
float numPoints = 70;
float inc = 9.424778 / (numPoints - 1);
for (float i = 0; i < numPoints; ++i)
{
    vtkNew<vtkFloatArray> arr;
    table->AddColumn(arr.GetPointer());
}
table->SetNumberOfRows(numPoints);
for (float i = 0; i < numPoints; ++i)
{
    float x = i * inc;
    for (float j = 0; j < numPoints; ++j)
    {
        float y = j * inc;
        table->SetValue(i, j, sin(sqrt(x*x + y*y)));
    }
}

// Set up the surface plot we wish to visualize and add it to the chart.
plot->SetXRange(0, 9.424778);
plot->SetYRange(0, 9.424778);
plot->SetInputData(table.GetPointer());
chart->AddPlot(plot.GetPointer());

view->GetRenderWindow()->SetMultiSamples(0);
view->SetInteractor(vtkWidget.GetInteractor());
view->GetInteractor()->Initialize();
view->GetRenderWindow()->Render();
Now, this produces a plot, but I can neither interact with it nor does it look 3D. I would like to do some basic stuff like zoom, pan, or rotate about a pivot. A few questions that come to my mind about this are:
Is it correct to assign the QVTKWidget interactor to the view in the third line from the bottom?
In the test, a vtkChartXYZ is added to the vtkContextView. According to the documentation, vtkContextView is used to display a 2D scene, but here it is used with a 3D chart (XYZ). How does this fit together?
The following piece of code worked for me. No need to explicitly assign an interactor because that's already been taken care of by QVTKWidget.
QVTKWidget vtkWidget;
vtkSmartPointer<vtkContextView> view = vtkSmartPointer<vtkContextView>::New();
vtkSmartPointer<vtkChartXYZ> chart = vtkSmartPointer<vtkChartXYZ>::New();

// Create a surface
vtkSmartPointer<vtkTable> table = vtkSmartPointer<vtkTable>::New();
float numPoints = 70;
float inc = 9.424778 / (numPoints - 1);
for (float i = 0; i < numPoints; ++i)
{
    vtkSmartPointer<vtkFloatArray> arr = vtkSmartPointer<vtkFloatArray>::New();
    table->AddColumn(arr.GetPointer());
}
table->SetNumberOfRows(numPoints);
for (float i = 0; i < numPoints; ++i)
{
    float x = i * inc;
    for (float j = 0; j < numPoints; ++j)
    {
        float y = j * inc;
        table->SetValue(i, j, sin(sqrt(x*x + y*y)));
    }
}

view->SetRenderWindow(vtkWidget.GetRenderWindow());
chart->SetGeometry(vtkRectf(200.0, 200.0, 300, 300));
view->GetScene()->AddItem(chart.GetPointer());

vtkSmartPointer<vtkPlotSurface> plot = vtkSmartPointer<vtkPlotSurface>::New();
// Set up the surface plot we wish to visualize and add it to the chart.
plot->SetXRange(0, 10.0);
plot->SetYRange(0, 10.0);
plot->SetInputData(table.GetPointer());
chart->AddPlot(plot.GetPointer());

view->GetRenderWindow()->SetMultiSamples(0);
view->GetRenderWindow()->Render();
You might want to read the detailed description in vtkRenderViewBase:
QVTKWidget *widget = new QVTKWidget;
vtkContextView *view = vtkContextView::New();
view->SetInteractor(widget->GetInteractor());
widget->SetRenderWindow(view->GetRenderWindow());