Why do WMS tiles loaded by osmdroid overlap? - osmdroid

Requirement: I use the sample program provided by osmdroid to load a WMS source, and the base tiles load successfully. However, some tiles end up overlapping each other when the map is panned or zoomed, and I can't find the cause. Below are a tile that loads normally and the overlapping result I am seeing.
The following request is one I issue myself; its bounding box is the boundary converted from the tile numbers.
(screenshots: normal picture; overlapping picture)
http://www.ign.es/wms-inspire/pnoa-ma?SERVICE=WMS&request=GetMap&width=256&height=256&version=1.1.1&layers=OI.MosaicElement&bbox=-714227.5924785882,5175704.060564015,-709335.6226670891,5180596.030375512&srs=EPSG:3857&format=image/png&transparent=true&styles=OI.MosaicElement.Default
I hope someone can help me analyze the problem.
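A common cause of overlapping WMS tiles is a slightly wrong tile-number-to-bounding-box conversion: an off-by-one in the y index or a wrong tile size shifts every tile. For comparison, here is a minimal sketch of the standard slippy-tile to EPSG:3857 bbox conversion; this is not osmdroid's actual code, and the names are mine:

#include <cstdio>

// Half the extent of the Web Mercator (EPSG:3857) world, in meters.
const double ORIGIN = 20037508.342789244;

void tileToMercatorBBox(int x, int y, int zoom,
                        double& minX, double& minY,
                        double& maxX, double& maxY)
{
    const double tileSize = 2.0 * ORIGIN / (1 << zoom); // world width / 2^zoom
    minX = -ORIGIN + x * tileSize;
    maxX = minX + tileSize;
    maxY = ORIGIN - y * tileSize; // tile y counts down from the top edge
    minY = maxY - tileSize;
}

int main()
{
    double minX, minY, maxX, maxY;
    // Tile (3950, 3037) at zoom 13 reproduces the bbox in the request above.
    tileToMercatorBBox(3950, 3037, 13, minX, minY, maxX, maxY);
    std::printf("bbox=%.4f,%.4f,%.4f,%.4f\n", minX, minY, maxX, maxY);
    return 0;
}

If the bbox you compute for a given tile differs from what a working request uses, that mismatch is the first thing to rule out.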

Related

How to check if image should be flipped vertically before loading using stb_image

I'm writing a project using OpenGL, and I load the textures using stb_image.
Some of the textures are loaded flipped upside down (along the y-axis), so I use
stbi_set_flip_vertically_on_load to load them properly.
The problem is that some of the textures I load require flipping and some do not,
but of course my code flips them all.
How can I check (before loading, or at least before flipping) whether or not to flip the
image?
Short answer: always flip when loading an image from stb_image to an OpenGL texture. Longer answer: you can't know whether a user wants to flip the image themselves. As it was posed, I think your question is answered by the question Kai Burjack linked you to (Should I vertically flip the lines of an image loaded with stb_image to use in OpenGL?) because it clarifies the correct use of this feature of stb_image.
If you are going straight from an image file to an OpenGL texture, then you should always flip during import if you want the "up" of the imported texture to match what users see in their art programs. However, if you want to give users the option to load images upside down independent of how the image looks in the art program, you can totally do that, too. The catch is that the user has to tell you. There's no way to know what the user wants, and IMO artists who want their images upside-down are likely to just make them that way in their art programs anyways.
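To make the first paragraph concrete, here is a minimal sketch of the usual stb_image-to-OpenGL load path with the global flip applied once; the function name and parameters are mine, not from the question:

#define STB_IMAGE_IMPLEMENTATION
#include "stb_image.h"
#include <GL/gl.h>

GLuint loadTexture(const char* path)
{
    // stb_image stores rows top-to-bottom; OpenGL samples the first row
    // at the bottom, so flip on load to match what artists see.
    stbi_set_flip_vertically_on_load(1);

    int w, h, channels;
    unsigned char* pixels = stbi_load(path, &w, &h, &channels, 4); // force RGBA
    if (!pixels) return 0;

    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);

    stbi_image_free(pixels);
    return tex;
}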

Capture a specific location of an image using OpenCV

I am trying to organize my trading card collection digitally and am working on building a scanner that uses OCR to detect the names of the cards in my collection.
I need to use a webcam to snap a single image of each card in question. Snapping the image doesn't seem to be too difficult, but I need help determining how to get OpenCV to capture only a specific part of that image for OCR to work with. I'm trying to capture just the text portion of the image so that the artwork on the cards doesn't interfere with the OCR.
If my card will be placed in the same physical location each time, is there a way to get OpenCV to take an image and focus on just the area of the image that I'm interested in?
Thank You
Sour Jack
I am not sure I understand the problem. Do you want to use your OCR algorithm always on the same portion of the snapshot? If so, you can try something like:
roi = img[y:y+height, x:x+width]  # NumPy slicing: rows (y) first, then columns (x)
There is more information here: http://answers.opencv.org/question/29260/how-to-save-a-rectangular-roi/
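The snippet above is Python/NumPy slicing; the equivalent in C++ uses cv::Rect, which likewise gives a zero-copy view into the frame. A sketch, with placeholder file names and coordinates:

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat frame = cv::imread("card.png"); // or a frame from cv::VideoCapture
    if (frame.empty()) return 1;

    // Fixed region where the card name appears; tune x, y, width, height
    // once for your physical setup.
    cv::Rect nameBox(40, 30, 400, 60);
    cv::Mat roi = frame(nameBox);      // a view, not a copy
    cv::imwrite("name_crop.png", roi); // hand this crop to the OCR step
    return 0;
}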

Auto white balancing for camera

I am developing a sample camera and I am able to control the image sensor directly. The sensor outputs a Bayer image, and I need to show the images as a live view.
I have looked at debayering code and also at white balancing. Is there any library in C/C++ that can help me in this process?
Since I need a live view, these steps have to run very fast, so I need algorithms that are very fast.
For example, I can change the RGB gains on the sensor, so I need an algorithm that acts at that level instead of acting on the generated image.
Is there any library that helps to save images in raw format?
SimpleCV has a function for white balance control; see the SimpleCV project web site.
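If SimpleCV is too heavy for a live view, a gray-world estimate is about the cheapest white-balance algorithm there is, and its output is exactly the per-channel gains you could write back to the sensor. A hedged sketch, with all names mine:

#include <cstdint>
#include <cstddef>

struct Gains { double r, g, b; };

// pixels: interleaved RGB8 preview frame (already debayered or subsampled).
Gains grayWorldGains(const uint8_t* pixels, size_t pixelCount)
{
    double sumR = 0, sumG = 0, sumB = 0;
    for (size_t i = 0; i < pixelCount; ++i) {
        sumR += pixels[3 * i + 0];
        sumG += pixels[3 * i + 1];
        sumB += pixels[3 * i + 2];
    }
    // Gray-world assumption: the scene averages to neutral gray, so scale
    // R and B to match the green average; green gain stays at 1.0.
    const double avgR = sumR / pixelCount;
    const double avgG = sumG / pixelCount;
    const double avgB = sumB / pixelCount;
    return { avgG / avgR, 1.0, avgG / avgB };
}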

Debugging of image processing code

What kind of debugging is available for image processing/computer vision/computer graphics applications in C++? What do you use to track errors/partial results of your method?
What I have found so far is just one tool for online and one for offline debugging:
bmd: attaches to a running process and enables you to view a block of memory as an image
imdebug: enables printf-style of debugging
Both are quite outdated and not really what I would expect.
What would seem useful for offline debugging is some style of image logging: let's say a set of commands that let you write images together with text (probably in the form of HTML, maybe hierarchical), easy to switch off at both compile time and run time, and as unobtrusive as possible.
The output could look like this (output from our simple tool):
http://tsh.plankton.tk/htmldebug/d8egf100-RF-SVM-RBF_AC-LINEAR_DB.html
Are you aware of some code that goes in this direction?
I would be grateful for any hints.
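For illustration, the kind of logger described above could start as small as this sketch; every name here is invented, and the images themselves are assumed to be written to disk by the caller:

#include <fstream>
#include <string>

class HtmlImageLog {
public:
    explicit HtmlImageLog(const std::string& path) : out_(path) {
        out_ << "<html><body>\n";
    }
    ~HtmlImageLog() { out_ << "</body></html>\n"; }

    void text(const std::string& msg) { out_ << "<p>" << msg << "</p>\n"; }

    void image(const std::string& file, const std::string& caption) {
        out_ << "<div><img src='" << file << "'/><br/>" << caption << "</div>\n";
    }

private:
    std::ofstream out_;
};

// Compile-time kill switch, as asked for in the question.
#ifdef DISABLE_IMAGE_LOG
#define IMGLOG(log, call) ((void)0)
#else
#define IMGLOG(log, call) (log).call
#endif

Usage would be along the lines of IMGLOG(log, image("step1.png", "after thresholding")); hierarchy and run-time switching would go on top of this.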
Coming from a ray tracing perspective, maybe some of those visual methods are also useful to you (it is one of my plans to write a short paper about such techniques):
Surface Normal Visualization. Helps to find surface discontinuities. (no image handy, the look is very much reminiscent of normal maps)
color <- rgb (normal.x*0.5+0.5, normal.y*0.5+0.5, normal.z*0.5+0.5)
Distance Visualization. Helps to find surface discontinuities and errors in finding a nearest point. (image taken from an abandoned ray tracer of mine)
color <- (intersection.z-min)/range, ...
Bounding Volume Traversal Visualization. Helps visualizing a bounding volume hierarchy or other hierarchical structures, and helps to see the traversal hotspots, like a code profiler (e.g. Kd-trees). (tbp of http://ompf.org/forum coined the term Kd-vision).
color <- number_of_traversal_steps/f
Bounding Box Visualization (image from picogen or so, some years ago). Helps to verify the partitioning.
color <- const
Stereo. Maybe useful in your case for the real stereographic appearance. I must admit I never used this for debugging, but when I think about it, it could prove really useful when implementing new types of 3D primitives and trees (image from gladius, which was an attempt to unify realtime and non-realtime ray tracing).
You just render two images from slightly shifted positions, both focused on the same point.
Hit-or-not visualization. May help to find epsilon errors. (image taken from metatrace)
if (hit) color = const_a;
else color = const_b
Some hybrid of several techniques.
Linear interpolation: lerp(debug_a, debug_b)
Interlacing: if(y%2==0) debug_a else debug_b
Any combination of ideas, for example the color-tone from Bounding Box Visualization, but with actual scene-intersection and lighting applied
You may find some more glitches and debugging imagery on http://phresnel.org , http://phresnel.deviantart.com , http://picogen.deviantart.com , and maybe http://greenhybrid.deviantart.com (an old account).
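As a concrete instance of the first technique above, here is a hedged sketch that dumps a buffer of normals as a binary PPM; Vec3 and the buffer layout stand in for whatever your tracer uses:

#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };

void writeNormalDebugPPM(const std::vector<Vec3>& normals,
                         int width, int height, const char* path)
{
    FILE* f = std::fopen(path, "wb");
    if (!f) return;
    std::fprintf(f, "P6\n%d %d\n255\n", width, height);
    for (const Vec3& n : normals) {
        // Map each component from [-1,1] to [0,255]; discontinuities in the
        // resulting color field reveal normal and geometry bugs at a glance.
        const unsigned char rgb[3] = {
            (unsigned char)((n.x * 0.5f + 0.5f) * 255.0f),
            (unsigned char)((n.y * 0.5f + 0.5f) * 255.0f),
            (unsigned char)((n.z * 0.5f + 0.5f) * 255.0f),
        };
        std::fwrite(rgb, 1, 3, f);
    }
    std::fclose(f);
}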
Generally, I prefer to dump the byte array of the currently processed image as raw RGB triplets and run ImageMagick to create a numbered PNG from it, e.g. img01.png. This way I can trace the algorithms very easily. ImageMagick is invoked from the program via a system call, which makes it possible to debug without linking any external libraries for image formats.
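A hedged sketch of that dump-and-convert trick (the function name is mine; "convert" is the classic ImageMagick 6 CLI, called "magick" in ImageMagick 7):

#include <cstdio>
#include <cstdlib>
#include <cstdint>
#include <cstddef>

void dumpDebugImage(const uint8_t* rgb, int width, int height, int index)
{
    char rawName[64], cmd[256];
    std::snprintf(rawName, sizeof rawName, "img%02d.rgb", index);

    // 1. Write the interleaved RGB8 buffer verbatim, no image library needed.
    if (FILE* f = std::fopen(rawName, "wb")) {
        std::fwrite(rgb, 3, (size_t)width * height, f);
        std::fclose(f);
    }

    // 2. Let ImageMagick turn the raw triplets into a viewable PNG.
    std::snprintf(cmd, sizeof cmd,
                  "convert -size %dx%d -depth 8 rgb:%s img%02d.png",
                  width, height, rawName, index);
    std::system(cmd);
}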
Another option, if you are using Qt, is to work with QImage and call img.save("img01.png") from time to time, the way printf is used for debugging.
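For a raw buffer, wrapping it in a QImage first keeps that one-liner spirit. A sketch; note that QImage does not copy the buffer here, so it must stay alive until save() returns:

#include <QImage>
#include <QString>

void debugSave(const unsigned char* rgb, int w, int h, int frame)
{
    QImage img(rgb, w, h, w * 3, QImage::Format_RGB888);
    img.save(QString("img%1.png").arg(frame, 2, 10, QChar('0')));
}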
It's a bit primitive compared to what you are looking for, but I have done what you suggested in your OP using standard logging and by writing image files. Typically, the logging and signal-export processing and staging live in unit tests.
Signals are given identifiers (often the input filename), which may be augmented (often with the process name or stage).
For development of processors, it's quite handy.
Adding HTML for the messages would be simple. In that context, you could produce viewable HTML output easily: you would not need to generate any HTML, just use HTML template files and insert the messages into them.
I would just do it myself (as I've done multiple times already for multiple signal types) if you get no good referrals.
In Qt Creator you can watch image modifications while stepping through the code in the normal C++ debugger; see e.g. http://labs.qt.nokia.com/2010/04/22/peek-and-poke-vol-3/

Convert a raster image into a polygon using QGis (or another method)

I want to convert several images into polygon shp files using QGis (Quantum GIS 1.6).
I need to do edge detection AND differentiate between several different colors of lines (red, green, yellow and black). I need good edge detection as my images are scanned in at 200 DPI.
I'm open to other suggestions that don't involve QGis. Could I use Photoshop, or would ArcGIS do a better job of this?
Inkscape has a rather functional vectorizer (Trace Bitmap, IIRC)
http://inkscape.org/doc/tracing/tutorial-tracing.html
Inkscape's native format is SVG (fully vector based). It allows simplification of the resulting paths as well. Also, you can process the resulting SVG XML automatically.