How do I prevent a square image that spans 2 columns from pushing the overall height of the row? - css-grid

Example here: https://springetts.co.uk/test/
I'm having some difficulty figuring out the best way to turn the third image tile into a rectangle without having to change the original image shape.
Here's a mockup of the result I would like:
screenshot of the third image as a rectangle
I've tried a variety of things, but can't seem to figure out an elegant way for it to work nicely and so I would really appreciate some help. Thank you.
I've tried adding height: 50% to the containing element in an attempt to crop the image nested inside, but that did not work.

Just add this CSS rule somewhere.
.page-id-5077 #post-2661 img {
  aspect-ratio: 2/1;  /* render twice as wide as tall */
  object-fit: cover;  /* crop the square source instead of squashing it */
}
That will get you pretty close, although you may need to change the aspect ratio slightly to get the height to match the square images in columns 1 and 2, because your rectangle needs to account for the gap between columns 3 and 4: if each square column is w wide and the gap is g, the spanning image is 2w + g wide but only w tall, so the ratio you want is (2w + g)/w rather than exactly 2/1. Through experimentation, I found aspect-ratio: 41/20; to be fairly accurate.
PS. While in this case you provided a link which enabled me to understand the problem you’re trying to solve, on Stack Overflow it is considered good practice not to rely on links, because they tend to change over time. Better to write questions which are self-contained and therefore can still be useful in ten years’ time. That means including enough code in your question so that we can understand the problem without reference to external links, or better still, including a working Stack Snippet. More details here.

Related

Computer vision algorithm to use for making lines thinner

I have lecture notes written by a professor using a stylus.
A sample:
The width of the line used here is making reading difficult for me. I would like to make the lines thinner. The only solution I could think of is dilating the image. This gives a passable result:
The picture above is with a uniform kernel of shape (2, 2) applied once; I've tried a bunch of kernel types, widths, and numbers of iterations to arrive at this version, which looks best to me.
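For illustration, a minimal sketch of that dilation step in OpenCV C++ (the image variable name notes is a placeholder):

// Dark ink on a white page: dilation takes the local maximum, so the white
// background grows into the strokes and thins them. Uniform 2x2 kernel,
// one iteration, matching the settings described above.
cv::Mat kernel = cv::Mat::ones(2, 2, CV_8U);
cv::Mat thinner;
cv::dilate(notes, thinner, kernel, cv::Point(-1, -1), 1);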
However, I can't help but wonder if there's maybe another applicable algorithm that I'm missing; one that could lead to even better results? I wasn't able to google any computer vision approaches to font thinning, so I would appreciate any information on the subject.
I have been looking into this kind of problem for several days. Try the thinning operation described here; the link is also in the references of the OpenCV-Python tutorial on morphological transforms. Taking the image gradient can help, but it will make the image grayscale, and by inverting the colors you can get black-on-white text. Try keeping the original color at the black-pixel locations when the original and final images are stacked.
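If it helps, here is a minimal, self-contained sketch of that thinning step in C++, assuming the opencv_contrib ximgproc module is available (the file name is a placeholder):

#include <opencv2/opencv.hpp>
#include <opencv2/ximgproc.hpp>

int main() {
    cv::Mat notes = cv::imread("notes.png", cv::IMREAD_GRAYSCALE);
    // Thinning expects white foreground on black, so invert the dark strokes first.
    cv::Mat binary;
    cv::threshold(notes, binary, 0, 255, cv::THRESH_BINARY_INV | cv::THRESH_OTSU);
    cv::Mat skeleton;
    cv::ximgproc::thinning(binary, skeleton, cv::ximgproc::THINNING_ZHANGSUEN);
    // Invert back to dark-on-white for reading.
    cv::imwrite("notes_thinned.png", 255 - skeleton);
    return 0;
}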

How to detect anomalies in opencv (c++) if threshold is not good enough?

I have grayscale images like this:
I want to detect anomalies in these kinds of images. On the first image (upper-left) I want to detect three dots; on the second (upper-right) there is a small dot and a "foggy area" (at the bottom-right); and on the last one, there is an even smaller dot somewhere in the middle of the image.
Normal static thresholding doesn't work well for me, and Otsu's method isn't always the best choice either. Is there any better, more robust, or smarter way to detect anomalies like this? In Matlab I was using something like Frangi filtering (eigenvalue filtering). Can anybody suggest a good processing algorithm to solve anomaly detection on surfaces like this?
EDIT: Added another image with marked anomalies:
Using @Tapio's top-hat filtering and contrast adjustment.
Since @Tapio provided us with a great idea for increasing the contrast of anomalies on surfaces like the ones I asked about at the beginning, here are some of my results. I have an image like this:
Here is the code I use for top-hat filtering and contrast adjustment:
// Small elliptical structuring element (anchor at (0, 0), as in the original post)
cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(3, 3), cv::Point(0, 0));
cv::Mat imgFiltered, imgAdjusted;
// Top-hat (image minus its opening), iterated 3 times, isolates small bright details
cv::morphologyEx(inputImage, imgFiltered, cv::MORPH_TOPHAT, kernel, cv::Point(0, 0), 3);
imgAdjusted = imgFiltered * 7.2; // simple linear contrast boost
The result is here:
The question of how to segment the anomalies from the last image still remains. So if anybody has an idea how to solve it, please share! :)
You should take a look at bottom-hat filtering. It's defined as the difference between the morphological closing of the image and the image itself, and it makes small details such as the ones you are looking for stand out.
I adjusted the contrast to make both images visible. The anomalies are much more pronounced when looking at the intensities and are much easier to segment out.
Let's take a look at the first image:
The histogram values don't represent reality, due to scaling caused by the visualization tools I'm using; however, the relative distances do. So now the thresholding range is much larger: the target changed from a window to a barn door.
Global thresholding (intensity > 15):
Otsu's method worked poorly here: it segmented all the small details into the foreground.
After removing noise by morphological opening:
I also assumed that the black spots are the anomalies you are interested in. By setting the threshold lower you include more of the surface details; for example, the third image does not have any particularly interesting features to my eye, but that's for you to judge. Like m3h0w said, it's a good heuristic: if something is hard for your eye to judge, it's probably impossible for the computer.
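Putting the steps above together, here is a minimal sketch of the pipeline (bottom-hat, global threshold at 15, then opening); the kernel sizes and the file name are assumptions that need tuning:

cv::Mat img = cv::imread("surface.png", cv::IMREAD_GRAYSCALE);
// Bottom-hat (black-hat) = closing(img) - img: small dark details become bright.
cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(15, 15));
cv::Mat blackhat;
cv::morphologyEx(img, blackhat, cv::MORPH_BLACKHAT, kernel);
// Global threshold (intensity > 15), as above.
cv::Mat mask;
cv::threshold(blackhat, mask, 15, 255, cv::THRESH_BINARY);
// Morphological opening removes the small noise specks.
cv::Mat speck = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(3, 3));
cv::morphologyEx(mask, mask, cv::MORPH_OPEN, speck);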
@skoda23, I would try unsharp masking with finely tuned parameters for the blurring part, so that the high frequencies get emphasized, and test it thoroughly so that no important information is lost in the process. Remember that it is usually not a good idea to expect the computer to do superhuman work: if a human has doubts about where the anomalies are, the computer will too. Thus it is important to first preprocess the image so that the anomalies are obvious to the human eye. An alternative (or addition) to unsharp masking might be CLAHE. But again: remember to fine-tune it very carefully; it might bring out the texture of the board too much and interfere with your task.
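A minimal sketch of both ideas, assuming a grayscale input gray (all parameters are assumptions and need the careful tuning mentioned above):

// Unsharp masking: subtract a blurred copy to emphasize high frequencies.
cv::Mat blurred, sharpened;
cv::GaussianBlur(gray, blurred, cv::Size(0, 0), 3.0);
cv::addWeighted(gray, 1.5, blurred, -0.5, 0, sharpened);
// CLAHE: local histogram equalization; the clip limit controls how strongly
// the board texture gets amplified along with the anomalies.
cv::Ptr<cv::CLAHE> clahe = cv::createCLAHE(2.0, cv::Size(8, 8));
cv::Mat equalized;
clahe->apply(gray, equalized);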
An alternative to basic thresholding or Otsu's method would be adaptiveThreshold(), which might be a good idea since there is a difference in intensity values between the different regions you want to find.
My second guess would be to first use fixed-value thresholding for the darkest dots and then try Sobel or Canny. There should exist an optimal neighborhood size at which the texture of the board will not stand out as much but the anomalies will. You can also try blurring before edge detection (once you've detected the small defects with the thresholding).
Again: it is vital to experiment a lot at every step of this approach, because fine-tuning the parameters will be crucial for eventual success. I'd recommend making friends with the trackbar to speed up the process, as in the sketch below. Good luck!
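For example, a minimal sketch wiring adaptiveThreshold() to trackbars (the window name, parameter ranges, and file name are assumptions):

#include <algorithm>
#include <opencv2/opencv.hpp>

cv::Mat g_img;
int g_blockSize = 11, g_C = 5;

void onChange(int, void*) {
    int block = std::max(3, g_blockSize | 1); // block size must be odd and >= 3
    cv::Mat out;
    cv::adaptiveThreshold(g_img, out, 255, cv::ADAPTIVE_THRESH_MEAN_C,
                          cv::THRESH_BINARY_INV, block, g_C);
    cv::imshow("anomalies", out);
}

int main() {
    g_img = cv::imread("surface.png", cv::IMREAD_GRAYSCALE);
    cv::namedWindow("anomalies");
    cv::createTrackbar("blockSize", "anomalies", &g_blockSize, 51, onChange);
    cv::createTrackbar("C", "anomalies", &g_C, 30, onChange);
    onChange(0, nullptr);
    cv::waitKey(0);
    return 0;
}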
You're basically dealing with the unfortunate fact that reality is analog. A threshold is a method to turn an analog range into a discrete (binary) range, and any threshold will do that. So what exactly do you mean by a "good enough" threshold?
Let's park that thought for a second. I see lots of anomalies - sort of thin grey worms. Apparently, you ignore them; I'm applying a different threshold than you are. This may be reasonable, but you're applying domain knowledge that I don't have.
I suspect these grey worms will be throwing off your fixed value thresholding. That's not to say the idea of a fixed threshold is bad. You can use it to find some artifacts and exclude those. Somewhat darkish patches will be missed, but can be brought out by replacing each pixel with the median value of its neighborhood, using a neighborhood size that's bigger than the width of those worms. In the dark patch, this does little, but it wipes out small local variations.
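A minimal sketch of that median idea (the kernel size of 15 is an assumption; it just needs to exceed the worms' width, and it must be odd):

// Median filtering wipes out thin structures narrower than the kernel
// while preserving the larger darkish patches.
cv::Mat median;
cv::medianBlur(img, median, 15);
// Pixels noticeably darker than their local median are candidate anomalies
// (the subtraction saturates at 0 where img is brighter than the median).
cv::Mat darker = median - img;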
I don't pretend these two types of abnormalities are the only two, but that is really an application-domain question and not one about techniques. E.g. you don't appear to have lighting artifacts (reflections), at least not in these 3 samples.

Opencv - align sample image and testing image

I am working on a project that tests images of sample products by comparing them with a reference product image. I have come up with two approaches, but there are problems with each.
Method 1. Remove the background, realign the images according to features, and then find the difference of the two images by subtraction.
Problem: I am thinking about using template matching to extract the region of interest and save it as a new picture. However, is it possible to use template matching for extraction? The sample provided by OpenCV can draw a frame or rectangle around the matched object, so it seems feasible to place the match at the center of a new picture. If it is possible, what is the way to make the matched region the center of a new picture? It seems a bit difficult, as the matched rectangle may not be horizontal.
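For what it's worth, a minimal sketch of extracting the matched region with matchTemplate() follows; note it only handles an axis-aligned match, while a rotated match would need feature matching and a warp instead (scene and templ are placeholder names):

// Find the best match location, then crop that rectangle into its own image.
cv::Mat result;
cv::matchTemplate(scene, templ, result, cv::TM_CCOEFF_NORMED);
double maxVal;
cv::Point maxLoc;
cv::minMaxLoc(result, nullptr, &maxVal, nullptr, &maxLoc);
cv::Rect roi(maxLoc, templ.size());
cv::Mat extracted = scene(roi).clone(); // matched object, centered in the new picture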
Method 2. Cascade classifier training: it seems I can train the classifier to know what bad images and good images look like.
Problem: However, the classifier detection sample from OpenCV runs the comparison on a video. Is it possible to do so on still images? Also, how could I adjust the sample error or the precision of the classifier detection?
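For reference, a minimal sketch of running a trained cascade on a single still image; detectMultiScale() works on images just as on video frames, and scaleFactor / minNeighbors are the usual knobs for trading false positives against misses (the cascade file name is a placeholder):

cv::CascadeClassifier detector("cascade.xml");
std::vector<cv::Rect> hits;
// Larger minNeighbors = fewer false positives; scaleFactor closer to 1 = finer search.
detector.detectMultiScale(image, hits, 1.1, 5);
for (const cv::Rect& r : hits)
    cv::rectangle(image, r, cv::Scalar(0, 0, 255), 2);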
If you have any other feasible suggestions, please kindly give me some advice. Thanks for your kind attention!

Strange coordinate appears after finding moments of circles

I am using OpenCV and C++. I used findContours and moments to display the center coordinates of circles in an image. However, there is this strange coordinate which appears in between the good ones: [-2147483648, -2147483648]. Does anybody know what it means?
thanks
Chances are this occurs due to the integer type used - this is known as an integer overflow. It happens when an arithmetic operation attempts to create a numeric value that is too large for the storage space you declared.
However, there may be other reasons as well. From what I noticed, the values are kind of big. May I ask what the size of the image you are running the contours and moments on is? I have done something similar myself, and the values were way too big.
If the image you are processing is not of humongous size (like really, really big), then there might be something wrong with the code. Please edit the question to include the code, so that people on Stack Overflow can help you.
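One common cause worth checking: -2147483648 is INT_MIN, which is what the float-to-int conversion typically yields when a contour has m00 == 0 and the centroid division produces a non-finite value. A minimal sketch of a guarded version:

std::vector<std::vector<cv::Point>> contours;
cv::findContours(binary, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
for (const auto& c : contours) {
    cv::Moments m = cv::moments(c);
    if (m.m00 == 0) continue; // degenerate contour: no valid centroid
    cv::Point center(cvRound(m.m10 / m.m00), cvRound(m.m01 / m.m00));
    std::cout << center << std::endl;
}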
These are some links on how to find the centre of a circle; I hope you find them useful. I used them when I was writing a similar program back then.
How to find the coordinates of a point w.r.t another point on an image using OpenCV
http://lfhck.com/question/278176/python-and-opencv-how-do-i-detect-all-filledcirclesround-objects-in-an-image
Sorry, it was not about the type I used; it was the image quality that had problems. I had to blur that part so that it does not detect that color there.

How to detect Text Area from image?

I want to detect the text area of an image as a preprocessing step for the Tesseract OCR engine. The engine works well when the input is text only, but it fails when the input image contains non-text content. So I want to detect only the text content in the image; any idea of how to do that would be helpful. Thanks.
Take a look at this bounding box technique demonstrated with OpenCV code:
Input:
Eroded:
Result:
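A minimal sketch of that technique (the "Eroded" step above corresponds to dilating the inverted binary here; the kernel size is an assumption, chosen wide and short so characters on the same line merge into one blob):

cv::Mat gray, bw;
cv::cvtColor(input, gray, cv::COLOR_BGR2GRAY);
cv::threshold(gray, bw, 0, 255, cv::THRESH_BINARY_INV | cv::THRESH_OTSU);
cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(17, 3));
cv::dilate(bw, bw, kernel); // merge neighboring characters into line blobs
std::vector<std::vector<cv::Point>> contours;
cv::findContours(bw, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
for (const auto& c : contours)
    cv::rectangle(input, cv::boundingRect(c), cv::Scalar(0, 255, 0), 2);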
Well, I'm not very experienced in image processing, but I hope I can help you with my theoretical approach.
In most cases, text forms parallel, horizontal rows, and the space between rows contains lots of background pixels. This can be utilized to solve the problem.
So... if you sum up each pixel row of the image, you'll get a 1-pixel-wide image as output. When the input image contains text, the output will very likely show a periodic pattern, where dark areas are repeatedly followed by brighter areas. These "groups" of darker pixels indicate the positions of the text rows, while the brighter "groups" indicate the gaps between the individual rows.
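A minimal sketch of that projection in OpenCV (assuming dark text on a bright background; input is a placeholder name):

cv::Mat gray, bw, rowSums;
cv::cvtColor(input, gray, cv::COLOR_BGR2GRAY);
cv::threshold(gray, bw, 0, 255, cv::THRESH_BINARY_INV | cv::THRESH_OTSU);
// Collapse each row to a single value: text rows score high, gaps near zero.
cv::reduce(bw, rowSums, 1, cv::REDUCE_SUM, CV_32S);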
You'll probably find that the brighter areas are much smaller than the others. Text is much more regular than most other picture elements, so it should be easy to separate.
You have to implement a procedure to detect these periodic recurrences. Once the script can determine that the input picture has these characteristics, there's a high chance that it contains text. (However, this approach can't distinguish between actual text and simple horizontal stripes...)
For the next step, you must find a way to determine the boundaries of the paragraphs using the above-mentioned method. I'm thinking about a pretty simple algorithm, which would divide the input image into smaller, narrow stripes (50-100 px) and check these areas separately. Then it would compare the results to build a map of the possible areas filled with text. This method wouldn't be very accurate, but that probably doesn't bother the OCR system.
And finally, you need to use the text-map to run the OCR on the desired locations only.
On the other hand, this method would fail if the input text is rotated more than ~3-5 degrees. There's another drawback: if you have only a few rows, the pattern search will be very unreliable. More rows, more accuracy...
Regards, G.
I am new to stackoverflow.com, but I wrote an answer to a question similar to this one which may be useful to any readers who share this question. Whether or not the question is actually a duplicate, since this one was first, I'll leave up to others; if I should copy and paste that answer here, let me know. I also found this question first on Google rather than the one I answered, so this may benefit more people, especially since the linked answer provides different ways of going about getting text areas. For me, when I looked up this question, it did not fit my problem case.
Detect text area in an image using python and opencv
At the current time, the best way to detect text is by using EAST (An Efficient and Accurate Scene Text Detector).
The EAST pipeline is capable of predicting words and lines of text at arbitrary orientations on 720p images, and furthermore, can run at 13 FPS, according to the authors.
EAST quick start tutorial can be found here
EAST paper can be found here
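For completeness, a minimal sketch of running EAST through OpenCV's dnn module (assumes the pretrained frozen_east_text_detection.pb has been downloaded; input sides must be multiples of 32, and the mean values and output layer names below follow OpenCV's text_detection sample):

cv::dnn::Net net = cv::dnn::readNet("frozen_east_text_detection.pb");
cv::Mat blob = cv::dnn::blobFromImage(frame, 1.0, cv::Size(320, 320),
                                      cv::Scalar(123.68, 116.78, 103.94), true, false);
net.setInput(blob);
std::vector<cv::Mat> outs;
std::vector<cv::String> names{"feature_fusion/Conv_7/Sigmoid",  // text/no-text scores
                              "feature_fusion/concat_3"};       // rotated-box geometry
net.forward(outs, names);
// Decoding outs into rotated boxes plus non-max suppression follows the
// rest of OpenCV's text_detection sample.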