How to make a space between two side-by-side images in xaringan slides using R Markdown

I want to include two images side by side, but I don't know how to add space between them. My code is the following:
```{r, echo=FALSE, out.height="30%", out.width="40%"}
knitr::include_graphics(c("url_01", "url_02"))
```
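One common approach in xaringan is the built-in `.pull-left` / `.pull-right` column classes, which place content side by side with a gap between the columns. A sketch, reusing the two URLs from the chunk above:

```
.pull-left[
![](url_01)
]

.pull-right[
![](url_02)
]
```

Each class takes a bit less than half the slide width, so the leftover width becomes the space between the images.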

Related

Improving output quality of 'ImageChops'

I've implemented ImageChops to find the difference between two images. The differences are pointed out accurately in the output, but the picture lacks clarity, and sometimes the contents of the output overlap.
Is it possible to point out the differences such that the whole image is displayed and the differences are highlighted, the way matchTemplate() displays the complete image and lets us use a bounding box to highlight the matched areas?
Main image:
Image to be compared with:
Output:
My code:
```python
from PIL import Image, ImageChops

img1 = Image.open("C:/ImageComparison/Images/img1.png")
img2 = Image.open("C:/ImageComparison/Images/img2.png")
diff = ImageChops.difference(img1, img2).convert('RGB')
diff.show()

How to increase the figure size in Html output R Markdown (Knitr)

I have generated an output file with R markdown (see below)
In the image, the figures are displayed in the HTML file but much smaller than the text width. I tried adjusting the width of the image, but it doesn't change in the output file. I want the image to be as wide as the gray bars. How can I adjust this?
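For reference, figure size in knitr is usually controlled per chunk: `fig.width`/`fig.height` set the size in inches at which R renders the plot, and `out.width` sets how wide it is displayed in the HTML. A hedged sketch, since the original chunk isn't shown (the plot is a placeholder):

```{r, fig.width=8, fig.height=5, out.width="100%"}
plot(cars)  # placeholder; substitute the actual plotting code
```

With `out.width="100%"` the figure stretches to the full width of the text column.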

How can I use different colors for different labels in TensorBoard embedding visualization?

I am visualizing sentence embedding using tensorboard. I have label for each sentence embedding. How can I set a color for each label?
For example
Embedding vector                 Label
[0.2342 0.2342 0.234 0.8453]     A
[0.5342 0.9342 0.234 0.1453]     B
[0.7342 0.0342 0.124 0.8453]     C
[0.8342 0.5342 0.834 0.5453]     A
I am able to visualize the embedding vectors, where each row is labeled by its label. I also want to set colors so that points with the same label have the same color: all "A" red, all "B" green, all "C" blue, and so on.
I searched on Google but didn't find any sample.
Could anyone please share some code to get this done?
Thank you in advance.
There should be a "Color by" drop-down that you can use.
In case it is not showing up, one possible reason is that you have more than 50 unique labels, which is the hard-coded limit in the current TensorFlow code.
Refer to this thread for details:
https://github.com/tensorflow/tensorboard/issues/61
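As a sketch of where that drop-down gets its data: the projector reads labels from a metadata TSV file referenced in the projector config. With more than one column, the file needs a header row, and each column then appears as a choice under "Color by". The file name and labels below are made up:

```python
import csv

labels = ["A", "B", "C", "A"]            # one label per embedding row (made-up data)

# Multi-column metadata needs a header row; each column then shows up
# as an option in the projector's "Color by" menu.
with open("metadata.tsv", "w", newline="") as f:
    writer = csv.writer(f, delimiter="\t")
    writer.writerow(["Index", "Label"])
    for i, label in enumerate(labels):
        writer.writerow([i, label])
```

If the metadata file has only a single column, it must have no header row at all; the header is required only once there are two or more columns.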

How to show extremal region of text in opencv

I have code from here. It is a reference implementation of Neumann L., Matas J.: Real-Time Scene Text Localization and Recognition, CVPR 2012.
The result is rectangles around the text regions, but I need output like this for analysis.
How can I show the extremal regions of text in this code? I tried to show the image with imshow("", regions);, but it does not work.
in this code
```cpp
vector< vector<Vec2i> > region_groups;
vector<Rect> groups_boxes;
erGrouping(src, channels, regions, region_groups, groups_boxes, ERGROUPING_ORIENTATION_HORIZ);
```
Sorry for my mistakes; I'm not strong in English either!
I would like to draw region_groups into the white picture
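In C++ you would iterate over each group's pixel coordinates and set them in a white Mat the size of src, then imshow that Mat. The same idea as a small numpy sketch (the coordinates here are made up; in the real code they would come from region_groups):

```python
import numpy as np

h, w = 20, 30                                   # size of the source image (made-up)
canvas = np.full((h, w), 255, dtype=np.uint8)   # the white picture

# Hypothetical stand-in for one group's pixel coordinates (x, y),
# which erGrouping stores as vector<Vec2i> entries in region_groups
region_pixels = [(5, 4), (6, 4), (7, 5), (8, 5)]

for x, y in region_pixels:
    canvas[y, x] = 0                 # draw region pixels in black (row = y, col = x)
```

The key detail is indexing: image arrays are addressed as (row, column), i.e. (y, x), while the stored coordinates are (x, y).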

How to detect image location before stitching with OpenCV / C++

I'm trying to merge/stitch two images together, but found that the default stitcher class in OpenCV could not handle my images.
So I started to write my own.
Unfortunately the images are too large to attach to this message (they are both 12600x9000 pixels in size), so I'll try to explain as well as possible.
The two images are not pictures taken by a camera but TIFF files extracted from a PDF file.
The images are actually CAD drawings, so there are not many gradients in them, and I think that is why the default stitcher class could not handle them.
So far, I managed to extract the features and match them.
I also used the following well-known example to stitch them together:
```cpp
Mat WarpedImage;
cv::warpPerspective(img_2, WarpedImage, homography, cv::Size(2 * img_2.cols, 2 * img_2.rows));
Mat half(WarpedImage, Rect(0, 0, img_1.cols, img_1.rows));
img_1.copyTo(half);
```
I sort of made it fit, but my problem is that in my case the two images could be aligned vertically or horizontally.
By default, all stitching examples on the internet assume the first image is the left image and the second image is the right image.
So my first question would be:
How can I detect whether the second image is to the left of, to the right of, above, or below the first image, and create a properly sized new image?
Secondly:
Currently I'm getting the proper image; however, because I don't have decent code to determine the ideal width and height of the new image, I have a lot of black/empty space in the new image.
What would be the best C++ code to remove those black areas?
(I'm seeing a lot of Python scripts on the net, but no C++ examples of this, and I have zero Python skills.)
Thank you very much in advance for your help.
Greetings,
Floris.
You can reproject the corners of the second image with perspectiveTransform. With the transformed points you can find the relative position of your image and calculate the new image size that will fit both images. This will also let you deal with the black areas, since you have the boundaries of the two images.
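As a sketch of the math behind that answer (plain numpy standing in for cv::perspectiveTransform; the homography and image sizes are made-up values): map the four corners of the second image through the homography, then take the bounding box of those corners together with the first image's corners. That gives both the relative position and a tight canvas size with no leftover black borders.

```python
import numpy as np

# Hypothetical 3x3 homography mapping img_2 into img_1's frame
# (in the question it would come from the matched features)
H = np.array([[1.0, 0.0, 800.0],   # pure translation: img_2 sits 800 px to the right
              [0.0, 1.0,  50.0],
              [0.0, 0.0,   1.0]])

w2, h2 = 1200, 900                 # size of img_2 (made-up values)
corners = np.array([[0, 0], [w2, 0], [w2, h2], [0, h2]], dtype=float)

# Apply the homography manually (same math as cv::perspectiveTransform):
# homogeneous multiply, then divide by w
pts = (H @ np.hstack([corners, np.ones((4, 1))]).T).T
pts = pts[:, :2] / pts[:, 2:3]

# Bounding box of img_1 (at the origin) plus the warped corners of img_2
w1, h1 = 1000, 800                 # size of img_1 (made-up values)
all_x = np.concatenate([pts[:, 0], [0, w1]])
all_y = np.concatenate([pts[:, 1], [0, h1]])
x_min, x_max = all_x.min(), all_x.max()
y_min, y_max = all_y.min(), all_y.max()

canvas_w = int(np.ceil(x_max - x_min))   # tight canvas: fits both images exactly
canvas_h = int(np.ceil(y_max - y_min))
offset = (-x_min, -y_min)                # translate both images by this before pasting
```

The signs of the warped corners also answer the first question: if x_min is negative, the second image extends to the left of the first; if y_min is negative, it extends above, and so on.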