How to center image with Markdown? - r-markdown

I have tried to center an image without success. I have found that you can center text, but I have not found any examples of how to center an image. Thanks in advance to anyone who knows how.

Set fig.align="center" in your chunk options:
{r chunkName, fig.align="center", out.height="90%", out.width="90%", fig.cap="Whatever.\\label{whatever}"}
knitr::include_graphics("figures/whatever.png")


How can I use the Viola-Jones algorithm to detect the face as a region of interest and crop it to the rectangle box?

I want to detect the face in the video frame and remove the other elements, such as the background, so that I can focus only on the facial region. For this I need to use the Viola-Jones algorithm. Can anyone give me a hint or a suitable answer?
import cv2
import sys

imagep = '6.jpg'  # or sys.argv[1]
face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
i = cv2.imread(imagep)
gray = cv2.cvtColor(i, cv2.COLOR_BGR2GRAY)  # convert to grayscale for the cascade
f = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5, minSize=(30, 30), flags=cv2.CASCADE_SCALE_IMAGE)
print("Found {0} faces!".format(len(f)))
for (x, y, w, h) in f:
    cv2.rectangle(i, (x, y), (x + w, y + h), (0, 255, 0), 2)  # draw a box around each detected face
cv2.imshow("Faces found", i)
cv2.waitKey(0)
Once you have the upper-left and bottom-right coordinates of the rectangle that contains the face, you can simply crop the original image using those coordinates. Suppose the initial image is stored in the frame variable:
face = face_cascade.detectMultiScale(frame, scaleFactor=1.8, minNeighbors=5)  # detects the face coordinates as (x, y, w, h)
resultant_image = frame[face[0][1]:face[0][1] + face[0][3], face[0][0]:face[0][0] + face[0][2], :]  # gives you the cropped image
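For completeness, here is a minimal end-to-end sketch of the same idea (the file name is an assumption; it also checks that at least one face was actually found before cropping):
import cv2

img = cv2.imread('6.jpg')  # assumed input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

if len(faces) > 0:
    x, y, w, h = faces[0]            # first detected face
    cropped = img[y:y + h, x:x + w]  # keep only the facial region
    cv2.imshow("Cropped face", cropped)
    cv2.waitKey(0)
else:
    print("No face found")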

How to center-align text at custom x, y coordinates using reportlab

I am trying to develop a Python app. The app should print text on an A4 page; the A4 sheet already contains 4 rectangular boxes, and I have to place center-aligned text in each box. An image is attached as a reference (sample image).
I have written something like this using reportlab:
self.canvas = canvas.Canvas(name, pagesize=landscape(A4))
self.canvas.drawCentredString(x1,y1,"c1")
But I am not achieving my goal.
rlhelper has some useful functions which may be of assistance here.
If you can identify the mid-point x-coordinate of each box, the following function can work:
import rlhelper
simpleCentredstring('canvas', 'x', 'y', 'string', 'fontcolor', 'font (+weight)','fontsize')
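If you would rather stay with plain reportlab, its Canvas.drawCentredString(x, y, text) already centres the text horizontally on the x you give it, so the main task is to pass the horizontal midpoint of each box (and nudge y down to roughly centre it vertically). A minimal sketch, where the box coordinates are assumptions for illustration:
from reportlab.lib.pagesizes import A4, landscape
from reportlab.pdfgen import canvas

c = canvas.Canvas("boxes.pdf", pagesize=landscape(A4))
c.setFont("Helvetica", 12)

# Assumed box rectangles as (x, y, width, height); substitute your real coordinates.
boxes = [(50, 400, 200, 100), (300, 400, 200, 100),
         (50, 250, 200, 100), (300, 250, 200, 100)]
labels = ["c1", "c2", "c3", "c4"]

for (x, y, w, h), text in zip(boxes, labels):
    mid_x = x + w / 2          # horizontal centre of the box
    mid_y = y + h / 2 - 6      # roughly centre the 12pt text vertically
    c.drawCentredString(mid_x, mid_y, text)

c.save()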

Finding spots and lines in an image

I am working on the following images to find the lines and spots in them. I am working with OpenCV in C++. I have tried the HoughLinesP, HoughLines, contour and Canny methods but couldn't get results. If someone can help or write pseudo-code, I shall be grateful.
Thanks.
Image to detect line:
Image to detect spot:
Mmmmm - where did you get those awful images? Well, they were worth 2 minutes of effort. The spot is lighter than the rest of the image, so if you divide the image into 100 rectangles and find the brightest ones, you will probably get it. I use ImageMagick here, just at the command line; it is installed on most Linux distros and available for OSX and Windows:
convert noise.jpg -crop 10x10# -format "%[mean] %g\n" info: | sort -n
32123.3 640x416+384+291
32394.6 640x416+256+42
32442.2 640x416+320+125
32449.1 640x416+384+250
32459.6 640x416+192+374
32464.4 640x416+0+374
32486.5 640x416+448+125
32491.4 640x416+576+374
32493.7 640x416+576+333
32504.3 640x416+576+83
32520.9 640x416+576+0
32527 640x416+448+0
32621.8 640x416+384+333
32624.1 640x416+320+42
32631.3 640x416+192+333
32637.8 640x416+384+42
32643.4 640x416+512+0
32644.2 640x416+0+0
32652.6 640x416+384+83
32659.1 640x416+128+374
32660.4 640x416+320+208
32662.2 640x416+384+0
32668.5 640x416+256+208
32669.4 640x416+0+333
32676.7 640x416+256+250
32683.5 640x416+256+83
32699.7 640x416+0+208
32701.3 640x416+64+166
32704 640x416+576+208
32704 640x416+64+333
32707.5 640x416+512+208
32710.8 640x416+192+83
32729.8 640x416+320+83
32733.4 640x416+256+166
32735 640x416+576+250
32741 640x416+256+125
32745.4 640x416+0+166
32748.4 640x416+320+166
32751.4 640x416+512+166
32752.4 640x416+512+42
32755.1 640x416+384+208
32770.9 640x416+448+291
32776.8 640x416+128+166
32777.1 640x416+256+0
32795.8 640x416+512+125
32801.5 640x416+128+333
32803.3 640x416+192+125
32805.5 640x416+256+374
32809.6 640x416+448+166
32810 640x416+576+166
32822.2 640x416+0+291
32822.8 640x416+576+42
32826.8 640x416+320+333
32831.7 640x416+320+0
32834.8 640x416+192+42
32837.6 640x416+192+166
32843 640x416+384+125
32862 640x416+64+374
32865.8 640x416+0+42
32871.5 640x416+576+291
32872.5 640x416+0+83
32872.8 640x416+448+333
32873.6 640x416+320+291
32877.5 640x416+448+42
32880.5 640x416+64+208
32883.5 640x416+128+42
32883.9 640x416+192+208
32885.5 640x416+128+208
32889.2 640x416+256+333
32921 640x416+192+291
32923.3 640x416+64+291
32929.2 640x416+512+374
32935.4 640x416+192+250
32938.4 640x416+64+250
32943.5 640x416+448+374
32953.3 640x416+384+374
32954.7 640x416+320+374
32962 640x416+320+250
32966.9 640x416+448+83
32967.3 640x416+128+291
32968.3 640x416+0+250
32970.8 640x416+512+333
32974.5 640x416+64+0
32979.6 640x416+512+291
32983.6 640x416+256+291
32988.9 640x416+448+250
32993.3 640x416+576+125
33012.7 640x416+0+125
33057.3 640x416+512+250
33068.6 640x416+128+250
33102.9 640x416+64+42
33126.1 640x416+512+83
33127.9 640x416+384+166
33139.2 640x416+192+0
33141.4 640x416+64+83
33142.3 640x416+64+125
33181.5 640x416+448+208
33190.8 640x416+128+0
34693 640x416+128+125
36178.3 640x416+128+83
The last 2 rectangles are the brightest, so if I box them in red and blue you can see what it has found:
convert noise.jpg -fill none -stroke red -draw "rectangle 128,83 192,123" -stroke blue -draw "rectangle 128,125 192,168" result.png
Alternatively, you could create a new image in which each pixel is the mean of the 50x50 square of surrounding pixels in the original image, like this:
convert noise.jpg -virtual-pixel edge -statistic mean 50x50 -auto-level result.png
Of course, you can also threshold that:
convert noise.jpg -virtual-pixel edge -statistic mean 50x50 -auto-level -threshold 80% result.png
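Since the question mentions OpenCV, the same divide-into-tiles-and-pick-the-brightest idea can also be sketched in Python with OpenCV (the file name and the 10x10 grid are taken from the example above; treat this as a sketch rather than a tuned solution):
import cv2

img = cv2.imread('noise.jpg', cv2.IMREAD_GRAYSCALE)  # assumed file name
h, w = img.shape
rows, cols = 10, 10  # same 10x10 grid as the ImageMagick example

best_mean, best_rect = -1.0, None
for r in range(rows):
    for c in range(cols):
        y0, y1 = r * h // rows, (r + 1) * h // rows
        x0, x1 = c * w // cols, (c + 1) * w // cols
        m = img[y0:y1, x0:x1].mean()  # average brightness of this tile
        if m > best_mean:
            best_mean, best_rect = m, (x0, y0, x1, y1)

x0, y0, x1, y1 = best_rect
out = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
cv2.rectangle(out, (x0, y0), (x1, y1), (0, 0, 255), 2)  # mark the brightest tile in red
cv2.imwrite('spot.png', out)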
As regards the lines, I want to use some type of mode filter to detect the most frequently occurring values within small areas. But because the colours vary, I first need to reduce the colour palette so that similarly coloured pixels actually match, so I would go with an approach something like this, which reduces the colours and then calculates the mode:
convert noise2.jpg -colors 8 -statistic mode 8x8 result.jpg
It needs refinement, but you get the idea hopefully.
Alternatively, you could calculate a new image in which each pixel is the standard deviation of the surrounding 3x3 pixels in the original image, and then look for the areas where this value is lowest, i.e. where the result is darkest, which corresponds to the areas of the input image with the least variation in pixel colour:
convert noise2.png -statistic standarddeviation 3x3 -auto-level result.png
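The local-standard-deviation trick translates to OpenCV in Python as well, using the identity std = sqrt(E[x^2] - (E[x])^2) over a small window; the window size and file names below are assumptions:
import cv2
import numpy as np

img = cv2.imread('noise2.png', cv2.IMREAD_GRAYSCALE).astype(np.float32)

# Local mean and local mean-of-squares over a 3x3 window.
mean = cv2.blur(img, (3, 3))
mean_sq = cv2.blur(img * img, (3, 3))
std = np.sqrt(np.maximum(mean_sq - mean * mean, 0))

# Stretch to 0-255 (roughly what -auto-level does); the darkest areas are
# where neighbouring pixels vary least, i.e. the candidate line regions.
std_norm = cv2.normalize(std, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
cv2.imwrite('stddev.png', std_norm)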

SetViewBox moving the paper

I am using the setViewBox() function in Raphael 2. The width and height are multiplied by a value like 1.2, 1.3, etc. This changes the magnification/zooming properly, but with the x and y that I have given as 0, 0, the paper displays its contents at some offset. If I modify the x and y to some positive values after rendering (using Firebug!), the top left of the paper moves back up and left to its correct position. I want to know how this value should be calculated; I have no idea how x and y affect the viewBox. If anybody can give me any pointers, it would be a real help.
I have tried giving the difference between the widths/heights divided by 2. I should also mention that I am not rendering an image, but various Raphael shapes (rects, paths, text, etc.) on my paper.
Looking forward to some help!
Kavita
This is an example showing how to calculate the setViewBox values. I included jQuery to get my SVG container's X and Y ($("#"+map_name).offset().left and $("#"+map_name).offset().top), and after that I calculated how much zoom I need:
var original_width = 777;
var original_height = 667;
var zoom_width = map_width*100/original_width/100;
var zoom_height = map_height*100/original_height/100;
if (zoom_width <= zoom_height)
    zoom = zoom_width;
else
    zoom = zoom_height;
rsr.setViewBox($("#"+map_name).offset().left, $("#"+map_name).offset().top, (map_width/zoom), (map_height/zoom));
Did you set the center of your scaling to 0,0, like this:
element.scale(1.2,1.2,0,0);
This scales your element without moving the coordinates of the top-left corner.

Creating QGradient

Right now I am just trying to create a circle with a gradient fill:
//I want the center to be at 10, 10 in the circle and the radius to be 50 pixels
QRadialGradient radial(QPointF(10, 10), 50);
radial.setColorAt(0, Qt::black); //I want the center to be black
radial.setColorAt(1, Qt::white); //I want the sides to be white
painter.setBrush(QBrush(radial));
painter.drawEllipse(/*stuff*/);
However, all this accomplishes is to show me a totally white circle. How can I rectify this?
I'll try to help you, but I don't speak English very well.
Unfortunately I also can't post images here at the moment, so I'll post them on another site.
Of course it will be white - you are using the wrong coordinates. Show me your /* stuff */ argument list, please.
You see, the gradient is defined in the widget's coordinate system (in your case it covers only a small area), so if you paint your ellipse in the wrong place it will certainly come out white: [see pic]
Set the gradient's coordinates correctly, e.g.:
QRadialGradient radial(QPointF(100, 100), 50);
// ...
painter.drawEllipse(50,50,100,100);
[see pic]
Change the line
radial.setColorAt(0, Qt::black);
to
radial.setColorAt(n, Qt::black);
where n is a number between 0 and 1.
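To make the coordinate fix above easy to try out, here is a minimal PyQt5 sketch (the question is C++, but the API is the same through the Python bindings; PyQt5 is assumed to be installed). The gradient's centre (100, 100) lies inside the ellipse's bounding rectangle, so the black-to-white gradient is actually visible:
import sys
from PyQt5.QtCore import Qt, QPointF
from PyQt5.QtGui import QPainter, QBrush, QRadialGradient
from PyQt5.QtWidgets import QApplication, QWidget

class Circle(QWidget):
    def paintEvent(self, event):
        painter = QPainter(self)
        # Centre the gradient on the same point the ellipse is drawn around.
        radial = QRadialGradient(QPointF(100, 100), 50)
        radial.setColorAt(0, Qt.black)  # black at the centre
        radial.setColorAt(1, Qt.white)  # white at the edge
        painter.setBrush(QBrush(radial))
        painter.drawEllipse(50, 50, 100, 100)  # bounding box centred on (100, 100)

if __name__ == "__main__":
    app = QApplication(sys.argv)
    w = Circle()
    w.resize(200, 200)
    w.show()
    sys.exit(app.exec_())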