How to blend images in python (without blend method) - python-2.7

This is an assignment for my class, so I need to blend two images together in Python using interpolation, but I am missing something; perhaps you can help me understand what.
Here's my code so far:
from PIL import Image
import numpy as np
image_one=Image.open('capone.pgm')
image_two=Image.open('escobar.pgm')
out=Image.new(image_one.mode, image_two.size)
(l,h)=image_one.size
for j in range(0, h):
    for i in range(0, l):
        out.getpixel((i,j)),(image_one.getpixel((i,j)) * (1.0 - 0.3) + image_two.getpixel((i,j)) * 0.3 )
out.save("testaando.jpg","JPEG")
out.show()
0.3 is the alpha I want for the blending.
The two original images are the same size and mode.

The getpixel method of PIL.Image returns the value of a pixel; to modify a pixel you need to use the putpixel method. So instead of
out.getpixel((i,j)),(image_one.getpixel((i,j)) * (1.0 - 0.3) + image_two.getpixel((i,j)) * 0.3 )
use
out.putpixel((i,j), (image_one.getpixel((i,j)) * (1.0 - 0.3) + image_two.getpixel((i,j)) * 0.3 ))
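Putting it together, a minimal sketch of the corrected loop (assuming both images are single-band 'L'/PGM images of the same size, as stated; the result is rounded to an int because an 'L'-mode pixel holds an integer value):
from PIL import Image
alpha = 0.3
image_one = Image.open('capone.pgm')
image_two = Image.open('escobar.pgm')
out = Image.new(image_one.mode, image_two.size)
(l, h) = image_one.size
for j in range(0, h):
    for i in range(0, l):
        # linear interpolation between the two pixel values
        value = image_one.getpixel((i, j)) * (1.0 - alpha) + image_two.getpixel((i, j)) * alpha
        out.putpixel((i, j), int(round(value)))
out.save("testaando.jpg", "JPEG")
out.show()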

This is just a guess, as there is currently not much information.
The line:
out.getpixel((i,j)),(image_one.getpixel((i,j)) * (1.0 - 0.3) + image_two.getpixel((i,j)) * 0.3 )
Should be:
out[i, j] = (image_one.getpixel((i,j)) * (1.0 - 0.3) + image_two.getpixel((i,j)) * 0.3 )
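Alternatively, since numpy is already imported in the question, here is a vectorized sketch of the same blend (assuming, as stated, that both images have the same size and mode):
import numpy as np
from PIL import Image
alpha = 0.3
a = np.asarray(Image.open('capone.pgm'), dtype=np.float64)
b = np.asarray(Image.open('escobar.pgm'), dtype=np.float64)
# blend the whole arrays at once, then convert back to 8-bit pixels
blended = (1.0 - alpha) * a + alpha * b
out = Image.fromarray(blended.astype(np.uint8))
out.save("testaando.jpg", "JPEG")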

Related

Maximising precision in glsl

Related to this question.
I am suffering from a lack of precision in the following function in GLSL:
double LinearizeDepth(double depth)
{
    double z = depth * 2.0 - 1.0; // back to NDC
    return (2.0 * near_plane * far_plane) / (far_plane + near_plane - z * (far_plane - near_plane));
}
I want to threshold on a depth value like so:
abs(linearised_depth - threshold_value) < epsilon
My code works as expected when epsilon >= 0.5, but I want to reduce epsilon to ~0.1. Are there any tricks to help with this?
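For intuition, here is a small Python sketch of the same formula (near_plane = 0.1 and far_plane = 100.0 are assumed values, not taken from the question). It prints how much the linearized depth moves for a tiny change in the raw depth value, which is where a fixed epsilon starts to struggle:
near_plane, far_plane = 0.1, 100.0  # assumed clip planes, purely for illustration
def linearize_depth(depth):
    z = depth * 2.0 - 1.0  # back to NDC, as in the GLSL above
    return (2.0 * near_plane * far_plane) / (far_plane + near_plane - z * (far_plane - near_plane))
for d in (0.5, 0.9, 0.99, 0.999):
    print(d, linearize_depth(d), linearize_depth(d + 1e-4) - linearize_depth(d))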

Unexpected Value for the Calculation of Catalan's Constant

Here's the function:
arbFloat catalan(){
    arbFloat output, n = 0, first, second;
    std::string preComp, postComp;
    do{
        preComp = resizeArbtoString(first);
        first += (1.0 / pow(2.0, 4.0 * n)) * (
            (- 1.0 / (2.0 * pow(8.0*n + 2.0, 2.0))) +
            (1.0 / (4.0 * pow(8.0*n + 3.0, 2.0))) +
            (- 1.0 / (8.0 * pow(8.0*n + 5.0, 2.0))) +
            (1.0 / (8.0 * pow(8.0*n + 6.0, 2.0))) +
            (-1.0 / (16.0 * pow(8.0*n + 7.0, 2.0))) +
            (1.0 / (2.0 * pow(8.0*n + 1.0, 2.0)))
        );
        postComp = resizeArbtoString(first);
        n++;
    } while(preComp != postComp);
    n = 0;
    do{
        preComp = resizeArbtoString(second);
        second += (1.0 / pow(2.0 , 12.0 * n)) * (
            (1.0 / (16.0 * pow(8.0*n + 2, 2.0))) +
            (1.0 / (64.0 * pow(8.0*n + 3, 2.0))) +
            (-1.0 / (512.0 * pow(8.0*n + 5, 2.0))) +
            (-1.0 / (1024.0 * pow(8.0*n + 6, 2.0))) +
            (1.0 / (4096.0 * pow(8.0*n + 7, 2.0))) +
            (1.0 / (8.0 * pow(8.0*n + 1, 2.0)))
        );
        postComp = resizeArbtoString(second);
        n++;
    } while(preComp != postComp);
    output = (3.0*first) - (2.0*second);
    return output;
}
This code is meant to calculate the value of Catalan's constant. I am using the rapidly converging series rather than the slowly converging one.
arbFloat is a Boost.Multiprecision cpp_dec_float with a precision of 100 decimal digits:
typedef boost::multiprecision::number<boost::multiprecision::cpp_dec_float<100>> arbFloat;
The function resizeArbtoString() simply converts the variable to a string and resizes it to account for the leading digit and the decimal mark. It isn't important here, and I have verified that it does not affect the result.
I take the value of first before and after each iteration and stop once they agree, i.e. once the added term falls below the available precision (the same for the variable second); the do-while loop checks this. This is what makes it an iterative method.
The two do-while loops separately compute the two summations in the equation (below). output is then 3 times the first summation minus 2 times the second.
This function is returning the value of:
0.9159456632705320620288006061761625716626752650763000222738355046579667073981960968090933049341304381
When the value should instead be:
0.915965594177219015054603514932384110774149374281672134266498119621763019776254769479356512926115106248574
(correct constant value via OEIS)
Here is the equation that I referenced when creating this function:
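(The equation image has not survived; reconstructed from the code above, with the sign of the 1/4096 term taken from the answer below, the series appears to be:)
G = 3\sum_{n=0}^{\infty}\frac{1}{2^{4n}}\left[\frac{1}{2(8n+1)^2}-\frac{1}{2(8n+2)^2}+\frac{1}{4(8n+3)^2}-\frac{1}{8(8n+5)^2}+\frac{1}{8(8n+6)^2}-\frac{1}{16(8n+7)^2}\right]
\quad - 2\sum_{n=0}^{\infty}\frac{1}{2^{12n}}\left[\frac{1}{8(8n+1)^2}+\frac{1}{16(8n+2)^2}+\frac{1}{64(8n+3)^2}-\frac{1}{512(8n+5)^2}-\frac{1}{1024(8n+6)^2}-\frac{1}{4096(8n+7)^2}\right]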
Why is this returning an imprecise value?
Your 1.0 / 4096.0 term in second should be -1.0 / 4096.0.
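As a quick cross-check, here is a small mpmath sketch of the same two sums with that sign corrected (using a fixed number of terms instead of the string-comparison stopping rule; the first sum gains roughly 1.2 digits per term, so 90 terms cover 100 digits):
from mpmath import mp, mpf, catalan
mp.dps = 105  # a little more than the 100 digits used by arbFloat
first = mpf(0)
second = mpf(0)
for n in range(90):
    first += mpf(1) / 2**(4*n) * (
        mpf(1)/(2*(8*n + 1)**2) - mpf(1)/(2*(8*n + 2)**2) + mpf(1)/(4*(8*n + 3)**2)
        - mpf(1)/(8*(8*n + 5)**2) + mpf(1)/(8*(8*n + 6)**2) - mpf(1)/(16*(8*n + 7)**2))
    second += mpf(1) / 2**(12*n) * (
        mpf(1)/(8*(8*n + 1)**2) + mpf(1)/(16*(8*n + 2)**2) + mpf(1)/(64*(8*n + 3)**2)
        - mpf(1)/(512*(8*n + 5)**2) - mpf(1)/(1024*(8*n + 6)**2) - mpf(1)/(4096*(8*n + 7)**2))
print(3*first - 2*second)  # should now agree with the reference value above
print(+catalan)            # Catalan's constant as computed by mpmath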

Accuracy warnings in scipy.special

I am running an MCMC sampler which requires the calculation of the hypergeometric function at each step using scipy.special.hyp2f1().
At certain points on my grid (which I do not care about) the solutions to the hypergeometric function are quite unstable and SciPy prints the warning:
Warning! You should check the accuracy
This is rather annoying, and over thousands of samples it may well slow down my routine.
I have tried using special.errprint(0) with no luck, as well as disabling all warnings in Python using both the warnings module and the -W ignore flag.
The offending function (called from another file) is below:
from numpy import pi, hypot, real, imag
import scipy.special as special

def deflection_angle(p, (x1, x2)):
    # Find the normalisation constant
    norm = (p.f * p.m * (p.r0 ** (t - 2.0)) / pi) ** (1.0 / t)
    # Define the complex plane
    z = x1 + 1j * x2
    # Define the radial coordinates
    r = hypot(x1, x2)
    # Truncate the radial coordinates
    r_ = r * (r < p.r0).astype('float') + p.r0 * (r >= p.r0).astype('float')
    # Calculate the radial part
    radial = (norm ** 2 / (p.f * z)) * ((norm / r_) ** (t - 2))
    # Calculate the angular part
    h1, h2, h3 = 0.5, 1.0 - t / 2.0, 2.0 - t / 2.0
    h4 = ((1 - p.f ** 2) / p.f ** 2) * (r_ / z) ** 2
    special.errprint(0)
    angular = special.hyp2f1(h1, h2, h3, h4)
    # Assemble the deflection angle
    alpha = (- radial * angular).conjugate()
    # Separate real and imaginary parts
    return real(alpha), imag(alpha)
Unfortunately, hyp2f1 is notoriously hard to compute over some non-trivial regions of the parameter space. Many implementations silently produce inaccurate or wildly wrong results there; scipy.special at least tries to monitor convergence. An alternative would be to use an arbitrary-precision implementation, e.g. mpmath, but that would certainly be quite a bit slower, so MCMC users beware.
EDIT: OK, this seems to be scipy-version dependent. I tried @wrwrwr's example on scipy 0.13.3, and it reproduces what you see: "Warning! You should check the accuracy" is printed regardless of the errprint status. However, doing the same with the dev version, I get:
In [12]: errprint(True)
Out[12]: 0
In [13]: hyp2f1(0.5, 2/3., 1.5, 0.09j+0.75j)
/home/br/virtualenvs/scipy_py27/bin/ipython:1: SpecialFunctionWarning: scipy.special/chyp2f1: loss of precision
#!/home/br/virtualenvs/scipy_py27/bin/python
Out[13]: (0.93934867949609357+0.15593972567482395j)
In [14]: errprint(False)
Out[14]: 1
In [15]: hyp2f1(0.5, 2/3., 1.5, 0.09j+0.75j)
Out[15]: (0.93934867949609357+0.15593972567482395j)
So, apparently it got fixed at some point between 2013 and now. You might want to upgrade your scipy version.
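If you would rather keep errprint on but silence the message for specific calls, here is a sketch using the standard warnings machinery (this assumes a scipy version recent enough that the message is emitted as scipy.special.SpecialFunctionWarning, as in the session above):
import warnings
import scipy.special as special
# ignore scipy.special's precision-loss warnings just for this noisy call
with warnings.catch_warnings():
    warnings.simplefilter('ignore', special.SpecialFunctionWarning)
    angular = special.hyp2f1(0.5, 2/3., 1.5, 0.09j + 0.75j)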

OpenCV Brightness & Contrast like in Gimp

I want to adjust the contrast in Python's OpenCV like in GIMP. I want to do this:
In GIMP it is simple: I just use Colors -> Brightness & Contrast and set the contrast to 127.
But I don't know how to do this in Python, and I cannot find any solution to this problem.
Check here. The formula is:
new_value = (old_value - 0.5) × contrast + 0.5 + brightness
In Python you can easily apply this as a matrix operation. Note that pixel values here are in [0, 1], so 0.5 is equivalent to 127 when the range is [0, 255].
The same link also gives the code with pixel-wise operations (copied here just for completeness); a NumPy/OpenCV sketch follows the snippet:
function transform(value, brightness, contrast)
  return (value - 0.5) * contrast + 0.5 + brightness
end
function transform_rgb(r, g, b, brightness, contrast)
  return transform(r, brightness, contrast), transform(g, brightness, contrast), transform(b, brightness, contrast)
end
function bcontrast_get_rgb(x, y, brightness, contrast)
  r, g, b = get_rgb(x, y)
  return transform_rgb(r, g, b, brightness, contrast)
end
function bcontrast(brightness, contrast)
  for y = 0, height-1 do
    for x = 0, width-1 do
      set_rgb(x, y, bcontrast_get_rgb(x, y, brightness, contrast))
    end
  end
  flush()
end
bcontrast(0.25, 2.0)
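And here is a minimal NumPy/OpenCV sketch of the same formula (the file names and the brightness/contrast values are placeholders; the image is scaled to [0, 1] first so the formula applies as written):
import cv2
import numpy as np
def adjust(img, brightness=0.0, contrast=1.0):
    # img is an 8-bit image from cv2.imread; work in [0, 1] as in the formula above
    x = img.astype(np.float32) / 255.0
    x = (x - 0.5) * contrast + 0.5 + brightness
    return (np.clip(x, 0.0, 1.0) * 255.0).astype(np.uint8)
img = cv2.imread('input.png')  # placeholder file name
out = adjust(img, brightness=0.0, contrast=2.0)
cv2.imwrite('output.png', out)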
The formula for contrast and brightness used by GIMP is defined in this file : https://github.com/GNOME/gimp/blob/GIMP_2_10_10/app/operations/gimpoperationbrightnesscontrast.c
if (brightness < 0.0)
  value = value * (1.0 + brightness);
else
  value = value + ((1.0 - value) * brightness);
value = (value - 0.5) * slant + 0.5;
Where
brightness = config->brightness / 2.0;
slant = tan ((config->contrast + 1) * G_PI_4);
Assuming the image was normalized to the [0, 1] range beforehand.
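Transcribed into Python/NumPy, a sketch of that formula (assuming an 8-bit image, and assuming brightness and contrast have already been normalized to [-1, 1], i.e. the GIMP slider value divided by 127; that normalization is my assumption, not part of the GIMP source quoted above):
import numpy as np
def gimp_brightness_contrast(img, brightness=0.0, contrast=0.0):
    # brightness and contrast in [-1, 1]; img is an 8-bit array
    value = img.astype(np.float32) / 255.0
    b = brightness / 2.0
    slant = np.tan((contrast + 1.0) * np.pi / 4.0)
    if b < 0.0:
        value = value * (1.0 + b)
    else:
        value = value + (1.0 - value) * b
    value = (value - 0.5) * slant + 0.5
    return (np.clip(value, 0.0, 1.0) * 255.0).astype(np.uint8)
Under that assumed normalization, the questioner's contrast of 127 would map to contrast = 1.0 here, which makes slant very large, i.e. an almost binary threshold around 0.5.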

Percentage calculation around 0.5 (0.4 = -20% and 0.6 = +20%)

I'm in a situation where I have a value centred on 0.5, and I want values from 0.5 to 1 to map to a positive percentage and values from 0.5 to 0 to map to a negative percentage.
As the title says, 0.4 should be -20%, 0.3 should be -40% and 0.1 should be -80%.
I'm sure this is a simple problem, but my mind is just refusing to figure it out :)
Can anyone help? :)
What we want to do is scale the range (0, 1) to (-100, 100):
percentage = (value - 0.5) * 200;
The subtraction shifts the value into the range (-0.5, 0.5), and the multiplication scales it to (-100, 100).
percent = ((value - 0.5) / 0.5) * 100
This will generate values from -100 to 100: subtract your zero point (0.5) from the given value, divide by the range that should map to 100% (also 0.5 in your example), then multiply by 100 to convert to a percentage.
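As a tiny Python illustration of the same arithmetic:
def to_percent(value, zero=0.5, span=0.5):
    # (value - zero) / span maps the input to -1..1; times 100 gives -100..100
    return (value - zero) / span * 100.0
print(to_percent(0.4))   # about -20.0 (exact apart from float rounding)
print(to_percent(0.25))  # -50.0
print(to_percent(0.6))   # about 20.0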
Normalize it, and you're done:
// Assuming x is in the range (0,1)
x *= 2.0; // x is in the range (0,2)
x -= 1.0; // (-1,1)
x *= 100; // (-100,100)