ImageScaleToFit() with percentages - ColdFusion

I want to scale an image using a scale value passed to it from a FORM, but I'm getting syntax errors.
I have a hidden input in my form like this:
<input type="hidden" name="scale" value="0.16"/>
In my page that processes the image I want it to do this:
<cfset ImageScaleToFit(MyImg, ARGUMENTS.SCALE%, "", "highestQuality")/>
I know the above CFML is not correct, but I don't understand how to convert the scale value into a percentage that ColdFusion can work with. In this case, 0.16 means I want the image to be 16% of the original size, so I'm reducing it by 84%. If the scale was 3.5 then I need to increase the size of the image to 350% of the original.
How do I pass the value from the input into the ImageScaleToFit() function? Passing it in as above using ARGUMENTS.Scale% gives me syntax errors. For some reason it's not being evaluated as 0.16%.
How do I convert the actual scale value, e.g. 0.16, so that it ends up being a correct percentage that ColdFusion can work with to correctly scale the image either up or down?

ImageScaleToFit() expects a size in pixels, not a percentage. You need to take the original image width and multiply it by your scale value; for example, a 1,000px-wide image with a scale of 0.16 comes out at 160px wide, and with 3.5 at 3,500px:
<cfset ImageScaleToFit(MyImg, ARGUMENTS.SCALE * originalImageWidth, "", "highestQuality")/>


Why does resizing an image smaller than the original cause the filesize to increase?

I only resize images when they are bigger than the prescribed size, like so:
<cfset BodyImgMainW = 670 />
<cfset BodyImgMainH = 670 />
<cfimage name="BodyImg" source="c:\sample_1365x768.jpg"/>
<cfif BodyImg.width GT BodyImgMainW OR BodyImg.height GT BodyImgMainH>
    <cfif BodyImg.width GT BodyImgMainW>
        <cfset ImageResize(BodyImg, BodyImgMainW, "", "highestQuality")/>
    <cfelseif BodyImg.height GT BodyImgMainH>
        <cfset ImageResize(BodyImg, "", BodyImgMainH, "highestQuality")/>
    </cfif>
    <cfimage source="#BodyImg#" action="write" destination="C:\sample_670x377.jpg" overwrite="yes" quality="1" format="jpg"/>
</cfif>
The original image I uploaded for testing was 1,365px × 768px with a file size of 36.4KB. But once it's been resized to 670px × 377px it ends up with a file size of 89.5KB.
Why does it increase in file size when the image is actually smaller? Is there a way to tell ColdFusion not to do anything to the image other than just make it smaller?
ORIGINAL IMAGE 1,200px × 800px @ 192.3KB:
RESIZED IMAGE 670px × 447px @ 238.3KB:
TL;DR
JPEG by its nature has artifacts. Compressing an image in general creates artifacts. Doing both makes lots of artifacts. When you saved the image, a lot of space was used to encode artifacts.
The full story is long, but let me give the half-baked version. JPEG images are bitmapped images, but they are not an exact copy of the original bitmap. So what exactly are they?
Take your image and, rather than dealing with 1×1 pixels, consider an 8×8 block. Make the single pixel in the upper-left corner perfect when compared to the original. For the other 63, create a mathematical approximation of what they should be (see: https://en.wikipedia.org/wiki/Discrete_cosine_transform). Depending on how much detail you want, more and more of the remaining 63 values will be exactly right as opposed to merely close.
Now that all the data has been reduced, perform some basic lossless compression on the result.
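To make the 8×8 story concrete, here is a toy sketch of the two steps just described: transform a block with the 2D DCT, then throw away the small coefficients. This is my own illustration, not real JPEG code; a real encoder divides by a quality-dependent quantization table and zigzag-orders the coefficients before the lossless stage.

#include <cmath>

const int N = 8;
const double PI = 3.14159265358979323846;

// Forward 2D DCT-II of one 8x8 block (the transform JPEG is built on).
// coeff[0][0] is the "upper left corner" value: the block's average.
void dct8x8(const double block[N][N], double coeff[N][N])
{
    for (int u = 0; u < N; ++u)
        for (int v = 0; v < N; ++v) {
            double sum = 0.0;
            for (int x = 0; x < N; ++x)
                for (int y = 0; y < N; ++y)
                    sum += block[x][y]
                         * std::cos((2 * x + 1) * u * PI / (2.0 * N))
                         * std::cos((2 * y + 1) * v * PI / (2.0 * N));
            double cu = (u == 0) ? 1.0 / std::sqrt(2.0) : 1.0;
            double cv = (v == 0) ? 1.0 / std::sqrt(2.0) : 1.0;
            coeff[u][v] = 0.25 * cu * cv * sum;
        }
}

// "Quantization", crudely: zero out the coefficients too small to matter.
// The more you zero, the better the block compresses and the more the
// other 63 values are merely close instead of exactly right.
void quantize(double coeff[N][N], double threshold)
{
    for (int u = 0; u < N; ++u)
        for (int v = 0; v < N; ++v)
            if (std::fabs(coeff[u][v]) < threshold)
                coeff[u][v] = 0.0;
}

Noise from a previous encode-decode-resize round trip shows up as extra nonzero coefficients here, which is exactly the space the rest of this answer is talking about.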
Now let's take a look at what you did. You took an image that decoded out to 1365 × 768. That decoding did not recreate a perfect image; some of the pixels were off, because that is how JPEG works.
When you shrank it down, the new size was not a mathematical half, which means a single output pixel was not the average of exactly 4 source pixels (interpolation). So you are now averaging a lot of pixels, and keep in mind only 1 in 64 of them was actually correct.
A new image is created. It is also built from 8 × 8 blocks, but its 8 × 8 blocks now contain noise from the original encoding and from the interpolation. The encoder cannot know what was part of the expected image and what was noise, so it tries to encode all of the image so that it accurately represents what it was handed. Encoding all this noise takes up space.

Equivalent of MATLAB’s caxis in OpenCV

Recently I used the OpenCV library to process a gray image that I had previously processed on the MATLAB platform (where it works well), like this:
imagesc(I), colormap('jet'), caxis([0 1]); % show the pseudocolor picture
As you see, MATLAB has a function called caxis, which is used to scale the pseudocolor axis.
My question is: is there any function in OpenCV that provides the caxis functionality of MATLAB, or how should I implement it myself?
You have the applyColorMap function. However, it always maps values between 0 and 255, even if all your image's pixels have values between, say, 20 and 30.
If you want to apply the whole colormap between your min and max values, you will need an intermediary mapping to "normalize" your values to a [0-255] scale. Alternatively, you could create your own color map that takes a min and a max as arguments and builds its full table accordingly.
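For instance, a minimal sketch (untested; assumes OpenCV 3+, where applyColorMap lives in imgproc, and vmin/vmax play the role of MATLAB's caxis([vmin vmax])):

#include <opencv2/opencv.hpp>

cv::Mat pseudocolor(const cv::Mat& gray, double vmin, double vmax)
{
    // Linearly map [vmin, vmax] onto [0, 255]; convertTo saturates
    // values outside the range, just as caxis clips them.
    cv::Mat scaled;
    gray.convertTo(scaled, CV_8U, 255.0 / (vmax - vmin),
                   -vmin * 255.0 / (vmax - vmin));

    cv::Mat colored;
    cv::applyColorMap(scaled, colored, cv::COLORMAP_JET); // MATLAB's 'Jet'
    return colored;
}

Calling pseudocolor(I, 0.0, 1.0) would then be the rough equivalent of imagesc(I), colormap('jet'), caxis([0 1]).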

Using ImageMagick++ to modify image contrast/brightness

I'm trying to apply contrast and brightness to a bitmap in memory, and I'm completely lost. Currently I'm trying to use Magick++ to do it, but if one of the other APIs would work better, I'm all ears. I managed to find Magick::Image::sigmoidalContrast() for applying the contrast, but I can't figure out how to get it to work. I'm creating an image, passing it the buffer pointer, then calling that function, but it doesn't seem to change anything, so my first thought was that it's making a copy and modifying that. Even so, I have no idea how to get the data out of the Magick::Image object.
Here's what I got so far.
Magick::Image image(fBitmapData->mGetTextureWidth(), fBitmapData->mGetTextureHeight(), "RGBA", MagickCore::CharPixel, pixels);
image.sigmoidalContrast(1, 20.0);
The documentation is useless, and after searching I could only find hints that the first parameter is actually a boolean, even though it takes a size_t, specifying whether to add or subtract the contrast; the second value is something I have no idea what to pass, so I'm just using 20.0 to test.
So does anyone know if this will work for contrast, and if not, then how do you apply contrast? And likewise I still have no idea how to apply brightness either and can't find any functions that look like they would work.
Figured it out: the function I was using for contrast was correct, and for brightness I ended up using image.modulate(brightness, 100.0, 100.0);. To get the data out of the image object, you can grab the pixels of the entire image by doing:
const MagickCore::PixelPacket * magickPixels = image.getConstPixels(0, 0, image.columns(), image.rows());
And then copy the magickPixels data back into the original pixels that were passed into the image constructor. An important thing to note is that the member MagickCore::PixelPacket::opacity is not what you would think it is. If the pixel is completely transparent, you'd think the value would be 0, right? Well, for some reason ImageMagick does it the opposite way: for full transparency the value would be 255. This means you need to do 255 - opacity to get the correct value.
Also be careful of the MAGICKCORE_QUANTUM_DEPTH that ImageMagick was compiled with, as this will change the values drastically. For my build, MAGICKCORE_QUANTUM_DEPTH happened to be defined as 16, so all of the values were in a range of 0 to 65535, which I fixed by doing realValue = magickValue >> 8 when copying the data back over, since the texture data is unsigned char values.
Just for clarification on how to use these functions, since the documentation is horrible and completely wrong: the first parameter to sigmoidalContrast() is actually a boolean, even though the type is a size_t, specifying whether to increase the contrast (true) or reduce it (false), and the second is a range from 0.00001 to 20.0. I say 0.00001 because 0.0 is an invalid value, so it just needs to be some decimal that is close to, but not exactly, 0.0.
For modulate() the documentation says that each value should be specified as 1.0 for no change, which is completely wrong. The values are actually percentages, so for no change you would specify 100.0.
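Putting the whole answer together, here is a rough sketch of the flow described above (untested; it assumes an ImageMagick 6 build with MAGICKCORE_QUANTUM_DEPTH 16 and a raw RGBA buffer, and the function name and parameters are mine, not part of the API):

#include <Magick++.h>

void adjustBitmap(unsigned char* pixels, size_t width, size_t height,
                  double contrast,    // ~0.00001 (none) .. 20.0 (a lot)
                  double brightness)  // percent: 100.0 means no change
{
    // The Image copies the buffer, so results must be copied back out.
    Magick::Image image(width, height, "RGBA", MagickCore::CharPixel, pixels);
    image.sigmoidalContrast(1, contrast);     // 1 = increase, 0 = reduce
    image.modulate(brightness, 100.0, 100.0); // saturation and hue unchanged

    const MagickCore::PixelPacket* src =
        image.getConstPixels(0, 0, image.columns(), image.rows());

    for (size_t i = 0; i < width * height; ++i) {
        pixels[i * 4 + 0] = src[i].red   >> 8; // 16-bit quantum -> 8-bit
        pixels[i * 4 + 1] = src[i].green >> 8;
        pixels[i * 4 + 2] = src[i].blue  >> 8;
        // opacity is inverted: 0 = opaque, max = fully transparent
        pixels[i * 4 + 3] = 255 - (src[i].opacity >> 8);
    }
}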
I hope that helps someone because it took me all damn day to figure this stuff out.
According to the ImageMagick website (this is for the command line, but it may be the same?):
-sigmoidal-contrast contrastxmid-point (e.g. 3x50%)
increase the contrast without saturating highlights or shadows.
Increase the contrast of the image using a sigmoidal transfer function without saturating highlights or shadows. Contrast indicates how much to increase the contrast. For example, near 0 is none, 3 is typical and 20 is a lot. Note that exactly zero is invalid, but 0.0001 is negligibly different from no change in contrast. mid-point indicates where midtones fall in the resultant image (0 is white; 50% is middle-gray; 100% is black). By default the image contrast is increased, use +sigmoidal-contrast to decrease the contrast.
To achieve the equivalent of a sigmoidal brightness change, use -sigmoidal-contrast brightnessx0% to increase brightness and +sigmoidal-contrast brightnessx0% to decrease brightness.
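If you want to experiment on the command line first, a typical invocation looks like this (the file names are placeholders; 3 is the documentation's "typical" contrast and 50% puts the midpoint at middle gray):

convert input.jpg -sigmoidal-contrast 3x50% output.jpg
convert input.jpg +sigmoidal-contrast 3x50% output.jpg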
On the command line there is a newer brightness/contrast setting that may be in later versions of Magick++:
-brightness-contrast brightness{xcontrast}{%}
Adjust the brightness and/or contrast of the image.
Brightness and Contrast values apply changes to the input image. They are not absolute settings. A brightness or contrast value of zero means no change. The range of values is -100 to +100 on each. Positive values increase the brightness or contrast and negative values decrease the brightness or contrast. To control only contrast, set the brightness=0. To control only brightness, set contrast=0 or just leave it off.
You may also use -channel to control which channels to apply the brightness and/or contrast change. The default is to apply the same transformation to all channels.
Brightness and Contrast arguments are converted to offset and slope of a linear transform and applied using -function polynomial "slope,offset".
The slope varies from 0 at contrast=-100 to almost vertical at contrast=+100. For brightness=0 and contrast=-100, the result is totally mid-gray. For brightness=0 and contrast=+100, the result will approach but not quite reach a threshold at mid-gray; that is, the linear transformation is a very steep, nearly vertical line at mid-gray.
Negative slopes, i.e. negating the image, are not possible with this function. All achievable slopes are zero or positive.
The offset varies from -0.5 at brightness=-100 to 0 at brightness=0 to +0.5 at brightness=+100. Thus, when contrast=0 and brightness=100, the result is totally white. Similarly, when contrast=0 and brightness=-100, the result is totally black.
As the range of values for the arguments is -100 to +100, adding the '%' symbol is no different from leaving it off.
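A command-line example, again with placeholder file names and values (this needs a build of ImageMagick recent enough to have the option):

convert input.jpg -brightness-contrast 10x5 output.jpg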
If Magick++ is like IMagick, it may be lagging a long way behind the ImageMagick command-line options.

Fast/Efficient Pixel Access in Magick++

As an educational exercise, I'm writing an application that can average a bunch of images. This is often done in astrophotography to reduce noise.
The library I'm using is Magick++, and I've succeeded in actually writing the application. But, unfortunately, it's slow. This is the code I'm using:
for (row = 0; row < rows; row++)
{
    for (column = 0; column < columns; column++)
    {
        red.clear(); blue.clear(); green.clear();
        for (i = 1; i <= 10; i++)  // images 1 through 10
        {
            ColorRGB rgb(image[i].pixelColor(column, row));
            red.push_back(rgb.red());
            green.push_back(rgb.green());
            blue.push_back(rgb.blue());
        }
        redVal = avg(red);
        greenVal = avg(green);
        blueVal = avg(blue);
        redVal *= MaxRGB;
        greenVal *= MaxRGB;
        blueVal *= MaxRGB;
        Color newRGB(redVal, greenVal, blueVal);
        stackedImage.pixelColor(column, row, newRGB);
    }
}
The code averages 10 images by going through each pixel and pushing each channel's intensity into a vector of doubles. The function avg then takes a vector as a parameter and averages its contents. This average is then used for the corresponding pixel in stackedImage, which is the resultant image. It works just fine but, as I mentioned, I'm not happy with the speed: it takes 2 minutes and 30 seconds on a Core i5 machine. The images are 8-megapixel, 16-bit TIFFs. I understand that it's a lot of data, but I have seen it done faster in other applications.
Is it my loop that's slow, or is pixelColor(x,y) a slow way to access pixels in an image? Is there a faster way?
Why use vectors/arrays at all?
Why not
double red = 0.0, blue = 0.0, green = 0.0;
for (i = 1; i <= 10; i++)  // all ten images, matching the division below
{
    ColorRGB rgb(image[i].pixelColor(column, row));
    red += rgb.red();
    blue += rgb.blue();
    green += rgb.green();
}
red /= 10;
blue /= 10;
green /= 10;
This avoids 36 function calls on vector objects per pixel.
And you may get even better performance by using a PixelCache of the whole image instead of the original Image objects. See the "Low-Level Image Pixel Access" section of the online Magick++ documentation for Image
Then the inner loop becomes:
PixelPacket* pix = cache[i] + row * columns + column;
red += pix->red;
blue += pix->blue;
green += pix->green;
Now you have also removed 10 calls to pixelColor(), 10 ColorRGB constructions, and 30 accessor calls per pixel.
Note: this is all theory; I haven't tested any of it.
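To flesh that theory out, here is a minimal, equally untested sketch of the whole averaging pass using cached pixel views (it assumes ImageMagick 6's Magick++ with the PixelPacket API; IM7 exposes Quantum* via getPixels instead):

#include <Magick++.h>
#include <vector>

Magick::Image averageImages(std::vector<Magick::Image>& images)
{
    const size_t columns = images[0].columns();
    const size_t rows    = images[0].rows();
    const size_t n       = images.size();

    // Fetch a read-only pixel view of every input image exactly once.
    std::vector<const Magick::PixelPacket*> cache(n);
    for (size_t i = 0; i < n; ++i)
        cache[i] = images[i].getConstPixels(0, 0, columns, rows);

    Magick::Image stacked(Magick::Geometry(columns, rows),
                          Magick::Color("black"));
    stacked.modifyImage();
    Magick::PixelPacket* out = stacked.getPixels(0, 0, columns, rows);

    for (size_t p = 0; p < columns * rows; ++p) {
        double red = 0.0, green = 0.0, blue = 0.0;
        for (size_t i = 0; i < n; ++i) {
            red   += cache[i][p].red;
            green += cache[i][p].green;
            blue  += cache[i][p].blue;
        }
        out[p].red   = static_cast<Magick::Quantum>(red / n);
        out[p].green = static_cast<Magick::Quantum>(green / n);
        out[p].blue  = static_cast<Magick::Quantum>(blue / n);
    }
    stacked.syncPixels(); // write the modified cache back into the image
    return stacked;
}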
Comments:
Why do you use vectors for red, blue and green? push_back can trigger reallocations and bottleneck the processing. You could instead allocate three arrays of 10 values once, up front.
Couldn't you declare rgb outside of the loops, to spare the stack unnecessary constructions and destructions?
Doesn't Magick++ have a way to average images?
Just in case anyone else wants to average images to reduce noise, and doesn't feel like too much "educational exercise" ;-)
ImageMagick can do averaging of a sequence of images like this:
convert image1.tif image2.tif ... image32.tif -evaluate-sequence mean result.tif
You can also do median filtering and others by changing the word mean in the above command to whatever you want, e.g.:
convert image1.tif image2.tif ... image32.tif -evaluate-sequence median result.tif
You can get a list of the available operations with:
identify -list evaluate
Output
Abs
Add
AddModulus
And
Cos
Cosine
Divide
Exp
Exponential
GaussianNoise
ImpulseNoise
LaplacianNoise
LeftShift
Log
Max
Mean
Median
Min
MultiplicativeNoise
Multiply
Or
PoissonNoise
Pow
RightShift
RMS
RootMeanSquare
Set
Sin
Sine
Subtract
Sum
Threshold
ThresholdBlack
ThresholdWhite
UniformNoise
Xor

How do I make Google Charts scale an extended encoding line graph properly?

For the life of me, I can't get this graph to display properly with extended encoding.
If I set the axis range to 0-15, it looks correct, but if I set it to 9-15, the data is plotted incorrectly.
This turns out correct:
<img src="http://chart.apis.google.com/chart?cht=lc&chco=125292&chm=B,cee1f5,0,0,0&chls=2&chs=408x237&chxt=x,y&chxl=0:|Jan|Feb|Mar|Apr|May|Jun|Jul&chxr=1,0,15&chd='+extendedEncode(Array(10,15,9,11,12,10,11),15)+'" />
But this scales incorrectly:
<img src="http://chart.apis.google.com/chart?cht=lc&chco=125292&chm=B,cee1f5,0,0,0&chls=2&chs=408x237&chxt=x,y&chxl=0:|Jan|Feb|Mar|Apr|May|Jun|Jul&chxr=1,9,15&chd='+extendedEncode(Array(10,15,9,11,12,10,11),15)+'" />
I have spent hours and hours trying to figure this out, and I feel like I'm missing something incredibly simple. I have to use extended encoding because of the range of numbers my program will ultimately be handling, so changing to "Text Format with Custom Scaling" is not an option.
Read this:
http://code.google.com/apis/chart/image/docs/gallery/line_charts.html#axis_range
and this:
http://code.google.com/apis/chart/image/docs/data_formats.html#axis_scale
When using simple or extended encoding, the data is normalized to the encoding's fixed range no matter what axis you specify. This lets Google Charts keep the URL short and fit more data into the GET HTTP request.
As a side effect, you have to scale the data instead of the axis, since the axis range you pass in chxr is only used to draw the labels; it is not used when plotting these charts.
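In other words, keep chxr=1,9,15 for the labels, but rescale the raw values onto the encoding's range yourself before calling extendedEncode. A small illustrative sketch (the names are mine, not part of the Charts API; 4095 is the top of the extended-encoding range):

#include <vector>

std::vector<int> scaleForExtendedEncoding(const std::vector<double>& values,
                                          double axisMin, double axisMax)
{
    std::vector<int> scaled;
    for (double v : values) {
        double t = (v - axisMin) / (axisMax - axisMin); // 0..1 across the axis
        if (t < 0.0) t = 0.0; // clip anything outside the axis range
        if (t > 1.0) t = 1.0;
        scaled.push_back(static_cast<int>(t * 4095.0 + 0.5));
    }
    return scaled;
}

For the values 10, 15, 9, 11, 12, 10, 11 with an axis of 9 to 15, this yields roughly 683, 4095, 0, 1365, 2048, 683, 1365.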