Resize an image using the PHP vips library

I am using both the PHP Imagick and PHP vips libraries for image operations, and I am working on an image resize operation. For Imagick I use the resizeImage() function and for vips I use the resize() function, but the two produce different output for the same height and width. I want vips to produce the same output that I get from Imagick. Here is the code I am using:
// Imagick code
$img = new Imagick($ipFile);
$img->resizeImage(1000, 1000, Imagick::FILTER_LANCZOS, 1, true);
$img->writeImage($opFile);
// vips code
$im = Vips\Image::newFromFile($ipFile);
$im = $im->resize($ratio, ['kernel' => 'lanczos2']); // $ratio is the scale factor computed from the target size
$im->writeToFile($opFile);
(The vips and Imagick output images attached to the question show visibly different results.)

Don't use resize with php-vips unless you really have to. You'll get better quality, better speed and lower memory use with thumbnail.
The thumbnail operation combines load and resize in one, so it can exploit tricks like shrink-on-load. It knows about transparency, so it'll resize PNGs correctly. It also knows about vector formats like PDF and SVG and will size them in the best way.
Try:
// vips code
$im = Vips\Image::thumbnail($ipFile, 1000);
$im->writeToFile($opFile);
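If I read the libvips docs correctly, thumbnail's height option defaults to the width, so thumbnail($ipFile, 1000) bounds the result inside a 1000x1000 box, just like the Imagick bestfit call above. For reference, a rough sketch of the same call from Python via the pyvips binding (filenames are placeholders):
import pyvips

# thumbnail combines load and resize, bounding the result inside a
# width x height box; height defaults to width if omitted
im = pyvips.Image.thumbnail('input.jpg', 1000, height=1000)
im.write_to_file('output.jpg')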
Benchmark:
$ /usr/bin/time -f %M:%e convert ~/pics/nina.jpg -resize 1000x1000 x.jpg
238836:0.77
$ /usr/bin/time -f %M:%e vips resize ~/pics/nina.jpg x.jpg .1653439153
60996:0.39
$ /usr/bin/time -f %M:%e vips thumbnail ~/pics/nina.jpg x.jpg 1000
49148:0.16
nina.jpg is a 6k x 4k RGB JPEG image.
Imagick resizes it in 770ms using 240MB of memory.
vips resize takes 390ms and 61MB of memory.
vips thumbnail takes 160ms and 50MB of memory.
Quality will be the same in this case.

Related

How to save pages in a PDF as images using Python

I want to save all the pages of a PDF document as images using Python.
I have already tried ImageMagick and pypdf. The results are not good with my type of document (containing graphs, old scanned documents).
When using ImageMagick to convert a PDF to, say, PNG, you can specify the density to rasterize the PDF at high resolution. Then you can resize back down if you want. For example,
convert -density 300 image.pdf -resize WxH image.png
If your PDF is CMYK rather than RGB, add -colorspace sRGB right after -density 300 and before reading the image.pdf.
If that is not good enough, increase the density to 600 and try again.
WxH is the final image size that you want.
If using ImageMagick 7, replace convert with magick.
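A minimal Python sketch of the same pipeline, assuming ImageMagick's convert binary is on the PATH (filenames are placeholders):
import subprocess

# rasterize every page of the PDF at 300 DPI, then scale back down;
# ImageMagick writes one numbered PNG per page for multi-page input
subprocess.run([
    'convert',
    '-density', '300',       # rasterize at high resolution
    'input.pdf',             # placeholder input file
    '-resize', '1000x1000',  # the WxH bounding box described above
    'page-%d.png',           # placeholder output pattern, one file per page
], check=True)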

work with binary (1 bit per pixel) images c++

The problem is as follows: I need to work with very large binary images (100000x100000 pixels). Initially I used Qt's QImage class, since its Format_Mono format stores an image at 1 bit per pixel. In general everything was fine, until it turned out that QPainter has a limited rasterizer and cannot draw on images larger than 32767x32767 (the range of a short); beyond that, drawing is simply cut off.
So I could not compose images larger than 32767x32767, and I began to look at other libraries. OpenCV, as I understand it, does not support this format. ImageMagick can build an image at one bit per pixel and save it in the same format; however, while you work with it, the image is still stored at 8 bits per pixel, so RAM runs short. Then I decided to try CImg, but as I understand it, it doesn't support a 1 bpp format:
the overall size of the used memory for one instance image (in bytes) is then 'width x height x depth x dim x sizeof(T)'
where sizeof(T), of course, cannot be less than sizeof(char)...
I was curious how QImage manages its Format_Mono format internally, but honestly, I got tangled up in the source code.
So here is my question: is there a library that can create and work with binary images, storing them in RAM at a genuine 1 bit per pixel?
libvips can process huge 1 bit images. It unpacks them to one pixel per byte for processing, but it only keeps the part of the image currently being processed in memory, so you should be OK.
For example, this tiny program makes a 100,000 x 100,000 pixel black image, then pastes in all of the images from the command-line at random positions:
#!/usr/bin/env python

import sys
import random

import gi
gi.require_version('Vips', '8.0')
from gi.repository import Vips

# this makes an 8-bit mono image of 100,000 x 100,000 pixels, each pixel zero
im = Vips.Image.black(100000, 100000)

# paste each command-line image at a random position
for filename in sys.argv[2:]:
    tile = Vips.Image.new_from_file(filename, access=Vips.Access.SEQUENTIAL)
    im = im.insert(tile,
                   random.randint(0, im.width - tile.width),
                   random.randint(0, im.height - tile.height))

im.write_to_file(sys.argv[1])
I can run the program like this:
$ vipsheader wtc.tif
wtc.tif: 9372x9372 uchar, 1 band, b-w, tiffload
$ mkdir test
$ for i in {1..1000}; do cp wtc.tif test/$i.tif; done
$ time ./insert.py x.tif[bigtiff,squash,compression=ccittfax4] test/*.tif
real 1m31.034s
user 3m24.320s
sys 0m7.440s
peak mem: 6.7gb
The [] on the output filename sets image write options. Here I'm enabling fax compression and setting the squash option, which means 8-bit, one-band images are squashed down to 1 bit on write.
The peak mem result is from watching RES in top. 6.7gb is rather large, unfortunately, since it has to keep input buffers for each of the 1,000 input images.
If you use tiled 1-bit tiff, you can remove the access = option and use operators that need random access, like rotate. If you try to rotate a strip tiff, vips will have to decompress the whole image to a temporary disk file, which you probably don't want.
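As a rough sketch of that random-access case, using the same gi-based Vips bindings as above (the filenames and save options here are my assumptions):
import gi
gi.require_version('Vips', '8.0')
from gi.repository import Vips

# no access= option, so the default random-access mode is used;
# with a tiled tiff this lets operators like rot work efficiently
im = Vips.Image.new_from_file('tiled.tif')
im = im.rot(Vips.Angle.D90)  # 90-degree rotate needs random access

# write back as a tiled, fax-compressed 1-bit tiff
im.write_to_file('rotated.tif[tile,squash,compression=ccittfax4]')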
vips has a reasonable range of standard image processing operators, so you may be able to do what you want just glueing them together. You can add new operators in C or C++ if you want.
That example is in Python for brevity, but you can use Ruby, PHP, C, C++, Go, JavaScript or the command-line if you prefer. It comes with all Linuxes and BSDs, it's on homebrew, MacPorts and fink, and there's a Windows binary.

Shrink the size of a .png file

There are many programs that claim to reduce the size of a .png file, but none of the well-known ones (optipng, pngcrush, pngquant) allow me to shrink to a specified size. pngcrush tried its hardest, but the result was still way too big for my needs. For .jpg files, jpegoptim has an -m option that does allow me to shrink to the size I need. The obvious solution seemed to be to convert to jpg, shrink to the right size, then convert back, but that doesn't work either; the reconstituted .png file just jumps back to its original size.
Presumably, this has something to do with the structure of .png files.
Is there any way to get a small png file? This png file is an example of something I need to shrink to below 1K bytes.
Thanks for any suggestions!
Use ImageMagick to reduce the colors, then pngcrush to get rid of ancillary chunks:
magick in.png -colors 8 temp.png
pngcrush -rem alla temp.png out.png
results in a 1621-byte file. If you have an older version of ImageMagick, use "convert" instead of "magick". Using "-colors 4" instead of "-colors 8" gets you a 1015-byte file, but the dithering looks very spotty.
Note that these preserve the transparency in the image, while converting to JPEG loses the transparency and makes the background a solid color.
The only solution to your problem that I can think of is to use .jpg instead of .png. The .jpg format was mainly created for high lossy compression that still yields a good enough image, while .png goes for full transparency and no quality loss. To sum up, .jpg is ideal for getting smaller files when quality doesn't matter much, and .png is perfect for high-quality images where quality and colour really matter.
Sources:
http://www.labnol.org/software/tutorials/jpeg-vs-png-image-quality-or-bandwidth/5385/, http://www.interactivesearchmarketing.com/jpeg-png-proper-image-formatting/
I can get that 9.5 KB file down to 3.4 KB using the 8-bit palette PNG format. The image has a transparent boundary, which adds unnecessary pixels and an alpha channel for the whole image which isn't needed, since it's rectangular. After stripping the transparent boundary, eliminating the alpha channel, and using a palette, I can get it down to 3.2 KB.
To get any further, I have to use JPEG for lossy compression. At a very low image quality of 5 (out of 100), I can get it down to 1 KB, though it shows some artifacts from the severe compression (look around the prompt > and _ to see some of those).
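A sketch of those two reduction steps driven from Python (placeholder filenames; assumes ImageMagick 7's magick binary, with convert as the older equivalent):
import subprocess

# step 1: trim the transparent border, drop the alpha channel, and
# force an 8-bit palette PNG (the lossless route described above)
subprocess.run([
    'magick', 'in.png',
    '-trim',          # strip the transparent boundary
    '-alpha', 'off',  # the image is rectangular, so alpha isn't needed
    'PNG8:out.png',   # 8-bit palette output
], check=True)

# step 2: for anything smaller, accept lossy JPEG at very low quality
subprocess.run([
    'magick', 'in.png',
    '-background', 'white', '-flatten',  # JPEG has no transparency
    '-quality', '5',                     # severe compression, ~1 KB result
    'out.jpg',
], check=True)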

How to resize a .bmp file in C/C++?

I am building a CBIR system as an assignment. There are 100 .bmp files, but they have different sizes; how do I resize them all to the same size?
Thanks.
Have a look at the CImg Library; it's quite easy to use. You can load your bitmap file and then use one of the resize functions.
Probably overkill, but you can take a look at ImageMagick.
You should look at G'MIC, a command-line tool for batch image processing operations.
It is even more advanced than ImageMagick's convert tool.
Basically, you can call it like this:
gmic *.bmp -resize 128,128,1,3,3 -outputp resized_
to resize all your bmp images into 128x128 color images, and save them with filenames prefixed by 'resized_'.
G'MIC is available for Linux, Windows and Mac, at : http://gmic.sourceforge.net

What options for convert (ImageMagick or GraphicsMagick) produce the smallest (filesize) PNG?

ImageMagick creates some pretty large PNGs. GraphicsMagick is a lot better, but I'm still looking for the best options to use with convert to obtain the smallest filesize png.
I have here a large png with a small filesize, and passing this through IM convert I have been unable to reach that filesize, let alone get it smaller. With GM convert I can get it slightly smaller but I'm looking for improvements, generically for any image I come across.
gm convert -quality 95 a_png.png gm.png
convert -quality 95 -depth 8 a_png.png im.png
gm identify *
a_png.png PNG 2560x2048+0+0 PseudoClass 256c 8-bit 60.1K 0.000u 0:01
gm.png[1] PNG 2560x2048+0+0 PseudoClass 256c 8-bit 60.0K 0.000u 0:01
im.png[2] PNG 2560x2048+0+0 DirectClass 8-bit 130.2K 0.000u 0:01
What options for convert produce the smallest PNG filesize?
(Yes, I'm familiar with OptiPNG, PNGOUT and Pngcrush. But I'm after something that will be available without question on every *nix box I happen to be on.)
Looks like you and I are looking for the same answer. Unfortunately there don't seem to be many people out there with a good knowledge of GraphicsMagick. This is what I have learned so far:
The quality operator doesn't work properly for anything other than JPEGs. For me it just made the file size bigger when used on PNGs and GIFs.
I have done this to my PNG and GIF files to reduce their size:
gm convert myImage.png +dither -depth 8 -colors 50 myImage.png
+dither stops any dithering of the image when the colors are reduced (this reduces the file size).
-depth 8 is probably unnecessary, as most PNG files are already depth 8.
-colors 50 reduces the number of colors in the image to 50; this is the only way to really reduce the size of an image stored in a lossless format like PNG or GIF.
Obviously, for the best quality/size ratio you can't just reduce the image depth or number of colors without knowing the current depth and number of colors. To determine this information I do the following:
gm identify -format "file_size:%b,unique_colors:%k,bit_depth:%q" myImage.png
For my image, this returns:
file_size:100.7k,unique_colors:13455,bit_depth:8
The problem is that when GraphicsMagick reduces colors it never keeps more than 255, so you can't set the number of colors to 300, for example. There also seems to be an issue with the alpha channel for PNG files: if the image has transparency in it, reducing colors replaces those colors with transparent; ImageMagick does not do this.
I just came across this question again, so I'll update: GraphicsMagick and ImageMagick have a serious problem. They cannot write out PNG images using a tRNS chunk, which means that if you read in an image that has a tRNS chunk and then write it out, the image will be much bigger. GM is not the best tool for compressing images; you need a separate tool such as OptiPNG to compress PNGs again after using Image/GraphicsMagick. I am getting up to 60% smaller files when running OptiPNG after GraphicsMagick.
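As a sketch of that two-stage pipeline from Python (placeholder filenames; assumes gm and optipng are on the PATH):
import subprocess

# stage 1: reduce colors with GraphicsMagick, as described above
subprocess.run([
    'gm', 'convert', 'in.png',
    '+dither', '-depth', '8', '-colors', '50',
    'temp.png',
], check=True)

# stage 2: let OptiPNG re-compress the result;
# -o7 is its slowest, most thorough optimization level
subprocess.run(['optipng', '-o7', '-out', 'out.png', 'temp.png'], check=True)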
Also I was wondering if you have encountered a problem regarding RGBA images and bit depth. For some images I am getting an "Invalid bit depth" exception. I can't see any reason why.
I haven't found a way to do it on the command line, but I did find a free website (http://tinypng.org/) that does an excellent job: my test image got a 71% reduction; the final size was only 29% of the original. It looks like you can give it 20 images at a time. I'm looking into how they do it.