Converting ppm to png - c++

On Linux I am getting .PPM files as the image format; these need to be converted to PNG and then saved. I have been looking at some APIs to achieve this conversion from PPM to PNG. Can this be done using GDI+, since that would keep it native?
If that is not possible, then I think FreeImage or libpng can accomplish it, but I would prefer to use native GDI+ if possible.

Quick and dirty: download ImageMagick and use it from the CLI:
convert xx.ppm xx.png
or use ImageMagick's DLL API.

Whilst ImageMagick will happily do what you need, it is a sledgehammer to crack a nut, and considerably more cumbersome, space-hungry and time-consuming to install than the NetPBM suite.
With that, you would do:
pnmtopng image.ppm > result.png

You can use ffmpeg; for batch conversion you can do:
#!/bin/bash
for i in *.ppm; do
  name=$(echo "$i" | cut -d'.' -f1)
  echo "$name"
  ffmpeg -i "$i" "${name}.png"
done

Well, GDI+ does not natively support the PPM format, so you will need a library whatever you do.

You can use the ImageMagick library:
http://www.imagemagick.org/script/index.php
though it does lots of other things besides.
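If you want to stay in C++ rather than shell out to the convert tool, the Magick++ binding that ships with ImageMagick makes the conversion a one-liner. A rough, hedged sketch (the file names are placeholders, and the build flags come from Magick++-config):

#include <Magick++.h>
#include <cstdio>

int main(int argc, char** argv)
{
    Magick::InitializeMagick(argv[0]);
    try {
        Magick::Image image("input.ppm");   // placeholder input file
        image.write("output.png");          // output format is picked from the extension
    } catch (Magick::Exception& err) {
        std::fprintf(stderr, "Conversion failed: %s\n", err.what());
        return 1;
    }
    return 0;
}

Compile with something like: g++ ppm2png.cpp $(Magick++-config --cppflags --cxxflags --ldflags --libs)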

Related

How to decode MP3 files? How do MP3 files store sound?

I'm not talking about any particular language here. I want to analyse an MP3 file and get some information about the sound at a specific second (say, the tone/pitch/frequency of the sound). How is that data stored in a single file?
Unless you have weeks (months?) available to play with it, I would recommend using an existing MP3 decoding library to pull the decoded audio out of the file. In C/C++, there's libMAD or libmpg123, as well as the Windows components. In C#, you can use NAudio or NLayer.
Once you have the decoded data, you'll need to run a FFT, DFT, or DCT over it to convert to frequency & amplitude. The FFT is probably your best bet, though the DFT may give a less "noisy" analysis. YMMV.
Note that all three of the transforms provide amplitude values you can convert to decibel values.
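To make the "decode first, then analyse" idea concrete, here is a rough sketch of pulling decoded PCM out of an MP3 with libmpg123 from C++. The file name is a placeholder and error handling is minimal; the FFT/decibel step would run over the samples this loop produces, so treat it as a starting point rather than a finished decoder.

#include <mpg123.h>
#include <cstdio>
#include <vector>

int main()
{
    mpg123_init();
    int err = 0;
    mpg123_handle* mh = mpg123_new(NULL, &err);

    if (mpg123_open(mh, "input.mp3") != MPG123_OK) {    // placeholder file name
        std::fprintf(stderr, "cannot open file\n");
        return 1;
    }

    long rate = 0;
    int channels = 0, encoding = 0;
    mpg123_getformat(mh, &rate, &channels, &encoding);  // sample rate and channel count

    std::vector<unsigned char> buffer(mpg123_outblock(mh));
    size_t done = 0;
    while (mpg123_read(mh, buffer.data(), buffer.size(), &done) == MPG123_OK) {
        // `done` bytes of decoded PCM are now in `buffer`;
        // this is the data you would feed to an FFT to get frequencies and amplitudes.
    }

    mpg123_close(mh);
    mpg123_delete(mh);
    mpg123_exit();
    return 0;
}

Link with -lmpg123.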
There are some useful MP3 libraries that give you information about your MP3 file.
If you use C# it could be NAudio.
http://naudio.codeplex.com/
I recommend the program xxd and Google for the first steps.
First of all, I would look at the file's binary content:
xxd -b file.mp3
Viewing it as a hex dump with the ASCII column also exposes some information:
xxd file.mp3
Those were my first steps.

libpng warning: iCCP: known incorrect sRGB profile

I'm trying to load a PNG image using SDL but the program doesn't work and this error appears in the console
libpng warning: iCCP: known incorrect sRGB profile
Why does this warning appear? What should I do to solve this problem?
Libpng-1.6 is more stringent about checking ICC profiles than previous versions, so you can simply ignore the warning. To get rid of it, remove the iCCP chunk from the PNG image.
Some applications treat warnings as errors; if you are using such an application, you do have to remove the chunk. You can do that with any variety of PNG editors, such as ImageMagick:
convert in.png out.png
(With the Windows CMD prompt, you will need to cd (change directory) into the folder with the images before you can use the commands listed below.)
To remove the invalid iCCP chunk from all of the PNG files in a folder (directory), you can use mogrify from ImageMagick:
mogrify *.png
This requires that your ImageMagick was built with libpng16. You can easily check it by running:
convert -list format | grep PNG
If you'd like to find out which files need to be fixed instead of blindly processing all of them, you can run
pngcrush -n -q *.png
where the -n means don't rewrite the files and -q means suppress most of the output except for warnings. Sorry, there's no option yet in pngcrush to suppress everything but the warnings.
Note: You must have pngcrush installed.
Binary releases of ImageMagick are available from the ImageMagick download page.
For Android projects (Android Studio), navigate into the res folder.
For example:
C:\{your_project_folder}\app\src\main\res\drawable-hdpi\mogrify *.png
Use pngcrush to remove the incorrect sRGB profile from the png file:
pngcrush -ow -rem allb -reduce file.png
-ow will overwrite the input file
-rem allb will remove all ancillary chunks except tRNS and gAMA
-reduce does lossless color-type or bit-depth reduction
In the console output you should see Removed the sRGB chunk, and possibly more messages about chunk removals. You will end up with a smaller, optimized PNG file. As the command will overwrite the original file, make sure to create a backup or use version control.
Solution
The incorrect profile could be fixed by:
Opening the image with the incorrect profile using QPixmap::load
Saving the image back to the disk (already with the correct profile) using QPixmap::save
Note: This solution uses the Qt Library.
Example
Here is a minimal example I have written in C++ in order to demonstrate how to implement the proposed solution:
#include <QFile>
#include <QPixmap>

QPixmap pixmap;
pixmap.load("badProfileImage.png");   // load the image that carries the bad iCCP profile

QFile file("goodProfileImage.png");
file.open(QIODevice::WriteOnly);
pixmap.save(&file, "PNG");            // re-encode; the rewritten PNG no longer has the offending chunk
The complete source code of a GUI application based on this example is available on GitHub.
UPDATE FROM 05.12.2019: The answer was and still is valid; however, there was a bug in the GUI application I shared on GitHub which caused the output image to be empty. I have just fixed it and apologise for the inconvenience!
You can also just fix this in Photoshop...
Open your .png file.
File -> Save As and in the dialog that opens up uncheck "ICC Profile: sRGB IEC61966-2.1"
Uncheck "As a Copy".
Courageously save over your original .png.
Move on with your life knowing that you've removed just that little bit of evil from the world.
To add to Glenn's great answer, here's what I did to find which files were faulty:
find . -name "*.png" -type f -print0 | xargs \
-0 pngcrush_1_8_8_w64.exe -n -q > pngError.txt 2>&1
I used find and xargs because pngcrush could not handle lots of arguments (which were returned by **/*.png). The -print0 and -0 are required to handle file names containing spaces.
Then search in the output for these lines: iCCP: Not recognizing known sRGB profile that has been edited.
./Installer/Images/installer_background.png:
Total length of data found in critical chunks = 11286
pngcrush: iCCP: Not recognizing known sRGB profile that has been edited
And for each of those, run mogrify on it to fix them.
mogrify ./Installer/Images/installer_background.png
Doing this prevents having a commit changing every single png file in the repository when only a few have actually been modified. Plus it has the advantage to show exactly which files were faulty.
I tested this on Windows with a Cygwin console and a zsh shell. Thanks again to Glenn, who provided most of the above; I'm just adding an answer as it's usually easier to find than comments :)
After trying a couple of the suggestions on this page I ended up using the pngcrush solution. You can use the bash script below to recursively detect and fix bad png profiles. Just pass it the full path to the directory you want to search for png files.
fixpng "/path/to/png/folder"
The script:
#!/bin/bash
FILES=$(find "$1" -type f -iname '*.png')
FIXED=0
for f in $FILES; do
  WARN=$(pngcrush -n -warn "$f" 2>&1)
  if [[ "$WARN" == *"PCS illuminant is not D50"* ]] || [[ "$WARN" == *"known incorrect sRGB profile"* ]]; then
    pngcrush -s -ow -rem allb -reduce "$f"
    FIXED=$((FIXED + 1))
  fi
done
echo "$FIXED errors fixed"
There is an easier way to fix this issue with Mac OS and Homebrew:
Install Homebrew if it is not installed yet, then:
$ brew install libpng
$ pngfix --strip=color --out=file2.png file.png
or to do it with every file in the current directory:
mkdir tmp; for f in ./*.png; do pngfix --strip=color --out=tmp/"$f" "$f"; done
It will create a fixed copy of each png file in the current directory and put it in the tmp subdirectory. After that, if everything is OK, you just need to overwrite the original files.
Another tip is to use the Keynote and Preview applications to create the icons. I draw them using Keynote, in the size of about 120x120 pixels, over a slide with a white background (the option to make polygons editable is great!). Before exporting to Preview, I draw a rectangle around the icon (without any fill or shadow, just the outline, with the size of about 135x135) and copy everything to the clipboard. After that, you just need to open it with the Preview tool using "New from Clipboard", select a 128x128 pixels area around the icon, copy, use "New from Clipboard" again, and export it to PNG. You won't need to run the pngfix tool.
Thanks to the fantastic answer from Glenn, I used ImageMagick's "mogrify *.png" functionality. However, I had images buried in sub-folders, so I used this simple Python script to apply it to all images in all sub-folders, and thought it might help others:
import os
import subprocess

def system_call(args, cwd="."):
    print("Running '{}' in '{}'".format(str(args), cwd))
    subprocess.call(args, cwd=cwd, shell=True)  # shell=True so the shell expands the *.png glob

def fix_image_files(root=os.curdir):
    for path, dirs, files in os.walk(os.path.abspath(root)):
        for dir in dirs:
            system_call("mogrify *.png", os.path.join(path, dir))

fix_image_files(os.curdir)
some background info on this:
Some changes in libpng version 1.6+ cause it to issue a warning or even not work correctly with the original HP/MS sRGB profile, leading to the following on stderr: "libpng warning: iCCP: known incorrect sRGB profile". The old profile uses a D50 whitepoint, where D65 is standard. This profile is not uncommon, being used by Adobe Photoshop, although it was not embedded into images by default.
(source: https://wiki.archlinux.org/index.php/Libpng_errors)
Error detection in some chunks has improved; in particular the iCCP chunk reader now does pretty complete validation of the basic format. Some bad profiles that were previously accepted are now rejected, in particular the very old broken Microsoft/HP sRGB profile. The PNG spec requirement that only grayscale profiles may appear in images with color type 0 or 4 and that, even if the image only contains gray pixels, only RGB profiles may appear in images with color type 2, 3, or 6, is now enforced. The sRGB chunk is allowed to appear in images with any color type.
(source: https://forum.qt.io/topic/58638/solved-libpng-warning-iccp-known-incorrect-srgb-profile-drive-me-nuts/16)
Using IrfanView image viewer in Windows, I simply resaved the PNG image and that corrected the problem.
Some of the proposed answers use pngcrush with the -rem allb option, which the documentation says is like "surgery with a chainsaw." The option removes many chunks. To prevent the "iCCP: known incorrect sRGB profile" warning it is sufficient to remove the iCCP chunk, as follows:
pngcrush -ow -rem iCCP filename.png
Extending friederbluemle's solution: download pngcrush and then use code like this if you are running it on multiple png files.
import os

path = r"C:\project\project\images"  # path to all .png images
png_files = []
for dirpath, subdirs, files in os.walk(path):
    for x in files:
        if x.endswith(".png"):
            png_files.append(os.path.join(dirpath, x))

file = r"C:\Users\user\Downloads\pngcrush_1_8_9_w64.exe"  # pngcrush executable
for name in png_files:
    cmd = r"{} -ow -rem allb -reduce {}".format(file, name)
    os.system(cmd)
Here all the png files related to the project are in one folder.
I ran these two commands in the root of the project and it fixed the issue.
Basically, redirect the output of the find command to a text file to use as your list of files to process. Then you can read that text file into mogrify using the "@" flag:
find *.png -mtime -1 > list.txt
mogrify -resize 50% @list.txt
That would use "find" to get all the *.png images newer than 1 day and print them to a file named "list.txt". Then "mogrify" reads that list, processes the images, and overwrites the originals with the resized versions. There may be minor differences in the behavior of "find" from one system to another, so you'll have to check the man page for the exact usage.
When I was training YOLO, the warning libpng warning: iCCP: known incorrect sRGB profile occurred every epoch. I used bash to find the png files, then used python3 and opencv (cv2) to rewrite them, so the warning only occurs once, during the rewrite. Steps as follows:
step 1. Create a python file:
# rewrite.py
import cv2, sys, os

fpath = sys.argv[1]
if os.path.exists(fpath):
    cv2.imwrite(fpath, cv2.imread(fpath))
step 2. In bash, run:
# cd your image dir
# then find and rewrite the png files, one file per python invocation
# (rewrite.py only looks at sys.argv[1], hence xargs -n 1)
find . -iname "*.png" | xargs -n 1 python3 rewrite.py
For PHP developers having this issue with the imagecreatefrompng function:
you can try suppressing the warning using the @ error-control operator:
$img = @imagecreatefrompng($file);
Here is a ridiculously brute-force answer:
I modified the gradlew script. Here is my new exec command at the end of the file:
exec "$JAVACMD" "${JVM_OPTS[@]}" -classpath "$CLASSPATH" org.gradle.wrapper.GradleWrapperMain "$@" | grep -v "libpng warning:"

Saving image without compressing

I know that JPG, BMP, GIF and other formats compress images. But can I take a snapshot of the display and save it without compression (in a binary file) programmatically (using C/C++, for example, or something else)?
BMP files aren't compressed by default. See here: https://en.wikipedia.org/wiki/BMP_file_format
http://www.zlib.net is your best solution for lossless compression in C. It's well maintained, used in a host of different software, and compatible with external archivers such as WinZip.
C++ offers wrappers around it such as boost::iostreams::zlib and boost::iostreams::gzip.
Zlib uses the deflate algorithm (RFC 1951); here is a very good explanation of the algorithm: http://www.zlib.net/feldspar.html
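As a taste of the zlib API, here is a hedged sketch of the one-shot compress() call applied to an arbitrary in-memory buffer (such as raw screen pixels). The function name is made up, and real code should also keep the original size around so it can inflate later:

#include <zlib.h>
#include <cstdio>
#include <vector>

// Compress a raw byte buffer (e.g. RGB pixel data) with zlib's one-shot API.
std::vector<unsigned char> deflate_buffer(const std::vector<unsigned char>& raw)
{
    uLongf dest_len = compressBound(raw.size());            // worst-case output size
    std::vector<unsigned char> out(dest_len);
    if (compress(out.data(), &dest_len, raw.data(), raw.size()) != Z_OK) {
        std::fprintf(stderr, "zlib compress failed\n");
        return {};
    }
    out.resize(dest_len);                                    // shrink to the real size
    return out;
}

Link with -lz; decompression is the mirror image via uncompress().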
The PAM format is uncompressed and really simple to understand.
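Writing the raw pixels yourself is only a few lines. Here is a hedged sketch (the function name is made up, and it assumes you already have a packed 24-bit RGB buffer of known width and height) that dumps the buffer as a binary PAM file:

#include <cstdio>

// Write an RGB buffer (3 bytes per pixel, rows top to bottom) as a binary PAM file.
bool write_pam(const char* path, const unsigned char* rgb, int width, int height)
{
    std::FILE* f = std::fopen(path, "wb");
    if (!f) return false;
    std::fprintf(f, "P7\nWIDTH %d\nHEIGHT %d\nDEPTH 3\nMAXVAL 255\nTUPLTYPE RGB\nENDHDR\n",
                 width, height);
    std::fwrite(rgb, 1, static_cast<size_t>(width) * height * 3, f);
    std::fclose(f);
    return true;
}

The resulting file opens directly in the netpbm tools, so it can later be converted to PNG or anything else if you change your mind about compression.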

How does Google's Page Speed lossless image compression work?

When you run Google's PageSpeed plugin for Firebug/Firefox on a website it will suggest cases where an image can be losslessly compressed, and provide a link to download this smaller image.
For example:
Losslessly compressing http://farm3.static.flickr.com/2667/4096993475_80359a672b_s.jpg could save 33.5KiB (85% reduction).
Losslessly compressing http://farm2.static.flickr.com/1149/5137875594_28d0e287fb_s.jpg could save 18.5KiB (77% reduction).
Losslessly compressing http://cdn.uservoice.com/images/widgets/en/feedback_tab_white.png could save 262B (11% reduction).
Losslessly compressing http://ajax.googleapis.com/ajax/libs/jqueryui/1.8.9/themes/base/images/ui-bg_flat_75_ffffff_40x100.png could save 91B (51% reduction).
Losslessly compressing http://www.gravatar.com/avatar/0b1bccebcd4c3c38cb5be805df5e4d42?s=45&d=mm could save 61B (5% reduction).
This applies across both JPG and PNG filetypes (I haven't tested GIF or others.)
Note too the Flickr thumbnails (all those images are 75x75 pixels.) Those are some pretty big savings. If this is really so great, why isn't Yahoo applying this server-side to their entire library and reducing their storage and bandwidth loads?
Even Stackoverflow.com shows some very minor savings:
Losslessly compressing http://sstatic.net/stackoverflow/img/sprites.png?v=3 could save 1.7KiB (10% reduction).
Losslessly compressing http://sstatic.net/stackoverflow/img/tag-chrome.png could save 11B (1% reduction).
I've seen PageSpeed suggest pretty decent savings on PNG files that I created using Photoshop's 'Save for Web' feature.
So my question is, what changes are they making to the images to reduce them by so much? I'm guessing there are different answers for different filetypes. Is this really lossless for JPGs? And how can they beat Photoshop? Should I be a little suspicious of this?
If you're really interested in the technical details, check out the source code:
png_optimizer.cc
jpeg_optimizer.cc
webp_optimizer.cc
For PNG files, they use OptiPNG with a trial-and-error approach:
// we use these four combinations because different images seem to benefit from
// different parameters and this combination of 4 seems to work best for a large
// set of PNGs from the web.
const PngCompressParams kPngCompressionParams[] = {
PngCompressParams(PNG_ALL_FILTERS, Z_DEFAULT_STRATEGY),
PngCompressParams(PNG_ALL_FILTERS, Z_FILTERED),
PngCompressParams(PNG_FILTER_NONE, Z_DEFAULT_STRATEGY),
PngCompressParams(PNG_FILTER_NONE, Z_FILTERED)
};
When all four combinations are applied, the smallest result is kept. Simple as that.
(N.B.: The optipng command line tool does that too if you provide -o 2 through -o 7)
For JPEG files, they use jpeglib with the following options:
JpegCompressionOptions()
: progressive(false), retain_color_profile(false),
retain_exif_data(false), lossy(false) {}
Similarly, WEBP is compressed using libwebp with these options:
WebpConfiguration()
: lossless(true), quality(100), method(3), target_size(0),
alpha_compression(0), alpha_filtering(1), alpha_quality(100) {}
There is also image_converter.cc which is used to losslessly convert to the smallest format.
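The "try several parameter sets and keep the smallest output" trick is easy to reproduce for your own data. Here is a simplified, hedged analogue in C++ using plain zlib strategies instead of the PNG filter/strategy pairs above (the function names are made up, and real PNG optimizers also re-filter the scanlines, which this sketch ignores):

#include <zlib.h>
#include <cstring>
#include <vector>

// Deflate `src` once with a given zlib strategy (Z_DEFAULT_STRATEGY, Z_FILTERED, ...).
static std::vector<unsigned char> deflate_with(const std::vector<unsigned char>& src, int strategy)
{
    z_stream strm;
    std::memset(&strm, 0, sizeof(strm));
    if (deflateInit2(&strm, Z_BEST_COMPRESSION, Z_DEFLATED, 15, 8, strategy) != Z_OK)
        return {};

    std::vector<unsigned char> out(deflateBound(&strm, src.size()));
    strm.next_in   = const_cast<unsigned char*>(src.data());
    strm.avail_in  = static_cast<uInt>(src.size());
    strm.next_out  = out.data();
    strm.avail_out = static_cast<uInt>(out.size());
    deflate(&strm, Z_FINISH);          // one shot: the output buffer is already big enough
    out.resize(strm.total_out);
    deflateEnd(&strm);
    return out;
}

// Try each strategy and keep whichever result is smallest, PageSpeed-style.
std::vector<unsigned char> smallest_deflate(const std::vector<unsigned char>& src)
{
    const int strategies[] = { Z_DEFAULT_STRATEGY, Z_FILTERED };
    std::vector<unsigned char> best;
    for (int s : strategies) {
        std::vector<unsigned char> candidate = deflate_with(src, s);
        if (!candidate.empty() && (best.empty() || candidate.size() < best.size()))
            best = candidate;
    }
    return best;
}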
I use jpegoptim to optimize JPG files and optipng to optimize PNG files.
If you're on bash, the command to losslessly optimize all JPGs in a directory (recursively) is:
find /path/to/jpgs/ -type f -name "*.jpg" -exec jpegoptim --strip-all {} \;
You can add -m[%] to jpegoptim to lossy compress JPG images, for example:
find /path/to/jpgs/ -type f -name "*.jpg" -exec jpegoptim -m70 --strip-all {} \;
To optimize all PNGs in a directory:
find /path/to/pngs/ -type f -name "*.png" -exec optipng -o2 {} \;
-o2 is the default optimization level; you can change this anywhere from -o2 up to -o7. Note that a higher optimization level means a longer processing time.
Take a look at http://code.google.com/speed/page-speed/docs/payload.html#CompressImages which describes some of the techniques/tools.
It's a matter of trading the encoder's CPU time for compression efficiency. Compression is a search for shorter representations; if you search harder, you'll find shorter ones.
There is also a matter of using image format capabilities to the fullest, e.g. PNG8+a instead of PNG24+a, optimized Huffman tables in JPEG, etc.
Photoshop doesn't really try hard to do that when saving images for the web, so it's not surprising that any tool beats it.
See:
ImageOptim (lossless) and ImageAlpha (lossy) for smaller PNG files (there is a high-level description of how it works), and
JPEGmini/MozJPEG (lossy) for a better JPEG compressor.
To Replicate PageSpeed's JPG Compression Results in Windows:
I was able to get exactly the same compression results as PageSpeed using the Windows version of jpegtran which you can get at www.jpegclub.org/jpegtran. I ran the executable using the DOS prompt (use Start > CMD). To get exactly the same file size (down to the byte) as PageSpeed compression, I specified Huffman optimization as follows:
jpegtran -optimize source_filename.jpg output_filename.jpg
For more help on compression options, at the command prompt, just type: jpegtran
Or to Use the Auto-generated Images from the PageSpeed tab in Firebug:
I was able to follow Pumbaa80's advice to get access to PageSpeed's optimized files. Hopefully the screenshot here provides further clarity for the Firefox environment. (But I was not able to get access to a local version of these optimized files in Chrome.)
And to Clean up the Messy PageSpeed Filenames using Adobe Bridge & Regular Expressions:
Although PageSpeed in Firefox was able to generate optimized image files for me, it also changed their names, turning simple names like:
nice_picture.jpg
into
nice_picture_fff5e6456e6338ee09457ead96ccb696.jpg
I discovered that this seems to be a common complaint. Since I didn't want to rename all my pictures by hand, I used Adobe Bridge's Rename tool along with a Regular Expression. You could use other rename commands/tools that accept Regular Expressions, but I suspect that Adobe Bridge is readily available for most of us working with PageSpeed issues!
Start Adobe Bridge
Select all files (using Control A)
Select Tools > Batch Rename (or Control Shift R)
In the Preset field select "String Substitution". The New Filenames fields should now display “String Substitution”, followed by "Original Filename"
Enable the checkbox called “Use Regular Expression”
In the “Find” field, enter the Regular Expression (which will select all characters starting at the rightmost underscore separator):
_(?!.*_)(.*)\.jpg$
In the “Replace with” field, enter:
.jpg
Optionally, click the Preview button to see the proposed batch renaming results, then close
Click the Rename button
Note that after processing, Bridge deselects files that were not affected. If you want to clean all your .png files, you need to reselect all the images and modify the configuration above (for "png" instead of "jpg"). You can also save the configuration above as a preset, such as "Clean PageSpeed jpg Images", so that you can clean filenames quickly in future.
Configuration Screenshot / Troubleshooting
If you have troubles, it's possible that some browsers might not show the RegEx expression above properly (blame my escape characters) so for a screenshot of the configuration (along with these instructions), see:
How to Use Adobe Bridge's Batch Rename tool to Clean up Optimized PageSpeed Images that have Messy Filenames
In my opinion the best option out there that effectively handles most image formats in one go is trimage.
It effectively utilizes optipng, pngcrush, advpng and jpegoptim based on the image format and delivers near-perfect lossless compression.
It is pretty easy to use from the command line:
sudo apt-get install trimage
trimage -d images/*
and voila! :-)
Additionally you will find a pretty simple interface for doing it manually as well.
There's a very handy batch script that recursively optimizes images beneath a folder using OptiPNG (from this blog):
FOR /F "tokens=*" %G IN ('dir /s /b *.png') DO optipng -nc -nb -o7 -full %G
ONE LINE!
If you are looking for batch processing, keep in mind that trimage complains if you don't have an X server available. In that case just write a bash or PHP script to do something like:
<?php
echo "Processing jpegs<br />";
exec("find /home/example/public_html/images/ -type f -name '*.jpg' -exec jpegoptim --strip-all {} \;");
echo "Processing pngs<br />";
exec("find /home/example/public_html/images/ -type f -name '*.png' -exec optipng -o7 {} \;");
?>
Change the options to suit your needs.
For Windows there are several drag-and-drop interfaces for easy access.
https://sourceforge.net/projects/nikkhokkho/files/FileOptimizer/
For PNG files I found this one, apparently 3 different tools wrapped in one GUI. Just drag and drop and it does it for you.
https://pnggauntlet.com/
It takes time though; try compressing a 1 MB png file. I was amazed how much CPU went into the compression comparison, which has to be what is going on here. It seems the image is compressed a hundred ways and the best one wins :D
Regarding the JPG compression, I too feel it's risky to strip off color profiles and all the extra info; however, if everyone is doing it, it's the business standard, so I just did it myself :D
I saved 113 MB across 5500 files on a WP install today, so it's definitely worth it!

Decode JPEG to obtain uncompressed data

I want to decode JPEG files and obtain the uncompressed decoded output in BMP/RGB format. I am using GNU/Linux and C/C++.
I had a look at libjpeg, but there seemed not to be any good documentation available.
So my questions are:
Where is documentation on libjpeg?
Can you suggest other C-based jpeg-decompression libraries?
The documentation for libjpeg comes with the source code. Since you haven't found it yet:
Download the source-code archive and open the file libjpeg.doc. It's a plain ASCII file, not a Word document, so it's better to open it in Notepad or another plain-text editor.
There are some other .doc files as well. Most of them aren't that interesting though.
Unfortunately I cannot recommend any other library besides libjpeg. I tried a couple of alternatives, but libjpeg always won. It's pretty easy to work with once you have the basics done, and it's also the most complete and most stable jpeg library out there.
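Once you have dug the documentation out of the tarball, the decode loop itself is quite short. Here is a hedged minimal sketch against the standard libjpeg API (the file name is a placeholder, and it relies on libjpeg's default error handler, which simply prints a message and exits):

#include <stdio.h>
#include <vector>
#include <jpeglib.h>

int main()
{
    FILE* infile = fopen("input.jpg", "rb");       // placeholder file name
    if (!infile) return 1;

    jpeg_decompress_struct cinfo;
    jpeg_error_mgr jerr;
    cinfo.err = jpeg_std_error(&jerr);             // default error handling: print and exit()

    jpeg_create_decompress(&cinfo);
    jpeg_stdio_src(&cinfo, infile);
    jpeg_read_header(&cinfo, TRUE);
    jpeg_start_decompress(&cinfo);                 // after this, output_width/height/components are valid

    const int row_stride = cinfo.output_width * cinfo.output_components;
    std::vector<unsigned char> pixels(row_stride * cinfo.output_height);

    while (cinfo.output_scanline < cinfo.output_height) {
        JSAMPROW row = &pixels[cinfo.output_scanline * row_stride];
        jpeg_read_scanlines(&cinfo, &row, 1);      // decode one scanline of RGB (or grayscale)
    }

    jpeg_finish_decompress(&cinfo);
    jpeg_destroy_decompress(&cinfo);
    fclose(infile);

    // `pixels` now holds the uncompressed image; write it out as BMP/PPM however you like.
    return 0;
}

Build with -ljpeg (libjpeg-turbo exposes the same API).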
MagickWand is the C API for ImageMagick:
http://imagemagick.org/script/magick-wand.php
I have not used it, but the documentation looks quite extensive.
You should check out Qt's QImage. It has a pretty simple interface that makes this task really easy, and setup is straightforward on every platform.
If Qt is overkill, you can try Magick++ http://www.imagemagick.org/Magick++/. It supports similar operations and is also well suited for that sort of task. The last time I used it, I struggled a bit with dependencies for it on Windows, but don't recall much trouble on Linux.
For Magick++'s Image class, the function you probably want is getConstPixels.
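If you do go the Qt route, the whole job is a couple of calls. A small hedged sketch (file names are placeholders; JPEG support comes from Qt's image-format plugins, so they need to be present):

#include <QGuiApplication>
#include <QImage>

int main(int argc, char** argv)
{
    QGuiApplication app(argc, argv);                    // keeps image-format plugin loading happy

    QImage img;
    if (!img.load("input.jpg"))                         // decode the JPEG
        return 1;

    img = img.convertToFormat(QImage::Format_RGB888);   // force packed 24-bit RGB
    const uchar* pixels = img.constBits();              // raw, uncompressed pixel data
    (void)pixels;                                       // ...hand this to your own code, or:

    return img.save("output.bmp", "BMP") ? 0 : 1;       // let Qt write a BMP for you
}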
I have code that you can copy (or just use as a reference) for loading a jpeg image using the libjpeg library.
You can browse the code here: http://code.google.com/p/kgui/source/browse/trunk/kguiimage.cpp
Just look for the function LoadJPGImage.
The code is set up to use a C++ binding to my DataHandle class for loading the image; that way the image can be a file, data already in memory, or whatever.
A slightly out-of-the-box solution is to acquire a copy of the netpbm tools, which transform images from pretty much any format to any other format via one of several very simple intermediate formats. They work well from the shell, and are most often used in pipes to read some arbitrary image, perform an operation on it, and write it out in some other format.
The pbm formats can be as simple as a plain ASCII header followed by the RGB data in ASCII or binary. They are intended to be simple enough to use without requiring a library.
JPEG is supported in netpbm by read and write filters that are implemented on top of libjpeg.