I am attempting to find documentation of the psd file format so I can read in a .psd and then save out the individual layers as files, along with do other modifications. Does anyone know of any document in on the .psd file format? (Just for reference, I will be writing this in C++)
If there are any code examples of loading a .psd file in C++ then I would appreciate them being linked.
(Please don't turn this into a "just use XXX software" thread. This is not homework, or anything related to that. I am doing this because I think it will be a fun project to work on. I will ask for posts to be downvoted if this happens.)
There's also some Objective-C code on GitHub (it should be easily understandable to anyone with a C++ background), which is also the source of this gem, which appears to sum it up nicely:
At this point, I'd like to take a moment to speak to you about the Adobe PSD format.
PSD is not a good format. PSD is not even a bad format. Calling it such would be an
insult to other bad formats, such as PCX or JPEG. No, PSD is an abysmal format. Having
worked on this code for several weeks now, my hate for PSD has grown to a raging fire
that burns with the fierce passion of a million suns.
If there are two different ways of doing something, PSD will do both, in different
places. It will then make up three more ways no sane human would think of, and do those
too. PSD makes inconsistency an art form. Why, for instance, did it suddenly decide
that these particular chunks should be aligned to four bytes, and that this alignment
should not be included in the size? Other chunks in other places are either unaligned,
or aligned with the alignment included in the size. Here, though, it is not included.
Either one of these three behaviours would be fine. A sane format would pick one. PSD,
of course, uses all three, and more.
Trying to get data out of a PSD file is like trying to find something in the attic of
your eccentric old uncle who died in a freak freshwater shark attack on his 58th
birthday. That last detail may not be important for the purposes of the simile, but
at this point I am spending a lot of time imagining amusing fates for the people
responsible for this Rube Goldberg of a file format.
Earlier, I tried to get a hold of the latest specs for the PSD file format. To do this,
I had to apply to them for permission to apply to them to have them consider sending
me this sacred tome. This would have involved faxing them a copy of some document or
other, probably signed in blood. I can only imagine that they make this process so
difficult because they are intensely ashamed of having created this abomination. I
was naturally not gullible enough to go through with this procedure, but if I had done
so, I would have printed out every single page of the spec, and set them all on fire.
Were it within my power, I would gather every single copy of those specs, and launch
them on a spaceship directly into the sun.
PSD is not my favourite file format.
Just so you are warned. :)
This will not be a fun project; the .psd format is big. It incorporates every feature Adobe has put into Photoshop over many years.
I believe the specification can be had from Adobe, but they don't just hand it out to the public. You'll have to contact them and jump through some hoops first.
The PSD file format specification as written by Adobe is here:
http://www.adobe.com/devnet-apps/photoshop/fileformatashtml/
Last update: June 2012. As far as I know this is the best available source on the PSD file format, even though it contains a few mistakes.
First, I recommend starting by dividing the PSD into its blocks.
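As a hedged starting point, here is a minimal C++ sketch of that first division, based on the section layout in Adobe's published spec (a 26-byte big-endian header followed by three length-prefixed sections). Error handling and the 8-byte lengths used by PSB files are left out:

#include <cstdint>
#include <cstdio>

// Read a big-endian unsigned integer of `bytes` bytes from a file.
static uint32_t read_be(FILE* f, int bytes) {
    uint32_t v = 0;
    for (int i = 0; i < bytes; ++i)
        v = (v << 8) | static_cast<uint32_t>(std::fgetc(f));
    return v;
}

int main(int argc, char** argv) {
    if (argc < 2) return 1;
    FILE* f = std::fopen(argv[1], "rb");
    if (!f) return 1;

    // --- File header (26 bytes, all values big-endian) ---
    char sig[4];
    std::fread(sig, 1, 4, f);                 // must be "8BPS"
    uint32_t version  = read_be(f, 2);        // 1 = PSD, 2 = PSB
    std::fseek(f, 6, SEEK_CUR);               // reserved, must be zero
    uint32_t channels = read_be(f, 2);
    uint32_t height   = read_be(f, 4);
    uint32_t width    = read_be(f, 4);
    uint32_t depth    = read_be(f, 2);        // bits per channel
    uint32_t mode     = read_be(f, 2);        // 3 = RGB, 4 = CMYK, ...
    std::printf("v%u %ux%u, %u channels, %u bits, mode %u\n",
                version, width, height, channels, depth, mode);

    // --- The next three sections are each prefixed with a 4-byte length ---
    uint32_t colorModeLen = read_be(f, 4);    // Color Mode Data
    std::fseek(f, colorModeLen, SEEK_CUR);
    uint32_t resourcesLen = read_be(f, 4);    // Image Resources
    std::fseek(f, resourcesLen, SEEK_CUR);
    uint32_t layerMaskLen = read_be(f, 4);    // Layer and Mask Info (the layers live here)
    std::fseek(f, layerMaskLen, SEEK_CUR);
    // What remains is the merged Image Data section.
    std::fclose(f);
    return 0;
}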
Enjoy!
MyPSD::CPSD is a C++ class that can load images saved in Adobe Photoshop's native format.
http://www.codeproject.com/Articles/10398/Import-Adobe-Photoshop-psd-images
MolecularMatters' psd_sdk seems like a good library to take inspiration from: https://github.com/MolecularMatters/psd_sdk
It lets you read layers from a .psd file, and much more.
I want to know if it is possible to read PSD files from C++, resize 1 specific Group programmatically, then render the result to a PNG. So far I found some code that reads and renders PSD files "as is", here: http://www.codeproject.com/Articles/10398/Import-Adobe-Photoshop-psd-images
The problem I foresee is that just rendering a PSD as is is pretty easy, but resizing Groups programmatically would require much more effort.
The problem I see is that Photoshop PSD files are notoriously difficult to read... see this poor soul.
Also, the code you link to is from 2006, which pre-dates the current "Smart Objects" and real-time editing, so I doubt it will work for anything recent. You may be fine, though, if you only want to read some old files, or new, uncomplicated ones from a specific project that doesn't use such features.
I would suggest you export the layers from Photoshop as PNGs and then work with that. My answer here might get you started.
So far, there seems to be no library to do what I want to do. Reading out the composite preview from PSD files is easy, as demonstrated in the codeproject link I posted in my question, but manipulating individual groups or layers, then rendering the composite preview again seems impossible for now.
Not sure where else to ask this, so I figured I'd give good old stackoverflow a shot.
Let's say, by chance, I would like to write a library or set of libraries that will create PDFs and convert files to PDF, AND I couldn't care less about how long it will take me to complete (3 months - 10 years.. whatever). I have absolutely no interest in paying for a toolkit... the point of this would be to learn how to manipulate and create files like PDFs. There's nothing business critical about the project, I just want to learn how to do it. Where do I start? I would imagine something like this would be written in C++, but I'm not sure... maybe high-level languages would work as well. I'm not looking for someone to tell me exactly how to do it, but to send me in the right direction, or at least point out the concepts I would need to concretely grasp before proceeding with such a project.
Any advice and help in directing me here is greatly appreciated : )
Well, you will need a very good understanding of the PDF file format. Adobe publishes the standard and you can start at their site. You can start with the base 1.7 standard and then read the cumulative supplements from there. It is a daunting task, but it can be done and you can pretty much use any language you want, because in the end you are just generating bytes that can be saved to a file.
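To get a feel for what "just generating bytes" means in practice, here is a minimal C++ sketch (no library; the object layout is hand-rolled from the basic PDF 1.4 structure) that writes a one-page "Hello" document. The file name and text are just placeholders; the point is the object/xref/trailer mechanics:

#include <cstdio>
#include <string>
#include <vector>

int main() {
    std::string pdf = "%PDF-1.4\n";
    std::vector<size_t> offsets;             // byte offset of each indirect object

    // The page content stream: select a font, move to (72,720), draw text.
    std::string content = "BT /F1 24 Tf 72 720 Td (Hello, PDF) Tj ET";
    const std::string objects[] = {
        "<< /Type /Catalog /Pages 2 0 R >>",
        "<< /Type /Pages /Kids [3 0 R] /Count 1 >>",
        "<< /Type /Page /Parent 2 0 R /MediaBox [0 0 612 792] "
            "/Resources << /Font << /F1 5 0 R >> >> /Contents 4 0 R >>",
        "<< /Length " + std::to_string(content.size()) + " >>\nstream\n"
            + content + "\nendstream",
        "<< /Type /Font /Subtype /Type1 /BaseFont /Helvetica >>",
    };
    for (size_t i = 0; i < 5; ++i) {
        offsets.push_back(pdf.size());       // the xref table needs this offset
        pdf += std::to_string(i + 1) + " 0 obj\n" + objects[i] + "\nendobj\n";
    }

    // Cross-reference table: one 20-byte entry per object.
    size_t xref = pdf.size();
    pdf += "xref\n0 6\n0000000000 65535 f \n";
    for (size_t off : offsets) {
        char entry[32];
        std::snprintf(entry, sizeof entry, "%010zu 00000 n \n", off);
        pdf += entry;
    }
    pdf += "trailer\n<< /Size 6 /Root 1 0 R >>\nstartxref\n"
         + std::to_string(xref) + "\n%%EOF\n";

    FILE* f = std::fopen("hello.pdf", "wb");
    std::fwrite(pdf.data(), 1, pdf.size(), f);
    std::fclose(f);
    return 0;
}

A real library would of course need a proper object model, content-stream builder, and font handling on top of this, but every PDF it emits still ends in exactly this header/objects/xref/trailer shape.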
If you want to convert from, let's say, Word documents, it will get a little trickier. Microsoft has published their file formats, which you would have to learn, and then learn how to translate into the corresponding PDF formatting. Also note that the .doc and .docx formats are completely separate file formats and would require separate engines to convert them.
With unlimited time, it is definitely doable, you would just need to ask yourself if it is worth it.
I've seen a lot of games use something similar to a .DAT file or a specific file type that the game has for itself. I'm just beginning with C++ and DirectX and I was interested in keeping my information in something similar to a .DAT.
My initial conception was that it would hold information on the files you wanted to store within the .DAT file. Something similar to a .RAR file. Unfortunately, my googling skills did not help me in finding the answers.
Right now I'm simply loading textures and sound files from a folder called Data.
EDIT: While I understand that .DAT is short for data, and I've found that a .DAT file generally contains any assortment of information, I'm still unsure about how to go about something like packing images and sound files into any type of file and being able to read them back.
I'm not sure about using fstreams to achieve my task; however, I will look into streams related to storing data and how to properly read from that data. Meanwhile, if anyone has another answer to offer based on this new information, it would be appreciated.
EDIT: Thanks to the answers, I stumbled across a similar question on stackoverflow and felt I'd share it here. Combining resources into a single binary file
I don't think there is really such a thing as a .dat file format. It's short for "data," and different applications just put some proprietary stuff in it and call it ".dat." You can read up on the fstream classes to do file I/O in C++. See Input/Output with files.
What you then do is make up your own file format. For example, the first 4 bytes are an int that indicates the number of blocks in the .dat, and then for each block you have 4 bytes indicating the length of the block, 4 bytes indicating the type of the block, and then the variable-length data itself... something like that.
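A quick C++ sketch of exactly that layout (the type codes and function names are made up for illustration; note this writes integers in the machine's native byte order, so pick one convention and stick to it if you care about portability):

#include <cstdint>
#include <fstream>
#include <string>
#include <vector>

struct Block {
    uint32_t type;              // e.g. 1 = texture, 2 = sound (your own convention)
    std::vector<char> data;
};

// Layout: [count][len][type][bytes] [len][type][bytes] ...
void writeDat(const std::string& path, const std::vector<Block>& blocks) {
    std::ofstream out(path, std::ios::binary);
    uint32_t count = static_cast<uint32_t>(blocks.size());
    out.write(reinterpret_cast<const char*>(&count), 4);
    for (const Block& b : blocks) {
        uint32_t len = static_cast<uint32_t>(b.data.size());
        out.write(reinterpret_cast<const char*>(&len), 4);
        out.write(reinterpret_cast<const char*>(&b.type), 4);
        out.write(b.data.data(), len);
    }
}

std::vector<Block> readDat(const std::string& path) {
    std::ifstream in(path, std::ios::binary);
    uint32_t count = 0;
    in.read(reinterpret_cast<char*>(&count), 4);
    std::vector<Block> blocks(count);
    for (Block& b : blocks) {
        uint32_t len = 0;
        in.read(reinterpret_cast<char*>(&len), 4);
        in.read(reinterpret_cast<char*>(&b.type), 4);
        b.data.resize(len);
        in.read(b.data.data(), len);
    }
    return blocks;
}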
DAT obviously stands for data, and there is no real or de facto standard on what that extension actually refers to. Your decisions on the best file formats should be based on technical considerations, not pointless attempts at security through obscurity.
Professional games use a technique where they put all the needed resources (models, textures, sounds, AI, config, etc.) zipped/packed into a single file, which makes them faster to manage and harder to change (some even build a virtual file system out of what's inside the data file). What goes inside the file differs depending on the needs of the game and the data structures that you use.
If you're just starting out in gamedev, I recommend you stick with keeping all your assets separate and don't bother too much about packing them into a single file.
Now if you really want to start using a packed format here's a good pointer:
Creating a PAK File Format
Here's a link which claims that .dat is a movie format, 'DAT' being short for Digital Audio Tape.
I'm not sure I believe the link, but I do remember something about a Microsoft supported format called DAT, from long ago, when I used an earlier version of Windows.
It makes more sense as a logical extension for a DATA file of some kind.
.dat, as others have said, is literally just a data file. In reality, the file extension means nothing other than association with a program. For example, I could make a word processor that saves all the documents with the .mp3 file extension. These files wouldn't be playable in any media software, but the software might try. File extensions are used to help programs know what types of files they can and cannot open; however, those rules don't have to be followed.
Anyway, you can dump any sort of information to a file. Programmers/software writers will often choose .dat as the extension of that file because it has become the standard to signify 'this file just holds a ton of data' and that the data doesn't necessarily hold any standardized headers, footers, or formatting.
A dat file could really contain anything. It might be as simple as a zip archive with the extension changed, or it could be a completely custom file type. If you're just starting out, you probably don't want to write your own file format, although doing so can be fun and educational. If you want to encapsulate your data files into some kind of container, you should probably go with a zip, paq, or maybe tar.gz.
Because the open source geo-coders cannot begin to compare to Google's or even Yahoo's, I would like to start a project to create a good open source geo-coder. Just to clarify, a geo-coder takes some text (usually with some constraints) and returns one or more lat/lon pairs.
I realize that this is a difficult and gargantuan task, so I am wondering how you might get started. What would you read? What algorithms would you familiarize yourself with? What code would you review?
And also, assuming you were going to develop this very agilely, what would you want the first prototype to be able to do?
EDIT: Let's set aside the data question for now. I am going to use OpenStreetMap data, along with a database of waypoints that I have. I would later plan to include other data sets as well, and I realize the geo-coder would be inherently limited by the quality of the original data.
The first (and probably blocking) problem would be: where do you get your data from? (unless you are willing to pay thousands of dollars for proprietary sets).
You could build a geocoding-api on top of OpenStreetMap (they publish their data in dumps on a regular basis) I guess, but that one was still very incomplete last time I checked.
Algorithms are easy. Good mapping data, however, is expensive. Very expensive.
Google drove their cars all over the world, collecting this data among other things.
From a .NET point of view these articles might be interesting for you:
Writing Your Own GPS Applications: Part I
Writing Your Own GPS Applications: Part 2
Writing GIS and Mapping Software for .NET
I've only glanced at the articles but they've been on CodeProject's 'Most Popular' list for a long time.
And maybe this CodePlex project which the author of the articles above made available.
I would start at the absolute beginning by figuring out how you're going to get the data that matches a street address with a geocode. Either Google had people going around with GPS units, OR they got the information from some existing source. That existing source may have been... (all guesses)
The Postal Service
Some existing maps(printed)
A bunch of enthusiastic users who were early adopters of GPS technology and were more than willing to enter in street addresses and GPS coordinates
Some government entity (or entities)
Their own satellites
etc
I guess what I'm getting at is the information was either imported from somewhere or was input by someone via some interface. As my starting point I would look at how to get that information. In an open source situation, you may be able to get a bunch of enthusiastic people to enter information.
So for my first prototype, boring as it would be, I would create a form for entering information.
Then you need to know the math for figuring out the closest distance (as the crow flies). From there, try to figure out how to include roads. (My guess is you would have to have a data point for each and every curve, where you hold the geocode location of the curve and the angle of the road on a north/south and east/west vector. You'd probably need to take incline into account, too, to get accurate road measurements.)
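For the as-the-crow-flies part, the standard tool is the haversine formula, which gives the great-circle distance between two lat/lon points. A small C++ sketch (it treats the Earth as a perfect sphere, which is accurate to within roughly 0.5%):

#include <cmath>
#include <cstdio>

// Great-circle distance in kilometres between two lat/lon points (in degrees),
// using the haversine formula on a spherical Earth.
double haversineKm(double lat1, double lon1, double lat2, double lon2) {
    const double R = 6371.0;                        // mean Earth radius, km
    const double rad = 3.14159265358979323846 / 180.0;
    double dLat = (lat2 - lat1) * rad;
    double dLon = (lon2 - lon1) * rad;
    double a = std::sin(dLat / 2) * std::sin(dLat / 2)
             + std::cos(lat1 * rad) * std::cos(lat2 * rad)
             * std::sin(dLon / 2) * std::sin(dLon / 2);
    return 2.0 * R * std::asin(std::sqrt(a));
}

int main() {
    // New York to London: roughly 5570 km.
    std::printf("%.1f km\n", haversineKm(40.7128, -74.0060, 51.5074, -0.1278));
}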
That's just where I'd start.
But in all honesty, I wouldn't even start on this. Other programmers have done it already, I'm more interested in what hasn't already been done.
get my free raw data from somewhere like http://ipinfodb.com/ip_database.php
load it into a database, denormalizing for fast lookups
design my API
build it out as a RESTful web service
return results in varying formats: JSON, XML, CSV, raw text
The first prototype should accept a ZIP code and return lat/lon in raw text.
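That first prototype really can be that boring. A minimal C++ sketch, with a hard-coded table of made-up sample centroids standing in for the real database lookup:

#include <cstdio>
#include <string>
#include <unordered_map>

struct LatLon { double lat, lon; };

int main(int argc, char** argv) {
    // Stand-in for the denormalized database table: ZIP -> centroid.
    // These coordinates are illustrative sample values only.
    std::unordered_map<std::string, LatLon> zips = {
        {"10001", {40.7506, -73.9972}},
        {"94103", {37.7726, -122.4099}},
    };
    if (argc < 2) return 1;
    auto it = zips.find(argv[1]);
    if (it == zips.end()) { std::puts("not found"); return 1; }
    // "Raw text" response, exactly as the first prototype calls for.
    std::printf("%f,%f\n", it->second.lat, it->second.lon);
    return 0;
}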
Last night before going to bed, I browsed through the Scalar Data section of Learning Perl again and came across the following sentence:
the ability to have any character in a string means you can create, scan, and manipulate raw binary data as strings.
An idea immediately hit me that I could actually let Perl scan the pictures that I have stored on my hard disk to check if they contain the string Adobe. It seems by doing so, I can tell which of them have been photoshopped. So I tried to implement the idea and came up with the following code:
#!perl
use autodie;
use strict;
use warnings;

{
    local $/ = "\n\n";    # read in paragraph-sized chunks, not lines
    my $dir = 'f:/TestPix/';
    my @pix = glob "$dir/*";
    foreach my $file (@pix) {
        open my $pic, '<', $file;
        binmode $pic;     # these are binary files, not text
        while (<$pic>) {
            if (/Adobe/) {
                print "$file\n";
                last;     # no need to keep scanning this file
            }
        }
    }
}
Excitingly, the code seems to be really working and it does the job of filtering out the pictures that have been photoshopped. But the problem is that many pictures are edited by other utilities, so I'm kind of stuck there. Do we have some simple but universal method to tell if a digital picture has been edited or not, something like
if (!= /the original format/) {...}
Or do we simply have to add more conditions? like
if (/Adobe/|/ACDSee/|/some other picture editors/)
Any ideas on this? Or am I oversimplifying due to my miserably limited programming knowledge?
Thanks, as always, for any guidance.
Your best bet in Perl is probably ExifTool. This gives you access to whatever non-image information is embedded into the image. However, as other people said, it's possible to strip this information out, of course.
I'm not going to say there is absolutely no way to detect alterations in an image, but the problem is extremely difficult.
The only person I know of who claims to have an answer is Dr. Neal Krawetz, who claims that digitally altered parts of an image will have different compression error rates from the original portions. He claims that re-saving a JPEG at different quality levels will highlight these differences.
I have not found this to be the case, in my investigations, but perhaps you might have better results.
No. There is no functional distinction between a perfectly edited image, and one which was the way it is from the start - it's all just a bag of pixels in the end, after all, and any other metadata you can remove or forge all you want.
The name of the graphics program used to edit the image is not part of the image data itself but of something called metadata, which may be stored in the image file but, as others have noted, is neither required (some programs may not store it, and some may give you the option of not storing it) nor reliable: if you forged an image, you might have forged the metadata as well.
So the answer to your question is "no, there's no way to universally tell if the pic was edited or not", although some image editing software may write its signature into the image file, and it'll be left there by the carelessness of the editing person.
If you're inclined to learn more about image processing in Perl, you could take a look at some of the excellent modules CPAN has to offer:
Image::Magick - read, manipulate and write of a large number of image file formats
GD - create colour drawings using a large number of graphics primitives, and emit the drawings in various formats.
GD::Graph - create charts
GD::Graph3d - create 3D Graphs with GD and GD::Graph
However, there are other utilities available for identifying various image formats. It's more of a question for Super User, but for various unix distros you can use file to identify many different types of files, and for MacOSX, Graphic Converter has never let me down. (It was even able to open the bizarre multi-file X-ray of my cat's shattered pelvis that I got on a disc from the vet.)
How would you know what the original format was? I'm pretty sure there's no guaranteed way to tell if an image has been modified.
I can just open the file (with my favourite programming language and filesystem API) and just write whatever I want into that file willy-nilly. As long as I don't screw something up with the file format, you'd never know it happened.
Heck, I could print the image out and then scan it back in; how would you tell it from an original?
As others have stated, there is no way to know if the image was doctored. I'm guessing what you basically want to know is the difference between a realistic photograph and one that has been enhanced or modified.
There's always the option of running some extremely complex image-recognition algorithm that would analyze every pixel in your image and do some very complicated stuff to determine whether it was doctored. This solution would probably involve AI that examines millions of photos, both doctored and untouched, and learns from them. However, this is more of a theoretical solution than a practical one... you would probably only see it in movies. It would be extremely complex to develop and probably take years, and even if you did get something like this to work, it probably still wouldn't be correct 100% of the time. I'm guessing AI technology isn't at that level yet and could take a while until it is.
A not-commonly-known feature of exiftool allows you to recognize the originating software through an analysis of the JPEG quantization tables (not relying on image metadata). It recognizes tables written by many applications. Note that some cameras may use the same quantization tables as some applications, so this isn't a 100% solution, but it is worth looking into. Here is an example of exiftool run on two images, the first of which was edited by Photoshop.
> exiftool -jpegdigest a.jpg b.jpg
======== a.jpg
JPEG Digest : Adobe Photoshop, Quality 10
======== b.jpg
JPEG Digest : Canon EOS 30D/40D/50D/300D, Normal
2 image files read
This will work even if the metadata has been removed.
There is existing software out there which uses various techniques (compression artifacting, comparison to signature profiles in a database of cameras, etc.) to analyze the actual image data for evidence of alteration. If you have access to such software and the software available to you provides an API for external access to these analysis functions, then there's a decent chance that a Perl module exists which will interface with that API and, if no such module exists, it could probably be created rather quickly.
In theory, it would also be possible to implement the image analysis code directly in native Perl, but I'm not aware of anyone having done so and I expect that you'd be better off writing something that low-level and processor-intensive in a fully-compiled language (e.g., C/C++) rather than in Perl.
JPEGsnoop (http://www.impulseadventure.com/photo/jpeg-snoop.html) is a tool that does the job reasonably well.
If there has been any cloning, there is a variation in the pixel density, or concentration, which sometimes shows up upon manual inspection: a Photoshop-cloned area will have an unnaturally even pixel density compared with the natural variation of a scanned image.