Creating PDF from PS programmatically on embedded platform - C++

Is there a library/tool usable from C/C++ that can convert a PS (PostScript) file to a PDF file on an embedded platform (proprietary operating system, neither Windows nor Linux)?
I was looking for some kind of library that could be ported to our OS. Basically the only thing I have found is Ghostscript, but there is a licensing issue: if I understood it correctly, we would have to make our source public, which is not possible for us...
Maybe a little more background: we are trying to find a format that is easily viewable by the user. We already have our output in PS for other reasons (printing). Now we want to provide this output as a standalone file, so we are looking for a feasible file format. We are considering PS itself, but the typical user does not have a PS viewer, which is why I am trying to convert it to PDF. So perhaps the alternative question is: is there another format that can easily be produced from PS and that a "regular" user can view?

The main complexity in converting PostScript to something else comes from the fact that PostScript is a programming language: PostScript files are in fact programs executed on the printer.
In contrast to PostScript, PDF is not a programming language. When converting PostScript to PDF (or anything else), you actually have to run the PostScript program and record the graphics-primitive calls it makes during execution.
This general approach is needed whenever you want to convert PostScript programs from arbitrary sources to PDF.
But you wrote that you are creating the PostScript code yourself. Perhaps your PostScript program is just a linear sequence of calls to drawing primitives and does not use anything like subroutines or control structures.
If not, it might still be easy to change your generator to perform at creation time the computations that are currently done at print time. You would end up with a linear sequence of calls to drawing primitives.
Once no computation is done at print time anymore, it should not be too hard to create PDF directly instead of PostScript. This answer mentions an open-source PDF generation library that uses an MIT-style license.
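To make that concrete, here is a minimal sketch (not the library mentioned above) of a C++ program that emits a single-page PDF directly from a fixed, linear list of drawing operations. The page size, operators and output file name are illustrative only; a real generator would build the content stream from your own primitive calls.
// A minimal sketch: emit a single-page PDF from a fixed list of drawing ops.
#include <cstdio>
#include <string>
#include <vector>

int main() {
    // The "linear sequence of drawing primitives", written as PDF operators:
    // a stroked line followed by a line of text.
    std::string content =
        "0 0 1 RG 2 w 72 720 m 540 720 l S\n"
        "BT /F1 24 Tf 72 680 Td (Hello, PDF) Tj ET\n";

    std::vector<std::string> objects = {
        "<< /Type /Catalog /Pages 2 0 R >>",
        "<< /Type /Pages /Kids [3 0 R] /Count 1 >>",
        "<< /Type /Page /Parent 2 0 R /MediaBox [0 0 612 792] "
            "/Resources << /Font << /F1 5 0 R >> >> /Contents 4 0 R >>",
        "<< /Length " + std::to_string(content.size()) + " >>\nstream\n" +
            content + "endstream",
        "<< /Type /Font /Subtype /Type1 /BaseFont /Helvetica >>"
    };

    std::string pdf = "%PDF-1.4\n";
    std::vector<std::size_t> offsets;            // byte offset of each object
    for (std::size_t i = 0; i < objects.size(); ++i) {
        offsets.push_back(pdf.size());
        pdf += std::to_string(i + 1) + " 0 obj\n" + objects[i] + "\nendobj\n";
    }

    std::size_t xref_pos = pdf.size();           // cross-reference table
    pdf += "xref\n0 " + std::to_string(objects.size() + 1) + "\n";
    pdf += "0000000000 65535 f \n";
    char entry[32];
    for (std::size_t off : offsets) {
        std::snprintf(entry, sizeof entry, "%010zu 00000 n \n", off);
        pdf += entry;
    }
    pdf += "trailer\n<< /Size " + std::to_string(objects.size() + 1) +
           " /Root 1 0 R >>\nstartxref\n" + std::to_string(xref_pos) +
           "\n%%EOF\n";

    if (FILE* f = std::fopen("out.pdf", "wb")) {
        std::fwrite(pdf.data(), 1, pdf.size(), f);
        std::fclose(f);
    }
    return 0;
}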

The AGPL licence for Ghostscript would indeed require you to make your source open. However, Ghostscript is dual-licensed: in addition to the AGPL licence you can purchase a commercial licence, which does not require you to open-source your own code.
Rather than converting to PDF you can, of course, also simply use Ghostscript to render the PostScript to a bitmap; it is usually pretty easy to wrap a viewer around that.
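For completeness, here is a rough sketch of driving Ghostscript's C API (gsapi) in-process to turn PostScript into PDF, which is only an option once the licensing question is settled. The wrapper function and its arguments are illustrative, and the header location varies between installs.
// Sketch: convert PS to PDF in-process via Ghostscript's gsapi.
#include <ghostscript/iapi.h>
#include <string>

int ps_to_pdf(const std::string& ps_path, const std::string& pdf_path) {
    void* inst = nullptr;
    if (gsapi_new_instance(&inst, nullptr) < 0)
        return -1;

    std::string out_arg = "-sOutputFile=" + pdf_path;
    const char* argv[] = {
        "gs",                         // argv[0] is ignored by Ghostscript
        "-dBATCH", "-dNOPAUSE", "-dSAFER",
        "-sDEVICE=pdfwrite",          // a raster device would render a bitmap instead
        out_arg.c_str(),
        ps_path.c_str()
    };
    int code = gsapi_init_with_args(inst, (int)(sizeof argv / sizeof *argv),
                                    const_cast<char**>(argv));
    gsapi_exit(inst);
    gsapi_delete_instance(inst);
    return code;                      // 0 means success
}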
I should point out that there are other companies offering commercial licences for PostScript interpreters which are capable of creating PDF files and/or rendering PostScript. Adobe is the obvious one; there is also Global Graphics.
These days there are not many players left in this field. If you want to handle PostScript and the AGPL or similar licences won't suit you, you will need to go commercial.

Related

How to package the assets of a game and allow only the engine to be able to read?

I'm developing an engine for a 2D game in C++ and for a few days I've been looking for a way to protect the images and audio of my future game. I know there is no 100% protection and that someone will always be able to open these files, but I mean the regular user who has just installed the game: prevent them from modifying the sprites, changing the sounds, or overwriting the XML files that hold the game map data.
I downloaded some games made in Unity and noticed that a .assets extension is used; Diablo 2 uses .ma0, .mpq and .data, FEZ uses .pak, and Super Meat Boy only a .tp file. In other words, you cannot open and edit any of these files in a text editor or unzip them with WinRAR, so they offer a minimum level of protection. How is this done? Do I have to create my own binary file format, or is there a program that makes this easier?
You can't.
That "minimum level of protection" is even more minimal than you think. You can open those files in a hex editor and hack away at them. This activity is something that has been commonplace for many decades.
You can encrypt the data, but since the key must be stored in your application and the user has a copy of your application, that can be extracted and/or changed too.
You can add a digital signature to prevent people from modifying the assets then using them ("modding") but, again, this can be altered in your application.
You can obfuscate the assets by shipping them in a proprietary format, but this is usually done purely for functional reasons because, again, someone will reverse engineer them.
Once a thing is on someone else's computer, you have lost control of that thing.
There are actually multiple questions here:
How can users be prevented from reading game assets?
How can users be prevented from manipulating game assets?
What file format can be used to store game assets?
If you're using an existing engine, it probably has some support for this, and if it is sufficient for your purposes you only need to learn how to use it.
If you need to roll your own, you need to define your requirements clearly and pick a solution that fulfills them. For asset storage, a ZIP-based format is probably the easiest to handle; practically every language has some form of support for it. To protect integrity, you can use cryptographic algorithms: digital signatures (or MACs) to detect tampering, and encryption to prevent reading. These will probably slow down the opening of assets a little, but in most cases that should be acceptable.
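As a rough illustration of the tamper-detection part only, here is a sketch that checks an asset file against an HMAC using OpenSSL. The key, paths and helper function are invented for the example, and, as stressed above, a key embedded in the binary only deters casual editing.
// Sketch: detect tampering by checking an HMAC over an asset file (OpenSSL).
#include <openssl/evp.h>
#include <openssl/hmac.h>
#include <algorithm>
#include <cstdio>
#include <string>
#include <vector>

static std::vector<unsigned char> read_file(const char* path) {
    std::vector<unsigned char> data;
    if (FILE* f = std::fopen(path, "rb")) {
        unsigned char buf[4096];
        size_t n;
        while ((n = std::fread(buf, 1, sizeof buf, f)) > 0)
            data.insert(data.end(), buf, buf + n);
        std::fclose(f);
    }
    return data;
}

bool asset_is_untampered(const char* asset_path,
                         const std::vector<unsigned char>& expected_mac) {
    static const std::string key = "not-really-a-secret";   // ships inside the game binary
    std::vector<unsigned char> data = read_file(asset_path);

    unsigned char mac[EVP_MAX_MD_SIZE];
    unsigned int mac_len = 0;
    HMAC(EVP_sha256(), key.data(), (int)key.size(),
         data.data(), data.size(), mac, &mac_len);

    return expected_mac.size() == mac_len &&
           std::equal(expected_mac.begin(), expected_mac.end(), mac);
}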

"Best" Input File Formats for C++? [closed]

Closed. This question is opinion-based and is not currently accepting answers. Closed 8 years ago.
I am starting work on a new piece of software that will need some robust and expandable file I/O. There are a lot of formats out there - XML, JSON, INI, etc. - but each has its pluses and minuses, so I thought I would ask for some community input.
Here are some rough requirements:
The format is a "standard"... I don't want to reinvent the wheel if I don't have to. It doesn't have to be a formal IEEE standard, but it should be something you could Google and get information on as a new user, and it may have some support tools (editors) beyond vi. (Though the software's users will generally be computer savvy and happy to use vi.)
Easily integrates with C++. I don't want to have to pull along a 100mb library and three different compilers to get it up and running.
Supports tabular input (2d, n-dimensional)
Supports POD types
Can expand as more inputs are required, binds well to variables, etc.
Parsing speed is not terribly important
Ideally, as easy to write (reflect) as it is to read
Works well on Windows and Linux
Supports compositing (one file referencing another file to read, and so on.)
Human Readable
In a perfect world, I would use a header-only library or some clean STL implementation, but I'm fine with leveraging Boost or some small external library if it works well.
So, what are your thoughts on various formats? Drawbacks? Advantages?
Edit
Options to consider? Anything else to add?
XML
YAML
SQLite
Google Protocol Buffers
Boost Serialization
INI
JSON
There is one excellent format that meets all your criteria:
SQLite!
Please read the article about using SQLite as an application file format. Also, watch the Google Tech Talk by D. Richard Hipp (the SQLite author) on this very topic.
Now, let's see how SQLite meets your requirements:
The format is a "standard"
SQLite has become the format of choice in most mobile environments and in many desktop apps (Firefox, Thunderbird, Google Chrome, Adobe Reader, you name it).
Easily integrates with C++
SQLite has a standard C interface, which consists of just one source file and one header file. There are C++ wrappers too.
Supports tabular input (2d, n-dimensional)
An SQLite table is as tabular as you could possibly imagine. To represent, say, 3-dimensional data, create a table with columns x, y, z, value and store your data as a set of rows like this (see the code sketch at the end of this answer):
x1,y1,z1,value1
x2,y2,z2,value2
...
Supports POD types
I assume by POD you meant Plain Old Data, or BLOB. SQLite lets you store BLOB fields as is.
Can expand as more inputs are required, binds well to variables
This is where it really shines.
Parsing speed is not terribly important
But SQLite speed is superb. In fact, there is essentially no parsing step at all; the data is read directly from the file.
Ideally, as easy to write (reflect) as it is to read
Just use INSERT to write and SELECT to read - what could be easier?
Works well on Windows and Linux
You bet, and all other platforms as well.
Supports compositing (one file referencing another file to read)
You can ATTACH one database to another.
Human Readable
Not in binary, but there are many excellent SQLite browsers/editors out there. I like SQLite Expert Personal on Windows and sqliteman on Linux. There is also SQLite editor plugin for Firefox.
There are other advantages that SQLite gives you for free:
Data is indexable, which makes it very fast to search. You just cannot do this with XML, JSON or any other text-only format.
Data can be edited partially, even when the amount of data is very large. You do not have to rewrite a few gigabytes just to edit one value.
SQLite is fully transactional: it guarantees that your data is consistent at all times. Even if your application (or the whole computer) crashes, your data will automatically be restored to the last known consistent state the next time you connect to the database.
SQLite stores your data verbatim: you do not need to worry about escaping awkward characters in your data (including zero bytes embedded in your strings). Simply always use prepared statements; that is all it takes to make it transparent. This can be a big and annoying problem when dealing with text data formats, XML in particular.
SQLite stores all strings in Unicode: UTF-8 (the default) or UTF-16. In other words, you do not need to worry about text encodings or international support for your data format.
SQLite allows you to process data in small chunks (row by row, in fact), so it works well in low-memory conditions. This can be a problem for text-based formats, because they often need to load all the text into memory to parse it. Granted, there are a few efficient stream-based XML parsers out there, but in general any XML parser will be quite memory-hungry compared to SQLite.
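To make the points above concrete, here is a minimal sketch using the SQLite C API with the x,y,z,value layout described earlier: create the table, insert a row with a prepared statement (so no escaping is needed), and read the data back row by row. The database file name is illustrative and error handling is omitted for brevity.
// Sketch: x,y,z,value storage with the SQLite C API.
#include <sqlite3.h>
#include <cstdio>

int main() {
    sqlite3* db = nullptr;
    sqlite3_open("inputs.db", &db);

    sqlite3_exec(db,
        "CREATE TABLE IF NOT EXISTS grid (x REAL, y REAL, z REAL, value REAL)",
        nullptr, nullptr, nullptr);

    // Prepared statement: values are bound as-is, no escaping worries.
    sqlite3_stmt* ins = nullptr;
    sqlite3_prepare_v2(db, "INSERT INTO grid VALUES (?,?,?,?)", -1, &ins, nullptr);
    sqlite3_bind_double(ins, 1, 1.0);
    sqlite3_bind_double(ins, 2, 2.0);
    sqlite3_bind_double(ins, 3, 3.0);
    sqlite3_bind_double(ins, 4, 42.0);
    sqlite3_step(ins);
    sqlite3_finalize(ins);

    // Read it back row by row - no need to load the whole file into memory.
    sqlite3_stmt* sel = nullptr;
    sqlite3_prepare_v2(db, "SELECT x, y, z, value FROM grid", -1, &sel, nullptr);
    while (sqlite3_step(sel) == SQLITE_ROW) {
        std::printf("%f %f %f -> %f\n",
                    sqlite3_column_double(sel, 0), sqlite3_column_double(sel, 1),
                    sqlite3_column_double(sel, 2), sqlite3_column_double(sel, 3));
    }
    sqlite3_finalize(sel);
    sqlite3_close(db);
}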
Having worked quite a bit with both XML and json, here's my rather subjective opinion of both as extendable serialization formats:
The format is a "standard": Yes for both
Easily integrates with C++: Yes for both. In each case you'll probably wind up with some kind of library to handle it. On Linux, libxml2 is a standard, and libxml++ is a C++ wrapper for it; you should be able to get both of those from your distro's package manager. It will take some small effort to get those working on Windows. There appears to be some support in Boost for json, but I haven't used it; I've always dealt with json using libraries. Really, the library route is not very onerous for either.
Supports tabular input (2d, n-dimensional): Yes for both
Supports POD types: Yes for both
Can expand as more inputs are required: Yes for both - that's one big advantage to both of them.
Binds well to variables: If what you mean is some way inside the file itself to say "This piece of data must be automatically deserialized into this variable in my program", then no for both.
As easy to write (reflect) as it is to read: Depends on the library you use, but in my experience yes for both. (You can actually do a tolerable job of writing json using printf().)
Works well on Windows and Linux: Yes for both, and ditto Mac OS X for that matter.
Supports one file referencing another file to read: If you mean something akin to a C #include, then XML has some ability to do this (e.g. document entities), while json doesn't.
Human readable: Both are typically written in UTF-8, and permit line breaks and indentation, and thus can be human-readable. However, I've just been working with a 479 KB XML file that's all on one line, so I had to run it through a prettyprinter to make sense of it. json can also be pretty unreadable, but in my experience is often formatted better than XML.
When starting new projects, I generally prefer json; it's more compact and more human-readable. The main reason I might select XML over json would be if I were worried about receiving badly-formed documents, since XML supports automated document format validation, while you have to write your own validation code with json.
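As a small illustration of the remark above that you can do a tolerable job of writing json with printf(): here is a sketch that emits a tiny document. The field names are invented, and a real program would still want proper string escaping and a library for reading.
// Sketch: writing a small json document with printf().
#include <cstdio>

int main() {
    const char* name = "run42";
    const double table[2][3] = { {1.0, 2.0, 3.0}, {4.0, 5.0, 6.0} };

    std::printf("{\n  \"name\": \"%s\",\n  \"table\": [\n", name);
    for (int r = 0; r < 2; ++r)
        std::printf("    [%g, %g, %g]%s\n",
                    table[r][0], table[r][1], table[r][2], r == 0 ? "," : "");
    std::printf("  ]\n}\n");
}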
Check out Google Protocol Buffers. This handles most of your requirements.
From their documentation, the high level steps are:
Define message formats in a .proto file.
Use the protocol buffer compiler.
Use the C++ protocol buffer API to write and read messages.
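As a sketch of step 3, assume a hypothetical settings.proto compiled with protoc that defines: message Settings { string name = 1; repeated double values = 2; } (the message and field names are made up). The generated C++ class is then used like this:
// Sketch: writing and reading a protobuf message via the generated C++ API.
#include <fstream>
#include "settings.pb.h"   // generated by protoc from the hypothetical settings.proto

int main() {
    Settings s;                      // generated message class
    s.set_name("run42");
    s.add_values(1.0);
    s.add_values(2.5);

    std::ofstream out("settings.bin", std::ios::binary);
    s.SerializeToOstream(&out);      // write
    out.close();

    Settings loaded;
    std::ifstream in("settings.bin", std::ios::binary);
    loaded.ParseFromIstream(&in);    // read it back
}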
For my purposes, I think the way to go is XML.
The format is a standard, but allows for modification and flexibility for the schema to change as the program requirements evolve.
There are several library options. Some are larger (Xerces-C) some are smaller (ezxml), but there are many options, so we won't be locked in to a single provider or very specific solution.
It can support tabular input (2D, n-dimensional). This requires more parsing work on "our" end, and is likely the weakest point for XML.
Supports POD types: Absolutely.
Can expand as more inputs are required, binds well to variables, etc. through schema modifications and parser modifications.
Parsing speed is not terribly important, so processing a text file or files is not an issue.
XML can be programmatically written just as easily as read.
Works well on Windows and Linux or any other OS that supports C and text files.
Supports compositing (one file referencing another file to read, and so on.)
Human Readable with many text editors (Sublime, vi, etc.) supporting syntax highlighting out of the box. Many web browsers display the data well.
Thanks for all the great feedback! I think if we wanted a purely binary solution, Protocol Buffers or boost::serialization is likely the way that we would go.

Recommended file formats and graphics libraries for importing 3D model into OpenGL/C++ project?

If you wanted to:
model an object in a 3D editor, e.g. Blender, Maya, etc
export the model into a data/file format
import the model into an project using OpenGL and C/C++
Then:
What file format would you recommend exporting to, i.e. in terms of simplicity, portability, and compatibility (i.e. common/popular)?
What graphics libraries would you recommend using to import the model into your OpenGL C/C++ project (i.e. preferably open source)?
Additionally, are there data/file formats that also capture animation, i.e. an "animated model" format, such that the animation could be modeled in the 3D editor and somehow invoked inside the code (e.g. accessibility to frames in the animation sequence or some other paradigm for saving/loading details related to changes over time)?
Generally speaking, I'm seeking simplicity as the priority; i.e., help getting started with combining my backgrounds in both art and computer science. I am a computer science major at UMass and, at the same time, sort of a "pseudo" double major in art, taking electives in graphic design at my university as well as classes at the Art Institute of Boston during summer/winter sessions. So, in other words, I am not a complete newbie, but at the same time I don't want options so overloaded with crazy advanced configuration that it becomes too difficult to get started with a basic demonstration project; i.e., as a first step to understanding how to bridge the gap between these two worlds, e.g. creating a program featuring a 3D character that the user can interact with.
COLLADA (I'm saying it with a "ah" at the end), and Assimp (ple as that).
And so, why COLLADA? Simple:
COLLADA is an open standard managed by Khronos (originally created by Sony). The beauty of an open standard format is that it's, well, a standard! You can be assured that the output of any standard-conformant product will be read correctly by another standard-conformant product. Sadly, some 3D modelling products aren't that careful about standards conformance for COLLADA, but rest assured: Blender, Maya, 3ds Max and all the other big names in 3D modelling have good support for the format.
COLLADA uses XML. This makes things much simpler if you're planning to create your own reader or writer.
ADDITIONAL: COLLADA is, I think, the only format that is not tied to a specific company. This is a very good thing for us, you know.
ADDITIONAL 2: It is known that COLLADA is slow to parse. That's true. But think of it: all the other non-binary formats (such as ASCII FBX) have the same issue. For your needs, COLLADA should suffice.
ADDITIONAL 3: COLLADA supports animations!
For the importer library, I highly recommend Assimp. Why?
Assimp has support for just about any popular format you can imagine. It has a unified interface for all formats, so switching to another format is less of a pain.
Assimp is extensible. You can thus add an importer for your own proprietary format and still not modify the rest of your code.
ADDITIONAL 4: Assimp is open source! Let's support open source software!
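A minimal sketch of what an Assimp import looks like in C++, walking the vertices so they are ready to copy into an OpenGL vertex buffer (the model file name is illustrative):
// Sketch: load a model with Assimp and iterate its meshes.
#include <assimp/Importer.hpp>
#include <assimp/scene.h>
#include <assimp/postprocess.h>
#include <cstdio>

int main() {
    Assimp::Importer importer;
    const aiScene* scene = importer.ReadFile(
        "character.dae",                       // COLLADA, OBJ, FBX, ...
        aiProcess_Triangulate | aiProcess_GenSmoothNormals);
    if (!scene) {
        std::printf("import failed: %s\n", importer.GetErrorString());
        return 1;
    }
    for (unsigned m = 0; m < scene->mNumMeshes; ++m) {
        const aiMesh* mesh = scene->mMeshes[m];
        std::printf("mesh %u: %u vertices\n", m, mesh->mNumVertices);
        for (unsigned v = 0; v < mesh->mNumVertices; ++v) {
            const aiVector3D& p = mesh->mVertices[v];
            // p.x, p.y, p.z would go into your vertex buffer here
            (void)p;
        }
    }
}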
First, here you can read about suggested model-loading libraries. Assimp is really good and supports many formats. As for the preferred format: I wouldn't recommend COLLADA, because it is an XML (text) based format, which is slow to parse. The OBJ format is also widespread but suffers from the same problem; it is still good if you want to write your own parser, as its structure is very simple. Instead I would suggest 3DS, which is binary, although it doesn't support animations. The most popular format today that supports both static meshes and animation is FBX. You can download the FBX SDK for free from Autodesk and connect it to your engine. The reason I would choose FBX is that both the SDK and the format are really robust: for example, in FBX you can embed not just geometry and animation but also scene objects such as lights, cameras, etc. The Autodesk docs are very nice too.
Hope it helps.
I would recommend using your own custom format that is basically just a binary dump of the vertex buffers and index buffers used in your program. (Using D3D terms there; I know OpenGL has the same concepts but I can't remember whether it uses different names.)
I would then write a separate program, using Assimp, that takes pretty much any format and writes the data out in your custom format. You can then use COLLADA or whatever to store your actual source models, without the complexity and slowness of loading that format at run time.
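A rough sketch of what such a custom binary dump could look like; the header layout, magic number and struct fields are made up for illustration:
// Sketch: write and read a raw vertex/index dump with a tiny header.
#include <cstdint>
#include <cstdio>
#include <vector>

struct Vertex { float px, py, pz, nx, ny, nz, u, v; };
struct MeshHeader { uint32_t magic, vertexCount, indexCount; };

void write_mesh(const char* path,
                const std::vector<Vertex>& verts,
                const std::vector<uint32_t>& indices) {
    MeshHeader h{0x4D455348u, (uint32_t)verts.size(), (uint32_t)indices.size()};
    FILE* f = std::fopen(path, "wb");
    std::fwrite(&h, sizeof h, 1, f);
    std::fwrite(verts.data(), sizeof(Vertex), verts.size(), f);
    std::fwrite(indices.data(), sizeof(uint32_t), indices.size(), f);
    std::fclose(f);
}

bool read_mesh(const char* path,
               std::vector<Vertex>& verts,
               std::vector<uint32_t>& indices) {
    FILE* f = std::fopen(path, "rb");
    if (!f) return false;
    MeshHeader h{};
    std::fread(&h, sizeof h, 1, f);
    verts.resize(h.vertexCount);
    indices.resize(h.indexCount);
    std::fread(verts.data(), sizeof(Vertex), verts.size(), f);
    std::fread(indices.data(), sizeof(uint32_t), indices.size(), f);
    std::fclose(f);
    return h.magic == 0x4D455348u;   // "MESH"
}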

Is there such a thing like a Printer-Markup-Language

I like to print a document. The content of the document are tables and text with different colors. Does a lightwight printer-file-format exist, which can be used like a template?
PS, PDF and DOC files are, in my opinion, too heavy to parse. Is there an XML or YAML file format which supports:
Easy creation (maybe with a WYSIWYG-Editor)
Parsing and manipulation with Library-Support
Easy sending to the printer (maybe with Library-Support)
Or do I have to do it the usual way and paint within a CDC?
I noticed you’re using MFC (so, Windows). In that case the answer is a qualified yes. In recent versions of Windows, Microsoft offers the XPS Document API which lets you create and manipulate a PDF-like document using XML, which can then be printed using the XPS Print API.
(For earlier versions of Windows that don’t support this API, you could try to deal with the XPS file format directly, but that is probably a lot harder than using CDC. Even with the API you will be working at a fairly low level.)
End users can generate XPS documents using the XPS print driver that is available for free from Microsoft (and bundled with certain MS products—they probably already have it on their system).
There is no universal language that is supported across all (or even many) printers. While PCL and PS are the most used, there are also printers which only work with specific printer drivers because they only support a proprietary data format (often pre-rendered on the client).
However, you could use XSL-FO to create documents which can then be rendered to a printer driver using library support.
I think something like TeX or LaTeX (or even troff or groff) may meet your needs. Google them and see.
There are also libraries to render documents for print from HTML source. Look at http://libharu.sourceforge.net/ for example. This outputs a printer-ready .PDF
I think that PostScript is a really good choice for that.
It is actually a very simple language, and it should be very easy to parse because it is stack-oriented. Also, most printers support it, and even if you have no printer support you can use Ghostscript to convert it to many different formats (consider GS a "virtual PS-supporting printer").
Finally there are a lot of books and tutorials for the language.
About the parsing: you can actually define new variables and functions in PS, so maybe your problem can be solved (almost) entirely in PS.
HTML + CSS can be printed -- properly. CSS was designed to support this with the media attribute to specify that your CSS is for printer layout, not for screen layout. Tools like PRINCE (free + commercial versions) exist to render this for printing.
I think PostScript is the markup language used by printers. I read this somewhere, so correct me if PostScript is now outdated.
http://en.wikipedia.org/wiki/PostScript
For a more powerful suite you can use LaTeX. It gives you the option of creating templates into which you can just drop the text.
On a more GUI friendly note, MS-Word and other word processors have templates. The issue is they are not of a common standard or markup.
You can also use HTML to render stuff in a common markup but it will not be very printer friendly.

How do I write a Perl script to filter out digital pictures that have been doctored?

Last night before going to bed, I browsed through the Scalar Data section of Learning Perl again and came across the following sentence:
the ability to have any character in a string means you can create, scan, and manipulate raw binary data as strings.
An idea immediately hit me that I could actually let Perl scan the pictures that I have stored on my hard disk to check if they contain the string Adobe. It seems by doing so, I can tell which of them have been photoshopped. So I tried to implement the idea and came up with the following code:
#!perl
use autodie;
use strict;
use warnings;
{
    local $/ = "\n\n";       # read in chunks separated by blank lines
    my $dir = 'f:/TestPix/';
    my @pix = glob "$dir/*";
    foreach my $file (@pix) {
        open my $pic, '<', $file;
        while (<$pic>) {
            if (/Adobe/) {
                print "$file\n";
            }
        }
    }
}
Excitingly, the code actually seems to work, and it does the job of filtering out the pictures that have been photoshopped. The problem is that many pictures are edited by other utilities, so I'm kind of stuck. Is there some simple but universal method to tell whether a digital picture has been edited, something like
if (!= /the original format/) {...}
Or do we simply have to add more conditions? like
if (/Adobe/|/ACDSee/|/some other picture editors/)
Any ideas on this? Or am I oversimplifying due to my miserably limited programming knowledge?
Thanks, as always, for any guidance.
Your best bet in Perl is probably ExifTool. It gives you access to whatever non-image information is embedded in the image. However, as others have said, it's possible to strip this information out, of course.
I'm not going to say there is absolutely no way to detect alterations in an image, but the problem is extremely difficult.
The only person I know of who claims to have an answer is Dr. Neal Krawetz, who claims that digitally altered parts of an image will have different compression error rates from the original portions. He claims that re-saving a JPEG at different quality levels will highlight these differences.
I have not found this to be the case, in my investigations, but perhaps you might have better results.
No. There is no functional distinction between a perfectly edited image and one which was that way from the start; it's all just a bag of pixels in the end, and any metadata can be removed or forged at will.
The name of the graphics program used to edit the image is not part of the image data itself but of something called metadata, which may be stored in the image file but, as others have noted, is neither required (some programs may not store it, and some let you choose not to) nor reliable: if you forged an image, you might have forged the metadata as well.
So the answer to your question is: no, there is no way to universally tell whether the picture was edited or not, although some image editing software may write its signature into the image file, and it may be left there through carelessness on the part of the person doing the editing.
If you're inclined to learn more about image processing in Perl, you could take a look at some of the excellent modules CPAN has to offer:
Image::Magick - read, manipulate and write of a large number of image file formats
GD - create colour drawings using a large number of graphics primitives, and emit the drawings in various formats.
GD::Graph - create charts
GD::Graph3d - create 3D Graphs with GD and GD::Graph
However, there are other utilities available for identifying various image formats. It's more of a question for Super User, but for various unix distros you can use file to identify many different types of files, and for MacOSX, Graphic Converter has never let me down. (It was even able to open the bizarre multi-file X-ray of my cat's shattered pelvis that I got on a disc from the vet.)
How would you know what the original format was? I'm pretty sure there's no guaranteed way to tell if an image has been modified.
I can just open the file (with my favourite programming language and filesystem API) and just write whatever I want into that file willy-nilly. As long as I don't screw something up with the file format, you'd never know it happened.
Heck, I could print the image out and then scan it back in; how would you tell it from an original?
As others have stated, there is no way to know whether the image was doctored. I'm guessing what you basically want to know is the difference between a realistic photograph and one that has been enhanced or modified.
There's always the option of running some extremely complex image-recognition algorithm that analyzes every pixel in your image and does some very complicated work to determine whether the image was doctored. Such a solution would probably involve AI trained on millions of photos, both doctored and untouched, learning from them. However, this is more of a theoretical solution than a practical one - you would probably only see it in movies. It would be extremely complex to develop and would probably take years, and even if you did get something like this working, it probably still wouldn't be correct 100% of the time. I'm guessing AI technology isn't at that level yet and may not be for a while.
A not-commonly-known feature of exiftool allows you to recognize the originating software through an analysis of the JPEG quantization tables (without relying on image metadata). It recognizes tables written by many applications. Note that some cameras may use the same quantization tables as some applications, so this isn't a 100% solution, but it is worth looking into. Here is an example of exiftool run on two images; the first was edited by Photoshop.
> exiftool -jpegdigest a.jpg b.jpg
======== a.jpg
JPEG Digest : Adobe Photoshop, Quality 10
======== b.jpg
JPEG Digest : Canon EOS 30D/40D/50D/300D, Normal
2 image files read
This will work even if the metadata has been removed.
There is existing software out there which uses various techniques (compression artifacting, comparison to signature profiles in a database of cameras, etc.) to analyze the actual image data for evidence of alteration. If you have access to such software and the software available to you provides an API for external access to these analysis functions, then there's a decent chance that a Perl module exists which will interface with that API and, if no such module exists, it could probably be created rather quickly.
In theory, it would also be possible to implement the image analysis code directly in native Perl, but I'm not aware of anyone having done so and I expect that you'd be better off writing something that low-level and processor-intensive in a fully-compiled language (e.g., C/C++) rather than in Perl.
http://www.impulseadventure.com/photo/jpeg-snoop.html is a tool that does the job reasonably well.
If there has been any cloning, there is sometimes a variation in pixel density or concentration that shows up on manual inspection: compared with the natural variation of pixels in a scanned image, a Photoshop-cloned area will have an unnaturally even pixel density.