Room-saving alternative for QR-Codes? - c++

I am looking for a way to encode 100 bytes on paper and hope to find a more room-saving way to do this than QR codes.
Now this may sound a little strange, as the information needs room, but e.g. something wider and less tall would be cool.
Any suggestions?
(Also, C++ libraries would be nice.)
EDIT: Keep in mind I need to be able to scan it again. Thanks. :)

There are loads of different types of barcodes out there - http://en.wikipedia.org/wiki/Barcode - pick any one.

Why not just print the data as a base64 string:
Base64
There should be plenty of freely available libraries to handle the conversion, and each 100-byte piece of data would come out as 136 characters (base64 turns every 3 bytes into 4 characters). You could use as small a font as you liked, and it fits very nicely with your wider-and-less-tall requirement.
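For illustration, here's a minimal, self-contained C++ sketch of the encoding step (any base64 library will do the same job; the function name is just an example):

#include <cstdint>
#include <string>
#include <vector>

// Minimal base64 encoder: every 3 input bytes become 4 printable characters,
// so 100 bytes come out as 136 characters (including the '=' padding).
std::string toBase64(const std::vector<uint8_t>& data) {
    static const char tbl[] =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
    std::string out;
    for (size_t i = 0; i < data.size(); i += 3) {
        uint32_t n = static_cast<uint32_t>(data[i]) << 16;
        if (i + 1 < data.size()) n |= static_cast<uint32_t>(data[i + 1]) << 8;
        if (i + 2 < data.size()) n |= data[i + 2];
        out += tbl[(n >> 18) & 63];
        out += tbl[(n >> 12) & 63];
        out += (i + 1 < data.size()) ? tbl[(n >> 6) & 63] : '=';
        out += (i + 2 < data.size()) ? tbl[n & 63] : '=';
    }
    return out;
}

You then print the returned string in whatever font and line width suits your layout.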

There is software out there that prints source code as tiny dots at 600 dpi and is then able to convert it back. Maybe you could do that. (But it's pretty much just printing the QR code smaller.)

Related

How to add sound effects to PCM buffered audio in C++

I have an int16_t[] buffer with PCM raw audio data and I want to apply some effects (like echo, reverb, gain...) into it.
I thought that SoX or something similar could do the trick for me, but SoX only works with files, and other libraries that support adding sound effects seem to apply the effects only when the sound is played. So my problem is that I want to apply the effects to the samples in my buffer without playing them.
I have never worked with audio, but reading about PCM data I have learned that I can apply gain multiplying each sample value, for example. But I'm looking for any library or relatively easy algorithms that I can use directly in my buffer to get the sound effects applied.
I'm sure there are a lot of solutions to my problem out there if you know what to look for, but it's my first time with audio "processing" and I'm lost, as you can see.
For everyone like me, interested in learning DSP related to audio processing with C++ I want to share my little research results and opinion, and perhaps save you some time :)
After trying several DSP libraries, I finally found The Synthesis ToolKit in C++ (STK), an open-source library that offers easy and clear interfaces and easy-to-understand code that you can dive into to learn about various basic DSP algorithms.
So I recommend that anyone who is starting out and has no previous experience take a look at this library.
Your int16_t[] buffer contains a sequence of samples. They represent instantaneous amplitude levels. Think of them as the voltage to apply to the speaker at the corresponding instant in time. They are signed numbers with values in the range [-32768, 32767]. A stream of constant zeros means silence. A stream of constant -32000 (for example) also means silence, but it will eventually burn your speaker coil. The position in the array represents time, and the value of each sample represents voltage.
If you want to mix two sample streams together, for example to apply a chirp, you get yourself a sample stream with the chirp in it (record a bird or something). You then add the two sounds sample by sample.
You can do a super-cheesy reverb effect by taking your original sound buffer, lowering its volume (perhaps by dividing all the samples by a constant), and adding it back to your original stream, but shifting the samples by a tenth of a second's worth of array position.
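As a rough illustration of these ideas (gain, mixing, and the delayed-and-attenuated echo), assuming a mono int16_t buffer; the function names and the 1/4 attenuation are only placeholders:

#include <algorithm>
#include <cstdint>
#include <vector>

// Clamp an intermediate result back into the int16_t range to avoid wrap-around.
static int16_t clamp16(int v) {
    return static_cast<int16_t>(std::max(-32768, std::min(32767, v)));
}

// Gain: scale every sample.
void applyGain(std::vector<int16_t>& buf, float gain) {
    for (auto& s : buf)
        s = clamp16(static_cast<int>(s * gain));
}

// Mix: add a second stream (e.g. the recorded chirp) sample by sample.
void mix(std::vector<int16_t>& dst, const std::vector<int16_t>& src) {
    for (size_t i = 0; i < dst.size() && i < src.size(); ++i)
        dst[i] = clamp16(dst[i] + src[i]);
}

// Super-cheesy reverb: add an attenuated copy of the original, delayed by a tenth of a second.
void cheesyReverb(std::vector<int16_t>& buf, int sampleRate) {
    const std::vector<int16_t> dry = buf;                  // unmodified copy
    const size_t delay = static_cast<size_t>(sampleRate) / 10;
    for (size_t i = delay; i < buf.size(); ++i)
        buf[i] = clamp16(buf[i] + dry[i - delay] / 4);
}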
Those are the basics of audio processing. Things get very sophisticated indeed. This field is known as "digital signal processing" and there are plenty of books on the subject.
You can do it either by hacking the audio buffer and trying some effects like gain and threshold with simple math operations, or by doing it properly with real DSP algorithms. If you wish to do it properly, I would recommend the Speex library. It's open source and well tested (www.speex.org). The code should compile with MSVC or on Linux with minimal effort. This is the fastest way to get good audio code working with proper DSP techniques. Your code would look something like the following; please read the AEC example.
#include <speex/speex_echo.h>        /* SpeexEchoState, speex_echo_* */
#include <speex/speex_preprocess.h>  /* SpeexPreprocessState, speex_preprocess_* */

st  = speex_echo_state_init(NN, TAIL);   /* NN = frame size, TAIL = filter length */
den = speex_preprocess_state_init(NN, sampleRate);
speex_echo_ctl(st, SPEEX_ECHO_SET_SAMPLING_RATE, &sampleRate);
speex_preprocess_ctl(den, SPEEX_PREPROCESS_SET_ECHO_STATE, st);
You need to set up the states; the testecho example code includes these.

Quick way to output a picture in C++

I'm coding a physical simulation on a 2D array and I'm now thinking that I could benefit from having graphical output. My system is an array of cells (up to 2048*2048 of them) taking binary values. Until now I used a prompt or text-file output of '+' and '-', but that's not efficient for a 2048*2048 lattice, and maybe outputting an image would be quicker and neater. Still, I've never done that. Ideally, a library allowing me to write blue and red pixels/cells while parsing my lattice would get the job done. Are there some pre-existing, not-too-long tools for doing this in C++?
Edit: I think that I just found what I was looking for: png++
After no more than 10 lines of coding I got the following output:
All I was asking for! Thank you for the suggestions ;)
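For reference, roughly what those few lines can look like, assuming png++'s png::image and set_pixel interface (this is a sketch, not the exact code used above):

#include <png++/png.hpp>
#include <string>
#include <vector>

// Write a binary lattice (lattice[y][x] is 0 or 1) as a PNG: 1 -> red, 0 -> blue.
void writeLattice(const std::vector<std::vector<int>>& lattice, const std::string& path) {
    const size_t h = lattice.size();
    const size_t w = h ? lattice[0].size() : 0;
    png::image<png::rgb_pixel> img(w, h);
    for (size_t y = 0; y < h; ++y)
        for (size_t x = 0; x < w; ++x)
            img.set_pixel(x, y, lattice[y][x] ? png::rgb_pixel(255, 0, 0)
                                              : png::rgb_pixel(0, 0, 255));
    img.write(path);
}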
You can easily get away without using an external imaging library by outputting a very simple format such as PGM or PBM. Refer to the Wikipedia page on Netpbm for more details, but you're essentially outputting all the values as either ASCII or binary numbers; then any image viewer or editor that supports PGM (many do) can open and display them. Even if you don't have such an editor, something like ImageMagick can easily convert it to PNG or any other more accessible format.
I've used this technique in the past to quickly visualize 2D data, as you're intending to.
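To make this concrete, here's a minimal sketch that writes the lattice as an ASCII PPM (the colour sibling of PGM/PBM in the Netpbm family), mapping 1 to red and 0 to blue; all names are illustrative:

#include <fstream>
#include <vector>

// Write a 2D binary lattice (lattice[y][x] is 0 or 1) as an ASCII PPM:
// value 1 -> red pixel, value 0 -> blue pixel.
void writePPM(const std::vector<std::vector<int>>& lattice, const char* path) {
    std::ofstream out(path);
    const size_t h = lattice.size();
    const size_t w = h ? lattice[0].size() : 0;
    out << "P3\n" << w << " " << h << "\n255\n";
    for (size_t y = 0; y < h; ++y) {
        for (size_t x = 0; x < w; ++x)
            out << (lattice[y][x] ? "255 0 0 " : "0 0 255 ");
        out << "\n";
    }
}

Anything that understands Netpbm formats can display the result, or ImageMagick can convert it (convert lattice.ppm lattice.png).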
C++ does not have native support for graphics. You need an additional C++ library.
Personally, I suggest you use Qt, which is free, powerful, and cross-platform.

String Compression Algorithm

I've been wanting to compress a string in C++ and display its compressed state on the console. I've been looking around for something and can't find anything appropriate so far. The closest thing I've come to finding is this one:
How to simply compress a C++ string with LZMA
However, I can't find the lzma.h header which works with it anywhere.
Basically, I am looking for a function like this:
std::string compressString(std::string uncompressedString) {
    // Compression code
    return compressedString;
}
The compression algorithm choice doesn't really matter. Anybody can help me out finding something like this? Thank you in advance! :)
Based on the pointers in the article I'm fairly certain they are using XZ Utils, so download that project and make the produced library available in your project.
However, two caveats:
dumping a compressed string to the console isn't very useful, as that string will contain all possible byte values, most of which aren't displayable on a console;
compressing short strings (actually, any small quantity of data) isn't what most general-purpose compressors were designed for; in many cases the compressed result will be as big as or even bigger than the input. However, I have no experience with LZMA on small quantities of data, so an extensive test with data representative of your use case will tell you whether it works as expected.
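Since the choice of algorithm doesn't really matter to you, here is a minimal sketch of the wrapper you describe, using zlib's one-shot API instead of LZMA (link with -lz):

#include <stdexcept>
#include <string>
#include <zlib.h>

// Compress a string with zlib; throws on failure.
std::string compressString(const std::string& uncompressed) {
    uLongf destLen = compressBound(uncompressed.size());
    std::string compressed(destLen, '\0');
    int rc = compress(reinterpret_cast<Bytef*>(&compressed[0]), &destLen,
                      reinterpret_cast<const Bytef*>(uncompressed.data()),
                      uncompressed.size());
    if (rc != Z_OK)
        throw std::runtime_error("compress() failed");
    compressed.resize(destLen);
    return compressed;
}

Keep the second caveat in mind, though: for a short input the output may well be longer than what you put in.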
One algorithm I've been playing with that gives good compression on small amounts of data (tested on data chunks sized 300-500 bytes) is range encoding.

Any good postscript drawing libraries?

I need to draw some pictures for my LaTeX documents, and I've found that hand-made PostScript seems to be a good fit (I want to do things programmatically, need math functions, etc.). I've also tried TikZ, but that just seemed overcomplicated and hard to use.
However, using plain standard PostScript is a bit painful since there aren't really any standard functions for drawing shapes (e.g. not even rectangles).
Is there any PostScript library that would include functions for common shapes and make life a bit easier? Seems to me this problem should be fairly common.
Or should I skip PostScript and move on to some superior system? Which one?
A few people and many PostScript drivers define their own set of procedures for drawing shapes. A PostScript driver may output the following shortcuts:
/bd{bind def} bind def
/cp{closepath}bd
/gs{gsave}bd
/gr{grestore}bd
/m{moveto}bd
/rm{rmoveto}bd
/l{lineto}bd
/rl{rlineto}bd
/s{stroke}bd
/f{fill}bd
/sf{gs s gr f}bd
/xx{exch}bd
/rect {4 2 roll m 1 index 0 rl 0 xx rl neg 0 rl cp} bd  % usage: x y width height rect
Then, a rectangle would be drawn like this:
0 0 100 100 rect sf
The cumbersomeness of this does make PostScript particularly hard to deal with. MetaPost may be a better fit if your drawings are programmatically/mathematically generated. MetaPost generates encapsulated PostScript (which you can include in your LaTeX document), and it is more suitable for drawing images with algebraic definitions.
I like using matplotlib. It can generate both PostScript and PDF directly, it's in Python, and it can also do pretty sophisticated plots (hence its name). If you want to hack PostScript directly you'll be able to use PSTricks in LaTeX, but you'll need to round-trip everything through dvips and then ps2pdf to make PDFs. Do you really want PostScript, or PDFs? I think that you want PDFs, right?
OK, I've decided that Asymptote is the best thing since sliced bread. Handles drawing both graphs and arbitrary figures really well, and has a vast number of extension modules (including MetaPost compatibility if you care about that). Additionally it typesets text using LaTeX which is just incredibly cool. As an added bonus it even outputs directly to PDF (or EPS).
I still think it's a bit sad there's no good libraries of routines for good ol' PostScript though.
I have used Asymptote (for graphs though) but I found it tiresome to learn yet another custom language. If you're familiar with Python, you can give PyX a try. Its feature set is similar to that of Asymptote. For example, it can also use LaTeX for typesetting text/math.
Another option is Enthought Enable, but that is probably less suited.
I've had good results constructing images directly in PostScript. One helpful convention I've found is to treat objects like glyphs in a font. So each object expects the currentpoint to be set at, say, the bottom left corner, and leaves the currentpoint at the bottom right. Then you can put them in an array and forall through it: each object leaves the currentpoint ready for the next one.
Generate SVG, then use something like iText and/or Inkscape to programmatically convert to PDF/PS. I built a publishing stack this way and it worked out really nice.
There are lots of PostScript libraries. Look here:
http://www.ericlindsay.com/computer/printing.htm
and here:
http://www.tinaja.com/post01.shtml
and here:
http://seit.unsw.adfa.edu.au/staff/sites/gfreeman/qs.html

How do I write a Perl script to filter out digital pictures that have been doctored?

Last night before going to bed, I browsed through the Scalar Data section of Learning Perl again and came across the following sentence:
the ability to have any character in a string means you can create, scan, and manipulate raw binary data as strings.
An idea immediately hit me that I could actually let Perl scan the pictures that I have stored on my hard disk to check if they contain the string Adobe. It seems by doing so, I can tell which of them have been photoshopped. So I tried to implement the idea and came up with the following code:
#!perl
use autodie;
use strict;
use warnings;

{
    local $/ = "\n\n";    # read in chunks separated by "\n\n" instead of line by line
    my $dir = 'f:/TestPix/';
    my @pix = glob "$dir/*";
    foreach my $file (@pix) {
        open my $pic, '<', $file;
        while (<$pic>) {
            if (/Adobe/) {
                print "$file\n";
            }
        }
    }
}
Excitingly, the code seems to really work, and it does the job of filtering out the pictures that have been photoshopped. The problem is that many pictures are edited by other utilities, so I think I'm kind of stuck there. Do we have some simple but universal method to tell whether a digital picture has been edited or not, something like
if (!= /the original format/) {...}
Or do we simply have to add more conditions? like
if (/Adobe/|/ACDSee/|/some other picture editors/)
Any ideas on this? Or am I oversimplifying due to my miserably limited programming knowledge?
Thanks, as always, for any guidance.
Your best bet in Perl is probably ExifTool. This gives you access to whatever non-image information is embedded into the image. However, as other people said, it's possible to strip this information out, of course.
I'm not going to say there is absolutely no way to detect alterations in an image, but the problem is extremely difficult.
The only person I know of who claims to have an answer is Dr. Neal Krawetz, who claims that digitally altered parts of an image will have different compression error rates from the original portions. He claims that re-saving a JPEG at different quality levels will highlight these differences.
I have not found this to be the case, in my investigations, but perhaps you might have better results.
No. There is no functional distinction between a perfectly edited image, and one which was the way it is from the start - it's all just a bag of pixels in the end, after all, and any other metadata you can remove or forge all you want.
The name of the graphics program used to edit the image is not part of the image data itself but of something called meta data - which may be stored in the image file but, as others have noted, is neither required (so some programs may not store it, some may allow you an option of not storing it) nor reliable - if you forged an image, you might have forged the meta data as well.
So the answer to your question is "no, there's no way to universally tell if the pic was edited or not", although some image-editing software may write its signature into the image file, and it'll be left there through carelessness of the person doing the editing.
If you're inclined to learn more about image processing in Perl, you could take a look at some of the excellent modules CPAN has to offer:
Image::Magick - read, manipulate and write of a large number of image file formats
GD - create colour drawings using a large number of graphics primitives, and emit the drawings in various formats.
GD::Graph - create charts
GD::Graph3d - create 3D Graphs with GD and GD::Graph
However, there are other utilities available for identifying various image formats. It's more of a question for Super User, but for various Unix distros you can use file to identify many different types of files, and for Mac OS X, Graphic Converter has never let me down. (It was even able to open the bizarre multi-file X-ray of my cat's shattered pelvis that I got on a disc from the vet.)
How would you know what the original format was? I'm pretty sure there's no guaranteed way to tell if an image has been modified.
I can just open the file (with my favourite programming language and filesystem API) and just write whatever I want into that file willy-nilly. As long as I don't screw something up with the file format, you'd never know it happened.
Heck, I could print the image out and then scan it back in; how would you tell it from an original?
As others have stated, there is no way to know if the image was doctored. I'm guessing what you basically want to know is the difference between a realistic photograph and one that has been enhanced or modified.
There's always the option of running some extremely complex image-recognition algorithm that would analyze every pixel in your image and do some very complicated work to determine whether the image was doctored or not. This solution would probably involve AI that examines millions of photos, both doctored and undoctored, and learns from them. However, this is more of a theoretical solution and isn't very practical... you would probably only see it in movies. It would be extremely complex to develop and would probably take years. And even if you did get something like this to work, it probably still wouldn't be correct 100% of the time. I'm guessing AI technology still isn't at that level, and it could take a while until it is.
A not-commonly-known feature of exiftool allows you to recognize the originating software through an analysis of the JPEG quantization tables (not relying on image metadata). It recognizes tables written by many applications. Note that some cameras may use the same quantization tables as some applications, so this isn't a 100% solution, but it is worth looking into. Here is an example of exiftool run on two images, the first of which was edited by Photoshop.
> exiftool -jpegdigest a.jpg b.jpg
======== a.jpg
JPEG Digest : Adobe Photoshop, Quality 10
======== b.jpg
JPEG Digest : Canon EOS 30D/40D/50D/300D, Normal
2 image files read
This will work even if the metadata has been removed.
There is existing software out there which uses various techniques (compression artifacting, comparison to signature profiles in a database of cameras, etc.) to analyze the actual image data for evidence of alteration. If you have access to such software and the software available to you provides an API for external access to these analysis functions, then there's a decent chance that a Perl module exists which will interface with that API and, if no such module exists, it could probably be created rather quickly.
In theory, it would also be possible to implement the image analysis code directly in native Perl, but I'm not aware of anyone having done so and I expect that you'd be better off writing something that low-level and processor-intensive in a fully-compiled language (e.g., C/C++) rather than in Perl.
http://www.impulseadventure.com/photo/jpeg-snoop.html
is a tool (JPEGsnoop) that does this job fairly well.
If there has been any cloning, there is a variation in pixel density or concentration which sometimes shows up upon manual inspection: a Photoshop-cloned area will have unusually even pixel density compared with the natural pixel variation of a scanned image.