Is tf.py_func allowed at online prediction time?
If yes any examples of how to use it?
Does the answer change if I need to install additional pip packages?
My use case: I work with text and need to do word stemming (using a Porter stemmer). I know how to do it in Python, but TensorFlow doesn't have ops for that. I would like to use the same text processing at training and prediction time, and thus I would like to encode it all into a TensorFlow graph.
https://www.tensorflow.org/api_docs/python/tf/py_func comes with known limitations and I would like to know if it will work during training and online prediction before I invest more time into it.
Thanks
Unfortunately, no. A py_func cannot be restored from a SavedModel. However, since your use case involves pre-processing, you can just invoke the py_func explicitly in all three (train, eval, serving) input functions. This won't work if the py_func sits in the middle of your graph, but for stemming it should work just fine.
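For illustration only, here is a rough TensorFlow 1.x-style sketch of that idea; porter_stem is just a stand-in for your own Python stemmer (for example NLTK's PorterStemmer), and the file name is made up:

import tensorflow as tf

def porter_stem(line):
    # plain Python; substitute a real Porter stemmer here
    return b" ".join(w.rstrip(b"ing") for w in line.split())

def stem(line):
    # wraps the Python function as an op; it lives in the input pipeline,
    # so nothing here has to be restored from a SavedModel
    return tf.py_func(porter_stem, [line], tf.string, stateful=False)

def train_input_fn():
    return tf.data.TextLineDataset("train.txt").map(stem)

The eval and serving input functions would call the same stem helper, so the text is processed identically in all three places.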
I'm trying to save a series of images (16 bit grayscale pgm) as video. The video has to be compressed. My program has to be independent of the codecs installed in the system.
My initial idea was to use OpenCV for this; unfortunately, it depends on the codecs installed in the system (unless I'm missing something).
I feel like there should be a way to compile an encoder (H.264 or similar would be perfect) into the program, or to redistribute it as a DLL with my program. I just can't find any good, up-to-date guidance or examples.
I've been swimming in the deep, vast ocean of AV encoding for a couple of days and would really appreciate it if someone could point me in the right direction.
Thanks.
As Ben suggests, it would be a good idea to use an established library in your code.
FFmpeg is probably the most widely used at the moment - it can be driven from the command line, through a 'wrapper' program, or the libraries it is built from (the libav* libraries) can be used directly.
I think the last case sounds like the one you want - you can find documentation here:
https://trac.ffmpeg.org/wiki/Using%20libav*
Note the comment about disambiguation at the start - this is important to understand, as the Libav project and FFmpeg's libav* libraries (which are what you want) are different things.
There are also some notes in this answer on how to build it into a program:
FFMpeg sample program
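For reference, if the command-line route turns out to be acceptable, a bundled ffmpeg binary can turn a numbered PGM sequence into H.264 with something along these lines (how best to map 16-bit grayscale onto an encoder pixel format is worth checking in the ffmpeg documentation):
ffmpeg -framerate 25 -i frame_%04d.pgm -c:v libx264 -pix_fmt yuv420p out.mp4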
I am currently creating a game. My game will use music from an MP3 file that the user supplies in order to make decisions about where to place things, how fast the level moves, etc. I am fairly new at this and have been reading up on the MP3 format. So far I have found all the frames in the MP3 file that I am using, but I don't really know where to go from here.
What I want to do is measure the frequencies of the music's sound wave at certain times (say, every second) and then, based on that frequency, do what I need to for the game. I don't know whether I should decode the MP3 - that looks like a lot of work and I don't want to do it if I don't have to - or whether I can just read the bytes in the frame and convert them without decoding anything.
I am developing this in C#, using the game engine FlatRedBall, and I am not using any libraries. I am also planning on selling this game, so I would like to avoid using other people's code if I can. Please, someone help me - I just need a direction to go from here. I know how to parse the header and calculate the frame length; I just don't know the next step toward what I want to do...
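As a minimal illustration of the frequency-measurement step itself (separate from how the MP3 gets decoded), the usual approach is a Fourier transform over each one-second window of decoded samples. A rough Python/numpy sketch, assuming you already have the PCM samples and the sample rate, looks like this:

import numpy as np

def dominant_frequencies(samples, rate):
    # samples: 1-D array of mono PCM samples; rate: sample rate in Hz
    freqs = []
    for start in range(0, len(samples) - rate, rate):      # one window per second
        window = samples[start:start + rate]
        spectrum = np.abs(np.fft.rfft(window))             # magnitude spectrum
        bins = np.fft.rfftfreq(len(window), d=1.0 / rate)  # bin -> Hz mapping
        freqs.append(bins[np.argmax(spectrum)])            # strongest frequency
    return freqs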
Convert your music to .ogg format which is free and use free library to play it.
Note: I was going to post this as a comment but it quickly grew too big. :)
Writing your own MP3 encoder/decoder is probably going to take a good amount of effort; effort which would probably be better spent on the game itself. Therefore, if possible, I would by all means try to use an open-source library.
That said, most good MP3 libraries are LGPL or GPL licensed. The LGPL ones can be used in a commercial setting as long as you link to them dynamically. The SDL Mixer library, as of version 1.2.12, supports MP3 and is under the more permissive zlib license, but since you mention C# I don't know whether stable and up-to-date bindings are available; and since your project isn't built on SDL to begin with, it might be hard to integrate.
Also, as @pro_metedor hinted, perhaps using a more open format could help with licensing issues. In general, OGG achieves better compression than MP3, which is a plus for things like download size and bandwidth/resource usage.
Just shop around for a while, and try to be a little flexible. I'm sure you'll find something nice! :)
I haven't found any server-side panorama-making software that stitches images or video. I would like an open-source alternative, but haven't found any. I just don't want to go through the hassle of developing all this on my own, and paid software is usually closed source and not very flexible.
I've seen some nifty panorama-from-video software on the iPhone and thought it would be easy to find on *nix systems, but with no luck.
Any help will be appreciated. Thanks in advance.
The only option I know of is to use panostart (which is part of Hugin) from whatever server-side language you are using.
See here for more information and other command line tools that do parts of the process more specifically.
panostart only works with images, so if you want it to work with video you would first have to extract frames with something like ffmpeg -i z.mov -f image2 export2/%d.png and then pass those images to panostart.
The other alternative, which requires more effort, is to write something that uses libpano13 and the FFmpeg libraries directly.
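A rough sketch of how that glue could look server-side, in Python: the ffmpeg part matches the command above, while the panostart arguments are an assumption, so check its man page before relying on them.

import glob
import subprocess

def stitch_from_video(video_path, workdir):
    # 1. dump the video to numbered frames with ffmpeg
    subprocess.check_call(
        ["ffmpeg", "-i", video_path, "-f", "image2", workdir + "/frame%d.png"])
    # 2. hand the frames to panostart (part of Hugin); the -o option for the
    #    output project file is assumed here - consult the panostart docs
    frames = sorted(glob.glob(workdir + "/frame*.png"))
    subprocess.check_call(["panostart", "-o", workdir + "/project.pto"] + frames)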
You can have a look at VideoStitch; a command-line tool is provided to automate the stitching.
Last night before going to bed, I browsed through the Scalar Data section of Learning Perl again and came across the following sentence:
the ability to have any character in a string means you can create, scan, and manipulate raw binary data as strings.
An idea immediately hit me that I could actually let Perl scan the pictures that I have stored on my hard disk to check if they contain the string Adobe. It seems by doing so, I can tell which of them have been photoshopped. So I tried to implement the idea and came up with the following code:
#!perl
use autodie;
use strict;
use warnings;
{
    local $/ = "\n\n";                    # read each file in paragraph-sized chunks
    my $dir = 'f:/TestPix/';
    my @pix = glob "$dir/*";
    foreach my $file (@pix) {
        open my $pic, '<:raw', $file;     # raw mode: we're scanning binary data
        while (<$pic>) {
            if (/Adobe/) {
                print "$file\n";
                last;                     # report each file only once
            }
        }
    }
}
Excitingly, the code really seems to work, and it does the job of filtering out the pictures that have been photoshopped. The problem is that many pictures are edited by other utilities, so I think I'm kind of stuck there. Do we have some simple but universal method to tell whether a digital picture has been edited or not, something like
if (!= /the original format/) {...}
Or do we simply have to add more conditions? like
if (/Adobe/|/ACDSee/|/some other picture editors/)
Any ideas on this? Or am I oversimplifying due to my miserably limited programming knowledge?
Thanks, as always, for any guidance.
Your best bet in Perl is probably ExifTool. This gives you access to whatever non-image information is embedded into the image. However, as other people said, it's possible to strip this information out, of course.
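For example, from the command line (the Perl module exposes the same tags programmatically):
exiftool -Software -CreatorTool picture.jpg
which prints the editing application recorded in the EXIF/XMP metadata of that picture, if there is one.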
I'm not going to say there is absolutely no way to detect alterations in an image, but the problem is extremely difficult.
The only person I know of who claims to have an answer is Dr. Neal Krawetz, who claims that digitally altered parts of an image will have different compression error rates from the original portions. He claims that re-saving a JPEG at different quality levels will highlight these differences.
I have not found this to be the case, in my investigations, but perhaps you might have better results.
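If you want to experiment with that re-save idea yourself, here is a very rough sketch using Pillow in Python; it is purely illustrative and, as noted above, the results are often inconclusive:

from PIL import Image, ImageChops

def error_levels(path, quality=90):
    # Re-save the image as JPEG at a known quality, then compare it with the
    # original; regions that differ more than their surroundings are the ones
    # Krawetz's technique flags as possibly altered.
    original = Image.open(path).convert("RGB")
    original.save("resaved.jpg", "JPEG", quality=quality)
    resaved = Image.open("resaved.jpg")
    return ImageChops.difference(original, resaved)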
No. There is no functional distinction between a perfectly edited image and one which was the way it is from the start - it's all just a bag of pixels in the end, after all, and any metadata can be removed or forged as you like.
The name of the graphics program used to edit the image is not part of the image data itself but of something called metadata, which may be stored in the image file but, as others have noted, is neither required (some programs may not store it, and some may offer an option not to store it) nor reliable - if you forged an image, you might have forged the metadata as well.
So the answer to your question is "no, there's no way to universally tell whether the pic was edited or not", although some image-editing software may write its signature into the image file and it may be left there through the carelessness of the person who edited it.
If you're inclined to learn more about image processing in Perl, you could take a look at some of the excellent modules CPAN has to offer:
Image::Magick - read, manipulate and write a large number of image file formats
GD - create colour drawings using a large number of graphics primitives, and emit the drawings in various formats.
GD::Graph - create charts
GD::Graph3d - create 3D Graphs with GD and GD::Graph
However, there are other utilities available for identifying various image formats. It's more of a question for Super User, but for various unix distros you can use file to identify many different types of files, and for MacOSX, Graphic Converter has never let me down. (It was even able to open the bizarre multi-file X-ray of my cat's shattered pelvis that I got on a disc from the vet.)
How would you know what the original format was? I'm pretty sure there's no guaranteed way to tell if an image has been modified.
I can just open the file (with my favourite programming language and filesystem API) and just write whatever I want into that file willy-nilly. As long as I don't screw something up with the file format, you'd never know it happened.
Heck, I could print the image out and then scan it back in; how would you tell it from an original?
As others have stated, there is no way to know whether the image was doctored. I'm guessing what you basically want to know is the difference between a realistic photograph and one that has been enhanced or modified.
There's always the option of running some extremely complex image-recognition algorithm that analyzes every pixel in your image and does some very complicated work to determine whether the image was doctored. This solution would probably involve AI that examines millions of photos, both doctored and untouched, and learns from them. However, this is more of a theoretical solution and isn't very practical - you would probably only see it in movies. It would be extremely complex to develop and would probably take years, and even if you did get something like this to work, it probably still wouldn't be correct 100% of the time. I'm guessing AI technology isn't at that level yet and may take a while to get there.
A not-commonly-known feature of exiftool lets you recognize the originating software through an analysis of the JPEG quantization tables (without relying on image metadata). It recognizes tables written by many applications. Note that some cameras may use the same quantization tables as some applications, so this isn't a 100% solution, but it is worth looking into. Here is an example of exiftool run on two images, the first of which was edited by Photoshop.
> exiftool -jpegdigest a.jpg b.jpg
======== a.jpg
JPEG Digest : Adobe Photoshop, Quality 10
======== b.jpg
JPEG Digest : Canon EOS 30D/40D/50D/300D, Normal
2 image files read
This will work even if the metadata has been removed.
There is existing software out there which uses various techniques (compression artifacting, comparison to signature profiles in a database of cameras, etc.) to analyze the actual image data for evidence of alteration. If you have access to such software and the software available to you provides an API for external access to these analysis functions, then there's a decent chance that a Perl module exists which will interface with that API and, if no such module exists, it could probably be created rather quickly.
In theory, it would also be possible to implement the image analysis code directly in native Perl, but I'm not aware of anyone having done so and I expect that you'd be better off writing something that low-level and processor-intensive in a fully-compiled language (e.g., C/C++) rather than in Perl.
http://www.impulseadventure.com/photo/jpeg-snoop.html
is a tool that does the job reasonably well.
If there has been any cloning, there is a variation in pixel density or concentration which sometimes shows up on manual inspection: a Photoshop-cloned area will have an unnaturally even pixel density (by which I mean the variation of pixels relative to a scanned image).