Google Inceptionism: obtain images by class - computer-vision

In the famous Google Inceptionism article,
http://googleresearch.blogspot.jp/2015/06/inceptionism-going-deeper-into-neural.html
they show images obtained for each class, such as banana or ant. I want to do the same for other datasets.
The article does describe how the images were obtained, but I feel that the explanation is insufficient.
There's related code here:
https://github.com/google/deepdream/blob/master/dream.ipynb
but what it does is produce a random dreamy image, rather than specifying a class and learning what that class looks like to the network, as shown in the article above.
Could anyone give a more concrete overview, or code/tutorial, on how to generate images for a specific class? (preferably assuming the Caffe framework)

I think this code is a good starting point to reproduce the images the Google team published. The procedure looks clear:
Start with a pure noise image and a class (say "cat")
Perform a forward pass and backpropagate the error wrt the imposed class label
Update the initial image with the gradient computed at the data layer
There are some tricks involved that can be found in the original paper.
It seems that the main difference is that the Google folks tried to get a more "realistic" image:
By itself, that doesn’t work very well, but it does if we impose a prior constraint that the image should have similar statistics to natural images, such as neighboring pixels needing to be correlated.
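A rough pycaffe sketch of that procedure, in case it helps. The GoogLeNet file names, the 'prob' layer name, the class index and the occasional Gaussian blur standing in for the natural-image prior are all my own assumptions, not the code behind the Google post:

import numpy as np
from scipy.ndimage import gaussian_filter
import caffe

# Placeholder paths -- point these at your own deploy prototxt / caffemodel.
# NB: the deploy prototxt needs "force_backward: true", otherwise the gradient
# never reaches the data blob.
net = caffe.Classifier('deploy.prototxt', 'bvlc_googlenet.caffemodel',
                       mean=np.float32([104.0, 116.0, 122.0]),
                       channel_swap=(2, 1, 0))

def class_visualization(net, class_idx, n_iter=200, step=1.5, blur_every=4, sigma=0.5):
    """Gradient ascent on a noise image to maximize the score of one class."""
    src = net.blobs['data']                      # input blob
    dst = net.blobs['prob']                      # class scores (name depends on the model)
    h, w = src.data.shape[-2:]
    img = np.random.normal(0, 10, (3, h, w))     # start from noise

    for i in range(n_iter):
        src.data[0] = img
        net.forward(end='prob')
        dst.diff[...] = 0
        dst.diff[0, class_idx] = 1.0             # impose the desired class label
        net.backward(start='prob')
        g = src.diff[0]
        img += step / (np.abs(g).mean() + 1e-8) * g   # normalized ascent step

        # crude stand-in for the "natural image statistics" prior:
        # blur now and then so neighbouring pixels stay correlated
        if blur_every and i % blur_every == 0:
            img = gaussian_filter(img, sigma=(0, sigma, sigma))
    return img

cat = class_visualization(net, class_idx=281)    # 281 is "tabby cat" in the usual ILSVRC indexing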

Related

Style transfer on a large image (in chunks?)

I am looking into various style transfer models and I've noticed that they all have limited resolution (when running on a Pixel 3, for example, I couldn't go beyond 1,024x1,024 without running out of memory).
I've seen a few apps (e.g. this app) which appear to do style transfer on images of up to ~10 MP. These apps also show a progress bar, which I guess means they don't just call a single TensorFlow "run" method on the entire image, as otherwise they wouldn't know how much had been processed.
I would guess they are using some sort of tiling, but naively splitting the image into 256x256 tiles produces inconsistent style (not just at the borders).
As this seems like an obvious problem, I tried to find publications about it but couldn't find any. Am I missing something?
Thanks!
I would guess people split the model into multiple sub-models (for VGG it is easy to do manually, e.g. by layer) and then use Keras's model.summary() (or benchmarks) to estimate the relative time each step takes, and use that to drive the progress bar. Such a split probably also saves memory, as TensorFlow Lite might not be clever enough to reuse the memory holding intermediate activations from lower layers once they are no longer needed.
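To make that concrete, here is a rough sketch assuming a linear-chain network like Keras's VGG19 (the cut points below are purely illustrative, and real on-device style-transfer models may branch, which needs more care): split the model into sub-models at chosen layers and run the pieces one after another, reporting progress in between. Relative parameter counts from model.summary(), or a one-off benchmark, would give the weights for the progress bar.

import numpy as np
import tensorflow as tf

base = tf.keras.applications.VGG19(include_top=False, weights='imagenet')

# Illustrative cut points -- in practice you would pick them so each piece
# takes roughly the same time.
cut_layers = {'block1_pool', 'block2_pool', 'block3_pool', 'block4_pool'}

def split_linear_model(model, cut_layers):
    """Split a linear-chain Keras model into sub-models at the named layers."""
    submodels = []
    layers = model.layers[1:]                      # skip the InputLayer
    seg_in = tf.keras.Input(shape=model.input_shape[1:])
    x = seg_in
    for layer in layers:
        x = layer(x)                               # reuse the pretrained layer (shared weights)
        if layer.name in cut_layers or layer is layers[-1]:
            submodels.append(tf.keras.Model(seg_in, x))
            seg_in = tf.keras.Input(shape=tuple(x.shape[1:]))
            x = seg_in
    return submodels

parts = split_linear_model(base, cut_layers)
image = np.random.rand(1, 512, 512, 3).astype('float32')   # stand-in input

activations = image
for i, part in enumerate(parts):
    activations = part.predict(activations)        # one chunk of work
    print(f'progress: {100 * (i + 1) // len(parts)}%')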

Classification with machine learning and a small database

I want to create a valve detection and classification system like in this video: https://www.youtube.com/watch?v=VY92fqmSdfA
to detect the open, closed, and intermediate positions of the valve.
I have done some research and found some methods to solve this problem, but I have some conditions to respect:
Condition 1: Use machine learning in the application; I can't use simple methods like template matching.
Condition 2: Use a small database (minimum 10 images per class, maximum 40 images per class).
Condition 3: Detect the position of the valve even if the camera position changes, so I can't rely only on color to detect the valve handle.
I want to use HOG (histogram of oriented gradients) + SVM/ANN, but HOG needs a lot of images to train the SVM/ANN.
Can I solve this problem while respecting these conditions?
As we know, the most important thing ML approaches need to work properly is data, so I'd say your first and second conditions conflict with each other. In addition, your third condition adds more complexity to the problem. You could address it by including more data from different angles and illumination conditions, but again, that conflicts with condition 2.
Even so, if you'd like to follow the ML path, I'd recommend using a pre-trained model, strong data augmentation and, maybe, an ensemble of models to improve detection. As the problem is not that hard, it should work.
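Here is a rough sketch of that recipe in Keras; the directory name, the 224x224 input size, the three classes (open / intermediate / closed) and the MobileNetV2 backbone are all placeholders I picked for illustration, not anything from the question:

import tensorflow as tf

num_classes = 3   # open / intermediate / closed

# Placeholder directory layout: valves/train/<class_name>/*.jpg
train_ds = tf.keras.utils.image_dataset_from_directory(
    'valves/train', image_size=(224, 224), batch_size=8)

# aggressive augmentation to stretch 10-40 images per class a bit further
augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip('horizontal'),
    tf.keras.layers.RandomRotation(0.15),
    tf.keras.layers.RandomZoom(0.2),
    tf.keras.layers.RandomContrast(0.2),
])

# frozen pre-trained backbone: with so few images, train only the small head
base = tf.keras.applications.MobileNetV2(include_top=False, weights='imagenet',
                                         input_shape=(224, 224, 3), pooling='avg')
base.trainable = False

inputs = tf.keras.Input(shape=(224, 224, 3))
x = augment(inputs)
x = tf.keras.applications.mobilenet_v2.preprocess_input(x)
x = base(x, training=False)
outputs = tf.keras.layers.Dense(num_classes, activation='softmax')(x)
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(train_ds, epochs=30)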

Detour/Recast tileLayer: what is it about?

I would like to ask how tile layers in Detour work.
Can I layer any two tiles so that their "cuts" get merged into bigger holes, and in what manner? Are there any limitations? How many of these layers can I have, and how do they interact when navigating? Can I have, for example, a basic, always unchanged layer with the whole map and then multiple layers with user-placed geometry, etc.?
I'm talking about dtNavMeshCreateParams::tileLayer from the original library. There is an implementation using it in the TempObstacles example, but it is not well explained (it has many custom things inside, and I'm interested only in the function of those layers).
Thank you in advance; I can't find much info about this online, so any help would really be appreciated.
So I finally (after many hours) found the answer to what these layers are used for:
http://digestingduck.blogspot.com/2011/02/heightfield-layer-portals.html
(by the way, this seems to be the blog of the navmesh library's author; enjoy)

Best approach for doing full-text search with list-of-integers documents

I'm working on a C++/Qt image retrieval system based on similarity that works as follows (I'll try to avoid irrelevant or off-topic details):
I take a collection of images and build an index from them using OpenCV functions. After that, for each image, I get a list of integer values representing important "classes" that each image belongs to. The more integers two images have in common, the more similar they are believed to be.
So, when I want to query the system, I just have to compute the list of integers representing the query image, perform a full-text search (or similar) and retrieve the X most similar images.
My question is, what's the best approach to perform such a search?
I've heard about Lucene, Lemur and other indexing methods, but I don't know if this kind of full-text search is the best way, given that the domain is reduced (only integers instead of words).
I'd like to know about the alternatives in terms of efficiency, accuracy or C++ friendliness.
Thanks!
It sounds to me like you have a vectorspace model, so Lucene or a similar product may work well for you. In general, an inverted-index model will be good if:
You don't know the number of classes in advance
There are a lot of classes relative to the number of images
If your problem doesn't fit these criteria, a normal relational DB might work better, as Thomas suggested. If it meets #1 but not #2, you could investigate one of the "column oriented" non-relational databases. I'm not familiar enough with these to tell you how well they would work, but my intuition is that you'll need to replicate a lot of the functionality in an IR toolkit yourself.
Lucene is written in Java and I don't know of any C++ ports. Solr exposes Lucene as a web service, so it's easy enough to access it that way from whatever language you choose.
I don't know much about Lemur, but it looks like it has a similar vectorspace model, and it's written in C++, so that might be easier for you to use.
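To make the inverted-index idea concrete, here is a tiny plain-Python illustration (the image IDs and integer "classes" are made up): each class maps to the set of images containing it, and query results are ranked by the number of shared classes.

from collections import defaultdict, Counter

# image id -> list of integer "classes" (made-up data)
images = {
    'img1': [3, 17, 42, 99],
    'img2': [3, 8, 42],
    'img3': [5, 99],
}

# inverted index: class -> set of images containing it
inverted = defaultdict(set)
for image_id, classes in images.items():
    for c in classes:
        inverted[c].add(image_id)

def query(classes, top_k=10):
    """Rank images by how many classes they share with the query."""
    votes = Counter()
    for c in classes:
        for image_id in inverted.get(c, ()):
            votes[image_id] += 1          # one vote per shared class
    return votes.most_common(top_k)

print(query([3, 42, 99]))   # [('img1', 3), ('img2', 2), ('img3', 1)]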
You can take a look at Lucene for image retrieval (LIRE) here: http://www.semanticmetadata.net/2006/05/19/lire-lucene-image-retrieval-04-released/
If I'm not mistaken, you are trying to implement a typical bag-of-words image retrieval system, am I correct? If so, you are probably trying to build an inverted file index. Lucene on its own is not suitable, as you have probably already realized, since it indexes text instead of numbers. Using its classes for querying the index would also be a problem, as it is not designed to "parse" an image (i.e. detect keypoints, extract descriptors, then vector-quantize them) into the query vector.
LIRE, on the other hand, has been modified to index feature vectors. However, it does not appear to work out of the box for a bag-of-words model. Also, I think I've read on the author's website that it currently uses brute-force matching rather than an inverted file index to retrieve the images, but I would expect it to be easier to extend than Lucene itself for your purposes.
Hope this helps.

How do I write a Perl script to filter out digital pictures that have been doctored?

Last night before going to bed, I browsed through the Scalar Data section of Learning Perl again and came across the following sentence:
the ability to have any character in a string means you can create, scan, and manipulate raw binary data as strings.
An idea immediately hit me that I could actually let Perl scan the pictures that I have stored on my hard disk to check if they contain the string Adobe. It seems by doing so, I can tell which of them have been photoshopped. So I tried to implement the idea and came up with the following code:
#!perl
use autodie;
use strict;
use warnings;

{
    local $/ = "\n\n";                     # read each file in paragraph-sized chunks
    my $dir = 'f:/TestPix/';
    my @pix = glob "$dir/*";
    foreach my $file (@pix) {
        open my $pic, '<:raw', $file;      # raw mode: these are binary files
        while (<$pic>) {
            if (/Adobe/) {
                print "$file\n";
                last;                      # one match per file is enough
            }
        }
        close $pic;
    }
}
Excitingly, the code seems to really work, and it does the job of filtering out the pictures that have been photoshopped. But the problem is that many pictures are edited by other utilities. I think I'm kind of stuck here. Do we have some simple but universal method to tell whether a digital picture has been edited or not, something like
if (!= /the original format/) {...}
Or do we simply have to add more conditions, like
if (/Adobe/|/ACDSee/|/some other picture editors/)
Any ideas on this? Or am I oversimplifying due to my miserably limited programming knowledge?
Thanks, as always, for any guidance.
Your best bet in Perl is probably ExifTool. This gives you access to whatever non-image information is embedded into the image. However, as other people said, it's possible to strip this information out, of course.
I'm not going to say there is absolutely no way to detect alterations in an image, but the problem is extremely difficult.
The only person I know of who claims to have an answer is Dr. Neal Krawetz, who says that digitally altered parts of an image will have different compression error rates than the original portions. He claims that re-saving a JPEG at different quality levels will highlight these differences.
I have not found this to be the case in my investigations, but perhaps you might have better results.
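If you want to experiment with the re-save idea (often called error level analysis), here is a minimal Pillow sketch; it is not Dr. Krawetz's actual method, and 'photo.jpg' is a placeholder:

from PIL import Image, ImageChops, ImageEnhance
import io

def error_levels(path, quality=90, scale=20):
    """Re-save the JPEG and return the (amplified) difference image."""
    original = Image.open(path).convert('RGB')
    buf = io.BytesIO()
    original.save(buf, 'JPEG', quality=quality)          # recompress at a fixed quality
    resaved = Image.open(buf)
    diff = ImageChops.difference(original, resaved)
    return ImageEnhance.Brightness(diff).enhance(scale)  # amplify so the differences are visible

error_levels('photo.jpg').save('photo_ela.png')          # brighter areas recompressed differently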
No. There is no functional distinction between a perfectly edited image and one which was that way from the start - it's all just a bag of pixels in the end, after all, and any metadata you can remove or forge as you want.
The name of the graphics program used to edit the image is not part of the image data itself but of something called metadata - which may be stored in the image file but, as others have noted, is neither required (some programs may not store it, some may give you the option of not storing it) nor reliable - if you forged an image, you might have forged the metadata as well.
So the answer to your question is "no, there's no way to universally tell if the pic was edited or not, although some image editing software may write its signature into the image file and it'll be left there by the carelessness of the editing person."
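To illustrate the "signature" point: when an editor leaves its name at all, it usually ends up in the EXIF Software tag, which is as easy to read as it is to strip or forge. A minimal sketch (in Python with Pillow purely to show the idea; from Perl, the Image::ExifTool module exposes the same tag):

from PIL import Image
from PIL.ExifTags import TAGS

def editing_software(path):
    """Return the EXIF 'Software' tag if present, else None."""
    exif = Image.open(path).getexif()
    for tag_id, value in exif.items():
        if TAGS.get(tag_id) == 'Software':
            return value
    return None

# e.g. "Adobe Photoshop CS6" -- or None if the tag was never written or was stripped
print(editing_software('photo.jpg'))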
If you're inclined to learn more about image processing in Perl, you could take a look at some of the excellent modules CPAN has to offer:
Image::Magick - read, manipulate and write a large number of image file formats
GD - create colour drawings using a large number of graphics primitives, and emit the drawings in various formats.
GD::Graph - create charts
GD::Graph3d - create 3D Graphs with GD and GD::Graph
However, there are other utilities available for identifying various image formats. It's more of a question for Super User, but for various unix distros you can use file to identify many different types of files, and for MacOSX, Graphic Converter has never let me down. (It was even able to open the bizarre multi-file X-ray of my cat's shattered pelvis that I got on a disc from the vet.)
How would you know what the original format was? I'm pretty sure there's no guaranteed way to tell if an image has been modified.
I can just open the file (with my favourite programming language and filesystem API) and write whatever I want into it willy-nilly. As long as I don't screw something up with the file format, you'd never know it happened.
Heck, I could print the image out and then scan it back in; how would you tell it from an original?
As others have stated, there is no way to know if the image was doctored. I'm guessing what you basically want to know is the difference between a realistic photograph and one that has been enhanced or modified.
There's always the option of running some extremely complex image recognition algorithm that analyzes every pixel in your image to determine whether it was doctored. Such a solution would probably involve AI that examines millions of photos, both doctored and untouched, and learns from them. However, this is more of a theoretical solution and isn't very practical: it would be extremely complex to develop, would probably take years, and even then it likely wouldn't be correct 100% of the time. I'm guessing AI technology isn't at that level yet and could take a while to get there.
A not-commonly-known feature of exiftool allows you to recognize the originating software through an analysis of the JPEG quantization tables (not relying on image metadata). It recognizes tables written by many applications. Note that some cameras may use the same quantization tables as some applications, so this isn't a 100% solution, but it is worth looking into. Here is an example of exiftool run on two images; the first was edited by Photoshop.
> exiftool -jpegdigest a.jpg b.jpg
======== a.jpg
JPEG Digest : Adobe Photoshop, Quality 10
======== b.jpg
JPEG Digest : Canon EOS 30D/40D/50D/300D, Normal
2 image files read
This will work even if the metadata has been removed.
There is existing software out there which uses various techniques (compression artifacting, comparison to signature profiles in a database of cameras, etc.) to analyze the actual image data for evidence of alteration. If you have access to such software and the software available to you provides an API for external access to these analysis functions, then there's a decent chance that a Perl module exists which will interface with that API and, if no such module exists, it could probably be created rather quickly.
In theory, it would also be possible to implement the image analysis code directly in native Perl, but I'm not aware of anyone having done so and I expect that you'd be better off writing something that low-level and processor-intensive in a fully-compiled language (e.g., C/C++) rather than in Perl.
http://www.impulseadventure.com/photo/jpeg-snoop.html
is a tool that does the job fairly well.
If there has been any cloning, there is a variation in pixel density or concentration which sometimes shows up upon manual inspection.
A Photoshop-cloned area will have an even pixel density (by which I mean the variation of pixels with respect to a scanned image).