Photo gallery using different sized images in fixed viewable area - slideshow

I'm hoping you can help and I'm hoping this isn't a duplicate (I've already been searching for the past 2 hours trying to find a solution).
What I'm currently doing:
I am using the Pikachoose Photo Gallery - http://www.pikachoose.com - but I am open to others. I just need a very basic photo gallery that I can populate from a database. I have the gallery looking 95% of how I need it to look, but I'm running into a big problem. Although the width is fixed, the height is not, since fixing both would skew the image. I assume most of the images will be landscape. So as the slider rotates through the images, the main (larger) viewable area changes height based on the image that is loaded. It would be fine if all images were the same height/width, but that isn't a possibility, as I will not have control over the images that get loaded.
The images are loaded into an unordered list and, when clicked, open a page with the full image.
What I need it to do:
I'm familiar with how sprites work, but I wanted to know if there is a way to incorporate that type of functionality into the slider (as Flickr does for thumbnails). I want a fixed-size viewable area, such as 220px x 150px, and have the image load into it with its longest side scaled to 220 + 40 pixels (in case it's a narrow image) and centered, so that you still only see 220px x 150px. The same goes for the thumbnails: I want them at a set width/height of 50x50 pixels, which they are right now, but the images are squished. If I can solve this for either the main image or the thumbnails, I think I can solve it for the other. But I don't even know where to start.
See below for a visual example. Note: I meant for the thumbnails to read "set shortest side to 50px" not longest...
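In case it helps, the crop behavior described above (scale one side to fit, center, and hide the overflow) can be pre-computed server-side when the images come out of the database. A minimal sketch with Pillow in Python, where the file names are placeholders:

    # Sketch: scale-and-center-crop into a fixed viewable area.
    # "photo.jpg" and the output names are placeholder file names.
    from PIL import Image, ImageOps

    src = Image.open("photo.jpg")

    # Thumbnail: shortest side scaled to 50px, then center-cropped to 50x50.
    thumb = ImageOps.fit(src, (50, 50), method=Image.LANCZOS, centering=(0.5, 0.5))
    thumb.save("photo_thumb.jpg")

    # Main image: same idea, cropped to the 220x150 viewable area.
    main = ImageOps.fit(src, (220, 150), method=Image.LANCZOS, centering=(0.5, 0.5))
    main.save("photo_main.jpg")

Client-side, the equivalent trick is a fixed-size container with overflow: hidden and the image centered inside it.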

Related

Error in train_object_detector.cpp dlib

I was trying to run train_object_detector.cpp from the dlib library to train it for pedestrian detection. I'm using the INRIA dataset, and when I tried to use it, this exception was thrown:
exception thrown!
Error! An impossible set of object boxes was given for training. All the boxes need to have a similar aspect ratio and also not be smaller than about 1600 pixels in area. The following images contain invalid boxes:
crop001002.png
crop001027.png
crop001038.png
crop001160.png
crop001612.png
crop001709.png
Try the -h option for more information.
When I removed these photos, it ran and loaded all the photos, but then another exception was thrown:
exception thrown!
An impossible set of object labels was detected. This is happening because none of the object locations checked by the supplied image scanner is a close enough match to one of the truth boxes. To resolve this you need to either lower the match_eps or adjust the settings of the image scanner so that it hits this truth box. Or you could adjust the offending truth rectangle so it can be matched by the current image scanner. Also, if you are using the scan_image_pyramid object then you could try using a finer image pyramid or adding more detection templates. E.g. if one of your existing detection templates has a matching width/height ratio and smaller area than the offending rectangle then a finer image pyramid would probably help.
Please help me deal with this.
Did you label your images using ImgLab?
When you label your images with this tool, keep in mind that your bounding boxes must have a similar aspect ratio and must not be smaller than the sliding window.
Usually, the example you are running should dynamically calculate the size of the sliding window from the provided boxes.
If none of this helps, I'd suggest modifying the source code a bit to track down the source of the error.
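If you want to flag the offending boxes yourself before training, a quick pass over the ImgLab XML can check the two constraints the error message names. A rough sketch in Python; the file name, the 1600px threshold, and the "similar ratio" bounds are assumptions taken from the message above, not dlib's exact test:

    # Sketch: flag boxes that are too small or whose aspect ratio strays
    # far from the median. "training.xml" is a placeholder ImgLab file.
    import xml.etree.ElementTree as ET

    MIN_AREA = 1600  # from the error message

    boxes = []
    root = ET.parse("training.xml").getroot()
    for image in root.iter("image"):
        for box in image.findall("box"):
            w, h = int(box.get("width")), int(box.get("height"))
            boxes.append((image.get("file"), w, h))

    ratios = sorted(w / h for _, w, h in boxes)
    median = ratios[len(ratios) // 2]

    for name, w, h in boxes:
        if w * h < MIN_AREA:
            print(f"{name}: box {w}x{h} is under {MIN_AREA}px in area")
        if not 0.5 <= (w / h) / median <= 2.0:  # loose "similar ratio" test
            print(f"{name}: ratio {w / h:.2f} is far from the median {median:.2f}")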

How do I compress only the top part of an image in Inkscape?

In Inkscape, I am trying to make an SVG of a map that I drew, and I am trying to line it up with another map that I traced off of it. I imported pictures of both maps and tried to align them; however, the pictures were taken with my phone at slightly different angles. When I stretched them out, rotated them, and aligned them, one was bigger at the top than the other. Is there a way I could compress the top part of the image, or a way to get around this problem?
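Inkscape's normal object transforms are affine (scale, rotate, skew), so a photo taken at an angle generally needs a perspective correction first. Outside Inkscape, a four-point perspective warp can do this; a minimal sketch with OpenCV in Python, where the file names and corner coordinates are all placeholders:

    import cv2
    import numpy as np

    # Placeholder file names and points; replace the four source points
    # with the matching corners of the map as they appear in the photo.
    img = cv2.imread("map_photo.png")
    src = np.float32([[12, 30], [980, 18], [1005, 760], [5, 770]])  # in the photo
    dst = np.float32([[0, 0], [1000, 0], [1000, 750], [0, 750]])    # where they should land

    M = cv2.getPerspectiveTransform(src, dst)
    warped = cv2.warpPerspective(img, M, (1000, 750))
    cv2.imwrite("map_flat.png", warped)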

Finding transformation using VIPS

I am trying to compare two images where one of them is rotated and shifted. I need to find the transformation from one to another so that I can resample and compare/subtract using VIPS to see the difference. Is there a way to do this?
nip2 has a couple of ways of doing this.
Load two images and click Toolkits / Images / Transform / Linear Match. A pair of tie points will appear on your two images: drag them to mark a pair of matching features on each image. The output image will be the second image resampled to match the first. There are some options to automatically improve your tie points, and to only rotate. It should be quick, even for very large images.
There's also an automatic transform finder. The auto search will only work for pairs of images which are rather similar; for example, it won't be able to match an x-ray and a visible image. To try this, load two images (they must be exactly the same size), and click Toolkits / Image / Transform / Rubber Sheet / Find. This will find a transform that matches the second image to the first. You can set how long it searches and the error threshold. It won't work for very large images (more than a few GB).
After you've found a transform, you can apply it to any other image with Toolkits / Image / Transform / Rubber Sheet / Apply. It'll take account of changes of scale, so you can find on a small image and apply on a large one.
Unfortunately, the auto transform finder was written by a friend of mine and he can't release the source. It's compiled into the Windows nip2 binary, or on linux you have to download a binary plugin and put it in the vips lib area.
http://www.vips.ecs.soton.ac.uk/supported/current/linux-64/
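If you'd rather script the apply step than use the nip2 GUI, and you already know (or have estimated) the rotation and shift, the libvips Python binding can resample and subtract directly. A minimal sketch with pyvips; the file names, angle, and offsets are placeholders:

    import pyvips

    a = pyvips.Image.new_from_file("reference.png")
    b = pyvips.Image.new_from_file("rotated_shifted.png")

    # Rotate and shift b back onto a; the values here are placeholders.
    aligned = b.similarity(angle=-3.2, odx=15, ody=-8)

    # Crop both to the common area and subtract to see the difference.
    w = min(a.width, aligned.width)
    h = min(a.height, aligned.height)
    diff = (a.crop(0, 0, w, h) - aligned.crop(0, 0, w, h)).abs()
    print("mean absolute difference:", diff.avg())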

Finding all the regions in a webpage's image

I am working on a project where I need to find the different regions present in an image (of any web page), such as the navigation bar, menu bar, body, advertisement section, etc. First I want to segment the entire image into distinct regions/sections using image processing.
What I have done:
1st approach: I ran an edge detection algorithm (Canny); this way I could see the different regions in the form of rectangular boxes. However, I couldn't find a way to recognize all of these regions.
2nd approach: I used the Hough transform to get all the horizontal and vertical lines, which can help in deciding the different rectangular sections of the image. However, I have not been able to come up with a concrete approach to use these Hough lines to find all the rectangular regions embedded in the image.
Any help is highly appreciated!
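One way to push the first approach further is to close up the Canny edges and extract contours, keeping the roughly rectangular ones as candidate regions. A minimal sketch with OpenCV in Python (OpenCV 4 signatures); the file name and area threshold are placeholders:

    import cv2

    img = cv2.imread("page_screenshot.png")
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)

    # Dilate so box borders form closed outlines before contour extraction.
    edges = cv2.dilate(edges, None, iterations=2)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h > 5000:  # placeholder: skip tiny fragments
            cv2.rectangle(img, (x, y), (x + w, y + h), (0, 0, 255), 2)

    cv2.imwrite("regions.png", img)

Deciding what each rectangle actually is (navigation vs. advertisement, etc.) is a separate classification step; position and size heuristics are a common starting point.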

About image backgrounds while preparing training dataset for cascaded classifier

I have a question about preparing the dataset of positive samples for a cascaded classifier that will be used for object detection.
As positive samples, I have been given 3 sets of images:
a set of colored images in full size (about 1200x600) with a white background and with the object displayed at a different angle in each image
another set with the same images in grayscale and with a white background, scaled down to the detection window size (60x60)
another set with the same images in grayscale and with a black background, scaled down to the detection window size (60x60)
My question is: in set 1, should the background really be white? Should it not instead be an environment that the object is likely to be found in when testing? Or should I have a fourth set where the images are in their natural environments? How does the environment figure into the training samples?
The background should be a typical environment of the object, because when you actually try to detect the objects, the search window will always include some of the background. The best thing is to crop the objects from natural images.
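As a sketch of that cropping step (any image library works; here OpenCV in Python, with a made-up file name and bounding box):

    import cv2

    img = cv2.imread("scene.jpg")
    x, y, w, h = 120, 80, 140, 140  # placeholder bounding box of the object

    sample = img[y:y + h, x:x + w]          # crop the positive sample
    sample = cv2.resize(sample, (60, 60))   # match the 60x60 detection window
    cv2.imwrite("positive_0001.png", sample)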
If you use the trainCascadeObjectDetector function in MATLAB, you do not even have to crop the samples. It lets you specify multiple bounding boxes per image. You also do not have to worry about the size of the samples, because trainCascadeObjectDetector will resize them for you.
There is a very handy GUI app on the MATLAB File Exchange for labeling objects of interest in images, designed for use with trainCascadeObjectDetector.
Edit: a couple of other points. Your negative images should also contain the backgrounds typically associated with your objects of interest. Here is a tutorial that explains how to prepare the training data and how to set some of the parameters.