Why does srcset resize the image?

I'm seeing some weird behaviour with srcset and I'm having a hard time understanding it. I've made a CodePen: http://codepen.io/anon/pen/dYBvNM
I have a set of images (that Shopify generates) at various sizes: 240px, 480px, 600px and 1024px. The problem is that those are maximum sizes. If a merchant uploads a smaller image (say 600px wide), the "1024px" version will actually be 600px, not 1024px. I cannot know that in advance, so I'm forced to simply list all the sizes as a "best case":
<img
  src="my_1024x1024.jpg"
  srcset="my_240px.jpg 240w, my_480px.jpg 480w, my_600px.jpg 600w, my_1024px.jpg 1024w"
  sizes="(max-width: 35em) 100vw, 610px"
>
The weirdness happens when the image really is smaller than the expected maximum size. When that's the case, the browser correctly selects the appropriate candidate (here, it would select the 1024 version on a 15-inch Retina display), but because the image is actually smaller than the 1024px I've declared, the browser resizes the image to below its native resolution.
You can compare in the CodePen (http://codepen.io/anon/pen/dYBvNM) that both images are the 1024px version, yet the one using srcset renders smaller than the one using src only. I would have expected the browser to leave the image at its native resolution.
Could you please explain why that happens?
Thanks!

The way it works is that 'w' descriptors are converted into 'x' descriptors by dividing the given value by the effective size from the sizes attribute. For instance, if 1024w is picked and the effective size is 610px, then 1024/610 ≈ 1.679x, and that is the pixel density the browser applies to the image. If the image is then not in fact 1024 pixels wide, the browser still applies that same density, which "shrinks" the image, because that is the right thing to do in the valid case where the image width and the 'w' descriptor match.
You have to make the descriptors match the actual resource width. When the merchant uploads an image, check its width and use that as the biggest descriptor in srcset (if it is smaller than 1024), removing any descriptors that are bigger than the uploaded image's width.
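For example, if the upload turns out to be only 600px wide, the markup would be trimmed to something like this (same file naming as in the question; the cut-off logic itself belongs in your template):
<img
  src="my_600x600.jpg"
  srcset="my_240px.jpg 240w, my_480px.jpg 480w, my_600px.jpg 600w"
  sizes="(max-width: 35em) 100vw, 610px"
>
Now the biggest descriptor (600w) matches the real width of the biggest resource, so the computed density is correct and the browser no longer shrinks the image.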

Related

How to detect if an image contains only white color with C++

We are writing a piece of software which downloads tiles from WMS servers on the internet (these are map servers that provide images as map data for various locations on the globe) and then displays them inside a window, using Qt and some OpenGL bindings.
Some of these servers contain data only for specific regions of the planet, and if you request an area outside of what they support, they give you just a blank white image, which we do not want to use since it occupies extra space. So the question is:
How to identify whether an image contains only 1 color (white), or not.
What we have tried till now is the following:
Create a QImage and loop over every pixel, checking whether it differs from white. This is extremely slow, and since we want this to be a more or less realtime application, this idea sadly does not work.
Check if the image size is the same as an empty image size, but this also does not work, since it might happen that:
There is another image with the same size which actually contains data
It might be that tiles which are over an ocean have just one color, a light blue, and we need those tiles.
Do a "post processing" of the downloaded images and remove them from the scene later, but this looks ugly from the users' perspective that tiles are just appearing and disappearing ...
Request transparent images from the WMS servers, but due to some OpenGL mishaps, when rendering, these images appear as black on some (mostly low-end) video cards.
Any idea, library to use, direction or even code is welcome, and we need a C++ solution, since our app is C++.
Edit, for those suggesting to sample pixels at only a few points in the image:
[Two example tile images were shown here; the left one contains a very tiny piece of Norway in the corner.]
Both images above would be wrongly classified as entirely white if we sampled only a few points and none of them happened to touch a color other than white. Link to the second image: https://wms.geonorge.no/skwms1/wms.sjokartraster2?LAYERS=all&SRS=EPSG:900913&FORMAT=image/png&SERVICE=WMS&VERSION=1.1.1&REQUEST=GetMap&BBOX=-313086.067812500,9079495.966562500,0.000000000,9392582.034375001&WIDTH=256&HEIGHT=256&TRANSPARENT=false
The correct and most reliable way would be to uncompress the PNG bytes and check each pixel in a tight loop.
The most common source of an image-processing routine being "slow" is making a function call per pixel. So if you are calling QImage::pixel in a nested loop for each row/column, it will not have the performance you desire.
Instead, take advantage of the fact that QImage gives you raw image bytes via the scanLine method or the bits method:
Something like this might work:
// Assumes a 32-bit format such as QImage::Format_ARGB32, where an
// all-white pixel is four 0xff bytes.
const int bytes_per_line = qimage.bytesPerLine();
const int height = qimage.height();

// A reference row of all-0xff bytes to compare each scanline against.
std::vector<unsigned char> white_row(bytes_per_line, 0xff);

bool allWhite = true;
for (int row = 0; allWhite && (row < height); row++)
{
    const unsigned char* row_data = qimage.constScanLine(row);
    allWhite = !memcmp(row_data, white_row.data(), bytes_per_line);
}
The above loop terminates quickly the moment a row containing a non-white pixel is encountered.
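Wrapped up as a self-contained helper (a sketch, with hypothetical naming; converting to a known 32-bit format first makes the byte comparison valid regardless of what format the tile was downloaded in):
#include <QImage>
#include <cstring>
#include <vector>

// Returns true if every pixel in the tile is fully opaque white.
bool isAllWhite(const QImage& tile)
{
    // Normalize to ARGB32 so a white pixel is exactly four 0xff bytes.
    const QImage img = tile.convertToFormat(QImage::Format_ARGB32);
    const std::vector<unsigned char> white_row(img.bytesPerLine(), 0xff);

    for (int row = 0; row < img.height(); ++row)
        if (std::memcmp(img.constScanLine(row), white_row.data(),
                        img.bytesPerLine()) != 0)
            return false;
    return true;
}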

Ensure constant Mat size OpenCV

I am using the findContours() function to find a business card in an image; however, sometimes the card is very small in the image. It still finds the card, but when I go to do further processing I get unexpected results due to the inconsistency in the size of the card.
How can I take outImg and ensure it is always of size x,y?
You can use cv::resize to do it: you provide the desired size as a cv::Size structure and leave the scale-factor arguments at their defaults.
cv::resize(theSourceOfAllEvil, myAwesomeMatrixResized, cv::Size(width, height));
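A slightly fuller sketch (the 600x380 target is just an example; pick whatever x,y you need):
#include <opencv2/imgproc.hpp>

// Resize outImg to a fixed size so later processing sees consistent input.
// INTER_AREA generally gives the best quality when shrinking.
cv::Mat normalized;
cv::resize(outImg, normalized, cv::Size(600, 380), 0, 0, cv::INTER_AREA);
Note that this stretches the card if its aspect ratio differs from the target's; if that matters for your processing, resize to fit and pad the borders instead.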

C++: How to interpret a byte array representation of an image?

I'm trying to work with this camera SDK. The camera has a function called CameraGetImageData(BYTE* data), which I assume takes in a byte array, fills it with the image data, and then returns a status code based on success/failure. The SDK provides no documentation whatsoever (not even code comments), so I'm just guesstimating here. Here's a code snippet of what I think works:
// An array of an arbitrarily large size; I'm not sure what the
// exact size needs to be, so I made it large.
BYTE* data = new BYTE[10000000];
CameraGetImageData(data);
// Do stuff here to process/output image data
I've run the code with breakpoints in Visual Studio and can confirm that the CameraGetImageData function does indeed modify the array. Now my question is: is there a standard way for cameras to output data? How should I start using this data, and what does each byte represent? The camera captures in 8-bit color.
Take pictures of pure red, pure green and pure blue. See what comes out.
Also, I'd make the array 100 million, not 10 million if you've got the memory, at least initially. A 10 megapixel camera using 24 bits per pixel is going to use 30 million bytes, bigger than your array. If it does something crazy like store 16 bits per colour it could take up to 60 million or 80 million bytes.
You could fill this big array with known data before passing it in. For example, fill it with '01234567' repeated. Then it's really obvious which bytes have been written and which haven't, so you can work out the real size of what's returned.
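A sketch of that trick (assuming the SDK's BYTE is unsigned char and CameraGetImageData has the signature guessed in the question):
#include <cstdio>
#include <vector>

using BYTE = unsigned char;          // as typedef'd by the SDK headers (assumed)
int CameraGetImageData(BYTE* data);  // assumed SDK signature

int main()
{
    const size_t kBufSize = 100000000; // 100 MB, a generous upper bound
    std::vector<BYTE> data(kBufSize);

    // Pre-fill with the repeating pattern '01234567'.
    for (size_t i = 0; i < kBufSize; ++i)
        data[i] = static_cast<BYTE>('0' + (i % 8));

    CameraGetImageData(data.data());

    // Scan backwards past any bytes that still match the pattern; what is
    // left is roughly how much the SDK actually wrote. A few trailing image
    // bytes could coincidentally match, so treat this as an estimate.
    size_t written = kBufSize;
    while (written > 0 &&
           data[written - 1] == static_cast<BYTE>('0' + ((written - 1) % 8)))
        --written;

    std::printf("SDK wrote roughly %zu bytes\n", written);
}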
I don't think there is a standard, but you can try to identify which values are what by putting solid-color images in front of the camera, so that all pixels are approximately the same color. Knowing what color should be stored in each pixel, you can work out how color is represented in your array. I would go with black, white, red, green and blue images.
But also consider finding a better SDK that has documentation, because just allocating a big array is really bad design.
You should check the documentation on your camera SDK, since there's no "standard" or "common" way for data output. It can be raw data, it can be RGB data, it can even be already compressed. If the camera vendor doesn't provide any information, you could try to find some libraries that handle most common formats, and try to pass the data you have to see what happens.
Without even knowing the type of the camera, this question is nearly impossible to answer.
If it is a scientific camera, chances are good that it adheres to the IEEE 1394 (aka IIDC or DCAM) standard. I have personally worked with such a camera made by Hamamatsu, using this library to interface with it.
In my case, the camera output was just raw data. The camera itself was monochrome, and each pixel had a depth resolution of 12 bits. Therefore, each pixel intensity was stored as a 16-bit unsigned value in the result array. The size of the array was simply width * height * 2 bytes, where width and height are the image dimensions in pixels and the factor 2 accounts for the 16 bits per pixel. The width and height were known a priori from the chosen camera mode.
If you have the dimensions of the result image, try dumping your byte array into a file and loading the result in Python or Matlab to visualize the content. Another possibility is to load the raw file with an image editor such as ImageJ and hope to get something out of it.
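For instance, a minimal dump for the 16-bit mono case described above (data, width and height are placeholders for whatever your camera mode dictates); ImageJ can then open the file via File > Import > Raw... as 16-bit unsigned:
#include <cstdio>

// Write the raw frame (width * height * 2 bytes) to disk for inspection.
if (std::FILE* f = std::fopen("frame.raw", "wb"))
{
    std::fwrite(data, 2, static_cast<size_t>(width) * height, f);
    std::fclose(f);
}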
Good luck!
I hope this question's solution helps you: https://stackoverflow.com/a/3340944/291372
Actually you've got an array of pixels (assume 1 byte per pixel if your camera captures in 8-bit). What you need is just to determine the width and height. After that you can try to reconstruct a bitmap image from your byte array.

Facebook graph api image sources not matching size given in response

I need to get the largest image Facebook has saved. According to the docs, the image returned in the 'source' field should now be a maximum of 960px wide, which I can confirm. But if you look at the 'images' field there are loads of other URLs at apparently different, larger sizes. However, when I actually follow the URLs, the images aren't the reported size at all! They are never larger than 960px. See this example: http://graph.facebook.com/10150369820292147?fields=images. Can we not get access to anything larger than 960? I thought they were saving larger images now, since they have a full-screen option in the gallery.
Thanks
The width and height fields of the first entry in the images array indicate how large the image should be for display inside of a 2048 x 2048 box, but the actual image file is still limited to 960 x 960 pixels.

256x256 icons trouble again, or how to get TRUE icon size through IImageList

I get the System image list by calling SHGetImageList:
SHGetImageList(SHIL_LAST, IID_IImageList, (void**)&imList);
This gives me a list of 256x256 images, but small icons that don't have a 256x256 version are reported as 256x256 too. I need to get each icon at its true size. How can I find that out?
I get the size of an icon by using the method IImageList::GetIconSize, but it returns 256x256 for every icon in this list. So the question remains: how do I find out the real image size?
P.S. Sorry for my English.
An image list can only hold images of a single size. If you have a 256x256 image list, it will always return 256x256 images. To retrieve images at other sizes, you need to access the other, smaller image lists that the Shell provides.
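A sketch of what that looks like (enumerating the Shell's fixed-size lists and querying each one's icon size; error handling trimmed):
#include <shlobj.h>
#include <commoncontrols.h>

// The Shell keeps one image list per size bucket; ask each one how big
// its images are and pick the list that matches what you need.
const int buckets[] = { SHIL_SMALL, SHIL_LARGE, SHIL_EXTRALARGE, SHIL_JUMBO };
for (int shil : buckets)
{
    IImageList* imList = nullptr;
    if (SUCCEEDED(SHGetImageList(shil, IID_PPV_ARGS(&imList))))
    {
        int cx = 0, cy = 0;
        imList->GetIconSize(&cx, &cy); // e.g. 16x16, 32x32, 48x48, 256x256
        // ... keep or use imList here ...
        imList->Release();
    }
}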