http://tinypng.org/ is a great service; it shrank my PNG images by about 67%. How does it work? How can it reduce the file size that much while the picture quality remains essentially the same?
The answer's right on that web page:
When you upload a PNG (Portable Network Graphics) file, similar colours in your image are combined. This technique is called “quantisation”. Because the number of colours is reduced, 24-bit PNG files can be converted to much smaller 8-bit indexed colour images. All unnecessary metadata is stripped too. The result: tiny 8-bit PNG files with 100% support for transparency. Have your cake and eat it too!
It turns 24-bit RGB files into palettized 8-bit ones. You lose some color depth, but for small images it's often imperceptible.
You can do the same thing manually on the command line with this awesome tool:
http://pngquant.org/
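For a rough idea of what indexed colour means, here is a deliberately naive C++ quantizer that maps 24-bit pixels onto a fixed 256-entry palette. pngquant itself chooses the palette adaptively (a median-cut/k-means style algorithm with dithering), which is why its output looks much better; the Rgb struct and function name here are purely illustrative.

```cpp
#include <cstdint>
#include <vector>

// Illustrative pixel type, not any library's API.
struct Rgb { uint8_t r, g, b; };

// Build a fixed 256-entry palette (3 bits red, 3 bits green, 2 bits blue)
// and map every pixel to a palette index. Real quantizers pick the palette
// from the image's actual colours; this only shows the indexed-colour idea.
std::vector<uint8_t> quantize332(const std::vector<Rgb>& pixels,
                                 std::vector<Rgb>& paletteOut)
{
    paletteOut.resize(256);
    for (int i = 0; i < 256; ++i) {
        paletteOut[i] = { uint8_t(((i >> 5) & 7) * 255 / 7),
                          uint8_t(((i >> 2) & 7) * 255 / 7),
                          uint8_t((i & 3) * 255 / 3) };
    }
    std::vector<uint8_t> indices;
    indices.reserve(pixels.size());
    for (const Rgb& p : pixels) {
        // Truncate each channel to its palette precision: one byte per
        // pixel instead of three, which is where the size saving comes from.
        indices.push_back(uint8_t((p.r >> 5) << 5 |
                                  (p.g >> 5) << 2 |
                                  (p.b >> 6)));
    }
    return indices;
}
```

Each pixel shrinks from three bytes to a one-byte palette index, which is the bulk of the saving before PNG's own lossless compression even runs.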
In my current app I need to share screens, à la Skype or Discord. I'd prefer not to use external libraries, but will if I have to.
So far I have been sending screenshots as down-scaled bitmaps over TCP sockets and repainting the window every few milliseconds. This is, of course, an approach I knew was doomed from the start. Is there any API that could save me?
Any help appreciated.
While I haven't implemented it myself, I believe what's usually done is to break the screen into 16x16-pixel blocks. Keep the previous screenshot, take a new one, compare which blocks have changed, and send only the 16x16 blocks that contain changes.
You can further improve performance with a change threshold: if fewer than x pixels have changed in a block, or if the cumulative difference between corresponding pixels in a block is below some threshold, don't send that block yet.
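A minimal sketch of that diffing loop, assuming 32-bit RGBA frames of equal size held in memory; the block size, threshold, and DirtyBlock type are illustrative choices, not any standard API:

```cpp
#include <cstdint>
#include <vector>

constexpr int kBlock = 16;       // block edge length in pixels
constexpr int kThreshold = 20;   // min changed pixels before a block is sent

struct DirtyBlock { int bx, by; };  // block coordinates, not pixel coordinates

std::vector<DirtyBlock> diffFrames(const uint32_t* prev, const uint32_t* curr,
                                   int width, int height)
{
    std::vector<DirtyBlock> dirty;
    // Edge pixels are ignored for brevity when the dimensions aren't
    // exact multiples of the block size.
    for (int by = 0; by < height / kBlock; ++by) {
        for (int bx = 0; bx < width / kBlock; ++bx) {
            int changed = 0;
            for (int y = by * kBlock; y < (by + 1) * kBlock; ++y)
                for (int x = bx * kBlock; x < (bx + 1) * kBlock; ++x)
                    if (prev[y * width + x] != curr[y * width + x])
                        ++changed;
            if (changed >= kThreshold)   // below-threshold blocks wait
                dirty.push_back({bx, by});
        }
    }
    return dirty;
}
```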
The blocks are also often compressed with a lossy scheme that greatly shrinks the amount of data you need to send per block. The blocks are often sent with 4:2:2 chroma subsampling as well, meaning the colour-difference (chroma) channels are stored at half the horizontal resolution of the luma channel. This matches how the human visual system works, and it explains why things that are pure red or pure blue sometimes get blockiness or fringing around them when screen sharing.
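A sketch of the 4:2:2 idea, assuming the frame has already been converted into separate luma/chroma planes and that the width is even; each horizontal pair of chroma samples is averaged into one:

```cpp
#include <cstdint>
#include <vector>

// Keep full-resolution luma (Y) untouched, but halve the horizontal
// resolution of the chroma planes (Cb, Cr). Plane layout is an assumption.
void subsample422(const std::vector<uint8_t>& cbFull,
                  const std::vector<uint8_t>& crFull,
                  int width, int height,
                  std::vector<uint8_t>& cbHalf,
                  std::vector<uint8_t>& crHalf)
{
    cbHalf.resize(width / 2 * height);
    crHalf.resize(width / 2 * height);
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; x += 2) {
            int src = y * width + x;
            int dst = y * (width / 2) + x / 2;
            // Averaging adjacent samples halves the chroma data; sharp
            // red or blue edges lose detail here, hence the fringing.
            cbHalf[dst] = uint8_t((cbFull[src] + cbFull[src + 1] + 1) / 2);
            crHalf[dst] = uint8_t((crFull[src] + crFull[src + 1] + 1) / 2);
        }
    }
}
```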
I have quite a specific question: I want to draw a matrix of numbers as a greyscale image, where higher numbers produce brighter output. Is there a way to do that in a C++ program without depending on a graphics library like Qt?
I looked through some ImageMagick examples, but I'm not sure how to use its functions from C++.
Answer
In case someone stumbles upon this question: a modified version of the example shown here turned out to be a handy and easy solution.
It's hard without a library. SFML seems easy to use.
EDIT
You also have three other questions hidden in your question:
1- To save an image you can use sf::Image::saveToFile.
2- To make higher numbers brighter: normalize your numbers to [MinColorYouWant, MaxColorYouWant] (for example, [128, 255]). The normalized value corresponding to each number then becomes that pixel's intensity.
3- SFML uses RGBA images by default. Just set the R, G and B channels equal to make the image look greyscale.
EDIT 2: I've fixed the example normalization from [128,256] to [128,255].
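Putting those three points together, here is a minimal sketch using SFML 2.x; the matrix contents, its row-major layout, and the output file name are made up for the example:

```cpp
#include <SFML/Graphics.hpp>
#include <algorithm>
#include <vector>

int main()
{
    const unsigned width = 4, height = 3;
    std::vector<double> values = { 0.1, 0.5, 0.9, 0.3,
                                   0.7, 0.2, 1.0, 0.0,
                                   0.4, 0.8, 0.6, 0.25 };

    // Find the value range so we can normalize to [128, 255].
    auto [lo, hi] = std::minmax_element(values.begin(), values.end());
    double minV = *lo, maxV = *hi;

    sf::Image img;
    img.create(width, height);
    for (unsigned y = 0; y < height; ++y) {
        for (unsigned x = 0; x < width; ++x) {
            double t = (values[y * width + x] - minV) / (maxV - minV);
            auto v = static_cast<sf::Uint8>(128 + t * (255 - 128));
            // Equal R, G and B channels make the RGBA image look greyscale.
            img.setPixel(x, y, sf::Color(v, v, v));
        }
    }
    img.saveToFile("matrix.png");   // sf::Image::saveToFile writes a PNG
}
```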
I'm trying to learn how to read a JPEG image as an array of pixels in C++ or C. So far I've learned that I have to include an external library such as libjpeg.
I've been told that a JPEG is formatted in an RGB structure where each pixel gives three values. Is this true? And if so, how would I read the values for a purely black-and-white image?
The purpose of this question is that I am trying to assign a pointer to the top-right corner of a white square in a black picture.
If someone could show me how to read out the values for this situation so I could assign this pointer, I would be grateful.
Let's suppose you run with libjpeg. You'll allocate a buffer and then call jpeg_read_scanlines a sufficient number of times to get all of your decompressed image data into memory. You can read scanlines (rows) individually and reformat them as needed. If the image is grayscale, the RGB values will all be equal, so you can just read one of them.
Paul Bourke's site has some pretty good usage examples of libjpeg.
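For reference, a minimal decode loop with libjpeg might look like the following; error handling and return-value checks are omitted, and the readJpeg name is just illustrative:

```cpp
#include <cstdio>
#include <vector>
#include <jpeglib.h>

// Decode a JPEG file into a flat, row-major pixel buffer. Assumes the file
// exists and is valid; real code needs a proper error handler.
std::vector<unsigned char> readJpeg(const char* path,
                                    int& width, int& height, int& channels)
{
    FILE* f = std::fopen(path, "rb");
    jpeg_decompress_struct cinfo;
    jpeg_error_mgr jerr;
    cinfo.err = jpeg_std_error(&jerr);   // default handler exits on error
    jpeg_create_decompress(&cinfo);
    jpeg_stdio_src(&cinfo, f);
    jpeg_read_header(&cinfo, TRUE);
    jpeg_start_decompress(&cinfo);

    width    = cinfo.output_width;
    height   = cinfo.output_height;
    channels = cinfo.output_components;  // 3 for RGB, 1 for grayscale JPEGs

    std::vector<unsigned char> pixels(width * height * channels);
    while (cinfo.output_scanline < cinfo.output_height) {
        // Read one scanline (row) at a time into its slice of the buffer.
        unsigned char* row = &pixels[cinfo.output_scanline * width * channels];
        jpeg_read_scanlines(&cinfo, &row, 1);
    }
    jpeg_finish_decompress(&cinfo);
    jpeg_destroy_decompress(&cinfo);
    std::fclose(f);
    return pixels;
}
```

Pixel (x, y)'s first channel then lives at pixels[(y * width + x) * channels], so you could find the square's top-right corner by scanning each row from the right for the first white pixel.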
I want to try and make a program that can identify pictures. Since I'm using the pixel colors as input, should I have 3 inputs for each pixel in the image? (RGB values)
Color images are normally defined by at least three channels: R (red), G (green) and B (blue). You may also have an alpha channel and all sorts of other channels. So yes, for one pixel you will have 3 inputs.
You have to clarify what exactly "identify pictures" entails.
"Identify pictures" is a very vague term.
You might want to take a look at something like OpenCV for handling image data. Within that library, the Mat structure provides straightforward pixel storage and access.
As far as semantics go, the function doing the "identification" would ideally accept an image object as input, as opposed to separate image channels.
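For example, with OpenCV you might flatten an image into a per-channel input vector like this; the [0, 1] scaling and the function name are illustrative choices:

```cpp
#include <opencv2/opencv.hpp>
#include <string>
#include <vector>

// Turn an image into a flat input vector: one float per channel per pixel.
std::vector<float> imageToInputs(const std::string& path)
{
    cv::Mat img = cv::imread(path, cv::IMREAD_COLOR);  // 8-bit, 3-channel
    std::vector<float> inputs;
    inputs.reserve(img.rows * img.cols * 3);
    for (int y = 0; y < img.rows; ++y) {
        for (int x = 0; x < img.cols; ++x) {
            cv::Vec3b px = img.at<cv::Vec3b>(y, x);
            // Three inputs per pixel, scaled to [0, 1]. Note that OpenCV
            // stores channels in B, G, R order, not R, G, B.
            inputs.push_back(px[0] / 255.0f);
            inputs.push_back(px[1] / 255.0f);
            inputs.push_back(px[2] / 255.0f);
        }
    }
    return inputs;
}
```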
I am looking for sample GLSL fragment shader code that can convert an RGB frame (say, in ARGB pixel format) to YUV (say, YUV420).
Imagine an RGB frame of size 1920x1080. I'd like to use a fragment shader to convert it to a YUV frame.
Can you point me to code that can be compiled and run on an Ubuntu box?
For future reference, a number of colorspace conversions in GLSL shaders can be found in the GStreamer gst-plugins-gl codebase :)
First, you should know that your question is poorly phrased; nobody is going to write sample code for you. You can use a search engine (or even search this site) for code that converts RGB to YUV.
I have written an answer similar to what you're looking for here. It converts from RGB to YIQ, shifts the hue, and converts back. You can use the Y'CbCr matrix for the color conversion instead of YIQ if that's what you need.
It doesn't down-convert to 4:2:0, but that should be easy enough to add: once the frame is in Y'CbCr, you can downsample the chroma channels as you see fit. I recommend applying a low-pass filter to those channels first to avoid aliasing artifacts.
I don't work with Linux, so haven't tested on Ubuntu. Good luck.
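The conversion itself is just three dot products per pixel. Here it is sketched in C++ with the full-range BT.601 coefficients commonly used for JPEG-style Y'CbCr; the same lines translate almost directly into a GLSL fragment shader (swap in BT.709 coefficients for HD video if needed):

```cpp
#include <cstdint>

struct YCbCr { uint8_t y, cb, cr; };

// Round to the nearest integer and clamp to the 8-bit range, since the
// Cb/Cr expressions can land fractionally outside [0, 255].
static uint8_t clamp255(double v)
{
    return v < 0.0 ? 0 : v > 255.0 ? 255 : static_cast<uint8_t>(v + 0.5);
}

// Full-range BT.601 RGB -> Y'CbCr, one pixel at a time.
YCbCr rgbToYCbCr(uint8_t r, uint8_t g, uint8_t b)
{
    return { clamp255( 0.299    * r + 0.587    * g + 0.114    * b),
             clamp255(-0.168736 * r - 0.331264 * g + 0.5      * b + 128.0),
             clamp255( 0.5      * r - 0.418688 * g - 0.081312 * b + 128.0) };
}
```

To get to 4:2:0 from there, average each 2x2 block of Cb and Cr samples, ideally after the low-pass filtering mentioned above.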
This link has a YUV 4:2:2 (v210) to RGB implementation using a GLSL compute shader: YUV 4:2:2 v210 --> RGB GLSL source code