Eric Brasseur explained a “bug” in the scaling algorithm of current image processing software. Technically and mathematically, it is not really a bug: taking the numerical average of a pixel’s surroundings as the new color value is a perfectly sound way to scale an image down, as long as the image is treated as a plain data matrix. Visually, however, the result is not what you would expect.
Technically speaking, the problem is that “the computations are performed as if the scale of brightnesses was linear while in fact it is an exponential scale.” In mathematical terms: “a gamma of 1.0 is assumed while it is 2.2.”
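To see how big the error gets, consider blending one black and one white pixel into a single gray one. Here is a small numeric sketch, using a plain power-of-2.2 curve as a stand-in for the exact sRGB transfer function:

```python
# Average a black and a white pixel, stored as gamma-encoded values in [0, 1].
GAMMA = 2.2

def to_linear(v):
    return v ** GAMMA          # decode: gamma-encoded -> linear light

def to_encoded(v):
    return v ** (1 / GAMMA)    # encode: linear light -> gamma-encoded

black, white = 0.0, 1.0        # gamma-encoded pixel values

naive = (black + white) / 2                                      # treats encoded values as linear
correct = to_encoded((to_linear(black) + to_linear(white)) / 2)  # averages in linear light

print(f"naive:   {naive:.3f}")    # 0.500
print(f"correct: {correct:.3f}")  # ~0.730
```

A display decodes the naive 0.5 to 0.5^2.2 ≈ 0.22 of full brightness, which is far too dark; the gamma-aware value of ≈ 0.73 is the one that actually shows up as half brightness.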
Here’s an example of what might occur:
The wrong way:
- Take this image as a start:
- Simply scale it down to 50%:
Obviously, this might not be what you intended.
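For reference, this is what the naive downscale does in code. A sketch with Pillow and NumPy (library choice and file names are my own, not from the original post): it simply averages 2×2 blocks of the gamma-encoded values, which is exactly the mistake in question.

```python
import numpy as np
from PIL import Image

# The "wrong way": average 2x2 blocks of the gamma-encoded values directly.
img = np.asarray(Image.open("in.png").convert("RGB"), dtype=np.float64)
h, w, c = img.shape
small = img[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))
Image.fromarray((small + 0.5).astype(np.uint8)).save("out_naive.png")
```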
The right way:
- Start again from the original image:
- Set gamma to 0.45 (≈ 1/2.2); in GIMP, go to Colors→Levels...→Input Levels and set the middle value (1.00) to 0.45:
- Scale this image down to 50%:
- And finally set gamma to 2.2:
The visual appearance of the resulting image is clearly closer to that of the original.
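The gamma-correct version wraps the same 2×2 block average between the two gamma steps. Again a sketch, approximating sRGB with a plain gamma of 2.2 and using placeholder file names:

```python
import numpy as np
from PIL import Image

GAMMA = 2.2

img = np.asarray(Image.open("in.png").convert("RGB"), dtype=np.float64) / 255.0

linear = img ** GAMMA            # step 1: the gamma-0.45 move (decode to linear light)
h, w, c = linear.shape
# step 2: scale down to 50% by averaging 2x2 blocks, now in linear light
small = linear[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))
encoded = small ** (1 / GAMMA)   # step 3: the gamma-2.2 move (re-encode)

Image.fromarray((encoded * 255 + 0.5).astype(np.uint8)).save("out.png")
```

The only change from the naive version is that the averaging happens on the linearized values.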
On the command line, you could do it this way with ImageMagick:
$ convert in.png -depth 16 -gamma 0.4545 -scale 50% -gamma 2.2 -depth 8 out.png
The intermediate -depth 16 matters: 8 bits are not enough to hold the linearized values without rounding losses, which can show up as banding in the result.
Look at the page linked at the beginning of this post to find more details on this topic and solutions for other software.