Eric Brasseur explained a “bug” in the scaling algorithm of current image processing software. Technically and mathematically, it is not really a bug: calculating the numerical average of a pixel’s surroundings as the new color value is a perfectly valid way to scale an image down, if the image is treated as a plain data matrix. Visually, however, the result is not what you’d expect.
Technically speaking, the problem is that “the computations are performed as if the scale of brightnesses was linear while in fact it is an exponential scale.” In mathematical terms: “a gamma of 1.0 is assumed while it is 2.2.”
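To see what that means in numbers, here is a minimal Python sketch (not from Brasseur’s article) comparing the average of two pixels taken in gamma-encoded space with the average taken in linear light. The `decode`/`encode` helpers are made up for illustration and assume a plain power-law gamma of 2.2 rather than the exact sRGB curve:

```python
def decode(value):
    """Convert an 8-bit gamma-encoded value to linear light (0.0-1.0)."""
    return (value / 255.0) ** 2.2

def encode(linear):
    """Convert linear light (0.0-1.0) back to an 8-bit gamma-encoded value."""
    return round(255.0 * linear ** (1.0 / 2.2))

# Averaging a black (0) and a white (255) pixel:
wrong = round((0 + 255) / 2)                   # 128 -> displays at roughly 22 % luminance
right = encode((decode(0) + decode(255)) / 2)  # 186 -> displays at 50 % luminance

print(wrong, right)  # 128 186
```

Averaging the encoded values gives 128, which looks far darker than the 50 % gray you would expect from mixing equal amounts of black and white; averaging in linear light and re-encoding gives 186, which does look like 50 % gray.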
Here’s an example of what might occur:
The wrong way:
- Take this image as a start:
- Simply scale it down to 50%:
Obviously, this might not be what you intended.
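For illustration, here is a rough sketch of what the naive scaler effectively does when halving a grayscale image: it averages each 2×2 block of gamma-encoded values as if they were linear. The `halve_naive` helper and the checkerboard data are made up for this example; real software does the same per RGB channel.

```python
def halve_naive(pixels):
    """Downscale to 50 %, averaging encoded values as if they were linear."""
    out = []
    for y in range(0, len(pixels) - 1, 2):
        row = []
        for x in range(0, len(pixels[y]) - 1, 2):
            block = (pixels[y][x] + pixels[y][x + 1]
                     + pixels[y + 1][x] + pixels[y + 1][x + 1])
            row.append(round(block / 4))
        out.append(row)
    return out

# A fine black-and-white checkerboard, which displays as 50 % gray...
checkerboard = [[0, 255, 0, 255],
                [255, 0, 255, 0],
                [0, 255, 0, 255],
                [255, 0, 255, 0]]

# ...collapses to 128, which displays at only about 22 % luminance.
print(halve_naive(checkerboard))  # [[128, 128], [128, 128]]
```

This is exactly the effect Brasseur demonstrates: fine high-contrast detail comes out noticeably too dark after scaling.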
The right way: