Advantages:
- The method ties supersampling directly to visibility: extra samples are only taken where a visible colour change appears in the picture.
- The number of samples per pixel usually stays at a moderate level.
- Supersampling one pixel can also affect its (already traced) neighbours. An aliasing effect discovered only in the next line can therefore still correct the line above.
Disadvantages:
- The last advantage is limited: only the line just above and the pixel to the left of the current one can be changed. As a result, thin lines running from upper left to lower right come out better than others. This asymmetry is clearly visible in test scene 2.
- Objects smaller than a pixel are only caught by chance.
- The number of samples does not scale with the severity of the aliasing effect. Once supersampling is triggered, n^2 samples are always taken, no matter how large the colour difference is or how the supersamples turn out.
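The neighbour test of method 1 can be sketched roughly as follows. This is a minimal illustration under my own assumptions (greyscale image, a `supersample` callback standing in for the real n^2 resampling), not the author's actual code:

```python
# Hypothetical sketch of method 1: each pixel is first traced with a
# single sample; if its colour differs visibly from the pixel to the
# left or the one above, both pixels of that pair get supersampled.
# This is why an aliasing effect found in the next line can still
# correct the line above.

def refine_pass(image, threshold, supersample):
    # image: 2D list of greyscale values in [0, 1].
    # supersample(x, y): recomputes a pixel with n^2 samples and
    # returns the new value (placeholder for the real tracer).
    height, width = len(image), len(image[0])
    for y in range(height):
        for x in range(width):
            for nx, ny in ((x - 1, y), (x, y - 1)):  # left and upper
                if nx < 0 or ny < 0:
                    continue
                if abs(image[y][x] - image[ny][nx]) > threshold:
                    image[y][x] = supersample(x, y)
                    # The already traced neighbour is corrected too.
                    image[ny][nx] = supersample(nx, ny)
    return image
```

Note that only the left and upper neighbours are inspected, which is exactly the source of the asymmetry described above.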
Advantages:
- The number of samples scales with the severity of the aliasing effect.
Disadvantages:
- Quickly leads to a high number of samples.
- Performs extremely badly on thin lines.
- Objects smaller than a pixel are only caught by chance.
- See methods 3-5 for further possible improvements.
All my new methods are based on the second, adaptive method: they all take samples at the pixel's corners first and then decide what to do. They are therefore primarily intended as replacements for, or improvements of, method 2. In the test scenes used, methods 3 and 4 always performed at least as well as method 2. Of course, a scene could be constructed that shows the opposite.
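The corner-first adaptive scheme that these methods share (essentially method 2) can be sketched as follows. This is a minimal illustration, assuming a toy `trace` function and a simple per-channel colour metric; both are my own stand-ins:

```python
# Hypothetical sketch of corner-first adaptive supersampling (method 2),
# which methods 3-5 build on: trace the four corners, and if they
# differ by more than the threshold, subdivide depth-first.

def trace(x, y):
    # Stand-in for the real ray tracer: black left of x = 0.5.
    return (0.0, 0.0, 0.0) if x < 0.5 else (1.0, 1.0, 1.0)

def colour_diff(a, b):
    # Largest per-channel difference between two colours.
    return max(abs(ca - cb) for ca, cb in zip(a, b))

def sample_adaptive(x0, y0, x1, y1, threshold, depth, max_depth):
    # Sample the four corners of the (sub)pixel first.
    corners = [trace(x0, y0), trace(x1, y0), trace(x0, y1), trace(x1, y1)]
    diff = max(colour_diff(a, b) for a in corners for b in corners)
    if diff <= threshold or depth >= max_depth:
        # Colours are close enough (or depth exhausted): average them.
        return tuple(sum(c) / 4 for c in zip(*corners))
    # Otherwise subdivide into four quadrants and recurse depth-first.
    xm, ym = (x0 + x1) / 2, (y0 + y1) / 2
    quads = [sample_adaptive(x0, y0, xm, ym, threshold, depth + 1, max_depth),
             sample_adaptive(xm, y0, x1, ym, threshold, depth + 1, max_depth),
             sample_adaptive(x0, ym, xm, y1, threshold, depth + 1, max_depth),
             sample_adaptive(xm, ym, x1, y1, threshold, depth + 1, max_depth)]
    return tuple(sum(c) / 4 for c in zip(*quads))
```

Note the depth-first recursion: once subdivision starts, every mixed quadrant is refined all the way down before the next one is examined, which is exactly what method 3 changes.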
Method 3 proceeds differently. It does not go depth-first but processes one depth level after the other. In this example it first traces the 9 points of depth 1. It then sees that the lower part will not change any more, and because of this it multiplies the threshold value by a factor of 2. Now it checks again whether the colour difference in the upper part exceeds the threshold; if not, it stops. The idea behind this is: only continue taking additional samples if the still-changing area can shift the pixel's colour by more than the original threshold. Method 3 is thus sensitive to how far above the threshold the colour difference lies. In general, it multiplies the original threshold by the factor (pixel's area)/(area that can still change) before proceeding to the next depth.
This is the only difference between methods 2 and 3. The same strategy of increasing the threshold deeper in the recursion is also used in methods 4 and 5.
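The scaling rule can be written down directly. This is a small sketch under my own naming; the key point is that a region of area A whose colour differs by d can shift the whole pixel's colour by at most d * A / (pixel area), so refinement is only worthwhile while that shift can exceed the original threshold:

```python
# Hypothetical sketch of method 3's threshold scaling (helper name is
# my own). Before each breadth-first refinement pass, the original
# threshold is scaled by pixel_area / still-changing area.

def scaled_threshold(base_threshold, pixel_area, changing_area):
    # A region of area A with colour difference d shifts the pixel's
    # colour by d * A / pixel_area, so further refinement is required
    # only if d > base_threshold * pixel_area / A.
    return base_threshold * pixel_area / changing_area
```

In the example from the text, the lower half of the pixel has settled, so only half the area can still change and the effective threshold doubles.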
Let's have a look at how a thin line crosses the pixels:
The red grid marks pixel boundaries. The yellow points mark samples that
are taken by method 4 as described above. The pixels marked with 'g' get
correctly supersampled, but the ones marked with 'w' turn out completely white. Method 2
would fail in the same way - even though it takes more samples. How does this
happen? The 'w'-pixels come out completely white because the samples at the
corners all miss the line. But from the previous pixel we already know that
a black area is crossing the pixel border as we have supersampled that pixel.
We can use this information! That is what method 4 does. When it has to
check whether the colour difference between two points is greater than the
threshold, it considers not only the samples at the two points but
also all the samples that have already been taken on the
line between the two points.
In this way it catches thin lines similarly to method 1:
And like method 1 it does better with lines going from upper left to lower right.
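Method 4's edge test can be sketched like this. The helper names and the cache interface are my own assumptions; the point is only that cached samples on the shared border take part in the comparison:

```python
# Hypothetical sketch of method 4's edge test. Instead of comparing
# only the two corner samples, it also compares every sample already
# traced on the segment between them (left over from supersampling a
# neighbouring pixel), so a thin line that slips between the corners
# is still detected.

def max_colour_diff(samples):
    # Largest per-channel difference over all pairs of samples.
    return max(abs(ca - cb)
               for a in samples for b in samples
               for ca, cb in zip(a, b))

def edge_needs_supersampling(corner_a, corner_b, cached_edge_samples, threshold):
    # cached_edge_samples: colours already traced on this pixel border
    # while supersampling a neighbouring pixel.
    samples = [corner_a, corner_b] + list(cached_edge_samples)
    return max_colour_diff(samples) > threshold
```

With both corners white but a cached mid-edge sample that hit the black line, the test fires even though the corners alone would not, which is exactly how the 'w'-pixels above get caught.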