### Sampling and Reconstruction

Digital Image Synthesis
*Yung-Yu Chuang*

11/1/2007

*with slides by Pat Hanrahan, Torsten Moller and Brian Curless*

**Sampling theory**

• Sampling theory: the theory of taking discrete sample values (a grid of color pixels) from functions defined over continuous domains (incident radiance defined over the film plane) and then using those samples to reconstruct new functions that are similar to the original (reconstruction).

**Sampler**: selects sample points on the image plane
**Filter**: blends multiple samples together

**Aliasing**

• Reconstruction generates an approximation to the original function. The error is called aliasing.

*(figure: sample positions and sample values; the function is sampled, then reconstructed)*

**Sampling in computer graphics**

• Artifacts due to sampling - Aliasing

– Jaggies

– Moiré

– Flickering small objects

– Sparkling highlights

– Temporal strobing (such as the wagon-wheel effect)

• Preventing these artifacts - Antialiasing

**Jaggies**

**Retort sequence by Don Mitchell**

**Staircase pattern or jaggies**

**Moiré pattern**

• Sampling the equation $\sin(x^2 + y^2)$
**Fourier transforms**

• Most functions can be decomposed into a weighted sum of shifted sinusoids.

• Each function has two representations

– Spatial domain - normal representation

– Frequency domain - spectral representation

• The Fourier transform converts between the spatial and frequency domains

**Spatial Domain** ↔ **Frequency Domain**

$$F(\omega) = \int_{-\infty}^{\infty} f(x)\, e^{-i\omega x}\, dx$$

$$f(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty} F(\omega)\, e^{i\omega x}\, d\omega$$

$$f(x) \leftrightarrow F(\omega)$$

**Fourier analysis**

*(figures: example functions in the spatial domain and their corresponding spectra in the frequency domain)*

**Convolution**

• *Definition*

• Convolution theorem: multiplication in the frequency domain is equivalent to convolution in the spatial domain.

• Symmetric theorem: multiplication in the spatial domain is equivalent to convolution in the frequency domain.

$$f \otimes g \leftrightarrow F \times G$$

$$f \times g \leftrightarrow F \otimes G$$

$$h(x) = f \otimes g = \int f(x')\, g(x - x')\, dx'$$

**1D convolution theorem example**

**2D convolution theorem example**

$$f(x,y) \ast h(x,y) = g(x,y) \;\;\Leftrightarrow\;\; F(s_x, s_y) \times H(s_x, s_y) = G(s_x, s_y)$$

**The delta function**

• Dirac delta function, zero width, infinite height and unit area

**Sifting and shifting**

**Shah/impulse train function**

• The shah (impulse train) function $\mathrm{III}_T(x) = \sum_{i} \delta(x - iT)$ in the spatial domain; its Fourier transform is also an impulse train, with period $1/T$ in the frequency domain.

**Sampling**

*band limited*

**Reconstruction**

*The reconstructed function is obtained by interpolating *
*among the samples in some manner*

**In math forms**

$$\tilde{F}(s) = \left( F(s) \ast \mathrm{III}(s) \right) \times \Pi(s)$$

$$\tilde{f}(x) = \left( f(x) \times \mathrm{III}(x) \right) \ast \mathrm{sinc}(x)$$

$$\tilde{f}(x) = \sum_{i=-\infty}^{\infty} \mathrm{sinc}(x - i)\, f(i)$$
**Reconstruction filters**

The sinc filter, while ideal, has two drawbacks:

• It has a large support (slow to compute)

• It introduces ringing in practice

The box filter is bad because its Fourier transform is a sinc function, which includes high-frequency contributions from the infinite series of spectrum copies.

**Aliasing**

*increase sample spacing in the spatial domain*

*⇒ decrease sample spacing in the frequency domain*

**Aliasing**

*high-frequency*
*details leak into*
*lower-frequency*
*regions*

**Sampling theorem**

• For band-limited functions, we can just increase the sampling rate (the Nyquist criterion, restated below).

• However, few interesting functions in computer graphics are band limited; in particular, functions with discontinuities are not.

• This is mostly because a discontinuity always falls between two samples, and the samples provide no information about it.
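As a reminder, a standard statement of the 1D sampling theorem (not spelled out on the slide):

```latex
% A function whose spectrum vanishes for |\omega| \ge \omega_0 can be
% reconstructed exactly from uniform samples taken at frequency \omega_s,
% provided the Nyquist criterion holds:
\omega_s > 2\,\omega_0
```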

**Aliasing**

• Prealiasing: due to sampling below the Nyquist rate

• Postaliasing: due to use of imperfect reconstruction filter

**Antialiasing**

• Antialiasing = Preventing aliasing

1. Analytically prefilter the signal

– Not solvable in general

2. Uniform supersampling and resampling

3. Nonuniform or stochastic sampling

**Antialiasing (Prefiltering)**

*It is blurred, but*
*better than aliasing*

**Uniform Supersampling**

• Increasing the sampling rate moves each copy of the spectrum further apart, potentially reducing the overlap and thus aliasing

• The resulting samples must then be resampled (filtered) down to the image sampling rate

**Samples → Pixel**

$$\mathrm{Pixel} = \sum_{s} w_s \cdot \mathrm{Sample}_s$$
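A minimal sketch of this resolve step (not pbrt code), assuming k×k uniform supersamples per pixel and simple box weights w_s = 1/k²; real renderers use the reconstruction filters discussed later:

```cpp
#include <vector>

// Resolve k*k supersamples per pixel into one pixel value using box
// weights w_s = 1/(k*k). 'super' is a row-major (w*k) x (h*k) image of
// sample values; 'out' receives the w x h filtered result.
void ResolveSupersamples(const std::vector<float> &super,
                         std::vector<float> &out, int w, int h, int k) {
    out.assign(w * h, 0.f);
    const float wSample = 1.f / (k * k);          // uniform weight per sample
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            float pixel = 0.f;
            for (int sy = 0; sy < k; ++sy)
                for (int sx = 0; sx < k; ++sx)
                    pixel += wSample *
                             super[(y * k + sy) * (w * k) + (x * k + sx)];
            out[y * w + x] = pixel;               // Pixel = sum_s w_s * Sample_s
        }
}
```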

**Point vs. Supersampled**

**Point** **4x4 Supersampled**

**Checkerboard sequence by Tom Duff**

**Analytic vs. Supersampled**

**Exact Area** **4x4 Supersampled**

**Non-uniform Sampling**

• Uniform sampling

– The spectrum of uniformly spaced samples is also a set of uniformly spaced spikes

– Multiplying the signal by the sampling pattern corresponds to placing a copy of the spectrum at each spike (in frequency space)

– Aliases are coherent (structured), and very noticeable

• Non-uniform sampling

– Samples at non-uniform locations have a different spectrum: a single spike plus noise

– Sampling a signal in this way converts aliases into broadband noise

– Noise is incoherent (structureless), and much less objectionable

• Aliases can't be removed, but can be made less noticeable.

**Antialiasing (nonuniform sampling)**

• The impulse train is modified as

$$\sum_{i=-\infty}^{\infty} \delta\!\left(x - \left(i + \tfrac{1}{2} - \xi\right)T\right)$$

where ξ is a uniform random number in [0, 1), drawn independently for each impulse.

• It turns regular aliasing into noise. But random noise is less distracting than coherent aliasing.

**Distribution of Extrafoveal Cones**

**Monkey eye **
**cone distribution**

**Fourier transform**

**Yellott theory**

• **Aliases replaced by noise**

• **Visual system less sensitive to high-frequency noise**

**Poisson disk noise (Yellott)**

• Blue noise

• Spectrum should be noisy and lack any concentrated spikes of energy (to avoid coherent aliasing)

• Spectrum should have deficiency of low- frequency energy (to hide aliasing in less noticeable high frequency)

**Jittered Sampling**

**Add uniform random jitter to each sample**

**Jittered vs. Uniform Supersampling**

**4x4 Jittered Sampling** **4x4 Uniform**

**Prefer noise over aliasing**

*reference* *aliasing* *noise*

**Antialiasing (adaptive sampling)**

• Take more samples only when necessary. However, in practice, it is hard to know where we need supersampling. Some heuristics can be used.

• It only produces a less aliased image, and may not be more efficient than simple supersampling, particularly for complex scenes.

**Application to ray tracing**

• Sources of aliasing: object boundary, small objects, textures and materials

• Good news: we can do sampling easily

• Bad news: we can’t do prefiltering (because we do not have the whole function)

• Key insight: we can never remove all aliasing, so we develop techniques to mitigate its impact on the quality of the final image.

**Under-sampling** **Aliasing**

**Stochastic sampling**

**pbrt** **sampling interface**

• Creating good sample patterns can substantially improve a ray tracer's efficiency, allowing it to create a high-quality image with fewer rays.

• Because evaluating radiance is costly, it pays to spend time generating better sample patterns.

**• core/sampling.*, samplers/***

**• random.cpp, stratified.cpp, bestcandidate.cpp, lowdiscrepancy.cpp**

**An ineffective sampler**

**A more effective sampler**

**Main rendering loop**

**void Scene::Render() {**

**Sample** ***sample = new Sample(surfaceIntegrator,**
**volumeIntegrator,**
**this);**

**...**

**while (sampler->GetNextSample(sample)) {**
**RayDifferential ray;**

**float rW = camera->GenerateRay(*sample, &ray);**

**<Generate ray differentials for camera ray>**

**float alpha;**

**Spectrum Ls = 0.f;**

**if (rW > 0.f)**

**Ls = rW * Li(ray, sample, &alpha);**

**...**

**camera->film->AddSample(*sample,ray,Ls,alpha);**

**...**

**}**
**...**

**camera->film->WriteImage();**

**}**

fill in eye ray info and other samples for integrator

**Sample**

**struct Sample {**

**Sample(SurfaceIntegrator *surf, **
**VolumeIntegrator *vol,**
**const Scene *scene);**

**...**

**float imageX, imageY;**

**float lensU, lensV;**

**float time;**

**// Integrator Sample Data**
**vector<u_int> n1D, n2D;**

**float **oneD, **twoD;**

**...**

**}**

store required information for one eye ray sample

Sample is allocated once in Render(). Sampler is called to fill in the information for each eye ray. The integrator can ask for multiple 1D and/or 2D samples, each with an arbitrary number of entries, e.g. depending on #lights.

For example, WhittedIntegrator does not need samples.

DirectLighting needs samples proportional to #lights.

Note that it stores all samples required for one eye ray; the number of samples may therefore depend on ray depth.

**Data structure **

*(figure: n1D and n2D record the number of entries requested for each 1D/2D integrator sample; oneD and twoD are arrays of pointers into a single contiguous mem block, e.g. lightSample and bsdfSample as 2D samples and bsdfComponent as 1D samples; everything is allocated together to avoid cache misses and filled in by the integrators)*

• Different types of lights require different numbers of samples, usually 2D samples.

• Sampling the BRDF requires 2D samples.

• Selection of BRDF components requires 1D samples.

**Sample**

**Sample::Sample(SurfaceIntegrator *surf,**

**VolumeIntegrator *vol, const Scene *scene) {**
**// calculate required number of samples **

**// according to integration strategy**
**surf->RequestSamples(this, scene);**

**vol->RequestSamples(this, scene);**

**// Allocate storage for sample pointers**
**int nPtrs = n1D.size() + n2D.size();**

**if (!nPtrs) {**

**oneD = twoD = NULL;**

**return;**

**}**

**oneD=(float **)AllocAligned(nPtrs*sizeof(float *));**

**twoD = oneD + n1D.size();**

**Sample**

**// Compute total number of sample values needed**
**int totSamples = 0;**

**for (u_int i = 0; i < n1D.size(); ++i)**
**totSamples += n1D[i];**

**for (u_int i = 0; i < n2D.size(); ++i)**
**totSamples += 2 * n2D[i];**

**// Allocate storage for sample values**

**float *mem = (float *)AllocAligned(totSamples ***
**sizeof(float));**

**for (u_int i = 0; i < n1D.size(); ++i) {**
**oneD[i] = mem;**

**mem += n1D[i];**

**}**

**for (u_int i = 0; i < n2D.size(); ++i) {**
**twoD[i] = mem;**

**mem += 2 * n2D[i];**

**}**
**}**

**DirectLighting::RequestSamples**

**void RequestSamples(Sample *sample, Scene *scene) {**
**if (strategy == SAMPLE_ALL_UNIFORM) {**

**u_int nLights = scene->lights.size();**

**lightSampleOffset = new int[nLights];**

**bsdfSampleOffset = new int[nLights];**

**bsdfComponentOffset = new int[nLights];**

**for (u_int i = 0; i < nLights; ++i) {**
**const Light *light = scene->lights[i];**

**int lightSamples = **

**scene->sampler->RoundSize(light->nSamples);**

**lightSampleOffset[i] = **

**sample->Add2D(lightSamples);**

**bsdfSampleOffset[i] = **

**sample->Add2D(lightSamples);**

**bsdfComponentOffset[i] = **
**sample->Add1D(lightSamples);**

**}**

**lightNumOffset = -1;**

**}**

**DirectLighting::RequestSamples**

**else {**

**// Allocate and request samples for sampling one **
**light**

**lightSampleOffset = new int[1];**

**lightSampleOffset[0] = sample->Add2D(1);**

**lightNumOffset = sample->Add1D(1);**

**bsdfSampleOffset = new int[1];**

**bsdfSampleOffset[0] = sample->Add2D(1);**

**bsdfComponentOffset = new int[1];**

**bsdfComponentOffset[0] = sample->Add1D(1);**

**}**
**}**

**PathIntegrator::RequestSamples**

**void PathIntegrator::RequestSamples(Sample *sample,**
**const Scene *scene) **

**{**

**for (int i = 0; i < SAMPLE_DEPTH; ++i) {**

**lightPositionOffset[i] = sample->Add2D(1);**

**lightNumOffset[i] = sample->Add1D(1);**

**bsdfDirectionOffset[i] = sample->Add2D(1);**

**bsdfComponentOffset[i] = sample->Add1D(1);**

**outgoingDirectionOffset[i] = sample->Add2D(1);**

**outgoingComponentOffset[i] = sample->Add1D(1);**

**}**
**}**

**Sampler **

**Sampler(int xstart, int xend, **

**int ystart, int yend, int spp);**

**bool** **GetNextSample(Sample *sample);**

**int** **TotalSamples()**

**samplesPerPixel ***

**(xPixelEnd - xPixelStart) ***
**(yPixelEnd - yPixelStart);**

spp: samples per pixel; (xstart, xend, ystart, yend): range of pixels

**Random sampler**

**RandomSampler::RandomSampler(…) {**
**...**

**// Get storage for a pixel's worth of stratified samples**

**imageSamples = (float *)AllocAligned(5 * **

**xPixelSamples * yPixelSamples * sizeof(float));**

**lensSamples = imageSamples +**

**2 * xPixelSamples * yPixelSamples;**

**timeSamples = lensSamples +**

**2 * xPixelSamples * yPixelSamples;**

**// prepare samples for the first pixel**

**for (i=0; i<5*xPixelSamples*yPixelSamples; ++i)**
**imageSamples[i] = RandomFloat();**

**// Shift image samples to pixel coordinates**

**for (o=0; o<2*xPixelSamples*yPixelSamples; o+=2) {**
**imageSamples[o] += xPos;**

**imageSamples[o+1] += yPos; }**
**samplePos** **= 0;**

**}**

Just for illustration; does not work well in practice

*private copy of the *
*current pixel position*

*#samples for current pixel*

**Random sampler**

**bool RandomSampler::GetNextSample(Sample *sample) {**
**if (samplePos** **== xPixelSamples * yPixelSamples) {**

**// Advance to next pixel for sampling**
**if (++xPos == xPixelEnd) {**

**xPos = xPixelStart;**

**++yPos; }**

**if (yPos == yPixelEnd)**
**return false;**

**for (i=0; i < 5*xPixelSamples*yPixelSamples; ++i)**
**imageSamples[i] = RandomFloat();**

**// Shift image samples to pixel coordinates**

**for (o=0; o<2*xPixelSamples*yPixelSamples; o+=2) **
**{ imageSamples[o] += xPos;**

**imageSamples[o+1] += yPos; }**
**samplePos = 0;**

**}**

*number of generated *
*samples in this pixel*

*generate all samples for one pixel at once*

**Random sampler**

**// Return next sample point according to samplePos**
**sample->imageX = imageSamples[2*samplePos];**

**sample->imageY = imageSamples[2*samplePos+1];**

**sample->lensU = lensSamples[2*samplePos];**

**sample->lensV = lensSamples[2*samplePos+1];**

**sample->time = timeSamples[samplePos];**

**// Generate samples for integrators**

**for (u_int i = 0; i < sample->n1D.size(); ++i)**
**for (u_int j = 0; j < sample->n1D[i]; ++j)**
**sample->oneD[i][j] = RandomFloat();**

**for (u_int i = 0; i < sample->n2D.size(); ++i)**
**for (u_int j = 0; j < 2*sample->n2D[i]; ++j)**

**sample->twoD[i][j] = RandomFloat();**

**++samplePos;**

**return true;**

**}**

**Random sampling**

*completely *
*random*
*a pixel*

**Stratified sampling**

• Subdivide the sampling domain into non-
*overlapping regions (strata) and take a single *
sample from each one so that it is less likely to
miss important features.

**Stratified sampling**

*completely *
*random*

*stratified *
*uniform*

*stratified *
*jittered*

turn aliasing into noise

**Comparison of sampling methods**

*256 samples per pixel as reference*

*1 sample per pixel (no jitter)*

**Comparison of sampling methods**

*1 sample per pixel (jittered)*

*4 samples per pixel (jittered)*

**Stratified sampling**

*reference* *random* *stratified *

*jittered*

**High dimension**

• In D dimensions, N strata per dimension means N^{D} cells.

• Solution: make the strata for each dimension separately and associate them randomly, while still ensuring good distributions.

**Stratified sampler**

**if (samplePos == xPixelSamples * yPixelSamples) {**
**// Advance to next pixel for stratified sampling**
**...**

**// Generate stratified samples for (xPos, yPos)**
**StratifiedSample2D(imageSamples,**

**xPixelSamples, yPixelSamples, jitterSamples);**

**StratifiedSample2D(lensSamples,**

**xPixelSamples, yPixelSamples, jitterSamples);**

**StratifiedSample1D(timeSamples, **

**xPixelSamples*yPixelSamples, jitterSamples);**

**// Shift stratified samples to pixel coordinates**
**...**

**// Decorrelate sample dimensions**

**Shuffle(lensSamples,xPixelSamples*yPixelSamples,2);**

**Shuffle(timeSamples,xPixelSamples*yPixelSamples,1);**

**samplePos = 0;**

**}**

**Stratified sampling**

**void StratifiedSample1D(float *samp, int nSamples,**
**bool jitter) {**

**float invTot = 1.f / nSamples;**

**for (int i = 0; i < nSamples; ++i) {**

**float delta = jitter ? RandomFloat() : 0.5f;**

***samp++ = (i + delta) * invTot;**

**}**
**}**

**void StratifiedSample2D(float *samp, int nx, int ny,**
**bool jitter) {**

**float dx = 1.f / nx, dy = 1.f / ny;**

**for (int y = 0; y < ny; ++y)**
**for (int x = 0; x < nx; ++x) {**

**float jx = jitter ? RandomFloat() : 0.5f;**

**float jy = jitter ? RandomFloat() : 0.5f;**

***samp++ = (x + jx) * dx;**

***samp++ = (y + jy) * dy;**

**}**
**}**

StratifiedSample1D: n stratified samples within [0,1]

StratifiedSample2D: nx*ny stratified samples within [0,1]×[0,1]

**Shuffle**

**void Shuffle(float *samp, int count, int dims) {**
**for (int i = 0; i < count; ++i) {**

**u_int other = RandomUInt() % count;**

**for (int j = 0; j < dims; ++j)**

**swap(samp[dims*i + j], samp[dims*other + j]);**

**}**
**}**

d-dimensional vector swap

**Stratified sampler**

**// Return next _StratifiedSampler_ sample point**
**sample->imageX = imageSamples[2*samplePos];**

**sample->imageY = imageSamples[2*samplePos+1];**

**sample->lensU = lensSamples[2*samplePos];**

**sample->lensV = lensSamples[2*samplePos+1];**

**sample->time = timeSamples[samplePos];**

// what if integrator asks for 7 stratified 2D samples

**// Generate stratified samples for integrators**
**for (u_int i = 0; i < sample->n1D.size(); ++i)**

**LatinHypercube(sample->oneD[i], sample->n1D[i], 1);**

**for (u_int i = 0; i < sample->n2D.size(); ++i)**

**LatinHypercube(sample->twoD[i], sample->n2D[i], 2);**

**++samplePos;**

**return true;**

**Latin hypercube sampling**

• Integrators can request an arbitrary number n of samples.

• An n×1 or 1×n stratification does not give a good sampling pattern.

*(figure: a worst case for stratified sampling; LHS can prevent this from happening)*

**Latin Hypercube**

**void LatinHypercube(float *samples, **

**int nSamples, int nDim) **
**{**

**// Generate LHS samples along diagonal**
**float delta = 1.f / nSamples;**

**for (int i = 0; i < nSamples; ++i)**
**for (int j = 0; j < nDim; ++j)**

**samples[nDim*i+j] = (i+RandomFloat())*delta;**

**// Permute LHS samples in each dimension**
**for (int i = 0; i < nDim; ++i) {**

**for (int j = 0; j < nSamples; ++j) {**
**u_int other = RandomUInt() % nSamples;**

**swap(samples[nDim * j + i],**
**samples[nDim * other + i]);**

**}**
**}**
**}**

**note the difference from Shuffle: here each dimension is permuted independently**

**Stratified sampling**

*1 camera sample and 16 shadow samples per pixel*

*16 camera samples and each with 1 shadow sample per pixel*

**This is better because StratifiedSampler**
could generate a good LHS pattern for this case

**Low discrepancy sampling**

• A possible problem with stratified sampling

• Discrepancy can be used to evaluate the quality of patterns

**Low discrepancy sampling**

$$D_N(B, P) = \sup_{b \in B} \left| \frac{\#\{\,x_i \in b\,\}}{N} - V(b) \right|$$

where $P = \{x_1, \ldots, x_N\}$ is the set of N sample points, $B$ is a family of shapes, $\#\{x_i \in b\}/N$ is the volume of $b$ estimated by the fraction of samples inside it, $V(b)$ is the real volume, and the discrepancy is the maximal difference.

When B is the set of axis-aligned boxes with a corner at the origin, this is called the star discrepancy.

**1D discrepancy**

*Uniform is optimal! However, we have learned that irregular patterns are perceptually superior to uniform samples. Fortunately, in higher dimensions, the low-discrepancy patterns are less uniform and work reasonably well as sample patterns in practice.*

Next, we introduce methods specifically designed for generating low-discrepancy sampling patterns.

**Radical inverse**

• A positive number n can be expressed in a base b as $n = \sum_i d_i\, b^{\,i-1}$.

• A radical inverse function in base b converts a nonnegative integer n to a floating-point number in [0,1) by reflecting these digits about the radix point: $\Phi_b(n) = 0.d_1 d_2 d_3 \ldots$ (base b).

**inline double RadicalInverse(int n, int base) {**
**double val = 0;**

**double invBase = 1. / base, invBi = invBase;**

**while (n > 0) {**

**int d_i = (n % base);**

**val += d_i * invBi;**

**n /= base;**

**invBi *= invBase;**

**}**

**return val;**

**}**
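As a quick check of the routine above (an illustrative example, not from the slides): n = 11 = 1011₂ has binary digits 1, 1, 0, 1 from least to most significant, so RadicalInverse(11, 2) reflects them about the radix point to give 0.1101₂ = 1/2 + 1/4 + 1/16 = 0.8125.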

**van der Corput sequence**

• The simplest sequence

• Recursively split 1D line in half, sample centers

• Achieve minimal possible discrepancy
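In terms of the RadicalInverse routine above, a sketch of the sequence (the n-th point is just the base-2 radical inverse):

```cpp
// n-th van der Corput sample: base-2 radical inverse of n.
// Produces 0, 1/2, 1/4, 3/4, 1/8, 5/8, 3/8, 7/8, ...
inline double VanDerCorput(int n) {
    return RadicalInverse(n, 2);
}
```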

**High-dimensional sequence**

• Two well-known low-discrepancy sequences

– Halton

– Hammersley

• Use relatively prime numbers as bases for each dimension

• Achieve best possible discrepancy for N-D

• Can be used if N is not known in advance

• All prefixes of a sequence are well distributed, so as additional samples are added to the sequence, low discrepancy is maintained

**Halton sequence**

recursively split each dimension into p_d parts and sample the centers, where p_d is the prime base for dimension d (see the sketch below)
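A minimal sketch of generating the i-th Halton point, assuming the RadicalInverse routine above and a small table of prime bases:

```cpp
// i-th Halton point in nDim dimensions: each dimension uses the radical
// inverse of i in a different prime base (2, 3, 5, ...).
void HaltonPoint(int i, int nDim, float *out) {
    static const int primes[] = { 2, 3, 5, 7, 11, 13 };  // enough for nDim <= 6
    for (int d = 0; d < nDim; ++d)
        out[d] = (float)RadicalInverse(i, primes[d]);
}
```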

**Hammersley sequence**

• Similar to Halton sequence.

• Slightly better discrepancy than Halton.

• Needs to know N in advance.
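A corresponding sketch for the Hammersley point set; the first coordinate is i/N, which is why the total number of points N must be known in advance:

```cpp
// i-th of N Hammersley points:
// (i/N, RadicalInverse(i,2), RadicalInverse(i,3), ...)
void HammersleyPoint(int i, int N, int nDim, float *out) {
    static const int primes[] = { 2, 3, 5, 7, 11 };
    out[0] = (float)i / (float)N;
    for (int d = 1; d < nDim; ++d)
        out[d] = (float)RadicalInverse(i, primes[d - 1]);
}
```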

**Folded radical inverse**

• Add the offset i to the i-th digit d_i and take the result modulo b (see the sketch below).

• It can be used to improve Hammersley and Halton, yielding the Hammersley-Zaremba and Halton-Zaremba variants.
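A sketch of the folded variant, modeled on pbrt's FoldedRadicalInverse (treat the details as an assumption): digit i gets the offset i added modulo b, and because that offset keeps contributing after the digits of n run out, the loop continues until further terms no longer change the result.

```cpp
// Folded (Zaremba) radical inverse: the i-th digit becomes (d_i + i) mod b.
inline double FoldedRadicalInverse(int n, int base) {
    double val = 0;
    double invBase = 1. / base, invBi = invBase;
    int modOffset = 0;
    // Keep going past the digits of n: the offset digits stay nonzero.
    while (val + base * invBi != val) {
        int digit = (n % base + modOffset) % base;
        val += digit * invBi;
        n /= base;
        invBi *= invBase;
        ++modOffset;
    }
    return val;
}
```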

**Radical inverse**

*Halton* *Hammersley*

*Better, in that there are fewer clumps.*

**Folded radical inverse**

*Halton* *Hammersley*

*The improvement is*
*more obvious*

**Low discrepancy sampling**

*stratified jittered, 1 sample/pixel*

*Hammersley sequence, 1 sample/pixel*

**Best candidate sampling **

• Stratified sampling doesn’t guarantee good sampling across pixels.

• The Poisson disk pattern addresses this issue. It is a group of points with no two of them closer to each other than some specified distance.

• It can be generated by dart throwing, but that is time-consuming.

• The best-candidate algorithm by Don Mitchell randomly generates many candidates but only inserts the one farthest from all previous samples.
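A minimal sketch of this idea in 2D (not pbrt's implementation, which also treats the unit square as a torus to make the pattern tileable): for each new sample, draw many random candidates and keep the one whose nearest existing sample is farthest away.

```cpp
#include <algorithm>
#include <cstdlib>
#include <vector>

struct Pt { float x, y; };

static float Rand01() { return rand() / (RAND_MAX + 1.0f); }

// Greedy best-candidate (farthest-point) sampling in [0,1)^2.
std::vector<Pt> BestCandidate(int nSamples, int nCandidates) {
    std::vector<Pt> pts;
    pts.push_back({Rand01(), Rand01()});
    while ((int)pts.size() < nSamples) {
        Pt best = {0.f, 0.f};
        float bestDist2 = -1.f;
        for (int c = 0; c < nCandidates; ++c) {
            Pt cand = {Rand01(), Rand01()};
            float minDist2 = 1e30f;       // squared distance to nearest sample
            for (const Pt &p : pts) {
                float dx = cand.x - p.x, dy = cand.y - p.y;
                minDist2 = std::min(minDist2, dx * dx + dy * dy);
            }
            if (minDist2 > bestDist2) { bestDist2 = minDist2; best = cand; }
        }
        pts.push_back(best);              // farthest from all previous samples
    }
    return pts;
}
```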

**Best candidate sampling**

*stratified jittered* *best candidate*
*It avoids holes and clusters.*

**Best candidate sampling**

• Because it is costly to generate the best-candidate pattern, pbrt computes a tileable pattern offline (by treating the square as a rolled torus).

**• tools/samplepat.cpp → sampler/sampledata.cpp**

**Best candidate sampling**

*stratified jittered, 1 sample/pixel*

*best candidate, 1 sample/pixel*

**Best candidate sampling**

*stratified jittered, 4 samples/pixel*

*best candidate, 4 samples/pixel*

**Comparisons**

*reference* *low-discrepancy* *best candidate*

**Reconstruction filters**

• Given the chosen image samples, we can do the following to compute pixel values:

1. reconstruct a continuous function L' from the samples

2. prefilter L' to remove frequencies above the Nyquist limit

3. sample L' at the pixel locations

• Because we will only sample L' at the pixel locations, we do not need to explicitly reconstruct it. Instead, we combine the first two steps.

**Reconstruction filters**

• Ideal reconstruction filters do not exist because of discontinuities in rendering. We chose nonuniform sampling, trading aliasing for noise. There is no theory of ideal reconstruction for nonuniform sampling yet.

• Instead, we consider an interpolation problem:

$$I(x, y) = \frac{\sum_i f(x - x_i,\, y - y_i)\; L(x_i, y_i)}{\sum_i f(x - x_i,\, y - y_i)}$$

where $I(x,y)$ is the final pixel value, $f$ is the filter, and $L(x_i, y_i)$ is the sampled radiance at sample position $(x_i, y_i)$.
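A minimal sketch of evaluating this formula at one pixel, assuming the image samples falling inside the filter's support have already been gathered; in pbrt the weight would come from Filter::Evaluate (introduced next):

```cpp
#include <vector>

struct ImageSample { float x, y, L; };   // sample position and radiance

// I(x,y) = sum_i f(x-x_i, y-y_i) * L_i / sum_i f(x-x_i, y-y_i)
// FilterFn is any callable float(float dx, float dy), e.g. a box or Gaussian.
template <typename FilterFn>
float ReconstructPixel(float x, float y,
                       const std::vector<ImageSample> &samples, FilterFn f) {
    float num = 0.f, den = 0.f;
    for (const ImageSample &s : samples) {
        float w = f(x - s.x, y - s.y);   // filter weight for this sample
        num += w * s.L;
        den += w;
    }
    return den > 0.f ? num / den : 0.f;
}
```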

**Filter**

• **Filter** provides an interface to the filter function f(x,y)

**• Film** stores a pointer to a filter and uses it to filter the output before writing it to disk.

**Filter::Filter(float xw, float yw)**

**Float Evaluate(float x, float y);**

**• filters/* (box, gaussian, mitchell, sinc, triangle)**

xw, yw: the filter width, half of the support

x, y are guaranteed to be within the filter's range, so range checking is not necessary

**Box filter**

• Most commonly used in graphics. It’s just about the worst filter possible, incurring postaliasing by high-frequency leakage.

**Float BoxFilter::Evaluate(float x, float y)**
**{**

**return 1.;**

**}**

no need to normalize since the weighted sum is divided by the total weight later.

**Triangle filter**

**Float TriangleFilter::Evaluate(float x, float y) **
**{**

**return max(0.f, xWidth-fabsf(x)) ***
**max(0.f, yWidth-fabsf(y));**

**}**

**Gaussian filter**

• Gives reasonably good results in practice

**Float GaussianFilter::Evaluate(float x, float y) **
**{**

**return Gaussian(x, expX)*Gaussian(y, expY);**

**}** Gaussian essentially has an infinite support; to compensate for this, the value at the end of the support is computed and subtracted.
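A sketch of that windowed Gaussian (the parameter names alpha, expX, expY follow the call above and are otherwise assumptions): the value of exp(-alpha x²) at the edge of the support is subtracted so the filter falls to exactly zero at |x| = width.

```cpp
#include <algorithm>
#include <cmath>

// 1D windowed Gaussian: exp(-alpha d^2) minus its value at the filter edge.
// expv should be precomputed at the edge, e.g.
// expX = expf(-alpha * xWidth * xWidth) and expY likewise.
inline float WindowedGaussian(float d, float expv, float alpha) {
    return std::max(0.f, std::exp(-alpha * d * d) - expv);
}
```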

**Mitchell filter**

• parametric filters, tradeoff between ringing and blurring

• Negative lobes improve sharpness; ringing starts to enter the image if they become large.

**Mitchell filter**

• Separable filter

• Two parameters, B and C, B+2C=1 suggested

FFT of a cubic filter. The Mitchell filter is a combination of cubic filters with C^0 and C^1 continuity.
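A sketch of the 1D Mitchell-Netravali cubic that such a separable filter multiplies along x and y (e.g. Mitchell1D(x/xWidth, B, C) * Mitchell1D(y/yWidth, B, C)); the coefficients below are the standard Mitchell-Netravali polynomials, with B = C = 1/3 a common choice satisfying B + 2C = 1.

```cpp
#include <cmath>

// 1D Mitchell-Netravali cubic. Input x is assumed normalized to [-1,1]
// (already divided by the filter width); it is remapped to [0,2],
// the natural support of the piecewise cubic.
float Mitchell1D(float x, float B, float C) {
    x = std::fabs(2.f * x);
    if (x > 1.f)
        return ((-B - 6 * C) * x * x * x + (6 * B + 30 * C) * x * x +
                (-12 * B - 48 * C) * x + (8 * B + 24 * C)) * (1.f / 6.f);
    else
        return ((12 - 9 * B - 6 * C) * x * x * x +
                (-18 + 12 * B + 6 * C) * x * x +
                (6 - 2 * B)) * (1.f / 6.f);
}
```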

**Windowed sinc filter**

*sinc and the Lanczos window*

$$w(x) = \frac{\sin(\pi x / \tau)}{\pi x / \tau}$$
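A sketch of one common form of this filter (a Lanczos-windowed sinc; the exact form here is an assumption): the ideal sinc is multiplied by the window w(x) above and clamped to zero outside |x| < τ.

```cpp
#include <cmath>

// sinc(x) windowed by the central lobe of sinc(x/tau) (the Lanczos window).
float LanczosSinc(float x, float tau) {
    const float Pi = 3.14159265358979f;
    x = std::fabs(x);
    if (x < 1e-5f) return 1.f;          // sinc(0) = 1
    if (x > tau)   return 0.f;          // outside the window support
    float lanczos = std::sin(Pi * x / tau) / (Pi * x / tau);  // window w(x)
    float sinc    = std::sin(Pi * x) / (Pi * x);              // ideal sinc
    return sinc * lanczos;
}
```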

**Comparisons**

*box*

*Mitchell*

**Comparisons**

*windowed sinc*

*Mitchell*

**Comparisons**

*box* *Gaussian* *Mitchell*