# Sampling theory


### Digital Image Synthesis

Yung-Yu Chuang

with slides by Pat Hanrahan, Torsten Möller and Brian Curless


### Fourier transform

• The Fourier transform converts between the spatial domain and the frequency domain.

(Figure: a signal shown in the spatial domain and its transform in the frequency domain.)


### Reconstruction filters

The sinc filter, while ideal, has two drawbacks:

• It has large support (it is slow to compute)
• It introduces ringing in practice

The box filter is bad because its Fourier transform is a sinc filter, which includes high-frequency contributions from the infinite series of other copies of the spectrum.


### Uniform sampling

– The spectrum of uniformly spaced samples is also a set of uniformly spaced spikes.
– Multiplying the signal by the sampling pattern corresponds to placing a copy of the spectrum at each spike (in frequency space).
– Aliases are coherent (structured), and very noticeable.

### Non-uniform sampling

– Samples at non-uniform locations have a different spectrum: a single spike plus noise.
– Sampling a signal in this way converts aliases into broadband noise.
– Noise is incoherent (structureless), and much less objectionable.


### Distribution of extrafoveal cones

(Figure: the cone distribution of a monkey eye, and the Fourier transform of the cone distribution.)


### Main rendering loop for each task

```cpp
void SamplerRenderer::Run() {
    ...
    // Allocate space for samples and intersections
    int maxSamples = sampler->MaximumSampleCount();
    Sample *samples = origSample->Duplicate(maxSamples);
    RayDifferential *rays = new RayDifferential[maxSamples];
    Spectrum *Ls = new Spectrum[maxSamples];
    Spectrum *Ts = new Spectrum[maxSamples];
    Intersection *isects = new Intersection[maxSamples];
    ...
```

### Main rendering loop for each task

```cpp
    while ((sCnt = sampler->GetMoreSamples(samples, rng)) > 0) {
        for (int i = 0; i < sCnt; ++i) {
            ...
            float rayWeight = camera->GenerateRayDifferential(
                samples[i], &rays[i]);
            ...
            if (rayWeight > 0.f)
                Ls[i] = rayWeight * renderer->Li(scene, rays[i],
                    &samples[i], rng, arena, &isects[i], &Ts[i]);
            ...
        } // end for
        if (sampler->ReportResults(samples, rays, Ls, …))
            for (int i = 0; i < sCnt; ++i) {
                ...
            }
    } // end while
    camera->film->UpdateDisplay(
        sampler->xPixelStart, sampler->yPixelStart,
        sampler->xPixelEnd+1, sampler->yPixelEnd+1);
```

### Sampler

```cpp
void Sampler::ComputeSubWindow(int num, int count,
        int *XStart, int *XEnd, int *YStart, int *YEnd) {
    int dx = xPixelEnd - xPixelStart, dy = yPixelEnd - yPixelStart;
    int nx = count, ny = 1;
    while ((nx & 0x1) == 0 && 2 * dx * ny < dy * nx) {
        nx >>= 1; ny <<= 1;
    }
    int xo = num % nx, yo = num / nx;
    float tx0 = float(xo) / float(nx), tx1 = float(xo+1) / float(nx);
    float ty0 = float(yo) / float(ny), ty1 = float(yo+1) / float(ny);
    *XStart = Floor2Int(Lerp(tx0, xPixelStart, xPixelEnd));
    *XEnd   = Floor2Int(Lerp(tx1, xPixelStart, xPixelEnd));
    *YStart = Floor2Int(Lerp(ty0, yPixelStart, yPixelEnd));
    *YEnd   = Floor2Int(Lerp(ty1, yPixelStart, yPixelEnd));
}
```
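This sub-window logic can be exercised outside pbrt. The sketch below reimplements the same splitting as a standalone function; the names `computeSubWindow` and `lerpf`, and the 64x64 window in the usage note, are illustrative, not pbrt's:

```cpp
#include <cassert>
#include <cmath>

// Illustrative standalone version of pbrt's Sampler::ComputeSubWindow:
// split the pixel window [x0,x1)x[y0,y1) into `count` tiles and
// return the extent of tile `num`.
static float lerpf(float t, float a, float b) { return a + t * (b - a); }

void computeSubWindow(int num, int count,
                      int x0, int x1, int y0, int y1,
                      int *XStart, int *XEnd, int *YStart, int *YEnd) {
    int dx = x1 - x0, dy = y1 - y0;
    // Start with `count` columns and trade columns for rows while
    // the tiles are more than twice as tall as they are wide.
    int nx = count, ny = 1;
    while ((nx & 0x1) == 0 && 2 * dx * ny < dy * nx) {
        nx >>= 1; ny <<= 1;
    }
    int xo = num % nx, yo = num / nx;
    *XStart = (int)std::floor(lerpf(float(xo) / nx,     x0, x1));
    *XEnd   = (int)std::floor(lerpf(float(xo + 1) / nx, x0, x1));
    *YStart = (int)std::floor(lerpf(float(yo) / ny,     y0, y1));
    *YEnd   = (int)std::floor(lerpf(float(yo + 1) / ny, y0, y1));
}
```

Splitting a 64x64 window among 4 tasks yields a 2x2 grid of 32x32 tiles: the loop halves nx and doubles ny until the tiles stop being much taller than they are wide.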


### Sample

```cpp
// stores the information required for generating a camera ray
struct CameraSample {
    float imageX, imageY;
    float lensU, lensV;
    float time;
};

// stores all samples required for one eye ray;
// that is, its size may depend on the ray depth
struct Sample : public CameraSample {
    Sample(Sampler *sampler, SurfaceIntegrator *surf,
           VolumeIntegrator *vol, const Scene *scene);
    uint32_t Add1D(uint32_t num);

    // Sample Public Data
    vector<uint32_t> n1D, n2D;
    float **oneD, **twoD;
};
```


### Sample

• Selection of BRDF components requires 1D samples.

(Figure: layout of the sample memory. The n1D/n2D arrays record the 1D and 2D sample counts requested by the integrators; the oneD/twoD arrays are allocated together to avoid cache misses and are filled in by the integrators. PathIntegrator records its lightNumOffset, lightSampleOffset and bsdfSampleOffset into this layout.)

### Sample

```cpp
Sample::Sample(Sampler *sampler, SurfaceIntegrator *surf,
               VolumeIntegrator *vol, const Scene *scene) {
    if (surf) surf->RequestSamples(sampler, this, scene);
    if (vol)  vol->RequestSamples(sampler, this, scene);
    AllocateSampleMemory();
}

void Sample::AllocateSampleMemory() {
    int nPtrs = n1D.size() + n2D.size();
    if (!nPtrs) {
        oneD = twoD = NULL;
        return;
    }
    // allocate the pointer arrays together
    oneD = AllocAligned<float *>(nPtrs);
    twoD = oneD + n1D.size();
    // count the total number of floats needed ...
    int totSamples = 0;
    for (uint32_t i = 0; i < n1D.size(); ++i)
        totSamples += n1D[i];
    for (uint32_t i = 0; i < n2D.size(); ++i)
        totSamples += 2 * n2D[i];
    // ... and carve one allocation into per-pattern sub-arrays
    float *mem = AllocAligned<float>(totSamples);
    for (uint32_t i = 0; i < n1D.size(); ++i) {
        oneD[i] = mem; mem += n1D[i];
    }
    for (uint32_t i = 0; i < n2D.size(); ++i) {
        twoD[i] = mem; mem += 2 * n2D[i];
    }
}
```
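The single-allocation pattern here (one block carved into per-pattern sub-arrays, so all sample values sit adjacent in memory) can be sketched on its own. The names `SampleMemory` and `carveSampleMemory` below are illustrative, not pbrt's:

```cpp
#include <cassert>
#include <vector>

// One contiguous float buffer carved into per-pattern sub-arrays,
// mirroring Sample::AllocateSampleMemory: n1D[i] floats for each 1D
// pattern, 2*n2D[i] floats for each 2D pattern. A single allocation
// keeps all sample values adjacent in memory (fewer cache misses).
struct SampleMemory {
    std::vector<float> mem;           // the single backing allocation
    std::vector<float*> oneD, twoD;   // views into `mem`
};

SampleMemory carveSampleMemory(const std::vector<int> &n1D,
                               const std::vector<int> &n2D) {
    SampleMemory s;
    int tot = 0;
    for (int n : n1D) tot += n;
    for (int n : n2D) tot += 2 * n;
    s.mem.resize(tot);
    float *p = s.mem.data();
    for (int n : n1D) { s.oneD.push_back(p); p += n; }
    for (int n : n2D) { s.twoD.push_back(p); p += 2 * n; }
    return s;
}
```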


### PathIntegrator::RequestSamples

```cpp
void PathIntegrator::RequestSamples(Sampler *sampler,
        Sample *sample, const Scene *scene) {
    for (int i = 0; i < SAMPLE_DEPTH; ++i) {
        lightSampleOffsets[i] = LightSampleOffsets(1, sample);
        bsdfSampleOffsets[i]  = BSDFSampleOffsets(1, sample);
        pathSampleOffsets[i]  = BSDFSampleOffsets(1, sample);
    }
}
```

### LightSampleOffsets

```cpp
struct LightSampleOffsets {
    LightSampleOffsets(int count, Sample *sample);
    int nSamples, componentOffset, posOffset;
};

LightSampleOffsets::LightSampleOffsets(int count,
        Sample *sample) {
    nSamples = count;
    // one 1D sample to select the light component and one 2D
    // sample for the position on the light, per light sample
    componentOffset = sample->Add1D(nSamples);
    posOffset = sample->Add2D(nSamples);
}
```

### DirectLighting::RequestSamples

```cpp
void DirectLightingIntegrator::RequestSamples(
        Sampler *sampler, Sample *sample, const Scene *scene) {
    if (strategy == SAMPLE_ALL_UNIFORM) {
        uint32_t nLights = scene->lights.size();
        lightSampleOffsets = new LightSampleOffsets[nLights];
        bsdfSampleOffsets = new BSDFSampleOffsets[nLights];
        for (uint32_t i = 0; i < nLights; ++i) {
            const Light *light = scene->lights[i];
            int nSamples = light->nSamples;
            if (sampler)
                nSamples = sampler->RoundSize(nSamples);
            lightSampleOffsets[i]
                = LightSampleOffsets(nSamples, sample);
            bsdfSampleOffsets[i]
                = BSDFSampleOffsets(nSamples, sample);
        }
        lightNumOffset = -1;
    }
```

### DirectLighting::RequestSamples

else {

lightSampleOffsets = new LightSampleOffsets[1];

li htS l Off t [0]

lightSampleOffsets[0]

= LightSampleOffsets(1, sample);

b dfS l Off t BSDFS l Off t [1]

bsdfSampleOffsets = new BSDFSampleOffsets[1];

bsdfSampleOffsets[0] = BSDFSampleOffsets(1, sample);

} } }

(64)

### Random sampler

Just for illustration; it does not work well in practice.

```cpp
RandomSampler::RandomSampler(int xstart, int xend,
        int ystart, int yend, int ns,
        float sopen, float sclose) {
    // private copy of the current pixel position
    xPos = xPixelStart;
    yPos = yPixelStart;
    nSamples = ns;
    imageSamples = AllocAligned<float>(5 * nSamples);
    lensSamples = imageSamples + 2 * nSamples;
    timeSamples = lensSamples + 2 * nSamples;
    // prepare samples for the first pixel
    RNG rng(xstart + ystart * (xend - xstart));
    for (int i = 0; i < 5 * nSamples; ++i)
        imageSamples[i] = rng.RandomFloat();
    // shift the image samples to the current pixel
    for (int o = 0; o < 2 * nSamples; o += 2) {
        imageSamples[o]   += xPos;
        imageSamples[o+1] += yPos;
    }
    // number of samples consumed for the current pixel
    samplePos = 0;
}
```

### Random sampler

```cpp
Sampler *RandomSampler::GetSubSampler(int num, int count) {
    int x0, x1, y0, y1;
    ComputeSubWindow(num, count, &x0, &x1, &y0, &y1);
    if (x0 == x1 || y0 == y1) return NULL;
    return new RandomSampler(x0, x1, y0, y1, nSamples,
                             shutterOpen, shutterClose);
}
```

### Random sampler

```cpp
int RandomSampler::GetMoreSamples(Sample *sample, RNG &rng) {
    if (samplePos == nSamples) {
        // advance to the next pixel and generate all of its
        // samples at once
        if (xPixelStart == xPixelEnd || yPixelStart == yPixelEnd)
            return 0;
        if (++xPos == xPixelEnd) { xPos = xPixelStart; ++yPos; }
        if (yPos == yPixelEnd) return 0;
        for (int i = 0; i < 5 * nSamples; ++i)
            imageSamples[i] = rng.RandomFloat();
        for (int o = 0; o < 2 * nSamples; o += 2) {
            imageSamples[o] += xPos; imageSamples[o+1] += yPos;
        }
        samplePos = 0;
    }
    sample->imageX = imageSamples[2*samplePos];
    sample->imageY = imageSamples[2*samplePos+1];
    sample->lensU = lensSamples[2*samplePos];
    sample->lensV = lensSamples[2*samplePos+1];
    sample->time = Lerp(timeSamples[samplePos],
                        shutterOpen, shutterClose);
    for (uint32_t i = 0; i < sample->n1D.size(); ++i)
        for (uint32_t j = 0; j < sample->n1D[i]; ++j)
            sample->oneD[i][j] = rng.RandomFloat();
    for (uint32_t i = 0; i < sample->n2D.size(); ++i)
        for (uint32_t j = 0; j < 2*sample->n2D[i]; ++j)
            sample->twoD[i][j] = rng.RandomFloat();
    ++samplePos;
    return 1;
}
```


### StratifiedSampler::GetMoreSamples

```cpp
int StratifiedSampler::GetMoreSamples(Sample *samples, RNG &rng) {
    if (yPos == yPixelEnd) return 0;
    int nSamples = xPixelSamples * yPixelSamples;
    // Generate initial stratified samples
    float *bufp = sampleBuf;
    float *imageSamples = bufp; bufp += 2 * nSamples;
    float *lensSamples  = bufp; bufp += 2 * nSamples;
    float *timeSamples  = bufp;
    StratifiedSample2D(imageSamples, xPixelSamples,
                       yPixelSamples, rng, jitterSamples);
    StratifiedSample2D(lensSamples, xPixelSamples,
                       yPixelSamples, rng, jitterSamples);
    StratifiedSample1D(timeSamples,
                       xPixelSamples * yPixelSamples,
                       rng, jitterSamples);
    // shift the image samples to the current pixel
    for (int o = 0; o < 2*xPixelSamples*yPixelSamples; o += 2) {
        imageSamples[o] += xPos; imageSamples[o+1] += yPos;
    }
    // shuffle so lens and time values are not correlated with
    // image position
    Shuffle(lensSamples, xPixelSamples*yPixelSamples, 2, rng);
    Shuffle(timeSamples, xPixelSamples*yPixelSamples, 1, rng);
    for (int i = 0; i < nSamples; ++i) {
        samples[i].imageX = imageSamples[2*i];
        samples[i].imageY = imageSamples[2*i+1];
        samples[i].lensU = lensSamples[2*i];
        samples[i].lensV = lensSamples[2*i+1];
        samples[i].time = Lerp(timeSamples[i],
                               shutterOpen, shutterClose);
        for (uint32_t j = 0; j < samples[i].n1D.size(); ++j)
            LatinHypercube(samples[i].oneD[j],
                           samples[i].n1D[j], 1, rng);
        for (uint32_t j = 0; j < samples[i].n2D.size(); ++j)
            LatinHypercube(samples[i].twoD[j],
                           samples[i].n2D[j], 2, rng);
    }
    if (++xPos == xPixelEnd) { xPos = xPixelStart; ++yPos; }
    return nSamples;
}
```

### Stratified sampling

```cpp
// nSamples stratified samples within [0,1]
void StratifiedSample1D(float *samp, int nSamples,
                        RNG &rng, bool jitter) {
    float invTot = 1.f / nSamples;
    for (int i = 0; i < nSamples; ++i) {
        float delta = jitter ? rng.RandomFloat() : 0.5f;
        *samp++ = min((i + delta) * invTot, OneMinusEpsilon);
    }
}

// nx*ny stratified samples within [0,1]x[0,1]
void StratifiedSample2D(float *samp, int nx, int ny,
                        RNG &rng, bool jitter) {
    float dx = 1.f / nx, dy = 1.f / ny;
    for (int y = 0; y < ny; ++y)
        for (int x = 0; x < nx; ++x) {
            float jx = jitter ? rng.RandomFloat() : 0.5f;
            float jy = jitter ? rng.RandomFloat() : 0.5f;
            *samp++ = min((x + jx) * dx, OneMinusEpsilon);
            *samp++ = min((y + jy) * dy, OneMinusEpsilon);
        }
}
```
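A standalone version of the 1D routine makes the stratification invariant explicit: sample i always lies in stratum [i/n, (i+1)/n]. This sketch substitutes std::mt19937 for pbrt's RNG and returns a vector instead of writing through a pointer:

```cpp
#include <cassert>
#include <random>
#include <vector>

// n stratified samples in [0,1): one uniform draw per stratum
// (or the stratum center when jitter is off).
std::vector<float> stratifiedSample1D(int nSamples, std::mt19937 &rng,
                                      bool jitter) {
    std::uniform_real_distribution<float> u(0.f, 1.f);
    std::vector<float> samp(nSamples);
    float invTot = 1.f / nSamples;
    for (int i = 0; i < nSamples; ++i) {
        float delta = jitter ? u(rng) : 0.5f;
        samp[i] = (i + delta) * invTot;   // lies in stratum i
    }
    return samp;
}
```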


### Shuffle

```cpp
template <typename T>
void Shuffle(T *samp, int count, int dims, RNG &rng) {
    for (int i = 0; i < count; ++i) {
        u_int other = i + (rng.RandomUInt() % (count - i));
        for (int j = 0; j < dims; ++j)
            swap(samp[dims*i + j], samp[dims*other + j]);
    }
}
```
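Because the routine swaps dims-sized records as units, a shuffled 2D pattern keeps each (x, y) pair intact; only the order of the pairs changes. A standalone sketch (std::mt19937 standing in for pbrt's RNG, `shuffleRecords` an illustrative name):

```cpp
#include <cassert>
#include <random>
#include <utility>
#include <vector>

// Fisher-Yates shuffle of `count` records of `dims` floats each:
// whole records are swapped, so paired dimensions stay together.
void shuffleRecords(std::vector<float> &samp, int count, int dims,
                    std::mt19937 &rng) {
    for (int i = 0; i < count; ++i) {
        int other = i + (int)(rng() % (unsigned)(count - i));
        for (int j = 0; j < dims; ++j)
            std::swap(samp[dims*i + j], samp[dims*other + j]);
    }
}
```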


### Latin Hypercube

```cpp
void LatinHypercube(float *samples,
                    int nSamples, int nDim, RNG &rng) {
    // Generate LHS samples along diagonal
    float delta = 1.f / nSamples;
    for (int i = 0; i < nSamples; ++i)
        for (int j = 0; j < nDim; ++j)
            samples[nDim*i + j] =
                min((i + rng.RandomFloat()) * delta,
                    OneMinusEpsilon);
    // Permute LHS samples in each dimension. Note the difference
    // with Shuffle: here each dimension is permuted independently,
    // rather than moving whole nDim-sized records.
    for (int i = 0; i < nDim; ++i) {
        for (int j = 0; j < nSamples; ++j) {
            u_int other = j + (rng.RandomUInt() % (nSamples - j));
            swap(samples[nDim * j + i],
                 samples[nDim * other + i]);
        }
    }
}
```
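A standalone sketch shows the LHS property directly: after permuting each dimension independently, every 1D projection still has exactly one sample per stratum. std::mt19937 stands in for pbrt's RNG and `latinHypercube` is an illustrative name:

```cpp
#include <cassert>
#include <random>
#include <vector>

// Latin hypercube sampling: place nSamples points on the diagonal
// cells of an nSamples^nDim grid, then permute each dimension
// independently. Every 1D projection then has exactly one sample
// per stratum [i/n, (i+1)/n).
std::vector<float> latinHypercube(int nSamples, int nDim,
                                  std::mt19937 &rng) {
    std::uniform_real_distribution<float> u(0.f, 1.f);
    std::vector<float> s(nSamples * nDim);
    float delta = 1.f / nSamples;
    for (int i = 0; i < nSamples; ++i)
        for (int j = 0; j < nDim; ++j)
            s[nDim*i + j] = (i + u(rng)) * delta;
    // permute each dimension independently (unlike Shuffle,
    // which moves whole records)
    for (int d = 0; d < nDim; ++d)
        for (int j = 0; j < nSamples; ++j) {
            int other = j + (int)(rng() % (unsigned)(nSamples - j));
            std::swap(s[nDim*j + d], s[nDim*other + d]);
        }
    return s;
}
```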

This is better because StratifiedSampler can generate a good LHS pattern for this case.


### Radical inverse

```cpp
// converts a nonnegative integer n to a floating-point number
// in [0,1) by reflecting its base-`base` digits about the
// radix point
inline double RadicalInverse(int n, int base) {
    double val = 0;
    double invBase = 1. / base, invBi = invBase;
    while (n > 0) {
        int d_i = (n % base);
        val += d_i * invBi;
        n /= base;
        invBi *= invBase;
    }
    return val;
}
```
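For base 2 the function reflects the bits of n about the radix point, so successive values repeatedly bisect the unit interval: 1/2, 1/4, 3/4, 1/8, 5/8, …. As a usage sketch, pairing radical inverses in two prime bases gives the 2D Halton sequence; the helper `haltonPoint` below is an illustrative name, not a pbrt function:

```cpp
#include <cassert>
#include <utility>

// Base-b radical inverse: reflect the digits of n about the
// radix point, giving a low-discrepancy value in [0,1).
// e.g. base 2: n = 6 = 110_2  ->  0.011_2 = 0.375
double radicalInverse(int n, int base) {
    double val = 0;
    double invBase = 1.0 / base, invBi = invBase;
    while (n > 0) {
        val += (n % base) * invBi;
        n /= base;
        invBi *= invBase;
    }
    return val;
}

// n-th point of the 2D Halton sequence: radical inverses in the
// first two prime bases (2 and 3).
std::pair<double, double> haltonPoint(int n) {
    return { radicalInverse(n, 2), radicalInverse(n, 3) };
}
```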


### Main rendering loop for each task

```cpp
    while ((sCnt = sampler->GetMoreSamples(samples, rng)) > 0) {
        for (int i = 0; i < sCnt; ++i) {
            ...
            float rayWeight = camera->GenerateRayDifferential(
                samples[i], &rays[i]);
            ...
            if (rayWeight > 0.f)
                Ls[i] = rayWeight * renderer->Li(scene, rays[i],
                    &samples[i], rng, arena, &isects[i], &Ts[i]);
            ...
        } // end for
        if (sampler->ReportResults(samples, rays, Ls, …))
            for (int i = 0; i < sCnt; ++i) {
                ...
            }
    } // end while
    camera->film->UpdateDisplay(
        sampler->xPixelStart, sampler->yPixelStart,
        sampler->xPixelEnd+1, sampler->yPixelEnd+1);
```

### AdaptiveSampler

If the current pixel needs more samples, ReportResults returns false and the pixel is resampled at the higher rate; otherwise it returns true and (xPos, yPos) is advanced.

### AdaptiveSampler::GetMoreSamples

```cpp
    if (supersamplePixel) {
        LDPixelSample(xPos, yPos, shutterOpen, shutterClose,
                      maxSamples, samples, sampleBuf, rng);
        return maxSamples;
    } else {
        if (yPos == yPixelEnd) return 0;
        LDPixelSample(xPos, yPos, shutterOpen, shutterClose,
                      minSamples, samples, sampleBuf, rng);
        return minSamples;
    }
```

```cpp
bool AdaptiveSampler::ReportResults(…) {
    if (supersamplePixel) {
        // the pixel has been supersampled; accept the result
        // and move on to the next pixel
        supersamplePixel = false;
        if (++xPos == xPixelEnd) {
            xPos = xPixelStart; ++yPos;
        }
        return true;
    } else if (needsSupersampling(…)) {
        // reject the result; the same pixel will be sampled
        // again at the higher rate
        supersamplePixel = true;
        return false;
    } else {
        if (++xPos == xPixelEnd) {
            xPos = xPixelStart; ++yPos;
        }
        return true;
    }
}
```

### needsSupersampling

```cpp
bool AdaptiveSampler::needsSupersampling(
        Sample *samples, const RayDifferential *rays,
        const Spectrum *Ls, const Intersection *isects,
        int count) {
    switch (method) {
    case ADAPTIVE_COMPARE_SHAPE_ID:
        // supersample if the samples hit different objects
        for (int i = 0; i < count-1; ++i)
            if (isects[i].shapeId != isects[i+1].shapeId ||
                isects[i].primitiveId != isects[i+1].primitiveId)
                return true;
        return false;
    case ADAPTIVE_CONTRAST_THRESHOLD: {
        // supersample if any sample's luminance differs too much
        // from the average
        float Lavg = 0.f;
        for (int i = 0; i < count; ++i)
            Lavg += Ls[i].y();
        Lavg /= count;
        const float maxContrast = 0.5f;
        for (int i = 0; i < count; ++i)
            if (fabsf(Ls[i].y() - Lavg) / Lavg > maxContrast)
                return true;
        return false;
    }
    }
    return false;
}
```
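The contrast criterion reduces to a test over plain luminance values: supersample when any sample deviates from the pixel mean by more than maxContrast, relative to the mean. A standalone sketch (the free function below is illustrative, not pbrt's method):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Relative-contrast refinement test: true if any luminance
// deviates from the mean by more than maxContrast (as a
// fraction of the mean).
bool contrastNeedsSupersampling(const std::vector<float> &lum,
                                float maxContrast = 0.5f) {
    float avg = 0.f;
    for (float y : lum) avg += y;
    avg /= lum.size();
    for (float y : lum)
        if (std::fabs(y - avg) / avg > maxContrast)
            return true;
    return false;
}
```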



### Box filter

```cpp
float BoxFilter::Evaluate(float x, float y) {
    // no need to normalize since the weighted sum is
    // divided by the total weight later
    return 1.;
}
```

### Gaussian filter

The Gaussian essentially has infinite support; to compensate for this, its value at the end of the filter extent is computed and subtracted, so the filter falls to zero there.

### Mitchell filter

• Two parameters, B and C; B + 2C = 1 is suggested.

The Mitchell filter is a combination of cubic filters with C0 and C1 continuity.

(Figure: FFT of a cubic filter.)
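The Mitchell-Netravali cubic can be written out directly. The 1D sketch below defaults to B = C = 1/3 (pbrt evaluates the filter separably in x and y); for any B + 2C = 1 the integer translates of the filter sum to one, which the usage note checks. The name `mitchell1D` is illustrative:

```cpp
#include <cassert>
#include <cmath>

// 1D Mitchell-Netravali cubic with parameters B, C (B + 2C = 1
// suggested). Piecewise cubic over |x| < 2, C0 and C1 continuous.
float mitchell1D(float x, float B = 1.f/3.f, float C = 1.f/3.f) {
    x = std::fabs(x);
    if (x < 1.f)
        return ((12 - 9*B - 6*C) * x*x*x +
                (-18 + 12*B + 6*C) * x*x +
                (6 - 2*B)) * (1.f/6.f);
    if (x < 2.f)
        return ((-B - 6*C) * x*x*x +
                (6*B + 30*C) * x*x +
                (-12*B - 48*C) * x +
                (8*B + 24*C)) * (1.f/6.f);
    return 0.f;
}
```

With B = C = 1/3, the filter value at the origin is 16/18 and at x = 1 both cubic pieces meet at B/6 = 1/18, so f(0) + 2f(1) = 1.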


### Film

```cpp
class Film {
public:
    Film(int xres, int yres)
        : xResolution(xres), yResolution(yres) { }
    // add samples for later reconstruction by weighted average
    virtual void AddSample(const CameraSample &sample,
                           const Spectrum &L) = 0;
    // simply sum the samples' contributions without averaging;
    // pixels with more samples will be brighter. Used by light
    // transport methods such as MetropolisRenderer.
    virtual void Splat(const CameraSample &sample,
                       const Spectrum &L) = 0;
    // the sample extent can be a bit larger than the pixel extent
    virtual void GetSampleExtent(int *xstart, int *xend,
                                 int *ystart, int *yend) const = 0;
    virtual void GetPixelExtent(int *xstart, int *xend,
                                int *ystart, int *yend) const = 0;
    // called when a region has been recently updated;
    // does nothing by default
    virtual void UpdateDisplay(int x0, int y0, int x1, int y1,
                               float splatScale = 1.f);
    // generate the final image for saving to disk or displaying;
    // accepts a scale factor
    virtual void WriteImage(float splatScale = 1.f) = 0;

    const int xResolution, yResolution;
};
```

### ImageFilm

ImageFilm is the only Film plug-in in pbrt. It filters samples and writes the resulting image to disk.

```cpp
ImageFilm::ImageFilm(int xres, int yres, Filter *filt,
        float crop[4], string &filename, bool openWindow) {
    // crop is in NDC space; useful for debugging, or for
    // rendering parts on different computers and assembling
    // the results later
    // openWindow: on some systems, pbrt can open a window and
    // show the image as it is being rendered
    ...
    pixels = new BlockedArray<Pixel>(xPixelCount, yPixelCount);
    <precompute filter table>
}
```

### Filtering the samples

A pixel value is a filter-weighted average of the samples near it:

$$ I(x, y) = \frac{\sum_i f(x - x_i,\, y - y_i)\, L(x_i, y_i)}{\sum_i f(x - x_i,\, y - y_i)} $$
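Weighted-average reconstruction (each pixel is a filter-weighted average of nearby samples, normalized by the total filter weight) can be sketched for a single pixel; the box filter, the fixed radius, and the names below are illustrative stand-ins, not pbrt's ImageFilm code:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct FilmSample { float x, y, L; };  // sample position and radiance

// Weighted-average reconstruction of one pixel value:
//   I(x,y) = sum_i f(x-xi, y-yi) L(xi,yi) / sum_i f(x-xi, y-yi)
// Here f is a box filter of the given radius (illustrative choice).
float reconstructPixel(float px, float py,
                       const std::vector<FilmSample> &samples,
                       float radius) {
    float num = 0.f, den = 0.f;
    for (const FilmSample &s : samples) {
        float f = (std::fabs(s.x - px) <= radius &&
                   std::fabs(s.y - py) <= radius) ? 1.f : 0.f;
        num += f * s.L;   // weighted sum of radiance
        den += f;         // total filter weight
    }
    return den > 0.f ? num / den : 0.f;
}
```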
