(1)

Algebra of Programming in Agda

Dependent Types for Relational Program Derivation

Shin-Cheng Mu, Hsiang-Shang Ko, and Patrik Jansson

To appear in Journal of Functional Programming

(2)

Terms

Algebra of Programming: a semi-formal approach to deriving functional programs from relational specifications.

Agda: a dependently-typed functional programming language being actively developed.

Dependent types: a family of type systems capable of expressing various correctness properties.

(3)

In this introductory presentation

introductory: serving as an introduction to a subject or topic [...]; intended to persuade someone to purchase something for the first time

— Oxford American Dictionaries

I will (casually) introduce how squiggolists treat algorithm design, and report on an experiment that faithfully encodes AoP-style derivations in Agda.

By “faithfully” I mean that the correctness of a derivation is guaranteed by Agda’s type system, while a certain degree of readability is retained.

(4)

Inductive datatypes

The most natural datatypes in functional programming are inductive datatypes.

data N : Set where
  zero : N
  suc  : N → N

data [_] (A : Set) : Set where
  [ ]  : [ A ]
  _::_ : A → [ A ] → [ A ]

(5)

Pattern matching

Functions on inductive datatypes can be defined by pattern matching.

_+_ : N → N → N
zero    + n = n
(suc m) + n = suc (m + n)

sum : [ N ] → N
sum [ ]       = zero
sum (x :: xs) = x + sum xs
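As a further example in the same style (a sketch added here, not on the original slide; length reappears later when plans are compared by size):

length : {A : Set} → [ A ] → N     -- the number of elements in a list
length [ ]       = zero
length (x :: xs) = suc (length xs)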

(6)

Fold

The pattern of inductive definitions is captured in a higher-order function foldr .

foldr : {A B : Set} → (A → B → B) → B → [ A ] → B
foldr f e [ ]       = e
foldr f e (x :: xs) = f x (foldr f e xs)

Then, alternatively, sum can be defined as foldr _+_ zero.
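Another fold, added here as a sketch (not on the slide; _++_ is a name introduced for later illustration): list append, obtained by folding _::_ over the first list.

_++_ : {A : Set} → [ A ] → [ A ] → [ A ]    -- xs ++ ys appends ys after xs
xs ++ ys = foldr _::_ ys xs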

(7)

The Fold-Fusion Theorem

It is natural to state theorems about functional programs. One of the most useful theorems is the Fold-Fusion Theorem.

Theorem (Fold-Fusion)

Let f : A → B → B, e : B, g : A → C → C, and h : B → C. If

h (f x y) = g x (h y)    for all x and y,

then

h · foldr f e = foldr g (h e).
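A small instance (added here, not on the slide): take h = sum, f = _::_, e = ys and g = _+_. The fusion condition sum (x :: zs) = x + sum zs holds by the very definition of sum, so the theorem yields

sum · foldr _::_ ys = foldr _+_ (sum ys),

that is, sum (xs ++ ys) = foldr _+_ (sum ys) xs for the _++_ sketched above.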

(8)

Proof of the Fold-Fusion Theorem

The theorem can be inductively proved by simple equational reasoning.

Base case:

  h (foldr f e [ ])
=    { definition of foldr }
  h e
=    { definition of foldr }
  foldr g (h e) [ ].


(13)

Inductive case

  h (foldr f e (x :: xs))
=    { definition of foldr }
  h (f x (foldr f e xs))
=    { fusion condition }
  g x (h (foldr f e xs))
=    { induction hypothesis }
  g x (foldr g (h e) xs)
=    { definition of foldr }
  foldr g (h e) (x :: xs). ∎


(16)

Sketch of a derivation of radix sort

Gibbons, J. 1999. A Pointless Derivation of Radix Sort. J. Funct. Program. 9, 3 (May 1999), 339–346.

Imagine a datatype Composite consisting of a fixed number of Fields, where the type Field is finite and totally ordered.

The function

mktree : [ Composite → Field ] → [ Composite ] → Tree [ Composite ]

receives a significance order of fields and classifies the input list first by the most significant field; within each bucket it then classifies the elements by the second-most significant field, and so on.


(18)

Sketch of a derivation of radix sort

It is apparent that

treesort : [ Composite → Field ] → [ Composite ] → [ Composite ]
treesort ds = flatten · mktree ds

correctly sorts a list, where flatten : Tree A → [ A ] collects the elements stored in the leaves in order.

By writing mktree as a (higher-order) fold, we get

treesort = (flatten ·) · mktree.

The Fold-Fusion Theorem is now applicable and radix sort is immediately derived. Moreover, the fusion condition is a statement of stability.

(19)

Richard Bird

Unlike other formalisms, functional programming offers a unique opportunity to exploit a compositional approach to Algorithm Design, and to demonstrate the effectiveness of the mathematics of program construction in the presentation of many algorithms[.]

Functional Algorithm Design. Sci. Comput. Program. 26, 1–3 (May 1996), 15–31.

(20)

Relations

Relations can be regarded as potentially partial and non-deterministic functions. Then the usual deterministic, total functions become a special case.

A relation R mapping values of A to values of B is denoted by R : B ← A. We write “b R a” if R maps a to b.

For example, ≤ : N ← N maps a natural number to itself or a smaller natural number, and ∈ : A ← ℘ A maps a set of A’s to one of its elements.

(21)

Operators on relations

Relation composition:   c (R · S) a ≡ ∃ b. c R b ∧ b S a
Converse:               a R˘ b ≡ b R a
Power transpose:        Λ R a = { b | b R a }
Meet:                   b (R ∩ S) a ≡ b R a ∧ b S a
Join:                   b (R ∪ S) a ≡ b R a ∨ b S a
Product relator:        (b, d) (R × S) (a, c) ≡ b R a ∧ d S c
Left/right division:    Division introduces a form of universal quantification into the relational language. We will skip its definition and expand relations defined with division to pointwise formulas in this presentation.

(22)

Relational fold

Relational fold is a generalisation of functional fold, and the former can be defined in terms of the latter.

Given R : B ← A × B and s : ℘ B, define

foldR R s = ∈ · foldr (Λ(R · (id × ∈))) s,

where id is the identity function. Its type is B ← [ A ].

(23)

An operational explanation

foldR R s = ∈ · foldr (Λ(R · (id × ∈))) s

The set s records possible base cases.

Given a new element a : A and an inductively-computed set bs : ℘ B, id × ∈ chooses an element b from bs, and then R (a, b) is non-deterministically computed. All results that can be generated from such a process are collected by the Λ operator into a set.

Finally ∈ takes one result from the inductively-computed set.

Compare this procedure with the powerset (subset) construction that converts a non-deterministic finite automaton into a deterministic one.

(24)

Example: subsequence

The relation

subseq : [ A ] ← [ A ]

subseq = foldR (outr ∪ cons) nil

maps a list to one of its subsequences, where nil = {[ ]} and

outr : B ← A × B
outr (a, b) = b

cons : [ A ] ← A × [ A ]
cons (x , xs) = x :: xs.

(25)

Example: orderedness

The relation

ordered : [ A ] ← [ A ]

ordered = foldR (cons · lbound ) nil

maps a list to itself if it is increasingly sorted, where lbound maps a pair (x , xs) to itself if x is a lower bound of xs.

If a relation R satisfies R ⊆ id, it is called a coreflexive relation, or a coreflexive for short. Coreflexives are used in specifications to single out the values satisfying a particular property. One can see that both ordered and lbound are coreflexives.

(26)

The activity-selection problem

Suppose we have a set S = {a1, a2, . . . , an} of n proposed activities that wish to use a resource, such as a lecture hall, which can be used by only one activity at a time. Each activity ai has a start time si and a finish time fi, where 0 ≤ si< fi < ∞. If selected, activity ai takes place during the half-open time interval [si, fi). Activities ai and aj are compatible if the intervals [si, fi) and [sj, fj) do not overlap (i.e., ai and aj are compatible if si ≥ fj or sj≥ fi). The activity-selection problem is to select a maximum-size subset of mutually compatible activities.

Cormen, T. H., Leiserson, C. E., Rivest, R. L., and Stein, C. 2001Introduction to Algorithms. MIT Press.

If the activities are decreasingly sorted by finish time, there is a simple algorithm.

(27)

Maximum

Given an order R : A ← A, the relation max R : A ← ℘ A maps a set to one of its maximum elements with respect to R:

max R = ∈ ∩ (R/∋).

Expanding the point-free definition,

x (max R) s ≡ x ∈ s ∧ ∀y . y ∈ s ⇒ y R x .

(28)

Specification

We can give a relational specification for the activity-selection problem:

max ≤` · Λ(mutex · subseq).

Generate an arbitrary plan by subseq.
Ensure the plan is legitimate by the coreflexive mutex.
Collect all such plans by the Λ operator.
Choose one of the longest plans by max ≤`.

Any function contained in this relation is considered a correct implementation.

(29)

The Greedy Theorem

Theorem (Bird and de Moor)

Let R : A ← A be a preorder and S : A ← F A be monotonic on R, i.e., S · F R ⊆ R · S. Then

max R · Λ([ S ]) ⊇ ([ max R · ΛS ]).

Bird, R. and de Moor, O. 1997. Algebra of Programming. Prentice-Hall, Inc.

(30)

Specialisation to lists

Corollary

Let R : A ← A be a preorder, S : A ← B × A be monotonic on R, i.e., S · (id × R) ⊆ R · S, and s : ℘ A be a set. Then

max R · Λ(foldR S s) ⊇ foldR (max R · ΛS) (Λ(max R) s).

(31)

Fusing mutex · subseq into a fold

The coreflexive mutex is defined by

mutex = foldR (cons · compatible) nil,

where the coreflexive compatible lets a pair (x , xs) of type Activity × [ Activity ] go through if x does not overlap with any of the activities in xs.

By relational fold-fusion, mutex · subseq can be fused into foldR R nil if we can find a relation R such that

mutex · (outr ∪ cons) = R · (id × mutex).


(35)

Synthesis of R

We calculate:

  mutex · (outr ∪ cons)
=    { composition distributes over join }
  (mutex · outr) ∪ (mutex · cons)
=    { product relator; catamorphisms }
  (outr · (id × mutex)) ∪ (cons · compatible · (id × mutex))
=    { composition distributes over join }
  (outr ∪ (cons · compatible)) · (id × mutex).

So mutex · subseq = foldR (outr ∪ (cons · compatible)) nil.

(36)

Checking the monotonicity condition

Now we wish to apply the Greedy Theorem. That means we need to check the monotonicity condition

(outr ∪ (cons · compatible)) · (id × ≤`) ⊆ ≤` · (outr ∪ (cons · compatible)),

which says that for two partial plans xs and ys with length xs ≤ length ys, no matter which complete plan xs′ is computed from xs, there is always a complete plan ys′ computed from ys such that length xs′ ≤ length ys′.

This is not true, however.

(37)

Strengthening the order

A typical strategy is to refine the order ≤` to a stronger one. Since the problematic case happens when the two partial plans under consideration are of the same length, we define

E = <` ∪ (=` ∩ (post-compatible \ post-compatible)),

where post-compatible : Activity ← [ Activity ] maps a plan to a compatible activity that finishes later than all activities in the plan.

Now the monotonicity condition with respect to E holds and the Greedy Theorem gives us

foldR (max E · Λ(outr ∪ (cons · compatible))) nil.

(38)

Cleaning up

Take a closer look at

max E · Λ(outr ∪ (cons · compatible)).

Given a partial plan xs and an incoming activity x, the relation picks a best plan with respect to E from either the set {xs, x :: xs} or the set {xs}, depending on whether x is compatible with xs.

In the first case x :: xs should be chosen since it is strictly longer than xs. In the second case there is only one candidate.

Since x finishes later than xs, it suffices to check whether x starts later than xs.

(39)

The final program

Define

compatible-cons : Activity → [ Activity ] → [ Activity ]
compatible-cons x [ ]       = x :: [ ]
compatible-cons x (y :: xs) =
  if start x ≥ finish y then x :: y :: xs else y :: xs.

The final program is a simple fold:

foldr compatible-cons [ ].
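For completeness, a self-contained executable rendering of this fold (a sketch, not from the slides: Bool, if_then_else_, _≤?_, the Activity record and act-sel are all introduced here for illustration, and the test start x ≥ finish y is expressed as finish y ≤? start x):

data Bool : Set where
  true false : Bool

if_then_else_ : {A : Set} → Bool → A → A → A
if true  then t else e = t
if false then t else e = e

_≤?_ : N → N → Bool          -- Boolean test for ≤ on the naturals of the earlier slides
zero  ≤? n     = true
suc m ≤? zero  = false
suc m ≤? suc n = m ≤? n

record Activity : Set where
  field
    start  : N
    finish : N

compatible-cons : Activity → [ Activity ] → [ Activity ]
compatible-cons x [ ]       = x :: [ ]
compatible-cons x (y :: xs) =
  if Activity.finish y ≤? Activity.start x then x :: y :: xs else y :: xs

act-sel : [ Activity ] → [ Activity ]   -- the derived greedy algorithm as a plain fold
act-sel = foldr compatible-cons [ ]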

(40)

The derivation

  max ≤` · Λ(mutex · subseq)
=    { fusion }
  max ≤` · Λ(foldR (outr ∪ (cons · compatible)) nil)
⊇    { strengthening }
  max E · Λ(foldR (outr ∪ (cons · compatible)) nil)
⊇    { Greedy Theorem }
  foldR (max E · Λ(outr ∪ (cons · compatible))) nil
=    { cleaning up }
  foldr compatible-cons [ ].

(41)

Donald E. Knuth

Science is knowledge which we understand so well that we can teach it to a computer; and if we don’t fully understand something, it is an art to deal with it. [...] we should continually be striving to transform every art into a science: in the process, we advance the art.

Computer Programming as an Art. Commun. ACM 17, 12 (Dec. 1974), 667–673.

(42)

Curry-Howard correspondence

Observe the following two logic inference rules:

  ϕ ∧ ψ              ϕ → ψ    ϕ
  ─────  (∧E)        ─────────  (→E)
    ψ                    ψ

The left one resembles the type of outr : A × B → B, and the right one looks like the types involved in function application.

Curry-Howard correspondence is a family of formal correspondences between logic deduction systems and computational calculi. Its famous slogan is “propositions are types, and proofs are programs”.

Thus writing proofs is no different from writing programs! All we need is an expressive enough programming language.

(43)

Roadmap

We will go through the following:

1 first-order intuitionistic logic,

2 preorder reasoning combinators,

3 modelling of sets and relations,

4 the specification (in detail),

5 the Partial Greedy Theorem, and

6 finally, the derivation.

More complicated proofs cannot be explained in this presentation, since more background is required. However, it is worth noting that important theorems such as the Fold-Fusion Theorem and the Greedy Theorem can be proved in an “equational” style close to the proofs in the AoP book.

(44)

Encoding first-order intuitionistic logic in Agda

absurdity
  data ⊥ : Set where

truth
  data ⊤ : Set where
    tt : ⊤

conjunction
  data _×_ (A B : Set) : Set where
    _,_ : A → B → A × B

disjunction
  data _⊎_ (A B : Set) : Set where
    inj₁ : A → A ⊎ B
    inj₂ : B → A ⊎ B

implication
  A → B
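Two tiny proofs-as-programs in this encoding (a sketch added here, not on the slide; ×-comm and ⊎-comm are names introduced for illustration): conjunction and disjunction are commutative.

×-comm : {A B : Set} → A × B → B × A
×-comm (a , b) = b , a

⊎-comm : {A B : Set} → A ⊎ B → B ⊎ A
⊎-comm (inj₁ a) = inj₂ a
⊎-comm (inj₂ b) = inj₁ b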

(45)

Encoding first-order intuitionistic logic in Agda

exists
  data ∃ {A : Set} (P : A → Set) : Set where
    _,_ : (witness : A) → P witness → ∃ P

forall
  (x : A) → P x, or ∀ x → P x

equality
  data _≡_ {A : Set} : A → A → Set where
    refl : {x : A} → x ≡ x

For example, the type

(n : N) → (n ≡ zero ⊎ ∃ (λ m → n ≡ suc m))

says every natural number is either zero or the successor of some natural number.
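Indeed, that statement is proved by a two-line program (a sketch added here, not on the slide; zero-or-suc is a name introduced for illustration):

zero-or-suc : (n : N) → (n ≡ zero ⊎ ∃ (λ m → n ≡ suc m))
zero-or-suc zero    = inj₁ refl
zero-or-suc (suc m) = inj₂ (m , refl)    -- the witness is the predecessor m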

(46)

More on products

In fact, × and ∃ are special cases of the following Σ-type:

data Σ (A : Set) (B : A → Set) : Set where
  _,_ : (a : A) → B a → Σ A B

which can be seen as a kind of generalised product whose second component’s type depends on the first component. We can define the extractor functions

proj₁ : {A : Set} {B : A → Set} → Σ A B → A
proj₁ (a , b) = a

proj₂ : {A : Set} {B : A → Set} → (σ : Σ A B) → B (proj₁ σ)
proj₂ (a , b) = b.

(47)

More on equality

≡ is reflexive by definition, and can be proven to be transitive and symmetric. It is also a congruence with respect to function application.

trans : {A : Set} {x y z : A} → x ≡ y → y ≡ z → x ≡ z
trans refl refl = refl

sym : {A : Set} {x y : A} → x ≡ y → y ≡ x
sym refl = refl

cong : {A B : Set} (f : A → B) {x y : A} → x ≡ y → f x ≡ f y
cong f refl = refl
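As a small exercise in the same style (a sketch added here, not on the slide; suc-injective is a name introduced for illustration), pattern matching on refl also shows that suc is injective:

suc-injective : {m n : N} → suc m ≡ suc n → m ≡ n
suc-injective refl = refl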

(48)

Equational reasoning

We can write a chain of equalities in Agda as

  expression₁
    ≡⟨ exp₁≡exp₂ ⟩
  expression₂
    ≡⟨ exp₂≡exp₃ ⟩
    . . .
    ≡⟨ expₙ₋₁≡expₙ ⟩
  expressionₙ ∎


(53)

Preorder reasoning combinators

Concretely,

_≡⟨_⟩_ : {A : Set} (x : A) {y z : A} → x ≡ y → y ≡ z → x ≡ z
x ≡⟨ x≡y ⟩ y≡z = trans x≡y y≡z

_∎ : {A : Set} (x : A) → x ≡ x
x ∎ = refl

The same technique works for any preorder.
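A minimal usage sketch (added here, not on the slides; ≡-chain is a name introduced for illustration, and the parentheses are written out because no fixity declarations are shown): chaining two given equalities with the combinators above.

≡-chain : {A : Set} {x y z : A} → x ≡ y → y ≡ z → x ≡ z
≡-chain {x = x} {y = y} {z = z} x≡y y≡z =
  x ≡⟨ x≡y ⟩ (y ≡⟨ y≡z ⟩ (z ∎))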

(54)

Proving the Fold-Fusion Theorem

foldr-fusion : {A B C : Set}
  (h : B → C) {f : A → B → B} {g : A → C → C} {e : B} →
  (∀ x y → h (f x y) ≡ g x (h y)) →
  ∀ xs → h (foldr f e xs) ≡ foldr g (h e) xs
foldr-fusion h {f} {g} {e} fusion-condition [ ] = refl
foldr-fusion h {f} {g} {e} fusion-condition (x :: xs) =
  h (foldr f e (x :: xs))
    ≡⟨ refl ⟩
  h (f x (foldr f e xs))
    ≡⟨ fusion-condition x (foldr f e xs) ⟩
  g x (h (foldr f e xs))
    ≡⟨ cong (g x) (foldr-fusion h fusion-condition xs) ⟩
  g x (foldr g (h e) xs)
    ≡⟨ refl ⟩
  foldr g (h e) (x :: xs) ∎
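A small application (added here, not on the slides; sum-append is a name introduced for illustration, and sum, _+_ and _::_ are those of the earlier slides): foldr _::_ ys is exactly the function appending ys, and the fusion condition sum (x :: zs) ≡ x + sum zs holds by definition, so a single call to foldr-fusion gives the equation noted informally after the theorem.

sum-append : (ys : [ N ]) → ∀ xs → sum (foldr _::_ ys xs) ≡ foldr _+_ (sum ys) xs
sum-append ys = foldr-fusion sum (λ x zs → refl)   -- the fusion condition holds by computation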

(55)

Modelling sets

A set of A’s can be specified by its characteristic function of type A → Set. Concretely, if s : A → Set, then x : A is considered a member of s if s x has a proof, i.e., there is a term that has type s x . Thus we define

℘ A = A → Set.

For example, define

singleton : {A : Set} → A → ℘ A
singleton x = λ y → x ≡ y

Then singleton x is the set containing a single element x .
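A tiny membership proof in this encoding (a sketch added here, not on the slide; zero-in-singleton is a name introduced for illustration): the statement “zero belongs to singleton zero” is the type singleton zero zero, which computes to zero ≡ zero.

zero-in-singleton : singleton zero zero
zero-in-singleton = refl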

(56)

Modelling relations

Set-theoretically, a relation of type B ← A is a subset of A × B, so we may define B ← A as ℘ (A × B) = A × B → Set. But we found the following isomorphic representation more convenient:

B ← A = B → A → Set.

For example, an Agda function is lifted to a relation by the combinator

fun : {A B : Set} → (A → B) → (B ← A)
fun f = λ y x → f x ≡ y
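For instance (a sketch added here, not on the slide; suc-rel and suc-rel-example are names introduced for illustration), the successor function lifts to a relation, and a pointwise membership statement is again proved by refl:

suc-rel : N ← N
suc-rel = fun suc

suc-rel-example : suc-rel (suc (suc zero)) (suc zero)   -- “2 is the successor of 1”
suc-rel-example = refl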


(61)

Operators on relations revisited

composition
  _·_ : {A B C : Set} → (C ← B) → (B ← A) → (C ← A)
  (R · S) c a = ∃ (λ b → c R b × b S a)

converse
  _˘ : {A B : Set} → (B ← A) → (A ← B)
  (R ˘) a b = R b a

power transpose
  Λ : {A B : Set} → (B ← A) → (A → ℘ B)
  Λ R a = λ b → b R a

join
  _⊔_ : {A B : Set} → (B ← A) → (B ← A) → (B ← A)
  (R ⊔ S) b a = R b a ⊎ S b a

(62)

Inclusion

We can define set inclusion

_⊆_ : {A : Set} → ℘ A → ℘ A → Set
s₁ ⊆ s₂ = ∀ x → (s₁ x → s₂ x)

and relational inclusion

_⊑_ : {A B : Set} → (B ← A) → (B ← A) → Set
R ⊑ S = ∀ b a → (R b a → S b a).

Both can be shown to be reflexive and transitive, so we can build preorder reasoning combinators for them.
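For example, reflexivity and transitivity of ⊑ are one-liners (a sketch added here, not on the slide; ⊑-refl and ⊑-trans are names introduced for illustration):

⊑-refl : {A B : Set} {R : B ← A} → R ⊑ R
⊑-refl b a Rba = Rba

⊑-trans : {A B : Set} {R S T : B ← A} → R ⊑ S → S ⊑ T → R ⊑ T
⊑-trans R⊑S S⊑T b a Rba = S⊑T b a (R⊑S b a Rba)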

(63)

Encoding the specification

Our goal is to build up the relation

act-sel-spec : [ Activity ] ← [ Activity ]
act-sel-spec = max ≤` • Λ(mutex · subseq).

If we can construct a term of type

∃ (λ f → fun f · fin-ordered ⊑ act-sel-spec · fin-ordered),

where fin-ordered is a coreflexive explicitly stating that the input list is required to be sorted by finish time, then the witness of this existential proposition is an algorithm solving the activity-selection problem, with its correctness guaranteed by the type checker.

(64)

The “maximum” part

Maximum is easily defined:

max : {A : Set} → (A ← A) → (A ← ℘ A)
(max R) a s = s a × (∀ a′ → s a′ → a′ R a).

In our library it is defined in point-free style.

Assuming ≤ : N ← N is given,

_≤`_ : {A : Set} → ([ A ] ← [ A ])
xs ≤` ys = length xs ≤ length ys.

(65)

Fold, the third time

Define

foldR : {A B : Set} → (B ← A × B) → ℘ B → (B ← [ A ])
foldR R s = ∈ • foldr (Λ(R · (idR × ∈))) s

where • is a special form of composition defined by

_•_ : {A B C : Set} → (C ← ℘ B) → (A → ℘ B) → (C ← A)
(R • f) c a = R c (f a).

We need this composition because of predicativity, an issue we will ignore in this presentation.

(66)

Subsequence, identically

Define

subseq : {A : Set} → ([ A ] ← [ A ])
subseq = foldR (outr ⊔ cons) nil

where

cons : {A : Set} → ([ A ] ← (A × [ A ]))
cons = fun (uncurry _::_)

outr : {A B : Set} → (B ← (A × B))
outr = fun proj₂

nil : {A : Set} → ℘ [ A ]
nil = singleton [ ].

(67)

Junctional check

Recall that

mutex = foldR (cons · compatible) nil

where compatible is a coreflexive which, given a pair (x , xs) of type Activity × [ Activity ], checks whether x is disjoint from every activity in xs. The relation fin-ordered has the same structure.

Define

check C = foldR (cons · C ) nil

to capture this pattern. Then mutex = check compatible.

The only component we haven’t constructed is compatible.

(68)

Compatibility between an activity and a plan

Define

compatible-set : ℘ (Activity × [ Activity ])
compatible-set (a , [ ])     = ⊤
compatible-set (a , x :: xs) = disjoint a x × compatible-set (a , xs)

where

disjoint : Activity ← Activity
disjoint a x = finish x ≤ start a ⊎ finish a ≤ start x.

Then

compatible : (Activity × [ Activity ]) ← (Activity × [ Activity ])
compatible (y , ys) (x , xs) =
  (y , ys) ≡ (x , xs) × compatible-set (x , xs).

(69)

The Partial Greedy Theorem

Since the domain of act-sel-spec = max ≤` • Λ(mutex · subseq) is restricted by the coreflexive fin-ordered, we need a variant of the Greedy Theorem to deal with partiality of the input.

partial-greedy-thm : {A B : Set}
  {S : B ← (A × B)} {s : ℘ B} {R : B ← B}
  {C : (A × [ A ]) ← (A × [ A ])} {D : (A × B) ← (A × B)} →
  R · R ⊑ R  →  C ⊑ idR  →  D ⊑ idR  →
  S · (idR × R) · D ⊑ R · S  →
  (idR × foldR S s) · C ⊑ D · (idR × foldR S s)  →
  foldR (max R • ΛS) (Λ(max R) s) · check C
    ⊑ (max R • Λ(foldR S s)) · check C


(74)

The main derivation

(Let S = outr ⊔ (cons · compatible).)

activity-selection-derivation :
  ∃ (λ f → fun f · fin-ordered ⊑ act-sel-spec · fin-ordered)
activity-selection-derivation = _ ,
  (max ≤` • Λ(mutex · subseq)) · fin-ordered
    ⊒⟨ comp-monotonic-l (minΛ-cong-w mutex·subseq-is-fold) ⟩
  (max ≤` • Λ(foldR S nil)) · fin-ordered
    ⊒⟨ comp-monotonic-l (max-monotonic E-refines-≤`) ⟩
  (max E • Λ(foldR S nil)) · fin-ordered
  . . .

(75)

Applying the Partial Greedy Theorem

(Assume fin-ordered = check fin-ubound.)

  . . .
  (max E • Λ(foldR S nil)) · fin-ordered
    ⊒⟨ fin-ubound-promotion ⟩
  (max E • Λ(foldR (S · fin-ubound) nil)) · fin-ordered
    ⊒⟨ partial-greedy-thm E-trans fin-ubound⊑idR fin-ubound⊑idR monotonicity fin-ubound-homo ⟩
  foldR (max E • Λ(S · fin-ubound)) (Λ(max E) nil) · fin-ordered
    ⊒⟨ comp-monotonic-l (foldR-monotonic ⊑-refl max-E-nil⊇nil) ⟩
  foldR (max E • Λ(S · fin-ubound)) nil · fin-ordered
  . . .

(76)

Cleaning up

. . .

  foldR (max E • Λ(S · fin-ubound)) nil · fin-ordered
    ⊒⟨ comp-monotonic-l algebra-refinement ⟩
  foldR (fun (uncurry compatible-cons) · fin-ubound) nil · fin-ordered
    ⊒⟨ fin-ubound-demotion ⟩
  foldR (fun (uncurry compatible-cons)) nil · fin-ordered
    ⊒⟨ comp-monotonic-l (foldR-to-foldr compatible-cons [ ]) ⟩
  fun (foldr compatible-cons [ ]) · fin-ordered
  ∎

(77)

Advantages of program derivation

Insights are reified as theorems, which can be discovered, proved, collected, and systematically studied.

These theorems then provide hints and directions to solve other problems.

A derivation is fun to read, shows the connection between the specification and the derived algorithm, and can likely point out why the algorithm works.

Beautiful theories about structures and semantics of programs are built up along with development of program derivation.

(78)

Disadvantages of program derivation

The theory is not yet rich enough to cover all classical textbook algorithms.

The work of casting problems in suitable forms and discovering theorems is extremely hard, and perhaps too abstract for a young field like Algorithm Design.

That perhaps partially explains why the community is shrinking.

Relations are inherently harder to manipulate, since there are a great number of operators and associated laws, and inequational reasoning is less deterministic. Even when we step back to functional derivation, low-level calculations can still be very tedious.

(79)

Jeremy Gibbons

We are interested in extending what can be calculated precisely because we are not that interested in the calculations themselves.

Calculating Functional Programs. In Keiichi Nakata, editor, Proceedings of ISRG/SERG Research Colloquium. School of Computing and Mathematical Sciences, Oxford Brookes University, November 1997. Technical Report CMS-TR-98-01.

(80)

Advantages of mechanised formalisation

Proving theorems by hand is prone to errors. Automatic theorem proving does not help human readers gain intuition, and thus confidence. Machine-assisted theorem proving strikes a balance between the two.

That is, we still write the proofs ourselves and become confident about the theorem, while the machine ensures that not a single detail is overlooked, making the proof practically infallible.

(81)

Disadvantages of mechanised formalisation

Formalisation is difficult.

“Not a single detail is overlooked” means that every detail needs to be taken care of, including, say, the associativity of natural number addition.

Formal proofs are unlikely to be written in the traditional essay-like style, and they are often harder to read.

(82)

Leslie Lamport

The proof style I advocate is a refinement of one, called natural deduction, [..., which] has been viewed primarily as a method of writing proofs in formal logic. What I will describe is a practical method for writing the less formal proofs of ordinary mathematics. It is based on hierarchical structuring — a successful tool for managing complexity.

How to Write a Proof. American Mathematical Monthly 102, 7 (August–September 1993) 600–608. Also appeared in Global Analysis in Modern Mathematics, Karen Uhlenbeck, editor. Publish or Perish Press, Houston. Also appeared as SRC Research Report 94.

(83)

Robert Pollack

Many people who pursue formal mathematics are seeking the beauty of complete concreteness, which contributes to their own appreciation of the material being formalised, while to many outside the field formalisation is “just filling in the details” of conventional mathematics. But “just” might be infeasible unless serious thought is given to representation of both the logic of formalisation and the mathematics being formalised.

How to Believe a Machine-Checked Proof. In G. Sambin and J. Smith, editors, Twenty-Five Years of Constructive Type Theory. Oxford University Press, 1998.

(84)

Immanuel Kant

Now it does indeed seem natural that, as soon as we have left the ground of experience, we should, through careful enquiries, assure ourselves as to the foundations of any building that we propose to erect, not making use of any knowledge that we possess without first determining whence it has come, and not trusting to principles without knowing their origin.

Critique of Pure Reason, 1929, Trans. N. Kemp Smith, New York: St. Martin’s Press.
