
Bi-Decomposing Large Boolean Functions via Interpolation and Satisfiability Solving

Student: Ruei-Rung Lee Advisor: Dr. Jie-Hong Roland Jiang

Graduate Institute of Electronics Engineering National Taiwan University

Abstract

Boolean function bi-decomposition is a fundamental operation in logic synthesis. Roughly speaking, bi-decomposition is a special kind of decomposition which decomposes a function into a two-input function composed with two other input functions. Precisely speaking, a function f(X) is bi-decomposable under a variable partition XA, XB, XC on X if it can be written as h(fA(XA, XC), fB(XB, XC)) for some functions h, fA, and fB. The quality of a bi-decomposition is mainly determined by its variable partition. A good variable partition makes the decomposition disjoint, i.e. XC = ∅, and balanced, i.e. |XA| ≈ |XB|. Such a good decomposition reduces communication and circuit complexity, and yields simple physical design solutions. Prior BDD-based methods may not scale to large functions due to the memory explosion problem. Also, since decomposability is checked under a fixed variable partition, searching for a feasible or good partition may require costly enumeration with separate and independent decomposability checks.


We propose a solution to check whether a function is bi-decomposable using incremental SAT solving. Then, we derive subfunctions by Craig interpolation if decomposable. Most importantly, we integrate the variable partition into SAT solving. In other words, we do not need to specify the variable partition before checking bi-decomposability. If the given function is bi-decomposable, the SAT solver will generate a feasible and non-trivial variable partition. Experimental results show that the capacity of bi-decomposition can be scaled up substantially to handle large designs.

When n functions are bi-decomposed separately, we obtain 2n subfunctions. If some subfunctions are complementary or identical to each other, then one can be reused in place of the other. By doing so, we can bi-decompose n functions into fewer than 2n subfunctions and reduce circuit size further. Accordingly, this motivates us to bi-decompose multiple functions simultaneously such that some subfunctions can be shared. In this thesis, we focus on multiple-output or-decomposition. Experimental results show that this technique can be used to reduce circuit size.

Keywords: Bi-decomposition, variable partition, SAT solving, Craig interpolation, unit assumption, cofactor.


Contents

Chinese Abstract

Acknowledgements

List of Figures

List of Tables

1 Introduction
1.1 Single-Output Bi-decomposition
1.2 Multiple-Output Bi-decomposition
1.3 Our Contribution
1.4 Organization of the Thesis

2 Preliminaries
2.1 Propositional Satisfiability
2.1.1 SAT Solvers
2.1.2 Incremental SAT Solving
2.2 Resolution Proof
2.3 Craig's Interpolation
2.4 Circuit to CNF Conversion
2.5 Bi-decomposition

3 Single-Output Bi-decomposition
3.1 OR Bi-decomposition
3.1.1 Decomposition of Completely Specified Functions
3.1.2 Decomposition of Incompletely Specified Functions
3.2 AND Bi-decomposition
3.3 XOR Bi-decomposition
3.3.1 Decomposition of Completely Specified Functions

4 Multiple-Output Bi-decomposition
4.1 2-output Bi-decomposition
4.1.1 Decomposition with Known Variable Partition
4.1.2 Decomposition with Unknown Variable Partition
4.2 2-output Bi-decomposition with a Complementary Subfunction
4.2.1 Decomposition with Known Variable Partition
4.2.2 Decomposition with Unknown Variable Partition
4.3 2-output Bi-decomposition with Two Shared Subfunctions
4.3.1 Case I: f2 = ¬fA ∨ fB
4.3.2 Case II: f2 = ¬fA ∨ ¬fB
4.4 Multiple-Output Bi-decomposition

5 Experimental Results
5.1 Implementation Issues
5.2 Experimental Results

6 Conclusions


List of Figures

1.1 Single-output Bi-decomposition

2.1 The Boolean space of φA, φB, and φ
2.2 Example of Refutation Proof and Interpolant

3.1 or-decomposition
3.2 Decomposition chart of or2-decomposable function
3.3 Decomposition chart of a function which cannot be or2-decomposed
3.4 Construction of fA
3.5 Construction of fB
3.6 Summary of control variables in single-output bi-decomposition
3.7 xor-decomposition
3.8 Decomposition chart of xor-decomposable function

4.1 2-output bi-decomposition
4.2 Summary of control variables in 2-output bi-decomposition
4.3 2-output bi-decomposition with a complementary subfunction
4.4 2-output bi-decomposition with 2 shared subfunctions
4.5 Multiple-output Bi-decomposition

5.1 |XC|/|X| and ||XA| − |XB||/|X| in the enumeration of variable partitions in OR2-decomposition
5.2 Variable partition in OR2-decomposition
5.3 Variable partition in XOR-decomposition
5.4 Variable partition without UNSAT core
5.5 Variable partition with UNSAT core
5.6 An example of overlap cost function
5.7 Trade-off curve with Size and Overlap


List of Tables

5.1 Bi-decomposition of PO functions


Chapter 1

Introduction

Functional decomposition [1, 6] is a fundamental operation on Boolean functions that breaks a large function into a set of smaller subfunctions while keeping the functionality unchanged. In other words, f on variables X is decomposed into h, g1, . . . , gm with f(X) = h(g1(X), . . . , gm(X)), often with m < |X|. Bi-decomposition plays an important role in the study of circuit and communication complexity. It also has pivotal applications in multi-level and FPGA logic synthesis. Extensive research has been published on this subject; see, e.g., [15] for a detailed introduction.

When m = 2, the decomposition is known as bi-decomposition [3, 4, 11, 14, 16]. Among all the cases of functional decomposition, bi-decomposition is the simplest nontrivial one, yet the most widely applied, since a logic netlist is often expressed as a network of two-fanin gates. Since BDD-based approaches to bi-decomposition suffer from the memory explosion problem, their scalability is restricted. Consequently, we study SAT-based approaches to enhance the scalability and capability of bi-decomposition.

Figure 1.1: Single-output Bi-decomposition

1.1 Single-Output Bi-decomposition

Figure 1.1 shows that a function f (X) is bi-decomposed as h(fA(XA, XC), fB(XB, XC)).

Note that XC is the variable set common to fA and fB, whereas variable sets XA and XB are local to fA and fB, respectively. A primary issue of bi-decomposition is variable partition. For f(X) = h(fA(XA, XC), fB(XB, XC)), the variable partition {XA, XB, XC} on X (i.e., XA, XB, XC are pairwise disjoint and XA ∪ XB ∪ XC = X) mainly determines the decomposition quality. A particularly desirable bi-decomposition is disjoint, i.e., |XC| = 0, and balanced, i.e., |XA| ≈ |XB|. An ideal bi-decomposition reduces circuit and communication complexity, and in turn simplifies physical design. Effective approaches to bi-decomposition can be important not only in large-scale circuit minimization, but also in early design closure if combined well with physical design partitioning.

Modern approaches to bi-decomposition, such as [11], were BDD-based, exploiting BDDs' powerful capability of supporting various Boolean manipulations. However, there are two main restrictions with the prior methods.

1. Due to the common memory explosion problem, they are not scalable enough to handle large Boolean functions.

2. Since decomposability is checked under a fixed variable partition, many partitions may have to be enumerated to find a feasible or even good one. Furthermore, those decomposability checks are separate and independent of each other. Hence, the process not only consumes a lot of time but may also run out of memory.

To overcome these restrictions, we propose a method based on satisfiability (SAT) solving. The formulation is inspired by the recent work [9], where a SAT-based formulation made the computation of functional dependency scalable to large functions.

Our main results include:

1. a pure SAT-based solution to bi-decomposition,

2. subfunction derivation by Craig interpolation and cofactoring, and


3. most importantly, automatic variable partition using incremental SAT solving under unit assumptions.

Therefore, the scalability of bi-decomposition and the optimality of variable partition can be dramatically improved. Experiments show promising results on the scalability of bi-decomposition and the optimality of variable partition.

The closest work [11] gave a more general approach that allows weak decomposition, in which XA or XB can be empty. This thesis, however, focuses on strong decomposition (namely, XA and XB cannot be empty). Besides, don't cares were exploited in [11] for logic sharing; because don't cares are more difficult to handle in SAT than in BDD, this thesis does not discuss them except in or-decomposition.

1.2 Multiple-Output Bi-decomposition

In this part, we discuss whether two or more functions can be bi-decomposed simultaneously and share their subfunctions such that the circuit size is reduced further.

In this thesis, we mainly discuss multiple-output or-decomposition. To begin with, we discuss 2-output or-decomposition. That is, two functions are bi-decomposed such that they share one subfunction. Then, we discuss how to bi-decompose two functions such that they share both subfunctions. Finally, we generalize 2-output bi-decomposition to n-output or-decomposition sharing one subfunction.


1.3 Our Contribution

Our contribution can be summarized as follows.

• Single-output Bi-decomposition: We study the necessary and sufficient condition that determines whether a given function is bi-decomposable, and then derive subfunctions using SAT solving and Craig interpolation; therefore, we can extend the bi-decomposition capability and scalability. The main results include:

1. Instead of relying on BDDs, we extend bi-decomposition capacity through a pure SAT-based formulation.

2. We obtain subfunctions by Craig interpolation and cofactoring.

3. We automate variable partition by incremental SAT solving under unit assumptions. Thereby the scalability of bi-decomposition and the optimality of variable partition can be substantially improved.

Therefore, we can bi-decompose large Boolean functions effectively.

• Multiple-output bi-decomposition: Multiple-output bi-decomposition is very similar to single-output bi-decomposition. We study the necessary and sufficient condition of multiple-output bi-decomposition, and propose a SAT-based computation. Experimental results show that our approach is applicable to large Boolean functions and reduces circuit size.


1.4 Organization of the Thesis

This thesis is organized as follows. Chapter 2 gives the preliminaries. Chapter 3 presents our method on single-output bi-decomposition. Chapter 4 extends single-output bi-decomposition to n-output bi-decomposition. Chapter 5 is dedicated to experimental results. Finally, conclusions and future work are given in Chapter 6.


Chapter 2

Preliminaries

In this chapter, we introduce the necessary notation and background knowledge for this thesis. Section 2.1 explains propositional satisfiability. Sections 2.2 and 2.3 explain resolution (refutation) proofs and Craig interpolation, respectively. Section 2.4 briefly shows how to convert a circuit to a CNF formula in linear time. The definition of bi-decomposition is given in Section 2.5.

We first define some basic notation. As conventional notation, sets are denoted in upper-case letters, e.g. S, and set elements are represented in lower-case letters, e.g. e ∈ S. The cardinality of S is denoted as |S|. Sets S1, S2, . . . , Sk form a partition of a set S, denoted {S1|S2| . . . |Sk}, if they satisfy the following conditions.

1. Si ⊆ S, for i = 1, . . . , k

2. Si ∩ Sj = ∅, for i ≠ j.

3. S1 ∪ S2 ∪ · · · ∪ Sk = S.

S1 \ S2 denotes the set whose elements are in S1 but not in S2. For a set X of Boolean variables, its set of valuations (or truth assignments) is denoted as [[X]], e.g., [[X]] = {(0, 0), (0, 1), (1, 0), (1, 1)} for X = {x1, x2}. For two functions f and g, if f equals g up to complementation, i.e. f = g or f = ¬g, it is denoted as f ∼= g.
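To make this notation concrete, the following small Python sketch (ours, not part of the thesis) enumerates [[X]] for X = {x1, x2} and tests whether two example functions are equal up to complementation, i.e. whether f ∼= g holds.

```python
from itertools import product

def valuations(n):
    """Enumerate [[X]] for a set X of n Boolean variables."""
    return list(product((0, 1), repeat=n))

def equal_up_to_complement(f, g, n):
    """Return True iff f = g or f = ¬g on every valuation, i.e. f ∼= g."""
    vals = valuations(n)
    return (all(bool(f(*v)) == bool(g(*v)) for v in vals) or
            all(bool(f(*v)) != bool(g(*v)) for v in vals))

# Example functions on X = {x1, x2}
f = lambda x1, x2: x1 and not x2
g = lambda x1, x2: not (x1 and not x2)

print(valuations(2))                    # [(0, 0), (0, 1), (1, 0), (1, 1)]
print(equal_up_to_complement(f, g, 2))  # True, since f = ¬g
```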

2.1 Propositional Satisfiability

We now give detailed definitions for propositional satisfiability problems. Let V = {v1, . . . , vk} be a finite set of Boolean variables, where vi ∈ {0, 1} for i = 1, . . . , k. A literal l is either a Boolean variable vi or its negated form ¬vi. A clause c is a disjunction of literals. A SAT instance is a conjunction of clauses, i.e., it is in the so-called conjunctive normal form (CNF). In the sequel, a clause set C = {c1, . . . , ck} represents the CNF formula c1 ∧ · · · ∧ ck. An assignment over V gives every Boolean variable vi a Boolean value, either TRUE or FALSE.

Boolean satisfiability, also known as SAT, is the problem of finding a variable assignment for a given Boolean formula such that the formula evaluates to TRUE. If such an assignment exists, we say that the formula is satisfiable. Otherwise, the formula is unsatisfiable, which means that the formula evaluates to FALSE under every possible variable assignment.
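As a minimal illustration of these definitions (our own sketch, not the thesis's tooling), the function below represents a CNF formula as a list of clauses over signed integer literals, in the style used by MiniSat-like solvers, and decides satisfiability by enumerating all assignments; real SAT solvers are of course far more sophisticated.

```python
from itertools import product

def brute_force_sat(clauses, num_vars):
    """Decide satisfiability of a CNF given as a list of clauses.
    A literal is an integer: v means variable v is TRUE, -v means v is FALSE."""
    for bits in product((False, True), repeat=num_vars):
        assignment = {v + 1: bits[v] for v in range(num_vars)}
        if all(any(assignment[abs(l)] == (l > 0) for l in clause)
               for clause in clauses):
            return assignment          # a satisfying assignment
    return None                        # unsatisfiable

# (v1 ∨ v2) ∧ (¬v1 ∨ v2) ∧ (¬v2 ∨ v3) is satisfiable:
print(brute_force_sat([[1, 2], [-1, 2], [-2, 3]], 3))
# (v1) ∧ (¬v1) is unsatisfiable:
print(brute_force_sat([[1], [-1]], 1))
```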


2.1.1 SAT Solvers

Given a SAT instance, the satisfiability (SAT) problem asks whether it is satisfiable or not. A SAT solver is a program designed to solve the SAT problem; if the formula is satisfiable, it returns one satisfying assignment. SAT solvers play an important role in the EDA field. Many practical applications, such as automatic test pattern generation, timing analysis, delay fault testing, logic verification, and synthesis, exploit the power of SAT solvers. In recent years, BDD-based computations have been restricted by their lack of scalability; on the other hand, the continual improvements of SAT solvers have made them increasingly popular.

2.1.2 Incremental SAT Solving

Typically, a SAT solver accepts a CNF formula as input, solves it, and outputs a satisfying assignment or an "unsatisfiable" answer. However, sometimes we wish to solve many similar SAT problems, and it is wasteful to solve each problem from scratch: the overhead is expensive because the solver may carry out similar inferences over and over again.

Some solvers implement a DPLL-style backtracking search procedure [27, 28] (such as MiniSat [7] and zChaff [29]). For every conflict detected during the search, those solvers analyze the conflicting condition and record a learned clause. The learned clauses help the solver avoid searching the same type of conflict again. Those solvers cache the learned clauses, which may be useful not only in the same problem but also in later, similar problems. That is, with incremental SAT solving, we can save much time in solving many similar problems.
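The sketch below illustrates this reuse of one solver instance with the python-sat (pysat) package, which wraps MiniSat. It is our own illustration with arbitrary clauses; the thesis itself works with MiniSat directly rather than through pysat.

```python
from pysat.solvers import Minisat22

# One solver instance is queried repeatedly; clauses learned while answering
# earlier queries remain in the clause database and may speed up later ones.
with Minisat22(bootstrap_with=[[1, 2], [-1, 2]]) as solver:
    print(solver.solve(), solver.get_model())   # satisfiable; some model containing 2

    solver.add_clause([-2, 3])                  # refine the same instance ...
    print(solver.solve(), solver.get_model())   # ... still satisfiable

    solver.add_clause([-3])                     # one more refinement
    print(solver.solve())                       # now unsatisfiable
```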

2.2 Resolution Proof

If a CNF formula is unsatisfiable, some SAT solvers can generate a resolution proof [13] certifying that the formula is indeed unsatisfiable. Resolution is a rule of inference.

We can iteratively apply this rule to check the satisfiability of the formula. The following defines resolution.

Definition 2.1 Let (v ∨ c1) and (¬v ∨ c2) be two clauses, where v and ¬v are literals and c1 and c2 are disjunctions of other literals. The clause (c1 ∨ c2) is the resolution result of these two clauses on variable v. The variable v is called the pivot variable, and the clause (c1 ∨ c2) is called the resolvent of c1 and c2.

Proposition 2.1 The conjunction of (v ∨ c1) and (¬v ∨ c2) implies its resolvent (c1 ∨ c2).

In other words,

(v ∨ c1) ∧ (¬v ∨ c2) ⇒ (c1 ∨ c2).
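Operationally, a resolution step is a one-line set manipulation. The sketch below (ours; clauses are written as sets of signed integer literals) applies Definition 2.1 repeatedly to refute the formula (a)(¬a ∨ b)(¬b ∨ c)(¬c), the same formula used later in the interpolation example of Figure 2.2.

```python
def resolve(c1, c2, v):
    """Resolvent of clauses c1, c2 (sets of integer literals) on pivot variable v,
    assuming v occurs positively in c1 and negatively in c2."""
    assert v in c1 and -v in c2, "v must be the pivot variable of c1 and c2"
    return (c1 - {v}) | (c2 - {-v})

# Refuting (a)(¬a ∨ b)(¬b ∨ c)(¬c), with a = 1, b = 2, c = 3:
r1 = resolve({1}, {-1, 2}, 1)    # {2},  i.e. the clause (b)
r2 = resolve(r1, {-2, 3}, 2)     # {3},  i.e. the clause (c)
r3 = resolve(r2, {-3}, 3)        # set(): the empty clause, so the formula is unsatisfiable
print(r1, r2, r3)
```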

The following theorem shows that there must exist a resolution proof for every unsatisfiable CNF formula.

Theorem 2.1 [26] For an unsatisfiable SAT instance, there exists a sequence of resolution steps eventually producing the empty clause.

Usually only a subset of the clauses of a SAT instance is involved in the resolution steps leading to the empty clause.

Definition 2.2 A refutation proof of an unsatisfiable CNF formula C is a directed acyclic graph (DAG) Γ = (N, A), where

1. every node in N represents a clause which is either a root clause in C or a resolvent clause having exactly two predecessor nodes,

2. every arc in A connects a node to its ancestor node, and

3. the unique leaf of Γ corresponds to the empty clause.

Modern SAT solvers (e.g., MiniSat [7]) are capable of producing a refutation proof from an unsatisfiable SAT instance.

Definition 2.3 The unsatisfiable core Σ of a resolution proof Γ = (N, A) is a subset of N; each node in Σ has a path to the unique leaf of Γ, i.e. the empty clause.

The unsatisfiable core corresponds to a subset of the clauses of a SAT instance that by itself results in the unsatisfiability of the instance. There may exist more than one unsatisfiable core for a given SAT instance.


Figure 2.1: The Boolean space of φA, φB, and φ

2.3 Craig’s Interpolation

Theorem 2.2 (Craig Interpolation Theorem) [5] Given two Boolean formulas φA and φB, if φA ∧ φB is unsatisfiable, then there exists a Boolean formula φ with the following three properties:

1. φA⇒ φ,

2. φ⇒ ¬φB, and

3. φ refers only to the common input variables of φA and φB.

The Boolean formula φ is referred to as the interpolant of φA and φB. We shall assume that φA and φB are given in CNF, so that a refutation proof of φA ∧ φB is available from a SAT solver.

The relation between φA, φB, and φ is shown in Figure 2.1, which depicts the Boolean spaces of φA, φB, and φ. Because φA ⇒ φ, the Boolean space of φA is contained in the Boolean space of φ. Similarly, φ ⇒ ¬φB, so the Boolean space of φ is disjoint from the Boolean space of φB. We can say that the interpolant φ is an over-approximation of φA that does not intersect φB.

Some algorithms [10, 30, 31] show that an interpolant can be constructed from a refutation proof of an unsatisfiable CNF formula; in addition, the time complexity is linear in the size of the proof. In this work, we adopt the method of [10].

Figure 2.2: Example of Refutation Proof and Interpolant. (a) A refutation proof; (b) a corresponding interpolant of (a).

We briefly introduce how to construct an interpolant. Let A and B be two sets of clauses whose conjunction has a refutation proof Γ. We divide the variables into three groups.

1. A variable is global if it appears in both A and B.

2. A variable is local if it appears only in A.

3. The remaining variables are irrelevant.

(22)

2.4. Circuit to CNF Conversion

Similarly, a literal is global or local depending on the variable it contains. Define global(c) as the disjunction of the global literals in clause c. Then the interpolant φ can be constructed by the following rules, which define a Boolean formula p(c) for each clause c in Γ.

1. For each root clause c in Γ, p(c) = global(c) if c belongs to A; otherwise, p(c) = TRUE.

2. For each intermediate clause c in Γ, let c1 and c2 be the predecessors of c and let v be their pivot variable. Then p(c) = p(c1) ∨ p(c2) if v is local; otherwise, p(c) = p(c1) ∧ p(c2).

The Boolean formula p(c) corresponding to the empty clause is an interpolant of A with respect to B. Figure 2.2 depicts an example of a refutation proof and its interpolant. The interpolant in Figure 2.2(b) is derived from the refutation proof in Figure 2.2(a) by the above algorithm, where φA = (a)(¬a ∨ b), φB = (¬b ∨ c)(¬c), b is the common variable, and a and c are local to φA and φB, respectively.
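The construction can be traced mechanically on the Figure 2.2 example. The sketch below is our own re-implementation of the above rules (it is not the proof-logging code used in the thesis): clauses are sets of integer literals, the refutation proof of φA = (a)(¬a ∨ b) and φB = (¬b ∨ c)(¬c) is written out by hand, and the partial interpolants p(c) are kept as simplified formula strings. It prints the interpolant b, which indeed is implied by φA, contradicts φB, and mentions only the common variable.

```python
A = [frozenset({1}), frozenset({-1, 2})]    # φA = (a)(¬a ∨ b), with a = 1, b = 2, c = 3
B = [frozenset({-2, 3}), frozenset({-3})]   # φB = (¬b ∨ c)(¬c)

vars_A = {abs(l) for c in A for l in c}
vars_B = {abs(l) for c in B for l in c}
GLOBAL = vars_A & vars_B                    # variables appearing in both A and B
LOCAL = vars_A - vars_B                     # variables appearing only in A
NAME = {1: "a", 2: "b", 3: "c"}

def mk_or(x, y):
    if "TRUE" in (x, y): return "TRUE"
    if x == "FALSE": return y
    if y == "FALSE": return x
    return f"({x} | {y})"

def mk_and(x, y):
    if "FALSE" in (x, y): return "FALSE"
    if x == "TRUE": return y
    if y == "TRUE": return x
    return f"({x} & {y})"

def p_root(c, in_A):
    """Rule 1: p(c) = global(c) for A-clauses, TRUE for B-clauses."""
    if not in_A:
        return "TRUE"
    lits = [(NAME[abs(l)] if l > 0 else "~" + NAME[abs(l)])
            for l in c if abs(l) in GLOBAL]
    return " | ".join(lits) if lits else "FALSE"

p = {c: p_root(c, True) for c in A}
p.update({c: p_root(c, False) for c in B})

def resolve_step(c1, c2, v):
    """Rule 2: OR the partial interpolants on a local pivot, AND them otherwise."""
    resolvent = frozenset((c1 - {v}) | (c2 - {-v}))
    p[resolvent] = (mk_or if v in LOCAL else mk_and)(p[c1], p[c2])
    return resolvent

r1 = resolve_step(frozenset({1}), frozenset({-1, 2}), 1)   # (b)
r2 = resolve_step(r1, frozenset({-2, 3}), 2)               # (c)
empty = resolve_step(r2, frozenset({-3}), 3)               # the empty clause
print(p[empty])                                            # prints: b
```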

2.4 Circuit to CNF Conversion

Given a circuit netlist, it can be converted to a CNF formula in linear time by introducing extra intermediate variables [17]. Therefore, we can assume that the CNF formula of a Boolean formula φ is available from such conversion.
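As a small illustration of such a conversion (our sketch; the thesis relies on existing circuit-to-CNF machinery [17]), the function below performs a Tseitin-style encoding: every gate output receives a fresh variable and contributes a constant number of clauses, so the resulting CNF is linear in the circuit size.

```python
def circuit_to_cnf(gates, num_inputs):
    """Convert a gate-level netlist to CNF. Primary inputs are variables
    1..num_inputs; each gate is a pair (kind, fanins) with kind in
    {'AND', 'OR', 'NOT'} and fanins referring to already defined variables.
    Returns (clauses, output variable of the last gate)."""
    clauses, var = [], num_inputs
    for kind, fanins in gates:
        var += 1                                  # fresh variable for the gate output
        if kind == 'AND':                         # var <-> AND(fanins)
            clauses += [[-var, i] for i in fanins]
            clauses.append([var] + [-i for i in fanins])
        elif kind == 'OR':                        # var <-> OR(fanins)
            clauses += [[var, -i] for i in fanins]
            clauses.append([-var] + list(fanins))
        elif kind == 'NOT':                       # var <-> ¬fanin
            (i,) = fanins
            clauses += [[-var, -i], [var, i]]
    return clauses, var

# φ = (a ∧ b) ∨ ¬c, with inputs a, b, c = 1, 2, 3
gates = [('AND', (1, 2)),      # variable 4
         ('NOT', (3,)),        # variable 5
         ('OR', (4, 5))]       # variable 6, the output of φ
cnf, out = circuit_to_cnf(gates, 3)
print(out, cnf)                # adding the unit clause [out] asserts φ = TRUE
```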


2.5 Bi-decomposition

Definition 2.4 Given a completely specified Boolean function f, a variable x is a support variable of f if fx ≠ f¬x, where fx and f¬x are the positive and negative cofactors of f on x, respectively.

Definition 2.5 An incompletely specified function F is a 3-tuple (q, d, r), where the completely specified functions q, d, and r represent the onset, don't-care set, and offset functions, respectively.

Definition 2.6 A completely specified function f(X) is op bi-decomposable, or simply op-decomposable, under variable partition X = {XA|XB|XC} if it can be written as f(X) = fA(XA, XC) op fB(XB, XC), where op is some binary operator. The decomposition is called disjoint if XC = ∅, and non-disjoint otherwise.

Note that bi-decomposition trivially holds if XA ∪ XC or XB ∪ XC equals X. The corresponding variable partition is called a trivial variable partition; we are concerned with non-trivial bi-decomposition. In the sequel, the binary operator op can be or2, and2, or xor. Essentially, or2-, and2-, and xor-decompositions form the basis of all types of bi-decomposition, because any bi-decomposition is simply one of the three cases with some proper complementation on f, fA, and/or fB.

Definition 2.7 Completely specified functions f1(X1) and f2(X2) are 2-input bi-decomposable under variable partitions X1 = {XA|XB1|XC1|XC2|XC3|XC} and X2 = {XA|XB2|XC1|XC2|XC3|XC} if they can be written as f1(X1) = fA(XA, XC1, XC2, XC) ∨ fB1(XB1, XC1, XC3, XC) and f2(X2) = fA(XA, XC1, XC2, XC) ∨ fB2(XB2, XC2, XC3, XC).

Definition 2.8 Completely specified functions f1(X1) and f2(X2) are 2-input bi-decomposable with a complementary subfunction, under variable partitions X1 = {XA|XB1|XC1|XC2|XC3|XC} and X2 = {XA|XB2|XC1|XC2|XC3|XC}, if f1(X1) = fA(XA, XC1, XC2, XC) ∨ fB1(XB1, XC1, XC3, XC) and f2(X2) = ¬fA(XA, XC1, XC2, XC) ∨ fB2(XB2, XC2, XC3, XC).

Definition 2.9 Completely specified functions f1(X) and f2(X) are 2-input bi-decomposable with two shared subfunctions, under variable partition X = {XA|XB|XC}, if they can be written as f1(X) = fA1(XA, XC) ∨ fB1(XB, XC) and f2(X) = fA2(XA, XC) ∨ fB2(XB, XC), where fA1 ∼= fA2 and fB1 ∼= fB2.

Definition 2.10 Completely specified functions f1(X1), f2(X2), . . . , fn(Xn) are multiple-output bi-decomposable if fi(Xi) = fAi(YA) ∨ fBi(YBi) for 1 ≤ i ≤ n, where Xi = YA ∪ YBi and fAi ∼= fA1.


Chapter 3

Single-Output Bi-decomposition

In this chapter, we present our effective method for single-output bi-decomposition. We show that our method can not only check bi-decomposability but also obtain the subfunctions.

3.1 OR Bi-decomposition

We show that or2-decomposition can be achieved using SAT solving. If there exists a non-trivial or2-decomposition, we obtain a feasible variable partition and the corresponding subfunctions fA and fB. Sections 3.1.1 and 3.1.2 assume that the function to be decomposed is a completely specified and an incompletely specified function, respectively.


Figure 3.1: or-decomposition

3.1.1 Decomposition of Completely Specified Functions

First, we assume the variable partition is known and check whether the function is bi-decomposable under this variable partition; moreover, if the function is indeed bi-decomposable, we describe how to obtain the corresponding subfunctions. Then, we assume the variable partition is unknown and show how to derive a feasible and non-trivial variable partition.

Decomposition with known variable partition

Given a function f(X) and a non-trivial variable partition X = {XA|XB|XC}, we study if f can be written as fA(XA, XC) ∨ fB(XB, XC) for some functions fA and fB. Figure 3.1 depicts a schematic chart of or2-decomposition.

Observe the four-input function in Figure 3.2. The function is f(a, b, c, d) = ¬ab ∨ cd with XA = {c, d}, XB = {a, b}, and XC = {}. The function is or2-decomposable under this variable partition. The or2-decomposed function is

f(a, b, c, d) = fA(c, d) ∨ fB(a, b) = cd ∨ ¬ab

Figure 3.2: Decomposition chart of or2-decomposable function

The following proposition is the foundation of or2-decomposition.

Proposition 3.1 [11] Given a completely specified function f(X), it can be expressed as fA(XA, XC) ∨ fB(XB, XC) for some functions fA and fB if and only if the quantified Boolean formula

f(X) ∧ ∃XA.¬f(X) ∧ ∃XB.¬f(X)   (3.1)

is unsatisfiable.

By renaming quantified variables, the quantifiers of Formula (3.1) can be removed.

That is, Formula (3.1) can be expressed as the following quantifier-free formula, because existential quantification is implicit in satisfiability checking.

Proposition 3.2 A completely specified function f(X) can be written as fA(XA, XC) ∨ fB(XB, XC) for some functions fA and fB if and only if the Boolean formula

f(XA, XB, XC) ∧ ¬f(XA′, XB, XC) ∧ ¬f(XA, XB′, XC)   (3.2)

is unsatisfiable, where XA′ and XB′ are instantiated (renamed) versions of XA and XB.

Figure 3.3: Decomposition chart of a function which cannot be or2-decomposed

Figure 3.3 shows an example that cannot be or2-decomposed; that is, for Figure 3.3, Formula (3.2) is satisfiable. f(XA, XB, XC) means that there is a 1-entry in the decomposition chart. ¬f(XA′, XB, XC) and ¬f(XA, XB′, XC) mean that there exist 0-entries in the corresponding column and row of the 1-entry. For a function f(XA, XB, XC), we want to derive fA and fB such that f(XA, XB, XC) = fA(XA, XC) ∨ fB(XB, XC). ¬f(XA′, XB, XC) implies that fB(XB, XC) is 0. Similarly, ¬f(XA, XB′, XC) implies that fA(XA, XC) is 0. However, f(XA, XB, XC) implies that fA(XA, XC) or fB(XB, XC) is 1. Consequently, the example in Figure 3.3 cannot be or2-decomposed.

Note that the complementations in Formulas (3.1) and (3.2) need not be computed. Instead, they can be achieved by adding inverters in the corresponding circuit before circuit-to-CNF conversion, or alternatively by asserting the corresponding output variables to be false in SAT solving.
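On small functions, the check of Proposition 3.2 can be mimicked without a SAT solver by enumerating all assignments of Formula (3.2) directly. The sketch below is our own illustration (function and parameter names are ours); it reads the Figure 3.2 function ¬ab ∨ cd as (¬a ∧ b) ∨ (c ∧ d) and reports or2-decomposability exactly when Formula (3.2) has no satisfying assignment.

```python
from itertools import product

def or2_decomposable(f, XA, XB, XC):
    """Brute-force check of Proposition 3.2: f is or2-decomposable under
    {XA|XB|XC} iff f(XA,XB,XC) ∧ ¬f(XA',XB,XC) ∧ ¬f(XA,XB',XC) is unsatisfiable."""
    cube = lambda V: product((0, 1), repeat=len(V))
    def evalf(xa, xb, xc):
        args = dict(zip(XA, xa)); args.update(zip(XB, xb)); args.update(zip(XC, xc))
        return bool(f(**args))
    for xa, xb, xc in product(cube(XA), cube(XB), cube(XC)):
        if not evalf(xa, xb, xc):
            continue
        # A 1-entry that has a 0-entry in its column (XA renamed) and in its
        # row (XB renamed) is exactly a satisfying assignment of Formula (3.2).
        if any(not evalf(xa2, xb, xc) for xa2 in cube(XA)) and \
           any(not evalf(xa, xb2, xc) for xb2 in cube(XB)):
            return False                       # Formula (3.2) is satisfiable
    return True                                # Formula (3.2) is unsatisfiable

f = lambda a, b, c, d: ((not a) and b) or (c and d)      # Figure 3.2 function
print(or2_decomposable(f, XA=('c', 'd'), XB=('a', 'b'), XC=()))  # True

g = lambda a, b, c, d: (a and c) or (b and d)            # no such decomposition
print(or2_decomposable(g, XA=('c', 'd'), XB=('a', 'b'), XC=()))  # False
```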

By checking the satisfiability of Formula (3.2), we can determine whether a function is or2-decomposable. A remaining problem to be resolved is how to derive fA and fB if the function is indeed or2-decomposable. We show that they can be obtained through interpolation from a refutation proof of Formula (3.2). Consider partitioning the clause set of Formula (3.2) into two subsets fAon and fAoff as follows.

fAon: f(XA, XB, XC) ∧ ¬f(XA′, XB, XC)   (3.3)

fAoff: ¬f(XA, XB′, XC)   (3.4)

Then the corresponding interpolant corresponds to an implementation of fA. According to Formula (3.2), we construct the corresponding network shown in Figure 3.4 to derive fA.

Figure 3.4: Construction of fA

On the other hand, to derive fB we perform a similar computation. Since fA has been obtained, we can use fA to set the don't-care condition of fB. Therefore, we modify and partition the clause set of Formula (3.2) into two subsets fBon and fBoff as follows.

fBon: f(XA, XB, XC) ∧ ¬fA(XA, XC)   (3.5)

fBoff: ¬f(XA′, XB, XC)   (3.6)

Then the corresponding interpolant corresponds to an implementation of fB. Figure 3.5 illustrates the corresponding network to obtain fB.

Figure 3.5: Construction of fB

The following theorem shows the correctness of the above construction.

Theorem 3.1 For any or2-decomposable function f under variable partition X = {XA|XB|XC}, we have f(X) = fA(XA, XC) ∨ fB(XB, XC) for fA and fB derived from the above construction.

Proof.

To show that the interpolant obtained from the unsatisfiable conjunction of Formulas (3.3) and (3.4) indeed corresponds to fA, observe that the XA and XC variables are the only common variables shared by Formulas (3.3) and (3.4). Intuitively, Formula (3.3) (respectively Formula (3.4)), for every valuation on XC, characterizes the set of valuations on XA that must be in the onset (respectively offset) of fA. As the interpolant is an over-approximation of the onset and is disjoint from the offset, it is a valid implementation of fA.

On the other hand, the XB and XC variables are the only common variables shared by Formulas (3.5) and (3.6). They respectively correspond to the care onset and care offset of fB, where fA sets the don't-care condition of fB. Therefore, the interpolant obtained from the unsatisfiable conjunction of Formulas (3.5) and (3.6) is a valid implementation of fB.

Moreover, we give a detailed proof. According to the Craig Interpolation Theorem, fA and fB have the following properties.

f(XA, XB, XC) ∧ ¬f(XA′, XB, XC) ⇒ fA(XA, XC)   (3.7)

fA(XA, XC) ⇒ f(XA, XB′, XC)   (3.8)

f(XA, XB, XC) ∧ ¬fA(XA, XC) ⇒ fB(XB, XC)   (3.9)

fB(XB, XC) ⇒ f(XA′, XB, XC)   (3.10)

We want to prove that f (XA, XB, XC) equals fA(XA, XC)∨ fB(XB, XC). In other words, we have to prove

f (XA, XB, XC)⇔ fA(XA, XC)∨ fB(XB, XC). (3.11)

(⇒)

Formula (3.9) can be rewritten as the following formula.

f (XA, XB, XC)⇒ fA(XA, XC)∨ fB(XB, XC). (3.12)

(⇐)

Since Formulas (3.8) and (3.10) hold for every valuation of the instantiated variables XB′ and XA′, in particular for XB′ = XB and XA′ = XA, we obtain the following formulas.

fA(XA, XC)⇒ f(XA, XB, XC) (3.13)


fB(XB, XC)⇒ f(XA, XB, XC) (3.14)

Formulas (3.13) and (3.14) imply the following formula.

fA(XA, XC)∨ fB(XB, XC)⇒ f(XA, XB, XC) (3.15)

From (⇒) and (⇐), we conclude that f(XA, XB, XC) equals fA(XA, XC) ∨ fB(XB, XC). □
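For small functions there is also a brute-force way to exhibit a pair (fA, fB) witnessing Theorem 3.1, without interpolation: take fA = ∀XB. f and fB = ∀XA. f, which depend only on (XA, XC) and (XB, XC), respectively, and satisfy f = fA ∨ fB exactly when f is or2-decomposable under the partition. The sketch below (ours; it is not the interpolation-based derivation of the thesis and in general yields different subfunctions) illustrates this on the Figure 3.2 function.

```python
from itertools import product

def universal_quantify(f, all_vars, quant_vars):
    """Return ∀ quant_vars . f as a Python function; it depends only on the
    variables outside quant_vars (extra keyword arguments are ignored)."""
    def g(**kw):
        return all(bool(f(**{**kw, **dict(zip(quant_vars, q))}))
                   for q in product((0, 1), repeat=len(quant_vars)))
    return g

def or2_subfunctions(f, all_vars, XA, XB):
    """A valid (though not interpolant-based) choice of subfunctions:
    fA = ∀XB.f and fB = ∀XA.f; 'ok' reports whether f = fA ∨ fB holds."""
    fA = universal_quantify(f, all_vars, XB)   # depends on XA ∪ XC only
    fB = universal_quantify(f, all_vars, XA)   # depends on XB ∪ XC only
    ok = all(bool(f(**dict(zip(all_vars, m)))) ==
             (fA(**dict(zip(all_vars, m))) or fB(**dict(zip(all_vars, m))))
             for m in product((0, 1), repeat=len(all_vars)))
    return fA, fB, ok

f = lambda a, b, c, d: ((not a) and b) or (c and d)
fA, fB, ok = or2_subfunctions(f, ('a', 'b', 'c', 'd'), XA=('c', 'd'), XB=('a', 'b'))
print(ok)                            # True: f = fA ∨ fB
print(fA(c=1, d=1), fB(a=0, b=1))    # True True: sample onset points of fA and fB
```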

Remark:

An interpolant itself is in fact a netlist composed of or2 and and2 gates [10]. The "bi-decomposed" netlist, however, may contain some amount of redundancy; moreover, variable partitioning is not used in its derivation.

Decomposition with unknown variable partition

The previous construction assumes that a variable partition X = {XA|XB|XC} is given. We further automate variable partition in the derivation of fA and fB as follows. Since we do not know which variable set each primary input belongs to, we instantiate the variables X into X′ and X″. In addition, for each variable xi ∈ X, we introduce two control variables αxi and βxi to represent which variable set xi belongs to.

Let CA be the clause set of

f(X) ∧ ¬f(X′) ∧ ⋀i ((xi ≡ xi′) ∨ αxi)   (3.16)

and CB be the clause set of

¬f(X″) ∧ ⋀i ((xi ≡ xi″) ∨ βxi),   (3.17)

where xi′ ∈ X′ and xi″ ∈ X″ are the instantiated versions of xi ∈ X. Observe that (αxi, βxi) = (0, 0), (0, 1), (1, 0), and (1, 1) indicate xi ∈ XC, xi ∈ XB, xi ∈ XA, and xi can be in either XA or XB, respectively. Figure 3.6 summarizes these conditions for the different valuations of the control variables.

Figure 3.6: Summary of control variables in single-output bi-decomposition

In SAT solving the conjunction of Formulas (3.16) and (3.17), we make unit assumptions [7] on the control variables. Under an unsatisfiable unit assumption, the SAT solver will return a final conflict clause consisting of only the control variables.


Notice that every literal in the conflict clause is of positive phase because the conflict arises from a subset of the control variables being set to 0. It reveals that setting to 0 the control variables present in the conflict clause is sufficient to make the whole formula unsatisfiable. Hence setting to 1 the control variables absent from the conflict clause cannot affect the unsatisfiability.

The more control variables can be set to 1, the better the bi-decomposition is, because |XC| becomes smaller. In essence, this final conflict clause indicates a variable partition XA, XB, XC on X. For example, the conflict clause (αx1 ∨ βx1 ∨ αx2 ∨ βx3) indicates that the unit assumption αx1 = 0, βx1 = 0, αx2 = 0, and βx3 = 0 results in the unsatisfiability. It in turn suggests that x1 ∈ XC, x2 ∈ XB, and x3 ∈ XA.

To see how the new construction works, imagine setting all the control variables to 0. As SAT solvers tend to refer to a small subset of the clauses relevant to a refutation proof, the solver may return a conflict clause with just a few literals, which in effect yields a desirable variable partition.

This perception, unfortunately, is flawed in that SAT solvers are very likely to return a conflict clause that consists of all the control variables, reflecting the trivial variable partition XC = X. In order to avoid trivial variable partitions, we initially specify two distinct variables xa and xb to be in XA and XB, respectively, and all other variables in XC, that is, having (αxa, βxa) = (1, 0), (αxb, βxb) = (0, 1), and (αxi, βxi) = (0, 0) for i ≠ a, b in the unit assumption. We call such an initial variable partition a seed variable partition.


If the conjunction of Formulas (3.16) and (3.17) is unsatisfiable under a seed partition, then the corresponding bi-decomposition is successful. As SAT solvers often refer to a small unsatisfiable core, the returned variable partition is desirable because |XC| tends to be small. Otherwise, if the seed partition fails, we should try another one.

For a given function f(X) with |X| = n, the existence of a non-trivial or2-decomposition can be checked with at most (n − 1) + · · · + 1 = n(n − 1)/2 different seed partitions. On the other hand, we may enumerate different variable partitions using different seed partitions to find one that is more balanced and closer to disjoint. Even from a successful seed partition, we may further refine the returned variable partition by reducing the corresponding unsatisfiable core. The process can be iterated until the unsatisfiable core is minimal.
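To make the control-variable mechanism concrete, the following sketch rebuilds Formulas (3.16) and (3.17) for the small function f(x1, x2, x3, x4) = x1x2 ∨ x3x4 and solves them with MiniSat through the python-sat (pysat) package under the seed partition x1 ∈ XA, x3 ∈ XB. Everything here (the encoding, the variable numbering, and the helper names) is our own illustration rather than the thesis's implementation; pysat's get_core() plays the role of the final conflict clause and, as noted above, the returned core is not guaranteed to be minimal, so the printed partition may be more conservative than the ideal XA = {x1, x2}, XB = {x3, x4}.

```python
from pysat.solvers import Minisat22

N = 4                                   # f(x1..x4) = (x1 ∧ x2) ∨ (x3 ∧ x4)
X     = list(range(1, 5))               # x1..x4
Xp    = list(range(5, 9))               # x1'..x4'   (copy used in Formula (3.16))
Xpp   = list(range(9, 13))              # x1''..x4'' (copy used in Formula (3.17))
alpha = list(range(13, 17))             # α_x1..α_x4
beta  = list(range(17, 21))             # β_x1..β_x4

def f_clauses(v, and1, and2, out):
    """Tseitin clauses for out <-> (v1 ∧ v2) ∨ (v3 ∧ v4)."""
    cls = []
    for o, (i, j) in ((and1, (v[0], v[1])), (and2, (v[2], v[3]))):
        cls += [[-o, i], [-o, j], [o, -i, -j]]
    cls += [[-out, and1, and2], [out, -and1], [out, -and2]]
    return cls

clauses, outs, fresh = [], [], 21
for copy in (X, Xp, Xpp):
    clauses += f_clauses(copy, fresh, fresh + 1, fresh + 2)
    outs.append(fresh + 2)
    fresh += 3
clauses += [[outs[0]], [-outs[1]], [-outs[2]]]       # f(X) = 1, f(X') = 0, f(X'') = 0

for i in range(N):                                   # (xi ≡ xi') ∨ α_xi and (xi ≡ xi'') ∨ β_xi
    clauses += [[-X[i], Xp[i],  alpha[i]], [X[i], -Xp[i],  alpha[i]],
                [-X[i], Xpp[i], beta[i]],  [X[i], -Xpp[i], beta[i]]]

# Seed partition: x1 ∈ XA, x3 ∈ XB, all other variables in XC.
assumptions = [alpha[0], -beta[0], -alpha[2], beta[2],
               -alpha[1], -beta[1], -alpha[3], -beta[3]]

with Minisat22(bootstrap_with=clauses) as solver:
    print(solver.solve(assumptions=assumptions))     # False: decomposable under this seed
    core = set(solver.get_core())                    # plays the role of the conflict clause
    for i in range(N):
        a = 1 if alpha[i] in assumptions or -alpha[i] not in core else 0
        b = 1 if beta[i] in assumptions or -beta[i] not in core else 0
        where = {(0, 0): "XC", (0, 1): "XB", (1, 0): "XA", (1, 1): "XA or XB"}[(a, b)]
        print(f"x{i + 1}: {where}")
```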

Lemma 3.1 For an unsatisfiable conjunction of Formulas (3.16) and (3.17) under a seed variable partition, the final conflict clause contains only the control variables, and it indicates a valid non-trivial variable partition.

Proof. The values of the control variables are specified in the unit assumption as if they were made in the first decision level. In solving an unsatisfiable instance, both the 0 and 1 valuations of any other variable must have been tried and failed; only the control variables are not valuated in both phases. Because the unit assumption causes the unsatisfiability, the final learned clause indicates the conflicting decisions made in the first decision level. On the other hand, as discussed earlier, this clause corresponds to a valid variable partition, which is non-trivial since |XA|, |XB| ≥ 1 due to the seed variable partition. □

Theorem 3.2 asserts the correctness of the construction.

Theorem 3.2 For any or2-decomposable function f, we have f(X) = fA(XA, XC) ∨ fB(XB, XC) for fA, fB, and a non-trivial variable partition X = {XA|XB|XC} derived from the above construction.

Proof. Given an unsatisfiable conjunction of Formulas (3.16) and (3.17) under a seed variable partition, by Lemma 3.1 the final learned clause indicates which variables assigned to XC are unnecessary and can be placed in XA or XB instead. The resultant partition is indeed a non-trivial variable partition. Under the obtained variable partition, Theorem 3.1 is applicable, and functions fA and fB can be obtained. □

One might wonder whether (αx, βx) = (1, 1) is possible, as it suggests that x can be in either XA or XB. To answer this question, we study the condition under which x can be in either XA or XB.

Lemma 3.2 [14] If f is bi-decomposable under some variable partition, then the cofactors fx and f¬x for any variable x are both bi-decomposable under the same variable partition.

Proof. The lemma follows from the fact that cofactoring and Boolean operations, such as ∧, ∨, and ¬, commute. □

The converse, however, is not true. The following theorem gives the condition under which x can be in either XA or XB.

Theorem 3.3 Let X = {Xa|Xb|XC|{x}} for some x ∈ X. A function f = fA(XA, XC) ∨ fB(XB, XC) can be bi-decomposed under variable partition {Xa ∪ {x}|Xb|XC} as well as under variable partition {Xa|Xb ∪ {x}|XC} if and only if both fx and f¬x are themselves or2-decomposable under variable partition {Xa|Xb|XC}, and also (fx ≢ 1 ∧ f¬x ≢ 1) ⇒ (fx ≡ f¬x) under every c ∈ [[XC]].

Proof.

(=⇒)

For a bi-decomposable f, the bi-decomposability of fx and f¬x follows from Lemma 3.2. On the other hand, assume by contradiction that (fx ≢ 1 ∧ f¬x ≢ 1) ⇒ (fx ≡ f¬x) does not hold under every c ∈ [[XC]], i.e., (fx ≢ 1 ∧ f¬x ≢ 1) ∧ (fx ≢ f¬x) under some c ∈ [[XC]]. Since fx ≢ f¬x, assume without loss of generality that there exist some ma ∈ [[Xa]] and mb ∈ [[Xb]] such that fx(ma, mb, c) = 0 and f¬x(ma, mb, c) = 1. Then f¬x(ma, Xb, c) ≡ 1, since x can be in Xa and f is or2-decomposable. Also, f¬x(ma, Xb, c) ≡ 1 implies f¬x(Xa, Xb, c) ≡ 1, since x can be in Xb and f is or2-decomposable. This contradicts the assumption that f¬x(Xa, Xb, c) ≢ 1.

(⇐=)


Consider any c ∈ [[XC]]. For fx ≡ f¬x, x is not a support variable of f and thus can be redundantly placed in XA or XB. On the other hand, for fx ≢ f¬x, at least one of fx and f¬x is a tautology. If fx ≡ 1, f is or2-decomposable no matter whether x ∈ XA or x ∈ XB, since f¬x is or2-decomposable. Similarly, if f¬x ≡ 1, f is or2-decomposable. Hence x can be in either of XA and XB in the or2-decomposition of f. □

That is, under every c ∈ [[XC]], either x is not a support variable of f, or fx or f¬x equals constant one. It is interesting to note that only the former can make (αx, βx) = (1, 1). Whenever the latter happens, (αx, βx) equals (0, 1), (1, 0), or (0, 0) if the solver is unable to identify a minimal unsatisfiable core. To see this, consider fx ≡ 1 (and similarly for f¬x ≡ 1) and f¬x ≢ fx under some c ∈ [[XC]]. If (αx, βx) = (1, 1), Formula (3.2) reduces to

(∃x.f(Xa, Xb, c, x)) ∧ ¬(∀x.f(Xa, Xb, c, x)) ∧ ¬(∀x.f(Xa, Xb, c, x))

= 1 ∧ ¬f¬x(Xa, Xb, c) ∧ ¬f¬x(Xa, Xb, c),

which is satisfiable because f¬x ≢ 1 under c. Hence (αx, βx) = (1, 1) is possible only if x is not a support variable of f.

3.1.2 Decomposition of Incompletely Specified Functions

Proposition 3.2 can be generalized for incompletely specified functions as follows.


Proposition 3.3 Given an incompletely specified function F = (q, d, r), there exists a completely specified function f with f(X) = fA(XA, XC) ∨ fB(XB, XC), q(X) ⇒ f(X), and f(X) ⇒ ¬r(X) if and only if the Boolean formula

q(XA, XB, XC) ∧ r(XA′, XB, XC) ∧ r(XA, XB′, XC)   (3.18)

is unsatisfiable.

The derivations of fA and fB can be computed in a way similar to the aforementioned construction.

3.2 AND Bi-decomposition

Proposition 3.4 [14] A function f is and2-decomposable if and only if ¬f is or2-decomposable.

By decomposing ¬f as fA∨ fB, we obtain f = ¬fA∧ ¬fB. Hence our results on or2-decomposition are convertible to and2-decomposition.

3.3 XOR Bi-decomposition

In this section, we check xor-decomposability using SAT solving and obtain a feasible variable partition if there exists a non-trivial xor-decomposition. How-
