
Mathematical Representation

We have exemplified two cases of how network coding applies to multicast systems in different networks and improves upon current routing methods. However, we cannot intuitively foresee and control the operation of every node and simply expect a straightforward construction to work. In this subchapter, we formulate a mathematical model that generalizes the network coding problem. With this formulation, we can analyze and resolve the problem systematically.

Network coding enhances the flow in a network by performing computation on the original data, either at the sources or at intermediate nodes. Every data packet traveling through the network can be regarded as one combination of all the intrinsic data. (Here, only linear operations are discussed, for simplicity of implementation.) The original data spans one vector space, and the packets on the edges span another; that is, every edge carries a mapping between the two spaces. The functionality of every node becomes mapping the symbols received from its incoming edges to a symbol for each outgoing edge. Network coding thus reduces to specifying the encoding process of every edge.
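As a minimal illustration (the packet values here are assumed, not taken from the text), over GF(2) a linear combination of packets is just a bitwise XOR, and a sink that already holds one original packet can strip it back out of the combination:

```python
# Hypothetical packets as bit patterns; over GF(2), addition is XOR.
b1 = 0b1011
b2 = 0b0110

coded = b1 ^ b2          # an intermediate node forwards the combination b1 + b2

# A sink that already received b1 recovers b2 from the coded packet:
recovered = coded ^ b1
print(recovered == b2)   # True
```

The same XOR that mixes the packets also separates them, which is why GF(2) is the simplest field to implement.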

For clarity, the definitions and symbol notations used in our mathematical representation are listed below. The notation is adopted from [2].

Notations

• Source: a node without any real incoming edges.

• Every edge in the graph represents a channel with a capacity of one data unit per unit time.

• In(T)/Out(T): the set of incoming/outgoing edges of node T.

• In(S): a set of imaginary edges, without originating nodes, attached to the source S.

• ω: the number of imaginary edges.

• Data unit: an element of the finite field GF(F).

• Message x: an ω-dimensional row vector x ∈ F^ω.

• A network code is specified over GF(F) and has dimension ω.

Definition 2.3.1. A network code consists of a local encoding mapping k̃_e : F^|In(T)| → F for each node T in the network and each channel e ∈ Out(T).

By Definition 2.3.1, we construct the transform between the incoming and outgoing edges at each node. Since an acyclic network admits an upstream-to-downstream ordering, data is transmitted along paths composed of edges, and the symbol on each edge is the result of the successive transforms applied along all edges traversed before it. Hence, we give another definition to represent the outcome of this recursive mapping.

Definition 2.3.2. A network code consists of a local encoding mapping k̃_e : F^|In(T)| → F and a global encoding mapping f̃_e : F^ω → F for each edge e in the network such that:

• For every node T and edge e ∈ Out(T), f̃_e(x) is uniquely determined by (f̃_d(x), d ∈ In(T)), and k̃_e is the mapping

(f̃_d(x), d ∈ In(T)) ↦ f̃_e(x)

• For the ω imaginary channels e ∈ In(S), the mappings f̃_e are the natural projections from the space F^ω onto its ω coordinates, respectively.

Considering physical implementation, fast computation and simple circuitry at each node are desirable; therefore, linear transformations are adopted. If the encoding mapping f̃_e(x) is linear, there exists a corresponding ω-dimensional column vector f_e such that f̃_e(x) = x · f_e, where x is the ω-dimensional row vector generated by the source. Similarly, there exists an |In(T)|-dimensional column vector k_e such that k̃_e(y) = y · k_e, where y ∈ F^|In(T)| is the vector of symbols received at node T. Since every edge has its own mapping column vector, we can formulate the operation at a node over all the edges connected to it. If a pair of edges (d, e) satisfies d ∈ In(T) and e ∈ Out(T) for some node T, we call (d, e) an adjacent pair. We can therefore formulate the coding process at every node in matrix form.
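As a sketch of the inner-product form above (the message and kernel values here are assumed for illustration), the symbol carried on an edge e is simply x · f_e, evaluated over GF(2):

```python
import numpy as np

x = np.array([1, 0])     # message row vector (b1, b2) over GF(2)
f_e = np.array([1, 1])   # assumed global kernel of a bottleneck-style edge
symbol = int(x @ f_e) % 2   # f̃_e(x) = x · f_e, reduced mod 2
print(symbol)            # 1, i.e. b1 XOR b2
```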

Definition 2.3.3. A linear network code consists of a scalar k_{d,e}, called the local encoding kernel, for every adjacent pair (d, e). The local encoding kernel at node T is the |In(T)| × |Out(T)| matrix

K_T = [k_{d,e}], d ∈ In(T), e ∈ Out(T)

Network coding can therefore be viewed as designing the matrix of every node, and the mapping of every edge as a sequence of multiplications by the corresponding columns of the matrices of the nodes the data passes through. Note that the structure of the matrix fixes the ordering of the connected edges.
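A small sketch of Definition 2.3.3 (the node size and kernel values are assumed): collecting the local kernels of a node T into K_T lets one matrix product y · K_T produce the symbols for all outgoing edges at once:

```python
import numpy as np

# Assumed node with 2 incoming and 2 outgoing edges;
# each column of K_T is the local kernel k_e of one outgoing edge.
K_T = np.array([[1, 1],
                [0, 1]])
y = np.array([1, 1])        # symbols received on the two incoming edges
out = (y @ K_T) % 2         # one symbol per outgoing edge, over GF(2)
print(out)                  # [1 0]
```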

Definition 2.3.4. A linear network code consists of a scalar k_{d,e} for every adjacent pair (d, e) in the network, as well as an ω-dimensional column vector f_e for every channel e, such that:

• f_e = Σ_{d ∈ In(T)} k_{d,e} f_d, where e ∈ Out(T).

• The vectors f_e for the ω imaginary channels e ∈ In(S) form the natural basis of the vector space F^ω.

• The vector f_e is called the global encoding kernel for the channel e.
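The recursion in Definition 2.3.4 can be sketched as follows. The edge names and topology here are assumed for illustration (they follow the usual butterfly labeling, not necessarily Fig. 2.3), with every local kernel k_{d,e} set to 1 over GF(2):

```python
import numpy as np

omega = 2
# Global kernels of the two imaginary source edges: natural basis of F^2.
f = {'o1': np.array([1, 0]), 'o2': np.array([0, 1])}

# (edge, incoming edges) listed in topological order; all k_{d,e} = 1 here.
edges = [
    ('ST', ['o1']), ('SU', ['o2']),
    ('TY', ['ST']), ('TW', ['ST']),
    ('UW', ['SU']), ('UZ', ['SU']),
    ('WX', ['TW', 'UW']),            # bottleneck edge: carries b1 + b2
    ('XY', ['WX']), ('XZ', ['WX']),
]

for e, parents in edges:
    # f_e = sum of k_{d,e} * f_d over GF(2)
    f[e] = sum(f[d] for d in parents) % 2

print(f['XY'])   # [1 1]: the XOR combination of both data units
```

Because the network is acyclic, one pass in topological order suffices: every parent kernel is available before it is needed.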

2.3.1 Butterfly Network over GF(2)


Figure 2.7: Corresponding mapping of butterfly network in Fig 2.3

The corresponding edge mappings and the operation matrix of every node in Fig 2.3 are shown in Fig 2.7. The source S has two imaginary edges, and the global encoding kernels of these two edges represent the mapping of the original data into the information data b1 and b2. The exclusive-or operation indicates that the computation is over GF(2). According to the matrix of every node, we can calculate the global encoding kernel f_e of every edge.

We give an example of deriving the global encoding kernel of Definition 2.3.4. Observing the source matrix K_S with 2 incoming and 2 outgoing edges, each element of the matrix represents the scalar for one specific pair of linked edges. Based on Definition 2.3.4, the global encoding kernel of an outgoing edge is the sum of the global encoding kernels of its incoming edges, weighted by the corresponding scalars in the node matrix:

f_ST = Σ_{d ∈ In(S)} k_{d,ST} f_d

Fig 2.7 shows the special case where the chosen finite field is GF(2). In general, the scalars in every matrix and all computations are over GF(F), as generalized in Fig 2.8.


Figure 2.8: Generalized mapping of the butterfly network in Fig 2.7

2.3.2 Butterfly Network over GF(F)

In Fig 2.8, each global kernel can be calculated by the same steps described above. The design parameters are the scalars in the node matrices, such as n, p, q, r, . . . , z. The assignment of these scalars determines how efficiently the network is utilized. Concerning sink Y, if we want to reach the theoretical maximum of 2, the global kernels f_TY and f_XY must be linearly independent; namely, the space spanned by these two vectors must have dimension 2. The condition for the other sink Z is the same. If the two vectors are linearly dependent, the sink suffers a decrease in flow. We can therefore remark that when the source transmits a message of ω data units into the network, a receiving node T obtains sufficient information to decode the message if and only if dim(V_T) = ω, where V_T is the space spanned by the global kernels of the edges in In(T); a necessary prerequisite is that maxflow(T) ≥ ω. This prerequisite delimits where network coding is needed to enhance the utility of the network: if maxflow(T) > ω, the network can afford the whole transmitted data, there is no bottleneck, and transmission can certainly be accomplished without difficulty.
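The decodability condition dim(V_T) = ω can be checked mechanically as a rank computation over the finite field. A sketch for GF(2) (the kernel values at sink Y are assumed, butterfly-style):

```python
import numpy as np

def rank_gf2(rows):
    """Rank of a 0/1 matrix over GF(2) via Gaussian elimination."""
    m = [np.array(r) % 2 for r in rows]
    rank = 0
    for col in range(len(m[0])):
        # find a pivot row with a 1 in this column
        pivot = next((i for i in range(rank, len(m)) if m[i][col]), None)
        if pivot is None:
            continue
        m[rank], m[pivot] = m[pivot], m[rank]
        # eliminate this column from every other row (XOR over GF(2))
        for i in range(len(m)):
            if i != rank and m[i][col]:
                m[i] = (m[i] + m[rank]) % 2
        rank += 1
    return rank

# Assumed global kernels seen by sink Y: f_TY = [1,0], f_XY = [1,1].
print(rank_gf2([[1, 0], [1, 1]]))   # 2 == omega, so Y can decode
print(rank_gf2([[1, 1], [1, 1]]))   # 1 < omega: dependent kernels, flow lost
```

Note that the elimination must be done mod 2; an ordinary real-valued rank can disagree with the rank over GF(2).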

We have converted linear network coding into matrix form, and seen that the key to enhancing throughput and decoding the information successfully is a well-designed set of coefficients in the matrix of each node in the whole network. However, it is difficult to implement this design directly, so a random coding mechanism is introduced in the next subchapter.
