

2.2 Flow of Model

2.2.3 Throughput flow

The throughput flow of the fluid queuing model is the following strategy: given a network C, an inflow µ0, and a period θ = 0 ∼ t, the particles from µ0 are routed so as to send the maximal mass of particles through C at inflow µ0 during θ = 0 ∼ t.

Definition 4. The maximal mass of particles is denoted by Mput(C, µ0, t). Sometimes we instead use an inflow function of infinite inflow rate during θ = 0 ∼ t, which affords an upper bound of Mput(C, µ0, t), denoted by Mput(C, ∞, t). When it is clear from the context, we use Mput(t) to denote Mput(C, ∞, t).
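For intuition, here is a minimal sketch of how Mput(C, ∞, t) can be evaluated for a parallel-link network, using the closed form that appears later in the proof of Theorem 1: with unbounded inflow, an edge (vi, τi) can push vi · max(0, t − τi) mass through by time t. The instance below is hypothetical.

```python
# Infinite-inflow throughput of a parallel-link network by time t:
# each edge (capacity v, delay tau) contributes v * max(0, t - tau).
def mput_parallel(edges, t):
    return sum(v * max(0.0, t - tau) for (v, tau) in edges)

edges = [(0.5, 0.0), (1.0, 1.0)]        # hypothetical (capacity, delay) pairs
for t in (0.5, 1.0, 1.5):
    print(t, mput_parallel(edges, t))   # 0.25, 0.5, 1.25
```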

2.3 Networks of Model

Definition 5 (Parallel-link Network). Given a network C in the fluid queuing model, C is a parallel-link network if e = (s, t) for all e ∈ E.

Definition 6 (Series-parallel Network). Series-parallel networks are defined by induction: start from parallel-link networks; at each step, a series-linking or parallel-linking operation may be applied to another well-defined series-parallel network. All possible results form the series-parallel networks.

Definition 7. Given a series-parallel network C, the maximal number of nodes of a path among all paths is denoted by D(C).
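As an illustration of Definition 7, the sketch below computes D(C) for a small, hypothetical series-parallel DAG by taking the maximum number of nodes over all s–t paths.

```python
from functools import lru_cache

# A toy series-parallel DAG: the path s -> a -> t in parallel with the link s -> t.
edges = {"s": ["a", "t"], "a": ["t"], "t": []}

@lru_cache(maxsize=None)
def max_nodes(v):
    # number of nodes on the longest (by node count) path from v to t
    if v == "t":
        return 1
    return 1 + max(max_nodes(w) for w in edges[v])

print(max_nodes("s"))   # D(C) = 3 here, attained by s -> a -> t
```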

Definition 8 (Parallel-group-link Network). Given ϵ > 0 and a parallel-link network C. If there exist intervals [L1, R1], ..., [LN, RN] with Ri/Li ≤ 1 + ϵ and Li+1/Ri ≥ 1/ϵ, and the delay time of each edge of C lies in one of the intervals, then C is called a parallel-group-link network.

This notion is connected to the routing game with groups of similar links; refer to [11].

Figure 2.1: The diagram of “(2 + 2)­parallel­link Network”. In the first stage C1, the source is s and the sink is p; In the second stage C2, the source is p and the sink is t.

Definition 9 ((2 + 2)-parallel-link Network). Given parallel-link networks C1, C2 and C1's inflow µ0, series-link C1 to C2, denoted by C1 + C2. Consider the OPT flow and the EQU flow on this linked network C1 + C2, and let m1 and m2 denote the maximal numbers of edges used by the OPT flow or the EQU flow in the C1 part and the C2 part respectively. If m1, m2 ≤ 2, we call C1 + C2 a (2 + 2)-parallel-link network.

Chapter 3

Main

We will show several bounds on the PoA in the fluid queuing model in this chapter. These include tight upper bounds on parallel-link networks and (2 + 2)-parallel-link networks, and a finite upper bound on series-parallel networks.

3.1 Parallel­link Networks

First of all, we need Lemma 1 to evaluate the upper bound of the Price of Anarchy for networks.

Lemma 1. Consider the fluid queuing model with network C and inflow µ0. If for every time θ0

Mput(L(θ0)) ≤ k · Q(θ0),

then TEQU ≤ (k + 1) · TOPT, where Mput is the maximal mass of particles sent through the network, Q is the mass of particles existing in the network, and TOPT, TEQU are the arrival times of all particles in the OPT flow and the EQU flow respectively.

Proof. Let θ̂ be the last leaving time and let L̂ = L(θ̂) := lt(θ̂) − θ̂ be the shortest travel time at θ̂. We have:

TEQU − TOPT = lt(θ̂) − TOPT = L̂ + θ̂ − TOPT ≤ L̂,   (∗)

Figure 3.1: The diagram of the "PoA of 2 Example". The left part is the parallel-link network with edges {e1 = (a, 0), e2 = (1, 1/3)}; the right part is the inflow function, which is a step function with range {1, a}.

where inequality (∗) holds by considering the leaving time and the arrival time of the last particle in the OPT flow. On the other hand, for the total amount M we have:

M ≥ Q(θ̂) ≥ (1/k) · Mput(L(θ̂)) ≥ Mput((1/k) · L(θ̂)),

where the last inequality holds since we can copy the infinite inflow during θ = 0 ∼ (1/k)L(θ̂) for k times as an option of infinite inflow during θ = 0 ∼ L(θ̂). On the other hand, since we have to send M mass of particles in the OPT flow, we get:

TOPT ≥ (1/k) · L(θ̂) = (1/k) · L̂.

Combining this with (∗) gives TEQU ≤ TOPT + L̂ ≤ TOPT + k · TOPT = (k + 1) · TOPT.

Theorem 1. In the fluid queuing model, the Price of Anarchy of 2 for parallel-link networks is tight.

Proof. Given a parallel-link network C and inflow µ0, denote ej = (vj, τj) for all ej ∈ E, indexed so that τ1 ≤ τ2 ≤ · · ·. At any moment θ0, suppose the EQU flow uses n edges; then we have:

L(θ0) ≤ τn+1.

For each edge, the travel time minus the delay time of the edge is the queuing time. Hence, we can calculate the mass of particles queuing in C at θ0 as a lower bound of Q(θ0):

Q(θ0) ≥ ∑_{i=1}^{n} vi [L(θ0) − τi],

where Q(θ0) is the mass of particles existing in C at θ0. On the other hand, for each edge, the time given to Mput minus the delay time of the edge is the total time available to pass particles. We have:

Mput(L(θ0)) = ∑_{i=1}^{n} vi [L(θ0) − τi].

This implies Q(θ0) ≥ Mput(L(θ0)). It then follows from Lemma 1 that PoA ≤ 2. Furthermore, let us provide an example for the Price of Anarchy of 2: given 1 > a > 0, consider the network and inflow shown in Figure 3.1.

Theorem 2. In the fluid queuing model, if all edges lie in one aggregated group, the Price of Anarchy of 2 − 1/(1 + ϵ) ≈ 1 + ϵ for parallel-group-link networks is tight, where ϵ is given by the network.

Proof. Given a parallel-group-link network C whose edges all lie in one aggregated group, and inflow µ0, denote ej = (vj, τj) for all ej ∈ E. At the last leaving time θ̂, for the shortest travel time L̂, we have the condition:

L̂ / τj ≤ 1 + ϵ,   ∀ej ∈ E.

Similarly to Lemma 1, we have:

TEQU − TOPT = L̂ + θ̂ − TOPT ≤ (1 − 1/(1 + ϵ)) L̂ + τ1 + θ̂ − TOPT ≤ (1 − 1/(1 + ϵ)) L̂,   (∗)

where the last inequality (∗) holds by considering the leaving time plus the minimal delay time and the arrival time of the last particle in the OPT flow. Hence, we afford:

2 − 1/(1 + ϵ) ≥ TEQU / TOPT = PoA.
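As a quick numeric check of how the bound 2 − 1/(1 + ϵ) behaves like 1 + ϵ for small ϵ (the sample values of ϵ are our own picks):

```python
# Compare the parallel-group-link bound 2 - 1/(1 + eps) with its approximation 1 + eps.
for eps in (0.5, 0.1, 0.01):
    print(eps, 2 - 1 / (1 + eps), 1 + eps)
# 0.5 -> 1.3333 vs 1.5,  0.1 -> 1.0909 vs 1.1,  0.01 -> 1.0099 vs 1.01
```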

Finally, similarly to Theorem 1, the example for the Price of Anarchy of 2 − 1/(1 + ϵ) is shown below:

Lemma 2. Consider the fluid queuing model with network C, inflow µ0, and the total capacity of the minimal cut, Min-Cut(C). The following inequality holds:

M ≤ TOPT · Min-Cut(C),

where M is the total amount and TOPT is the arrival time of all particles in the OPT flow.

Proof. In the OPT flow, each particle going from s to t has to pass through one of the edges contained in the minimal cut of C, and the throughput of the minimal cut is at most Min-Cut(C) at any moment. As a result, the inequality holds.
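As a quick sanity check of Lemma 2 on a parallel-link instance: for parallel s–t links every edge crosses the single s–t cut, so Min-Cut(C) is just the sum of the capacities. The numbers below are those of the example in Figure 4.1 (Chapter 4), used here only for illustration.

```python
# Lemma 2: M <= T_OPT * Min-Cut(C), checked on a two-link parallel instance.
capacities = [0.5, 1.0]
min_cut = sum(capacities)      # every parallel s-t link crosses the single cut
M, T_opt = 1.0, 1.5
assert M <= T_opt * min_cut    # 1 <= 2.25 holds
print(min_cut, T_opt * min_cut)
```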

Theorem 3. In the fluid queuing model, given a network C and inflow µ0, if each of C's edges is used by the OPT flow at some moment, then the Price of Anarchy is at most 2|V| − 1.

Proof. At the last leaving time θ̂, record lv(θ̂) for all v ∈ V − {s} and sort them into increasing order t1 ≤ t2 ≤ · · · ≤ t|V|−1.

Now, for the network C and an interval (ti−1, ti), consider Min-Cut(C) and the index set of the interval S := {k | ek ∈ [ti−1, ti]}; we have:

Conjecture 1. In the fluid queuing model, removing each OPT-unused edge of a network only worsens the Price of Anarchy. Hence, the Price of Anarchy is at most 2|V| − 1 for general networks.

Remark 4. In parallel-link networks, this conjecture is true. In general networks, this conjecture could be false; take Pigou's example, for instance. The conjecture may still hold in series-parallel networks.

Theorem 4. In the fluid queuing model, the Price of Anarchy is at most D(C) for a series-parallel network C, where D(C) denotes the maximal number of nodes of a path among all s-to-t paths.

Claim: Given a series-parallel network C, inflow µ0, and any moment θ0, for the throughput flow Mput, the shortest travel time L, and the mass of particles existing in the network Q, we have the inequality:

Mput(C, ∞, L(C, µ0, θ0)) ≤ (D(C) − 1) Q(C, µ0, θ0),

which implies the theorem by Lemma 1.

Proof. (a) Base case

In a parallel-link network, D(C) = 2. The statement is true by Theorem 1.

(b) Parallel-linking

Suppose C is obtained by a parallel-linking operation that links the series-parallel network C1 to C2. For C's inflow µ0, let the EQU flow's inflow into C1 be µ0^up and into C2 be µ0^dn, so that µ0 = µ0^up + µ0^dn. For any moment θ0 we have:

L(C1, µ0^up, θ0) ≥ L(C, µ0, θ0),   L(C2, µ0^dn, θ0) ≥ L(C, µ0, θ0),

Q(C, µ0, θ0) = Q(C1, µ0^up, θ0) + Q(C2, µ0^dn, θ0).

Induction hypothesis of the Claim:

given C1, µ0^up, θ0 : Mput(C1, ∞, L(C1, µ0^up, θ0)) ≤ (D(C1) − 1) Q(C1, µ0^up, θ0),
given C2, µ0^dn, θ0 : Mput(C2, ∞, L(C2, µ0^dn, θ0)) ≤ (D(C2) − 1) Q(C2, µ0^dn, θ0).

This implies:

Mput(C, ∞, L(C, µ0, θ0)) ≤ Mput(C1, ∞, L(C1, µ0^up, θ0)) + Mput(C2, ∞, L(C2, µ0^dn, θ0))   (∗)
≤ (D(C1) − 1) Q(C1, µ0^up, θ0) + (D(C2) − 1) Q(C2, µ0^dn, θ0)
≤ max(D(C1) − 1, D(C2) − 1) [Q(C1, µ0^up, θ0) + Q(C2, µ0^dn, θ0)]
= (D(C) − 1) Q(C, µ0, θ0),

where D(C) = max(D(C1), D(C2)). The first inequality (∗) holds since the throughput of C is just the sum of the throughputs of C1 and C2 over the same period of time, and the second follows from the induction hypotheses.

(c) Series-linking

Suppose C is obtained by a series-linking operation that links the series-parallel network C1 to C2 at the node p. For C1's inflow µ0, let the EQU flow's outflow of C1 be µ1, which is C2's inflow. For any moment θ0, define θ1 = lp(θ0). We have:

L(C, µ0, θ0) = lt(θ0) − θ0 = [lt(θ0) − θ1] + [lp(θ0) − θ0] = L(C2, µ1, θ1) + L(C1, µ0, θ0),

Q(C, µ0, θ0) ≥ Q(C1, µ0, θ0),   (∗)

Q(C, µ0, θ0) ≥ Q(C2, µ1, θ1).   (∗∗)

Inequality (∗) holds by considering the mass of particles existing in C1 at θ = θ0 as a lower bound; inequality (∗∗) holds by considering the mass of particles existing in C2 at θ = θ1 as a lower bound.

Induction hypothesis of the Claim:

given C1, µ0, θ0 : Mput(C1, ∞, L(C1, µ0, θ0)) ≤ (D(C1) − 1) Q(C1, µ0, θ0),
given C2, µ1, θ1 : Mput(C2, ∞, L(C2, µ1, θ1)) ≤ (D(C2) − 1) Q(C2, µ1, θ1).

This implies:

Mput(C, ∞, L(C, µ0, θ0)) ≤ Mput(C1, ∞, L(C1, µ0, θ0)) + Mput(C2, ∞, L(C2, µ1, θ1))   (∗)
≤ (D(C1) − 1) Q(C1, µ0, θ0) + (D(C2) − 1) Q(C2, µ1, θ1)
≤ [D(C1) − 1 + D(C2) − 1] Q(C, µ0, θ0)
= (D(C) − 1) Q(C, µ0, θ0),

where D(C) = D(C1) + D(C2) − 1 and the second inequality follows from the induction hypotheses. Moreover, the first inequality (∗) holds since we can consider C2 sending infinite particles without delay time during θ = 0 ∼ θ1 and C1 sending infinite particles without delay time during θ = θ1 ∼ lt(θ0) as an upper bound of Mput.
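The two composition rules used in steps (b) and (c), D(C) = max(D(C1), D(C2)) for parallel-linking and D(C) = D(C1) + D(C2) − 1 for series-linking, can be evaluated mechanically. The tuple encoding below is our own convenience, not notation from the text.

```python
# D(C) for a series-parallel expression: ("link",) is a parallel-link network,
# ("P", x, y) is parallel-linking and ("S", x, y) is series-linking of x and y.
def D(net):
    kind = net[0]
    if kind == "link":
        return 2                       # a parallel-link network has only s and t
    _, left, right = net
    if kind == "P":
        return max(D(left), D(right))  # parallel-linking keeps the larger depth
    if kind == "S":
        return D(left) + D(right) - 1  # series-linking shares one node
    raise ValueError(kind)

# (link series link) in parallel with a link: D = max(2 + 2 - 1, 2) = 3
print(D(("P", ("S", ("link",), ("link",)), ("link",))))
```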

3.3 (2 + 2)­parallel­link Network

Remark 5. We remove all edges beyond the first m1 and m2 of each part, where m1 and m2 are the maximal numbers of edges used by the OPT flow or the EQU flow in the C1 and C2 parts respectively; this has no influence on our model. On the other hand, we write ej = (aj, σj) in C1 and ej = (vj, τj) in C2.

Now, consider the flows on the network C1 with inflow µ0. Define TEQU|C1, TOPT|C1 as the arrival times of all particles in the EQU flow and the OPT flow respectively. Let us introduce a new model, named the "Restricted Inflow Model", as below:

Definition 10 (Restricted Inflow Model). In the EQU flow, consider any moment θ0 such that µ0(θ0) ≥ a1 + ... + am1 and lt(θ0) = σm1. We imitate the OPT flow: let µ0(θ0) − (a1 + ... + am1) mass of particles wait at s until µ0 ≤ a1 + ... + am1 and all previous particles have left.

In the EQU flow after this operation, the arrival time of each particle is the same as usual, since the order of any pair of particles is conserved and each edge passes the same mass of particles as usual. Furthermore, the travel time of each particle is at most σm1 in the Restricted Inflow Model. Finally, each particle in the EQU flow departs no later than in the OPT flow, since there are never more particles waiting at s in the EQU flow.

Definition 11 (Different Arriving Time). Given a parallel-link network C1, inflow µ0, and a particle p from µ0, define d(C1, µ0, p) as the difference between the arrival time of particle p in the EQU flow and in the OPT flow. Define d(C1, µ0) = sup_p d(C1, µ0, p).

Following from the definition, we have:

d(C1, µ0, p) ≤ σm1,

TEQU|C1 − TOPT|C1 ≤ d(C1, µ0) ≤ σm1,   (∗)

where inequality (∗) holds by considering the last particle as a special case of particle p.

Definition 12 (EO flow). Given a (2 + 2)-parallel-link network C1 + C2 and inflow µ0. For the EQU flow in network C1 with inflow µ0, take the outflow of C1 as the inflow of C2, denoted by µ1. Now, consider the OPT flow of network C2 with inflow µ1; this is the EO flow in the (2 + 2)-parallel-link network C1 + C2. Moreover, define the arrival time of all particles in the EO flow as the arrival time of all particles in the OPT flow of network C2 with inflow µ1.

Define n2 as the maximal number of edges used by the EO flow or the EQU flow in the C2 part.

Applying the above bound to the network C2 with inflow µ1, we have:

TEQU − TEO ≤ τn2 ≤ τm2,

where m2 is the maximal number of edges used by the OPT flow or the EQU flow in the C2 part. Since we removed all edges beyond the first m2, we have n2 ≤ m2.

Lemma 3. In the fluid queuing model, given a (2 + 2)-parallel-link network with C1 series-linked to C2 and inflow µ0, denote ej = (aj, σj) in C1 and ej = (vj, τj) in C2. For the OPT flow and the EQU flow, we have:

TEQU − TOPT ≤ σm1 + τm2.

Proof. Following from the above, for the EO flow in this (2 + 2)-parallel-link network, we have:

TEQU − TEO ≤ τm2,   TEQU|C1 − TOPT|C1 ≤ σm1.

In the EO flow, consider the following strategy: for every particle p arriving at the source of C2, let it wait at the source for σm1 − d(C1, µ0, p) time and then leave. In this case, the EO flow has the same inflow into C2 as the OPT flow, but delayed by exactly σm1 time. That is, where µ2(θ) denotes the inflow of C2 in the OPT flow, the EO flow's inflow of C2 at time θ + σm1 is µ2(θ). This relationship implies TEO − TOPT = σm1. However, the EO flow may choose some better solution, which implies

TEO − TOPT ≤ σm1,

TEQU − TOPT = (TEQU − TEO) + (TEO − TOPT) ≤ σm1 + τm2.

Remark 6. In the proof of Lemma 3, for the EO flow, we let particles wait at the intermediate point p. Although this is not allowed in our model, it only worsens TEQU and makes the PoA larger, so the upper bound on the PoA is still true if we allow waiting at the intermediate point p. Hence, we allow it.

Lemma 4. Given a (2 + 2)-parallel-link network C1 + C2, either TOPT = TEO or TOPT ≥ σ2 + τ2, where TOPT, TEO are the arrival times of all particles in the OPT flow and the EO flow respectively, (a1, σ1), (a2, σ2) denote the edges of C1, and (v1, τ1), (v2, τ2) denote the edges of C2.

Proof. Given a (2 + 2)-parallel-link network C1 + C2 and inflow µ0, let m1 and m2 be the maximal numbers of edges used by the OPT flow or the EQU flow in the C1 part and the C2 part respectively.

Now, if m1 = 1, then TOPT = TEO. Else, consider m1 = 2 and divide it into two cases:

(a) m1 = 2 and a1 ≥ v1

During θ = 0 ∼ σ2, the OPT flow and the EO flow have the same inflow and outflow at C2. Now, if m2 = 1, then after θ = σ2 the OPT flow and the EO flow still have the same outflow at C2, implying TOPT = TEO. Else, m2 = 2, and both OPT and EO activate (v2, τ2) at θ = σ1.

• If the EO flow shuts down (v2, τ2) first, then TEO ≤ TOPT, which implies TEO = TOPT.

• If the OPT flow shuts down (v2, τ2) first but before θ = σ2 + τ2, then the OPT flow and the EO flow have the same outflow at C2 afterwards, which passes the same mass of particles. We let the EO flow shut down (v2, τ2) at the same time to afford TEO = TOPT.

• If the OPT flow shuts down (v2, τ2) first and after θ = σ2 + τ2, then TOPT ≥ σ2 + τ2.

(b) m1 = 2 and a1 ≤ v1

If m2 = 1, consider the EO flow and the OPT flow. Either they have entirely the same outflow, in which case TEO = TOPT; or they have the same outflow during θ = 0 ∼ σ2 + τ1, which implies TOPT ≥ σ2 + τ1. Both cases are enough to prove PoA ≤ 2.

Else, m2 = 2, which implies a1 + a2 ≥ v2 ≥ a1 since v2 is used. Now, if the OPT flow uses (v2, τ2), then TOPT ≥ σ2 + τ2. Else, the EQU flow uses (v2, τ2), and we can compute. On the other hand, in the EQU flow, let us find a lower bound of the total mass of particles arriving at the source of C1 + C2 before (v2, τ2) is activated, as a lower bound of M:

Theorem 5. In the fluid queuing model, the Price of Anarchy of 2 for (2 + 2)­parallel­link networks is tight.

Chapter 4

Extension

We will present a self-defined tax scheme and improve the PoA bound with it in the fluid queuing model. Although it seems unhelpful for networks with constant inflow, it is helpful for parallel-link networks in some extreme inflow cases.

Definition 13 (Delay-time Tax Scheme). In the fluid queuing model, we increase the delay time of edges by imposing a tax on the given network. The tax is not counted as a cost in society's welfare. That is, the tax scheme only changes the behavior of particles, while the computation of delay time and travel time still uses the original setting. This is called the delay-time tax scheme; refer to [11].

Remark 7. Under the definition of the delay-time tax scheme, the OPT flow does not change after taxing, because the computation of delay time and travel time still uses the original setting. As a result, we only discuss the EQU flow and the changes in the arrival time of all particles in the EQU flow, TEQU.

Theorem 6. In the fluid queuing model with constant inflow, the Price of Anarchy is at least 4/3 for parallel-link networks after taxing.

Proof. Let’s show an example of the Price of Anarchy of 43, refer to [10]. Consider the parallel­link network with two edges, the total amount M , and the constant inflow function

Figure 4.1: The diagram of “PoA of 43 Example” This is the parallel­link network with edges {e1 = (0.5, 0), e2 = (1, 1)}, the total amount M = 1, and the constant inflow function µ0 = 1.

µ0 as below:

E = {e1 = (0.5, 0), e2 = (1, 1)},   M = 1,   µ0(θ) = 1.

In this example, we have:

TOPT = 1.5,   TEQU = 2,   PoA = 4/3.
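A small sketch that reproduces these numbers; the OPT routing used here is one feasible schedule we assume for illustration, while the text only states the resulting times.

```python
# "PoA of 4/3" instance: e1 = (capacity 0.5, delay 0), e2 = (capacity 1, delay 1),
# total mass M = 1, constant inflow of rate 1 on [0, 1].
M = 1.0
cap1, delay1 = 0.5, 0.0
cap2, delay2 = 1.0, 1.0

# Equilibrium: every particle prefers e1 (delay 0 < 1), and the queue's waiting
# time only reaches 1 exactly when the inflow stops, so all of M goes through e1.
T_equ = M / cap1 + delay1            # last particle leaves the e1 queue at time 2
print("T_EQU =", T_equ)

# One feasible OPT schedule (assumed): route rate 1 to e2 during [0, 0.5]
# (no queue since 1 <= cap2, arrivals in [1, 1.5]) and rate 1 to e1 during
# [0.5, 1] (a queue of 0.5 drained at rate 0.5, arrivals up to 1.5).
assert 1.0 <= cap2
T_via_e2 = 0.5 + delay2
T_via_e1 = 0.5 + 0.5 / cap1
T_opt = max(T_via_e1, T_via_e2)      # = 1.5
print("T_OPT =", T_opt, " PoA =", T_equ / T_opt)   # 1.5, 1.333...
```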

Now, for this example, we apply the delay-time tax scheme in the three cases below and discuss the influence on TEQU:

(a) Taxing on both edges

This reduces to the "taxing on a single edge" case, since the behavior of the particles is only influenced by the difference between the two edges' delay times, not by the exact value of each edge's delay time.

(b) Taxing on the down edge e2 = (1, 1)

In the EQU flow, all particles still choose the up edge as usual; nothing is changed. We have TEQU = 2 and PoA = 4/3 after taxing.

(c) Taxing on the up edge e1 = (0.5, 0)

Denote the delay time of e1 after taxing by x. Consider the EQU flow:

• If x > 1, then all particles choose the down edge. We have TEQU = 2 and PoA = 4/3 after taxing.

• If x = 1, then passing either edge takes the same time. That is, there are infinitely many Nash equilibria, and we consider the worst case, "the last particle goes down the edge". We have TEQU = 2 and PoA = 4/3 after taxing.

• If x < 1, then the behavior of the particles is as follows: during θ = 0 ∼ 1 − x, all particles go up the edge, and the queue length finally reaches (1/2)(1 − x). From then on, passing either edge takes the same time. That is, there are infinitely many Nash equilibria, and we consider the worst case. We have TEQU = 2 and PoA = 4/3 after taxing.

As a result, the Price of Anarchy for this example is 4/3 after taxing.
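A quick numeric check of case (c): for x < 1 the two routes look equally long once the queue on e1 holds (1/2)(1 − x), which happens at time 1 − x (inflow rate 1, service rate 0.5). The sample values of x are our own.

```python
# Taxed delay x on e1 = (0.5, 0): perceived time on e1 is x + queue / 0.5,
# the down edge costs 1, and the queue grows at rate 1 - 0.5 = 0.5.
for x in (0.0, 0.25, 0.75):
    t_eq = 1 - x              # moment at which both routes look equally long
    queue = 0.5 * (1 - x)     # mass waiting on e1 at that moment
    print(x, t_eq, queue)
```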

Theorem 7. In the fluid queuing model with constant inflow, the Price of Anarchy is at least e/(e − 1) after taxing on a single edge.

Proof. Let’s show an example of the Price of Anarchy ofe−1e , refer to [10]. Consider the series­parallel­link network with 2m edges, the total amount M , and the constant inflow function µ0 as below:

Now, for this example, we apply the delay-time tax scheme on a single edge in the two cases below, and discuss the influence on TEQU:

Figure 4.2: The diagram of the "PoA of e/(e − 1) Example": the series-parallel-link network with 2m edges, where for each i = 1 ∼ m there is a down edge ei = (ui, 0) and an up edge with capacity ui and delay time τi; the total amount is M = αµm and the constant inflow function is µ0 = µm, where m ∈ N, α > 0, and µi = u1 + · · · + ui.

(a) Taxing on an up edge (ui, τi):

In the EQU flow, all particles go down the edges first and become part of the queuing until t = α. At this moment, the inflow stops. The non-taxed up edges are about to be activated, while the only taxed up edge is not yet activated. So TEQU > αµm and lim_{m→∞} PoA > e/(e − 1).

(b) Taxing on a down edge ei = (ui, 0):

Denote the delay time of ei after taxing by x.

• In the EQU flow, if x is greater than or equal to the delay of the corresponding up road, then, similarly to Theorem 6, all particles coming to vi+1 go up the road ei+1. This leaves TEQU the same, and lim_{m→∞} PoA = e/(e − 1).

• In the EQU flow, if x is less than the delay of the corresponding up road, then all particles go down the edge first, and the up road ei+1 is activated earlier than t = α, which implies lvi+1(θ) = τi+1. Although the up road ei+1 is activated earlier, the subsystem from vi+1 to v1 does not finish earlier, because the inflow from vi+1 entering ei+1 and ei is the same as usual. So TEQU = αµm and lim_{m→∞} PoA = e/(e − 1).

As a result, the Price of Anarchy for this example is e/(e − 1) after taxing on a single edge.

Figure 4.3: The diagram of the "PoA of 2 Example". The left part is the parallel-link network with 2 edges {e1 = (a, σ), e2 = (b, τ)}; the right part is the inflow function, which is a step function with range {1, a}.

Theorem 8. In the fluid queuing model with dynamic inflow, the Price of Anarchy can be reduced by the delay­time tax scheme in some cases.

Proof. Let’s show the parallel­link network with 2 edges as example, whose Price of An­

archy is 1 + ϵ after taxing with given ϵ. This example is similar as Chapter 3­Therorem 1.

Given a, σ, b, τ such that a + b≥ 1, σ < τ, consider:

Now, we apply the delay-time tax scheme to this example. Given ϵ > 0, tax the up edge e1 = (a, σ) so that e1 = (a, τ − ϵ). In the EQU flow, the behavior of the particles is as follows:

• θ = 0 ∼ ϵa/(1 − a): all particles go up the road, and the queue length finally reaches aϵ.

• θ = ϵa/(1 − a) ∼ (τ − σ)a/(1 − a): particles go up the road at speed a and down the road at speed 1 − a.

• θ = (τ − σ)a/(1 − a) ∼ (τ − σ)a/(1 − a) + τ + ϵ: the inflow stops, and the queuing is consumed.

As a result, TEQU = (τ − σ)a/(1 − a) + τ + ϵ and PoA = 1 + ((1 − a)/((τ − σ)a))ϵ → 1 as ϵ → 0+.
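A numeric illustration of the taxed example, using the phase boundaries listed above; the concrete values of a, σ, τ are our own picks (any a + b ≥ 1 with σ < τ works), and the point is that the extra ϵ term vanishes as ϵ → 0+.

```python
# Phase boundaries of the EQU flow after taxing e1 = (a, sigma) up to delay tau - eps.
a, sigma, tau = 0.5, 0.0, 1.0
for eps in (0.5, 0.1, 0.01):
    t1 = eps * a / (1 - a)             # queue on the up road reaches a*eps here
    t2 = (tau - sigma) * a / (1 - a)   # inflow stops here
    T_equ = t2 + tau + eps
    print(f"eps={eps}: phases end at {t1:.3f} and {t2:.3f}, T_EQU = {T_equ:.3f}")
# As eps -> 0+, T_EQU -> (tau - sigma) * a / (1 - a) + tau, and the PoA tends to 1.
```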

Chapter 5

Conclusions and Future Works

We find upper bounds on the PoA of networks with dynamic inflow. We prove a PoA of 2 for parallel-link networks and (2 + 2)-parallel-link networks, a PoA of D(C) for series-parallel networks, and a PoA of 2|V| − 1 for general networks under an assumption. We reduce the upper bound of the PoA of parallel-link networks and series-parallel networks from infinite to 2 and D(C) respectively. The bounds we prove differ from those for networks with constant inflow in the fluid queuing model. On the other hand, similar to the work on tax schemes [11], we design the delay-time tax scheme to improve the system's inefficiency, which may help a lot on networks with dynamic inflow. Our main technique is to use the total amount of inflow as an upper bound on the maximal throughput of networks, which affords a lower bound on the cost of the optimal flow (that is, the optimal time) in the fluid queuing model.

This technique helps us find the tight PoA bound for parallel-link networks and a simple example for series-parallel networks, and even a loose bound for series-parallel networks.

