
Stochastic entropy along a single trajectory

In the document 離子幫浦的隨機熱力學 (Stochastic Thermodynamics of the Ion Pump), pp. 12-0

Entropy might be considered an ensemble property and therefore seems inapplicable to a single trajectory. However, previous research on so-called fluctuation theorems [9] generally relates the probability of entropy-generating trajectories to that of entropy-annihilating trajectories, which evidently requires a definition of entropy on the level of a single trajectory. Therefore, the definition of entropy production along a single stochastic trajectory is first introduced for a diffusive system with a particle in overdamped motion [1], and then generalized to the discrete system governed by a master equation.

At first, from the common definition of a nonequilibrium Gibbs entropy [8]

S(t) \equiv -\int dx\, p(x,t)\,\ln p(x,t), (1-11)

the suggested definition for the trajectory-dependent entropy of the system for a Brownian particle is given by

s(t) \equiv -\ln p(x(t), t), (1-12)

where the probability p(x,t), evaluated along the trajectory x(t), is obtained by solving the Fokker-Planck equation

\partial_t p(x,t) = -\partial_x \left[ \mu F(x,\lambda)\, p(x,t) - D\, \partial_x p(x,t) \right]. (1-13)

Similarly, the definition of the trajectory-dependent system entropy for the probability p_n(t) derived from a master equation is given by

s(t) \equiv -\ln p_{n(t)}(t). (1-14)
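As a minimal numeric sketch of this definition (the probabilities 0.7 and 0.2 are arbitrary illustrative values, not taken from the text): the trajectory entropy is minus the logarithm of the occupied state's probability, so a jump changes it by the log-ratio of the probabilities before and after.

```python
import math

def system_entropy(p_occupied):
    """Trajectory-dependent system entropy s(t) = -ln p_n(t)(t),
    evaluated at the probability of the currently occupied state."""
    return -math.log(p_occupied)

# A jump from a state occupied with probability 0.7 to one occupied with
# probability 0.2 changes the system entropy by ln(0.7/0.2) > 0.
ds_jump = system_entropy(0.2) - system_entropy(0.7)
```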

In the diffusive system of a Brownian particle, the relation between the rates of change is derived from the equations of motion [1]. A similar derivation applies to the discrete-state system, for which the equation of motion for the system entropy becomes

\dot{s}(t) = -\frac{\partial_t p_{n(t)}(t)}{p_{n(t)}(t)} - \sum_j \delta(t-t_j)\,\ln\frac{p_{n_j^+}(t_j)}{p_{n_j^-}(t_j)}, (1-15)

where t_j are the times at which the trajectory jumps from state n_j^- to state n_j^+.

The first term on the right-hand side contributes along the time intervals during which the system remains in the same state: the state is unchanged while the time-dependent protocol, and with it the probability of that state, keeps changing, so this term is the part of \dot{s} due to the change of the protocol. The second term arises from the jumps at the times t_j: at the instant of a jump the protocol and the state probabilities remain the same while the system changes its state, so this term is the part of \dot{s} due to the change of states.

Now we split up the right-hand side of (1-15) into a total entropy production

\dot{s}_{tot}(t) = -\frac{\partial_t p_{n(t)}(t)}{p_{n(t)}(t)} + \sum_j \delta(t-t_j)\,\ln\frac{w_{n_j^- n_j^+}\, p_{n_j^-}(t_j)}{w_{n_j^+ n_j^-}\, p_{n_j^+}(t_j)} (1-16)

and a medium entropy production

\dot{s}_m(t) = \sum_j \delta(t-t_j)\,\ln\frac{w_{n_j^- n_j^+}}{w_{n_j^+ n_j^-}}, (1-17)

where w_{n_j^- n_j^+} is the transition rate for the forward jump and w_{n_j^+ n_j^-} is that for the backward jump. Besides, the balance \dot{s}_{tot}(t) = \dot{s}(t) + \dot{s}_m(t) holds.

Although the choice of \dot{s}_m seems to be arbitrary, there are two facts which motivate this choice. First, we would observe the ensemble properties of entropy by taking the average over trajectories, so we need the probability for a jump occurring at time t from state n_- to state n_+, which is p_{n_-}(t)\, w_{n_- n_+}(t). Hence, these entropy rates become

\dot{S}(t) \equiv \langle \dot{s}(t) \rangle = \sum_{n \neq k} p_n(t)\, w_{nk}(t)\,\ln\frac{p_n(t)}{p_k(t)}, (1-18)

\dot{S}_m(t) \equiv \langle \dot{s}_m(t) \rangle = \sum_{n \neq k} p_n(t)\, w_{nk}(t)\,\ln\frac{w_{nk}(t)}{w_{kn}(t)}, (1-19)

and

\dot{S}_{tot}(t) \equiv \langle \dot{s}_{tot}(t) \rangle = \sum_{n \neq k} p_n(t)\, w_{nk}(t)\,\ln\frac{p_n(t)\, w_{nk}(t)}{p_k(t)\, w_{kn}(t)}, (1-20)

such that the balance \dot{S}_{tot} = \dot{S} + \dot{S}_m holds. Besides, the ensemble average of the total entropy production in (1-20) is consistent with the macroscopic entropy (1-11) and is non-negative, thus obeying the second law of classical thermodynamics. Second, with this choice of \dot{s}_m in (1-17), the total entropy production fulfills the IFT, which we will show below.
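These ensemble expressions are easy to check numerically. The sketch below uses an arbitrary two-state example (the occupation probabilities and rates are made-up numbers, not values from the text) to evaluate (1-18)-(1-20) and verify both the balance and the non-negativity of the total rate.

```python
import math

def entropy_rates(p, w):
    """Ensemble entropy rates (1-18)-(1-20) for a discrete-state system:
    sums of p_n * w_nk * ln(...) over all ordered pairs n != k."""
    S = Sm = Stot = 0.0
    for n in range(len(p)):
        for k in range(len(p)):
            if n == k:
                continue
            flux = p[n] * w[n][k]
            S    += flux * math.log(p[n] / p[k])                          # system, (1-18)
            Sm   += flux * math.log(w[n][k] / w[k][n])                    # medium, (1-19)
            Stot += flux * math.log((p[n] * w[n][k]) / (p[k] * w[k][n])) # total,  (1-20)
    return S, Sm, Stot

# Arbitrary two-state example: occupations [0.7, 0.3], rates w01 = 2, w10 = 5.
S, Sm, Stot = entropy_rates([0.7, 0.3], [[0.0, 2.0], [5.0, 0.0]])
```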

With the definitions for entropy along a single stochastic trajectory, (1-15)-(1-17), the meaning of the IFT (1-10) becomes transparent; here we derive it for the discrete-state system governed by a master equation. At first, we recall the stochastic quantity R[n(\tau)] from (1-7),

R[n(\tau)] = \ln\frac{p_0(n_0)}{p_1(n_t)} + \sum_j \ln\frac{w_{n_j^- n_j^+}(t_j)}{w_{n_j^+ n_j^-}(t_j)}. (1-21)

Then we split up the right-hand side of (1-21) into the contributions of \Delta s_m and \Delta s, according to the interpretation of (1-15) and (1-17). That is,

\sum_j \ln\frac{w_{n_j^- n_j^+}(t_j)}{w_{n_j^+ n_j^-}(t_j)} = \int_0^t \dot{s}_m(t')\, dt' = \Delta s_m (1-22)

and

\ln\frac{p_0(n_0)}{p_1(n_t)} = -\ln p_1(n_t) + \ln p_0(n_0) = s(t) - s(0) = \Delta s, (1-23)

so that

R[n(\tau)] = \Delta s_m + \Delta s, (1-24)

where we have used the definition s(t) \equiv -\ln p_{n(t)}(t). Then finally,

R[n(\tau)] = \Delta s_m + \Delta s = \Delta s_{tot}, (1-25)

where \Delta s_m is the first term and \Delta s is the second term in (1-24).

So far we have proved that the stochastic quantity defined as R[n(\tau)] is exactly the total entropy production \Delta s_{tot} in the discrete-state system governed by a master equation.

Therefore, the integral fluctuation theorem becomes

\langle e^{-\Delta s_{tot}} \rangle = 1. (1-26)

As an immediate consequence of (1-26), one can derive \langle \Delta s_{tot} \rangle \ge 0 according to Jensen's inequality \langle e^{x} \rangle \ge e^{\langle x \rangle}. This result is consistent with the second law of classical thermodynamics and gives a posteriori support to the entropy definition.
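The Jensen step can be spelled out explicitly: since e^{-x} is convex,

```latex
e^{-\langle \Delta s_{\mathrm{tot}} \rangle}
\;\le\; \left\langle e^{-\Delta s_{\mathrm{tot}}} \right\rangle = 1
\qquad\Longrightarrow\qquad
\left\langle \Delta s_{\mathrm{tot}} \right\rangle \;\ge\; 0 .
```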

So far the main result has been proved, but one more point has to be addressed. Although we start the derivation of the IFT from the stationary distribution obeying detailed balance for a fixed \lambda_0, the choices for the initial and final distributions are, in fact, not unique. As a mathematical result, the IFT is truly universal: it is valid for any external protocol, any initial condition, and any trajectory length, so there are infinitely many choices of initial and final distribution. Nevertheless, the most intuitive and physically meaningful choice might be the stationary distribution p_0(n) = p^{s}_n(\lambda_0) and the time-evolved distribution p_1(n) = p_n(t), where the former stands for the stationary state with fixed \lambda_0, and the latter is the state which has reached the static state after a long time.

Notice that the probability in a static state, as discussed in this thesis, might still oscillate, but it does not ascend or descend on average over time.

In the later sections, this choice of initial and final distributions is adopted in most of our discussion.


2 An experimental test and simulation for the two-state system

2.1 Experimental test for entropy production of a two-level system

To verify the fluctuation theorem in a nonthermal system with time-dependent rates, an experiment on a two-level system has been demonstrated [2]. The device, a single defect center in natural IIa-type diamond (Drukker), is excited by a red and a green laser simultaneously and can be considered as an effective two-level system with a dark state 0 and a bright state 1, such that

0 \xrightarrow{\ w_+\ } 1, \qquad 1 \xrightarrow{\ w_-\ } 0,

where w_+ and w_- are determined by the green and red lasers, respectively.

This system is driven out of the initial equilibrium by modulating the intensity of the green laser with a sinusoidal protocol with modulation period t_m. This leads to the time-dependent rate

w_+(t) = w_+^{0}\, f(t) (2-1)

with

f(t) = 1 + \gamma \sin(2\pi t / t_m), (2-2)

where 0 < \gamma < 1 is the strength of the modulation. The intensity of the red laser is constant and therefore w_- = \mathrm{const}. The master equation for the time-dependent probabilities p_0(t) and p_1(t) of this two-level system then reads

\partial_t p_1(t) = w_+(t)\, p_0(t) - w_-\, p_1(t), \qquad p_0(t) + p_1(t) = 1, (2-3)

where p_0 and p_1 represent the probabilities for the system to stay in state 0 and state 1, respectively. Once the probability distribution of the system is given, a dimensionless, nonequilibrium entropy for driven systems on the level of a single stochastic trajectory has been defined [1] as

s(t) = -\ln p_{n(t)}(t), (2-4)

where the measured probability p_n(t) of the state n occupied at time t is determined by the master equation.
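The probability entering (2-4) can be obtained by integrating the master equation (2-3) numerically. The sketch below assumes illustrative parameter values (w_- = 1500 s^-1, mean w_+ = 500 s^-1, γ = 0.5, t_m = 0.05 s); these are placeholders, not the values measured in [2].

```python
import math

W_MINUS = 1500.0   # bright -> dark rate (red laser, constant), 1/s  [assumed]
W_PLUS0 = 500.0    # mean dark -> bright rate (green laser), 1/s     [assumed]
GAMMA   = 0.5      # modulation depth, 0 < gamma < 1                 [assumed]
T_M     = 0.05     # modulation period, s                            [assumed]

def w_plus(t):
    """Sinusoidally modulated excitation rate, cf. (2-1)/(2-2)."""
    return W_PLUS0 * (1.0 + GAMMA * math.sin(2.0 * math.pi * t / T_M))

def evolve_p1(p1, t0, t1, dt=1e-5):
    """Euler integration of dp1/dt = w_+(t) p0 - w_- p1, with p0 = 1 - p1, cf. (2-3)."""
    t = t0
    while t < t1:
        p1 += dt * (w_plus(t) * (1.0 - p1) - W_MINUS * p1)
        t += dt
    return p1

def system_entropy(state, p1):
    """s(t) = -ln p_n(t)(t), cf. (2-4)."""
    return -math.log(p1 if state == 1 else 1.0 - p1)

# Evolve over two modulation periods from an arbitrary start value.
p1_final = evolve_p1(0.25, 0.0, 2.0 * T_M)
```

Because the relaxation time (of order 1/(w_+ + w_-), well under a millisecond here) is much shorter than the 50 ms drive, the solution tracks the quasi-static value w_+/(w_+ + w_-) with only a small lag.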

Figure 2-1 [2]

Figure 2-1 (a) shows the protocol together with the probability to dwell in the bright state, state one. The step function in Figure 2-1 (b) displays a sample binary trajectory jumping between the two states. Figure 2-1 (c) shows the resulting evolution of the entropy of the system according to (2-4). The curve consists of a smooth part and a jump part. The smooth part is due to the time-dependent protocol while the system remains in the same state; the jump part is the contribution \ln(p^-/p^+) at jumps between the two states, where p^- and p^+ are the probabilities of the states immediately before and after the jump, respectively.

Besides the entropy of the system itself, energy exchange and dissipation lead, in general, to a change in medium entropy. For an athermal system such as a discrete-state system, this change in medium entropy cannot be inferred from the exchanged heat. Rather, it has to be defined through the rate constants, and is given by

\Delta s_m = \ln\frac{w_{nm}(t)}{w_{mn}(t)} (2-5)

for a jump from state n to state m with instantaneous rate w_{nm}(t) (w_{mn}(t) being the backward rate). In this case it becomes \Delta s_m = \ln[w_-/w_+(t)] for a jump 1→0 and \Delta s_m = \ln[w_+(t)/w_-] for a jump 0→1. As demonstrated in Figure 2-1 (d), the medium entropy changes only when the system jumps, thus balancing to some degree the change of the system entropy.

One of the fundamental consequences of the definition of stochastic entropy is the fact that besides entropy-producing trajectories, entropy-annihilating trajectories also exist; see Figure 2-1 (e) and (f), respectively. In accordance with physical intuition, however, the latter become less likely for longer trajectories or increased system size. In fact, entropy-annihilating trajectories not only exist, they are essential to satisfy the IFT

\langle \exp(-\Delta s_{tot}) \rangle = 1. (2-6)

This theorem states that the non-uniform average of the total entropy change over infinitely many trajectories becomes unity for any trajectory length and any driving protocol. Moreover, trajectories with \Delta s_{tot} < 0 may occur seldom, but they are exponentially weighted and thus contribute substantially to the left-hand side of (2-6).

2.2 Reproduction of the experiment by simulation

The validity of the definition of stochastic entropy for a single trajectory and of the corresponding IFT is in principle verified by the two-state experiment stated above. Nevertheless, restricted by the intrinsic limitations of experiments, such as the amount of data and the resolution of the instruments, there are still some conditions which cannot be verified thoroughly.

The resolution of the detectors in the experiment is 1 ms, and therefore the shortest measurable time interval between two jumps is 1 ms. Nevertheless, is this resolution short enough to detect the fastest jumps between states? How would the measured transition rates be affected if the resolution were coarser or finer?

Besides the resolution, the number of realizations is another limitation of experiments. Although the IFT strictly holds only when averaging over an infinite number of trajectories, the tests with only 2000 trajectories in the experiment seem to be sufficient. Nevertheless, are a few thousand trajectories always enough? What if conditions such as the external protocol or the trajectory length change? The IFT is generally valid, but are there experimental conditions beyond practical feasibility?

Therefore, as an a priori tool, a simulation based on the conditions of the experiment stated above is developed to recheck the validity of the definition of stochastic entropy for a single trajectory and the corresponding IFT, and furthermore to examine other conditions for a two-state system.

The simulation is built on the idea of throwing a stochastic die sequentially at a fixed time interval. The first step of the simulation is to create a single trajectory, from which an ensemble of trajectories follows. Assume the system is initially in state one; a die is then thrown after a period of time to decide whether the system stays put or jumps to the other state, state two. If the side "jump" lands on top, the system jumps to the other state instantly and waits for the next throw of the die. Whether the system stood still or jumped this time, the next throw of the die is completely independent; that is, the process is Markovian.

The method of the simulation

The probability of jumping depends on the product of the transition rate w_{ij}(t) and the given time interval \Delta t, that is,

p^{jump}_{ij}(t) = w_{ij}(t)\, \Delta t, (2-7)

where p^{jump}_{ij} is the probability of jumping from state i to state j. For example, if the given interval \Delta t is 1 ms and the transition rate from state 1 to state 2 at a certain time is 500 s^{-1}, then the system has probability 0.5 to jump from 1 to 2 at that moment. Note that the jump probability is different from the state probability derived from a master equation. The latter is the probability of finding the system in state i averaged over many trajectories, and is thus an ensemble quantity. The former, although also a probability, applies to each single throw of the die along each trajectory. Besides, the time interval \Delta t is arbitrary and sets the jump probability: the shorter \Delta t, the less probable a jump, and vice versa. Be careful to choose \Delta t small enough that the jump probability never exceeds one at any time over the total process, since it would be ill-defined otherwise.
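A compact version of this die-throwing scheme, combined with the entropy definitions (2-4) and (2-5), can be sketched as follows. The rates, modulation depth, and period are the same illustrative placeholders as above, not the experimental values. Because the state distribution p1 is propagated with exactly the same discrete update that generates the trajectories, the IFT holds for the simulated chain itself, and any residual deviation is pure sampling error.

```python
import math
import random

W_MINUS, W_PLUS0, GAMMA, T_M = 1500.0, 500.0, 0.5, 0.05   # assumed values

def w_plus(t):
    """Sinusoidally modulated excitation rate, cf. (2-1)/(2-2)."""
    return W_PLUS0 * (1.0 + GAMMA * math.sin(2.0 * math.pi * t / T_M))

def ift_average(n_traj, n_periods, dt=1e-4, seed=1):
    """Average of exp(-ds_tot) over n_traj die-throwing trajectories."""
    rng = random.Random(seed)
    n_steps = int(round(n_periods * T_M / dt))
    acc = 0.0
    for _ in range(n_traj):
        t = 0.0
        p1 = w_plus(0.0) / (w_plus(0.0) + W_MINUS)        # initial distribution
        state = 1 if rng.random() < p1 else 0
        s0 = -math.log(p1 if state == 1 else 1.0 - p1)    # (2-4) at t = 0
        ds_m = 0.0
        for _ in range(n_steps):
            wp = w_plus(t)
            if state == 0 and rng.random() < wp * dt:         # throw the die: 0 -> 1
                ds_m += math.log(wp / W_MINUS)                # (2-5)
                state = 1
            elif state == 1 and rng.random() < W_MINUS * dt:  # throw the die: 1 -> 0
                ds_m += math.log(W_MINUS / wp)                # (2-5)
                state = 0
            # propagate the state probability with the matching discrete update
            p1 += dt * (wp * (1.0 - p1) - W_MINUS * p1)
            t += dt
        s1 = -math.log(p1 if state == 1 else 1.0 - p1)    # (2-4) at the final time
        acc += math.exp(-(ds_m + s1 - s0))                # exp(-ds_tot)
    return acc / n_traj

# e.g. ift_average(2000, 1) should lie close to 1 up to statistical scatter
```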

With the transition rates, the initial probability distribution of the stationary state, and the definitions of the system entropy, Eq. (2-4), and the medium entropy, Eq. (2-5), a set of figures for a single trajectory similar to the two-state experiment in Figure 2-1 (a-f), namely Figure 2-2 (a)(b)(c) and Figure 2-3 (a)(b), can also be demonstrated.


Figure 2-2

Entropy production in the two-state system with a single defect center in diamond, for a single trajectory over 4 periods. (a) The protocol [solid blue line] together with the probability [dashed green line] to dwell in state one. (b) Single trajectory [solid blue line] and probability of state one [dashed green line]. (c) Evolution of the system entropy [black dots]. The curve is much smoother than that in Figure 2-1 (c) while the system stays in the same state, because Figure 2-1 (c) is an experimental measurement. (d) Entropy change of the medium, where only jumps contribute to the entropy change.


Figure 2-3

Two examples of the change of system entropy [solid black line] and medium entropy [dashed red line]. The dashed blue lines indicate the initial value of the entropy. The change of system entropy merely fluctuates around zero without net average entropy production, whereas the medium entropy in (a) contributes a positive change, giving an entropy-producing trajectory, and in (b) a negative change, giving an entropy-annihilating trajectory.

After creating a single trajectory, an ensemble of trajectories can also be created to check the validity of the IFT. Figure 2-4 shows histograms of the entropy change of (a) the system, (b) the medium, and (c) the total entropy production, taken from 2000 trajectories under the same conditions as in the two-state experiment, Figure 2-1 (g)(h)(i).


Figure 2-4

Histograms taken from 2000 trajectories of (a) the system, (b) the medium, and (c) the total entropy change. The system entropy shows four peaks corresponding to the four possibilities for the trajectory to start and end (0→1, 1→0, 0→0, and 1→1). The distribution (c) of the total entropy change has the mean and width indicated in the figure; on this scale it differs only slightly from the distribution of the medium entropy change (b).

In Figure 2-5, evaluations of the IFT over 2000 trajectories for trajectory lengths from 1 to 20 periods are demonstrated. Note that the IFT is evaluated 5 times for each length to examine the scatter of the outcome. With increasing length, a deviation from the IFT becomes observable. This deviation is due to the need for more realizations as the mean value of the entropy increases, and it is corrected in a later section.


Figure 2-5

The mean \langle e^{-\Delta s_{tot}} \rangle over 2000 trajectories for each trajectory length, with the modulation depth used in the experiment.

2.3 Improvements in simulation

2.3.1 Consideration of the ensemble average of states

So far we have reproduced the main results of the two-state experiment [2]; the next step is to improve on the experimental conditions. First, we determine the probability of the system being in state one by taking the average over stochastic trajectories, and we call this quantity the ensemble average of states, \langle n \rangle. \langle n \rangle is the probability that the system is in state one as seen from the viewpoint of single trajectories, whereas the state probability p_1 is derived from the master equation.

The interpretation of \langle n \rangle has many advantages, which will become apparent shortly. The most important one is to check the idea of throwing a die and the correctness of the simulation: if something were wrong, the curve of \langle n \rangle would be totally different from the curve of the probability of the corresponding state.


Figure 2-6 shows \langle n \rangle averaged over 2000 trajectories under the conditions of the two-state experiment.

Figure 2-6

\langle n \rangle [solid blue line] over 2000 trajectories. The dashed red line is the probability of state one solved from the master equation.

From Figure 2-6 it is evident that \langle n \rangle only roughly fits the curve of p_1, especially where the amplitude is larger. This is due to the lack of realizations. Therefore, we add trajectories so that the curve of \langle n \rangle becomes smooth and fits the curve of p_1 closely.
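The roughness seen here is the usual Monte Carlo sampling error: estimating a state probability p from N independent binary samples has standard error sqrt(p(1-p)/N), so going from 2,000 to 100,000 trajectories shrinks the scatter by a factor of about 7. A quick numeric check (the value p = 0.25 is only an illustration):

```python
import math

def standard_error(p, n_traj):
    """Standard error of a binomial estimate of a state probability p
    averaged over n_traj independent trajectories."""
    return math.sqrt(p * (1.0 - p) / n_traj)

se_2k   = standard_error(0.25, 2_000)    # scatter with 2,000 trajectories
se_100k = standard_error(0.25, 100_000)  # scatter with 100,000 trajectories
```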


Figure 2-7

The mean \langle n \rangle of state one [solid blue line] over 100,000 trajectories, compared to the probability of state one [dashed red line].

With more realizations, the curve of \langle n \rangle fits that of p_1 more closely, and the IFT is also more accurate (Figure 2-8) than in the results of Figure 2-5.

Figure 2-8

The mean \langle e^{-\Delta s_{tot}} \rangle over 100,000 trajectories for each trajectory length, with the same modulation depth and resolution as before. The IFT is evaluated 5 times for each length in order to examine the deviation.


Although the IFT becomes more accurate after adding more realizations, and its validity is thereby verified in principle, one point needs a more careful examination. Zooming into just one period of Figure 2-7 (Figure 2-9 (a)), one finds a constant phase delay of \langle n \rangle compared to p_1. This delay stems from the finite time step, as Figure 2-9 compares periods simulated with resolutions of 5 ms, 1 ms, and 0.1 ms, respectively.


2.3.2 Estimation of the required quantity of statistics

With only 2000 trajectories and under suitable experimental conditions, such as trajectory length, resolution, modulation depth, etc., the IFT seems to work well in principle (deviation < 20%, Figure 2-5). In the simulation, the IFT is confirmed with even higher accuracy (deviation < 5%, Figure 2-8) when the number of trajectories is increased to 100,000. Nevertheless, is this number large enough under other conditions?

To show that this concern is warranted, we take a longer observation time. Figure 2-10 demonstrates that the deviation increases with the number of periods. The example in Figure 2-10 (a) has one point separated far from the others, which seems absurd at first glance. In fact, such extreme cases appear typically.


Figure 2-10

The mean \langle e^{-\Delta s_{tot}} \rangle over 100,000 trajectories for each trajectory length, with the same modulation depth and resolution as before. As in the former examples, the IFT is evaluated 5 times for each length in order to examine the deviation. Notice the point at the upper right corner. The red dashed rectangle corresponds to Figure 2-8, and Figure 2-10 (b) is a magnified view of (a) omitting the point at the upper right corner.

The result in Figure 2-10 is due to the structure of the non-uniform average \langle e^{-\Delta s_{tot}} \rangle. Because entropy-annihilating trajectories may occur rarely but are exponentially weighted, they contribute substantially to the left-hand side of the IFT. To keep \langle e^{-\Delta s_{tot}} \rangle = 1, each entropy-annihilating trajectory must be balanced by a large number of entropy-producing trajectories. Therefore, fluctuations in the number of entropy-annihilating trajectories affect the result of the IFT enormously, especially when that number is small.
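A toy two-valued model makes this sensitivity concrete. Suppose every trajectory either produces entropy s_p or annihilates entropy s_a (an idealization for illustration, not the simulated distribution). The IFT then fixes the fraction f of annihilating trajectories through f e^{s_a} + (1-f) e^{-s_p} = 1:

```python
import math

def required_fraction(s_p, s_a):
    """Fraction of entropy-annihilating trajectories (each with ds_tot = -s_a)
    needed to balance producing ones (ds_tot = +s_p) so that <exp(-ds_tot)> = 1."""
    return (1.0 - math.exp(-s_p)) / (math.exp(s_a) - math.exp(-s_p))

# With s_p = 0.5 and s_a = 5, fewer than 1% of trajectories annihilate
# entropy, yet each carries weight exp(5) ~ 148 in the IFT average.
f = required_fraction(0.5, 5.0)
```

Missing even one such rare trajectory in a finite sample therefore shifts the estimate of \langle e^{-\Delta s_{tot}} \rangle appreciably, which is exactly the scatter seen in Figure 2-10.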

With increased observation time, the mean of the total entropy production shifts in the positive direction, and the distribution spreads outward in both directions (Figure 2-11 and Figure 2-12). The number of annihilating trajectories decreases accordingly, which leads to a larger deviation of the IFT. In most situations, the large number of entropy-producing trajectories lacks a sufficient number of annihilating ones to balance it, which brings about \langle e^{-\Delta s_{tot}} \rangle < 1. But sometimes too many, or even just a few more, entropy-annihilating trajectories are generated, resulting in \langle e^{-\Delta s_{tot}} \rangle > 1. This explains the distribution of the estimates in Figure 2-10.

Figure 2-11

Histograms of the total entropy production for trajectory lengths of (a) 20T, (b) 60T, and (c) 100T, respectively. The mean and the width (two standard deviations) of \Delta s_{tot} are also shown in each panel.


Figure 2-12

The mean of the total entropy production is proportional to the trajectory length. This seems surprising at first glance but can in fact be explained easily: because the total process is Markovian and the driving is periodic, the mean entropy change accumulated during one period must be the same as that accumulated during the next, and so on.

Beyond the rough description of Figure 2-11, the relation between the probability of entropy-producing trajectories and that of entropy-annihilating trajectories in fact obeys the detailed fluctuation theorem (DFT) [7]

p(-\Delta s_{tot}) / p(+\Delta s_{tot}) = e^{-\Delta s_{tot}},

which holds once the system has relaxed into the corresponding periodically oscillating distribution. In this case, the trajectory length is very long, so the medium entropy dominates the total entropy production. Therefore, the DFT is valid in principle and suitable for the estimation.
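For a quick consistency check of the DFT, note that a Gaussian distribution of \Delta s_{tot} satisfies it exactly when its variance equals twice its mean. The sketch below verifies that the log-ratio ln[p(+s)/p(-s)] then equals s, the slope-1 behavior seen in Figure 2-13 (the mean value 3.0 is an arbitrary illustration):

```python
import math

def gauss_pdf(x, mu, var):
    """Normal probability density with mean mu and variance var."""
    return math.exp(-(x - mu) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

MU = 3.0          # arbitrary mean total entropy production
VAR = 2.0 * MU    # the DFT forces var = 2*mu for a Gaussian distribution

# ln[p(+s)/p(-s)] should equal s for every s, i.e. a line of slope 1.
log_ratios = [math.log(gauss_pdf(s, MU, VAR) / gauss_pdf(-s, MU, VAR))
              for s in (0.5, 1.0, 2.5)]
```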

Figure 2-13

Test diagram of the DFT for data set (i). The red asterisk denotes the mean of the total entropy production; the points near the mean show higher accuracy of the DFT. The blank on the right side represents missing points due to the lack of realizations: some positive entropy productions \Delta s_{tot} have no corresponding negative entropy productions -\Delta s_{tot} in the sample. The dashed line has slope 1.

To estimate the number of trajectories required to verify the IFT, we take the following example. Assume there are two sets of data to verify the IFT of the two-state system.
