
In document µC/OS-II Goals Preface (pages 55-59)


2.19 Mutual Exclusion

The easiest way for tasks to communicate with each other is through shared data structures. This is especially easy when all the tasks exist in a single address space. Tasks can thus reference global variables, pointers, buffers, linked lists, ring buffers, etc. While sharing data simplifies the exchange of information, you must ensure that each task has exclusive access to the data to avoid contention and data corruption. The most common methods to obtain exclusive access to shared resources are:

a) Disabling interrupts

b) Test-And-Set

c) Disabling scheduling

d) Using semaphores

2.19.01 Mutual Exclusion, Disabling and enabling interrupts

The easiest and fastest way to gain exclusive access to a shared resource is by disabling and enabling interrupts as shown in the pseudo-code of listing 2.3.

    Disable interrupts;
    Access the resource (read/write from/to variables);
    Reenable interrupts;

Listing 2.3, Disabling/enabling interrupts.

µC/OS-II uses this technique (as do most, if not all kernels) to access internal variables and data structures. In fact, µC/OS-II provides two macros to allow you to disable and then enable interrupts from your C code:

OS_ENTER_CRITICAL() and OS_EXIT_CRITICAL(), respectively (see section 8.03.02, OS_CPU.H, OS_ENTER_CRITICAL() and OS_EXIT_CRITICAL()). You need to use these macros in pairs, as shown in listing 2.4.

    void Function (void)
    {
        OS_ENTER_CRITICAL();
        .
        .    /* You can access shared data in here */
        .
        OS_EXIT_CRITICAL();
    }

Listing 2.4, Using µC/OS-II's macros to disable/enable interrupts.

You must be careful, however, not to disable interrupts for too long, because doing so affects the response of your system to interrupts. This is known as interrupt latency. You should consider this method when you are changing or copying a few variables. Also, this is the only way that a task can share variables or data structures with an ISR. In all cases, you should keep interrupts disabled for as little time as possible.

If you use a kernel, you are basically allowed to disable interrupts for as much time as the kernel does without affecting interrupt latency. Obviously, you need to know how long the kernel will disable interrupts. Any good kernel vendor will provide you with this information. After all, if they sell a real-time kernel, time is important!

2.19.02 Mutual Exclusion, Test-And-Set

If you are not using a kernel, two functions could ‘agree’ that, to access a resource, they must check a global variable, and if the variable is 0, the function has access to the resource. To prevent the other function from accessing the resource, however, the first function that gets the resource simply sets the variable to 1. This is commonly called a Test-And-Set (or TAS) operation. The TAS operation must either be performed indivisibly (by the processor) or you must disable interrupts when doing the TAS on the variable, as shown in listing 2.5.

    Disable interrupts;
    if (‘Access Variable’ is 0) {
        Set variable to 1;
        Reenable interrupts;
        Access the resource;
        Disable interrupts;
        Set the ‘Access Variable’ back to 0;
        Reenable interrupts;
    } else {
        Reenable interrupts;
        /* You don’t have access to the resource, try back later; */
    }

Listing 2.5, Using Test-And-Set to access a resource.

Some processors actually implement a TAS operation in hardware (e.g., the 68000 family of processors has a TAS instruction).

2.19.03 Mutual Exclusion, Disabling and enabling the scheduler

If your task is not sharing variables or data structures with an ISR then you can disable/enable scheduling (see section 3.06, Locking and Unlocking the Scheduler) as shown in listing 2.6 (using µC/OS-II as an example). In this case, two or more tasks can share data without the possibility of contention. You should note that while the scheduler is locked, interrupts are enabled and, if an interrupt occurs while in the critical section, the ISR will immediately be executed. At the end of the ISR, the kernel will always return to the interrupted task even if a higher priority task has been made ready-to-run by the ISR. The scheduler will be invoked when OSSchedUnlock() is called to see if a higher priority task has been made ready to run by the task or an ISR. A context switch will result if there is a higher priority task that is ready to run. Although this method works well, you should avoid disabling the scheduler because it defeats the purpose of having a kernel in the first place. The next method should be chosen instead.

    void Function (void)
    {
        OSSchedLock();
        .
        .    /* You can access shared data in here (interrupts are recognized) */
        .
        OSSchedUnlock();
    }

Listing 2.6, Accessing shared data by disabling/enabling scheduling.

2.19.04 Mutual Exclusion, Semaphores

The semaphore was invented by Edsger Dijkstra in the mid-1960s. A semaphore is a protocol mechanism offered by most multitasking kernels. Semaphores are used to:

a) control access to a shared resource (mutual exclusion);

b) signal the occurrence of an event;

c) allow two tasks to synchronize their activities.

A semaphore is a key that your code acquires in order to continue execution. If the semaphore is already in use, the requesting task is suspended until the semaphore is released by its current owner. In other words, the requesting task says: "Give me the key. If someone else is using it, I am willing to wait for it!"

There are two types of semaphores: binary semaphores and counting semaphores. As its name implies, a binary semaphore can only take two values: 0 or 1. A counting semaphore allows values between 0 and 255, 65535 or 4294967295, depending on whether the semaphore mechanism is implemented using 8, 16 or 32 bits, respectively.

The actual size depends on the kernel used. Along with the semaphore's value, the kernel also needs to keep track of tasks waiting for the semaphore's availability.

There are generally only three operations that can be performed on a semaphore: INITIALIZE (also called CREATE), WAIT (also called PEND), and SIGNAL (also called POST).

The initial value of the semaphore must be provided when the semaphore is initialized. The waiting list of tasks is always initially empty.

A task desiring the semaphore will perform a WAIT operation. If the semaphore is available (the semaphore value is greater than 0), the semaphore value is decremented and the task continues execution. If the semaphore's value is 0, the task performing a WAIT on the semaphore is placed in a waiting list. Most kernels allow you to specify a timeout; if the semaphore is not available within a certain amount of time, the requesting task is made ready to run and an error code (indicating that a timeout has occurred) is returned to the caller.

A task releases a semaphore by performing a SIGNAL operation. If no task is waiting for the semaphore, the semaphore value is simply incremented. If any task is waiting for the semaphore, however, one of the tasks is made ready to run and the semaphore value is not incremented; the key is given to one of the tasks waiting for it. Depending on the kernel, the task which will receive the semaphore is either:

a) the highest priority task waiting for the semaphore, or

b) the first task that requested the semaphore (First In First Out, or FIFO).

Some kernels allow you to choose either method through an option when the semaphore is initialized. µC/OS-II only supports the first method. If the readied task has a higher priority than the current task (the task releasing the semaphore), a context switch will occur (with a preemptive kernel) and the higher priority task will resume execution; the current task will be suspended until it again becomes the highest priority task ready-to-run.

Listing 2.7 shows how you can share data using a semaphore (using µC/OS-II). Any task needing access to the shared data calls OSSemPend(), and when the task is done with the data, it calls OSSemPost(). Both of these functions are described later. You should note that a semaphore is an object that needs to be initialized before it is used; for mutual exclusion, the semaphore is initialized to a value of 1. Using a semaphore to access shared data doesn't affect interrupt latency, and if an ISR or the current task makes a higher priority task ready-to-run while accessing the data, the higher priority task will execute immediately.

    OS_EVENT *SharedDataSem;

    void Function (void)
    {
        INT8U err;

        OSSemPend(SharedDataSem, 0, &err);
        .
        .    /* You can access shared data in here (interrupts are recognized) */
        .
        OSSemPost(SharedDataSem);
    }

Listing 2.7, Accessing shared data by obtaining a semaphore.