

Chapter 3. Design and Implementation

3.1. Driver-Domain-Bypass Network Architecture

Before describing the driver-domain-bypass network architecture, we first present the packet transmission and reception flows in the original Xen network architecture in order to show the differences between the two architectures.

Figure 1(a) shows the packet transmission flow. As mentioned before, packet transmission is done by sending a request from the frontend driver residing in the user domain to the backend driver residing in dom0. Before sending the request, the frontend driver performs a page grant operation1, granting dom0 read-only access to the domU’s page that contains the packet. The page grant operation is needed for the backend driver and the physical NIC driver to process the packet. On receiving the request, the backend driver maps the packet page with read-only permission into the address space of dom0 and asks the bridge to identify the target physical NIC. Then, the driver corresponding to the target physical NIC is responsible for transmitting the packet. After the packet has been transmitted successfully, the backend driver unmaps the packet page from the address space of dom0 and informs the frontend driver about the transmission completion.
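The transmission flow above can be sketched as a small model. This is an illustrative simulation only, not Xen's real grant-table or netback API; all class and function names (GrantTable, Dom0Backend, and so on) are hypothetical.

```python
# Hypothetical sketch of the original Xen transmit path described above.
# All names are illustrative; this is not Xen's actual interface.

class GrantTable:
    """Tracks which domU pages have been granted to dom0, read-only."""
    def __init__(self):
        self.grants = {}                  # page_id -> permission

    def grant_readonly(self, page_id):    # step 1: frontend grants access
        self.grants[page_id] = "ro"

class Dom0Backend:
    """dom0 backend driver: maps the granted page, bridges, transmits."""
    def __init__(self, grant_table, bridge):
        self.grants = grant_table
        self.bridge = bridge              # dest MAC -> physical NIC name
        self.mapped = set()

    def handle_tx_request(self, page_id, dest_mac):
        assert self.grants.grants.get(page_id) == "ro"  # page must be granted
        self.mapped.add(page_id)          # step 3: map page read-only into dom0
        nic = self.bridge[dest_mac]       # bridge identifies the target NIC
        # step 4: the physical NIC driver would transmit the packet here
        self.mapped.discard(page_id)      # step 5: unmap the packet page
        return ("tx_done", nic)           # step 6: notify the frontend driver

gt = GrantTable()
gt.grant_readonly(page_id=7)              # step 1 in the frontend driver
backend = Dom0Backend(gt, {"aa:bb": "eth0"})
status, nic = backend.handle_tx_request(7, "aa:bb")  # step 2: send the request
print(status, nic)
```

The mapping and unmapping steps are modeled only as set membership; the point of the sketch is the ordering of the six steps, which mirrors the description above.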

Figure 1(b) shows the packet reception flow. With a NIC that supports DMA, an incoming packet is transferred into dom0 directly by DMA. Then, the bridge residing in dom0 finds the virtual NIC of the target domain and asks the backend driver to transfer the packet to that domain. To avoid page copying, the backend driver performs another kind of page grant operation that transfers the ownership of the packet page to the target domain. The frontend driver residing in the target domain can then remap the page into its address space and notify the kernel about the packet reception. Note that such page ownership transfers increase the memory size of the target domain and decrease that of dom0.

1 The page grant operation allows a domain to obtain access permission on pages that belong to another domain. It also allows transferring ownership of pages between two domains.

To balance the memory sizes, a domain has to transfer some free pages to the VMM before it can receive any packets, and dom0 can claim free pages from the VMM when the number of its free pages falls below a threshold.
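The page-ownership transfer and the balancing rule can be modeled in a few lines. This is a hedged sketch under the assumptions stated in the text (a domain donates pages before receiving, dom0 reclaims below a threshold); the names and the threshold value are invented for illustration.

```python
# Illustrative model of receive-side page ownership transfer and memory
# balancing; not Xen's actual mechanism, and all names are hypothetical.

class VMM:
    def __init__(self):
        self.free_pool = []               # pages donated by domains

class Domain:
    def __init__(self, name, pages):
        self.name = name
        self.pages = set(pages)

def prepare_to_receive(dom, vmm, n):
    """A domain donates n free pages to the VMM before it may receive."""
    for _ in range(n):
        vmm.free_pool.append(dom.pages.pop())

def flip_packet_page(dom0, target, vmm, page, threshold=2):
    """dom0 transfers ownership of the packet page to the target domain,
    then reclaims a page from the VMM if it drops below the threshold."""
    dom0.pages.discard(page)
    target.pages.add(page)                # target grows by one page
    if len(dom0.pages) < threshold and vmm.free_pool:
        dom0.pages.add(vmm.free_pool.pop())  # dom0 reclaims from the VMM

vmm = VMM()
dom0 = Domain("dom0", {1, 2})
domU = Domain("domU", {10, 11, 12})
prepare_to_receive(domU, vmm, 1)          # domU donates a page first
flip_packet_page(dom0, domU, vmm, page=1)
print(len(dom0.pages), len(domU.pages))
```

After one reception, domU has gained a page and dom0 has reclaimed one from the VMM's pool, so the total memory sizes stay balanced, as the text requires.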

Figure 1. Network Architecture of Xen. (a) Packet transmission: 1. the frontend driver grants dom0 read-only access to the packet page; 2. the frontend driver sends a request to the backend driver; 3. the backend driver maps the packet page; 4. the NIC driver transmits the packet; 5. the backend driver unmaps the packet page; 6. the backend driver notifies the frontend driver. (b) Packet reception: 1. the NIC driver processes an incoming packet; 2. the backend driver transfers the page ownership to domU; 3. the backend driver notifies the frontend driver; 4. the frontend driver maps the packet page.

Figure 2. Driver-domain-bypass Network Architecture

The driver-domain-bypass network architecture is described as follows. As shown in Figure 2, we move the backend driver and the NIC driver from dom0 into the VMM; they are called the VMMBE driver and the VMMNIC driver, respectively, in the rest of the thesis. When a domU needs to transmit or receive a packet, it asks the VMM instead of dom0 to handle the packet. Note that handling packet transmission and reception in the VMM avoids TLB flushes, because a switch between a domain and the VMM does not require a context switch.

Thus, the driver-domain-bypass network architecture does not increase the TLB miss rate.

Figure 2(a) shows the flow of packet transmission under the DBNet architecture. To transmit a packet, the frontend driver sends a request to the VMMBE driver, which retrieves the memory address of the packet from the request and identifies the target physical NIC.

Then the VMMBE driver asks the VMMNIC driver to transmit the packet. After the packet has been transmitted successfully, the VMMBE driver notifies the frontend driver about the transmission completion.

Note that packet transmission does not involve dom0; thus, no extra domain switches are required.

Figure 2 steps. (a) Packet transmission: 1. the frontend driver sends a request to the backend driver; 2. the VMMBE driver finds the physical NIC which the packet will pass through; 3. the VMMNIC driver transmits the packet; 4. the VMMBE driver notifies the frontend driver about the transmission completion. (b) Packet reception: 1. the VMMNIC driver processes the incoming packet; 2. the VMMBE driver gives the packet page to domU; 3. the VMMBE driver notifies the frontend driver; 4. the frontend driver maps the packet page.

In addition, the VMM can access the address space of domU directly, so no page grant operation is needed for transmission.
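Assuming, as stated above, that the VMM can read domU memory directly, the DBNet transmit path can be sketched as follows. The VMMBE and VMMNIC names follow the text, but the code structure, function names, and MAC-to-NIC table are illustrative assumptions.

```python
# Sketch of the driver-domain-bypass transmit path. Illustrative only:
# a real VMMNIC driver would program the NIC hardware, not call Python.

def vmmnic_send(nic, packet_page):
    """Stand-in for the VMMNIC driver transmitting the packet (step 3)."""
    pass  # a real driver would set up the NIC's DMA transfer here

class VMMBEDriver:
    def __init__(self, nic_of_mac):
        self.nic_of_mac = nic_of_mac      # dest MAC -> physical NIC

    def transmit(self, packet_page, dest_mac):
        # No page grant or mapping step: the VMM already sees domU's pages,
        # and dom0 is never involved.
        nic = self.nic_of_mac[dest_mac]   # step 2: find the physical NIC
        vmmnic_send(nic, packet_page)     # step 3: VMMNIC driver transmits
        return "tx_done"                  # step 4: notify the frontend driver

vmmbe = VMMBEDriver({"aa:bb": "eth0"})
result = vmmbe.transmit(packet_page=7, dest_mac="aa:bb")
print(result)
```

Compared with the six-step dom0 path of Figure 1(a), the grant, map, and unmap steps disappear, which is exactly the saving the architecture aims for.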

Figure 2(b) shows the flow of packet reception. An incoming packet is transferred into the VMM directly by DMA. To find the target domain, the VMMBE driver looks up the destination MAC address of the packet in its look-up table. Then, the VMMBE driver performs a page grant operation to transfer the ownership of the packet page to the target domain and notifies the frontend driver, which remaps the packet page into its address space and notifies its kernel about the packet reception. As mentioned above, the page ownership transfer increases the memory size of the target domain. To balance the memory sizes, the target domain should transfer some free pages to the VMM before it can receive any packets. Note that the memory-balancing logic in DBNet is simpler than that in the original Xen network architecture, since the former only involves the target domain and the VMM; this helps reduce the CPU load and the working set size. Moreover, packet reception does not involve dom0, and thus no extra domain switches are required.
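The reception path can likewise be sketched as a small model: a MAC look-up table maps the destination address to the target domain, and the page flip only succeeds if that domain has donated pages in advance. All names here (DBNetVMM, register, receive) are hypothetical illustrations of the scheme described above.

```python
# Hypothetical sketch of DBNet reception: MAC look-up, balance check,
# and page ownership transfer, all inside the VMM (dom0 not involved).

class DBNetVMM:
    def __init__(self):
        self.mac_table = {}   # dest MAC -> target domain name
        self.owner = {}       # page -> owning domain
        self.free_pool = {}   # domain -> pages donated in advance

    def register(self, mac, domain, donated_pages):
        """A domain registers its MAC and donates free pages up front."""
        self.mac_table[mac] = domain
        self.free_pool[domain] = list(donated_pages)

    def receive(self, page, dest_mac):
        domain = self.mac_table[dest_mac]   # step 2: look-up table
        if not self.free_pool[domain]:
            return None                     # no donated pages: cannot balance
        self.free_pool[domain].pop()        # keep memory sizes balanced
        self.owner[page] = domain           # transfer page ownership to domU
        return domain                       # frontend is then notified (step 3)

vmm = DBNetVMM()
vmm.register("aa:bb", "domU", donated_pages=[100])
print(vmm.receive(page=7, dest_mac="aa:bb"))
```

Because only the target domain and the VMM participate, the balancing bookkeeping is a single per-domain pool, which reflects the text's claim that DBNet's memory-balance logic is simpler than the dom0-mediated version.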
