

GUEST EDITORS’ INTRODUCTION

Software-Defined Networking: Standardization for Cloud Computing’s Second Wave

Ying-Dar Lin, National Chiao Tung University

Dan Pitt, Open Networking Foundation

David Hausheer, Technische Universität Darmstadt

Erica Johnson, University of New Hampshire InterOperability Laboratory

Yi-Bing Lin, National Chiao Tung University


As cloud computing’s second wave begins to transform the networking industry, a snapshot of developments in software-defined networking standardization suggests how its components—devices, controllers, applications, service chains, network function virtualization, and interfaces—are maturing.

Cloud computing’s first wave began with server centralization and virtualization—resulting in a paradigm shift that changed how data is stored and how software is used. The emerging second wave, software-defined networking (SDN), takes network centralization and virtualization, and especially network control, into the cloud.

After emerging in datacenters, SDN deployment has grown into the networking-as-a-service (NaaS) model that cloud service providers now offer to enterprise and residential subscribers. Because control-plane software (the software controlling the part of the network that carries the signaling traffic responsible for routing) is centralized in the controller and its applications, and the device data plane (the actual data-packet movement) is controlled remotely, devices can become simpler. SDN thus significantly reduces the number of administrators required and, as a result, cuts both capital and operational expenses. SDN also enables fast service orchestration because the data plane is highly programmable from the remote control plane at controllers and applications. In general, SDN takes networking into the computing domain and will increasingly adopt the standardization practices common to computing and software.
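The essence of that programmability is the match/action flow table: the controller installs rules remotely, and the device merely matches packets against them and applies the corresponding actions. The Python sketch below is a minimal, controller-agnostic illustration of this split; the names FlowRule, Device, install, and forward are invented for the example and do not correspond to any particular controller’s API.

    from dataclasses import dataclass

    @dataclass
    class FlowRule:
        match: dict      # header fields to match, e.g. {"eth_dst": "00:aa:bb:cc:dd:ee"}
        actions: list    # what to do on a match, e.g. ["output:2"]
        priority: int = 0

    class Device:
        """A forwarding device reduced to a flow table: all decision
        logic lives in the remote controller."""
        def __init__(self):
            self.flow_table = []

        def install(self, rule):
            # Called (conceptually) over the southbound API.
            self.flow_table.append(rule)
            self.flow_table.sort(key=lambda r: -r.priority)

        def forward(self, packet):
            # Data-plane behavior: apply the highest-priority matching rule.
            for rule in self.flow_table:
                if all(packet.get(k) == v for k, v in rule.match.items()):
                    return rule.actions
            return []  # table miss; a real switch could punt to the controller

    switch = Device()
    switch.install(FlowRule(match={"eth_dst": "00:aa:bb:cc:dd:ee"},
                            actions=["output:2"], priority=10))
    assert switch.forward({"eth_dst": "00:aa:bb:cc:dd:ee"}) == ["output:2"]

Because the device holds nothing but this table, it stays simple; the complexity moves to the controller, where it can be updated in software.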

NEED FOR STANDARDIZATION

However, in detaching the control plane to reside separately from the data plane, we must introduce new protocols between the two—namely, the southbound API between controllers and devices, and the northbound API between controllers and applications. Extending the control plane from controllers to applications, as with service chaining (SC), and the data plane from devices to network function virtualization (NFV) requires that newer mechanisms be added and that APIs be updated. To avoid fragmented markets, these southbound and northbound APIs need standardization as soon as possible. Multiple standards bodies are competing—or cooperating—for a piece of the action; these include the Open Networking Foundation (ONF), the Internet Engineering Task Force (IETF), the European Telecommunications Standards Institute (ETSI), and the International Telecommunication Union (ITU). ONF is taking the lead in growing the dominant OpenFlow protocol for the southbound API: ONF-certified labs can help monitor how the defined standards are implemented on commercial products. However, the northbound, SC, and NFV APIs are still under development, and this work is more likely taking place in open source software projects than in standards committees. For stable SDN growth in a unified market, where all devices, controllers, applications, service chains, and NFVs are highly interoperable, standardization is critical, along with associated test-lab facilities that enforce the standards.
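Because the northbound API remains unstandardized, each controller currently defines its own. The sketch below shows only the generic shape such an interface tends to take: a REST-style request carrying an application’s intent, which the controller translates into southbound rules. The controller address, endpoint path, and JSON fields are all hypothetical.

    import json
    import urllib.request

    CONTROLLER = "http://controller.example:8080"  # hypothetical address

    # The application states an outcome ("connect these hosts") rather
    # than programming individual flow entries on each device.
    intent = {
        "src_host": "10.0.0.1",
        "dst_host": "10.0.0.2",
        "bandwidth_mbps": 100,
    }
    request = urllib.request.Request(
        CONTROLLER + "/northbound/v1/paths",  # hypothetical endpoint
        data=json.dumps(intent).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    # urllib.request.urlopen(request) would submit it; the response
    # format would likewise be controller-specific.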

IN THIS ISSUE

We focused our examination of SDN on its standardization and market maturity—for example, by taking a look at ONF and what its certified test labs have accomplished so far. Rather than issuing an open call for paper submissions, we invited ONF leaders as well as a number of selected scientists and practitioners from both industry and academia to contribute articles. After a rigorous peer-review process, we accepted six papers covering various SDN perspectives.

Evolution of SDN and OpenFlow

Many principles behind SDN and OpenFlow are not entirely new; the intellectual history of programmable networks that led to SDN is well documented. “SDN and OpenFlow Evolution: A Standards Perspective” by Jean Tourrilhes, Puneet Sharma, Sujata Banerjee, and Justin Pettit describes how the SDN framework and the OpenFlow protocol have evolved during ONF’s standardization process. The Extensibility Working Group, whose activities have led to the evolution of the OpenFlow protocol, has been fundamentally driven by specific use cases. This is in line with ONF’s desire to measure its success by SDN’s market acceptance, which depends on business cases.

Open source standards testing program

Aligning economic, technological, and market drivers in the context of an open source standard like the OpenFlow network specification is challenging. “Aligning Technology and Market Drivers in an Open Source Standards Testing Program” by Rick Bauer, Ron Milford, and Li Zhen presents and analyzes ONF’s testing program. The authors describe how the program was designed to leverage both collaboration and competition among participants; the “team of rivals” model ONF created aims to build consumer confidence, industry competition, and trustworthy product validation.

Service function chaining and network service headers

SC defines a new service deployment model promising topological independence and elastic scaling of services. “Service Function Chaining: Creating a Service Plane via Network Service Headers,” by Paul Quinn and Jim Guichard, presents NSH, a standard data-plane format that creates a service plane for network SC. The authors outline how the NSH protocol, submitted to the IETF standards track in February 2014, provides the data-plane information needed to meet those goals.
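To make the mechanism concrete, the sketch below packs the two fields at the heart of the NSH proposal: a 24-bit service path identifier that selects which chain a packet follows, and an 8-bit service index that each service function decrements as the packet advances. This is a simplified illustration of the concept only, not the draft’s exact wire format, which also defines a base header and metadata context fields.

    import struct

    def pack_service_path(service_path_id, service_index):
        # One 32-bit word: 24-bit path identifier, 8-bit service index.
        assert 0 <= service_path_id < (1 << 24)
        assert 0 <= service_index < (1 << 8)
        return struct.pack("!I", (service_path_id << 8) | service_index)

    def traverse_service_function(header):
        # Each service function decrements the index before forwarding,
        # so the packet itself records its progress along the chain.
        (word,) = struct.unpack("!I", header)
        path_id, index = word >> 8, word & 0xFF
        return pack_service_path(path_id, index - 1)

    header = pack_service_path(service_path_id=42, service_index=255)
    header = traverse_service_function(header)  # service index is now 254

Because the chain state travels in the packet header rather than in the network, service functions can be placed and scaled without regard to topology, which is the independence the deployment model promises.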

Open source and network control planes

SDN is a promising approach that aims to open the interfaces of proprietary networking devices to improve orchestration, lower operating expenses, and enable innovation. In “When Open Source Meets Network Control Planes,” Christian Esteve Rothenberg, Roy Chua, Josh Bailey, Martin Winter, Carlos N.A. Corrêa, Sidney C. de Lucena, Marcos Rogério Salvador, and Thomas D. Nadeau discuss the role that open source plays in transforming SDN’s software and hardware in the networking landscape. The authors describe the RouteFlow project, which aims to combine open source IP routing stacks and OpenFlow networks, and they share the operational experience of its use at a live Internet exchange.

Deploying software-defined networks

Many networks, especially enterprise networks, face challenges when migrating from traditional platforms to SDN. In “Software-Defined Networks: Incremental Deployment with Panopticon,” Marco Canini, Anja Feldmann, Dan Levin, Fabian Schaffert, and Stefan Schmid describe an incremental approach that combines traditional and SDN switches through interim hybrid networks. The authors claim that, thanks to their Panopticon architecture, SDN benefits can extend over an entire network even when only a small fraction of it is SDN-enabled.

Virtualizing network functions in home networks

Home networks are frequently based on relatively low-cost devices that are failure prone and thus require frequent user intervention. In “Virtualization of Home Network Gateways,” Marion Dillon and Timothy Winters present network function virtualization, a new approach that aims to move the home network gateway to the cloud.

This first special issue of Computer devoted to SDN focuses on standardization. While we expect future issues will broaden this scope, we hope the articles included here will provide readers with a snapshot that suggests the prospects and many possibilities for this developing technology.

Ying-Dar Lin is a distinguished professor of computer science at National Chiao Tung University (NCTU) in Taiwan. His research interests include design, analysis, implementation, and benchmarking of network protocols and algorithms; quality of service; network security; and software-defined networking (SDN). Lin received a PhD in computer science from the University of California, Los Angeles. In 2002, he founded the Network Benchmarking Lab (NBL, www.nbl.org.tw), which recently became an approved test lab of the Open Networking Foundation (ONF). He is an IEEE Fellow, an IEEE Distinguished Lecturer, and a research associate of ONF. He is currently on the editorial boards of IEEE Transactions on Computers, Computer, IEEE Network, IEEE Communications Magazine (Network Testing Series), IEEE Wireless Communications, and others. Contact him at ydlin@cs.nctu.edu.tw.

Dan Pitt is the executive director of ONF. Pitt spent 20 years developing networking architecture, technology, standards, and products at IBM Networking Systems, IBM Research Zurich, Hewlett Packard Labs, and Bay Networks, where he was vice president of the Bay Architecture Lab. A former dean of the School of Engineering at Santa Clara University, from 2007 to 2011 he served in executive operational roles in startup companies in the US, Canada, and Australia. His research interests include networking architecture, technology, standards, and products. Pitt received a PhD in computer science from the University of Illinois. Contact him at dan.pitt@opennetworking.org.

David Hausheer is an assistant professor in the Department of Electrical Engineering and Information Technology of Technische Universität Darmstadt and was a visiting scholar at the University of California, Berkeley, from 2009 to 2011. His research interests include SDN, peer-to-peer and overlay networks, energy-efficient networking, and network economics. Hausheer received a PhD in technical sciences from ETH Zurich. He is an executive committee member of the IEEE Computer Society Technical Committee on Computer Communications and has served as an organizing committee member for the IEEE Board of Directors. Contact him at hausheer@ps.tu-darmstadt.de.

Erica Johnson is the director of the University of New Hampshire InterOperability Laboratory. Her research interests include conformance, interoperability, robustness, and performance testing for networking technologies such as broadband, IP, wireless, and Ethernet. Johnson received a BS in computer science and an MBA from the University of New Hampshire. Contact her at erica.johnson@iol.unh.edu.

Yi-Bing Lin is a chair professor at National Chiao Tung University and a deputy minister for the Ministry of Science and Technology in Taiwan. His research interests include personal communications, mobile computing, IP telephony, and 5G mobile communications. Lin received a PhD in computer science from the University of Washington. He is a Fellow of the American Association for the Advancement of Science, ACM, IEEE, and the Institution of Engineering and Technology. Contact him at liny@cs.nctu.edu.tw.
