
Research challenges towards the Future Internet

Marco Conti a,⇑, Song Chong b, Serge Fdida c, Weijia Jia d, Holger Karl e, Ying-Dar Lin f, Petri Mähönen g, Martin Maier h, Refik Molva i, Steve Uhlig j, Moshe Zukerman d

a IIT-CNR, Via G. Moruzzi 1, 56124 Pisa, Italy
b KAIST, Gusong-dong, 373-1, Yusong-gu, Daejeon, Republic of Korea
c Université Pierre et Marie Curie, 104 Avenue du Président Kennedy, 75016 Paris, France
d City University of Hong Kong, Tat Chee Avenue, Kowloon, Hong Kong
e Universität Paderborn, Warburger Str. 100, Paderborn, Germany
f National Chiao Tung University, 1001 University Road, Hsinchu, Taiwan
g RWTH Aachen University, Institute for Networked Systems, Kackertstrasse 9, 52072 Aachen, Germany
h INRS – University of Quebec, 800, Gauchetière West, Montreal, QC H5A 1K6, Canada
i Eurécom, 2229 route des Crêtes, BP 193, 06560 Sophia-Antipolis Cedex, France
j TU Berlin/Deutsche Telekom Laboratories, Ernst-Reuter-Platz 7, Berlin 10587, Germany

⇑ Corresponding author. Tel.: +39 050 315 3062; fax: +39 050 3152593. E-mail address: marco.conti@iit.cnr.it (M. Conti).

Article history: Available online 5 September 2011

Keywords: Internet architecture and protocols; Future Internet; Optical networks; Wireless networks; Cognitive networks; Green networking; Data and communication security; System security

Abstract

The convergence of computer-communication networks towards an all-IP integrated network has transformed the Internet into a commercial commodity and stimulated an unprecedented offering of novel communication services that are pushing the Internet architecture and protocols well beyond their original design. This calls for extraordinary research efforts at all levels of the protocol stack to address the challenges of existing and future networked applications and services in terms of scalability, mobility, flexibility, security, etc. In this article we focus on some hot research areas and discuss the research issues that need to be tackled to address the multiple challenges of the Future Internet. Far from being a comprehensive analysis of all the challenges faced by the Future Internet, this article tries to call the attention of Computer Communications readers to new and promising research areas, identified by members of the journal editorial board, in order to stimulate further research activities in these areas. The survey of these research areas is then complemented with a brief review of on-going activities in the other important research areas towards the Future Internet.

© 2011 Elsevier B.V. All rights reserved.

1. Introduction

In recent years, all communications media have been converging towards the use of the Internet platform. This has stimulated an unprecedented offering of new (ubiquitous) IP services ranging from interactive IPTV and social media to pervasive urban sensing, which are pushing towards a continuous increase in the number of Internet users and in their demand for ubiquitous, reliable, secure and high-speed access to the Internet. This generates several challenges, and hence research opportunities, at all layers of the Internet protocol stack.

To cater for the increasing bandwidth demand, network technologies with higher capacity are being introduced in both the wired and the wireless Internet. Indeed, in the wired part of the network, we observe an increased adoption of optical networking technologies, both inside the Internet core (i.e., ISP networks) and at the network edges (i.e., fiber at home). A similar trend is observed in the wireless part of the network, where there is a joint effort of industry and academia to increase the capacity of wireless network technologies. In the wireless field the networks' capacity is constrained by the limited spectrum, and hence research efforts are mainly devoted to increasing the efficiency of spectrum usage.

The new ubiquitous and multimedia communication services are radically changing the Internet's nature: from a host-to-host communication service to a content-centric network, where users access the network to find relevant content and possibly modify it. The user-generated-content (UGC) paradigm is further pushing towards this evolution, while social platforms have an increasing role in the way users access, share and modify content. The radical departure from the objectives that drove the original Internet design is now pushing towards a re-design of the Internet architecture and protocols to take into account new design requirements that are outside the original Internet design. Security and privacy are clearly key requirements, but several other requirements must be taken into account in the Future Internet design, such as supporting users' mobility, efficiently handling multimedia and interactive services, and tolerating network partitioning and/or node disconnections. In particular, energy efficiency is


emerging as a key design requirement to make the Internet sustainable, as several reports indicate that the energy consumption due to Internet technologies is already high and that, without paying attention to it, the problem will become critical as the Internet's role in society expands.

This paper presents an in-depth analysis of key research areas towards the Future Internet. We wish to remark that the paper is far from being a comprehensive analysis of all the challenges for building the Future Internet, but presents a selection of topics on which we expect/solicit more research contributions in the near future.

The paper is organized according to a layered network organization; in the first three parts we focus on network technologies, Internet architecture and protocols, and application issues, respectively. The fourth part is dedicated to cross-layer issues, i.e., research issues that affect several layers of the protocol stack. More precisely, in Section 2 we analyse the major challenges in building a scalable and robust (wired and wireless) network infrastructure by discussing the research challenges in the optical (Section 2.1) and wireless (Section 2.2) networking fields. In Section 3, we analyse the challenges related to the Internet architecture and protocols. Then, in Section 4, we discuss the role of mobile-phone technology for delivering multimedia services to ubiquitous users. Sections 5 and 6 focus on two cross-layer research issues: energy efficiency (Section 5) and security (Section 6). Specifically, in Section 5 we present and discuss the research issues emerging when we include energy efficiency as a major constraint in the Internet design (this is also referred to as the Green Internet or Green Networking). Section 6 is devoted to discussing the research challenges that emerge when we consider security requirements at both the data and the system level. Section 7 concludes the paper with a brief review of other important research areas that have not been covered in detail in this paper.

2. Network technologies

The development of a broadband and ubiquitous Internet is mainly based on optical network technologies for building high-capacity transport and access networks, and on wireless network technologies for providing ubiquitous Internet access. Accordingly, in Section 2.1 we review and discuss the optical networking research challenges, while Section 2.2 is devoted to analyzing and discussing some research challenges for building broadband and scalable wireless networks.

2.1. Optical networking

During the Internet bubble, the expected bandwidth requirements were hugely overestimated and way too many optical networks were built, flooding the market with unneeded capacity. As a consequence, prices for dark fiber became so low that customers, e.g., banks and corporations with large data transfer needs, started to buy up low-cost dark fibers and run their own optical links and networks. Similarly, the prices for monthly leases of optical fiber connections decreased significantly. For instance, the prices for monthly leases on 10 Gb/s links between Miami and New York City fell from around $75,000 in 2005 to below $30,000 at the end of 2007, and prices for 10 Gb/s connections between New York and London fell by 80% from 2002 to 2007 [1]. To make things look even dimmer, it is worthwhile to note that about 80–90% of the world's installed fiber is unlit, i.e., not used, and only 18% of the world's submarine fiber is lit [2]. Given these huge amounts of affordable unused capacity in already installed and heavily overbuilt fiber infrastructures, one naturally asks what the research challenges in optical networks are, if any.

The major problem with dark fiber is that it is abundantly installed in some areas, while it is not available at all in other places. Optical networks are commonplace in wide and metropolitan areas, but in today's access networks fiber has just started to pave all the way with glass to individual homes and businesses, giving rise to fiber-to-the-home/business (FTTH/B) networks. According to the latest broadband-related statistics of the OECD (Organisation for Economic Co-operation and Development) broadband portal, FTTH/B connections are still a minority in almost every OECD country, accounting for only 9% of the 271 million fixed wireline broadband subscribers worldwide. However, it is important to note that both digital subscriber line (DSL) and cable networks, the two most widely deployed wired broadband technologies today, rely on so-called deep fiber access solutions that push fiber ever deeper into the access network [3]. While copper will certainly continue to play an important role in current and near-term broadband access networks, it is expected that FTTH/B deployment volume will keep increasing gradually and will eventually become the predominant fixed wireline broadband technology by 2035 [4]. FTTH/B networks not only alleviate the notorious first/last mile bandwidth bottleneck, but also give access to the ever-increasing processing and storage capabilities of the memory and CPUs of desktops, laptops, and other wireless handhelds. While current desktop and laptop computers commonly operate at a clock rate of a couple of GHz with a 32-bit wide backplane, resulting in an internal flow of 2–8 Gb/s with today's limited hard drive I/O, future desktops and laptops are expected to reach 100 Gb/s [5]. In fact, optical buses can now be built right onto the circuit board, which will unveil computer systems 100 times as fast as anything available today [6].

Recently, the convergence of optical broadband access networks with their wireless counterparts has been receiving an increasing amount of attention [7]. These hybrid optical–wireless networks have the potential to provide major cost savings by providing wired and wireless services over the same infrastructure. A lot of research activities have focused on the optical generation of radio frequencies and remote modulation schemes in order to build low-cost remote antenna units (RAUs) for radio-over-fiber (RoF) networks. RoF networks have been studied for decades and are well suited for access solutions with a centralized control station, e.g., WiMAX and cellular networks. However, wireless local area network (WLAN)-based RoF networks suffer from a limited fiber length of less than 2 km due to the acknowledgment (ACK) timeout value of the widely deployed distributed coordination function (DCF), which is set to 9 μs and 20 μs in IEEE 802.11a/g and IEEE 802.11b WLAN networks, respectively. These shortcomings can be avoided in recently proposed radio-and-fiber (R&F) networks, where access to the optical and wireless media is controlled separately, in general by using two different medium access control (MAC) protocols, with protocol translation taking place at the optical–wireless interface [8]. R&F networks pose a number of new challenges and opportunities. Among others, these challenges involve the design and investigation of hybrid MAC protocols, integrated path selection algorithms, integrated channel assignment and bandwidth allocation schemes, optical burst assembly and wireless frame aggregation techniques, as well as flow and congestion control protocols to address the mismatch of optical and wireless network data rates [9].
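The roughly 2 km limit can be recovered with a simple back-of-envelope check (our illustration, not taken from the cited references): the ACK must return before the DCF timeout expires, so the round-trip propagation delay over the fiber must stay below the timeout. Assuming a propagation speed in fiber of about v ≈ 2 × 10^8 m/s,

\[
L_{\max} \approx \frac{v\,T_{\mathrm{ACK}}}{2} = \frac{(2\times 10^{8}\,\mathrm{m/s})(20\times 10^{-6}\,\mathrm{s})}{2} = 2\ \mathrm{km},
\]

which ignores processing and wireless propagation delays and therefore overestimates the achievable reach; the 9 μs timeout of IEEE 802.11a/g correspondingly limits the fiber to well under 1 km.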

Another important area of ongoing research is the migration from current Gigabit-class ITU-T G.984.x gigabit passive optical network (GPON) and IEEE 802.3ah Ethernet passive optical network (EPON) technologies to next-generation PONs (NG-PONs). NG-PON technologies can be divided into the following two categories [10]:


– NG-PON1: This type of technology allows for an evolutionary growth of existent Gigabit-class PONs and supports their coexistence on the same optical distribution network (ODN).
– NG-PON2: This category of enabling technologies envisions a revolutionary upgrade of current PONs, giving rise to disruptive NG-PONs without any coexistence requirements with existent Gigabit-class PONs on the same ODN.

NG-PON1 technologies include a number of performance-enhancing options, most notably XG-PONs (the Roman numeral X stands for 10 Gb/s), wavelength division multiplexing (WDM) overlay of multiple XG-PONs, or the deployment of reach extenders to enable long-reach PONs. NG-PON1 will be gradually replaced by NG-PON2 solutions after resolving a number of issues related to the research and development of advanced optical network components and enabling technologies such as dense WDM (DWDM), optical code division multiplexing (OCDM), or orthogonal frequency division multiplexing (OFDM) [10].

As broadband access with rates of 100 Mb/s per subscriber becomes increasingly commonplace in most developed countries, the power consumption of the Internet is estimated to rise to more than 7% of a typical OECD country's national electricity supply, corresponding to a power consumption of several TWh, expenses of millions of dollars, and tons of carbon emissions per year. The power consumption of today's Internet is largely dominated by its access networks, which account for roughly 70% of the total power consumption. PONs have been shown to provide the lowest-energy solution for broadband access, clearly outperforming alternative fiber, copper, or wireless access solutions based on optical point-to-point Ethernet, DSL, or WiMAX technologies [11]. While energy efficiency and low-power techniques have long been considered important design criteria for wireless networks due to the limited battery life of mobile terminals, more research on energy-aware wired networks is desirable, given that the first IEEE standard for Energy Efficient Ethernet (EEE), IEEE 802.3az, was ratified only in September 2010.

Due to the ever-increasing speed of optical access networks, the bandwidth bottleneck will move from the first/last mile toward metropolitan and wide area networks. To provide higher bandwidth efficiency, current optical metro and core wavelength-switching networks based on reconfigurable optical add-drop multiplexers (ROADMs) may be required to resort to more efficient switching techniques at the subwavelength granularity in the near to mid term. A wide variety of optical switching techniques have been investigated, including waveband switching (WBS), photonic slot routing (PSR), optical flow switching (OFS), optical burst switching (OBS), and optical packet switching (OPS) [12]. According to [13], however, there is little evidence that optical subwavelength switching techniques will become competitive with electronic switches in terms of energy consumption related to operation and heat dissipation. It was shown in [14] that photonic technologies are significantly more power hungry than CMOS and that, except for very simple signal processing subsystems, CMOS will continue to be the most power-efficient technology. In response to these issues, a promising solution toward practical OPS networks might be the use of mostly passive wavelength-routing components, e.g., athermal arrayed-waveguide gratings, and replacing fast optical packet switching with fast tuning lasers [15]. Despite the fact that small optical recirculation loops might be a viable solution for optical core routers [16], many open issues remain to be addressed with regard to the power consumption and functionality of future optical switching equipment. Instead of mimicking their electronic packet-switching counterparts, research efforts should explore novel optical forwarding paradigms, as the forwarding engine and the power supply with its fans and blowers are the two most energy-consuming building blocks of today's high-end electronic routers [11]. This is of particular importance since electronic routers might evolve from packet-switching to flow-switching devices [17] and the sum of all forms of video (TV, video on demand, Internet, and P2P) is expected to account for over 91% of global consumer traffic by 2014, whereby Internet video alone will account for 57% and 3D/HD Internet video will comprise 46% of all consumer Internet video traffic by 2014 [18]. These video streams should be treated as flows rather than individual packets, which, despite all the decades of research on advanced optical switching techniques, somewhat ironically calls for the old-fashioned yet widely deployed optical circuit switching, which in the past has successfully been shown to provide the desired network simplification and cost savings by reducing the number of required optical-electrical-optical (OEO) conversions through optical bypassing of electronic core routers.

In summary, FTTH/B networks are poised to become the next major success story of optical networking technologies, whereby PONs will play a key role in their evolutionary or revolutionary upgrade to NG-PONs by leveraging a number of new enabling technologies, e.g., OCDM and OFDM. In alignment with the recently standardized EEE, PONs will also be at the heart of energy-efficient optical and bimodal fiber-wireless (FiWi) broadband access networks, which pose a number of new challenges and opportunities at the MAC layer, e.g., the design and investigation of hybrid MAC protocols in R&F-based FiWi networks as well as the integration of optical burst assembly and wireless frame aggregation techniques. In optical metro and wide area networks, research efforts should explore novel optical forwarding paradigms apart from OBS and OPS, whose wide deployment in real-world networks remains questionable despite years and decades of research, not only due to their technological complexity and increased power consumption but also due to the fact that the vast majority of global consumer traffic will be based on video streams.

2.2. Wireless networks

The wireless communications world is rapidly and dramatically changing, and entering new and uncharted territory. Mobile data traffic is growing at an unprecedented rate, well beyond the capacity of today's 3G networks. Many studies [19,20] forecast that by 2014, an average mobile user will consume 7 GB of traffic per month, which is 5.4 times more than today's average user consumes per month, and that the total mobile data traffic throughout the world will reach about 3.6 exabytes per month, a 39-fold increase from 2009.

The increasing numbers of services and users are generating tough challenges that need to be met by the research community. So far, physical-layer research and development have been able to cater for the increasing appetite for capacity. However, the need to increase spectral efficiency and to consider the overall energy consumption of the systems, combined with our closeness to the Shannon limit, indicates that the networking community and research on the ''upper OSI layers'' have to take a leading role in finding solutions for these needs and solving the grand research challenges. In Sections 2.2.1 and 2.2.2 we analyse two approaches to cope with bandwidth scarcity in wireless networks. Specifically, in Section 2.2.1 we discuss cognitive networks and dynamic spectrum sharing systems [134], while in Section 2.2.2 we discuss the spatial reuse of the spectrum by adopting smaller and smaller cells.

The use of wireless channels is highly affected by interference, thus to increase the efficiency of wireless communications we need to cope with channel interference. In Section 2.2.3 we present and discuss the research challenges of exploiting cooperative diversity to combat fading, and thus increase channel efficiency.


2.2.1. Cognitive networks

Cognitive radio networks and various other cooperative communications methods have been proposed as approaches to tremendously increase spectral efficiency and to provide radically new and more efficient wireless access methods [35,36]. Cognitive radios (CR) have so far been studied mostly in the context of dynamic spectrum access (DSA). The introduction of the DSA concept has produced an avalanche of new ideas on how to break the gridlock of spectrum scarcity. The USA in particular, thanks to the initiatives of the FCC (Federal Communications Commission) and the former DARPA XG research program, has led the way in reconsidering spectrum regulations. Regardless of the advances, a lot still needs to be done even in the domain of DSA-capable cognitive radios and networks. One of the challenges is to combine interdisciplinary research approaches in a fruitful and meaningful way. Traditionally the engineering and computer science community, particularly in the context of radio networks, has taken the regulatory and business context as fixed boundary conditions. However, in order to speed up progress we need to consider the implications and possibilities that regulatory changes can and should induce. This is a tremendous challenge since only few network and wireless researchers are educated or knowledgeable in regulatory issues and network economics. Nevertheless, one can argue that it is especially the intersection of technology, regulation and economics that can provide new and powerful insights for research and future development. Thus one of the key problems for network researchers is that, in order to advance, we need to embrace interdisciplinary research topics. Moreover, we need to use and develop more detailed radio network models – but this needs to be done without losing a certain level of abstraction so that our models and tools stay general enough. This tension between case-specific, detailed modelling or simulations and generalized abstractions is likely to increase in coming years.

An even larger challenge in cognitive radio networks lies in the domain of self-organization and machine-learning-based optimization. It is sometimes forgotten that Mitola's original vision of cognitive radios was not limited to dynamic spectrum access, but focused on general context sensitivity and environmental adaptation. Thus Mitola's cognitive cycle shares many aspects with cooperative networking, autonomous networks, and adaptive radios [35,37]. Fulfilling this research vision has proven to be harder than expected. It is still not clear even what sort of network architecture would be best suited to support the so-called cognitive cycle. In fact, it is not even proven that machine learning methods in general can provide optimization gains that pay off the increased complexity of networks. It seems that new approaches are required to understand the fundamental possibilities and limits in this area [38,39]. Combining methodologies and tools from the networking, control theory, machine learning, and decision theory communities is most likely the best approach, but we need clearly stated research problems and a generation of researchers who are familiar with combining knowledge from different disciplines. In general, communications research itself is in danger of becoming more and more fragmented, with articles focusing on increasingly narrow problems, while at the same time great breakthroughs seem to require more interdisciplinary approaches with a great deal of knowledge and abstractions from different fields. Perhaps apart from research problems, we are also faced with educational and research methodology challenges.

One of the great challenges in radio networks research is also a partial lack of hard data. For example, in the case of the development of wireless networks and DSA systems, we are often forced to use anecdotal trend data from various white papers. At best we may have some small-scale measurement campaigns done by universities, which often do not even provide raw data for the community to use and validate the obtained results. One can argue that fundamental theoretical work does not need such data, but certainly more practically oriented research work, e.g., on DSA, scheduling and interference avoidance, becomes hardly tenable, or at least we lack convincing justification for our research problems and conclusions. And it is sometimes hard to do fundamental research unless you first know what the problem is. It seems that in the future we should require more measured facts instead of being content with visions and general motivational statements. There are some weak signals that this approach may become more dominant in the coming years, as some research groups have recently started to share their data more openly or at least have based their work on measurement campaigns.

One example of the need to combine knowledge from different fields and to rely on measurement-based facts is radio network topologies. A lot of work was, and indeed is, done by network researchers assuming extremely simplified propagation models for radio systems – the so-called disc model is still used far too often. This abstraction may be useful in some cases, but often it can lead to highly misleading quantitative – or even qualitative – results. Similarly, we should pay more attention to understanding the topology of radio networks. So far most of our theoretical and simulation methods are based on the assumption that radio nodes are randomly (Poisson) distributed over the operating area. Recent measurements and general arguments show that this is hardly true at all spatial scales. Thus we need to reconsider our models and previous conclusions that might have been drawn under such assumptions [40,41]. More generally, it is still not clear how we should abstract various propagation and network topology issues so that they are analytically tractable and reasonable to simulate without losing the quantitative and qualitative prediction power of the models.

Certainly the largest research challenge is the management of the increasing complexity of systems. This challenge is evident both in the practical design of systems and in the theoretical modelling of increasingly heterogeneous networks. The same pain is also shared by industrial R&D departments in the standardization and development of actual deployments. It seems that as complexity increases, we have to be especially careful with the non-linearity and parameter sensitivity introduced at the system level. Cross-layer design has often been proposed as a solution for many optimization problems and it is also strongly linked to the cognitive networking paradigm. However, as this approach can also lead to increased non-linearity and complexity, one should heed the recent warning by Kawadia and Kumar that cross-layer design is not always the most efficient overall approach [42]. One of the great research challenges is to find out where the sweet spot for cross-layer design is, and what would be an ideal network architecture that allows both simplicity and cross-layer design.

Finally, it may be useful to end this musing with some less philosophical and more specific topics on future research opportunities and challenges. A first, more practical item to emphasise is the challenge of providing better experimental research tools. The academic research community used to lead the development of new system concepts. Lately an increasing number of systems are beyond the reach of academic research and can be handled only by a few large and well-funded companies. One only needs to think about the development of routers, wireless access systems, etc. However, it does not need to be so. Innovative approaches to developing open research platforms, with well-defined interfaces so that efforts can be shared, could again give academic groups the capability to experiment and, if not to lead, at least to contribute. Recently introduced open platforms such as OpenFlow (Stanford), WARP boards (Rice) and USRP/GNU Radio are proof that a lot can be done if talented people and the community put their forces behind common research platforms. Where we need research is on how to develop open APIs and interfaces to handle different hardware and protocol stacks, ensuring that development effort is minimized and research opportunities maximized [43]. There has been some effort to develop open interfaces and definition methods, but research work in this domain is still lacking momentum.

A final research challenge to mention in this article is the prediction that research on the Medium Access Control (MAC) layer and on scheduling in general is likely to have a renaissance. As wireless access is becoming the dominant method of accessing data, and especially if DSA and CR networking approaches become mainstream, it is clear that we have to revisit the fundamental concepts of, and design approaches for, MAC layers and scheduling. MAC layers will most likely have to become more aware of the underlying network topology and physical-layer limitations. In the context of cognitive radios, the MAC layer should often be able to distinguish between primary and secondary users. Many of these concepts have not been considered part of MAC-layer knowledge, and thus there remains a lot to be done. Scheduling itself is one of the approaches that could provide rapid efficiency gains in heterogeneous networks. For example, selecting scheduling priorities so that different applications and heterogeneous networks are taken into account is likely to provide a lot of research challenges for the next decade.

The next decade of networking research looks indeed promising. The increasing amount of data communications and the emergence of new networks, including machine-to-machine communications, are generating a demand for new results to ensure increased efficiency. We will have no lack of challenges in both the theoretical and practical domains. The real challenge is to have the community well educated and ready, to fight against the fragmentation of our knowledge, and to keep our focus on worthy big problems.

2.2.2. Spectrum spatial reuse

There are several approaches to meet the explosive traffic growth in wireless networks, one of which is upgrading today's 3G networks to a next-generation cellular network with enhanced PHY (physical layer) technology. However, the enhancement of PHY technologies is approaching its theoretical limit and may not scale well with the explosive growth rate of mobile data traffic. According to Cooper's law, the number of voice calls carried over radio spectrum has increased a million-fold since 1950, and Cooper also predicted that this would continue for the foreseeable future [21]. Of that million-fold improvement, roughly 25 times came from using more spectrum, 5 times from using frequency division, and another 5 times from the enhancement of PHY technologies. But the lion's share of the improvement – a factor of about 2700 – Cooper suggested was the result of spatial reuse of spectrum in smaller and smaller cells. Cooper's law tells us that, despite being close to the Shannon limit, there is no end to practical increases in wireless capacity if we are prepared to invest in an appropriately dense infrastructure. The small-cell gain, however, comes at a high cost. As the infrastructure becomes denser with the addition of smaller cells, inter-cell interference (ICI) inevitably becomes higher and more complex to manage. Thus, a key technical challenge in scaling wireless capacity by increasing the density of cells is how to effectively manage ICI in such a complex cellular deployment. Another important technical requirement for small-cell networks is the self-x capability of cells, where x includes configuration, optimization, diagnosis, healing, etc., since small-cell base stations would be less reliable and in many cases their deployment/removal and on/off switching would be done by individual subscribers in an ad hoc manner, not by operators in a pre-planned manner. The self-x capability is an enabler for fully distributed autonomous network management that can realize the small-cell gain without suffering much from the exponentially growing complexity of network management.

The ICI management problem can be tackled at the PHY layer and also at upper layers such as the MAC, routing and transport layers. Techniques such as Successive Interference Cancellation and Interference Alignment are examples of PHY-layer ICI management techniques. Mathematically, the ICI management problem at upper layers and the self-x problem can be tackled in the light of the stochastic network utility maximization (NUM) problem with queue stability constraint [22–24], which is generally given by

\[
\max_{R \in \Lambda} \; \sum_{s} U_s(R_s), \qquad (1)
\]

where R = [R_s] is the vector of long-term average rates R_s of all users s in the network, Λ is the unknown long-term rate region of the network, which can be shown to be always convex if the randomness in wireless channels has a finite set of states and the sequence of states forms an irreducible Markov chain with a stationary distribution, and U_s is a concave utility function of user s. Assume that exogenous arrivals to the network follow a stochastic process with finite mean and that each wireless link is equipped with a transmission queue. It is known that the above NUM problem can be asymptotically solved by solving the MAC-layer problem in (3) in conjunction with the transport-layer algorithm in (2) (here we assume that the route is fixed for all flows, for simplicity) [22–24]. At time t, each source s independently determines its instantaneous data rate r_s(t) by

\[
r_s(t) = {U_s'}^{-1}\!\left(\sum_{l \in L(s)} q_l(t)\right), \qquad (2)
\]

where L(s) is the set of links on the route of flow s, U_s' is the first derivative of U_s, and \sum_{l \in L(s)} q_l(t) is the end-to-end queue length of flow s at time t. Note that this form of source congestion control can be easily implemented at the transport layer in a fully distributed manner with the help of end-to-end signalling to carry queue length information to the source. In fact, the necessity of end-to-end signalling can be removed without losing optimality if each flow has a separate queue at every link.
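As a toy illustration of the transport-layer update (2) (our sketch, not code from the cited works): assuming the common logarithmic utility U_s(x) = log x, for which (U_s')^{-1}(y) = 1/y, the source rate is simply the inverse of the end-to-end queue backlog on its route. The flow and queue names below are made up.

```python
# Minimal sketch of the transport-layer rate update in Eq. (2), assuming
# logarithmic utilities U_s(x) = log(x), so that (U_s')^{-1}(y) = 1/y.
# Link identifiers and queue lengths are illustrative only.

def source_rate(route_links, queue_len):
    """Return r_s(t) = (U_s')^{-1}( sum of queue lengths on the flow's route )."""
    backlog = sum(queue_len[l] for l in route_links)
    return float('inf') if backlog == 0 else 1.0 / backlog

# Example: flow s traverses links 0 and 2; per-link queue lengths q_l(t).
queues = {0: 4.0, 1: 1.0, 2: 6.0}
print(source_rate([0, 2], queues))  # 1 / (4 + 6) = 0.1
```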

The key technical challenge lies in the MAC-layer problem, expressed by the following network-wide weighted sum rate maximization problem

\[
\max_{P} \; \sum_{l} q_l(t)\, r_l(t, P), \qquad (3)
\]

where q_l(t) is the queue length of link l at time t, P = [P_l] is the vector of power allocations of all links in the network, and r_l(t, P) is the achievable rate of link l at the power allocation P given the network-wide channel state at time t. This is indeed a core problem that arises in any wireless networking problem; for instance, the ICI management problem in a cellular network, modelled as a stochastic NUM problem, is nothing but finding P repeatedly at each time t from (3). Note, however, that the problem not only requires centralized computation using global information but is also computationally very expensive. As an illustrative example, consider an important special case of the problem in which each P_l can take either 0 or its maximum value and, furthermore, the choice of P is restricted so as not to activate any two interfering links simultaneously, for conflict-free transmission. Then, the original problem reduces to the so-called max-weight scheduling problem [25,26], which is a central research theme of the multi-hop wireless networking research community. The max-weight scheduling problem is NP-hard since it involves a weighted maximum independent set problem of NP-hard complexity. As another example, consider the ICI management problem in a multi-carrier, multi-cell network [27,28]. The corresponding MAC-layer problem turns out to be a centralized joint optimization problem of user scheduling and power allocation, which is computationally very expensive since the user scheduling involves multiple NP-hard problems, and the power allocation involves nonconvex optimization.
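To make the on/off special case concrete, the brute-force sketch below (ours, purely illustrative) enumerates all conflict-free subsets of links and picks the one maximizing the queue-weighted sum, i.e., max-weight scheduling; its exponential running time mirrors the NP-hardness noted above. The conflict graph and queue lengths are invented.

```python
# Illustrative (exponential-time) sketch of the max-weight scheduling special
# case of Eq. (3): activate the set of non-interfering links that maximizes
# the sum of queue lengths q_l(t). Conflict graph and weights are made up.
from itertools import combinations

def max_weight_schedule(weights, conflicts):
    """weights: {link: q_l(t)}; conflicts: set of frozensets of interfering link pairs."""
    links = list(weights)
    best, best_w = set(), 0.0
    for k in range(len(links) + 1):
        for subset in combinations(links, k):
            if any(frozenset(pair) in conflicts for pair in combinations(subset, 2)):
                continue  # subset activates two interfering links: infeasible
            w = sum(weights[l] for l in subset)
            if w > best_w:
                best, best_w = set(subset), w
    return best, best_w

# Example: 4 links; pairs (0,1) and (1,2) interfere; links 0, 2 and 3 are mutually independent.
q = {0: 5.0, 1: 7.0, 2: 4.0, 3: 2.0}
conflicts = {frozenset((0, 1)), frozenset((1, 2))}
print(max_weight_schedule(q, conflicts))  # ({0, 2, 3}, 11.0)
```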

There have been several works aiming to find low-complexity, distributed algorithms for the max-weight scheduling problem and the dynamic ICI management problem. In [26], a randomized algorithm, called the pick-and-compare algorithm, has been proposed. The algorithm asymptotically solves the max-weight scheduling problem with linear complexity, but the reduction in complexity comes at the cost of slow convergence and increased delay. In [29,30], distributed maximal/greedy scheduling algorithms have been studied, but they yield approximate schedules that lose throughput optimality. Recently, it has been shown in [31–33] that CSMA algorithms can asymptotically solve the max-weight scheduling problem if the product of back-off time and packet transmission time is adjusted as an exponential function of the weight q_l(t), and the first prototype implementation on real 802.11 hardware has been reported in [34]. Nevertheless, finding and prototyping low-complexity, distributed max-weight scheduling algorithms is still an open problem with many issues to be resolved, one of which is delay. Max-weight scheduling intrinsically suffers from the large delay incurred by queue build-up, and thus how to reduce delay while minimizing the loss in throughput optimality is one of the top-priority research issues. On the other hand, research on low-complexity, distributed algorithms for the ICI management problem has received relatively less attention from the networking community and there are only a few notable works [27,28] available in the literature. In [27], the concept of a reference user has been introduced to decentralize the network-wide optimization and to lessen the involved computational complexity, but the algorithm cannot guarantee throughput optimality. Proof of concept through prototype implementation, for instance prototyping and evaluation on real 802.11 hardware, is also an important research direction in this area. The key question to be answered there is how much capacity gain one can actually achieve by adding low-complexity, fully distributed ICI management functionality in a massively and arbitrarily deployed WiFi access point environment.

In summary, the MAC-layer problem in (3) is a core problem that inevitably arises and needs to be solved in any wireless network whose objective is to maximize the network-wide sum utility. ICI management in small-cell networks is an important special case of the general problem. The development and experimental validation of low-cost, fully distributed algorithms for the problem is a very challenging research issue and the key step towards realizing self-x small-cell networks, which are believed to be the most effective way to scale wireless capacity continuously without a known limit. The theory suggests that source congestion control should take the form of (2), but in reality TCP performs source congestion control. Other interesting questions are how TCP interacts with the MAC-layer problem and what modifications to TCP are necessary.

Network greening is a rapidly emerging research area, which is further discussed in Section 5. From a radio resource control point of view, network greening is in a loose sense a dual problem of the stochastic NUM. Maximizing network capacity for a given power budget is reciprocal to minimizing power consumption for a given capacity requirement. Thus, study of small-cell networks from a network greening point of view would be another important research direction.

2.2.3. Cooperative diversity

One of the distinguishing features of a wireless communication system is the stochastic nature of the wireless channel. It is mainly due to the basic property of multi-path propagation of electromagnetic waves, causing a transmitted wave to interfere with copies of itself, arriving over different paths of different lengths at the receiver. As soon as there is movement – of the sender, the receiver, or even just of nearby objects – this interference situation will change, possibly rapidly and in unforeseeable ways. This is called fast fading and is one of the defining characteristics of wireless transmission [44]. The main means to combat fast fading is to use different communication resources, formalized as a channel (not to be confused with a particular frequency band of the wireless spectrum). Channels can be orthogonal to each other in that they do not influence each other; if the transmission qualities of such orthogonal channels are stochastically independent, distributing transmissions over several such channels reduces the chances of communication outage. If done properly, this reduction is exponential in the number of used channels, providing a considerable gain through the use of these diverse channels – hence the common term diversity gain.
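To make the exponential claim concrete (a standard back-of-envelope argument, not taken from the paper): if each of N independent channels is in outage with probability p, the transmission fails only when all of them are, so

\[
P_{\mathrm{out}} = p^{N}, \qquad \text{e.g.}\ p = 0.1,\ N = 3 \;\Rightarrow\; P_{\mathrm{out}} = 10^{-3},
\]

i.e., the outage probability decays exponentially in the number of independent channels used.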

Typical examples of orthogonal channels are different time slots, different frequency bands spaced sufficiently far apart, or different codes in a code-multiplexing scheme. Also, different propagation paths can be used as orthogonal channels, as demonstrated by multi-antenna, multi-input/multi-output (MIMO) schemes. In the absence of multiple antennas, multiple users with single antennas can cooperate to create a similar situation; this is usually called cooperative diversity – introduced in [45], while a survey can be found, e.g., in Ref. [46]. Despite being superficially similar to multi-hop networks, it is substantially different in the way communication channels are used and in the way a receiver processes received signals. Alternatively, channels can be allowed to interact (e.g., overlapping transmissions from different base stations at the same time) and still be useful to reduce fading; these schemes, however, typically require careful control (compare the various forms of coordinated multipoint transmission in LTE-Advanced; for a brief survey, see reference [47] and references therein). Nonetheless, even in such non-orthogonal channels, similar options as in orthogonal channels exist.

Cooperative diversity, be it over orthogonal or non-orthogonal channels, has received considerable attention in the last few years, but mostly from wireless-oriented researchers and mostly for cellular systems. Its integration with protocol stacks is ongoing, but there is still a lot of work to do here. The following paragraphs highlight some areas that are still in need of research and new ideas.

Energy-efficient MAC protocols for cooperation diversity. Cooperative diversity requires, in its simplest form, the cooperation of three nodes, commonly referred to as a source, an assisting relay, and a destination. To actually exploit diversity effects (and not just build an inferior multi-hop system), all three nodes must be awake and listening to the channel at the same time. This might be a non-issue for cellular systems, but to use cooperative diversity in ad hoc, mesh, or sensor networks, integration with sleeping cycles is necessary. It is pointless to create a cooperative diversity system if it cannot be assured by the MAC protocol that all relevant nodes are awake at the right point in time. And this is indeed a tougher problem than it sounds, as waking up a node (be it sender- or receiver-initiated) requires some form of communication, but diversity systems are intended for the situation when there is no reliable link in the first place. Hence, to communicate, we need to wake up those nodes with which we cannot really communicate – a challenging catch for which only some initial work exists so far [48].

Extend cooperation diversity to other communication primitives. Most cooperative diversity protocols suffer substantially from multiplexing loss, i.e., the need to use orthogonal channels. But in some settings, this is not really a loss but has to happen anyway – the prime example is a broadcast into a wireless network. Here, the same data has to be transmitted by many nodes anyway. Hence, these repeated transmissions could also be exploited to realize diversity gains. Only a few first results are known so far, showing that in general this problem (and variations of it) is NP-complete [49].

Integrate cooperation diversity with multi-user diversity techniques. Is it possible to combine cooperative diversity techniques with existing multi-user diversity techniques? A typical multi-user diversity technique is OFDMA: in a cellular setting, the subchannels of an OFDM system are allocated to different users, e.g., to maximize system capacity by assigning a subcarrier to the user with the highest channel gain; this scheme essentially exploits frequency diversity across multiple users. One appealing approach would be to bring cooperation diversity into this scheme by also choosing subcarriers based on how data can be cooperatively forwarded. Some first results exist [50,51], but practically, a number of questions are still open. For example, how should this be controlled, and what are the maximum possible capacity gains?
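As a baseline illustration of the multi-user diversity scheme just described (a toy sketch of ours; the user names and gains are invented, and real schedulers also handle fairness, power budgets and, as suggested above, cooperative forwarding options):

```python
# Toy sketch of OFDMA multi-user diversity: each subcarrier goes to the user
# with the highest channel gain on it. Gains are illustrative; a scheme that
# also exploits cooperative diversity would additionally weigh relay options.

def assign_subcarriers(gains):
    """gains[user][subcarrier] -> channel gain; returns {subcarrier: best user}."""
    users = list(gains)
    num_sc = len(next(iter(gains.values())))
    return {sc: max(users, key=lambda u: gains[u][sc]) for sc in range(num_sc)}

gains = {
    "alice": [0.9, 0.2, 0.5, 0.7],
    "bob":   [0.4, 0.8, 0.6, 0.1],
}
print(assign_subcarriers(gains))  # {0: 'alice', 1: 'bob', 2: 'bob', 3: 'alice'}
```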

Dealing with limited, outdated channel state information. How should the relay selection process really work? A lot of analytical work is available, but practical schemes that consider the actual signalling overhead and the limited validity of channel state information are still rare [52]. What is the tradeoff between source-based and relay-based or destination-based selection, between using channel state information explicitly and relying on opportunistic schemes, and what are the resulting robustness properties?

Cross-layer aspects. Is it possible to integrate cooperative diversity techniques even with application-layer techniques? As the source/channel separation theorem fails, the question of looking at the source and channel jointly comes up [53]. Various techniques could be considered – for example, network coding has been shown to provide benefits in content distribution applications. When applying this to a mesh network, what is the relationship to the diversity gains available via the wireless channel?

Cooperative diversity in non-wireless settings. Last in this list, a wild speculation. Cooperative diversity is currently perceived as a wireless-only technique, resting on the random nature of wireless channels. But even fixed-network channels (rather thought of as routing paths) randomly fluctuate, and fixed networks also possess path diversity. Is it possible and profitable to apply these techniques to wired networks? At which timescales should this happen?

Overall, cooperation diversity is a very powerful tool, but one that must be used wisely and with proper consideration of the scenario at hand, the communication primitive, the user data, and the acceptable trade-offs. There still seems to be a considerable amount of work to do before cooperation diversity schemes will be used as a matter of course in all kinds of wireless systems.

3. Internet architecture and protocols

The evolution of the Internet is of utmost importance to our economy and our society, as it has been playing a central and crucial role as the main enabler of our digital era. However, the Internet is also a victim of its own success: it must remain stable and robust and has therefore developed a natural resistance to revolutions. This is one reason why most innovation currently comes from the edge, with the explosion of wireless technologies and overlay architectures. However, under the push of novel services the Internet structure is also changing. Specifically, as discussed in Section 3.1, the emergence of content distribution networks is driving towards a content-centric Internet. In Section 3.1 we discuss how this evolution is changing the way the Internet is structured and the associated research challenges. A long-term view of the (re)evolution of the Internet architecture is then given in Section 3.2, where we discuss two key aspects of the Future Internet architecture: virtualization and federation.

3.1. A Content-centric Internet

Today's Internet [54] differs significantly from the one that is described in popular textbooks [55–57]. The early commercial Internet had a strongly hierarchical structure, with large transit Internet Service Providers (ISPs) providing global connectivity to a multitude of national and regional ISPs [58]. Most of the content was delivered by client–server applications that were largely centralized. With the recent advent of large-scale content distribution networks (CDNs), e.g., Akamai, Youtube, Yahoo, Limelight, and One Click Hosters (OCHs), e.g., Rapidshare, MegaUpload, the way the Internet is structured and traffic is delivered has fundamentally changed [54].

Today, a few ''hyper-giants'', i.e., CDNs and OCHs, often have direct peerings with large ISPs or are even co-located within ISPs, and rely on massively distributed architectures based on data centers to deliver their content to the users. Therefore, the Internet structure is not as strongly hierarchical as it used to be [54].

These fundamental changes in content delivery and Internet structure have deep implications for what the Internet will look like in the future. Hereafter, we describe how we believe three different aspects of the Internet may lead to significant changes in the way we need to think about the forces that shape the flow of traffic in the Internet. Specifically, we first describe how central DNS has become as the battlefield between content providers and ISPs. Next, we discuss how split architectures may change the ability of many stakeholders to influence the path that the traffic belonging to specific flows follows across the network infrastructure. Finally, we discuss how the distributed nature of current content delivery networks will, together with changes within forwarding/routing, enable much more complex handling of traffic, at a much finer granularity than in the current Internet.

DNS and content redirection. The Domain Name System (DNS) was originally intended to provide a naming service, i.e., one-to-one mappings between a domain name and an IP address. Since then, DNS has evolved into a highly scalable system that fulfils the very stringent needs of applications in terms of its responsiveness [59–61]. Note that the scalability of the DNS system stems from the heavy use of caching by DNS resolvers [62].

Today, the DNS system is a commodity infrastructure that allows applications to map individual users to specific content. This behaviour diverges from the original purpose of deploying DNS [63]. Given the importance of DNS for the end-user experience and how much the DNS system has changed over the last decade, understanding how DNS is being deployed and used both by ISPs and CDNs is critical to understanding the global flow of traffic in today's Internet.

For example, recent measurements of DNS resolvers' performance [64] have shown that the DNS deployment of commercial ISPs sometimes leads to poor DNS latency.

Different third-party resolvers, e.g., GoogleDNS or OpenDNS, do not perform particularly better in terms of responsiveness compared to ISP resolvers. A key aspect of DNS resolvers is not only latency, but also how well they represent the end-host for which they do the DNS resolution. Third-party DNS resolvers do not manage to redirect users towards content available within the ISP, contrary to local DNS resolvers.

While more work is necessary to pinpoint the exact reasons for this behaviour, we strongly expect that the explanation has to do with the fact that third-party DNS resolvers are typically located outside ISPs and cannot indicate the IP of the original end-host that originates the DNS query [65]. The current advantage of DNS resolvers inside the host's ISP is their ability to represent the end-user in terms of geographic location and vicinity to content.
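As a small illustration of why resolver placement matters (a hypothetical probe of ours, not an experiment from the paper; it assumes the third-party dnspython package, and the name cdn.example.com and the ISP resolver IP are placeholders): querying the same CDN-hosted name through the ISP's resolver and through a third-party resolver can return different server IPs, because the CDN localizes on the resolver's location rather than the end-host's.

```python
# Hypothetical probe: resolve the same CDN-hosted name via two resolvers and
# compare the returned A records. Requires dnspython (pip install dnspython);
# 'cdn.example.com' and the ISP resolver address are placeholders.
import dns.resolver

def lookup(name, nameserver):
    r = dns.resolver.Resolver()
    r.nameservers = [nameserver]          # send queries to this resolver only
    return sorted(str(rr) for rr in r.resolve(name, "A"))

name = "cdn.example.com"
print("ISP resolver:        ", lookup(name, "192.0.2.53"))   # placeholder ISP resolver
print("Third-party resolver:", lookup(name, "8.8.8.8"))      # e.g. GoogleDNS
# Differing answers suggest the CDN is localizing on the resolver, not the client.
```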

Opening the network infrastructure. Content is not the only place where an Internet (r)evolution is taking place. Thanks to a maturing market that is now close to ''carrier grade'' [66–70], the deployment of open-source-based routers has increased significantly during the last few years. While these devices do not compete with the commercial high-end switches and routers available with respect to reliability, availability and density, they are fit to address specialized tasks within enterprise and ISP networks. Even PC-based routers with open source routing software are evolving fast enough to foresee their use outside research and academic environments [71–73].

The success of open-source routing software is being paralleled by increasing virtualization, not only on the server side, but also inside network devices. Server virtualization is now followed by network virtualization, which is made possible thanks to software-defined networking, e.g., OpenFlow [74], which exposes the data path logic to the outside world. The model of network devices controlled by proprietary software tied to specific hardware will slowly but surely be made obsolete. Innovation within the network infrastructure will then be possible. A decade ago, IP packets strictly followed the paths decided by routing protocols. Tomorrow, together with the paths chosen by traditional routing protocols, a wide range of possibilities will arise to customize not only the path followed by specific traffic, but also the processing that this traffic undergoes. Indeed, specific actions that are statically performed today by specialized middleboxes placed inside the network, e.g., NAT, encryption, DPI, will be implemented on-path if processing capabilities happen to exist; otherwise the traffic will be dynamically redirected to close-by computational resources. This opens a wide range of applications that could be implemented almost anywhere inside the network infrastructure.
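The match-action idea behind this programmability can be sketched as follows (a deliberately simplified abstraction of ours, not the OpenFlow API): a controller installs rules that match on header fields and either forward traffic along a chosen path or redirect it to a nearby processing resource, as described above.

```python
# Schematic match-action flow table in the spirit of software-defined
# networking; this is NOT the OpenFlow API, just an abstraction of the idea
# that forwarding and on-path processing become programmable rules.

def matches(rule, pkt):
    return all(pkt.get(k) == v for k, v in rule["match"].items())

def handle(flow_table, pkt):
    for rule in flow_table:                     # rules checked in priority order
        if matches(rule, pkt):
            return rule["action"]
    return ("forward", "default-route")

flow_table = [
    {"match": {"dst_port": 80, "proto": "tcp"}, "action": ("redirect", "dpi-box-1")},
    {"match": {"proto": "udp"},                 "action": ("forward", "path-A")},
]
print(handle(flow_table, {"proto": "tcp", "dst_port": 80}))  # ('redirect', 'dpi-box-1')
print(handle(flow_table, {"proto": "udp", "dst_port": 53}))  # ('forward', 'path-A')
```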

Towards a new business model for the Internet. As content moves closer to the end-user for improved quality of experience, and the infrastructure opens up to unprecedented control and flexibility, the old business model of hierarchical providers and customer–provider relationships is hardly viable. Nowadays, content delivery is a very profitable business while, on the other side, infrastructure providers struggle to provide the necessary network bandwidth for hungry multimedia applications at reasonable costs. The consequence of ever more limited ISP profit margins is a battle between content providers and the network infrastructure to gain control of the traffic.

This battle stems from fundamental differences between the business models of content delivery networks and ISPs. Today, the operators of content delivery networks, for example through DNS tweaking, decide about the flow of traffic by properly selecting the server from which a given user fetches some content [61,75,76]. This makes content delivery extremely dynamic and adaptive. On the ISP side, most traffic engineering relies on changing the routing configuration [77–79]. Tweaking existing routing protocols is not only dangerous, due to the risk of mis-configurations [80], routing instabilities [81] and convergence problems [82,83], but is simply not adequate for choosing paths at the granularity of content. ISPs therefore need new mechanisms to regain control of the traffic.

This can be achieved, for example, by exploiting the diversity in content location to ensure that their network engineering is not made obsolete by content provider decisions [84]. Another possibility is to leverage the flexibility of network virtualization, making their infrastructure much more adaptive than today's static provisioning [85].

The deep changes we discussed in this section create unprecedented opportunities for researchers to propose and evaluate new solutions that will address not only relevant operational challenges, but also potentially business-critical ones. The ossification of the Internet protocols does not mean that the Internet is not evolving. The Internet has changed enormously over the last decade, and will continue to do so. What we observe today is simply a convergence of content and infrastructure that questions a model of the Internet that is no longer appropriate. Content is not just king in the Internet, it is the emperor that will rule all its subjects.

We believe that the three research areas above need critical input from the community in order to enable a truly content-centric Internet. First, even after more than two decades of deployment and evolution, the DNS is still poorly understood. The DNS is much more than a naming system: today it is a central point in the content distribution arena. The way DNS resolvers are used and deployed is a rather open field, which might lead to significant improvements in flexibility and performance for content and application providers, ISPs, as well as end-users. Second, software-defined networking opens a wide range of possibilities that would transform the current dumb pipes of the Internet core into a flexible and versatile infrastructure. For the first time, researchers are able to inject intelligence inside the network. Finally, as content moves closer to the end-user, the very structure of the Internet is reshaped. This leads to fundamental questions about the possible directions in which the Internet might be going, not only at a technical level, but also from a business perspective.

3.2. Federation and virtualization in the Future Internet

In the Future Internet, we foresee various concurrent networks being deployed and customized to provide their specific service. An increased diversity and functionality of the networks and their components require a revision of the Internet architecture to support their interoperability and continuous deployment. We can see some potential developments such as:

– The emergence of virtual worlds, sensing environments, interactions between the physical and virtual worlds.

– Networked systems, embedded systems, vehicular communications.

– Developments in digital life with all related applications and usage to assist the well-being of citizens.

The above examples illustrate some of the different shapes and requirements that the network could take in the future. Each of these evolutions addresses a given environment where the objects and constraints are quite particular. The objective of designing an architecture around an Hourglass model will come at a cost to accommodate the diversity of its numerous components. At which level should the interconnectivity be provided? What is the definition of a managed network? How can we support interconnectivity? Should we embed economics in the protocol design from scratch? What are the incentives to share and how can they be evaluated? Is there a reasonable transition methodology and scenario?

We claim that the Future Internet will therefore be polymorphic, allowing several networking environments, each with their own features and strengths, to be deployed and coexist on a permanent basis. We expect virtualization and federation to be the pillars of such an architecture.

In the end, the network at large should be seen as a global shared resource, which is virtualized and made accessible at scale. Virtualization is therefore a strategic component to accommodate different instances of the network within a single framework. In other words, virtualization is an enabler of diversity. In the extreme case, virtualization could in this way contribute to the development of parallel global networks run by certain big players. An opposite force is required to ensure global connectivity and competition: the glue to manage and secure the Polymorphic Internet will be provided by the Federation principle. Federation could be horizontal or vertical, involving different levels of cooperation between independent organizations. ISP interconnection is a typical example of present-day federation, implemented through a set of bilateral agreements.

A subsequent question relates to the ability to interoperate virtualized infrastructures supporting heterogeneous protocols and services. The goal is to achieve increased coverage at different layers and/or to enable resource sharing between independent domains. Federation is more than interconnection: it covers APIs, policies, governance, trust and economics. Of course, interoperability should be achieved at different levels, such as naming, service discovery or resource management.

Federation will govern the interoperability of independent networks, each managed by a given authority. Similar to the domain concept in the current Internet, a comparable environment will be defined, as it is unlikely that a single entity will deploy alone the concurrent networking environments mentioned above. A domain is considered as an independent set of resources providing services managed by a trusted administrative authority. Therefore, a domain has a value by itself but will also often benefit from being associated with other domains in order to achieve scale or heterogeneity. Users of a domain have access to its resources but would, in certain circumstances, benefit from accessing services offered by other domains. The governance of the global shared resources provided by the federation of domains is therefore distributed. It requires local policies to control local resource access, but also external policies to grant access to external users. Different federation architectures can be considered, ranging from bilateral agreements to more scalable peering models. Key issues are to enforce a federation model that supports incentive mechanisms for sharing and rewarding policies that favour access to resources and services. Cloud federation can be seen as an example of such an evolution. A consequence is that we might want to extend the waist of the reference model, introducing the concept of federation instead of a homogeneous abstraction. From an economic point of view, one needs to understand under which assumptions federation is beneficial for the involved parties and the network as a whole, and what types of federation agreements could help the system reach the desirable equilibrium. Federation economics will have to address the heterogeneity and polymorphism of the network domains involved and their complex interactions. How to compare and value multi-dimensional resources, and to what extent the future Internet economy should be regulated or designed as a free market in order to achieve a globally efficient allocation and provision of resources, are only some of the questions that need to be answered.
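To make the local/external policy distinction above a little more tangible, here is a deliberately simplistic Python sketch of federated domains; the domain names, the bilateral-agreement representation and the accept/deny logic are hypothetical illustrations, not a proposed federation architecture.

    # Toy model of federated domains: each domain enforces a local policy for
    # its own users and an external policy, derived from bilateral agreements,
    # for users belonging to peer domains.
    class Domain:
        def __init__(self, name, local_users, agreements):
            self.name = name
            self.local_users = set(local_users)   # users managed by this authority
            self.agreements = set(agreements)     # peer domains granted external access

        def grants_access(self, user, home_domain):
            if home_domain is self:
                return user in self.local_users            # local policy
            return home_domain.name in self.agreements     # external (federation) policy

    dom_a = Domain("domain-a", {"alice"}, {"domain-b"})
    dom_b = Domain("domain-b", {"bob"}, {"domain-a"})
    print(dom_a.grants_access("alice", dom_a))   # True: local user
    print(dom_a.grants_access("bob", dom_b))     # True: covered by the bilateral agreement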

4. Ubiquitous computing services

The Mobile Phone (MP for short) is the key device for accessing Ubiquitous Computing Services (UCS), i.e., a variety of emerging ubiquitous multimedia and data services, including mobile cloud computing services. UCS access requires the integrated use of wireless, mobile and Internet networks for stable and secure transmissions. To realize MP access to UCS, the first step is to look at how the different available technologies will integrate and work with each other. Special attention must be devoted to using off-the-shelf mobile phones to access UCS through today's integrated and heterogeneous wired and wireless networks. Specifically, from this analysis it emerges that several fundamental challenges need to be addressed:

System challenges: MPs usually have a small display, limited memory/storage, limited processing power, etc. On the other hand, applications usually require high processing power, large memory space, and a large screen display. In addition, among the system challenges, the limited battery lifetime (also referred to as the energy challenge) represents one of the most critical constraints.

Communication challenges: A stable connection and high-quality communications are required for accessing UCS. However, stable/secure connections with sufficient bandwidth cannot always be guaranteed to MPs, especially to high-speed moving MPs. Indeed, MPs often have to cope with shortage of bandwidth, frequent disconnections, fluctuating wireless channels, etc.

Security challenges: MPs are vulnerable to various security threats which can affect the communication links, the data access, the storage, etc.

In the following we will focus on System and Communication challenges, while Security challenges are discussed in the next section.

System challenges. The current trend is to use MPs to access richer ubiquitous services than simple phone calls. In addition, mobile cloud computing promises to bring many new and rich applications to MPs. However, MP hardware and software constraints severely limit this evolution. Indeed, today there are about 3.5 billion MPs worldwide that are low-end phones which, to use mobile computing services, need to overcome their system gaps, such as inadequate computational capability, lack of storage, unstable and slow communication links, and above all the energy constraints (the extra computation and networking activities required for accessing advanced UCS consume battery power at a speed exceeding what MPs are designed for). Therefore, bringing high-end cloud computing services to low-end MPs is not an easy task. A key challenge is code portability across millions of off-the-shelf low-end MPs. Currently we have a broad range of hardware platforms and operating systems, making the interchange of data and applications between devices difficult. Moreover, most MPs can only support the few built-in applications shipped with the MP itself. This prevents developers and users from adding more advanced applications to the MPs. Therefore, making low-end MPs programmable and enabling code portability is a crucial incentive for attracting a broader set of developers to provide state-of-the-art applications for those platforms. Virtual machines constitute a good solution for code mobility, providing a virtualized processor architecture that is implemented over MP architectures, which allows installing and running extra applications on closed systems. However, virtual machines do not solve the resource-constraint problems because they contend for the MP's limited computational resources. Therefore the question is ''how to exploit virtual machines without further reducing the limited MP resources''. Forwarding part of the computation to resource-rich and cost-effective clouds could be a good way to reduce the computation burden of low-end MPs. Some research works already exist on offloading computation tasks to the cloud [113,114], but they are mainly applicable to smartphones. On the other hand, overcoming the system and energy gaps of low-end devices requires novel design approaches. In particular, the energy gap is a critical one, as offloading the computations to the cloud requires additional energy for the increased computing and networking activities. It is widely known that energy is limited for a large majority of mobile devices. For a typical low-end MP, assuming reasonable voice talk and very little web-browsing time, a fully charged battery is able to satisfy the MP energy requirements for several days. However, adding extra functionalities to the MPs will probably increase the users' connection time, resulting in shorter battery lifetime. As a consequence, people will need to recharge their MPs more often than usual. Shortening battery life is very undesirable and annoying to users, particularly when there is no recharge station nearby. Therefore, energy-aware features should be carefully addressed.
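As a back-of-the-envelope illustration of the offloading trade-off just discussed, the sketch below compares the battery energy an MP would spend computing a task locally with the energy spent shipping the task's input data to the cloud over the radio. All figures (CPU speed, power levels, data sizes, link rate) are hypothetical placeholders, not measurements.

    # Offload a task only if radio transmission costs less battery energy
    # than computing the task locally.  All parameter values are illustrative.
    def local_energy(cycles, cpu_power_w=0.5, cpu_speed_hz=200e6):
        return cpu_power_w * cycles / cpu_speed_hz              # joules

    def offload_energy(data_bits, radio_power_w=1.0, link_rate_bps=1e6):
        return radio_power_w * data_bits / link_rate_bps        # joules

    def should_offload(cycles, data_bits):
        return offload_energy(data_bits) < local_energy(cycles)

    print(should_offload(cycles=2e9, data_bits=1e6))   # True: compute-heavy, small input
    print(should_offload(cycles=1e8, data_bits=5e7))   # False: light task, bulky input

Even this naive rule shows why the energy gap matters: when the radio is the dominant consumer, offloading can easily cost more battery than it saves.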

Some may argue that low-end phones' capabilities will soon grow to match those of current high-end MPs (Moore's Law), and hence solutions focusing on low-end devices are unnecessary. However, we argue that, although hardware is growing according to Moore's Law, software requirements are also growing. Moreover, from an economic point of view, in some world regions (e.g., developing or third-world countries) people only look for affordable MPs, i.e., the cheaper the better. Finally, as for energy saving, we believe this will continue to be a major research challenge for battery-driven mobile devices.

Communication challenges. Accessing richer multimedia UCS in a cost-efficient way calls for novel networking approaches enabling effective usage of the available heterogeneous communication technologies. To clarify this aspect, let us use the following example. Some MP users on a bus are launching bandwidth-hungry applications such as video streaming and surveillance applications. To implement such applications they can use the cellular network which, however, can be very expensive, charged by airtime or traffic volume. On the other hand, there may be WiFi WLAN networks nearby with high available bandwidth (e.g., 54 Mbps) and unlimited free access. However, even with the WLAN bandwidth available from a nearby public access point, the MPs might be unable to access UCS because: (1) the MPs are unable to access the heterogeneous networks; or (2) they are unable to access the right services because the MPs do not have the necessary access support; or, even when the applications/services can be accessed, (3) the services may be disconnected frequently, resulting in an unacceptable quality of service.

The above challenges can be addressed by aggregating/bundling the available heterogeneous wireless links, i.e., by converging into a unique ubiquitous network the modern 3G networks (e.g., W-CDMA, TD-SCDMA, CDMA2000, HSPA), the WLANs and the Internet. To achieve heterogeneous wireless link bandwidth aggregation, three grand challenges must be addressed: (1) the link-interface heterogeneity – end-users need to access different types of mobile links; (2) the link-communication interruptions due to end-user mobility, unstable radios, and limited coverage; and (3) the link-access vulnerability – mobile links are highly vulnerable to attacks. To summarize, we need to investigate novel algorithms and protocols for providing MPs with secure, stable and cost-effective bandwidth. For example, data channels can be aggregated on demand or adaptively. How the aggregated links should be managed for downloading and uploading transmissions is a very intriguing challenge.
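A minimal sketch of the aggregation idea follows: traffic is striped over the available interfaces in proportion to the bandwidth each link is currently estimated to offer. The interface names, the rate estimates and the proportional-split rule are assumptions made only for illustration.

    # Stripe application traffic over heterogeneous links (e.g., a 3G interface
    # and a free WLAN) in proportion to their estimated available bandwidth.
    def split_traffic(demand_mbps, link_estimates_mbps):
        """Return how many Mbit/s to push on each link (proportional split,
        capped at each link's own estimate)."""
        total = sum(link_estimates_mbps.values())
        return {link: min(est, demand_mbps * est / total)
                for link, est in link_estimates_mbps.items()}

    links = {"wlan0": 20.0, "3g0": 2.0}      # hypothetical rate estimates (Mbit/s)
    print(split_traffic(8.0, links))
    # -> most of the 8 Mbit/s stream rides the free WLAN; the 3G link carries the rest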

In the mobile access to UCS, special challenges occur in accessing the Internet for passengers of large/long vehicles, such as long-distance trains [115], or fleets and cruise ships relying on maritime communications [116]. In these cases the users often suffer from annoying service deterioration due to the fickle wireless environment. For example, consider a chained wireless access gateway on a train, which consists of a group of interlinked routers with wireless connectivity to the Internet; the protocol handling the mobile chain system should exploit the spatial diversity of the wireless signals to improve Internet access, as the routers do not measure the same level of radio signal. Specifically, a high-speed train can be viewed as a virtual ''gateway'' several hundred meters long that spans the train and seeks the best signal quality. An intelligent protocol will re-route the traffic toward the routers experiencing, at that point, the best-quality signal. This kind of research has to tackle two fundamental issues: (1) reducing the average temporary communication blackout (i.e., no Internet connection), and (2) enhancing the aggregate throughput of the system.
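The re-routing behaviour described above can be summarised by a simple selection rule. In the sketch below, the router identifiers, the signal metric and the hysteresis margin are hypothetical; the point is only to show the "follow the best signal, but avoid flapping" logic.

    # Pick the on-board router that currently sees the strongest trackside
    # signal, switching away from the current gateway only when the gain
    # exceeds a hysteresis margin (to limit route oscillation).
    def pick_gateway(signal_dbm, current, margin_db=3.0):
        best = max(signal_dbm, key=signal_dbm.get)
        if current is not None and signal_dbm[best] < signal_dbm[current] + margin_db:
            return current                 # gain too small: keep the current route
        return best                        # re-route toward the best-signal router

    measurements = {"head": -95.0, "middle": -82.0, "tail": -88.0}   # dBm, assumed
    print(pick_gateway(measurements, current="tail"))   # -> 'middle'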

In the analysis of the communication challenges, performance evaluation studies of wireless network QoS (e.g., 3G+ and LTE standards) constitute another fundamental research area. The aim is to study the performance of heterogeneous mobile/wireless networks and to investigate the effectiveness of the novel communication strategies built on top of them, with special attention to the impact of user mobility on the performance of wireless networks. Such a study will therefore require extensive investigations of all possible mobile scenarios in urban areas, including subways, trains, offshore ferries and city buses [117].

Before concluding this section, it is worth noting that MPs' cooperation can be exploited for tackling the system and communication challenges. Shen et al. [118] envision a new better-together mobile application paradigm where multiple mobile devices are placed in close proximity, and study a specific together-viewing video application in which a higher-resolution video is played back across the screens of two mobile devices placed side by side. Li et al. [119] design a buddy proximity application for mobile phones, in which mobile phones can be useful agents for their owners by detecting and reporting situations that are of interest. SmartSiren [120] is a collaborative virus detection and alert system for smartphones. In order to detect viruses, SmartSiren collects communication activity information from the smartphones and performs joint analysis to detect both single-device and system-wide abnormal behaviours. Multiple-party video conferencing and online gaming also help to stimulate cooperation and collaboration among 3G phones. An online game targeting augmented reality [121] has been developed to allow simultaneous connection and game participation from many different users. Furthermore, cell phones with a GPS component are utilized to help cell phones without GPS to locate themselves [122].

5. Green Internet

For half a century the research field of computer communications has contributed to the design and optimization of computer and telecommunications networks, wireline and wireless, with the aim of meeting quality of service requirements at minimal cost. Although in wireless communications power control and optimization [246] has always been an important consideration, as excessive power by one user may interfere with the reception of another, the cost of energy in wireline networks has not been a key consideration in traditional teletraffic research. This is changing now. There is an increasing recognition of the importance of energy conservation on the Internet, because of the realization that the exponential growth of energy consumption that follows the exponential increase in carried data is not sustainable. One example that signifies this realization is the GreenTouch™ [86] consortium, whose membership includes many major players in industry and academia, in which ''industry leaders and diverse global talents come together to create an energy efficient Internet through an open approach to knowledge sharing''. The consortium is ''dedicated to creating a sustainable Internet through innovation and collaboration – increasing ICT energy efficiency by a factor of 1000'' within five years ''to fundamentally transform global communications and data networks.'' This ambitious goal of three orders of magnitude efficiency improvement is justified by an analysis reported in [86] that indicates a potential of four orders of magnitude reduction. This is consistent with the analysis of Tucker [87], which evaluates the minimum energy requirement to be more than three orders of magnitude below what the Internet consumes today.
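To put the targeted factor-of-1000 improvement in perspective, the trivial calculation below relates traffic growth to the efficiency gain needed for total consumption not to grow; the traffic growth rate used here is a hypothetical placeholder, not a figure from [86] or [87].

    # Total energy = traffic volume x energy per bit.  If traffic grows while
    # energy per bit shrinks, total consumption changes by their ratio.
    traffic_growth  = 1.4 ** 5     # e.g., +40% per year over five years (assumed)
    efficiency_gain = 1000.0       # GreenTouch-style target: 1000x fewer joules per bit

    change_in_total_energy = traffic_growth / efficiency_gain
    print(round(change_in_total_energy, 4))   # ~0.0054: total energy would shrink ~185-fold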
