Network virtualization is a technology that enables combining computer network resources into a single platform appearing as a single network.
Motivations for network virtualization in data centers include:
• Explosion of IP address configuration and management.
• A VM-to-server ratio that is predicted to keep growing.
• VMotion requiring that the destination be on the same IP subnet as the source.
Since there could be no more than 7 flows, the number of VRs was also limited to a maximum of 7. The host is an Intel Xeon E-series machine with four cores; this computer runs Xen Hypervisor version 3. The machines responsible for traffic generation are connected through a hub, ensuring the same conditions among runs. On the right side of the virtual routers, a symmetric configuration is used. Iperf sessions are used to generate traffic flows with a fixed packet size at a bit rate of 1 Mbps.

Figure 4 shows the average packet delay obtained while keeping constant the number of flows per VR; Figure 5 shows the effect of increasing the flows in a single VR. The average delay stays roughly the same regardless of the number of active VRs and flows per VR, with little difference among runs. This is likely due to the low amount of extra load imposed on the system by the extra traffic and extra VRs. Figure 6 shows the packet delay variance while varying the number of VRs and keeping constant the number of flows per VR, while Figure 7 shows the effect of increasing the flows in a single VR; the measured values remain always close to 1.

[Figure 4: Average packet delay for a single flow per active VR and different numbers of VRs. Figure 5: Average packet delay for a single active VR with different numbers of flows. Figure 6: Packet delay variance for a single active VR with different numbers of flows.]
In Figures 8 and 9, it is possible to observe the behavior of the average packet delay, as well as the packet delay variance, when different combinations of VRs and flows per VR are activated. The general trend is that the variance of the packet delay, as well as the confidence intervals, increases with the number of active VRs and flows per VR, with a few exceptions. When we observe the variance of the delays of the packets in Figure 9, we can see very clearly that the larger the number of active VRs and the larger the number of flows per VR, the larger is the variance of the packet delay. This means that, as the number of active VRs and flows per VR grows, the delays become more statistically unstable in 2nd-order measures. For the same cases analyzed in Figures 8 and 9, the average packet delay remains mostly constant; this is expectable due to the small bandwidth used by each flow (1 Mbps), which, even when considering the maximum combination of 7 flows, is still too low to affect it.
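The first- and second-order delay statistics discussed here (mean, variance, and a confidence interval for the mean) can be reproduced with a few lines of code; this is a minimal sketch, and the sample delays below are hypothetical values, not measurements from the testbed.

```python
import statistics

def delay_stats(delays_ms, z=1.96):
    """Mean, sample variance, and ~95% confidence interval for
    the mean of a list of packet delays (in milliseconds)."""
    n = len(delays_ms)
    mean = statistics.fmean(delays_ms)
    var = statistics.variance(delays_ms)   # 2nd-order measure
    half = z * (var / n) ** 0.5            # normal approximation
    return mean, var, (mean - half, mean + half)

# Hypothetical per-packet delays from one run
mean, var, ci = delay_stats([1.02, 0.98, 1.10, 1.05, 0.95, 1.07])
```

A growing variance or a widening interval across configurations is exactly the 2nd-order instability the text describes.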
For the average packet delay in Figure 8, we can see that it remains mostly constant. Figure 10 represents the packet delay variation while varying the number of active routers. [Figure 8: Average packet delay for a maximum total of 7 flows.]

Testbed Configuration. The purpose of these tests is to analyze the behavior of the throughput with different numbers of virtual routers and network flows, and also as a function of the CPU load.
The changes made to the configuration of the testbed are due to the fact that the ports on the hubs used in the previous setup are limited to 10 Mbps. In this configuration, only three machines were used (see figure): Eddie as a transmitter; Susan as a receiver; and Bree, where the VRs were mounted. The three nodes are directly connected through Ethernet cables with a bandwidth of Mbps.

To test the throughput, the software Iperf was used, which allows measuring the transmission rate between Eddie and Susan. In all the tests, 15 runs of 30 seconds of traffic were analyzed. [Figure 9: Packet delay variance for a maximum total of 7 flows.]

Results - Throughput. In order to establish a reference value, the throughput between Eddie and Susan was first measured without the use of VRs. This reference serves as a base comparison for the upcoming results. The throughput registered was constant and with a value of Mbps.
In the first part of these tests, the throughput was measured while varying the number of active VRs; the number of VRs ranges from 2 to 7. The obtained results can be seen in figure 12 where the throughput shown is the combined value of all the flows. Also, during these tests it was possible to observe that the hypervisor manages to make a fair distribution of resources. Up to 4 VRs, the throughput increases with the number of active routers, which is probably due to a more efficient use of the bandwidth.
Beyond this point, increasing the number of VRs decreases the global throughput. These two facts together lead us to believe that the throughput decrease after 4 VRs must be due to the increase of CPU load.
The measurements on the average packet delay and packet delay variance showed that, for different combinations of the number of VRs and flows per router, the average delay is mostly constant while the delay variance increases with both the number of VRs and the number of flows. [Figure 12: Throughput behavior while varying the number of VRs.]
We can also see that the confidence intervals for the variance also increase in the same way, which shows an increasing degree of instability for 2nd order parameters of the delay probabilistic distribution. Apart from the results for 1 VR, the packet delay variation increases almost linearly with the increase of VRs. Also, looking at these results from an absolute value point of view, values in the order of microseconds indicate a good performance.
Several types of data services which operate in real time can tolerate a certain amount of average delay but need this delay to be stable. Therefore, one must make sure that the packet delay variance falls within certain limits. With this analysis, we studied how virtualization impacts this important aspect for real-time data services. [Figure: Throughput behavior while varying CPU load.]

To assess the influence of the CPU load on the VRs' performance, a program called Lookbusy was used. During this test only 1 VR was active. The results show that the amount of CPU load significantly affects the throughput: the throughput decreases as the load grows. Before making these tests, a trial run was made in which no virtual routers were used; during this trial run, the throughput seemed independent from the CPU load.

The obvious conclusion is that the throughput scales well up to 4 VRs, while the delay variance and variation increase as the number of VRs and flows increases. Even though using newer and better hardware should greatly improve the results obtained, given the age of the hardware used, the trend should remain that the more flows and VRs are active, the lower the performance. It would be important to study how the use of other virtualization technologies and hardware impacts the performance of network virtualization using the NVSS.
The tests using TCP traffic allowed the retrieval of the throughput behavior while varying the number of virtual routers and the CPU load. The tests with UDP traffic permitted an assessment of the behavior of the packet delay in terms of average value and variance, as well as its variation, for different numbers of active routers and flows per router. On the other hand, it would also be relevant to have a direct comparison between the performances of virtualized and non-virtualized networks for the same kind of services. Experimenting with a larger number of VRs and network flows would also allow a more thorough evaluation.
This work was supported in part by the Natural Science and Engineering Research Council of Canada. The authors, including M. F. Bari, M. Podlesny, Md. Golam Rabbani, Q. Zhang, R. Esteves, and R. Boutaba, are with the David R. Cheriton School of Computer Science.

[Fig. 1: Conventional data center network topology. Fig. 2: Clos topology.]
Table I provides a list of abbreviations used throughout the paper.

Network virtualization allows multiple virtual networks (VNs) to be implemented and managed independently, each with its own management policies. Also, since VNs are logically separated from one another, implementing performance isolation and application QoS is facilitated. Furthermore, the isolation offered in network virtualization expedites network innovation. So far, most of the existing work on network virtualization has been focused on virtualizing ISP networks; thus, virtualizing data center networks is a relatively new research direction, and a key step towards fully virtualized data center architectures. It raises research challenges including virtualization techniques, addressing schemes, performance isolation, scalability, failure tolerance, monitoring, interfacing, security, pricing, and resource management. First, we survey the proposed architectures; second, we compare these architectures and highlight their design trade-offs; finally, we point out the key future research directions for data center network virtualization. Among other things, we compare the architectures from different perspectives in Section IV.

Data Center. A data center (DC) is a facility consisting of servers (physical machines), storage and network devices (e.g., switches, routers, and cables), and supporting systems. In what follows, we present the conventional topology used in data centers and some other topologies that have been recently proposed.

Figure 1 shows a conventional data center network topology. In this topology, the Top-of-Rack (ToR) switch in the access layer provides connectivity to the servers mounted on a rack. Every ToR switch is connected to multiple aggregation switches for redundancy. The core layer provides secure connectivity between aggregation switches and core routers (CR) connected to the Internet.

A Clos topology consists of multiple stages of switches. Each switch in a stage is connected to all switches in the next stage, which provides extensive path diversity. Figure 2 shows an example of a three-stage Clos topology. A particular case of the Clos topology is the fat-tree built from k-port commodity switches, in which the i-th port of any core switch is connected to pod i.
The above topologies have properties that make them suitable for data center networks. However, data center topologies are not limited to the ones presented in this section. For example, BCube is a data center network architecture based on a hyper-cube topology.
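To illustrate how such topologies scale, the standard sizing formulas for a k-ary fat-tree (the fat-tree special case of the Clos topology mentioned above) can be evaluated directly; the choice of k = 48 below is only an example, not a figure from the survey.

```python
def fat_tree_dimensions(k):
    """Sizes of a k-ary fat-tree built from k-port switches.
    Each of the k pods has k/2 edge and k/2 aggregation switches;
    there are (k/2)^2 core switches; each edge switch hosts k/2 servers."""
    assert k % 2 == 0, "k must be even"
    half = k // 2
    return {
        "pods": k,
        "edge_switches": k * half,
        "agg_switches": k * half,
        "core_switches": half * half,
        "hosts": k * half * half,   # k^3 / 4
    }

dims = fat_tree_dimensions(48)   # e.g., commodity 48-port switches
```

With 48-port switches the topology already supports tens of thousands of hosts, which is why commodity switches are attractive in this design.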
For example, a physical machine (server) is virtualized via a hypervisor that creates virtual machines (VMs) having different capacities (CPU, memory, disk space) and running different operating systems and applications. While a Virtualized Data Center is a physical data center with deployed resource virtualization techniques, a Virtual Data Center (VDC) is a logical instance of a Virtualized Data Center consisting of a subset of the physical data center resources. [Fig.: Virtualization of a data center.]

Hence, unlike a VN, a VDC is composed of different types of virtual nodes, and its virtual links are characterized by their bandwidth and propagation delay.

One difference from ISP networks is the number of nodes: while the number of nodes in ISP backbones is in the order of hundreds, a data center can contain a far larger number of nodes in a single cluster. This can potentially raise scalability issues and increase management complexity. Nevertheless, there are other features worth considering; we provide more details about the challenges related to these features later in the survey.
For a survey of ISP network virtualization, the interested reader is referred to the literature.

Whereas the former business model assumes that there are two players, ISPs and end-users, the latter proposes to separate the role of the traditional ISP into two: deploying networking mechanisms (service providers) and owning and maintaining the physical infrastructure (infrastructure providers, InPs), i.e., a data center. This separation fosters innovation. Each tenant creates a VDC over the physical infrastructure owned by the InP for further deployment of services and applications offered to end-users.

One approach to bandwidth isolation works within a shim layer that intercepts all packets entering and leaving a host; each tunnel is associated with an allowed sending rate on that tunnel, implemented as a rate-limiter. The other alternatives are group allocation and the use of rate-limiters. In Gatekeeper, a rate-limiter module is incorporated into the hypervisor of each physical machine; it relies on Open vSwitch, also running in the hypervisor, to track rates. Bandwidth guarantee in Gatekeeper is thus achieved through rate limiting. The deployment of rate limiters located at end-hosts makes it possible to avoid explicit bandwidth reservation at switches. CloudNaaS likewise relies on Open vSwitch.

On the other hand, path diversity available in data center networks offers an effective means to forward packets between virtual nodes. Accordingly, we focus our attention in this survey on addressing schemes and network performance isolation.
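Per-tunnel rate limiting of the kind described above (an allowed sending rate enforced at the end-host) is commonly implemented with a token bucket. The sketch below is a generic illustration of that mechanism, not code from Gatekeeper or any other surveyed system; the rate and burst values are arbitrary.

```python
class TokenBucket:
    """Allow at most `rate` bytes/s with bursts up to `burst` bytes."""
    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens, self.last = burst, 0.0

    def allow(self, size, now):
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if size <= self.tokens:
            self.tokens -= size
            return True    # packet conforms to the tunnel's allowed rate
        return False       # packet exceeds the allowed sending rate

tb = TokenBucket(rate=1_000_000, burst=1500)  # ~1 MB/s, one-MTU burst
```

A hypervisor-resident limiter like this drops or queues non-conforming packets, which is what lets the architecture avoid explicit bandwidth reservation in the switches.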
Note that a project may address more than one feature; a checkmark shows the features that are inherent to the surveyed proposals.

Traditional data center (DC). Virtualization in current data center architectures is commonly achieved by server virtualization, and isolation between tenants is achieved through VLANs. Data centers relying on this simple design can be implemented using commodity switches and popular hypervisor technologies. The main limitation of this design is scalability, since commodity switches were not designed to handle a large number of VMs and the resulting growth of the layer 2 and layer 3 address spaces. In particular, switches have to maintain an entry in their FIBs (Forwarding Information Base) for every VM, which can dramatically increase the size of forwarding tables.

Diverter requires that every VM have an IP address format encoding the tenant identity, the subnet, and the virtual machine address. When a VM sends an Ethernet frame, VNET replaces the source and destination addresses with the MAC addresses of the corresponding physical machines; then switches perform packet forwarding using the MAC addresses of the physical machines. VNET also performs routing between subnets. Summarizing, Diverter provides layer 3 network virtualization that allows every tenant to control his own IP subnets and VM addresses. The main limitation of the proposal is that it does not provide any QoS guarantee, the support of which the authors consider as future work.
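An address layout that encodes tenant, subnet, and VM identifiers into a single 32-bit value, as Diverter's scheme does conceptually, can be sketched as follows. The 10/8/14-bit field split is an arbitrary illustration chosen here, not Diverter's actual layout.

```python
TENANT_BITS, SUBNET_BITS, VM_BITS = 10, 8, 14   # hypothetical split

def encode(tenant, subnet, vm):
    """Pack tenant / subnet / VM ids into one 32-bit address."""
    assert tenant < 2**TENANT_BITS and subnet < 2**SUBNET_BITS and vm < 2**VM_BITS
    return (tenant << (SUBNET_BITS + VM_BITS)) | (subnet << VM_BITS) | vm

def decode(addr):
    """Recover (tenant, subnet, vm) from a packed address."""
    vm = addr & (2**VM_BITS - 1)
    subnet = (addr >> VM_BITS) & (2**SUBNET_BITS - 1)
    tenant = addr >> (SUBNET_BITS + VM_BITS)
    return tenant, subnet, vm

addr = encode(tenant=7, subnet=3, vm=42)
```

The field widths directly bound the numbers of tenants, subnets, and VMs, which is why such addressing schemes cap scalability.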
Supporting logical partitioning of IP networks, as Diverter does, is essential for better accommodation of the needs of applications and services in large-scale multi-tenant environments like data centers.

NetLord. To maximize revenue, providers of Infrastructure-as-a-Service (IaaS) are interested in full utilization of their resources. One of the most effective ways to achieve that is by maximizing the number of tenants using the shared infrastructure. NetLord is a network architecture designed for such environments. The extra source and destination L2 addresses determine the MAC addresses of the ingress and egress switches of the servers hosting the source and destination VMs, respectively. Packet forwarding from an egress switch to a destination server is based on an L3 lookup of an egress port. An NLA forwards packets based on the destination address in the encapsulated tenant packet. To support virtual routing, NetLord uses the same routing mechanism as Diverter. NetLord does not support any bandwidth guarantee.

VICTOR. Cloud tenants have a need to migrate services across data centers, to balance load within and across data centers, or to optimize the performance of their services. One approach that allows tenants and users to achieve these objectives is migration of VMs. The data plane consists of distributed Forwarding Elements (FEs) that collectively act as a virtualized router; the control plane is supported by one or several Centralized Controllers (CC).
A VM is deployed on a server connected to only one edge FE. The CC calculates the routing path from each FE to the VMs and spreads that information among FEs, which rely on these routing tables for forwarding packets. [Fig.: SEC2 architecture.]

VL2 is based on a non-oversubscribed Clos topology (see Figure 2) that provides easiness of routing and resilience. Packets are forwarded using two types of IP addresses: locator addresses (LAs) and application addresses (AAs), used by switches and servers, respectively. VL2 relies on a directory system for AA-to-LA mappings. Before sending a packet, a VL2 server encapsulates the packet with the LA of the ToR switch of the destination. At the destination ToR switch, the packet is decapsulated and delivered to the destination AA server. Performing encapsulation at the servers improves the scalability of VL2, since ToR switches do not have to store forwarding information for a large number of servers. Furthermore, the VL2 directory system maintains a view of the network after failures.

In PortLand, using k-port switches, the forwarding table at each switch is limited to O(k) records due to the properties of a multi-rooted fat-tree topology. The position of a switch in the topology may be set manually by an administrator, or determined automatically. PortLand also has limitations. First, it requires a multi-rooted fat-tree topology, making PortLand inapplicable to other used data center network topologies. Second, resolving ARP requests by a single server (the fabric manager) is a potential bottleneck. Third, additional requirements are placed on each edge switch.
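The AA-to-LA indirection in VL2 can be illustrated with a toy directory: the sender resolves the destination's application address to the locator address of its ToR switch and encapsulates the packet. The addresses below are made up for illustration and do not come from the paper.

```python
# Toy AA -> LA directory: the application address of a VM maps to the
# locator address of the ToR switch behind which the VM lives.
directory = {
    "20.0.0.5": "10.1.1.1",   # hypothetical AA -> LA entries
    "20.0.0.9": "10.1.2.1",
}

def encapsulate(src_aa, dst_aa, payload):
    """Wrap a packet with the LA of the destination's ToR switch;
    that ToR decapsulates and delivers to the AA on arrival."""
    la = directory[dst_aa]    # directory-system lookup
    return {"outer_dst": la,
            "inner": {"src": src_aa, "dst": dst_aa, "data": payload}}

pkt = encapsulate("20.0.0.5", "20.0.0.9", b"hello")
```

Because switches only ever see LAs, their forwarding state is independent of the number of servers, which is the scalability point made above.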
One limitation of VL2 is the lack of absolute bandwidth guarantees between servers, which is required by many applications. The proposal is also highly coupled to the underlying Clos topology, and requires that switches implement OSPF, ECMP, and IP-in-IP encapsulation, which can limit its deployment.

PortLand addresses such issues in next-generation data centers for a multi-rooted fat-tree topology (see Figure 3). In particular, the architecture proposes an L2 routing mechanism employing the properties of that topology, and it supports plug-and-play operation. A PMAC has the format pod.position.port.vmid, where pod is the pod number of an edge switch, position is its position in the pod, and port is the port number of the switch to which the host is connected.

G. SEC2. To ensure wide adoption of cloud computing over data centers, it is important to provide all tenants with security guarantees. In particular, one of the important security issues is traffic isolation among tenants. Second, per-user control of security policy is a challenge. Third, having a large number of VLANs in the same data center network may induce complexity in network management and increased control overhead. SEC2 uses network virtualization techniques to provide secured elastic cloud computing. Network virtualization is supported through Forwarding Elements (FEs) and a Central Controller (CC).
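The hierarchical PMAC described above can be composed mechanically; the sketch below uses 16/8/8/16-bit fields for pod, position, port, and VM id, following the PortLand design, with example field values that are purely illustrative.

```python
def pmac(pod, position, port, vmid):
    """Compose a 48-bit PMAC in the pod.position.port.vmid format
    (16/8/8/16-bit fields)."""
    assert pod < 2**16 and position < 2**8 and port < 2**8 and vmid < 2**16
    value = (pod << 32) | (position << 24) | (port << 16) | vmid
    # Render as a colon-separated MAC string, most significant byte first.
    return ":".join(f"{(value >> s) & 0xFF:02x}" for s in range(40, -1, -8))

mac = pmac(pod=2, position=1, port=3, vmid=5)
```

Because the address itself encodes the location, a switch can forward on a PMAC prefix instead of storing one entry per host, which is what keeps the table at O(k) records.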
FEs perform address mapping, policy checking and enforcement, and packet forwarding. The network architecture has two levels: edge domains and a core domain. An edge domain is assigned a unique eid, and is connected to the core domain by one or more FEs. The advantage of SEC2 is that it does not require specialized routers or switches across the entire data center network. [Fig.: Abstractions in Oktopus.]
Although infrastructure providers offer tenants on-demand computing resources through allocating VMs in data centers, they do not support performance guarantees on network resources. First, variability of network performance induces unpredictable application performance in data centers, making application performance management a challenge. Second, unpredictable network performance can decrease application productivity and customer satisfaction, leading to revenue losses. Oktopus is the implementation of two virtual network abstractions. A virtual cluster, shown in Figure 8a, provides the illusion of having all VMs connected to a single non-oversubscribed switch. A virtual oversubscribed cluster suits applications with localized communication patterns. A tenant can choose the abstraction and the degree of oversubscription of the virtual network based on the communication pattern of the application the tenant plans to run. Oktopus uses a greedy algorithm for the resource allocation to the VDC.

H. SPAIN. Conventional spanning-tree Ethernet is ill-suited to data center networks, since it does not exploit the path diversity offered by those networks, resulting in limited bi-section bandwidth and poor reliability. Smart Path Assignment In Networks (SPAIN) uses the VLAN support in existing commodity Ethernet switches to provide multipathing over arbitrary topologies. The agent also detects failed paths and re-routes packets around the failures by using a different path. However, SPAIN faces scalability issues, such as the limited number of VLANs allowed by a switch. In addition, SPAIN requires that switches store multiple entries for every destination and VLAN; it creates more pressure on switch forwarding tables than standard Ethernet does.

Gatekeeper focuses on providing guaranteed bandwidth among VMs in a multi-tenant data center, and achieving high bandwidth utilization. In general, achieving a strict bandwidth guarantee often implies non-effective utilization of a link's bandwidth when free capacity becomes available. [Fig.: SecondNet architecture.]
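The virtual cluster abstraction has a convenient property, noted in the Oktopus work: for a request of N VMs each with bandwidth B, a tree link with m of those VMs placed below it needs only min(m, N - m) * B of reserved capacity, since cross-link traffic is bounded by the smaller side. A minimal sketch, with hypothetical request numbers:

```python
def vc_link_bandwidth(n_total, b_per_vm, m_below):
    """Bandwidth a virtual cluster <N, B> needs on a tree link with
    m_below VMs placed beneath it: min(m, N - m) * B."""
    return min(m_below, n_total - m_below) * b_per_vm

# Hypothetical request: 10 VMs at 100 Mbps each, 3 VMs under this link
bw = vc_link_bandwidth(10, 100, 3)
```

A greedy allocator can use this bound to check, level by level, whether a candidate placement fits the residual link capacities.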
Gatekeeper creates one or more logical switches that interconnect VMs belonging to the same tenant. An open question is how to implement the abstractions of Oktopus for other topologies. Furthermore, the scale of the experimental effort so far is too small to evaluate the effectiveness of Gatekeeper in real cloud environments.

SecondNet provides bandwidth guarantees among multiple VMs in a multi-tenant virtualized data center, and distinguishes several service types. PSSR improves the scalability of the data plane, as paths are calculated at the source node. Besides, SecondNet allows resources (VMs and bandwidth) to be dynamically added to or removed from a VDC. In addition, PSSR can be implemented with Multiprotocol Label Switching (MPLS).

CloudNaaS targets the deployment of applications in clouds. In particular, the architecture provides a set of primitives that suit the requirements of typical VDCs: address spaces, middlebox traversal, network broadcasting, VM grouping, and bandwidth reservation.
For example, while the BCube network achieves high network performance, it does not fully address all of the above issues.

CloudNaaS relies on OpenFlow forwarding to achieve its objectives. Application deployment in CloudNaaS includes several steps. After the network requirements are translated into a communication matrix, the cloud controller determines the placement of VMs and generates network-level rules that can be installed on switches. In this way, one takes into consideration communication locality. Implementing multipath forwarding is possible with the use of ECMP. However, CloudNaaS does not provide any absolute bandwidth guarantees to services. Finding a better trade-off between scalability and network performance remains an open issue.

NetShare allocates bandwidth for tenants in a proportional way and achieves high link utilization for infrastructure providers. In NetShare, data center network links are shared using round-robin mechanisms such as DRR, without requiring changes to switches or routers.

Seawall shares the network proportionally among tenants, and prevents malicious tenants from consuming all network resources. Besides, Seawall requires that a physical machine maintain state information only for its own entities, which improves scalability. In all cases, Seawall shares the bandwidth proportionally and enforces isolation. Moreover, Seawall adapts to changing network conditions, making it fault-tolerant. It has been evaluated on Windows 7 and Hyper-V. However, without admission control, it is unlikely that Seawall will be able to achieve absolute bandwidth guarantees.

IV. Comparison. Scalability and fault-tolerance are important design concerns for data centers comprising large numbers of servers and network resources, and expected to support a large number of tenant applications. As data centers typically use commodity equipment, bandwidth QoS is an increasing concern of tenants and is important to the success of virtualized data center architectures. We summarize the results of our comparison in the accompanying tables; in the following subsections, we provide a detailed discussion of our evaluation of each performance metric.
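Weighted proportional sharing of a congested link, of the kind Seawall and NetShare provide, divides capacity in proportion to per-entity weights. The sketch below shows only the arithmetic, not the control loop either system actually uses; the capacity and weights are hypothetical.

```python
def weighted_shares(capacity, weights):
    """Split link capacity proportionally to each tenant's weight."""
    total = sum(weights.values())
    return {tenant: capacity * w / total for tenant, w in weights.items()}

# Hypothetical: a 1000 Mbps link shared by tenants weighted 1, 1, and 2
shares = weighted_shares(1000, {"a": 1, "b": 1, "c": 2})
```

Note that such shares are relative, not absolute: adding a tenant shrinks everyone's allocation, which is exactly why proportional schemes cannot give absolute guarantees without admission control.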
N. NetShare. NetShare tackles the problem of bandwidth allocation in virtualized data center networks by proposing a statistical multiplexing mechanism.

A. Scalability. Achieving high scalability in virtualized data centers requires address spaces that support large numbers of tenants and their VMs. The maximum numbers of tenants and VMs depend mainly on the number of bits used to identify tenants and VMs. Depending on the packet encapsulation and forwarding scheme, the size of the forwarding table depends on the number of VMs, physical machines, switches or pods. Among the architectures surveyed in the paper, SecondNet, Seawall, and Gatekeeper achieve high scalability by keeping state at end-hosts (e.g., in hypervisors) and maintaining the forwarding state only for switches in the network. Diverter is also scalable. Further, although CloudNaaS applies a control plane optimization for improving scalability, such an optimization limits the path diversity provided in the network and deteriorates the overall effectiveness of the approach. SEC2 is not scalable, because its addressing scheme limits the numbers of tenants and subnets supported in the network.

B. Fault-tolerance. In the context of virtualized data centers, fault-tolerance covers failure handling of components in the data plane and the control plane. Failures of control plane components can lead to malfunctioning of part or the whole data center and result in an inability to detect failures in the data plane. Recovery based on spanning tree protocols depends on the time that the protocol takes to converge after topology changes. Similar to STP, failures in instances of routing protocols such as OSPF require route recalculation, which may take a variable time depending on the size of the network; however, as shown in prior work, the convergence time of OSPF (less than one second) is small. For instance, SecondNet uses a spanning tree signalling channel to detect failures, and its allocation algorithm to handle them. Schemes including Seawall and Gatekeeper can adapt to failures by re-computing the allocated rates. OpenFlow controls switches through a set of rules and associated actions; the centralized design of the OpenFlow controller makes it prone to failures, though a switch can fail over to another available controller. Distributed lookup systems can be used to minimize the negative impact of failures in address lookup systems.
C. Deployability. As mentioned previously, deployability is a key aspect of any data center network virtualization architecture. In our comparison summarized in Tables III-VI, we evaluate the deployability of an architecture as high if the architecture can be built from commodity switches. On the other hand, low deployability refers to architectures requiring devices with features that are not available in every switch, e.g., support for particular protocols. Thus, implementing those architectures can increase the overall cost of the network. Nevertheless, with hardware evolution and wide adoption of programmable hardware, it is not excluded that these technologies become commonplace in the near future. Operators tend to deploy commodity equipment, which is cheap and easily replaceable, but using this equipment is not always a viable option. We also summarize deployability in Table VIII, which describes the required features to be implemented in hypervisors (on physical machines), edge switches, and core switches.

This induces a scalability issue, because commodity switches often have a limited amount of resources (e.g., FIB entries). A FIB allows mapping a MAC address to a switch port when making a decision about packet forwarding. In both conventional and NetLord architectures, commodity switches are used; however, the forwarding scheme makes the difference, hence there is no scalability problem in NetLord.
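The FIB lookup described above is, at its core, a map from MAC address to output port. A toy version, with made-up addresses, makes the resource pressure concrete: one entry per reachable host.

```python
# Toy forwarding information base: MAC address -> output port
fib = {
    "00:02:01:03:00:05": 3,
    "00:02:01:04:00:07": 4,
}

def forward(dst_mac, flood_port=-1):
    """Return the output port for a frame, flooding on a FIB miss
    (flood_port is a stand-in for 'send on all ports')."""
    return fib.get(dst_mac, flood_port)
```

With one entry per VM the table grows with the tenant population, whereas encapsulating schemes keep it proportional to the number of switches or pods.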
The table also shows which functionalities are required from a dedicated server; depending on the scheme, this server can have different functionalities. Besides the hypervisor, most of the surveyed architectures require extra hardware features.

D. Load balancing. QoS is particularly relevant for delay-sensitive applications such as video streaming, online gaming, web telephony and conferencing. On the other hand, Seawall and NetShare provide weighted fair-sharing of bandwidth.
Lastly, Diverter, VICTOR, and SEC2 are essentially addressing schemes that do not explicitly address load balancing.

Summary. Our comparison of the different proposed architectures reveals several observations. First, there is no ideal solution for all scenarios; this is mainly because each architecture addresses different requirements. For example, it is possible to combine VICTOR and Oktopus to deploy a virtualized data center with bandwidth guarantees while also supporting VM migration. Thus, the issues discussed in this section require further research efforts in the context of different cloud environments and of the performance requirements of the applications residing in the data centers.

It is envisioned that future cloud infrastructures will be multi-tiered, where edge data centers host services close to end users. Similar to large data centers, virtualization is required in edge data centers for supporting VDCs from multiple tenants with diverse performance objectives and management goals. This raises new research challenges: for example, how should one trade off between performance and operational cost? This problem is commonly known as the service placement problem, and it shares many similarities with the traditional replica placement problem. However, existing solutions have not studied the dynamic case, where demand and system conditions change over time. As there can be a large number of edge data centers, monitoring and controlling resources in such a large infrastructure have inherent challenges; minimizing this management overhead is a major issue worth investigation. We believe addressing the above research challenges will be crucial to the success of multi-tiered cloud infrastructures.

Most of the existing studies so far on data center network virtualization have been focusing on large data centers. Although large data centers enjoy economy-of-scale and high manageability, they have drawbacks.
In particular, economic factors dictate that there will be only a handful of large data centers, built in locations where construction and operational costs are low. These centers may be located far away from end users, resulting in higher communication cost and potentially sub-optimal service quality in terms of delay, jitter and throughput. Motivated by this observation, recent proposals such as mist, EdgeCloud, micro-data centers, and nano-data centers have been put forward to advocate building small-scale data centers for service hosting at the network edge.

B. Virtual data center embedding. Accommodating a high number of VDCs depends on an efficient mapping of virtual resources onto physical ones. This problem is commonly referred to as embedding, and it has been the subject of extensive research in the context of network virtualization. Data center architectures like SecondNet and Oktopus have proposed heuristics to cope with the NP-hardness of the embedding problem. However, there are several other issues concerning the design of virtual data center embedding algorithms. In this paper we adopt the terminology of VDC embedding; note that there are resources besides physical servers that can be virtualized.
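A first-fit greedy heuristic of the general kind such architectures build upon can be sketched as follows. The server capacities and VM demands are hypothetical, and a real embedder must also allocate link bandwidth along the chosen paths; this sketch handles server capacity only.

```python
def greedy_embed(vm_demands, server_capacities):
    """First-fit placement of VMs (CPU units) onto servers, largest
    demand first. Returns {vm: server} or None if some VM cannot fit."""
    free = dict(server_capacities)
    placement = {}
    for vm, demand in sorted(vm_demands.items(), key=lambda x: -x[1]):
        for server, cap in free.items():
            if demand <= cap:
                free[server] -= demand
                placement[vm] = server
                break
        else:
            return None   # embedding failed; a real system would backtrack
    return placement

plan = greedy_embed({"vm1": 4, "vm2": 2, "vm3": 3}, {"s1": 5, "s2": 5})
```

Sorting by decreasing demand is a common heuristic for bin-packing-like problems; it does not change the NP-hardness of the underlying embedding problem.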
We believe that embedding algorithms for VDCs should consider requirements for other resources as well, since there are resources besides physical servers that can be virtualized. Even though the proposed data center architectures have relied on different network topologies, such as the fat-tree topology, the embedding problem is common to all of them. Designing a VDC re-embedding scheme is another open issue: the failure of a physical link can cause disruption to multiple VDCs that share the link, and the affected VDCs must be re-mapped onto the remaining resources. Moreover, as we have already seen, many of the architectural proposals surveyed here reduce costs by consolidating a large number of virtual resources on a smaller number of physical ones; however, none of the existing embedding algorithms has considered energy cost. Recent work such as Polyvine represents an initial effort for tackling this problem.

C. Network programmability

Network programmability has recently received renewed attention in the research community. Simply stated, network programmability is the ability to run customized code inside the network; for example, running customized code on switches allows tenants to deploy their own protocols and services. In the context of virtualized data centers, network programmability raises additional concerns. Today, the network is shared by all tenants through a small set of common protocols (e.g., IPv4 and Ethernet); in a programmable multi-tenant data center, a number of research challenges need to be addressed instead, including the design of suitable programming APIs and an understanding of the impact of building a data center network from heterogeneous programmable devices.
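Returning to the energy point above — consolidating many virtual resources onto fewer physical machines so that idle ones can be powered down — the idea can be sketched with a first-fit-decreasing bin-packing heuristic. This is an illustrative toy of ours, not the placement algorithm of any surveyed system:

```python
# Energy-oriented consolidation sketch: first-fit decreasing bin packing.
# Packs VM loads onto as few active servers as possible; servers left
# empty can then be powered off to save energy.

def consolidate(vm_loads, server_capacity):
    """Return a list of active servers, each a list of the VM loads it hosts."""
    servers = []
    for load in sorted(vm_loads, reverse=True):  # biggest VMs first
        for srv in servers:
            if sum(srv) + load <= server_capacity:
                srv.append(load)  # fits on an already-active server
                break
        else:
            servers.append([load])  # must power on a new server
    return servers
```

First-fit decreasing is a classic bin-packing heuristic; energy-aware embedding would combine such packing with the bandwidth constraints discussed earlier.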
Introducing heterogeneity has both advantages and disadvantages. OpenFlow and FlowVisor are the most prominently proposed technologies for achieving programmability in data center networks. OpenFlow is an abstraction layer that allows users to program network switches through special components called controllers. FlowVisor, in turn, is a network virtualization layer that allows multiple controllers (one controller per tenant) to share a single OpenFlow switch. One of the limitations of OpenFlow is scalability: currently, OpenFlow adopts a centralized architecture where a single controller is responsible for managing all OpenFlow-enabled switches in the network. There are some proposals that aim to overcome this issue.
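FlowVisor-style slicing can be conveyed with a drastically simplified admission check: each tenant controller owns a flowspace, and the slicing layer rejects any rule that falls outside it. Real FlowVisor flowspaces match on many header fields; the VLAN-only model and the names `SLICES` and `admit_rule` below are our own simplifications:

```python
# FlowVisor-like slicing sketch: before a tenant controller's flow rule
# reaches the shared OpenFlow switch, the slicing layer checks that the
# rule stays inside that tenant's flowspace (here modeled as VLAN ids).

SLICES = {"tenantA": {10, 11}, "tenantB": {20}}  # hypothetical slices

def admit_rule(tenant, rule_vlan):
    """Accept a flow rule only if its VLAN lies within the tenant's slice."""
    return rule_vlan in SLICES.get(tenant, set())
```

The point of the sketch is the isolation property: no tenant can install state that matches another tenant's traffic, even though all controllers share one physical switch.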
Similarly, the scalability of FlowVisor is also a subject needing further investigation, given the large number of tenants and switches involved. A possible avenue for improving FlowVisor scalability is to determine the optimal number and placement of FlowVisor instances in a programmable data center network. Finally, other programmable platforms (e.g., mobile agents and Script MIB) could also be evaluated in the context of virtualized data center networks.
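Determining the number and placement of FlowVisor (or controller) instances resembles a facility-location problem. As one illustrative formulation — a standard greedy k-center heuristic, not a published FlowVisor placement scheme — instance locations can be chosen to keep the worst switch-to-instance distance small:

```python
# Placement sketch: greedy k-center (a classic 2-approximation).
# Picks k instance locations so that the maximum distance from any
# switch to its nearest instance stays low. Distances are hop counts.

def greedy_k_center(dist, k):
    """dist: {(u, v): hops} complete symmetric distance map over switches
    (including zero self-distances). Returns k chosen locations."""
    nodes = sorted({u for u, _ in dist})
    centers = [nodes[0]]  # arbitrary first pick
    while len(centers) < k:
        # next center = the switch farthest from every chosen center
        far = max(nodes, key=lambda n: min(dist[(n, c)] for c in centers))
        centers.append(far)
    return centers
```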
D. Network performance guarantees

Commercial data centers today are home to a vast number of applications with diverse performance requirements. For example, user-facing applications, such as web servers and real-time (e.g., multimedia) applications, are sensitive to network delay. Many proposed architectures, such as SecondNet and Oktopus, therefore aim at providing guaranteed network performance. The issue of providing guaranteed delay is still an open problem, as it not only requires isolated bandwidth allocation, but also effective rate control mechanisms. One particular challenge in the data center environment is the TCP incast collapse problem, where the simultaneous arrival of packets from many synchronized senders overflows a switch buffer, causing losses, retransmission timeouts, and increased network delay. We believe any solution that provides delay guarantees in data center networks must also have the capability of handling TCP incast collapse.
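The incast effect can be conveyed with a back-of-the-envelope model: N synchronized senders each burst a window of packets into one shared switch buffer, and anything beyond the buffer is dropped. The numbers and the all-or-nothing drop model below are illustrative only:

```python
# Toy incast model: aggregate synchronized burst vs. shared buffer size.
# Dropped packets trigger TCP timeouts, which is what collapses goodput.

def incast_drops(n_senders, window, buffer_pkts):
    """Packets dropped when n_senders each burst `window` packets
    into a shared buffer holding `buffer_pkts` packets."""
    burst = n_senders * window
    return max(0, burst - buffer_pkts)
```

With a 64-packet buffer and 8-packet windows, 4 senders fit but 16 senders lose half their burst — which is why rate control, and not just bandwidth reservation, matters for delay guarantees.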
However, providing strict bandwidth guarantees can lead to low utilization if tenants do not fully utilize the allocated bandwidth. Conversely, approaches that aim at high resource utilization do not provide hard resource guarantees to each virtual data center.

E. Data center management

In a virtualized data center, the infrastructure provider manages the physical resources of the data center, while service providers manage the virtual resources of their virtual data centers. An important advantage of virtualized data centers is that the physical resources are managed by a single infrastructure provider. This allows the infrastructure provider to have a full view of the system, thus facilitating management. However, there are still several challenges that need to be addressed. Centralized monitoring approaches suffer from low scalability and resilience; cooperative monitoring and gossiping aim to overcome these limitations by enabling distributed and robust monitoring solutions for large-scale environments. Moreover, data center network virtualization is not yet well supported by monitoring tools designed for data centers, and interfaces exposing monitoring information to service providers also require further exploration.

Energy represents a significant part of the operational cost of a data center; Greenberg et al., for instance, analyze the cost of a large data center, which includes processors among its dominant components. One of the main challenges towards optimal energy consumption is therefore to design energy-proportional networks. ElasticTree, for example, attempts to achieve energy proportionality by dynamically powering off switches and links, reducing power consumption through network consolidation. However, minimizing energy consumption can come at the price of VDC performance degradation.
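The gossip-based monitoring mentioned above can be sketched with push-sum averaging, a classic gossip aggregation scheme: each node repeatedly splits its (sum, weight) state with a random peer, and every node's estimate converges to the global average without a central collector. The round count and values below are illustrative:

```python
# Gossip monitoring sketch (push-sum averaging). Each node holds a local
# measurement (e.g., link utilization); per round, it keeps half of its
# (sum, weight) pair and pushes half to a random peer. All sum/weight
# ratios converge to the global average.

import random

def push_sum(values, rounds=60, seed=1):
    random.seed(seed)
    n = len(values)
    state = [[v, 1.0] for v in values]  # (sum, weight) per node
    for _ in range(rounds):
        inbox = [[0.0, 0.0] for _ in range(n)]
        for i, (s, w) in enumerate(state):
            j = random.randrange(n)           # random gossip partner
            inbox[i][0] += s / 2; inbox[i][1] += w / 2  # keep half
            inbox[j][0] += s / 2; inbox[j][1] += w / 2  # push half
        state = inbox
    return [s / w for s, w in state]  # per-node estimate of the average
```

Mass conservation (total sum and weight never change) is what makes the per-node ratios converge to the true average, which is the robustness property centralized collectors lack.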
Thus, designing consolidation schemes that balance energy savings against VDC performance is an important research direction.

Security is a further concern: each tenant may deploy its own protection mechanisms, which introduces the additional complexity of managing heterogeneous security mechanisms and policies. Furthermore, the co-existence and interaction of the multiple security systems expected in a multi-tenant data center is an issue that has not been addressed before. Auditing raises similar issues: there is an overhead associated with such infrastructures, especially in large-scale data centers, and prior work showed that it is a challenge to audit web services in cloud computing environments without deteriorating application performance.

Finally, since physical resources are shared, failures of a physical resource can potentially affect multiple service providers. Most existing architectures rely on reactive failure-handling approaches. Their main drawback is the potentially long response time, which can negatively impact application performance. Ideally, fault management should be proactive, restoring affected virtual resources before applications experience disruption.
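Whether failure handling is reactive or proactive, its first step is knowing which VDCs a physical failure touches. A minimal lookup over a hypothetical embedding map (the data layout is ours, for illustration):

```python
# Failure-impact sketch: given which physical links each VDC's virtual
# links are mapped onto, list every VDC disrupted by one link failure,
# so a re-embedding step knows exactly what must be restored.

def affected_vdcs(embedding, failed_link):
    """embedding: {vdc_id: [physical links used]}. Returns impacted VDC ids."""
    return sorted(v for v, links in embedding.items() if failed_link in links)
```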