draft-ietf-nvo3-framework-01.txt

skipping to change at page 1, line 16

Expires: March 2013
Thomas Morin
France Telecom Orange
Nabil Bitar
Verizon
Yakov Rekhter
Juniper

October 19, 2012

Framework for DC Network Virtualization
draft-ietf-nvo3-framework-01.txt

Status of this Memo

This Internet-Draft is submitted in full conformance with the
provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF). Note that other groups may also distribute
working documents as Internet-Drafts. The list of current Internet-
Drafts is at http://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six
months and may be updated, replaced, or obsoleted by other documents
at any time. It is inappropriate to use Internet-Drafts as
reference material or to cite them other than as "work in progress."

This Internet-Draft will expire on April 19, 2013.

Copyright Notice

Copyright (c) 2012 IETF Trust and the persons identified as the
document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(http://trustee.ietf.org/license-info) in effect on the date of
publication of this document. Please review these documents

skipping to change at page 2, line 24
Several IETF drafts relate to the use of overlay networks to support
large scale virtual data centers. This draft provides a framework
for Network Virtualization over L3 (NVO3) and is intended to help
plan a set of work items in order to provide a complete solution
set. It defines a logical view of the main components with the
intention of streamlining the terminology and focusing the solution
set.

Table of Contents
1. Introduction................................................3
   1.1. Conventions used in this document......................4
   1.2. General terminology....................................4
   1.3. DC network architecture................................6
   1.4. Tenant networking view.................................7
2. Reference Models............................................8
   2.1. Generic Reference Model................................8
   2.2. NVE Reference Model...................................10
   2.3. NVE Service Types.....................................12
      2.3.1. L2 NVE providing Ethernet LAN-like service.......12
      2.3.2. L3 NVE providing IP/VRF-like service.............12
3. Functional components......................................12
   3.1. Generic service virtualization components.............12
      3.1.1. Virtual Access Points (VAPs).....................13
      3.1.2. Virtual Network Instance (VNI)...................13
      3.1.3. Overlay Modules and VN Context...................13
      3.1.4. Tunnel Overlays and Encapsulation options........14
      3.1.5. Control Plane Components.........................14
         3.1.5.1. Distributed vs Centralized Control Plane....15
         3.1.5.2. Auto-provisioning/Service discovery.........15
         3.1.5.3. Address advertisement and tunnel mapping....16
         3.1.5.4. Tunnel management...........................17
   3.2. Multi-homing..........................................17
   3.3. Service Overlay Topologies............................18
4. Key aspects of overlay networks............................18
   4.1. Pros & Cons...........................................18
   4.2. Overlay issues to consider............................19
      4.2.1. Data plane vs Control plane driven...............19
      4.2.2. Coordination between data plane and control plane.20
      4.2.3. Handling Broadcast, Unknown Unicast and Multicast
             (BUM) traffic....................................20
      4.2.4. Path MTU.........................................21
      4.2.5. NVE location trade-offs..........................21
      4.2.6. Interaction between network overlays and underlays.22
5. Security Considerations....................................23
6. IANA Considerations........................................23
7. References.................................................23
   7.1. Normative References..................................23
   7.2. Informative References................................23
8. Acknowledgments............................................24
1. Introduction

This document provides a framework for Data Center Network
Virtualization over L3 tunnels. This framework is intended to aid in
standardizing protocols and mechanisms to support large scale
network virtualization for data centers.

Several IETF drafts relate to the use of overlay networks for data
centers.

skipping to change at page 4, line 25
1.2. General terminology

This document uses the following terminology:

NVE: Network Virtualization Edge. It is a network entity that sits
on the edge of the NVO3 network. It implements network
virtualization functions that allow for L2 and/or L3 tenant
separation and for hiding tenant addressing information (MAC and IP
addresses). An NVE could be implemented as part of a virtual switch
within a hypervisor, a physical switch or router, or a Network
Service Appliance.

VN: Virtual Network. This is a virtual L2 or L3 domain that belongs
to a tenant.

VNI: Virtual Network Instance. This is one instance of a virtual
overlay network. Two Virtual Networks are isolated from one another
and may use overlapping addresses.

Virtual Network Context or VN Context: Field that is part of the
overlay encapsulation header which allows the encapsulated frame to
be delivered to the appropriate virtual network endpoint by the
egress NVE. The egress NVE uses this field to determine the
appropriate virtual network context in which to process the packet.

skipping to change at page 5, line 19
Data Center (DC): A physical complex housing physical servers,
network switches and routers, Network Service Appliances and
networked storage. The purpose of a Data Center is to provide
application and/or compute and/or storage services. One such service
is virtualized data center services, also known as Infrastructure as
a Service.

Virtual Data Center or Virtual DC: A container for virtualized
compute, storage and network services. Managed by a single tenant, a
Virtual DC can contain multiple VNs and multiple Tenant Systems that
are connected to one or more of these VNs.

VM: Virtual Machine. Several Virtual Machines can share the
resources of a single physical computer server using the services of
a Hypervisor (see below definition).

Hypervisor: Server virtualization software running on a physical
compute server that hosts Virtual Machines. The hypervisor provides
shared compute/memory/storage and network connectivity to the VMs
that it hosts. Hypervisors often embed a Virtual Switch (see below).

Virtual Switch: A function within a Hypervisor (typically
implemented in software) that provides similar services to a
physical Ethernet switch. It switches Ethernet frames between VMs'
virtual NICs within the same physical server, or between a VM and a
physical NIC card connecting the server to a physical Ethernet
switch. It also enforces network isolation between VMs that should
not communicate with each other.

Tenant: In a DC, a tenant refers to a customer that could be an
organization within an enterprise, or an enterprise, with a set of
DC compute, storage and network resources associated with it.

Tenant System: A physical or virtual system that can play the role
of a host, or a forwarding element such as a router, switch,
firewall, etc. It belongs to a single tenant and connects to one or
more VNs of that tenant.

End device: A physical system to which networking service is
provided. Examples include hosts (e.g. server or server blade),
storage systems (e.g. file servers, iSCSI storage systems) and
network devices (e.g. firewall, load-balancer, IPsec gateway). An
end device may include internal networking functionality that
interconnects the device's components (e.g. virtual switches that
interconnect VMs running on the same server). NVE functionality may
be implemented as part of that internal networking.

ELAN: MEF ELAN, multipoint to multipoint Ethernet service

EVPN: Ethernet VPN as defined in [EVPN]
1.3. DC network architecture

A generic architecture for Data Centers is depicted in Figure 1:

                               ,---------.
                             ,'           `.
                            (  IP/MPLS WAN )
                             `.           ,'
                               `-+------+'
                            +--+--+  +-+---+
                            |DC GW|+-+|DC GW|
                            +-+---+  +-----+
                               |      /
                               .--. .--.
                             (    '    '.--.
                           .-.' Intra-DC    '
                          (     network      )
                           (             .'-'
                            '--'._.'.    )\ \
                            / /     '--'  \ \
                           / /      | |    \ \
                    +---+--+   +-`.+--+  +--+----+
                    |  ToR  |  |  ToR  |  |  ToR  |
                    +-+--`.+   +-+-`.-+  +-+--+--+
                     /     \    /     \   /     \
                  __/_      \  /       \ /_     _\__
             '--------'  '--------'  '--------'  '--------'
             :  End    :  :  End   :  :  End   :  :  End   :
             :  Device :  : Device :  : Device :  : Device :
             '--------'  '--------'  '--------'  '--------'

        Figure 1 : A Generic Architecture for Data Centers
An example of multi-tier DC network architecture is presented in
this figure. It provides a view of physical components inside a DC.

A cloud network is composed of intra-Data Center (DC) networks and
network services, and inter-DC network and network connectivity
services. Depending upon the scale, DC distribution, operations
model, Capex and Opex aspects, DC networking elements can act as

skipping to change at page 7, line 20

also service virtualization.

In some DC architectures, it is possible that some tier layers
providing L2 and/or L3 services are collapsed, and that Internet
connectivity, inter-DC connectivity and VPN support are handled by a
smaller number of nodes. Nevertheless, one can assume that the
functional blocks fit with the architecture above.

The following components can be present in a DC:
o Top of Rack (ToR): Hardware-based Ethernet switch aggregating
  all Ethernet links from the End Devices in a rack representing
  the entry point in the physical DC network for the hosts. ToRs
  may also provide routing functionality, virtual IP network
  connectivity, or Layer2 tunneling over IP for instance. ToRs
  are usually multi-homed to switches in the Intra-DC network.
  Other deployment scenarios may use an intermediate Blade Switch
  before the ToR or an EoR (End of Row) switch to provide similar
  function as a ToR.

skipping to change at page 7, line 41

  switches aggregating multiple ToRs. Core switches are usually
  Ethernet switches but can also support routing capabilities.

o DC GW: Gateway to the outside world providing DC Interconnect
  and connectivity to Internet and VPN customers. In the current
  DC network model, this may be simply a Router connected to the
  Internet and/or an IPVPN/L2VPN PE. Some network implementations
  may dedicate DC GWs for different connectivity types (e.g., a
  DC GW for Internet, and another for VPN).

Note that End Devices may be single or multi-homed to ToRs.
1.4. Tenant networking view

The DC network architecture is used to provide L2 and/or L3 service
connectivity to each tenant. An example is depicted in Figure 2:
               +----- L3 Infrastructure ----+
               |                            |
            ,--+--.                      ,--+--.
           ( Rtr1  )......              ( Rtr2  )
            `-----'      |               `-----'
              | Tenant1  |LAN12   Tenant1 |
              |LAN11     ....|........    |LAN13
         ..............      |      |   ..............
         |            |      |      |   |            |
        ,-.          ,-.    ,-.    ,-. ,-.          ,-.
       (VM )........(VM )  (VM )...(VM )(VM ).......(VM )
        `-'          `-'    `-'    `-'  `-'          `-'

     Figure 2 : Logical Service connectivity for a single tenant
In this example one or more L3 contexts and one or more LANs (e.g.,
one per application type) running on DC switches are assigned for DC
tenant 1.

For a multi-tenant DC, a virtualized version of this type of service
connectivity needs to be provided for each tenant by the Network
Virtualization solution.
2. Reference Models

2.1. Generic Reference Model

The following diagram shows a DC reference model for network
virtualization using Layer3 overlays where NVEs provide a logical
interconnect between Tenant Systems that belong to a specific tenant
network.
      +--------+                                    +--------+
      | Tenant +--+                           +-----| Tenant |
      | System |  |                          (')    | System |
      +--------+  |    ...................  (   )   +--------+
                  |    +-+--+      +--+-+    (_)
                  |    | NV |      | NV |     |
                  +----|Edge|      |Edge|-----+
                       +-+--+      +--+-+
                        /  .           .
                       /   . L3 Overlay .   +--+-++--------+
      +--------+      /    .  Network   .   | NV || Tenant |
      | Tenant +--+  .     .            .   |Edge|| System |
      | System |     .     +----+       .   +--+-++--------+
      +--------+     ......| NV |........
                           |Edge|
                           +----+
                              |
                              |
                    =====================
                    |                   |
                +--------+          +--------+
                | Tenant |          | Tenant |
                | System |          | System |
                +--------+          +--------+

   Figure 3 : Generic reference model for DC network virtualization
                     over a Layer3 infrastructure
A Tenant System can be attached to a Network Virtualization Edge
(NVE) node in several ways:
- locally, by being co-located, i.e. resident in the same device
- remotely, via a point-to-point connection or a switched network
(e.g. Ethernet)
When an NVE is local, the state of Tenant Systems can be provided
without protocol assistance. For instance, the operational status of
a VM can be communicated via a local API. When an NVE is remote, the
state of Tenant Systems needs to be exchanged via a data or control
plane protocol, or via a management entity.
The functional components in this picture do not necessarily map
directly to the physical components described in Figure 1.

For example, an End Device can be a server blade with VMs and a
virtual switch, i.e. the VM is the Tenant System and the NVE
functions may be performed by the virtual switch and/or the
hypervisor. In this case, the Tenant System and NVE function are co-
located.

Another example is the case where an End Device can be a traditional
physical server (no VMs, no virtual switch), i.e. the server is the
Tenant System and the NVE function may be performed by the ToR.
Other End Devices in this category are Physical Network Appliances
or Storage Systems.
The NVE implements network virtualization functions that allow for
L2 and/or L3 tenant separation and for hiding tenant addressing
information (MAC and IP addresses), tenant-related control plane
activity and service contexts from the Routed Backbone nodes.

Core nodes utilize L3 techniques to interconnect NVE nodes in
support of the overlay network. These devices perform forwarding
based on the outer L3 tunnel header, and generally do not maintain
per tenant-service state, although some applications (e.g.,
multicast) may require control plane or forwarding plane information
that pertains to a tenant, group of tenants, tenant service or a set
of services that belong to one or more tunnels. When such tenant or
tenant-service related information is maintained in the core,
overlay virtualization provides knobs to control that information.
2.2. NVE Reference Model

The NVE is composed of a Virtual Network Instance that Tenant
Systems interface with and an overlay module that provides tunneling
overlay functions (e.g. encapsulation/decapsulation of tenant
traffic from/to the tenant forwarding instance, tenant
identification and mapping, etc.), as described in Figure 4:
                    +------- L3 Network ------+
                    |                         |
                    |       Tunnel Overlay    |
       +------------+---------+       +---------+------------+
       | +----------+-------+ |       | +---------+--------+ |
       | |  Overlay Module  | |       | |  Overlay Module  | |
       | +---------+--------+ |       | +---------+--------+ |
       |           |VN context|       | VN context|          |
       |           |          |       |           |          |
       |  +--------+-------+  |       |  +--------+-------+  |
       |  | |VNI|  .  |VNI| | |       |  | |VNI|  .  |VNI| | |
  NVE1 |  +-+------------+-+  |       |  +-+-----------+--+  | NVE2
       |    |    VAPs    |    |       |    |    VAPs   |     |
       +----+------------+----+       +----+-----------+-----+
            |            |                 |           |
     -------+------------+-----------------+-----------+-------
            |            |     Tenant      |           |
            |            |   Service IF    |           |
           Tenant Systems                 Tenant Systems

        Figure 4 : Generic reference model for NV Edge
Note that some NVE functions (e.g. data plane and control plane
functions) may reside in one device or may be implemented separately
in different devices.

For example, the NVE functionality could reside solely on the End
Devices, on the ToRs or on both the End Devices and the ToRs. In the
latter case we say that the End Device NVE component acts as the NVE
Spoke, and ToRs act as NVE hubs. Tenant Systems will interface with
VNIs maintained on the NVE spokes, and VNIs maintained on the NVE
spokes will interface with VNIs maintained on the NVE hubs.
2.3. NVE Service Types

NVE components may be used to provide different types of virtualized
service connectivity. This section defines the service types and
associated attributes.

2.3.1. L2 NVE providing Ethernet LAN-like service

An L2 NVE implements Ethernet LAN emulation (ELAN), an Ethernet
based multipoint service where the Tenant Systems appear to be
interconnected by a LAN environment over a set of L3 tunnels. It
provides a per tenant virtual switching instance with MAC addressing
isolation and L3 tunnel encapsulation across the core.

2.3.2. L3 NVE providing IP/VRF-like service

Virtualized IP routing and forwarding is similar, from a service
definition perspective, to IETF IP VPNs (e.g., BGP/MPLS IPVPN and
IPsec VPNs). It provides a per tenant routing instance with
addressing isolation and L3 tunnel encapsulation across the core.
skipping to change at page 13, line 13

       |            |                  |           |
       | +-------+-------+ |           | +-------+-------+ |
       | ||VNI|  ...  |VNI|| |          | ||VNI| ... |VNI|| |
  NVE1 | +-+-----------+-+ |           | +-+-----------+-+ | NVE2
       |   |   VAPs    |   |           |   |   VAPs    |   |
       +----+-----------+----+         +----+-----------+----+
            |           |                   |           |
       -----+-----------+-------------------+-----------+-----
            |           |      Tenant       |           |
            |           |    Service IF     |           |
            Tenant Systems                  Tenant Systems

        Figure 5 : Generic reference model for NV Edge
3.1.1. Virtual Access Points (VAPs)

Tenant Systems are connected to the VNI through Virtual Access
Points (VAPs).

The VAPs can be physical ports or virtual ports identified through
logical interface identifiers (VLANs, internal VSwitch Interface ID
leading to a VM).

3.1.2. Virtual Network Instance (VNI)

The VNI represents a set of configuration attributes defining access
and tunnel policies and (L2 and/or L3) forwarding functions.

Per tenant FIB tables and control plane protocol instances are used
to maintain separate private contexts between tenants. Hence tenants
are free to use their own addressing schemes without concerns about
address overlapping with other tenants.
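
As a non-normative illustration of this isolation, the following
minimal sketch (in Python) shows forwarding state keyed per VNI so
that overlapping tenant addresses never collide. All identifiers,
port names and addresses are invented for illustration; this is not
a prescribed data model.

   # Non-normative sketch: per-VNI forwarding state on an NVE.
   # Each VNI owns its own FIB, so identical tenant addresses used
   # by different tenants never collide.

   class VNI:
       def __init__(self, vn_context):
           self.vn_context = vn_context  # value carried in the overlay header
           self.vaps = set()             # local attachment points (VAPs)
           self.fib = {}                 # tenant address -> local VAP or remote NVE

       def add_vap(self, vap_id):
           self.vaps.add(vap_id)

       def learn(self, tenant_addr, next_hop):
           self.fib[tenant_addr] = next_hop

       def lookup(self, tenant_addr):
           return self.fib.get(tenant_addr)

   class NVE:
       def __init__(self):
           self.vni_by_vap = {}          # VAP id -> VNI (ingress classification)
           self.vni_by_context = {}      # VN context -> VNI (egress classification)

       def add_vni(self, vni):
           self.vni_by_context[vni.vn_context] = vni

       def bind_vap(self, vap_id, vni):
           vni.add_vap(vap_id)
           self.vni_by_vap[vap_id] = vni

   # Two tenants reusing the same address in separate VNIs:
   nve = NVE()
   blue, red = VNI(vn_context=1001), VNI(vn_context=1002)
   nve.add_vni(blue)
   nve.add_vni(red)
   nve.bind_vap(("eth1", 10), blue)        # e.g. physical port + VLAN
   nve.bind_vap(("vswitch0", "vm7"), red)  # e.g. internal vSwitch interface
   blue.learn("10.0.0.1", ("eth1", 10))
   red.learn("10.0.0.1", "192.0.2.2")      # reachable via a remote NVE
   assert blue.lookup("10.0.0.1") != red.lookup("10.0.0.1")

The point of the sketch is simply that lookups are always scoped by
the VNI selected at the VAP, never by the tenant address alone.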
skipping to change at page 15, line 14

. Auto-provisioning/Service discovery

. Address advertisement and tunnel mapping

. Tunnel management

A control plane component can be an on-net control protocol or a
management control entity.

3.1.5.1. Distributed vs Centralized Control Plane
A control/management plane entity can be centralized or distributed.
Both approaches have been used extensively in the past. The routing
model of the Internet is a good example of a distributed approach.
Transport networks have usually used a centralized approach to
manage transport paths.

It is also possible to combine the two approaches, i.e. to use a
hybrid model. A global view of network state can have many benefits
but it does not preclude the use of distributed protocols within the
network. Centralized controllers provide a facility to maintain
global state and to distribute that state to the network, which in
combination with distributed protocols can aid in achieving greater
network efficiencies and improve reliability and robustness. Domain
and/or deployment specific constraints define the balance between
centralized and distributed approaches.

On one hand, a control plane module can reside in every NVE. This is
how routing control plane modules are implemented in routers. On the
other hand, an external controller can manage a group of NVEs via an
agent sitting in each NVE. This is how an SDN controller could
communicate with the nodes it controls, via OpenFlow for instance.

In the case where a centralized control plane is preferred, the
controller will need to be distributed to more than one node for
redundancy. Depending upon the size of the DC domain, and hence the
number of NVEs to manage, it should be possible to use several
external controllers. Inter-controller communication will thus be
necessary for scalability and redundancy.
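
As a purely illustrative, non-normative sketch (Python; the
controller names and the liveness check are placeholders, and no
particular controller protocol is implied), the following shows an
NVE-side agent that prefers one controller but fails over to a
redundant peer, in the spirit of the redundancy discussed above.

   class ControllerAgent:
       """Illustrative NVE-side agent for redundant external controllers."""

       def __init__(self, controllers):
           self.controllers = list(controllers)  # ordered by preference
           self.active = None

       def elect(self, reachable):
           # 'reachable' is a callable standing in for a real liveness check.
           for ctrl in self.controllers:
               if reachable(ctrl):
                   self.active = ctrl
                   return ctrl
           raise RuntimeError("no controller reachable")

       def report(self, state, reachable):
           # Re-elect a controller if the active one went away.
           if self.active is None or not reachable(self.active):
               self.elect(reachable)
           return (self.active, state)

   agent = ControllerAgent(["ctrl-a.example.net", "ctrl-b.example.net"])
   is_up = lambda c: c == "ctrl-b.example.net"   # pretend ctrl-a is down
   print(agent.report({"vni": 1001, "status": "up"}, is_up))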
3.1.5.2. Auto-provisioning/Service discovery

NVEs must be able to select the appropriate VNI for each Tenant
System. This is based on state information that is often provided by
external entities. For example, in a VM environment, this
information is provided by compute management systems, since these
are the only entities that have visibility on which VM belongs to
which tenant.

A mechanism for communicating this information between Tenant
Systems and the local NVE is required. As a result, the VAPs are
created and mapped to the appropriate VNI.

Depending upon the implementation, this control interface can be
implemented using an auto-discovery protocol between Tenant Systems
and their local NVE or through management entities.
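
As a non-normative sketch only (the event format and field names are
invented, and the NVE/VNI objects are reused from the sketch in
Section 3.1.2), the following shows the kind of processing an NVE
could perform when a compute management system reports that a VM has
been attached: a VAP is created and bound to the tenant's VNI.

   def handle_vm_attach(event, nve, vni_table):
       """event: hypothetical notification from a compute manager, e.g.
          {"vm": "vm7", "tenant": "red", "vn": "red-net-1",
           "vswitch_port": ("vswitch0", "vm7")}
          vni_table: (tenant, vn) -> VNI, as in the Section 3.1.2 sketch."""
       vni = vni_table.get((event["tenant"], event["vn"]))
       if vni is None:
           raise KeyError("unknown tenant VN; refusing to create a VAP")
       vap_id = event["vswitch_port"]
       nve.bind_vap(vap_id, vni)   # the VAP now maps to the appropriate VNI
       return vap_id

   # Continuing the Section 3.1.2 sketch:
   #   vni_table = {("red", "red-net-1"): red}
   #   handle_vm_attach({...}, nve, vni_table)

A real implementation would also need to authenticate the source of
such notifications, as discussed next.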
When a protocol is used, appropriate security and authentication
mechanisms to verify that Tenant System information is not spoofed
or altered are required. This is one critical aspect for providing
integrity and tenant isolation in the system.

Another control plane protocol can also be used to advertise
supported VNs to other NVEs. Alternatively, management control
entities can also be used to perform these functions.

3.1.5.3. Address advertisement and tunnel mapping
As traffic reaches an ingress NVE, a lookup is performed to
determine which tunnel the packet needs to be sent to. It is then
encapsulated with a tunnel header containing the destination address
of the egress overlay node. Intermediate nodes (between the ingress
and egress NVEs) switch or route traffic based upon the outer
destination address.
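
The following non-normative sketch (Python, with generic field names;
no specific encapsulation format is implied) illustrates this ingress
processing: the tenant destination is mapped to an egress NVE and a
tunnel header carrying the VN context is prepended.

   from collections import namedtuple

   # Generic, illustrative tunnel header: outer addresses plus VN context.
   TunnelHeader = namedtuple("TunnelHeader", "outer_src outer_dst vn_context")

   def encapsulate(local_nve_ip, vn_context, tunnel_map, tenant_dst, inner_frame):
       """tunnel_map: (vn_context, tenant address) -> egress NVE address."""
       egress_nve = tunnel_map.get((vn_context, tenant_dst))
       if egress_nve is None:
           return None   # unknown destination: flood or drop, per policy
       header = TunnelHeader(outer_src=local_nve_ip, outer_dst=egress_nve,
                             vn_context=vn_context)
       return (header, inner_frame)

   # Example with made-up addresses:
   tunnel_map = {(1001, "00:aa:bb:cc:dd:ee"): "192.0.2.2"}
   print(encapsulate("192.0.2.1", 1001, tunnel_map,
                     "00:aa:bb:cc:dd:ee", b"...tenant frame..."))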
One key step in this process consists of mapping a final destination
address to the proper tunnel. NVEs are responsible for maintaining

skipping to change at page 17, line 5
When a control plane protocol is used to distribute address
advertisement and tunneling information, the auto-
provisioning/Service discovery could be accomplished by the same
protocol. In this scenario, the auto-provisioning/Service discovery
could be combined with (be inferred from) the address advertisement
and tunnel mapping. Furthermore, a control plane protocol that
carries both MAC and IP addresses eliminates the need for ARP, and
hence addresses one of the issues with explosive ARP handling.

3.1.5.4. Tunnel management
A control plane protocol may be required to exchange tunnel state
information. This may include setting up tunnels and/or providing
tunnel state information.

This applies to both unicast and multicast tunnels.

For instance, it may be necessary to provide active/standby status
information between NVEs, up/down status information,
pruning/grafting information for multicast tunnels, etc.

3.2. Multi-homing
Multi-homing techniques can be used to increase the reliability of
an NVO3 network. It is also important to ensure that physical
diversity in an NVO3 network is taken into account to avoid single
points of failure.

Multi-homing can be enabled in various nodes, from Tenant Systems
into ToRs, ToRs into core switches/routers, and core nodes into DC
GWs.

The NVO3 underlay nodes (i.e. from NVEs to DC GWs) rely on IP
routing and/or ECMP techniques as the means to re-route traffic upon
failures.

Tenant Systems can either be L2 or L3 nodes. In the former case
(L2), techniques such as LAG or STP for instance can be used. In the
latter case (L3), it is possible that no dynamic routing protocol is
enabled. Tenant Systems can be multi-homed into a remote NVE using
several interfaces (physical NICs or vNICs) with an IP address per
interface, either to the same NVO3 network or into different NVO3
networks. When one of the links fails, the corresponding IP address
is not reachable but the other interfaces can still be used. When a
Tenant System is co-located with an NVE, IP routing can be relied
upon to handle routing over diverse links to ToRs.

External connectivity is handled by two or more NVO3 gateways. Each
gateway is connected to a different domain (e.g. ISP) and runs BGP
multi-homing. They serve as an access point to external networks
such as VPNs or the Internet. When a connection to an upstream
router is lost, the alternative connection is used and the failed
route withdrawn.

3.3. Service Overlay Topologies
A number of service topologies may be used to optimize the service
connectivity and to address NVE performance limitations.

The topology described in Figure 3 suggests the use of a tunnel mesh
between the NVEs where each tenant instance is one hop away from a
service processing perspective. Partial mesh topologies and an NVE
hierarchy may be used where certain NVEs may act as service transit
points.

skipping to change at page 18, line 41
in the core network.

o Tunnels are used to aggregate traffic and hence offer the
  advantage of minimizing the amount of forwarding state required
  within the underlay network

o Decoupling of the overlay addresses (MAC and IP) used by VMs
  from the underlay network. This offers a clear separation
  between addresses used within the overlay and the underlay
  networks and it enables the use of overlapping address spaces
  by Tenant Systems

o Support of a large number of virtual network identifiers

Overlay networks also create several challenges:

o Overlay networks have no control over underlay networks and
  lack critical network information

  o Overlays typically probe the network to measure link
    properties, such as available bandwidth or packet loss
    rate. It is difficult to accurately evaluate network

skipping to change at page 19, line 49
Dynamic data plane learning implies that flooding of unknown
destinations be supported and hence implies that broadcast and/or
multicast be supported. Multicasting in the core network for dynamic
learning may lead to significant scalability limitations. Specific
forwarding rules must be enforced to prevent loops from happening.
This can be achieved using a spanning tree, a shortest path tree, or
a split-horizon mesh.

It should be noted that the amount of state to be distributed is
dependent upon network topology and the number of virtual machines.
Different forms of caching can also be utilized to minimize state
distribution between the various elements. The control plane should
not require an NVE to maintain the locations of all the tenant
systems whose VNs are not present on the NVE.
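
As a non-normative illustration of bounding that state (reusing the
NVE/VNI objects sketched in Section 3.1.2), an egress NVE performing
data plane learning could associate the inner source address with
the ingress NVE seen in the outer header, but only for VNs actually
instantiated locally:

   def learn_from_decap(nve, outer_src_nve, vn_context, inner_src_addr):
       """Data plane learning at decapsulation time, kept per local VNI."""
       vni = nve.vni_by_context.get(vn_context)
       if vni is None:
           return False             # VN not present here: keep no state
       vni.learn(inner_src_addr, outer_src_nve)
       return True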
4.2.2. Coordination between data plane and control plane

For an L2 NVE, the NVE needs to be able to determine MAC addresses
of the end systems present on a VAP. This can be achieved via
dataplane learning or a control plane. For an L3 NVE, the NVE needs
to be able to determine IP addresses of the end systems present on a
VAP.

In both cases, coordination with the NVE control protocol is needed
such that when the NVE determines that the set of addresses behind a
VAP has changed, it triggers the local NVE control plane to
distribute this information to its peers.
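
A minimal, non-normative sketch of that coordination follows
(Python; the advertise() callback stands in for whatever control
plane protocol or management channel is used, and the VNI object is
from the Section 3.1.2 sketch):

   def refresh_vap_addresses(vni, vap_id, observed_addrs, advertised, advertise):
       """advertised: dict vap_id -> set of addresses already advertised."""
       previous = advertised.get(vap_id, set())
       observed = set(observed_addrs)
       if observed != previous:
           advertised[vap_id] = observed
           for addr in observed - previous:
               vni.learn(addr, vap_id)                   # update the local FIB first
           advertise(vni.vn_context, vap_id, observed)   # then notify peers
       return advertised[vap_id]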
4.2.3. Handling Broadcast, Unknown Unicast and Multicast (BUM) traffic

There are two techniques to support packet replication needed for
broadcast, unknown unicast and multicast:

skipping to change at page 21, line 15
4.2.4. Path MTU

When using overlay tunneling, an outer header is added to the
original frame. This can cause the MTU of the path to the egress
tunnel endpoint to be exceeded.

In this section, we will only consider the case of an IP overlay.

It is usually not desirable to rely on IP fragmentation for
performance reasons. Ideally, the interface MTU as seen by a Tenant
System is adjusted such that no fragmentation is needed. TCP will
adjust its maximum segment size accordingly.
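
As a simple, non-normative illustration of that adjustment, the
arithmetic below derives the MTU to expose to Tenant Systems from
the underlay MTU; the per-header overhead values are assumptions for
one possible IP/UDP based encapsulation, not values defined by this
framework.

   def tenant_mtu(underlay_mtu, outer_ip=20, outer_udp=8, overlay_hdr=8):
       """Largest tenant frame that fits after adding the assumed overhead."""
       overhead = outer_ip + outer_udp + overlay_hdr
       if underlay_mtu <= overhead:
           raise ValueError("underlay MTU too small for this encapsulation")
       return underlay_mtu - overhead

   print(tenant_mtu(1500))   # 1464 with the assumed 36-byte overhead
   print(tenant_mtu(9000))   # a jumbo-frame underlay leaves ample headroom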
It is possible for the MTU to be configured manually or to be
discovered dynamically. Various Path MTU discovery techniques exist
in order to determine the proper MTU size to use:

o Classical ICMP-based MTU Path Discovery [RFC1191] [RFC1981]

  Tenant Systems rely on ICMP messages to discover the MTU of
  the end-to-end path to their destination. This method is not
  always possible, such as when traversing middle boxes
  (e.g. firewalls) which disable ICMP for security reasons

o Extended MTU Path Discovery techniques such as defined in
  [RFC4821]
It is also possible to rely on the overlay layer to perform
segmentation and reassembly operations without relying on the Tenant
Systems to know about the end-to-end MTU. The assumption is that
some hardware assist is available on the NVE node to perform such
SAR operations. However, fragmentation by the overlay layer can lead
to performance and congestion issues due to TCP dynamics and might
require new congestion avoidance mechanisms from the underlay
network [FLOYD].

Finally, the underlay network may be designed in such a way that the
MTU can accommodate the extra tunnel overhead.
4.2.5. NVE location trade-offs

skipping to change at page 23, line 7

Better visibility between overlays and underlays can be achieved by
providing mechanisms to exchange information about:

o Performance metrics (throughput, delay, loss, jitter)

o Cost metrics
5. Security Considerations

As a framework document, no protocols are being defined and hence no
specific security considerations are raised.

The following security aspects shall be discussed in respective
solutions documents:

Traffic isolation between NVO3 domains is guaranteed by the use of
per tenant FIB tables (VNIs).

The creation of overlay networks and the tenant to overlay mapping
function can introduce significant security risks. When dynamic
protocols are used, authentication should be supported. When a
centralized controller is used, access to that controller should be
restricted to authorized personnel. This can be achieved via login
authentication.
6. IANA Considerations

IANA does not need to take any action for this draft.

7. References

7.1. Normative References

[RFC2119] Bradner, S., "Key words for use in RFCs to Indicate

skipping to change at page 24, line 20

[RFC4821] Mathis, M. et al, "Packetization Layer Path MTU
          Discovery", RFC4821, March 2007
8. Acknowledgments

In addition to the authors the following people have contributed to
this document:

Dimitrios Stiliadis, Rotem Salomonovitch, Alcatel-Lucent

Lucy Yong, Huawei

This document was prepared using 2-Word-v2.0.template.dot.

Authors' Addresses

Marc Lasserre
Alcatel-Lucent
Email: marc.lasserre@alcatel-lucent.com

Florin Balus
Alcatel-Lucent