draft-ietf-nvo3-mcast-framework-05.txt

NVO3 working group                                        A. Ghanwani
Internet Draft                                                   Dell
Intended status: Informational                              L. Dunbar
Expires: November 8, 2016                                  M. McBride
                                                               Huawei
                                                            V. Bannai
                                                               Google
                                                          R. Krishnan
                                                                 Dell
                                                          May 9, 2016

      A Framework for Multicast in Network Virtualization Overlays
                 draft-ietf-nvo3-mcast-framework-05
Status of this Memo

This Internet-Draft is submitted in full conformance with the
provisions of BCP 78 and BCP 79.

This Internet-Draft is submitted in full conformance with the
provisions of BCP 78 and BCP 79. This document may not be modified,
and derivative works of it may not be created, except to publish it
as an RFC and to translate it into languages other than English.
skipping to change at page 1, line 43
months and may be updated, replaced, or obsoleted by other documents
at any time. It is inappropriate to use Internet-Drafts as
reference material or to cite them other than as "work in progress."

The list of current Internet-Drafts can be accessed at
http://www.ietf.org/ietf/1id-abstracts.txt

The list of Internet-Draft Shadow Directories can be accessed at
http://www.ietf.org/shadow.html

This Internet-Draft will expire on November 8, 2016.
Copyright Notice

Copyright (c) 2016 IETF Trust and the persons identified as the
document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(http://trustee.ietf.org/license-info) in effect on the date of
publication of this document. Please review these documents
carefully, as they describe your rights and restrictions with
respect to this document. Code Components extracted from this
document must include Simplified BSD License text as described in
Section 4.e of the Trust Legal Provisions and are provided without
warranty as described in the Simplified BSD License.
Abstract

This document discusses a framework for supporting multicast traffic
in a network that uses Network Virtualization Overlays (NVO3). Both
infrastructure multicast and application-specific multicast are
discussed. It describes the various mechanisms that can be used for
delivering such traffic as well as the data plane and control plane
considerations for each of the mechanisms.
Table of Contents

1. Introduction...................................................3
   1.1. Infrastructure multicast..................................3
   1.2. Application-specific multicast............................4
   1.3. Terminology clarification.................................4
2. Acronyms.......................................................4
3. Multicast mechanisms in networks that use NVO3.................5
   3.1. No multicast support......................................5
   3.2. Replication at the source NVE.............................6
   3.3. Replication at a multicast service node...................8
   3.4. IP multicast in the underlay..............................9
   3.5. Other schemes............................................11
4. Simultaneous use of more than one mechanism...................11
5. Other issues..................................................11
   5.1. Multicast-agnostic NVEs..................................11
   5.2. Multicast membership management for DC with VMs..........12
6. Summary.......................................................12
7. Security Considerations.......................................13
8. IANA Considerations...........................................13
9. References....................................................13
   9.1. Normative References.....................................13
   9.2. Informative References...................................13
10. Acknowledgments..............................................15
1. Introduction

Network virtualization using Overlays over Layer 3 (NVO3) is a
technology that is used to address issues that arise in building
large, multitenant data centers that make extensive use of server
virtualization [RFC7364].

This document provides a framework for supporting multicast traffic
in a network that uses Network Virtualization using Overlays over
Layer 3 (NVO3). Both infrastructure multicast and
application-specific multicast are considered. It describes the
various mechanisms that can be used for delivering such traffic in
networks that use NVO3, and the considerations that apply to each.

The reader is assumed to be familiar with the terminology as defined
in the NVO3 Framework document [RFC7365] and NVO3 Architecture
document [NVO3-ARCH].
1.1. Infrastructure multicast

Infrastructure multicast includes protocols such as Address
Resolution Protocol (ARP), Neighbor Discovery (ND), Dynamic Host
Configuration Protocol (DHCP), multicast Domain Name System (mDNS),
etc. It is possible to provide solutions for these that do not
involve multicast in the underlay network. In the case of ARP/ND, a
network virtualization authority (NVA) can be used for distributing
the mappings of IP address to MAC address to all network
virtualization edges (NVEs). The NVEs can then trap ARP Request/ND
Neighbor Solicitation messages from the tenant systems (TSs) that
are attached to them and respond to them, thereby eliminating the
need for broadcast/multicast of such messages. In the case of DHCP,
the NVE can be configured to forward these messages using a helper
function.

Of course it is possible to support all of these infrastructure
multicast protocols natively if the underlay provides multicast
transport. However, even in the presence of multicast transport, it
may be beneficial to use the optimizations mentioned above to reduce
the amount of such traffic in the network.
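The ARP/ND trapping optimization described above can be sketched as
follows. This is an illustrative model only, not code from any NVO3
specification; the class and mapping-table names (NveArpProxy,
nva_mappings) are hypothetical.

```python
# Sketch of the ARP-trapping optimization: the NVA distributes
# IP-to-MAC mappings to every NVE, and the NVE answers ARP Requests
# from its attached TSs locally instead of flooding them into the
# underlay. All names here are illustrative.

class NveArpProxy:
    def __init__(self, nva_mappings):
        # nva_mappings: {(vn_id, tenant_ip): tenant_mac}, pushed by the NVA
        self.mappings = dict(nva_mappings)

    def handle_arp_request(self, vn_id, target_ip):
        """Return the MAC for a local ARP Reply, or None if unknown."""
        return self.mappings.get((vn_id, target_ip))

proxy = NveArpProxy({(100, "10.0.0.5"): "00:aa:bb:cc:dd:05"})
# Answered locally; no broadcast/multicast enters the underlay.
print(proxy.handle_arp_request(100, "10.0.0.5"))  # 00:aa:bb:cc:dd:05
```

A miss (the NVA has no mapping) returns None; what the NVE does then,
e.g. querying the NVA, is deployment-specific.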
skipping to change at page 4, line 43
MSN: Multicast Service Node

NVA: Network Virtualization Authority

NVE: Network Virtualization Edge

NVGRE: Network Virtualization using GRE

SSM: Source-Specific Multicast

TS: Tenant System

VM: Virtual Machine

VN: Virtual Network

VXLAN: Virtual eXtensible LAN
3. Multicast mechanisms in networks that use NVO3

In NVO3 environments, traffic between NVEs is transported using an
encapsulation such as Virtual eXtensible Local Area Network (VXLAN)
[RFC7348, VXLAN-GPE], Network Virtualization Using Generic Routing
Encapsulation (NVGRE) [RFC7637], Geneve [Geneve], Generic UDP
Encapsulation (GUE) [GUE], etc.
Besides the need to support ARP and ND, there are several
applications that require the support of multicast and/or broadcast
in data centers [DC-MC]. With NVO3, there are many possible ways
that multicast may be handled in such networks. We discuss some of
the attributes of the following four methods:

1. No multicast support.

2. Replication at the source NVE.

3. Replication at a multicast service node.

4. IP multicast in the underlay.
These methods are briefly mentioned in the NVO3 Framework [RFC7365]
and NVO3 architecture [NVO3-ARCH] documents. This document provides
more details about the basic mechanisms underlying each of these
methods and discusses the issues and tradeoffs of each.

We note that other methods are also possible, such as [EDGE-REP],
but we focus on the above four because they are the most common.
3.1. No multicast support

In this scenario, there is no support whatsoever for multicast
traffic when using the overlay. This method can only work if the
following conditions are met:

1. All of the application traffic in the network is unicast
traffic and the only multicast/broadcast traffic is from ARP/ND
protocols.

2. An NVA is used by the NVEs to determine the mapping of a given
Tenant System's (TS's) MAC/IP address to its NVE. In other words,
there is no data plane learning. Address resolution requests via
ARP/ND that are issued by the TSs must be resolved by the NVE that
they are attached to.

With this approach, it is not possible to support
application-specific multicast. However, certain
multicast/broadcast applications such as DHCP can be supported by
use of a helper function in the NVE.

The main drawback of this approach, even for unicast traffic, is
that it is not possible to initiate communication with a TS for
which a mapping to an NVE does not already exist with the NVA. This
is a problem in the case where the NVE is implemented in a physical
switch and the TS is a physical end station that has not registered
with the NVA.
3.2. Replication at the source NVE

With this method, the overlay attempts to provide a multicast
service without requiring any specific support from the underlay,
other than that of a unicast service. A multicast or broadcast
transmission is achieved by replicating the packet at the source
NVE, and making copies, one for each destination NVE that the
multicast packet must be sent to.

For this mechanism to work, the source NVE must know, a priori, the
IP addresses of all destination NVEs that need to receive the
packet. For the purpose of ARP/ND, this would involve knowing the
IP addresses of all the NVEs that have TSs in the virtual network
(VN) of the TS that generated the request. For the support of
application-specific multicast traffic, a method similar to that of
receiver-sites registration for a particular multicast group
described in [LISP-Signal-Free] can be used. The registrations from
different receiver-sites can be merged at the NVA, which can
construct a multicast replication-list inclusive of all NVEs to
which receivers for a particular multicast group are attached. The
replication-list for each specific multicast group is maintained by
the NVA.
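The merging of receiver-site registrations at the NVA can be
sketched as below. This is a minimal illustrative model in the
spirit of the [LISP-Signal-Free] flow described above; the class and
method names are hypothetical, not from any specification.

```python
# Sketch: the NVA merges receiver-site registrations from NVEs into a
# per-(VN, group) replication list. Names are illustrative only.

from collections import defaultdict

class NvaMulticastState:
    def __init__(self):
        # (vn_id, group) -> set of NVE IP addresses with receivers
        self.replication_lists = defaultdict(set)

    def register_receiver_site(self, vn_id, group, nve_ip):
        self.replication_lists[(vn_id, group)].add(nve_ip)

    def withdraw_receiver_site(self, vn_id, group, nve_ip):
        self.replication_lists[(vn_id, group)].discard(nve_ip)

    def replication_list(self, vn_id, group):
        """All NVEs with at least one receiver for this group."""
        return sorted(self.replication_lists[(vn_id, group)])

nva = NvaMulticastState()
nva.register_receiver_site(100, "239.1.1.1", "192.0.2.1")
nva.register_receiver_site(100, "239.1.1.1", "192.0.2.2")
nva.register_receiver_site(100, "239.1.1.1", "192.0.2.2")  # merged
print(nva.replication_list(100, "239.1.1.1"))  # ['192.0.2.1', '192.0.2.2']
```

Duplicate registrations from the same NVE collapse naturally, so the
replication list holds one entry per egress NVE regardless of how
many receivers sit behind it.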
The receiver-sites registration is achieved by egress NVEs
performing IGMP/MLD snooping to maintain state for which attached
TSs have subscribed to a given IP multicast group. When the members
of a multicast group are outside the NVO3 domain, it is necessary
for NVO3 gateways to keep track of the remote members of each
multicast group. The NVEs and NVO3 gateways then communicate the
multicast groups that are of interest to the NVA. If the membership
is not communicated to the NVA, and if it is necessary to prevent
hosts attached to an NVE that have not subscribed to a multicast
group from receiving the multicast traffic, the NVE would need to
maintain multicast group membership information.

In the absence of IGMP/MLD snooping, the traffic would be delivered
to all TSs that are part of the VN.
In multi-homing environments, i.e., in those where a TS is attached
to more than one NVE, the NVA would be expected to provide
information to all of the NVEs under its control about all of the
NVEs to which such a TS is attached. The ingress NVE can choose any
one of the egress NVEs for the data frames destined towards the TS.
This method requires multiple copies of the same packet to all NVEs
that participate in the VN. If, for example, a tenant subnet is
spread across 50 NVEs, the packet would have to be replicated 50
times at the source NVE. This also creates an issue with the
forwarding performance of the NVE.
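The per-packet cost of source replication can be made concrete with
a short sketch. The encapsulation dictionary here is a stand-in for
a real NVO3 tunnel header, not any particular encapsulation format.

```python
# Sketch of ingress (source-NVE) replication: one unicast-encapsulated
# copy of the packet per destination NVE on the replication list. The
# header fields are illustrative stand-ins.

def replicate_at_source(payload, vn_id, source_nve_ip, replication_list):
    """Return one unicast-encapsulated copy per destination NVE."""
    return [
        {"outer_src": source_nve_ip, "outer_dst": dst_nve,
         "vn_id": vn_id, "payload": payload}
        for dst_nve in replication_list
        if dst_nve != source_nve_ip  # don't send a copy back to ourselves
    ]

copies = replicate_at_source(b"mcast-frame", 100, "192.0.2.1",
                             ["192.0.2.1", "192.0.2.2", "192.0.2.3"])
print(len(copies))  # 2
```

The work grows linearly with the number of NVEs in the replication
list, which is exactly the forwarding-performance concern raised in
the text: a subnet spread across 50 NVEs means 49 copies per packet
at the ingress.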
Note that this method is similar to what was used in Virtual Private
LAN Service (VPLS) [RFC4762] prior to support of Multi-Protocol
Label Switching (MPLS) multicast [RFC7117]. While there are some
similarities between MPLS Virtual Private Network (VPN) and NVO3,
there are some key differences:

- The Customer Edge (CE) to Provider Edge (PE) attachment in VPNs
is somewhat static, whereas in a DC that allows VMs to migrate
anywhere, the TS attachment to NVE is much more dynamic.

- The number of PEs to which a single VPN customer is attached in
an MPLS VPN environment is normally far less than the number of
NVEs to which a VN's VMs are attached in a DC.

When a VPN customer has multiple multicast groups, [RFC6513]
"Multicast VPN" combines all those multicast groups within each
VPN client into one single multicast group in the MPLS (or VPN)
core. The result is that messages from any of the multicast
groups belonging to one VPN customer will reach all the PE nodes
of the client. In other words, any messages belonging to any
multicast groups under customer X will reach all PEs of customer
X. When customer X is attached to only a handful of PEs, the use
of this approach does not result in excessive wastage of
bandwidth in the provider's network.
In a DC environment, a typical server/hypervisor-based virtual
switch may only support tens of VMs (as of this writing). A subnet
with N VMs may be, in the worst case, spread across N vSwitches.
Using the "MPLS VPN multicast" approach in such a scenario would
require the creation of a multicast group in the core for this VN
to reach all N NVEs. If only a small percentage of this client's
VMs participate in application-specific multicast, a great number
of NVEs will receive multicast traffic that is not forwarded to any
of their attached VMs, resulting in considerable wastage of
skipping to change at page 8, line 33
3.3. Replication at a multicast service node

With this method, all multicast packets would be sent using a
unicast tunnel encapsulation from the ingress NVE to a multicast
service node (MSN). The MSN, in turn, would create multiple copies
of the packet and would deliver a copy, using a unicast tunnel
encapsulation, to each of the NVEs that are part of the multicast
group for which the packet is intended.

This mechanism is similar to that used by the Asynchronous Transfer
Mode (ATM) Forum's LAN Emulation (LANE) specification [LANE].
The following are the possible ways for the MSN to get the
membership information for each multicast group:

- The MSN can obtain this information by snooping the IGMP/MLD
messages from the TSs and/or sending query messages to the TSs.
In order for the MSN to snoop the IGMP/MLD messages between TSs
and their corresponding routers, the NVEs that the TSs are
attached to have to encapsulate a special outer header, e.g. with
the outer destination being the multicast service node. See
Section 3.3.2 for details.

- The MSN can obtain the membership information from the NVEs that
snoop the IGMP/MLD messages. This can be done by having the MSN
communicate with the NVEs, or by having the NVA obtain the
information from the NVEs, and in turn have the MSN communicate
with the NVA.
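The MSN's fan-out step can be sketched as follows, assuming the
membership table has already been populated by one of the two
options above. All names and the header layout are illustrative.

```python
# Sketch of MSN fan-out: the ingress NVE unicasts the packet to the
# MSN, which re-encapsulates one unicast copy per member NVE. The
# membership table is assumed to be learned via snooping or the NVA.

def msn_forward(packet, membership):
    """membership: {(vn_id, group): [member NVE IPs]}."""
    key = (packet["vn_id"], packet["group"])
    return [
        {"outer_dst": nve_ip, "vn_id": packet["vn_id"],
         "payload": packet["payload"]}
        for nve_ip in membership.get(key, [])
        if nve_ip != packet["ingress_nve"]  # skip the sender's own NVE
    ]

membership = {(100, "239.1.1.1"): ["192.0.2.1", "192.0.2.2", "192.0.2.3"]}
pkt = {"vn_id": 100, "group": "239.1.1.1",
       "ingress_nve": "192.0.2.1", "payload": b"data"}
print(len(msn_forward(pkt, membership)))  # 2
```

The replication load thus moves from the ingress NVE to the MSN,
which matches the trade-off the text draws against Section 3.2.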
Unlike the method described in Section 3.2, there is no performance
impact at the ingress NVE, nor are there any issues with multiple
copies of the same packet from the source NVE to the multicast

skipping to change at page 9, line 37
3.4. IP multicast in the underlay

In this method, the underlay supports IP multicast and the ingress
NVE encapsulates the packet with the appropriate IP multicast
address in the tunnel encapsulation header for delivery to the
desired set of NVEs. The protocol in the underlay could be any
variant of Protocol Independent Multicast (PIM), or
protocol-dependent multicast, such as [ISIS-Multicast].

If an NVE connects to its attached TSs via a Layer 2 network, there
are multiple ways for NVEs to support the application-specific
multicast:

- The NVE only supports the basic IGMP/MLD snooping function,
letting the TSs' routers handle the application-specific
multicast. This scheme doesn't utilize the underlay IP multicast
protocols.

- The NVE can act as a pseudo multicast router for the directly
attached VMs and support proper mapping of IGMP/MLD messages to
the messages needed by the underlay IP multicast protocols.
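The second option above can be sketched as a simple translation
step. The hash-based group mapping and the 239.192.0.0/24 pool are
purely illustrative assumptions; a real deployment would coordinate
the tenant-to-underlay group assignment, e.g. via the NVA.

```python
# Sketch of the "pseudo multicast router" option: the NVE maps a
# tenant (vn_id, group) pair onto an underlay multicast group and
# emits the corresponding underlay join. The mapping scheme here is
# a hypothetical stand-in, not a specified algorithm.

def underlay_group_for(vn_id, tenant_group,
                       pool_base="239.192.0.", pool_size=256):
    """Deterministically map a tenant group to an underlay group."""
    h = hash((vn_id, tenant_group)) % pool_size
    return pool_base + str(h)

def on_tenant_igmp_join(vn_id, tenant_group, underlay_joins):
    """Translate a snooped tenant IGMP join into an underlay join."""
    g = underlay_group_for(vn_id, tenant_group)
    underlay_joins.add(g)  # stands in for an IGMP/PIM join in the underlay
    return g

joins = set()
g = on_tenant_igmp_join(100, "239.1.1.1", joins)
# The NVE now receives this underlay group on behalf of its VMs.
```

Note that hashing many tenant groups into a small underlay pool
causes group sharing, with the over-delivery consequences discussed
later in this section.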
skipping to change at page 11, line 8
delivered to that NVE). It also introduces an additional network
management burden to optimize which tenants should be part of the
same tenant group (based on the NVEs they share), which somewhat
dilutes the value proposition of NVO3, which is to completely
decouple the overlay and physical network design, allowing complete
freedom of placement of VMs anywhere within the data center.

Multicast schemes such as BIER (Bit Indexed Explicit Replication)
[BIER-ARCH] may be able to provide optimizations by allowing the
underlay network to provide optimum multicast delivery without
requiring routers in the core of the network to maintain
per-multicast group state.
3.5. Other schemes 3.5. Other schemes
There are other mechanisms that attempt to combine some of the
advantages of the above methods by offering multiple replication
points, each with a limited degree of replication [EDGE-REP]. Such
schemes offer a trade-off between the amount of replication at an
intermediate node (router) and performing all of the replication at
the source NVE or at a multicast service node.
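A rough way to see this trade-off is to compare pure ingress
replication, where the source NVE emits one copy per remote NVE over
its own access link, against a scheme that caps the fan-out of each
replication point and therefore spreads the work over several
stages. The model below is an assumed toy calculation, not a formula
from [EDGE-REP]:

```python
def ingress_replication_copies(n_remote_nves):
    """Source-NVE replication: one unicast copy per remote NVE, all
    carried over the source NVE's access link."""
    return n_remote_nves

def replication_stages(n_remote_nves, max_degree):
    """With each replication point (edge router or service node)
    limited to max_degree copies, count the stages needed for the
    packet to reach all remote NVEs."""
    stages, reach = 0, 1
    while reach < n_remote_nves:
        reach *= max_degree   # each stage multiplies the fan-out
        stages += 1
    return stages

# With 100 remote NVEs, the source alone would send 100 copies,
# whereas capping fan-out at 10 needs only 2 replication stages,
# with no single node sending more than 10 copies.
print(ingress_replication_copies(100), replication_stages(100, 10))
```

The per-node load shrinks as the replication degree cap is lowered,
at the cost of more intermediate nodes holding replication state.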
skipping to change at page 13, line 30
9. References

9.1. Normative References

[RFC7365] Lasserre, M. et al., "Framework for data center (DC)
          network virtualization", October 2014.

[RFC7364] Narten, T. et al., "Problem statement: Overlays for
          network virtualization", October 2014.
[NVO3-ARCH] Narten, T. et al., "An Architecture for Overlay Networks
          (NVO3)", <draft-ietf-nvo3-arch-06>, work in progress,
          April 2016.
[RFC3376] Cain, B. et al., "Internet Group Management Protocol,
          Version 3", October 2002.

[RFC6513] Rosen, E. et al., "Multicast in MPLS/BGP IP VPNs",
          February 2012.
9.2. Informative References

[RFC7348] Mahalingam, M. et al., "Virtual eXtensible Local Area
          Network (VXLAN): A Framework for Overlaying Virtualized
          Layer 2 Networks over Layer 3 Networks", August 2014.
[RFC7637] Garg, P. and Wang, Y. (Eds.), "NVGRE: Network
          Virtualization using Generic Routing Encapsulation",
          September 2015.
[DC-MC]   McBride, M. and Lui, H., "Multicast in the data center
          overview," <draft-mcbride-armd-mcast-overview-02>, work in
          progress, July 2012.
[ISIS-Multicast]
          Yong, L. et al., "ISIS Protocol Extension for Building
          Distribution Trees",
          <draft-yong-isis-ext-4-distribution-tree-03>, work in
          progress, October 2014.
[RFC4762] Lasserre, M. and Kompella, V. (Eds.), "Virtual Private
          LAN Service (VPLS) using Label Distribution Protocol (LDP)
          signaling," January 2007.

[RFC7117] Aggarwal, R. et al., "Multicast in VPLS," February 2014.

[LANE]    "LAN emulation over ATM," The ATM Forum, af-lane-0021.000,
          January 1995.
[EDGE-REP]
          Marques, P. et al., "Edge multicast replication for BGP IP
          VPNs," <draft-marques-l3vpn-mcast-edge-01>, work in
          progress, June 2012.
[RFC3569] Bhattacharyya, S. (Ed.), "An Overview of Source-Specific
          Multicast (SSM)", July 2003.
[LISP-Signal-Free]
          Moreno, V. and Farinacci, D., "Signal-Free LISP
          Multicast", <draft-ietf-lisp-signal-free-multicast-01>,
          work in progress, April 2016.
[VXLAN-GPE]
Kreeger, L. and Elzur, U. (Eds.), "Generic Protocol
Extension for VXLAN", <draft-ietf-nvo3-vxlan-gpe-02>, work
in progress, April 2016.
[Geneve]  Gross, J. and Ganga, I. (Eds.), "Geneve: Generic Network
          Virtualization Encapsulation",
          <draft-ietf-nvo3-geneve-01>, work in progress,
          January 2016.
[GUE]     Herbert, T. et al., "Generic UDP Encapsulation",
          <draft-ietf-nvo3-gue-02>, work in progress, December 2015.
[BIER-ARCH]
          Wijnands, IJ. (Ed.) et al., "Multicast using Bit Index
          Explicit Replication," <draft-ietf-bier-architecture-03>,
          work in progress, January 2016.
10. Acknowledgments

Thanks are due to Dino Farinacci, Erik Nordmark, Lucy Yong, Nicolas
Bouliane, Saumya Dikshit, and Matthew Bocci for their comments and
suggestions.
This document was prepared using 2-Word-v2.0.template.dot.

Authors' Addresses

Anoop Ghanwani
Dell
Email: anoop@alumni.duke.edu

Linda Dunbar