NVO3 working group                                          A. Ghanwani
Internet Draft                                                     Dell
Intended status: Informational                                L. Dunbar
Expires: November 8, 2017                                    M. McBride
                                                                 Huawei
                                                              V. Bannai
                                                                 Google
                                                            R. Krishnan
                                                                   Dell
                                                       February 1, 2017
A Framework for Multicast in Network Virtualization Overlays
draft-ietf-nvo3-mcast-framework-06

Status of this Memo
This Internet-Draft is submitted in full conformance with the
provisions of BCP 78 and BCP 79.

This Internet-Draft is submitted in full conformance with the
provisions of BCP 78 and BCP 79. This document may not be modified,
and derivative works of it may not be created, except to publish it
as an RFC and to translate it into languages other than English.
considerations for each of the mechanisms.
Table of Contents

1. Introduction
   1.1. Infrastructure multicast
   1.2. Application-specific multicast
   1.3. Terminology clarification
2. Acronyms
3. Multicast mechanisms in networks that use NVO3
   3.1. No multicast support
   3.2. Replication at the source NVE
   3.3. Replication at a multicast service node
   3.4. IP multicast in the underlay
   3.5. Other schemes
4. Simultaneous use of more than one mechanism
5. Other issues
   5.1. Multicast-agnostic NVEs
   5.2. Multicast membership management for DC with VMs
6. Summary
7. Security Considerations
8. IANA Considerations
9. References
   9.1. Normative References
   9.2. Informative References
10. Acknowledgments
1. Introduction

Network virtualization using Overlays over Layer 3 (NVO3) is a
technology that is used to address issues that arise in building
large, multitenant data centers that make extensive use of server
virtualization [RFC7364].
This document provides a framework for supporting multicast traffic
in a network that uses Network Virtualization using Overlays over
Layer 3 (NVO3). Both infrastructure multicast and application-
specific multicast are considered. It describes the various
mechanisms that can be used for delivering such traffic in networks
that use NVO3, and the considerations that apply to each.
The reader is assumed to be familiar with the terminology as defined
in the NVO3 Framework document [RFC7365] and the NVO3 Architecture
document [NVO3-ARCH].
1.1. Infrastructure multicast

Infrastructure multicast is a capability needed by networking
services such as Address Resolution Protocol (ARP), Neighbor
Discovery (ND), Dynamic Host Configuration Protocol (DHCP), and
multicast Domain Name Server (mDNS). Sections 5 and 6 of [RFC3819]
describe some of these infrastructure multicast services in detail.
It is possible to provide solutions for these services that do not
involve multicast in the underlay network. In the case of ARP/ND, a
Network Virtualization Authority (NVA) can be used for distributing
the mappings of IP address to MAC address to all Network
Virtualization Edges (NVEs). An NVE can then trap ARP Request/ND
Neighbor Solicitation messages from the TSs that are attached to it
and respond to them, thereby eliminating the need for
broadcast/multicast of such messages. In the case of DHCP, the NVE
can be configured to forward these messages using a helper function.
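As a rough illustration only (not part of any NVO3 specification),
the following Python sketch shows how an NVE might answer ARP
Requests from an NVA-distributed table. The table and callback names
(nva_mappings, reply_fn, fallback_fn) are hypothetical.

      # Minimal sketch of NVA-assisted ARP suppression at an NVE.
      # "nva_mappings" stands in for the IP-to-MAC mappings that the
      # NVA is assumed to have distributed to this NVE.

      nva_mappings = {"10.0.0.5": "00:11:22:33:44:55"}  # pushed by NVA

      def handle_arp_request(target_ip, reply_fn, fallback_fn):
          """Trap an ARP Request from a local TS and answer locally."""
          mac = nva_mappings.get(target_ip)
          if mac is not None:
              reply_fn(target_ip, mac)   # local ARP Reply; no flooding
          else:
              fallback_fn(target_ip)     # e.g., query the NVA on demand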
Of course, it is possible to support all of these infrastructure
multicast protocols natively if the underlay provides multicast
transport. However, even in the presence of multicast transport, it
may be beneficial to use the optimizations mentioned above to reduce
the amount of such traffic in the network.
1.2. Application-specific multicast

Application-specific multicast traffic is originated and consumed by
user applications. Application-specific multicast, which can be
either Source-Specific Multicast (SSM) or Any-Source Multicast (ASM)
[RFC3569], has the following characteristics:
1. Receiver hosts are expected to subscribe to multicast content
   using protocols such as IGMP [RFC3376] (IPv4) or MLD (IPv6).
   Multicast sources and listeners participate in these protocols
   using addresses that are in the Tenant System address domain.

2. The list of multicast listeners for each multicast group is not
   known in advance. Therefore, it may not be possible for an NVA
   to get the list of participants for each multicast group ahead
   of time.
1.3. Terminology clarification

In this document, the terms host, tenant system (TS), and virtual
machine (VM) are used interchangeably to represent an end station
that originates or consumes data packets.
2. Acronyms

ASM: Any-Source Multicast
IGMP: Internet Group Management Protocol
LISP: Locator/ID Separation Protocol
MSN: Multicast Service Node
NVA: Network Virtualization Authority
NVE: Network Virtualization Edge
NVGRE: Network Virtualization using GRE
PIM: Protocol-Independent Multicast
SSM: Source-Specific Multicast
TS: Tenant System
VM: Virtual Machine
VN: Virtual Network
VXLAN: Virtual eXtensible LAN
3. Multicast mechanisms in networks that use NVO3

In NVO3 environments, traffic between NVEs is transported using an
encapsulation such as Virtual eXtensible Local Area Network (VXLAN)
[RFC7348, VXLAN-GPE], Network Virtualization Using Generic Routing
Encapsulation (NVGRE) [RFC7637], Geneve [Geneve], Generic UDP
Encapsulation (GUE) [GUE], etc.
What makes NVO3 different from other networks is that some NVEs,
especially NVEs implemented in servers, might not support PIM or
other native multicast mechanisms. They might simply encapsulate the
data packets from VMs with an outer unicast header. Therefore, it is
important for networks using NVO3 to have mechanisms to support
multicast as a network capability for NVEs: to map multicast traffic
from VMs (users/applications) to an equivalent multicast capability
inside the NVE, or to determine the outer destination address if the
NVE does not support native multicast (e.g., PIM) or IGMP.
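The following Python sketch illustrates one possible form of that
mapping decision for an NVE with no native PIM/IGMP support. Both
tables and all names are hypothetical and assumed to be provisioned
out of band (e.g., by the NVA).

      # Sketch: choose the outer destination(s) for a tenant
      # multicast packet, preferring an underlay multicast group if
      # one has been provisioned, else falling back to a unicast
      # replication list of member NVEs.

      underlay_group = {("vni-100", "239.1.1.1"): "232.0.0.7"}
      member_nves = {("vni-100", "239.1.1.1"): ["192.0.2.1", "192.0.2.2"]}

      def outer_destinations(vni, inner_group):
          """Return outer IP destination(s) for a tenant multicast packet."""
          key = (vni, inner_group)
          if key in underlay_group:           # underlay multicast available
              return [underlay_group[key]]
          return member_nves.get(key, [])     # else head-end replication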
Besides the need to support ARP and ND, there are several
applications that require the support of multicast and/or broadcast
in data centers [DC-MC]. With NVO3, there are many possible ways
that multicast may be handled in such networks. We discuss some of
the attributes of the following four methods:

1. No multicast support.

2. Replication at the source NVE.

3. Replication at a multicast service node.

4. IP multicast in the underlay.

3.1. No multicast support
Address resolution requests via ARP/ND that are issued by the TSs
must be resolved by the NVE that they are attached to.
With this approach, it is not possible to support
application-specific multicast. However, certain multicast/broadcast
applications such as DHCP can be supported by use of a helper
function in the NVE.

The main drawback of this approach, even for unicast traffic, is
that it is not possible to initiate communication with a TS for
which a mapping to an NVE does not already exist in the NVA. This
is a problem in the case where the NVE is implemented in a physical
switch and the TS is a physical end station that has not registered
with the NVA.
3.2. Replication at the source NVE

With this method, the overlay attempts to provide a multicast
service without requiring any specific support from the underlay,
other than that of a unicast service. A multicast or broadcast
transmission is achieved by replicating the packet at the source
NVE.
For example, with a multicast group whose members are spread across
50 NVEs, the packet would have to be replicated 50 times at the
source NVE. This also creates an issue with the forwarding
performance of the NVE.
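A minimal sketch of this replication loop, assuming hypothetical
encapsulate and send primitives standing in for the tunnel
encapsulation (e.g., VXLAN) and the underlay transmit path:

      # Sketch of replication at the source NVE: one unicast-
      # encapsulated copy per egress NVE. 50 member NVEs means 50
      # copies leave this NVE, per the text above.

      def replicate_at_source(packet, egress_nves, encapsulate, send):
          for nve_ip in egress_nves:
              send(encapsulate(packet, outer_dst=nve_ip))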
Note that this method is similar to what was used in Virtual Private
LAN Service (VPLS) [RFC4762] prior to support of Multi-Protocol
Label Switching (MPLS) multicast [RFC7117]. While there are some
similarities between MPLS Virtual Private Network (VPN) and NVO3,
there are some key differences:

- The Customer Edge (CE) to Provider Edge (PE) attachment in VPNs is
  somewhat static, whereas in a DC that allows VMs to migrate
  anywhere, the TS attachment to NVE is much more dynamic.

- The number of PEs to which a single VPN customer is attached in
  an MPLS VPN environment is normally far less than the number of
  NVEs to which a VN's VMs are attached in a DC.
When a VPN customer has multiple multicast groups, [RFC6513]
"Multicast VPN" combines all those multicast groups within each
VPN client to one single multicast group in the MPLS (or VPN)
core. The result is that messages from any of the multicast
groups belonging to one VPN customer will reach all the PE nodes
of the client. In other words, any messages belonging to any
multicast groups under customer X will reach all PEs of customer
X. When customer X is attached to only a handful of PEs, the use
of this approach does not result in excessive wastage of
bandwidth in the provider's network.
In a DC environment, a typical server/hypervisor-based virtual
switch may only support on the order of tens of VMs (as of this
writing). A subnet with N VMs may be, in the worst case, spread
across N vSwitches. Using the "MPLS VPN multicast" approach in such
a scenario would require the creation of a multicast group in the
core for this VN to reach all N NVEs. If only a small percentage of
this client's VMs participate in application-specific multicast, a
great number of NVEs will receive multicast traffic that is not
forwarded to any of their attached VMs, resulting in considerable
wastage of bandwidth.
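A back-of-the-envelope illustration of this wastage, with invented
figures (they are not taken from this document):

      # With N NVEs in the per-VN core multicast group and a
      # fraction f of them hosting VMs in the application's group,
      # roughly (1 - f) * N NVEs receive traffic they must discard.

      N = 100    # NVEs reached by the per-VN core multicast group
      f = 0.05   # fraction of NVEs with interested VMs
      print(f"{int((1 - f) * N)} of {N} NVEs receive traffic they discard")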
Therefore, the Multicast VPN solution may not scale in a DC
environment with dynamic attachment of virtual networks to NVEs and
a greater number of NVEs for each virtual network.
3.3. Replication at a multicast service node

With this method, all multicast packets would be sent using a
unicast tunnel encapsulation from the ingress NVE to a Multicast
Service Node (MSN). The MSN, in turn, would create multiple copies
of the packet and would deliver a copy, using a unicast tunnel
encapsulation, to each of the NVEs that are part of the multicast
group for which the packet is intended.

This mechanism is similar to that used by the Asynchronous Transfer
Mode (ATM) Forum's LAN Emulation (LANE) specification [LANE]. The
MSN is similar to the Rendezvous Point (RP) in PIM-SM, but different
in that the user data traffic is carried by the NVO3 tunnels.
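A minimal sketch of the MSN's data path under these assumptions;
decap, encap, send, and the membership table are all hypothetical
stand-ins, not a defined interface:

      # Sketch: decapsulate the unicast-tunneled packet from the
      # ingress NVE and re-encapsulate one copy per member NVE.

      def msn_forward(tunneled_pkt, members, decap, encap, send):
          inner = decap(tunneled_pkt)
          for nve_ip in members.get((inner.vni, inner.group), []):
              if nve_ip != tunneled_pkt.outer_src:  # skip the ingress NVE
                  # Preserving the source NVE's IP matters if data-plane
                  # learning is in use (see the end of this section).
                  send(encap(inner, outer_dst=nve_ip,
                             outer_src=tunneled_pkt.outer_src))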
The following are the possible ways for the MSN to get the
membership information for each multicast group:
- The MSN can obtain this membership information from the IGMP/MLD
  report messages sent by the TSs. The IGMP/MLD report messages are
  sent in response to IGMP/MLD query messages sent from the MSN to
  the TSs via the NVEs to which the TSs are attached. In order for
  the MSN to receive the IGMP/MLD report messages from the TSs,
  each of the IGMP/MLD query messages has to be encapsulated with
  the MSN address in the outer source address field and the address
  of the NVE in the outer destination address field. Each of the
  encapsulated IGMP/MLD query messages also has the VNID to which
  the TSs belong in the outer header and a multicast address that
  identifies a multicast group in the inner destination field. The
  NVEs can establish the mapping between the MSN address and the
  multicast address upon receiving the encapsulated IGMP/MLD query
  messages. With the proper "MSN address" <-> "multicast address"
  mapping, the NVEs can encapsulate the IGMP/MLD report messages
  from the TSs with the address of the MSN in the outer destination
  address field, as sketched after this list.
- The MSN can obtain the membership information from NVEs that have
  the capability to establish multicast groups by snooping native
  IGMP/MLD messages (note that the communication must be specific
  to the multicast addresses), or by having the NVA obtain the
  information from the NVEs and in turn have the MSN communicate
  with the NVA. This approach requires an additional protocol
  between the MSN and the NVEs.
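The following sketch illustrates the first option from the NVE's
side; all field and function names are hypothetical:

      # Sketch: learn the "MSN address" <-> "multicast address"
      # mapping from encapsulated IGMP/MLD Queries, then send TS
      # Reports back toward the MSN.

      msn_for_group = {}   # (vnid, group address) -> MSN address

      def on_encapsulated_query(outer_src, vnid, inner_group, query,
                                deliver):
          msn_for_group[(vnid, inner_group)] = outer_src  # outer src = MSN
          deliver(query)                     # hand the Query to local TSs

      def on_ts_report(vnid, group, report, encap, send):
          msn = msn_for_group.get((vnid, group))
          if msn is not None:
              send(encap(report, vnid=vnid, outer_dst=msn))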
Unlike the method described in Section 3.2, there is no performance
impact at the ingress NVE, nor are there any issues with multiple
copies of the same packet from the source NVE to the Multicast
Service Node. However, there remain issues with multiple copies of
the same packet on links that are common to the paths from the MSN
to each of the egress NVEs. Additional issues that are introduced
with this method include the availability of the MSN, methods to
scale the services offered by the MSN, and the sub-optimality of the
delivery paths.

Finally, the IP address of the source NVE must be preserved in
packet copies created at the multicast service node if data-plane
learning is in use. This could create problems if IP source address
reverse path forwarding (RPF) checks are in use.
3.4. IP multicast in the underlay

With this method, the underlay supports IP multicast, and the ingress
NVE encapsulates the packet with the appropriate IP multicast
address in the tunnel encapsulation header for delivery to the
desired set of NVEs. The protocol in the underlay could be any
variant of Protocol Independent Multicast (PIM), or a
protocol-dependent multicast, such as [ISIS-Multicast].
If an NVE connects to its attached TSs via a Layer 2 network, there
are multiple ways for NVEs to support application-specific
multicast:

- The NVE only supports the basic IGMP/MLD snooping function,
  letting the TSs' routers handle the application-specific
  multicast. This scheme doesn't utilize the underlay IP multicast
  protocols.

- The NVE can act as a pseudo multicast router for the directly
  attached VMs and support proper mapping of IGMP/MLD messages to
  the messages needed by the underlay IP multicast protocols, as
  sketched after this list.
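A minimal sketch of the second option, assuming a pre-provisioned
mapping from tenant groups to underlay groups; the socket calls are
the standard IPv4 multicast membership API, while everything else is
an illustrative assumption:

      # Sketch: when a TS's IGMP Report for a tenant group is seen,
      # the NVE joins the corresponding underlay multicast group.

      import socket
      import struct

      def join_underlay_group(underlay_group_ip):
          s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
          mreq = struct.pack("4s4s",
                             socket.inet_aton(underlay_group_ip),
                             socket.inet_aton("0.0.0.0"))
          s.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
          return s   # membership lasts while the socket stays open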
With this method, there are none of the issues with the method
described in Section 3.2.
With PIM Sparse Mode (PIM-SM), the number of flows required would be
(n*g), where n is the number of source NVEs that source packets for
the group, and g is the number of groups. Bidirectional PIM
(BIDIR-PIM) would offer better scalability, with the number of flows
required being g.
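A worked example of these flow counts, with invented numbers:

      # PIM-SM keeps one (S,G) flow per source NVE per group (n*g),
      # while BIDIR-PIM keeps one (*,G) flow per group (g).

      n = 50   # source NVEs per group
      g = 10   # multicast groups
      print("PIM-SM flows:   ", n * g)   # 500
      print("BIDIR-PIM flows:", g)       # 10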
5.2. Multicast membership management for DC with VMs

For data centers with virtualized servers, VMs can be added, deleted,
or moved very easily. When VMs are added, deleted, or moved, the
NVEs to which the VMs are attached are changed.

When a VM is deleted from an NVE or a new VM is added to an NVE, the
VM management system should notify the MSN to send the IGMP/MLD
query messages to the relevant NVEs (as described in Section 3.3),
so that the multicast membership can be updated promptly. Otherwise,
if a VM's attachment to an NVE changes, multicast data may not reach
the moved VM(s) for the duration of the configured default time
interval that the TSs' routers use for IGMP/MLD queries.
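A sketch of such a notification hook; the object model
(msn.send_igmp_query, vm.vnid) is entirely hypothetical:

      # Sketch: on a VM add/delete/move, the VM management system
      # asks the MSN to query the affected NVEs immediately rather
      # than waiting for the periodic query interval.

      def on_vm_attachment_change(vm, old_nve, new_nve, msn):
          for nve in filter(None, (old_nve, new_nve)):
              msn.send_igmp_query(nve, vnid=vm.vnid)  # fresh Reports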
6. Summary

This document has identified various mechanisms for supporting
application-specific multicast in networks that use NVO3. It
highlights the basics of each mechanism and some of the issues with
them. As solutions are developed, the protocols would need to
consider the use of these mechanisms, and co-existence may be a
consideration. It also highlights some of the requirements for
supporting multicast applications in an NVO3 network.
[RFC6513] Rosen, E. et al., "Multicast in MPLS/BGP IP VPNs",
          February 2012.

9.2. Informative References

[RFC7348] Mahalingam, M. et al., "Virtual eXtensible Local Area
          Network (VXLAN): A Framework for Overlaying Virtualized
          Layer 2 Networks over Layer 3 Networks", August 2014.
[RFC7637] Garg, P. and Wang, Y. (Eds.), "NVGRE: Network
          Virtualization Using Generic Routing Encapsulation",
          September 2015.
[DC-MC]   McBride, M. and Lui, H., "Multicast in the Data Center
          Overview", <draft-mcbride-armd-mcast-overview-02>, work in
          progress, July 2012.

[ISIS-Multicast]
          Yong, L. et al., "ISIS Protocol Extension for Building
          Distribution Trees", <draft-yong-isis-ext-4-distribution-
          tree-03>, work in progress, October 2014.
[RFC4762] Lasserre, M. and Kompella, V. (Eds.), "Virtual Private
          LAN Service (VPLS) Using Label Distribution Protocol (LDP)
          Signaling", January 2007.

[RFC7117] Aggarwal, R. et al., "Multicast in VPLS", February 2014.

[LANE]    "LAN Emulation Over ATM", The ATM Forum, af-lane-0021.000,
          January 1995.
[EDGE-REP]
          Marques, P. et al., "Edge Multicast Replication for BGP IP
          VPNs", <draft-marques-l3vpn-mcast-edge-01>, work in
          progress, June 2012.

[RFC3569] Bhattacharyya, S. (Ed.), "An Overview of Source-Specific
          Multicast (SSM)", July 2003.
[LISP-Signal-Free]
          Moreno, V. and Farinacci, D., "Signal-Free LISP
          Multicast", <draft-ietf-lisp-signal-free-multicast-01>,
          work in progress, April 2016.

[VXLAN-GPE]
          Kreeger, L. and Elzur, U. (Eds.), "Generic Protocol
          Extension for VXLAN", <draft-ietf-nvo3-vxlan-gpe-02>, work
          in progress, April 2016.

[Geneve]  Gross, J. and Ganga, I. (Eds.), "Geneve: Generic Network
          Virtualization Encapsulation", <draft-ietf-nvo3-geneve-
          01>, work in progress, January 2016.

[GUE]     Herbert, T. et al., "Generic UDP Encapsulation", <draft-
          ietf-nvo3-gue-02>, work in progress, December 2015.

[BIER-ARCH]
          Wijnands, IJ. (Ed.) et al., "Multicast Using Bit Index
          Explicit Replication", <draft-ietf-bier-architecture-03>,
          work in progress, January 2016.
[RFC3819] Karn, P. (Ed.) et al., "Advice for Internet Subnetwork
          Designers", July 2004.
10. Acknowledgments

Many thanks are due to Dino Farinacci, Erik Nordmark, Lucy Yong,
Nicolas Bouliane, Saumya Dikshit, Joe Touch, Olufemi Komolafe, and
Matthew Bocci for their valuable comments and suggestions.
This document was prepared using 2-Word-v2.0.template.dot.
Authors' Addresses

Anoop Ghanwani
Dell
Email: anoop@alumni.duke.edu

Linda Dunbar