1. Leaf-Spine Layer 3 Clos Topology (Two-Tier):
- The leaf-spine topology has become the de facto standard when building medium- to large-scale data center networks. It is adapted from Clos telecommunications networks.
- The IP fabric within a point of delivery (PoD) resembles a two-tier or 3-stage folded Clos fabric.
- The two-tier leaf-spine topology is shown in Figure 1.
- The bottom layer of the IP fabric consists of the leaf devices (top-of-rack switches), and the top layer consists of the spine devices.
- The role of the leaf is to provide connectivity to the endpoints in the data center network. These endpoints include compute servers and storage devices, as well as other networking devices such as routers, switches, load balancers, firewalls, and any other physical or virtual endpoints. Because all endpoints connect only to the leafs, policy enforcement (including security, traffic-path selection, QoS marking, and traffic policing and shaping) is implemented at the leaf. More importantly, the leafs act as the anycast gateways for the server segments, facilitating endpoint mobility across the VXLAN overlay.
- The role of the spine is to provide connectivity between leafs, participating in both the control-plane and data-plane operations for traffic forwarded between leafs.
- The spine devices serve two purposes: the BGP control plane (acting as route reflectors for the leafs, or as eBGP peers of the leafs) and IP forwarding based on the outer IP header in the underlay network. Because no network endpoints connect to the spines, tenant VRFs and VXLAN segments are not created on them, and their routing-table requirements are light, accommodating only underlay reachability. Note that not all spine devices need to act as BGP route reflectors; in the overlay design, only selected spines in the spine layer act as route reflectors, as the sketch below illustrates.
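To illustrate this division of labor, the following Python sketch enumerates the BGP sessions a small fabric would need when only two of four spines act as overlay route reflectors. The device names, fabric size, and choice of route reflectors are illustrative assumptions, not part of the validated design.

```python
# Sketch of the BGP session plan for a small 3-stage fabric.
# Device names, fabric size, and the choice of two RR spines
# are illustrative assumptions, not the validated design itself.

LEAFS = ["leaf1", "leaf2", "leaf3", "leaf4"]
SPINES = ["spine1", "spine2", "spine3", "spine4"]
OVERLAY_RRS = SPINES[:2]  # only selected spines act as route reflectors

def underlay_sessions():
    """Every leaf peers with every spine for underlay reachability."""
    return [(leaf, spine) for leaf in LEAFS for spine in SPINES]

def overlay_sessions():
    """Leafs peer only with the designated route reflectors for the overlay."""
    return [(leaf, rr) for leaf in LEAFS for rr in OVERLAY_RRS]

if __name__ == "__main__":
    print(f"underlay sessions: {len(underlay_sessions())}")  # 16
    print(f"overlay sessions:  {len(overlay_sessions())}")   # 8
```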
1.1. As a design principle, the following requirements apply to the leaf-spine topology:
- Each leaf connects to all spines in the network through 40-GbE links.
- Spines are not interconnected with each other.
- Leafs are not interconnected with each other for data-plane purposes. (The leafs may be interconnected for control-plane operations such as forming a server-facing vLAG.)
- The network endpoints do not connect to the spines.
This type of topology offers predictable latency and provides ECMP forwarding in the underlay network: the number of hops between any two leaf devices is always two within the fabric. It also enables easy horizontal scale-out as the data center expands, limited only by the port density and bandwidth supported by the spine devices. The sizing sketch below makes this concrete.
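The following Python sketch computes the maximum fabric size and the ECMP fan-out each leaf sees from a given spine port density; all port counts are illustrative assumptions.

```python
# Back-of-the-envelope sizing for a two-tier leaf-spine Clos fabric.
# The port counts below are illustrative assumptions.

SPINE_PORTS = 32        # 40-GbE ports per spine
LEAF_UPLINKS = 4        # one uplink from each leaf to each of 4 spines
LEAF_ACCESS_PORTS = 48  # 10-GbE server-facing ports per leaf

max_leafs = SPINE_PORTS    # each leaf consumes one port on every spine
ecmp_paths = LEAF_UPLINKS  # leaf-to-leaf traffic spreads across all spines
max_servers = max_leafs * LEAF_ACCESS_PORTS

print(f"max leafs:   {max_leafs}")
print(f"ECMP paths:  {ecmp_paths}")
print(f"max servers: {max_servers}")
```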
This validated design recommends using identical hardware across the spine layer; mixing different hardware models is not recommended.
1.2. IP Fabric Infrastructure Links:
All fabric nodes (leafs, spines, and super-spines) are interconnected with Layer 3 interfaces. In the validated design,
- 40-GbE links are used between the fabric nodes.
- All these links are configured as Layer 3 interfaces with /31 IPv4 addresses. For a simple 3-stage fabric, IP unnumbered interfaces can be used, but do not mix unnumbered and numbered interfaces within a fabric. For a 5-stage IP fabric, numbered interfaces are highly recommended.
- The MTU for these links is set to a jumbo value. This is required to carry VXLAN-encapsulated Ethernet frames, which add 50 bytes of overhead to each frame. (The addressing and MTU arithmetic are sketched after this list.)
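The /31 point-to-point addressing is straightforward to automate. The sketch below uses Python's standard ipaddress module to carve /31 subnets out of an infrastructure pool and to compute the minimum fabric MTU; the pool, the link list, and the server MTU are illustrative assumptions.

```python
import ipaddress

# Illustrative infrastructure pool; the real block is site-specific.
P2P_POOL = ipaddress.ip_network("10.255.0.0/24")

# Carve one /31 per leaf-spine link and assign both ends.
links = [("leaf1", "spine1"), ("leaf1", "spine2"),
         ("leaf2", "spine1"), ("leaf2", "spine2")]
for (a, b), subnet in zip(links, P2P_POOL.subnets(new_prefix=31)):
    lo, hi = subnet.hosts()  # a /31 yields exactly two usable addresses
    print(f"{a} {lo}/31 <-> {b} {hi}/31")

# VXLAN adds 50 bytes of overhead (outer Ethernet 14 + IP 20 + UDP 8 +
# VXLAN 8), so fabric links must carry the server MTU plus 50 bytes.
SERVER_MTU = 1500
print(f"minimum fabric MTU: {SERVER_MTU + 50}")
```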
1.3.1. Server-Facing Links (Individual Leaf/ToR):
The server-facing or access links are on the leaf nodes. In the validated design (see the sketch after this list),
- 10-GbE links are used for server-facing VLANs.
- These links are configured as Layer 2 trunks with associated VLANs.
- The MTU for these links is set to the default: 1500 bytes.
- Spanning tree is disabled.
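To make this port profile concrete, the following sketch renders a generic, vendor-neutral trunk-port configuration from the parameters above. The interface name, VLAN list, and CLI keywords are illustrative; exact syntax varies by platform, so consult the platform documentation.

```python
# Render a generic trunk-port config for a server-facing leaf port.
# Interface name, VLANs, and CLI keywords are illustrative and
# vendor-neutral, not a specific platform's syntax.

def trunk_config(interface, vlans):
    lines = [
        f"interface {interface}",
        " switchport",
        " switchport mode trunk",
        f" switchport trunk allowed vlan {','.join(map(str, vlans))}",
        " mtu 1500",          # default MTU on the access side
        " no spanning-tree",  # STP disabled toward directly attached servers
    ]
    return "\n".join(lines)

print(trunk_config("TenGigabitEthernet 1/0/10", [10, 20, 30]))
```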
1.3.2. Server-Facing Links (vLAG Pair/ToR):
vLAG configuration involves three steps (a consistency-check sketch follows this list):
- Node ID configuration on the pair of devices.
- Inter-switch link (ISL) configuration on both devices.
- Configuring the server-facing port channels and adding the required VLANs on them.
For more information on these configurations, see: http://www.brocade.com/content/html/en/brocade-validated-design/brocade-ip-fabric-bvd/GUID-F54693FB-7866-43B5-8610-71F2B2B85ECA.html
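As a companion to these steps, the sketch below checks that a pair of devices is configured consistently: distinct node IDs, a defined ISL on both sides, and matching VLANs on the server-facing port channels. The data model and field names are hypothetical, not the actual vLAG configuration syntax.

```python
# Hypothetical data model for a vLAG pair; field names are
# illustrative, not the actual vLAG configuration syntax.

def check_vlag_pair(dev_a, dev_b):
    errors = []
    # Step 1: the two devices must have distinct node IDs.
    if dev_a["node_id"] == dev_b["node_id"]:
        errors.append("node IDs must differ across the pair")
    # Step 2: both devices need an inter-switch link (ISL).
    for dev in (dev_a, dev_b):
        if not dev.get("isl_ports"):
            errors.append(f"{dev['name']}: no ISL ports configured")
    # Step 3: server-facing port channels must carry the same VLANs.
    if dev_a["po_vlans"] != dev_b["po_vlans"]:
        errors.append("port-channel VLAN lists do not match")
    return errors

a = {"name": "leaf1", "node_id": 1, "isl_ports": ["eth1/48"], "po_vlans": {10, 20}}
b = {"name": "leaf2", "node_id": 2, "isl_ports": ["eth1/48"], "po_vlans": {10, 20}}
print(check_vlag_pair(a, b) or "vLAG pair consistent")
```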
1.4. Spanning Tree Considerations:
Spanning tree must be enabled on the leaf if there are Layer 2 switches or bridges between a leaf and the servers.