Saturday, 9 June 2018

Control Plane and Data Plane - Highlights

  • A switch has two planes: the data plane and the control plane.
  • The forwarding ASICs sit in the data plane, while the CPU runs the control plane.
  • The data plane is the forwarding plane. For an incoming frame it checks only the destination MAC address.
  • If the address does not belong to the switch itself, the frame is simply forwarded according to the forwarding tables.
  • If it does match, the frame is sent up to the CPU (control plane).
  • There is also a trap mechanism that matches on specific MAC addresses. For example, STP BPDUs are not addressed to the switch itself, but a trap entry forces them up to the control plane (see the capture sketch after this list).
  • Another trap occurs when the destination MAC address is ours but the destination IP address is not (e.g. a host pointing a static ARP entry at us). Since that IP is not in our forwarding tables, even though the host is on the same SVI/VLAN, the packet is trapped to the control plane so that it can send back a "Destination Unreachable" message.
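As a quick illustration of the trap mechanism, the BPDUs that get punted to the CPU can be captured from the EOS bash shell. This is only a sketch: it assumes the front-panel port Ethernet23 shows up in the kernel as et23; the destination MAC 01:80:c2:00:00:00 is the standard IEEE STP group address.
#bash
$tcpdump -i et23 -c 5 'ether dst 01:80:c2:00:00:00'
//Only traffic trapped to the CPU is visible here; frames forwarded purely in the ASIC never reach the kernel, so they will not appear in this capture.
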
DATA PLANE and CONTROL PLANE:
  • The data plane is responsible for forwarding transit traffic (user/production traffic).
  • The actual forwarding happens in the data plane.
  • The control plane is responsible for decision making (for example, deciding whether to prefer a BGP or an OSPF route to the same prefix and installing the winner) and then programs the result into the data plane.
  • Even the rules of the data plane have to be programmed through the control plane. For example, the control plane programs the data plane either to forward packets in hardware or to send them to the CPU.
  • We can see the data plane forwarding rules via #show ip route hosts //for these IP addresses, traffic is sent to the CPU
  • #show platform trident l3 hardware routes
//This command shows the hardware (ASIC) programming for L3, i.e. IP addresses; L2 entries are programmed in TCAM. In this output, if an IP maps to the MAC address 00:00:00:00:00:00, traffic to it is forwarded to the CPU; if it maps to any other MAC address, the entry was learnt through ARP. (Note: we can verify this by checking that every ARP entry also appears in this table, other than the ones on the management interface, since the management port is not connected to the ASIC. A cross-check sketch follows this list.)
  • #show processes top //shows the live running processes on EOS.
        If we ping the switch with a very large packet size and a high repeat count, we can see the CPU usage climb (a sketch follows this list).
  • #bash
        $netstat -s
        //From the Linux shell, this shows the kernel packet counters, including CPU drops.
  • During troubleshooting, tcpdump on the switch only shows traffic that reaches the CPU, so we cannot use it to see transit traffic. For that, we use port mirroring (a worked example follows this list).
#monitor session <anysessionname> source <interface> <direction>
Here, <anysessionname> = anand
interface = Et23
direction = both (captures both ingress and egress traffic on the source port)
#monitor session <anysessionname> destination Et21 //Now, if we connect a host to Et21 and run Wireshark, we can see all the traffic (both transit and CPU-bound) crossing interface Et23.
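Putting the mirroring commands together with the placeholder values used above (session name anand, source port Et23, capture host on Et21), a minimal sketch:
#configure
(config)#monitor session anand source Ethernet23 both
(config)#monitor session anand destination Ethernet21
(config)#end
#show monitor session //verify that the session is programmed
//A host running Wireshark on Ethernet21 now sees both transit and CPU-bound traffic crossing Ethernet23.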
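To relate the ARP table to the hardware programming described above, the two tables can be compared side by side (the entries are simply whatever your switch has learnt):
#show ip arp //software ARP entries
#show platform trident l3 hardware routes //hardware L3 entries
//Every ARP entry, except those on the management interface, should also appear in the hardware table with its learnt MAC; entries whose MAC is 00:00:00:00:00:00 are sent to the CPU rather than forwarded in hardware.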
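To reproduce the high-CPU observation, a rough sketch (the target 10.0.0.1 is just a placeholder for one of the switch's SVI addresses): from a connected Linux host, send a burst of large pings at the switch, then watch the process list and the kernel drop counters on the switch itself.
$ping -c 1000 -s 1400 -i 0.01 10.0.0.1 //run on the host; intervals below 0.2 s need root
#show processes top //on the switch: CPU usage of the punt-path processes climbs while the pings run
#bash
$netstat -s | grep -i -E 'drop|error' //quick filter for the drop and error counters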

  • The protocol or application itself does not determine whether traffic belongs to the control, management, or data plane; what matters is how the router processes it.
  • For example, suppose we have a simple three-router topology, R1-R2-R3, where R1 and R3 run OSPF with each other. From R1's and R3's perspective these packets are part of the control plane: they are locally originated/destined and must be process-switched so that the routers can look into the packet details and actually build the OSPF database. From R2's perspective, however, the same packets belong to its data plane, because the traffic is neither originated by nor destined to R2 (a capture sketch follows).
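A rough way to see this difference, assuming Linux-based routers whose inter-router links show up as kernel interfaces (the interface name eth0 is a placeholder): on R1 or R3 a plain kernel capture shows the OSPF packets, because they are originated by or punted to the local CPU.
$tcpdump -i eth0 -c 5 'ip proto 89' //IP protocol 89 = OSPF
//On a hardware-forwarding R2, the same filter shows nothing for transit OSPF packets that are switched purely in the ASIC; there you would mirror the transit port instead, as in the monitor session example above.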
