This repo helps you gain hands-on experience writing custom programs for data plane switches using the P4 language.
You will build a custom virtual network using mininet that provides a P4-based software switch, bmv2, based on the v1model's simple_switch_grpc target. The bmv2 switch can be managed statically using the Thrift API, or dynamically from the control plane using the P4Runtime API.
In the next few sections, you will set up your machine with the VM image, work through a few tutorial examples with template code (you write the remaining code), and develop your own custom application.
- Install VirtualBox
  - Download and install the appropriate executable for Windows / Ubuntu and follow the instructions
- Download and set up the VM from the following link. This VM contains mininet, bmv2 switch tools (P4, P4Runtime), and other necessary tools such as Wireshark.
Link to problem statement
Q1. Stateful firewall
- A Bloom filter is a probabilistic data structure that lets us test whether an element belongs to a set over huge search spaces while using very little memory. It is very helpful for a stateful firewall because of its no-false-negatives property: it never reports that an element has not been seen when the firewall has actually seen it. A Bloom filter can also produce false positives, i.e. an element is deemed present when it is actually not; the false-positive probability grows as more elements are inserted, but it can be kept low by sizing the bit array and the number of hash functions appropriately (see the sketch after this list).
b. Without firewall / with firewall
c. An alternative to Bloom filters is a heap/hashmap, which gives 100% accuracy, but it has a few limitations:
- It needs roughly 10x more memory than a Bloom filter, so it is not cost effective.
- It is very difficult to implement in the data plane, because of the repetitive operations (insertion, deletion, updating) needed to maintain the structure.
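As a rough illustration of the idea (not the P4 implementation from the exercise), here is a minimal Python sketch of a Bloom filter with two hash functions, mirroring how register arrays indexed by different hashes can be used in the data plane. The bit-array size, hash choices, and the example 5-tuples are arbitrary assumptions.

```python
import hashlib


class BloomFilter:
    """Minimal Bloom filter: k hash functions over one bit array.

    No false negatives: if add(x) was called, might_contain(x) is True.
    False positives are possible and become more likely as elements are added.
    """

    def __init__(self, num_bits=4096, num_hashes=2):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = [0] * num_bits

    def _indexes(self, item):
        # Derive k indexes by hashing the item with a per-hash salt.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.num_bits

    def add(self, item):
        for idx in self._indexes(item):
            self.bits[idx] = 1

    def might_contain(self, item):
        return all(self.bits[idx] for idx in self._indexes(item))


# A stateful firewall would add the 5-tuple of outgoing connections and only
# admit incoming packets whose (reversed) 5-tuple might be in the filter.
bf = BloomFilter()
bf.add(("10.0.1.1", "10.0.2.2", 6, 33333, 80))                       # outbound TCP flow
print(bf.might_contain(("10.0.1.1", "10.0.2.2", 6, 33333, 80)))      # True
print(bf.might_contain(("10.0.9.9", "10.0.2.2", 6, 44444, 80)))      # very likely False
```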
Q2. Link Monitor
- Without updating the probe packet fields, we do not get results that match between iperf and the send/receive programs. Using the send and receive programs we get the link utilisation from egress packets at all ports of all switches on the path. When we run iperf h1 h4, the packets are forwarded at 4 places: at switch 1 from port 1 to port 4, at switch 4 from port 2 to port 1, at switch 2 from port 3 to port 4, and at switch 3 from port 2 to port 1. The probe packet traverses each switch on this path twice, which can easily be seen from the topology diagram. On matching the values from the receive program against iperf, they are very similar; the small difference arises because the receive program reports results continuously while iperf reports averages. (A small sketch of the utilisation calculation appears after this list.)
- Counters can't be used for this case because we would need to read the counter value in the data plane, which is not possible: in the data plane counter values can only be written (updated), while the control plane can both read and write them.
- Yes, there is an alternative: direct counters, which are attached to tables and are updated whenever the corresponding entry is matched (see the second sketch below).
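For the utilisation comparison above, here is a hedged Python sketch of the kind of calculation the receive side can do with the per-port byte counts carried in the probe headers; the function name, field names, and the one-second probe period are illustrative assumptions, not the exact ones from the exercise code.

```python
def utilization_mbps(byte_cnt, last_byte_cnt, period_s=1.0):
    """Estimate egress link utilisation from two successive probe samples.

    byte_cnt / last_byte_cnt: cumulative bytes seen at a switch egress port
    in the current and previous probe; period_s: time between the two probes.
    """
    delta_bytes = byte_cnt - last_byte_cnt
    return (delta_bytes * 8) / (period_s * 1_000_000)


# Example: two probes one second apart show ~1.2 MB of new traffic on a port.
print(f"{utilization_mbps(2_450_000, 1_250_000):.1f} Mbit/s")  # 9.6 Mbit/s
```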
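To make the counter distinction concrete, here is a small Python model (an assumption-laden sketch, not switch code) of the idea behind direct counters: each table entry carries its own packet/byte count that is bumped when the entry matches and can later be read back by the control plane.

```python
from dataclasses import dataclass


@dataclass
class TableEntry:
    action: str
    # Direct-counter state attached to this specific entry.
    packets: int = 0
    bytes: int = 0


class Table:
    def __init__(self):
        self.entries = {}  # match key -> TableEntry

    def apply(self, key, pkt_len):
        """Data-plane side: on a hit, bump the matched entry's counter."""
        entry = self.entries.get(key)
        if entry is not None:
            entry.packets += 1
            entry.bytes += pkt_len
        return entry

    def read_counters(self, key):
        """Control-plane side: counters can be read back per entry."""
        entry = self.entries[key]
        return entry.packets, entry.bytes


table = Table()
table.entries["10.0.2.2"] = TableEntry(action="ipv4_forward")
table.apply("10.0.2.2", pkt_len=1500)
print(table.read_counters("10.0.2.2"))  # (1, 1500)
```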
Q3. Multicast
Default: not reachable to any host
- Default topology
- Since h1, h2 and h3 are in group 1, they are all reachable from one another, but h4 is not reachable from any other host because its MAC address is not known. Because the default multicast action installed by the control plane is applied, h4 is still able to forward to h1, h2 and h3. These packet-processing rules for each switch are installed from the control plane using the sX-runtime.json file (X is the switch number).
- Once the entry is added to the sX-runtime.json file in the sig-topo folder, we can see that all pings work.
- After adding 4 extra hosts
- Default multicast action of group 1
- After creating two multicast groups, I observed the same result with pingall, but if we individually ping h1, h2, h3 and h4 from h5, the switches learn the paths via MAC forwarding, which gradually changes the result (a toy model of this behaviour is sketched below).
After I pinged h1, h2, h3 and h4 from h5, pingall shows them as reachable.
Similarly, after I ping h8 from h4, pingall shows the reachable paths just through simple MAC forwarding.
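As a hedged illustration of the behaviour described above (not the actual switch code), the following Python toy model shows why pingall improves after individual pings: unknown destination MACs are flooded to the multicast group excluding the ingress port, while learned source MACs turn later traffic into plain unicast forwarding. The port numbers and MAC addresses below are made up for the example.

```python
class L2Switch:
    """Toy model: flood to the multicast group when the destination MAC is
    unknown, and learn source MACs so later traffic uses a single port."""

    def __init__(self, group_ports):
        self.group_ports = set(group_ports)  # ports in the multicast group
        self.mac_table = {}                  # learned MAC -> port

    def process(self, src_mac, dst_mac, in_port):
        # Learn where the sender lives (what the MAC-learning rules achieve).
        self.mac_table[src_mac] = in_port

        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]          # known: unicast forward
        # Unknown: replicate to the multicast group, excluding the ingress port.
        return sorted(self.group_ports - {in_port})


sw = L2Switch(group_ports=[1, 2, 3])
# h5 (port 4) pings h1: destination unknown, so the frame is flooded to the group.
print(sw.process("00:00:00:00:00:05", "00:00:00:00:00:01", in_port=4))  # [1, 2, 3]
# h1 replies: h5's MAC was learned on port 4, so the reply is plain unicast.
print(sw.process("00:00:00:00:00:01", "00:00:00:00:00:05", in_port=1))  # [4]
```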
Q4. Equal-Cost Multipath Forwarding
- After completing the load-balancing code, I sent 4-5 messages from h1 to 10.0.0.1; the first message was received by h3, and from the 4th message onwards they were received by h2.
- After adding one path, one switch (s4) and one host (h4) are added. The changes are as follows:
- s4-runtime.json: a file similar to the s2/s3 runtime JSON, adding the table entries needed to balance the load when there is a match with egress port = 1.
- s1-runtime.json: added one table entry each in ingress and egress, first to decide the select option and then, in egress, to send the packets/frames when there is a match.
- topology.json: added an entry for h4 similar to the other servers, a switch entry similar to the rest, and two links connecting s4 to s1 and to h4.
I sent 4-5 messages from h1 with 10.0.0.1 as the destination address; the first message was received by h3, the next 2-3 by h2, and finally by h4 after the 4th message. This clearly shows that my changes were working (a sketch of the hash-based path selection follows).
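As an illustrative sketch only (a CRC over the 5-tuple is assumed here, not the exact hash or P4 code from the exercise), the usual ECMP idea is to hash a flow's 5-tuple and take the result modulo the number of next hops, so each flow sticks to one path while different flows spread across the servers behind h2, h3 and h4.

```python
import zlib


def ecmp_select(src_ip, dst_ip, proto, src_port, dst_port, num_next_hops):
    """Pick a next hop by hashing the flow's 5-tuple.

    Packets of the same flow always hash to the same next hop, while
    different flows spread across all available paths.
    """
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    return zlib.crc32(key) % num_next_hops


# With s4 added there are three candidate next hops from s1.
for sport in (50001, 50002, 50003, 50004, 50005):
    print(sport, "-> next hop", ecmp_select("10.0.1.1", "10.0.0.1", 6, sport, 1234, 3))
```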
Q5. All related code is in the folder called “custom”.
If anyone wants to complete the last task, feel free to send a PR :)