diff --git a/_config.yml b/_config.yml
index be8f7c52..13a7233c 100644
--- a/_config.yml
+++ b/_config.yml
@@ -19,15 +19,16 @@ title: Batfish
description: >- # this means to ignore newlines until "baseurl:"
url: "https://batfish.github.io" # the base hostname & protocol for your site, e.g. http://example.com
+show_excerpts: true
+
google_analytics: GTM-TZKVKVG
-intentionet_url: https://www.intentionet.com/
# Social
slack_url: https://join.slack.com/t/batfish-org/shared_invite/enQtMzA0Nzg2OTAzNzQ1LTUxOTJlY2YyNTVlNGQ3MTJkOTIwZTU2YjY3YzRjZWFiYzE4ODE5ODZiNjA4NGI5NTJhZmU2ZTllOTMwZDhjMzA
github-project-url: https://github.com/batfish/batfish
-# github_username:
-# linkedin_username:
-# twitter_username:
+# github_username:
+# linkedin_username:
+# twitter_username:
# Build settings
@@ -40,8 +41,7 @@ plugins:
- jekyll-feed
include:
- - _pages
-
+ - _posts
# Exclude from processing.
# The following items will not be processed, by default. Create a custom list
diff --git a/_includes/banner.html b/_includes/banner.html
index 4b1212ac..5899a734 100644
--- a/_includes/banner.html
+++ b/_includes/banner.html
@@ -2,13 +2,17 @@
- Want to get your hands dirty? See our Python docs (a cheat sheet) or Ansible docs.
+ Want to get your hands dirty? See docs.
Got questions or feature requests? Join us on
Slack or GitHub.
-
+
+
+
+ Or read our blog to dig deeper into the rationale for Batfish.
+
-
- The Intentionet team is joining AWS. We are excited to continue, under a new umbrella, our mission of transforming how networks are engineered.
-
- Batfish will become an AWS-managed open source project. It will remain available under the same open source
-license, and we will continue our work on the project in collaboration with the community. You are encouraged
-to continue asking questions via Slack, opening issues and PRs on GitHub, and participating in the community to
-improve Batfish for everyone.
-
-
- {%- endif -%}
-
+
+
diff --git a/_pages/firewall-analysis.html b/_pages/firewall-analysis.html
deleted file mode 100644
index 891bc3d1..00000000
--- a/_pages/firewall-analysis.html
+++ /dev/null
@@ -1,388 +0,0 @@
----
-layout: page
-title: Firewall and ACL Audits and Analysis | Batfish
-permalink: /firewall-analysis
-description: Learn how Batfish can help you with Firewall and ACL audits and analysis.
-keywords: firewall analysis, firewall change management, acl analysis, acl change management, firewall workflow, acl workflow, firewall automation, acl automation
----
-
-
-
-
-Manually reasoning about what an ACL or firewall policy will do to a packet, when it has rules that cover
- multiple fields of the IP header, is extremely difficult and as the rules grow in size becomes virtually
- impossible. Add to that the task of understanding what a change to the ACL or firewall policy will do,
- and you are in for quite a ride.
-
-This is where a tool such as Batfish comes into play. It provides multiple methods of analyzing ACLs and
- firewall rules, so that you can ensure that your security policy functions as desired. And it is built for
- you to do this programmatically, so you can easily incorporate this into any workflow.
-
-Batfish provides 4 core capabilities for Firewall and ACL audit and analysis:.
-
-
-
testFilters
-
searchFilters
-
filterLineReachability
-
compareFilters
-
-
-This article will show you how to use each of these capabilities to audit/analyze an ACL of firewall policy.
- So let’s get started. This article assumes some familiarity with Batfish.
- Please view the GitHub page
- and docs page for an overview.
-
Analyzing the Firewall/ACL behavior for a specific flow
-Note: A detailed Jupyter notebook covering this example can be found
- here
-
-
-
-The testFilters query allows you to analyze the behavior of a filter
- (Firewall policy or ACL rule) with respect to a specific flow. For example, if I want to determine if an ACL
- is allowing a specific host to reach the
- DNS server, I would use testFilters as shown below.
-
-
-
-
-
-# Check if a representative host can reach the DNS server
-
-dns_flow = HeaderConstraints(srcIps="10.10.10.1", dstIps="218.8.104.58",
- applications=["dns"])
-
Matched line 660 permit udp 10.10.10.0/24 218.8.104.58/32 eq domain
-
-
-
-
-
-With a simple query, you can get a definitive answer about the behavior of a filter with regards to a specific flow.
- So before you make a change to an ACL or firewall rule, you can test the existing one and determine what action
- it takes for a specific flow (or sets of flows).
-
-But if you wanted to analyze the behavior of a Firewall or ACL for a large set of flows, you would use
- searchFilters instead.
-
Analyzing the Firewall/ACL behavior for a large set of flows
-Note: A detailed Jupyter notebook covering this example can be found
-here
-
-
-Building on the previous example, let’s see if the ACL permits access to the DNS server for ALL hosts in a given subnet.
- To do that, you actually “search” for any DNS flow from the source subnet to the DNS server that is denied.
- You are asking Batfish to see if there are any flows that violate your policy that all hosts in the given
- subnet MUST have access to the DNS server.
-
Matched line 460 deny udp 10.10.10.42/32 218.8.104.58/32 eq domain
-
-
-
-
-
-As you can see, we did get a flow that matches the search condition and thus violates our desired guarantee of the
- entire subnet being able to reach the DNS server. The columns carry the same information as those for
- testFilters and provide insight
- into the violation. In particular, we see that a flow with source IP 10.10.10.42 is denied by an earlier line
- in the ACL. Such needles in the haystack are impossible to find with other tools and techniques.
-
-Both testFilters and
- searchFilters help you analyze
- the behavior of an ACL or firewall rule with regards to a given flow or sets of flows. But Batfish provides
- another interesting capability to analyze a filter -
- filterLineReachability.
-
Identifying unreachable filter lines
-Note: A detailed Jupyter notebook covering this example can be found here
-
-
-What once started out as a nice pristine and easy to understand ACL or firewall policy, over time turns into quite a
- mess as the network and services evolve. Oftentimes, you end up creating new rules that are partially or fully
- masked by earlier rules, thereby rendering your changes partially and wholly ineffective.
- To help you in this situation, Batfish has the query
- filterLineReachability.
-
-
-
-
-
-# Find unreachable lines in filters of rtr-with-acl
-Each line in the answer above identifies an unreachable line in a firewall policy or ACL. Let’s take a closer look at the first one.
- It shows that the line 670 permit ip 166.146.58.184 any is unreachable because it is blocked by the
- line 540 shown in the Blocking_Lines column. The Different_Action column
- indicates that the blocking line 540 has the opposite action as the blocked line 670,
- a more worrisome situation than if actions were the same.
-The Reason column shows that the line is unreachable because it has other lines that block it,
- Lines can also be independently unreachable (i.e., no packet will ever match it) or may be unreachable
- because of circular references.
- The filterLineReachability
- question identifies such cases as well, and provides more information about them in the Additional_Info
- column.
-
-
-
Comparing the behavior of Firewall policies or ACLs
-Note: A detailed Jupyter notebook covering this example can be found here
-
-
-As your network evolves, the ACL and firewall policies change and grow, often resulting in large rule-sets that can
- exceed the capability of the specific device it is applied to, or result in a performance degradation.
- So you periodically try to refactor / compress these rules to mitigate this problem.
- The compareFilters query in
- Batfish allows you to compare the behaviors of your existing ACL/firewall policy and the new compressed one to
- ensure that you are not permitting or denying more traffic than desired.
-
-
-
-
-# Now, compare the two ACLs in the two snapshots
-
-answer = bfq.compareFilters().answer(snapshot=compressed_snapshot, reference_snapshot=original_snapshot)
-show(answer.frame())
-
-The compareFilters question
- compares two filters (firewall policies or ACLs) and returns pairs of lines, one from each filter, that match
- the same flow(s) but treat them differently. If it reports no output, the filters are guaranteed to be identical.
- The analysis is exhaustive and considers all possible flows.
-
-As we can see from the output above, our compressed ACL is not the same as the original one.
- In particular, line 210 of the compressed ACL will deny some flows that were being permitted by
- line 510 of the original; and line 510 of the compressed ACL will permit some flows
- that were being denied by line 220 of the original ACL. Because the permit statements correspond
- to ICMP traffic, we can tell that the traffic treated by the two filters is ICMP. To narrow things down to
- specific source and destination IPs that are impacted, you can run the
- searchFilters query.
-
-To see these queries in action on example networks, please check out the Jupyter notebooks
- here.
- Also, don’t forget to check out our video tutorials.
-
Community created resources
-A number of members of the Batfish open-source community have created content related to ACL and firewall policy
- changes. You can find a few of them here:
-
-Analyzing firewall poliices and ACLs is a pretty complex task. Batfish has a number of core
- capabilities that greatly simplifies this, as shown above.
- Batfish Enterprise
- (offered by Intentionet) builds on these capabilities and provides
- an interactive UI-based and programmatic workflow to make it even easier to use. To learn more about
- Batfish Enterprise, follow this
- link.
-
-If you want to learn more about Batfish or get involved in the open-source community, please join our
- Slack
- workspace or find us on GitHub.
-
- {%- endif -%}
-
-
\ No newline at end of file
diff --git a/_post/2018-04-15-welcome-to-jekyll.markdown b/_post/2018-04-15-welcome-to-jekyll.markdown
deleted file mode 100644
index 70420dca..00000000
--- a/_post/2018-04-15-welcome-to-jekyll.markdown
+++ /dev/null
@@ -1,28 +0,0 @@
----
-layout: post
-var: thingy
-title: "Welcome to Jekyll!"
-date: 2018-04-15 22:14:55 -0700
-categories: jekyll update
----
-
-## hi
-You’ll find this post in your `_posts` directory. Go ahead and edit it and re-build the site to see your changes. You can rebuild the site in many different ways, but the most common way is to run `jekyll serve`, which launches a web server and auto-regenerates your site when a file is updated.
-
-To add new posts, simply add a file in the `_posts` directory that follows the convention `YYYY-MM-DD-name-of-post.ext` and includes the necessary front matter. Take a look at the source for this post to get an idea about how it works.
-
-Jekyll also offers powerful support for code snippets:
-
-{% highlight ruby %}
-def print_hi(name)
- puts "Hi, #{name}"
-end
-print_hi('Tom')
-#=> prints 'Hi, Tom' to STDOUT.
-{% endhighlight %}
-
-Check out the [Jekyll docs][jekyll-docs] for more info on how to get the most out of Jekyll. File all bugs/feature requests at [Jekyll’s GitHub repo][jekyll-gh]. If you have questions, you can ask them on [Jekyll Talk][jekyll-talk].
-
-[jekyll-docs]: https://jekyllrb.com/docs/home
-[jekyll-gh]: https://github.com/jekyll/jekyll
-[jekyll-talk]: https://talk.jekyllrb.com/
diff --git a/_posts/2017-09-12-new-network-engineering-workflow-formal-validation.md b/_posts/2017-09-12-new-network-engineering-workflow-formal-validation.md
new file mode 100644
index 00000000..70c39ec0
--- /dev/null
+++ b/_posts/2017-09-12-new-network-engineering-workflow-formal-validation.md
@@ -0,0 +1,81 @@
+---
+title: "The New Network Engineering Workflow – Formal Validation"
+author: Samir Parikh
+date: "2017-09-12"
+tags:
+ - "batfish"
+ - "networkautomation"
+ - "networkvalidation"
+ - "predeployment"
+ - "validation"
+ - "verification"
+---
+
+At Future:NET 2017, hosted by VMWare in Las Vegas on August 30th and 31st, our CEO Ratul Mahajan gave the [keynote presentation](https://youtu.be/LfoCK7Au6to). Ratul spoke at length about how we can help network engineers and operators make their networks highly agile, reliable, and secure by adapting proven approaches employed by hardware and software engineers.
+
+![](/assets/images/ratul_keynote_2017.jpg){:width="400px"}
+
+Ratul observed that designing and operating a network is an extremely complicated task. Modern enterprise, cloud, and telecommunications networks are extremely large, with hundreds to thousands of devices, and have complicated policies for reachability, security, and performance. And while the devices and applications that form these networks have become increasingly sophisticated, the tools and processes that network engineers and operators use to design, build, and maintain them have not come close to keeping pace.
+
+The best we have done is a set of increasingly sophisticated monitoring and visualization tools, which, while valuable, are woefully insufficient—they tell you something has gone wrong only after it has gone wrong, and even then they provide little help in figuring out the root cause or how to fix things.
+
+In contrast, hardware and software engineers have available to them a rich array of technologies and tools that help them do their jobs safely, correctly, and rapidly. These include capabilities like design rule checking, functional verification, power analysis, unit-testing, continuous integration, change management, etc.
+
+In his keynote, Ratul introduced the concept of the new network engineering workflow inspired by capabilities used by hardware and software engineers.
+
+
+
+
+
+
+This workflow includes **Formal Validation, High-level Intent Specification, and Development Tools.**
+
+### In this post, I will explore the emerging space of Formal Validation for networks.
+
+Formal validation (or verification, as some are wont to call it) has been leveraged successfully in hardware and software engineering to provide correctness guarantees for critical functions. In our present day, it is hard to find a more critical function than the network that powers the applications and services that we cannot live without—Gmail, Google Maps, Spotify, Apple Music, Amazon.com, etc.
+
+For networks to be robust and give engineers peace of mind, what we need is proactive guarantees of operation, with the ability to predict what may go wrong, why it would go wrong and how to avoid that from happening.
+
+This is where formal validation comes into play. So what is formal validation/verification?
+
+> **[Formal verification](http://archive.eetasia.com/www.eetasia.com/STATIC/PDF/201005/EEOL_2010MAY21_EDA_TA_01.pdf?SOURCES=DOWNLOAD "Open Link")** is the act of proving or disproving the correctness of algorithms underlying a system with respect to a certain formal specification or property, using formal methods of mathematics
+
+So what does that mean in the context of networks? It means providing comprehensive guarantees for properties such as reachability, security, and performance. Comprehensive means that the guarantees must hold not just for one or two scenarios but for all scenarios (e.g., all possible packets, all possible failures, etc.). With formal validation, network engineers can provide such guarantees on various aspects of network behavior.
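To make "comprehensive" concrete, here is a toy sketch in plain Python (a hand-rolled ACL over a small address space, not anything from Batfish itself): a handful of spot checks can miss a shadowed rule that an exhaustive check cannot.

```python
import ipaddress

NET = ipaddress.ip_network("10.1.0.0/16")     # space the ACL is meant to permit
SHADOW = ipaddress.ip_network("10.1.1.0/24")  # hypothetical buried deny rule

def acl_permits(dst):
    """Toy ACL: intended to permit all of NET, but a buried rule denies SHADOW."""
    if dst in SHADOW:
        return False
    return dst in NET

# Spot-checking a few "representative" destinations (the test-lab approach)
# can easily miss the bug:
spot = [ipaddress.ip_address(a) for a in ("10.1.0.1", "10.1.7.7", "10.1.200.9")]
print(all(acl_permits(d) for d in spot))   # True: the ACL looks healthy

# An exhaustive check over every destination in the space cannot miss it:
violations = [d for d in NET if not acl_permits(d)]
print(len(violations))                     # 256: the entire shadowed /24
```

Real validators reason symbolically rather than enumerating packets one by one, but the guarantee is the same in spirit: the property holds for every packet, not just the ones you thought to test.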
+
+For example, formal validation tools can answer some or all of the following questions about network behavior:
+
+1. Is the DNS server globally reachable?
+2. Are two subnets isolated with respect to all traffic?
+3. Do spine routers treat all destinations identically?
+4. Can any unencrypted packet go between two branch offices?
+5. Will any interface failure lead to connectivity loss?
+6. Can any external announcement disrupt internal connectivity?
+7. Is all communication with the device secure?
+
+By leveraging formal validation, network engineers can be assured that no new network design, routing or security policy change, or failure will hurt the critical services that the network supports. No longer will network engineers and operators need to worry about rolling out a change that may cause an outage for a critical service, because they can guarantee in advance that this will not happen.
+
+Over the last several years, there have been a number of academic research initiatives to bring formal validation into the realm of networking. Broadly speaking, there are two categories of formal validation techniques: 1) **Control-plane validation,** and 2) **Data-plane validation.**
+
+Data-plane validation techniques, such as [Header Space Analysis (HSA)](https://www.usenix.org/system/files/conference/nsdi12/nsdi12-final8.pdf "Open Link"), [VeriFlow](https://www.usenix.org/system/files/conference/nsdi13/nsdi13-final100.pdf "Open Link"), and [Network-optimized Datalog (NoD)](https://www.usenix.org/node/189019 "Open Link") read the **_current_** forwarding behavior of the network (FIBs, ACLs, firewall rules) and model that to provide correctness guarantees on the **_current_** behavior of the network with respect to all possible packets.
+
+
+
+
+
+Control-plane validation techniques, such as [Batfish](https://www.usenix.org/system/files/conference/nsdi15/nsdi15-paper-fogel.pdf "Open Link") and [Minesweeper](https://ratul.org/papers/sigcomm2017-minesweeper.pdf "Open Link"), start by modeling the network control plane. They understand the operation of devices and the protocols that drive their dynamic state, such as OSPF, IS-IS, BGP, ACLs, and firewall rules. From that model, they compute the network's reachability (RIBs) and forwarding behavior (FIBs, ACLs, firewall rules) and combine that information to provide correctness guarantees for **_all possible_** states of the network.
+
+
+
+
+
+
+The difference in techniques manifests itself in the types of guarantees the system can provide. With control-plane validation, the answers to all 7 of the questions listed above can be guaranteed, both for the current state of the network and for any possible future state. With data-plane validation, answers to only the first 4 can be guaranteed, and only for the current state of the network.
+
+The difference between control-plane and data-plane validation also manifests in whether they can provide proactive guarantees before the configuration is pushed to the network. **Control-plane validation, because it reasons about the configuration itself, can be used for proactive protection.** Data-plane validation cannot be used in that way since it reasons about the current state of the network.
+
+Formal network validation is not science fiction anymore, and its effectiveness has been proven on many real, large networks. Those who want to experience it themselves, and aren't afraid of getting their hands dirty, should go to [www.batfish.org](http://www.batfish.org/), which links to an open source project with the combined capabilities of NoD, Batfish, and Minesweeper, for control-plane as well as data-plane validation. (Intentionet builds its validation solution on top of this open-source project.)
+
+With the ability to comprehensively guarantee critical aspects of network behavior, network engineers and operators can rapidly innovate, explore new technologies and designs, and respond to the changing business needs of their companies.
+
+This is a truly exciting time to be involved in networking. In future blog posts, we will explore the state of **High-level Intent Specification and Development Tools.**
diff --git a/_posts/2017-11-27-dont-accidentally-break-internet-like-level-three.md b/_posts/2017-11-27-dont-accidentally-break-internet-like-level-three.md
new file mode 100644
index 00000000..d445e19b
--- /dev/null
+++ b/_posts/2017-11-27-dont-accidentally-break-internet-like-level-three.md
@@ -0,0 +1,74 @@
+---
+title: "Don't accidentally break the Internet like Level 3 (or Google, Telia, Telekom Malaysia, ...)"
+author: Samir Parikh
+date: "2017-11-27"
+tags:
+ - "batfish"
+ - "bgp"
+ - "networkautomation"
+ - "networkoutage"
+ - "networkvalidation"
+ - "predeployment"
+ - "route-leak"
+ - "validation"
+---
+
+On Monday, Nov 6th, 2017, Level 3 Communications (now part of CenturyLink) made national headlines when a configuration error resulted in a massive outage for many users in the USA. The impacted users were customers of several large ISPs, including Comcast. It took 90 minutes for Level 3 to diagnose and remediate the error, and it took even longer for impacted users to regain Internet access.
+
+> **Given how critical Internet connectivity has become to our lives, outages are highly disruptive for users and damage the reputation of the organizations involved.**
+
+Posts that provide insight into the root cause (e.g., [Wired](https://www.wired.com/story/how-a-tiny-error-shut-off-the-internet-for-parts-of-the-us/), [Outages list](https://puck.nether.net/pipermail/outages/2017-November/010947.html), [Thousand Eyes](https://blog.thousandeyes.com/comcast-outage-level-3-route-leak/)) indicate that the error was related to the configuration of BGP (Border Gateway Protocol), the routing protocol underlying the Internet. In particular, Level 3 appears to have "leaked" more specific BGP routes that it obtained from some of its neighbors. This leak led to traffic traversing paths without ample capacity, creating heavy congestion and packet drops.
+
+The error is eerily similar to [Google's error](http://www.popularmechanics.com/technology/news/a27971/google-accidentally-broke-japans-internet/), which took down Internet access for most of Japan a few months ago, and to many, many prior incidents. To list just a few: [Telia leak](https://www.theregister.co.uk/2016/06/20/telia_engineer_blamed_massive_net_outage/), which impacted large parts of Europe; [iTel leak](https://dyn.com/blog/global-impacts-of-recent-leaks/), which impacted Microsoft and Netflix; [Axcelex leak](https://blog.thousandeyes.com/route-leak-causes-amazon-and-aws-outage/), which impacted Amazon and AWS; and [Telekom Malaysia](https://bgpmon.net/massive-route-leak-cause-internet-slowdown/), which, as it happens, impacted Level 3.
+
+> **The fundamental reason behind configuration errors is not the incompetence of the network engineers, but the inherent complexity in today’s network configuration. Humans are simply incapable of reasoning about its correctness.**
+
+To combat this, most network operators rely on small scale test-labs to create a mock of the production network on which they can test and validate changes. This process is time-consuming, expensive and incomplete. Most of these headline-grabbing network outages are a result of specific conditions in the production network coupled with the configuration change. Ensuring the test environment emulates the exact conditions of the production environment in order to catch these errors is simply not possible.
+
+> **Fortunately, recent advances in network validation can provide strong guarantees on the correctness of network configuration and completely prevent such errors. One such technology is [Batfish](http://github.com/batfish/batfish).**
+
+Batfish is a free, open-source tool for validating network configuration. Given a network's configuration, it models how the network will process routing information and forward traffic. With this capability, engineers can validate that a configuration change does what it is intended to do before the change is pushed to the network. They can also analyze how the network will behave differently after the change, compared to the current configuration.
+
+Using Batfish to prevent accidental route leaks is easy. We illustrate that using the following example network.
+
+![Network Diagram](/assets/images/Net_Diagram.jpg)
+
+AS65000 is mimicking Level 3 in this week’s incident and AS65001 is mimicking Comcast. AS65001 announces a less specific route (10.1.0.0/16) and two more specific routes (10.1.\*.0/24) for the purpose of traffic engineering. AS65000’s policy is to block the more specifics from leaking to AS65002 using route filters.
+
+We created two sets of configurations ([route-leak-1](https://github.com/batfish/batfish/tree/67034746f4d6e52c34bff7bfaa7a4e75b51477da/test_rigs/route-leak-1), [route-leak-2](https://github.com/batfish/batfish/tree/67034746f4d6e52c34bff7bfaa7a4e75b51477da/test_rigs/route-leak-2)) for all the routers of AS65000. The second set represents the new configuration, with an error that allows more specific routes from AS65001 to leak due to inconsistent BGP communities in route filters on AS65000's peering routers, R2 and R3. We do not know the exact error that caused route leaks in the Level 3 incident, but that is not important for our purposes; Batfish can detect leaking irrespective of the underlying error.
+
+To detect the error before deploying the buggy configuration to the network, the operator can do the following (after installing and running Batfish per instructions [here](https://github.com/batfish/batfish/wiki)).
+
+- Initialize the first snapshot, representing the reference (current) configuration, and the second snapshot, representing the new (planned) configuration.
+
+```
+ batfish> init-testrig route-leak-1
+ batfish> init-delta-testrig route-leak-2
+```
+
+- Compute which routes will be announced in the second snapshot but not the first
+
+```
+ batfish> get bgpAdvertisements differential=true
+
+ + EBGP_SENT dstIp:192.168.51.2 srcNode:as65000_R1 srcIp:192.168.51.1 net:10.1.1.0/24 nhip:192.168.51.1 origin:INCOMPLETE asPath:[65000, 65001] communities:[4259840002, 4259840010] orIp:192.168.51.1
+
+ + EBGP_SENT dstIp:192.168.51.2 srcNode:as65000_R1 srcIp:192.168.51.1 net:10.1.2.0/24 nhip:192.168.51.1 origin:INCOMPLETE asPath:[65000, 65001] communities:[4259840002, 4259840010] orIp:192.168.51.1
+```
+
+The output shows that **R1** announces two /24 routes only in the second configuration (indicated by '+'), a red flag if the change was never intended to leak such routes. Had it shown no routes, or only routes expected due to the change, the configuration could be deemed safe to deploy (assuming other correctness checks pass too).
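In scripted form, this differential check reduces to a set difference over advertised prefixes. A minimal sketch (plain Python over hypothetical prefix sets extracted from the two snapshots, not the Batfish CLI) that could gate a deployment:

```python
import ipaddress

# Hypothetical prefixes advertised to the peer in each snapshot
reference = {"10.1.0.0/16"}                                # current configuration
candidate = {"10.1.0.0/16", "10.1.1.0/24", "10.1.2.0/24"}  # planned configuration

# Routes the change would newly announce; anything here that the change
# was not meant to introduce is a red flag
newly_announced = sorted(
    str(ipaddress.ip_network(p)) for p in candidate - reference
)
print(newly_announced)  # ['10.1.1.0/24', '10.1.2.0/24']

leak_detected = bool(newly_announced)
print(leak_detected)    # True: block the deployment
```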
+
+- To confirm whether these additional prefixes are part of a less specific route that is already being advertised, one can run the following command:
+
+```
+ batfish> get bgpAdvertisements prefixSpace=["10.1.1.0/24:0-23"]
+
+ EBGP_SENT dstIp:192.168.51.2 srcNode:as65000_R1 srcIp:192.168.51.1 net:10.1.0.0/16 nhip:192.168.51.1 origin:INCOMPLETE asPath:[65000, 65001] communities:[4259840002] orIp:192.168.51.1
+```
+
+The output shows that there is indeed a less specific route (10.1.0.0/16) covering the more specifics.
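The covering-route confirmation can be sketched the same way with the standard library (again over hypothetical prefixes rather than actual Batfish output):

```python
import ipaddress

covering = ipaddress.ip_network("10.1.0.0/16")  # less specific, already advertised
more_specifics = [ipaddress.ip_network("10.1.1.0/24"),
                  ipaddress.ip_network("10.1.2.0/24")]  # leaked routes from the diff

# Each leaked more-specific lies inside the advertised less-specific,
# so filtering them out would not black-hole any destination.
print(all(p.subnet_of(covering) for p in more_specifics))  # True
```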
+
+That is it! While we demonstrate Batfish’s interactive usage here, similar checks (and many more) can be embedded in the network’s CI/CD pipeline, such that configuration errors are caught before they impact connectivity. The ability to catch errors proactively is a key advantage of control plane validation. Read more about [control versus data validation here](/2017/09/12/new-network-engineering-workflow-formal-validation.html).
+
+Using control plane validation, network engineers can make configuration changes without taking down the Internet, making headlines like **_“How a tiny error shut off the internet for parts of the US”_** a thing of the past.
+
diff --git a/_posts/2018-01-29-intent-specification-languages-simplifying-network-configuration.md b/_posts/2018-01-29-intent-specification-languages-simplifying-network-configuration.md
new file mode 100644
index 00000000..5b6bfdfd
--- /dev/null
+++ b/_posts/2018-01-29-intent-specification-languages-simplifying-network-configuration.md
@@ -0,0 +1,52 @@
+---
+title: "Intent specification languages - simplifying network configuration"
+date: "2018-01-29"
+author: Todd Millstein
+tags:
+ - "batfish"
+ - "networkautomation"
+ - "networkintent"
+ - "networkvalidation"
+ - "predeployment"
+ - "validation"
+---
+
+The growing scale and complexity of today's networks have outpaced network engineers' ability to reason about their correct operation. As a consequence, misconfigurations that lead to downtime and security breaches have become all too common. In his keynote presentation at Future:NET 2017, Ratul Mahajan, the CEO of Intentionet, introduced a new network engineering workflow to alleviate such problems (see image below). The foundation of this new workflow, [formal validation of network configurations](/2017/09/12/new-network-engineering-workflow-formal-validation.html), was introduced in a previous blog post.
+
+![](/assets/images/network_engineering_workflow3.png){:width="800px"}
+
+**In this post, we discuss another component of the workflow:** **_languages for specifying network-wide intent_****.** Network-wide specification languages help bridge the abstraction gap between the intended high-level policies of a network and its low-level configuration. Many network policies involve _global_ network properties---_prefer a certain neighbor_, _never announce a particular destination externally_---but network configurations describe the behavior of _individual_ devices. Engineers must therefore manually decompose network-wide policy into many independent, device-level configurations, such that policy-compliant behavior results from the distributed interactions of these devices. This task is further complicated by fault-tolerance requirements: not only does a network need to adhere to its requirements during the good times, but it should also forward traffic properly in the presence of a reasonable number of node and/or link failures.
+
+Network-wide specification languages attempt to close this abstraction gap by allowing network engineers to express global network policies directly, with a compiler automatically generating the corresponding low-level configurations. This approach is analogous to the trend in software engineering over the last several decades, which has led to ever-higher levels of abstraction and has been a huge boon for the software industry: Imagine writing today's complex software in machine code!
+
+Both the academic and industrial communities have been exploring high-level intent specification languages. On the academic side, languages for software-defined networking (SDN) such as [Flowlog](https://cs.brown.edu/~sk/Publications/Papers/Published/nfsk-flowlog-tierless/), [Frenetic](http://www.frenetic-lang.org/), [Maple](http://conferences.sigcomm.org/sigcomm/2013/papers/sigcomm/p87.pdf), and [Merlin](http://www.frenetic-lang.org/merlin/) allow declarative expression of intra-domain routing policies. These policies are automatically compiled to directives for the OpenFlow protocol, which provides an API to access and update the forwarding rules of each network device. On the industrial side, Cisco’s [Application Centric Infrastructure](https://www.cisco.com/c/en/us/solutions/data-center-virtualization/application-centric-infrastructure/index.html) and Apstra’s [AOS](http://www.apstra.com/) provide high-level policy languages targeted to data center network design and configuration, along with systems that automatically implement the high-level policies on the devices.
+
+Recently, as part of a research collaboration between Intentionet, Microsoft, Princeton University, and UCLA, we have developed the [Propane](https://propane-lang.org/) network-wide programming language. **Propane enables high-level expression of both intra- and inter-domain routing in a uniform framework, along with the preference order of paths in case of failures.** The Propane compiler automatically compiles these policies into configurations for the BGP routing protocol, thereby leveraging BGP's distributed mechanisms for routing and fault tolerance.
+
+
+
+
+
+The Propane compiler guarantees that the distributed BGP implementation it produces will remain compliant with the user’s Propane policy, for **any combination of failures**. This means that traffic will never be sent along a path that was disallowed by the policy and also that routing will always converge to the best policy-compliant paths currently available. Of course, if there are sufficiently many failures in the network, there may be no available (policy-compliant or otherwise) paths and traffic will be dropped.
+
+In studies of policies we obtained for a data center and a backbone network of a large cloud provider, we found that realistic routing policies decompose into two parts. One part defines sets of destination prefixes and sets of peers --- such definitions may be relatively lengthy but are conceptually straightforward. The second part defines the core routing policy, which specifies the legal paths that traffic to the defined sets of destination prefixes may follow. **Surprisingly, we found that the core routing policy of this large cloud provider can be expressed in as little as 30-50 lines of Propane code, whereas the policy requires thousands of lines of low-level BGP configuration.** We believe this large reduction in code is one of the key benefits of Propane, making routing policy easier to define, validate, and evolve.
+
+Below we provide a small example to illustrate the Propane language. More information, including a tutorial on using Propane and the source code of the Propane compiler, is available at [propane-lang.org](https://propane-lang.org/).
+
+Propane policies consist of one or more clauses of the form X => P, where X defines a set of destination prefixes and P defines a set of ranked paths along which traffic may flow to reach those destinations. For example, suppose that the network has two peers, Peer1 and Peer2, and that Peer1 should always be preferred for the set of destination prefixes D. The following clause specifies this policy:
+
+> D => exit(Peer1) >> exit(Peer2)
+
+Here exit(P) represents the set of paths that exit the network at peer P and the constraint S1 >> S2 indicates a preference for the set of paths S1 over the set of paths S2. Therefore, this Propane clause specifies that traffic to destinations in D should never exit from peers other than Peer1 and Peer2, and further that this traffic should only exit from Peer2 when no path through Peer1 is available (e.g., due to failures).
+
+Today network engineers must carefully implement such policies through low-level BGP configuration directives at multiple nodes in the network. For example, implementing the example above requires every router that peers with neighbors other than Peer1 and Peer2 to be configured to filter route announcements for destinations in D from those peers. Additionally, routers that connect directly to Peer1 must assign its routes a higher local preference than routers connecting to Peer2 assign to theirs. And of course, network engineers must implement these requirements in the configuration language of each device vendor. In contrast, the Propane compiler automatically produces the low-level BGP configuration directives at each node that are necessary to faithfully implement the high-level, network-wide Propane policy.
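
For concreteness, the sketch below shows the flavor of low-level directives such a policy compiles to on a single Cisco-style router that peers with both Peer1 and Peer2. All names, ASNs, and addresses are invented for illustration, and the actual Propane compiler output differs:

```
! Assumed for illustration: D = 10.0.0.0/8, Peer1 is AS 65001, Peer2 is AS 65002
ip prefix-list D-PREFIXES permit 10.0.0.0/8
!
! Prefer routes to D learned from Peer1
route-map FROM-PEER1 permit 10
 match ip address prefix-list D-PREFIXES
 set local-preference 200
route-map FROM-PEER1 permit 20
!
! Accept routes to D from Peer2 at default preference only
route-map FROM-PEER2 permit 10
!
router bgp 65000
 neighbor 192.0.2.1 remote-as 65001
 neighbor 192.0.2.1 route-map FROM-PEER1 in
 neighbor 192.0.2.2 remote-as 65002
 neighbor 192.0.2.2 route-map FROM-PEER2 in
```

Routers that peer with other neighbors would additionally need filters rejecting announcements for D; the compiler generates all such directives consistently across the network.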
+
+Finally, we note that network-wide specification languages are complementary to tools that validate low-level network configurations. Even if configurations are generated from high-level policies, the generated configurations can still contain errors for many reasons: bugs in the compiler that generates them, errors and omissions in the original network-wide specification, and manual configuration edits performed by network operators to quickly resolve issues in the field.
+
+Further, network engineers of the future will likely provide multiple high-level specifications in order to declaratively express disparate aspects (routing, security, performance, etc.) of network behavior. In such a setting, it will be critical to [validate the overall network configurations](2017/09/12/new-network-engineering-workflow-formal-validation.html) that result from this combination of specifications.
+
+
+
+
+
+This is an exciting time as researchers and practitioners explore new approaches to improve and simplify network engineering, such as specifying network-wide intent.
diff --git a/_posts/2018-08-21-automation-without-validation-risky-operation.md b/_posts/2018-08-21-automation-without-validation-risky-operation.md
new file mode 100644
index 00000000..cb9d50ab
--- /dev/null
+++ b/_posts/2018-08-21-automation-without-validation-risky-operation.md
@@ -0,0 +1,35 @@
+---
+title: "Automation without validation: Risky operation"
+date: "2018-08-21"
+author: Ratul Mahajan
+tags:
+ - "batfish"
+ - "networkautomation"
+ - "networkvalidation"
+ - "predeployment"
+ - "validation"
+---
+
+If you run a large, complex network, you have either already heavily invested in automating key management tasks or are about to. Network automation is a great way to reduce human errors and accomplish those tasks with consistency and speed.
+
+![Picture of Frustration](/assets/images/Pictofigo_Frustration.png) _To err is human; to really foul things up requires a computer._ — BILL VAUGHAN
+
+But network automation is not without risks. One risk is bugs in the automation logic itself, which occur because handling the diversity of network vendors and devices is hard. Another risk is humans providing incorrect inputs to automation. One senior network engineer recounted to us an incident that drives this point home. His team had automated data center network expansion: a script populated most of the configuration for new devices, but humans had to fill in details such as the AS number. Inevitably, during one of the many uses of the script to provision a new device, an engineer fat-fingered the AS number. That mistake disrupted many key services for an hour.
+
+While errors in non-automated environments may impact only individual routers, errors in automated environments can bring down the entire network in one fell swoop. **In a way, automation trades the risk of frequent, low-impact errors for that of infrequent, high-impact ones.**
+
+Further, automation is rarely perfectly versatile. Even with good automation, operators need to manually make changes that are not supported by the automation framework. A common pattern, for instance, is to generate new configurations via automation but make subsequent changes manually. Unfortunately, even a single manual change can destroy the original correctness guarantees.
+
+Enter network validation, a technology to guarantee that the configurations generated by automation or humans are correct. “Correctness” may be specified in terms of best practices, compliance requirements, or intended data flow behaviors (e.g., all external traffic should traverse a firewall). It may also be defined based on the expected impact of planned changes, e.g., no users should lose connectivity or no services should be unavailable after the change.
+
+Good validation technologies will ensure that the correctness guarantees hold for all possible packet flows and device/link failures and will account for differing device behaviors even in complex, multi-vendor networks. No human can accomplish this via manual review of configuration or testing in a lab environment.
+
+
+
+
+
+Typically, network validation is invoked during the change review and testing phase and before configuration changes are pushed to the network. Unlike lab tests and human reviews, this validation is full-scale and fully automatic. **Network validation closes a key gap in the CI/CD (continuous integration/continuous deployment) pipeline of configuration deployment and provides a robust line of defense against configuration errors.**
+
+If you are serious about network automation, you should get serious about network validation as well. It will safeguard your network against human and automation errors and provide the peace of mind you need as you evolve your network faster than ever.
+
+Getting started is easy! Try the open source [Batfish](https://batfish.org/) tool.
diff --git a/_posts/2018-08-21-plug-hole-in-your-network-automation.md b/_posts/2018-08-21-plug-hole-in-your-network-automation.md
new file mode 100644
index 00000000..e468ee33
--- /dev/null
+++ b/_posts/2018-08-21-plug-hole-in-your-network-automation.md
@@ -0,0 +1,45 @@
+---
+title: "Plug the hole in your network automation — validate changes before you deploy"
+date: "2018-08-21"
+author: Samir Parikh
+tags:
+ - "batfish"
+ - "networkautomation"
+ - "networkvalidation"
+ - "predeployment"
+ - "pybatfish"
+ - "validation"
+---
+
+We are excited to announce the release of [pybatfish](https://github.com/batfish/pybatfish), an open-source Python SDK for Batfish. Batfish is an open-source, multi-vendor network validation framework that enables network engineers, architects and operators to proactively test and validate network design and configuration. It is being used in some of the world’s largest networks to prevent deployment of incorrect configurations that can lead to outages or security breaches.
+
+![](/assets/images/promo-batfish.png)
+
+Historically, network engineers have had to rely on manual lab testing and peer reviews to validate new designs or changes to the network. In recent years, virtual instances of popular network elements and orchestration tools like Vagrant have made a thin layer of automatic testing possible. Such emulation environments can be invaluable. They help users understand the vendor configuration syntax, build base configuration templates, and test new features.
+
+Unfortunately, the emulation-based approach is severely limited. It fails to replicate the scale of the production environment, and the configurations tested on these virtual environments cannot be pushed to production as is (e.g., because interface names don’t match).
+
+Thus, if you want to be sure that the change you are about to push to the network won’t result in an outage or a security breach, you need a full-scale analysis of the production environment. This has not been possible. Until now.
+
+With Batfish, you get comprehensive correctness guarantees for network configuration even in heterogeneous, multi-vendor environments. Batfish simulates the network behavior and builds a model just from device configurations (and, optionally, other data such as routing announcements from peers). With this, Batfish predicts how the network will forward packets and how it will react to failures.
+
+**Proactive correctness guarantees are the missing piece of today’s automation workflows. Validation with Batfish fixes that.**
+
+This capability of building the model from the configuration itself is very important. It enables Batfish to evaluate network changes and guarantee correctness proactively, without requiring configuration changes to be first pushed to the network. Solutions that model network behavior by extracting the RIBs and FIBs from the network cannot provide proactive guarantees about the correctness of the network.
+
+Batfish allows you to query the model for a broad range of information, which can be used to build your validation test suite. Network compliance and auditing tasks like extracting and validating NTP server, DNS server, TACACS server, interface MTU, etc. are trivial even for a multi-vendor network.
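
As a minimal sketch of such a compliance check, suppose the per-node properties have already been extracted into plain Python data (with pybatfish this information would come from a query over the snapshot); the node names and server addresses below are invented:

```python
# Hypothetical per-node properties for a small multi-vendor network, as might
# be extracted via a pybatfish query. Names and addresses are invented.
node_props = {
    "leaf1": {"ntp_servers": ["10.0.0.1", "10.0.0.2"]},
    "leaf2": {"ntp_servers": ["10.0.0.1"]},
}

EXPECTED_NTP = {"10.0.0.1", "10.0.0.2"}

def nodes_with_bad_ntp(props, expected):
    """Return nodes whose configured NTP servers differ from the expected set."""
    return sorted(node for node, p in props.items()
                  if set(p.get("ntp_servers", [])) != expected)

print(nodes_with_bad_ntp(node_props, EXPECTED_NTP))  # ['leaf2']
```

The same comparison works unchanged whether the devices are Cisco, Juniper, or Arista, because the check runs on normalized data rather than raw configuration text.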
+
+Furthermore, complex validation tasks that rely on packet forwarding behavior (which depends on the RIBs and FIBs) become effortless with Batfish:
+
+- Find all possible paths between the application and database servers for your e-commerce site.
+- Validate that a firewall or ACL change needed to deploy a new application to production won’t a) take down existing services or b) create a security hole.
+- Guarantee that the routing policy change you are about to make won’t blackhole some of your traffic (or worse, [take down the internet](2017/11/27/dont-accidentally-break-internet-like-level-three.html)).
+
+At Intentionet, we are committed to helping you build reliable and secure networks, faster. And our commitment to open-source is just as strong as when we started our research six years ago.
+
+**The impossible is now possible: you can build reliable and secure networks faster.**
+
+Today’s release of [pybatfish](https://www.github.com/batfish/pybatfish) furthers that commitment. Pybatfish simplifies the use of Batfish and enables seamless integration into any network automation workflow.
+
+Check out the [Jupyter notebooks](https://github.com/batfish/pybatfish/tree/master/jupyter_notebooks) and [videos](https://youtu.be/Ca7kPAtfFqo) to see examples detailing how easy it is to start proactively validating your network. To learn more, join the [Batfish community on Slack](https://join.slack.com/t/batfish-org/shared_invite/enQtMzA0Nzg2OTAzNzQ1LTUxOTJlY2YyNTVlNGQ3MTJkOTIwZTU2YjY3YzRjZWFiYzE4ODE5ODZiNjA4NGI5NTJhZmU2ZTllOTMwZDhjMzA).
+
diff --git a/_posts/2018-10-23-network_dev_tools.md b/_posts/2018-10-23-network_dev_tools.md
new file mode 100644
index 00000000..209f8d6d
--- /dev/null
+++ b/_posts/2018-10-23-network_dev_tools.md
@@ -0,0 +1,49 @@
+---
+title: "Network Engineers: Time to Restock your Tool Chest"
+date: "2018-10-23"
+author: Dan Halperin
+tags:
+ - "batfish"
+ - "networkautomation"
+ - "networkvalidation"
+ - "predeployment"
+ - "validation"
+---
+
+At Future:NET 2017, our CEO Ratul Mahajan introduced a new network engineering workflow. Designed for operating ever more complex networks at scale, this workflow aims to eliminate misconfigurations that can lead to a downward spiral of outages, security breaches, and other failures, and to make networks less of a long pole in application delivery.
+
+![](/assets/images/female_construction_worker_on_her_way_to_work_at_a_server_room.png){:width="600px"}
+
+As we discussed in previous articles, there are three pillars of this new network engineering workflow: 1) formal validation of network configurations, 2) intent specification languages to bridge the gap between the intended high-level policies of a network and its low-level configuration, and 3) development tools.
+
+We covered [formal validation](/2017/09/12/new-network-engineering-workflow-formal-validation.html) and [intent specification languages](/2018/01/29/intent-specification-languages-simplifying-network-configuration.html) in previous posts; this post focuses on development tools.
+
+When you compare software and network engineering trends at a high level, the contrast is striking. Application development has become remarkably agile, robust and responsive, while the networks that carry those apps have not. They continue to be slow to evolve and prone to error. A critical reason for the difference is tools.
+
+From integrated development environments (IDEs) with valuable plugins for static analysis, dead code detection, and linting to unit test frameworks that enable continuous testing to automated build and deployment tools, the discipline of software engineering has evolved rapidly and for the better.
+
+Network engineers, on the other hand, have not been so lucky. There is not even an IDE for network configurations; we just have simple editors like Vi (or Vim) and Emacs. (See Table 1.)
+
+**Table 1 – A Tale of Two Tool Chests**
+
+
+| Software Engineering Dev Tools | Network Engineering Dev Tools |
+| --- | --- |
+| IDE: IntelliJ, Eclipse, Pycharm | Editors: Vi, emacs, Sublime, Atom |
+| Continuous build and test: Travis, Jenkins | |
+| Unit Test: JUnit, Pytest | |
+| Static Analysis: Pylint, Findbugs, Checkstyle, PMD | |
+| Fuzz Testers: JFuzz | |
+
+
+
+Given that each vendor has its own configuration language (or, in some cases, multiple configuration languages), the possibility of a true IDE emerging is slim at best. But the lack of a true IDE is just one of the challenges facing network engineers.
+
+The critical gap is the lack of network validation frameworks. Developed jointly by researchers at Intentionet, UCLA, USC, and Microsoft Research, [Batfish](https://www.batfish.org/) is designed to plug that gap by detecting bugs in network configurations.
+
+As discussed in a prior [blog post](/2018/08/21/plug-hole-in-your-network-automation.html), Batfish simulates the network behavior and builds a model from device configurations, predicting how the network will forward packets and how it will react to failures. This capability of building the model from just device configurations enables Batfish to evaluate network changes and guarantee correctness proactively, without requiring configuration changes to be first pushed to the network.
+
+With the Python SDK, [Pybatfish](https://www.github.com/batfish/pybatfish), we can now build a suite of development tools for network engineers that together create an automated test environment. Here are four categories of such tools:
+
+**Linting and dead-code detection**. With Pybatfish, a network engineer can identify undefined references or unused declarations (route-maps, ACLs, prefix-lists, community-lists), find a non-standard NTP server, or identify unreachable lines in an ACL.
+
+**Unit testing**. Each network has a list of configuration invariants that are expected to always hold: all interfaces must have an MTU of 4096; all devices in a given data center must have a defined set of NTP, DNS, and TACACS servers; all interfaces must have LLDP enabled. With Pybatfish, these can easily be converted into automated unit tests, such that only changes that pass all of the tests can be pushed to the network, just like we do today for software changes.
+
+**Continuous integration (CI)**. Validating key device configuration attributes is useful, but validating overall network behavior is even more important. You can write tests to ensure that a DNS server is globally reachable, or that no single-interface failure leads to a loss of connectivity for a critical application. Define the critical network behaviors that must always hold as CI tests, and embed them in an automated test environment.
+
+**Change analysis**. Starting with a known-working configuration, you can apply the planned configuration and see how configuration and behavior differ. You can see, for instance, whether flow A is blocked in the planned but not in the reference config, whether flow B takes a different path, or whether new, unintended flows appear. This exploration lets you analyze a proposed change in real time and use the results to build your unit and integration test suites.
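
A toy sketch of such a differential comparison, with invented flows and dispositions (Batfish automates this over real snapshots):

```python
# Flow dispositions in a reference snapshot vs. a planned one. The flow
# labels and results are invented for illustration.
reference = {"web->db:3306": "permit", "web->dns:53": "permit"}
planned = {"web->db:3306": "deny", "web->dns:53": "permit",
           "ext->db:3306": "permit"}

# Flows whose disposition changed between the two snapshots.
changed = {flow: (reference[flow], planned[flow])
           for flow in reference.keys() & planned.keys()
           if reference[flow] != planned[flow]}
# Flows that exist only in the planned snapshot (possibly unintended access).
new_flows = sorted(planned.keys() - reference.keys())

print(changed)    # {'web->db:3306': ('permit', 'deny')}
print(new_flows)  # ['ext->db:3306']
```

Both outputs would warrant a closer look before deployment: an existing flow is newly denied, and an external host gains access it did not have before.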
+
+Software engineers have leveraged a suite of tools to rapidly respond to changing business needs, accelerate development and improve reliability. The network is now increasingly becoming the bottleneck in application deployment and the digital transformation of business. It is time for network engineers to respond. The foundational elements for the new network engineering workflow are here.
diff --git a/_posts/2018-12-12-we-made-networks-work.md b/_posts/2018-12-12-we-made-networks-work.md
new file mode 100644
index 00000000..25672cd3
--- /dev/null
+++ b/_posts/2018-12-12-we-made-networks-work.md
@@ -0,0 +1,47 @@
+---
+title: "We made networks work. Now let’s make them work well."
+date: "2018-12-12"
+author: Ratul Mahajan
+tags:
+ - "batfish"
+ - "networkautomation"
+ - "networkvalidation"
+ - "predeployment"
+ - "validation"
+---
+
+A few decades ago, car odometers were designed to roll over to zero after 99,999 miles because it was rare for cars to last that long. But today cars come with a warranty for 100,000 miles because it is rare for cars to not last that long. This massive reliability improvement has come about despite the significantly higher complexity of modern cars. Cars have followed the arc of many engineering artifacts, where human ingenuity brought them to their initial working form and then robust engineering techniques made them work well.
+
+![](/assets/images/two-cars-odometer.png)
+
+The computer hardware and software domains have also invested heavily in robust engineering techniques to improve reliability. For hardware, there is an entire industry of electronic design automation (EDA) tools that help engineers design hardware and provide confidence that the designs are correct before devices are fabricated. Similarly, the software industry has adopted techniques such as unit testing, symbolic analysis, and continuous integration (CI) that help engineers write correct software. In both domains, investments in correctness were essential to continue to viably engineer increasingly complex systems.
+
+## Networking is a reliability laggard
+
+One domain where reliability improvements have lagged is computer networking, where outages and security breaches that disrupt millions of users and critical services are all too common. While there are many underlying causes for these incidents, studies have consistently shown that the vast majority are caused by errors in the configuration of network devices. Yet engineers continue to manually reason about the correctness of network configurations.
+
+The increased use of automation to push changes to devices is a positive trend. But while automation can prevent “fat fingering”, it does not guarantee correctness. In fact, it increases the risk that the entire network will be knocked out quickly by a single, incorrect change. The faster the car, the greater the danger of a serious collision.
+
+It is high time that networking adopted more reliable engineering practices. While the original Internet was an academic curiosity, today’s networks are too critical for businesses and society, and also too complex—they span the globe and connect billions of endpoints—for their correctness to be left (solely) to human reasoning.
+
+The lack of guarantees for correctness not only increases the risk of outages and breaches but also hurts agility. When engineers are afraid of breaking the network, they cannot update it as quickly as applications and business require. We must get serious about network correctness.
+
+## Comprehensive and proactive analysis
+
+We should aim for strong correctness guarantees. The strongest guarantees stem from verification, i.e., comprehensive analysis that can prove that the network will behave exactly as intended in all possible cases. The network will block all packets that it is supposed to block; it will carry all packets that it is supposed to carry, only along the intended paths; and it will continue to do so despite equipment failures. This type of analysis goes far beyond traditional network validation techniques, such as emulation or grepping for specific configuration lines, which offer no correctness guarantees. In a future post, we will discuss different forms of network validation and their relative capabilities.
+
+In addition to being comprehensive, we should aim to be proactive, i.e., check for correctness before changing the network configuration. Comprehensive and proactive analysis can make configuration-related outages and security breaches a thing of the past. It can ensure at all times that access control lists (ACLs) are correct, that sensitive resources are protected against all untrusted entities no matter what types of packets they send, and that critical services remain accessible even when failures occur.
+
+## Overcoming heterogeneity and large search spaces
+
+Historically, rigorous correctness guarantees for networks have been considered out of reach, but the research community has recently developed promising techniques that overcome long-standing obstacles. One obstacle is heterogeneity: there are many (physical and virtual) devices and vendors, each with its own configuration language, data format, and default settings. To accurately model network behavior in the face of this diversity, researchers have adapted ideas from the programming languages community, such as parser generators and intermediate representations that are neutral to source languages.
+
+The second obstacle is the vast space of possible cases that must be searched for bugs. There are more than 1 trillion possible TCP packets (given a 40-byte header) and a million possible two-link failure cases in a 1000-link network. To comprehensively and tractably search such large spaces, researchers have adapted ideas from hardware and software verification such as satisfiability (SAT) constraint solvers and binary decision diagrams (BDDs).
+
+The progress in this sphere has moved well beyond theory, and the community has built tools such as [Batfish](https://www.batfish.org/) that can guarantee correctness for real-world networks. Batfish is an open source tool originally developed by researchers at Microsoft, UCLA, and USC; engineers at Intentionet, Princeton, BBN Technologies, and other institutions have since made significant contributions to it. Based on network configurations (and, optionally, other data such as routing tables), Batfish can flag any violation of intended security or reliability policy.
+
+**Today, Batfish is being used at several global organizations to proactively validate network configurations.**
+
+Combined with automation, Batfish enables a new paradigm for networking, one where every single change is proactively and comprehensively validated for correctness and only correct changes are deployed.
+
+We now have the means to transform networks from engineering artifacts like older cars, which were not expected to last for 100,000 miles, to ones that we know will work far beyond traditional limits. So, let’s make networks work really well!
diff --git a/_posts/2019-01-16-the-what-when-and-how-of-network-validation.md b/_posts/2019-01-16-the-what-when-and-how-of-network-validation.md
new file mode 100644
index 00000000..b3c863f3
--- /dev/null
+++ b/_posts/2019-01-16-the-what-when-and-how-of-network-validation.md
@@ -0,0 +1,151 @@
+---
+title: "The what, when, and how of network validation"
+date: "2019-01-16"
+author: Ratul Mahajan
+tags:
+ - "batfish"
+ - "networkautomation"
+ - "networkvalidation"
+ - "predeployment"
+ - "validation"
+---
+
+Engineers tasked with configuring and managing a computer network have historically been forced to do almost everything manually: generate device configurations (and changes to them), commit them to the network, and check afterward that the network behaves as expected. These tasks are not only laborious but also anxiety-inducing, since a single mistake can bring down the network or open a gaping security hole.
+
+But now networking is on an exciting journey of developing technologies that aid engineers with these tasks and help them to run complex networks with high reliability. These technologies provide two capabilities: **automation** and **validation**.
+
+- **_Automation_** augments the hands and eyes of network engineers and helps them log into devices, extract information, copy data, and so on.
+- **_Validation_** augments their brains and helps them predict the impact of different actions, reason about correctness, and diagnose unexpected network behavior.
+
+These capabilities are conceptually distinct, though some tools provide both. Network validation is the focus of this post.
+
+### How well do you speak validation?
+
+There is a wide maturity gap today between automation and validation.
+
+![](/assets/images/Screenshot-2023-12-17-at-9.01.32-PM.png){:width="800px"}
+
+Because it builds on server automation, network automation is more mature and has a well-developed lingua franca. Engineers can precisely describe its different modalities using terms like “idempotent,” “task-based,” “state-based,” “agentless,” etc.
+
+Network validation, however, does not have a nuanced vocabulary. The general term “network validation” gets used to refer to a number of disparate activities, and specific terms get used by different engineers to mean different things.
+
+This lack of nuance hinders the communication and collaboration required to advance network validation technology. That, in turn, harms the adoption of network automation. It is too risky to use automation without effective validation; a single typo can bring down the entire network within seconds.
+
+
+
+**The faster the car, the better the collision-prevention system needs to be.**
+
+
+
+In this post, we outline different dimensions of network validation and hope to start a conversation about developing a precise vocabulary. When talking about network validation, there are three important dimensions to consider:
+
+- i) **What** is the scope of validation?
+- ii) **When** is validation done?
+- iii) **How** is validation done?
+
+The “what” involves three different scopes; the “when,” two simple possibilities; and the “how,” four separate approaches. The choices along these dimensions can effectively describe the functioning of existing validation tools.
+
+### What is the scope of validation?
+
+
+
+![](/assets/images/stacked-umbrellas.png){:width="400px"}
+
+The most critical dimension to consider is the scope of validation, which determines the level of protection. As with hardware and software validation, there are three possibilities.
+
+#### Unit testing
+
+Unit testing checks the correctness of individual aspects of device configuration such as correct DNS configuration, interfaces running OSPF, accurate BGP autonomous system numbers (ASN), and compatible parameters for IPSec tunnel endpoints.
+
+Unit testing is simple and direct—the root cause of the fault is immediately clear when a test fails—but it says little about the end-to-end _behavior_ of the network. It is not hard to imagine situations where all unit tests pass, but the network does not deliver a single packet.
+
+#### Functional testing
+
+Functional testing checks end-to-end behavior for specific scenarios: whether DNS packets from host1 can reach the server, whether data center leaf routers have a default route pointing to the spine, whether a specific border router is preferred for traffic to Google.com, and whether traffic utilizes the backup path when a link fails.
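
The flavor of such an end-to-end check can be sketched as a toy longest-prefix-match walk over per-node forwarding tables; the topology and routes below are invented, and real tools compute the FIBs from the actual configurations:

```python
import ipaddress

# Hypothetical FIBs: node -> list of (prefix, next-hop node). Invented.
fibs = {
    "leaf1":  [("0.0.0.0/0", "spine1")],
    "spine1": [("10.2.0.0/16", "leaf2"), ("0.0.0.0/0", "border1")],
    "leaf2":  [("10.2.0.53/32", "dns-server")],
}

def trace(node, dst, max_hops=8):
    """Follow longest-prefix-match next hops until there is no route
    or we reach a node with no FIB (an end host)."""
    path = [node]
    addr = ipaddress.ip_address(dst)
    while node in fibs and len(path) <= max_hops:
        matches = [(p, nh) for p, nh in fibs[node]
                   if addr in ipaddress.ip_network(p)]
        if not matches:
            path.append("DROPPED")
            return path
        _, node = max(matches,
                      key=lambda m: ipaddress.ip_network(m[0]).prefixlen)
        path.append(node)
    return path

print(trace("leaf1", "10.2.0.53"))  # ['leaf1', 'spine1', 'leaf2', 'dns-server']
```

A functional test would assert that this traced path matches the intended one; its limitation is that it checks only the specific packets and failure scenarios you thought to try.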
+
+Unlike unit testing, functional testing can provide assurance about network behavior. However, as with software testing, its Achilles heel is completeness: it provides correctness guarantees only for tested scenarios, while the space of possible packets, failures, and external routes is astronomically large. For packets alone, there are more than a trillion possible TCP packets (40-byte header).
+
+Because it is impossible to test all scenarios, the correctness guarantees of functional testing are inherently incomplete. Just because a few (or even a hundred) test packets cannot cross the isolation boundary does not mean that _no_ packet can. Completeness of guarantees is where verification comes in.
+
+#### Verification
+
+Verification ensures correctness for all possible scenarios within a well-defined context. It is a formal (mathematical) approach, though the term “verification” sometimes gets incorrectly used for other types of checks. Example guarantees that verification can provide are: _all_ DNS packets, irrespective of the source host or port, can reach the DNS server; the path via a specific border router is preferred for _all_ external destinations; and the services stay available despite _any_ link failure. Such strong guarantees offer network engineers the confidence to rapidly evolve their networks.
+
+### When is validation done?
+
+The timing of validation is the second dimension to consider, for which there are two possibilities.
+
+#### Post-deployment
+
+Validation is done _after_ deploying changes to the network, to check whether they had the intended impact. With post-deployment validation, errors can make it to the production network, but the duration of their impact is shortened.
+
+#### Pre-deployment
+
+![](/assets/images/pre-post-deploy.png){:width="1000px"}
+
+
+
+Validation is done proactively, _before_ deploying changes to the network. Proactive validation can ensure that erroneous changes never reach the production network, providing a higher degree of protection than post-deployment validation.
+
+### How is validation done?
+
+The final dimension of network validation to consider is the validation approach itself. There are four main approaches, each capable of catching different types of errors.
+
+#### Text analysis
+
+Text analysis scans network configuration and other data without deeply understanding semantics. It can check, for instance, that lines with “name-server” (for DNS configuration) exist and contain specific IP addresses.
+
+Text analysis cannot check network behavior and tends to be brittle; another line elsewhere in the configuration, for instance, could counteract the line being checked. Still, it is commonly used when other options are not available.
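+
+A sketch of this style of check, on a hypothetical configuration snippet, illustrates the brittleness. The textual check passes even though a later line negates the setting:

```python
import re

# Hypothetical device configuration snippet (illustrative only).
config = """\
ip name-server 10.0.0.53
ip name-server 10.0.0.54
no ip name-server 10.0.0.53
"""

# Text-level check: does a "name-server" line with the expected IP exist?
has_dns = re.search(r"^ip name-server 10\.0\.0\.53$", config, re.MULTILINE)
print(bool(has_dns))  # True, even though the "no ip name-server" line negates it
```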
+
+#### Emulation
+
+Emulation uses a testbed of physical or virtual devices (VMs or containers), where engineers can deploy configurations and check the resulting network behavior.
+
+When the emulation and production software is similar, emulation can help predict what will happen in production. However, it is difficult to build a full-scale replica of the production network using emulation, either because of limited resources or because software images of some devices are not available. Using smaller versions dilutes correctness guarantees.
+
+#### Operational state analysis
+
+In operational state analysis, engineers push changes to the production network and check if they produced the intended impact (e.g., did the newly configured BGP session come up?).
+
+The key advantage of this approach is that it checks the behavior of the actual network. However, it supports only post-hoc validation and will, therefore, let errors reach the network. Further, it cannot be used to test behavior under scenarios such as large-scale failures, because inducing them may disrupt running applications.
+
+#### Model-based analysis
+
+Model-based analysis builds a model of network behavior from the network's configuration and other data, such as routing announcements from outside. It then analyzes that model to check behaviors across a range of scenarios. It is a broad category that includes simulation as well as abstract mathematical methods. Its two formal variants have been previously covered [here](2017/09/12/new-network-engineering-workflow-formal-validation.html).
+
+Model-based analysis is the only approach that can perform verification, because evaluating all possible scenarios requires a model (though not all model-based tools support verification). A key concern with model-based analysis is model accuracy. But, as we know from other domains, such models improve over time, can come to outperform human experts, and need not be completely accurate to find errors.
+
+#### Comparing validation approaches
+
+Different validation approaches are capable of finding different types of errors. The class of errors found by text analysis is a subset of those found by other approaches.
+
+![](/assets/images/emulation.png){:width="400px"}
+
+
+
+While model-based analysis can consider the widest range of scenarios, operational state analysis can find bugs in device software that model-based analysis cannot.
+
+Errors that emulation can find overlap heavily with those that operational state analysis can find. Emulation, however, can help find device software bugs triggered by failures, which are often difficult to study in the production network. Conversely, operational state analysis can find errors that emulation misses, because emulation rarely mimics the size and traffic conditions of production networks faithfully.
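+
+These relationships can be made concrete with illustrative, entirely made-up error classes per approach:

```python
# Illustrative error classes per validation approach; the labels are
# invented, but the set relationships mirror the comparison above.
text_analysis = {"acl_typo"}
model_based   = {"acl_typo", "route_leak", "bad_failover_config"}
emulation     = {"acl_typo", "flow_blocked", "sw_bug_on_failover"}
operational   = {"acl_typo", "flow_blocked", "sw_bug_at_scale"}

# Text analysis finds a subset of what every other approach finds.
print(text_analysis <= model_based and
      text_analysis <= emulation and
      text_analysis <= operational)                      # True

# Emulation and operational state analysis overlap heavily,
# but each finds issues the other misses.
print(sorted(emulation & operational))                   # ['acl_typo', 'flow_blocked']
print("sw_bug_on_failover" in (emulation - operational)) # True: failure-triggered
print("sw_bug_at_scale" in (operational - emulation))    # True: scale-triggered
```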
+
+### Classifying existing tools
+
+The table below classifies some open source tools along these dimensions.
+
+| | **What** | **When** | **How** |
+| --- | --- | --- | --- |
+| [Batfish](https://www.batfish.org/) | Verification | **Pre-deployment** | Model-based analysis |
+| [GNS3,](https://www.gns3.com/) [vrnetlab](https://github.com/plajjan/vrnetlab) | Functional testing | **Pre-deployment** | Emulation |
+| [ns-3](https://www.nsnam.org/) | Functional testing | **Pre-deployment** | Model-based analysis |
+| Ping, Traceroute | Functional testing | Post-deployment | Operational state analysis |
+| [rcc](https://github.com/noise-lab/rcc), most homebrew scripts | Unit testing | **Pre-deployment** | Text analysis |
+| Scripts using [NAPALM](https://napalm-automation.net/)+[TextFSM](https://github.com/networktocode/ntc-templates) | Unit testing | Post-deployment | Operational state analysis |
+
+
+
+### Summary
+
+In this post, we outlined the rich space for network validation in terms of its three key dimensions: scope (what), timing (when), and analysis approach (how).
+
+We did not address the important question of which option(s) to pick for a given network. The answer is not as straightforward as picking the “best” one within each dimension. For instance, while model-based analysis provides the strongest guarantees for configuration correctness, coupling it with emulation or operational-state analysis may be needed if bugs in device software are a concern. Further complicating matters, the choices are not completely independent: some combinations, such as emulation and verification, are incompatible.
+
+In a future post, we will outline how to make these choices and implement a network automation and validation pipeline that effectively augments engineers’ hands, eyes, and brains. Such a pipeline would enable engineers to evolve even the most complex networks with confidence, without fear of outages and security holes. Stay tuned!
diff --git a/_posts/2019-03-15-designing-a-network-validation-pipeline.md b/_posts/2019-03-15-designing-a-network-validation-pipeline.md
new file mode 100644
index 00000000..3017170a
--- /dev/null
+++ b/_posts/2019-03-15-designing-a-network-validation-pipeline.md
@@ -0,0 +1,121 @@
+---
+title: "Designing a Network Validation Pipeline"
+date: "2019-03-15"
+author: Samir Parikh
+tags:
+ - "batfish"
+ - "networkautomation"
+ - "networkvalidation"
+ - "predeployment"
+ - "validation"
+---
+
+The networking industry is on an exciting journey of automating tasks that engineers have historically done manually, such as deploying configuration changes to devices and reasoning about the correctness of those changes before and after deployment. These capabilities can tame the complexity of modern networks and make them more agile, reliable, and secure.
+
+Success on this journey requires effective pipelines for network _automation_ as well as _validation_. Network automation focuses on the mechanical aspects of engineers’ tasks, such as generating device configurations, logging into devices, and copying data. Network validation focuses on analytical aspects, such as predicting the impact of configuration changes and reasoning about their correctness. Automation and validation go hand in hand because automation without validation is risky.
+
+**If automation is a powerful engine that helps a car go fast, validation is a collision prevention system that helps it do that safely.**
+
+Because much has already been written about network automation, we focus on network validation. Previously, we discussed different dimensions of network validation and the options available within each. We now discuss how to combine those options into an effective network validation pipeline. But first, let’s recap the prior discussion, which you can read in detail [here](2019/01/16/the-what-when-and-how-of-network-validation.html).
+
+### Recap: The what, when, and how of network validation
+
+We call the dimensions of network validation **what**, **when**, and **how**. These correspond to the scope, timing, and approach used for validation.
+
+![](/assets/images/what-is-the-scope-of-validation.png){:width="800px"}
+
+* * *
+
+![](/assets/images/when-is-validation-done.png){:width="800px"}
+
+* * *
+
+![](/assets/images/how-is-validation-done.png){:width="800px"}
+
+### Factors to consider
+
+When designing a network validation pipeline, two important factors must be considered.
+
+**Coverage**: Coverage refers to the types of errors or bugs that the validation pipeline can find. Higher coverage is, of course, more desirable. The “How” figure above provides a guide for thinking about coverage. It shows the classes of issues that different approaches can find. Model-based analysis covers the widest range of scenarios, but emulation and operational state analysis can find unique classes of issues, such as bugs in device software.
+
+The coverage of emulation and operational state analysis overlaps heavily, though each approach can find some unique issues. Emulation can help find device software bugs that are triggered only under failures because we wouldn’t want to induce a failure in the production network to study its impact. Operational state analysis can help find bugs that are triggered only at a specific scale or traffic conditions that are difficult to emulate.
+
+Issues found by text analysis are a subset of those found by other approaches. We thus exclude text analysis from our recommendations below. However, if other validation approaches are not available, text analysis is better than no validation.
+
+**Resources**: A validation pipeline that requires fewer resources is preferable. Emulation is resource hungry because it needs VM or container images for production devices and a physical or virtual testbed on which to run these images. The size of the testbed depends on how closely we want to emulate the production network. On the other hand, approaches like model-based analysis can work with a single well-provisioned server.
+
+### Validation pipelines
+
+With these factors in mind, we outline three pipelines. The first is optimized for coverage, the second for resources, and the third balances the two. All pipelines can find configuration errors (design bugs) as well as device software bugs, though they differ in when and which types of issues are covered.
+
+#### Coverage-optimized pipeline
+
+![](/assets/images/model-based-emulation.png){:width="800px"}
+
+
+
+The figure above shows a validation pipeline that optimizes for coverage. It chains model-based analysis, full-scale emulation, and operational state analysis for verification, functional testing, and unit testing, respectively.
+
+1. **Model-based analysis**: Model-based analysis is first in the pipeline because it can quickly find configuration errors and has the broadest coverage. This leads to a faster development cycle. As soon as a candidate configuration is generated, it can be verified for correctness using a battery of checks, akin to testing software as part of the build process. If a check fails, new candidate configurations are generated and the process repeats. Model-based analysis allows you to check the expected behavior of the network. Some of these checks may be specific to changes being made, such as:
+
+ 1. Will the newly configured BGP session actually come up?
+ 2. Will the ACL change allow users to access the new web server?
+
+ Other checks may ensure adherence to overall network policy, such as:
+ 1. No packet should traverse between two isolated subnets.
+ 2. No single link failure will result in a critical service becoming unavailable.
+2. **Full-scale emulation**: We next perform functional testing in an emulation environment that replicates the production network. Emulation is typically constrained by resources and execution time, so only a limited set of test cases can be executed. However, since model-based analysis has already eliminated configuration errors, we can optimize the tests conducted using emulation. In particular, emulation should focus on validating the device software and ensuring that configuration changes do not trigger device software bugs. We run functional tests that check, for instance, that:
+ 1. Selected flows are blocked or forwarded as expected (typically done by using ping or traceroute).
+    2. Traffic fails over as expected after a link or node failure.
+    3. Device functions properly under varying traffic loads (typically done using traffic generators).
+
+    In addition, coupling emulation with model-based analysis can improve the coverage of software bugs. In particular, we can compare the routing and forwarding tables of every emulated device with what is predicted by model-based analysis. If the two agree, we get stronger guarantees than with emulation alone. Disagreements point to a device software bug (or to a modeling inaccuracy that needs fixing).
+3. **Operational state analysis**: Finally, we deploy the configuration to the network and use operational state analysis to ensure that changes are pushed correctly and produce expected behavior. Given the validation that has occurred earlier, we can perform quick unit testing in this step, such as:
+ 1. Check that the configuration on the device is identical to what was pushed.
+ 2. All expected interfaces are up.
+ 3. All expected routing sessions are established.
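+
+The fail-fast structure of this pipeline can be sketched in Python. The stage functions below are hypothetical placeholders; in practice they would invoke Batfish, an emulation testbed, and device queries:

```python
# Fail-fast validation pipeline: each stage must pass before the next,
# more expensive, stage runs. Stage bodies are illustrative placeholders.
def model_based_checks(configs):
    # e.g., Batfish verification of candidate configs (cheap and broad)
    return all(cfg.get("valid", True) for cfg in configs)

def emulation_tests(configs):
    # e.g., functional tests in a full-scale emulated replica
    return True

def operational_checks(devices):
    # e.g., post-deployment unit tests: interfaces up, sessions established
    return True

def run_pipeline(configs, devices):
    stages = [
        ("model-based analysis", lambda: model_based_checks(configs)),
        ("full-scale emulation", lambda: emulation_tests(configs)),
        ("operational state analysis", lambda: operational_checks(devices)),
    ]
    for name, check in stages:
        if not check():
            return f"failed at {name}: fix configs and rerun"
    return "change validated and deployed"

print(run_pipeline([{"valid": True}], ["leaf1", "spine1"]))  # change validated and deployed
```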
+
+#### Resource-optimized pipeline
+
+Emulating the full production network is not always viable because VM or container images may be unavailable or may not match the capabilities of production devices in terms of supported features or interface types. An alternative pipeline, shown below, can be used in such settings.
+
+![](/assets/images/model-based-analysis.png){:width="800px"}
+
+
+
+1. **Model-based analysis**: Like the coverage-optimized pipeline, this pipeline starts with comprehensive verification using model-based analysis to weed out configuration errors before changes are pushed to production.
+2. **Operational state analysis**: We next jump straight to operational state analysis. Since the emulation stage has been eliminated, the ability to find device software bugs _before_ configuration is deployed is lost. Additionally, due to the risk of outages in production, it is not feasible to test how the network will react to failures. But if model-based analysis has comprehensively looked for configuration errors, the risk of major incidents in production is low.
+
+    We can, however, regain some of the lost coverage against device software bugs through more _post_-deployment tests than in the coverage-optimized pipeline. In particular, in addition to unit tests that check the status of routing protocol sessions and interfaces, we can perform functional tests that we did earlier in emulation and that are feasible to run on the production network. Such tests include:
+ 1. Checking that selected flows are blocked and forwarded as expected by running ping and traceroute.
+ 2. Comparing routing and forwarding tables in production to those predicted by model-based analysis.
+
+#### Balancing coverage and resources
+
+![](/assets/images/balancing_coverage.png){:width="800px"}
+
+
+
+The two pipelines above optimize, respectively, for coverage and resources. In many cases, however, neither may be the appropriate solution. The coverage-optimized pipeline may be infeasible because it needs full-scale emulation, and completely removing emulation, as in the resource-optimized pipeline, may be undesirable.
+
+One way to achieve a balance is via a scaled-down emulated environment that represents key aspects of the production network. This environment may even be customized to the change under test. For instance, if the change involves access control lists (ACLs), the environment should include a device with the modified ACL.
+
+Such a scaled-down environment can help uncover a class of device software bugs and is particularly helpful for detecting bugs in features that were not previously used in the network (features already in use have been exercised heavily in production and are thus less likely to harbor bugs).
+
+However, not all tests that can be run in full-scale emulation can be performed with scaled-down emulation. For instance, we cannot check whether the network reacts as expected to any device or link failure. This shortcoming may be acceptable if model-based analysis has already confirmed that failovers are configured correctly.
+
+#### Pipeline comparison summary
+
+The table below summarizes the coverage provided and resources needed by the three pipelines above.
+
+| | **Coverage Optimized** | **Resource Optimized** | **Balanced** |
+| --- | --- | --- | --- |
+| **Coverage for configuration errors** | Pre-deployment | Pre-deployment | Pre-deployment |
+| **Coverage for device software bugs** | Pre-deployment | Post-deployment | Some pre-deployment, some post-deployment |
+| **Resources** | High | Low | Medium |
+
+
+
+### You can build it today!
+
+The validation pipelines above are not pipe dreams: they can be built today using open source tools alone. [Batfish](https://www.batfish.org/) can perform model-based analysis and verification; emulation engines such as [GNS3](https://www.gns3.com/) can emulate a variety of network devices; and a combination of [NAPALM](https://napalm-automation.net/) and [TextFSM](https://github.com/networktocode/ntc-templates) templates can help analyze the operational state of the network. Finally, automation frameworks such as [Ansible](https://www.ansible.com/) and [Salt](https://www.saltstack.com/) can orchestrate and link these validation steps.
+
+Even existing networks can be incrementally transitioned to such a pipeline. We will outline an approach for that in a future post.
+
+By combining automation and validation, engineers will be able to rapidly evolve the network, securely and reliably. They can finally have a car that is both fast _and_ safe.
diff --git a/_posts/2019-04-01-announcing_ai-ml.md b/_posts/2019-04-01-announcing_ai-ml.md
new file mode 100644
index 00000000..d1fc1d04
--- /dev/null
+++ b/_posts/2019-04-01-announcing_ai-ml.md
@@ -0,0 +1,37 @@
+---
+title: "Announcing AI-ML"
+date: "2019-04-01"
+author: Samir Parikh
+tags:
+ - "automated-intent"
+ - "batfish"
+ - "intent"
+ - "networkautomation"
+ - "networkvalidation"
+---
+
+We are proud to announce **Batfish AI-ML®**, our latest product. Batfish AI-ML, or Automatic Intent Mind Link, is the **industry’s first and only automatic intent extraction solution.** It works seamlessly across all networks, be they data centers, enterprise campuses, service provider networks, or hybrid and multi-cloud deployments.
+
+![](/assets/images/april-comic.png){:width="600px"}
+
+## Why Batfish AI-ML?
+
+Network engineers have told us repeatedly that the hardest part of intent-based networking (IBN) is intent specification. Knowing in their heads what the network should do is one thing, but telling that to a machine, also called programming in some circles, is an entirely different ballgame. The difficulty of this task renders IBN almost pointless.
+
+That all changes now because Batfish AI-ML dispenses with the need for intent specification. All network engineers have to do is _think_ about the policies their network is intended to meet. Batfish AI-ML will automatically extract network intents from their thoughts. The further this thinking is from the painstaking complexity of the real network, the better. Network engineers no longer have to fight YAML, YANG, NETCONF, OpenConfig, or any of their messy brethren.
+
+## How does it work?
+
+Batfish AI-ML builds on recent advances in Brain Network Interface (BNI). To extract intent, network engineers simply don the Batfish AI-ML helmet while thinking about their network intents. For best results, we recommend shaved (or bald) heads.
+
+When linked with the [Batfish intent validation framework](http://www.batfish.org/), the Batfish AI-ML helmet can send Slack notifications to users’ managers with a pass/fail status of each intent. The managers can also configure Batfish AI-ML to deliver a low-voltage shock for each intent that is violated.
+
+## Try it today!
+
+You can order the Batfish AI-ML helmet from our [website](https://www.batfish.org/) or leading online retailers.
+
+If you buy this week, your helmet will be upgraded for free to support new and exciting features coming down the pipeline, such as:
+
+- Advanced filtering to separate network intents from unrelated thoughts
+- A multi-user mode that reconciles conflicting network intents
+- Functioning on top of a tinfoil hat, which network engineers wear as a safety precaution
diff --git a/_posts/2019-06-14-announcing-batfish-ansible.md b/_posts/2019-06-14-announcing-batfish-ansible.md
new file mode 100644
index 00000000..c4b8cfc3
--- /dev/null
+++ b/_posts/2019-06-14-announcing-batfish-ansible.md
@@ -0,0 +1,159 @@
+---
+title: "Announcing Ansible modules for Batfish"
+date: "2019-06-14"
+author: Samir Parikh
+tags:
+ - "ansible"
+ - "batfish"
+ - "networkautomation"
+ - "networkvalidation"
+ - "predeployment"
+ - "validation"
+---
+
+We are excited to announce Ansible modules for Batfish. Now, network engineers can invoke the power of Batfish within Ansible-based automation workflows.
+
+
+
+Network automation is like a car with a powerful engine: it may get you places quickly, but it does not guarantee that you’ll get there _safely_. Safe driving requires advanced collision prevention systems. Similarly, safe network automation requires pre-deployment validation, which can ensure that network changes have the intended impact, do not cause an outage, and do not open a security hole, _before_ the change is pushed to the network.
+
+Batfish is a powerful pre-deployment validation framework. It can guarantee security, reliability, and compliance of the network by analyzing the configuration (changes) of network devices. It builds a detailed model of network behavior from device configurations and finds violations of network policies (built-in, user-defined, and best-practices).
+
+Before today, using Batfish required writing Python code. **Today’s release enables engineers to add validation to their Ansible playbooks without writing any Python code.**
+
+Let’s walk through a few example use cases to get a taste of how it can be done.
+
+## Use case I: Fact extraction
+
+To extract “facts” (config settings) from configuration files, one can simply do the following.
+
+```
+ - name: Setup connection to Batfish service
+ bf_session:
+ host: localhost
+      name: local_batfish
+
+ - name: Initialize the example network
+ bf_init_snapshot:
+ network: example_network
+ snapshot: example_snapshot
+ snapshot_data: ../networks/example
+ overwrite: true
+
+ - name: Retrieve Batfish Facts
+ bf_extract_facts:
+ output_directory: data/bf_facts
+ register: bf_facts
+```
+
+The first task above establishes a connection to the Batfish server. The second task initializes the snapshot from the provided data. The third task extracts facts from the snapshot and writes them to the specified output directory, one file per device.
+
+The complete output can be found [here](https://github.com/batfish/ansible/tree/master/tutorials/playbooks/data/bf_facts). The snippet below highlights key facts that Batfish extracts from device configuration files:
+
+```
+ nodes:
+ as1border1: ⇐ Device name
+ BGP: ⇐ BGP Process configuration attributes
+ Router_ID: 1.1.1.1
+ Neighbors: ⇐ BGP Neighbor configuration attributes
+ 10.12.11.2:
+ Export_Policy:
+ - as1_to_as2
+ Local_AS: 1
+ Local_IP: 10.12.11.1
+ Peer_Group: as2
+ Remote_AS: '2'
+ Remote_IP: 10.12.11.2
+ …
+ Community_Lists: ⇐ Defined Community-lists
+ - as1_community
+ Configuration_Format: CISCO_IOS ⇐ Device Vendor & OS Type
+ DNS: ... ⇐ DNS configuration attributes
+ IP6_Access_Lists: [] ⇐ Defined IPv6 access-lists
+ IP_Access_Lists: ⇐ Defined IPv4 access-lists
+ - '101'
+ Interfaces: ⇐ Interface configuration attributes
+ GigabitEthernet0/0:
+ Access_VLAN: null
+ Active: true
+ All_Prefixes:
+ - 1.0.1.1/24
+ Allowed_VLANs: ''
+ Description: null
+ Incoming_Filter_Name: null
+ MTU: 1500
+ Native_VLAN: null
+ OSPF_Area_Name: 1
+ OSPF_Cost: 1
+ ...
+ Primary_Address: 1.0.1.1/24
+ VRF: default
+ VRRP_Groups: []
+ NTP ... ⇐ NTP configuration attributes
+ Route6_Filter_Lists: [] ⇐ Defined IPv6 prefix-list/route-filters
+ Route_Filter_Lists: ⇐ Defined IPv4 prefix-list/route-filters
+ - '101'
+ Routing_Policies: ⇐ Defined routing policies/route-maps
+ - as1_to_as2
+ SNMP ... ⇐ SNMP configuration attributes
+ Syslog ... ⇐ Syslog configuration attributes
+ VRFs: ⇐ Defined VRFs
+ - default
+ version: batfish_v0 ⇐ Batfish Fact model version
+```
+
+The functionality above uses the full-config parsing capabilities of Batfish (e.g., parsing the output of “show run”). While other tools are available for parsing configs, Batfish is unique in being vendor neutral (unlike Cisco’s Parse Genie) and in parsing full configurations rather than the output of specific show commands.
+
+Those advantages aside, the real power of Batfish is in being able to _validate_ configs, with respect to both config settings and the resulting network behavior. We talk about these next.
+
+## Use case II: Fact validation
+
+Validating that facts in device configs match what is expected is easy with the **_bf\_validate\_facts_** module.
+
+```
+- name: Validate facts gathered by Batfish
+ bf_validate_facts:
+ expected_facts: data/validation
+ register: bf_validate
+```
+
+The task above will read facts from the specified folder and check that they match those in the initialized snapshot (done in a prior task). You can validate a subset of the attributes or all of them. The task will fail if any of the facts on any of the nodes does not match.
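+
+Conceptually, fact validation amounts to comparing nested dictionaries of expected versus actual settings. The sketch below is not the module’s implementation, and the fact data is hypothetical (mirroring the structure shown earlier), but it captures the idea:

```python
# Recursively check that every expected setting appears, with the same
# value, in the actual facts; expected facts may cover only a subset.
def validate(expected, actual, path=""):
    mismatches = []
    for key, want in expected.items():
        where = f"{path}.{key}" if path else key
        if key not in actual:
            mismatches.append(f"{where}: missing")
        elif isinstance(want, dict) and isinstance(actual[key], dict):
            mismatches.extend(validate(want, actual[key], where))
        elif actual[key] != want:
            mismatches.append(f"{where}: expected {want!r}, got {actual[key]!r}")
    return mismatches

# Hypothetical facts, shaped like the extraction output shown earlier.
actual = {"as1border1": {"BGP": {"Router_ID": "1.1.1.1"},
                         "Configuration_Format": "CISCO_IOS"}}
expected = {"as1border1": {"BGP": {"Router_ID": "1.1.1.1"}}}  # validate a subset

print(validate(expected, actual))  # [] means all expected facts match
```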
+
+## Use case III: Behavior validation
+
+Beyond parsing configs, Batfish builds a full model of device configurations and resulting network behavior. This model can be validated in a range of ways, as follows:
+
+```
+ - name: Validate different aspects of network configuration and behavior
+ bf_assert:
+ assertions:
+ - type: assert_reachable
+ name: Confirm web server is reachable for HTTPS traffic received on Gig0/0 on as1border1
+ parameters:
+ startLocation: '@enter(as1border1[GigabitEthernet0/0])'
+ headers:
+ dstIps: '2.128.0.101'
+ ipProtocols: 'tcp'
+ dstPorts: '443'
+
+ - type: assert_filter_denies
+ name: Confirm that the filter 101 on as2border2 drops SSH traffic
+ parameters:
+ filter_name: 'as2border2["101"]'
+ headers:
+ applications: 'ssh'
+
+ - type: assert_no_incompatible_bgp_sessions
+ name: Confirm that all BGP peers are properly configured
+
+ - type: assert_no_undefined_references
+ name: Confirm that there are NO undefined references on any network device
+```
+
+The task above includes four example assertions from our assertion library. The _**bf\_assert**_ module includes more, and based on community feedback, we’ll continue to make more of Batfish’s capabilities available in this manner.
+
+Today’s release makes network validation broadly accessible, furthering our commitment to helping network engineers build secure and reliable networks.
+
+To help you get started with Batfish and Ansible, we have created a series of tutorials which can be found in this [GitHub repository](https://github.com/batfish/ansible/tree/master/tutorials).
+
+For feedback and feature requests, reach us via [Slack](https://join.slack.com/t/batfish-org/shared_invite/enQtMzA0Nzg2OTAzNzQ1LTUxOTJlY2YyNTVlNGQ3MTJkOTIwZTU2YjY3YzRjZWFiYzE4ODE5ODZiNjA4NGI5NTJhZmU2ZTllOTMwZDhjMzA) or [GitHub](https://github.com/batfish/batfish).
diff --git a/_posts/2020-02-20-network-as-code-from-hype-to-substance.md b/_posts/2020-02-20-network-as-code-from-hype-to-substance.md
new file mode 100644
index 00000000..045a4447
--- /dev/null
+++ b/_posts/2020-02-20-network-as-code-from-hype-to-substance.md
@@ -0,0 +1,35 @@
+---
+title: "Network as code: From hype to substance"
+date: "2020-02-20"
+author: Ratul Mahajan
+tags:
+ - "batfish"
+ - "networkautomation"
+ - "networkvalidation"
+ - "predeployment"
+ - "validation"
+---
+
+Last week, Arista and Cumulus hosted webinars on building CI/CD pipelines for the network (see [Arista Webinar](https://www.ansible.com/resources/webinars-training/ansible-network-automation-with-arista-cloudvision-and-arista), [Cumulus Webinar](https://cumulusnetworks.com/learn/resources/webinars/webinar-network-validation-with-dinesh-dutt)). Both webinars communicated a vision that included generating configuration (changes) automatically, pre-deployment validation, and automated deployment, followed by post-deployment validation.
+
+I found these webinars exciting for two reasons. The first was the emphasis they placed on validation, and in particular, on pre-deployment validation. Validation is often overlooked when talking about network automation. But given that a bad configuration change can bring down the whole network, pre-deployment validation is essential to reducing the risk that stems from automated changes. It was great to see the presenters of both webinars acknowledge this risk and advocate robust automation pipelines.
+
+![](/assets/images/Screen-Shot-2020-02-16-at-2.41.00-PM-300x170.png){:width="600px"}
+[Cumulus Webinar: Automating the Validation Workflow](https://cumulusnetworks.com/learn/resources/webinars/webinar-network-validation-with-dinesh-dutt)
+
+
+![](/assets/images/Screen-Shot-2020-02-16-at-2.41.33-PM-300x152.png){:width="600px"}
+[Arista Webinar: Ansible AVD Overview and Demo](https://www.ansible.com/resources/webinars-training/ansible-network-automation-with-arista-cloudvision-and-arista)
+
+
+Incidentally, both webinars recommended Batfish for pre-deployment validation. It tickles me to see how far Batfish has come. I can trace its roots to 2012, when I was at Microsoft Research. At Microsoft, I was fortunate to observe and contribute to the rapid scaling of Azure and other networks. That experience crystallized my belief that networking needed to change radically. The applications were evolving and migrating rapidly thanks to virtualization (VMs and Containers) and modern CI/CD tools. The network was the least nimble — and the least reliable — component of the technology stack. To keep pace with the applications, the network needed to be like software.
+
+I started tackling an important part of this multifaceted challenge along with researchers at UCLA and USC. We knew from prior studies that configuration errors are the dominant cause of network outages and breaches. So, our thought was: if we could validate configuration changes _before_ they are pushed to the network, configuration errors would never reach the network. And if all network changes could be rigorously and automatically validated, engineers would have the confidence to evolve the network rapidly to meet application needs.
+
+In 2014, we had the kernel of a promising tool. We called it Batfish and open sourced it. Almost exactly three years ago, in 2017, I left Microsoft for Intentionet with the mission to enable network engineers to “design and evolve the network like software.” Batfish and a commitment to open source would go on to become the core of Intentionet. That Batfish is now an essential element of network CI/CD pipelines is a testament both to its original premise and to its being multi-vendor and open source.
+
+The second exciting aspect of the webinars was how much they had in common despite coming from different vendors. It suggests that the networking community has gone beyond the hype of three years ago and is converging on a shared methodology and concrete workflows. If you don’t remember 2017, “fidget spinner” was a top-10 search term on Google, and intent-based networking was being billed as the next big thing in networking. Cisco unleashed a marketing campaign around it, VMWare dedicated its Future:NET event to it, and Juniper was touting a similar concept under the guise of “self-driving networks.” Gartner and other analysts were rushing to define intent-based networking and assessing different companies’ vision against their definition.
+
+Three years on, we have gone beyond the hype, and underlying substance is emerging. A recent [NetDevOps survey](https://dgarros.github.io/netdevops-survey/reports/2019) shows that, over the last three years, the fraction of network engineers who have implemented CI pipelines has nearly doubled, from 18% to 35%. Equipment vendors too have gone beyond marketing. Devices have vastly improved APIs that help streamline network CI/CD pipelines. Equally importantly, as the webinars illustrate, they are also educating network engineers about modern software development practices. Cisco’s DevNet program, along with revamped certifications, and Juniper’s NRELabs are also notable in this direction.
+
+All this is not to say that the network has become code. After all, two-thirds of network engineers are still not using CI pipelines. But that should not cloud the progress over the last few years, and I am optimistic that we can drive that number down to zero in the next few years. Building mission-critical networks without automation and rigorous validation will soon be unthinkable, just as building mission-critical software without rigorous CI/CD pipelines is unthinkable today.
diff --git a/_posts/2020-06-25-network-model-based-security.md b/_posts/2020-06-25-network-model-based-security.md
new file mode 100644
index 00000000..6ef8d260
--- /dev/null
+++ b/_posts/2020-06-25-network-model-based-security.md
@@ -0,0 +1,49 @@
+---
+title: "Network-model-based security: A new approach that blends the advantages of other leading methods"
+date: "2020-06-25"
+author: Ratul Mahajan
+---
+
+Effective network security hinges on a central challenge: making sure that only authorized communication among security principals (users, systems, or groups) is allowed. But meeting this challenge has gotten harder as security methods grow more granular and complex.
+
+
+
+As organizations deploy microsegmentation and move from coarser methods like subnet-level security to finer-grained controls at the individual host and service levels, they face a connectivity matrix with many more principals to account for. In this new world, the task of ensuring only authorized pairs of principals can communicate is far too complex to be done manually. Having the right tools is important. So how do you know which tool to choose?
+
+To think about this, let's review the prevailing methods. Broadly speaking, network security analysis tools take one of two approaches: **probing** or **packet filter analysis**. The first approach sends test probes into the network to help uncover which ports are accessible, on which servers, and from which vantage points, such as the Internet. _Nmap_ is a popular penetration testing tool that uses probes.
+
+The second approach analyzes packet filters in the network such as ACLs, firewall rules, and security groups in public clouds. Instead of sending packets into the network, the tool inspects the packet filters to determine what is and is not permitted: is Telnet blocked, is SSH allowed, and so on. This approach is used in firewall vendor products as well as those that analyze public cloud security groups.
+
+Each approach has its plusses and minuses. Probing-based tools measure end-to-end behavior, accounting for the full impact of network paths and filters hit by packets from the source to the destination. But their analysis is inherently incomplete because they can't generate packets whose headers cover all possible combinations of addresses, protocols, and ports. As a result, these tools can fail to identify vulnerabilities that exist in the network.
+
+Probing-based tools also impose a high setup tax. You have to set up a few probe sources, which need to be whitelisted to ensure that probe traffic is not blocked by intrusion-detection systems. Given these requirements, it's not feasible to run probing-based tools from a large set of sources, and even less feasible to run them every time you change the network—which means more vulnerabilities are likely to get missed.
+
+Packet filter analysis tools, on the other hand, require minimal infrastructure setup and can be complete with respect to an individual filter. However, these tools don't analyze end-to-end security, which is a function of network paths and the combined impact of multiple packet filters that appear along those paths. So these tools can also miss vulnerabilities, or falsely report vulnerabilities when they get paths wrong.
+
+### End-to-end analysis without gaps, guaranteed
+
+Given these choices, each with its own risks, what is a network security engineer to do? Wouldn't it be great to have the best of both worlds? Good news: a new approach makes this combination possible, while giving you proactive capabilities and avoiding the pitfalls of the previous methods. Batfish uses this network-model-based approach, as do Amazon Inspector and Google Network Intelligence Center.
+
+Network-model-based tools analyze network-wide configuration and state to infer the (possibly multiple) paths taken by any flow, along with the packet filters encountered along those paths. The tool then does a complete analysis of which flows are carried or blocked by the network, covering all possible sources, destinations, and packet headers. This level of analysis would generally be prohibitively expensive, but here it's enabled by data structures and algorithms originally invented for hardware verification and adapted for networks. The result is end-to-end analysis that is mathematically precise, without any missed vulnerabilities or false reports.
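
As a toy illustration of the idea (not Batfish's actual engine, which works symbolically rather than by enumeration), end-to-end analysis amounts to intersecting the sets of packets admitted by every filter along a path. The header space, filter rules, and port numbers below are all hypothetical:

```python
from itertools import product

# Toy header space: protocol x destination port. Real model-based tools cover
# all header fields symbolically (e.g., with BDDs) instead of enumerating.
PROTOCOLS = ["tcp", "udp"]
PORTS = range(1024)
ALL_HEADERS = set(product(PROTOCOLS, PORTS))

def edge_acl(hdr):
    """Hypothetical edge-router ACL: TCP only, and telnet (23) is blocked."""
    proto, port = hdr
    return proto == "tcp" and port != 23

def firewall(hdr):
    """Hypothetical firewall: only SSH (22) and HTTPS (443) are allowed."""
    _, port = hdr
    return port in (22, 443)

# End-to-end: a packet gets through only if EVERY filter on its path permits it.
permitted = {h for h in ALL_HEADERS if edge_acl(h) and firewall(h)}
print(sorted(permitted))  # [('tcp', 22), ('tcp', 443)]
```

A probe-based tool would have to send packets to observe a sample of this set; a per-filter analysis would see each function in isolation. Model-based analysis computes the composed, end-to-end result for the entire header space.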
+
+On top of providing end-to-end and comprehensive analysis, network-model-based tools are easy to set up and can be run continuously in your environment.
+
+### The best part: Know before you change
+
+Another key innovation of the network-model-based approach is that it can evaluate security posture before the network is changed. This ability helps you switch from reactive to proactive security analysis, because the tool alerts you to vulnerabilities before the network is impacted. Previously, only packet filter analysis tools could be proactive, but their ability to do so only applied to packet filter changes, not the other changes on your network.
+
+The table below summarizes the capabilities of the three approaches mentioned here.
+
+| Capability | Probing | Packet filter analysis | Network model based |
+| --- | :-: | :-: | :-: |
+| End-to-end analysis | ✅ |  | ✅ |
+| Analyze all possible packets |  | ✅ | ✅ |
+| Easy setup, continuous operation |  | ✅ | ✅ |
+| Proactive security |  | ✅ (only filter changes) | ✅ |
+
+
+
+### Picking the right tool
+
+So, suppose you are keen on using a network-model-based tool to secure your network. Which one should you choose? We'll cover the various tools more deeply in a future blog, but for now, here are two key considerations:
+
+- Does the tool provide proactive security?
+- Can the tool cover your entire network, which might span multiple public and private clouds?
+
+When you weigh your options based on these criteria, you'll see that not all tools are created equal. Batfish enables proactive security and can analyze all types of networks, including public cloud. To read more about Batfish in action, and how it provides proactive security guarantees, check out [this article](https://tech.ebayinc.com/engineering/safe-acl-change-through-model-based-analysis/) describing how eBay network engineers _**used Batfish to shrink their border ACLs by 80% without any service impact**_.
+
+### Summary
+
+The last few years have seen the emergence of a new approach to network security that combines the best features of probing and packet filter analysis. Using a network-model-based security tool gives you continuous, end-to-end security analysis that covers your whole network and considers all possible packets. And with the right tool, you can achieve all of these benefits proactively and identify vulnerabilities before your network changes are even applied.
diff --git a/_posts/2020-08-05-a-practical-approach-to-building-a-network-ci-cd-pipeline.md b/_posts/2020-08-05-a-practical-approach-to-building-a-network-ci-cd-pipeline.md
new file mode 100644
index 00000000..8a70df5f
--- /dev/null
+++ b/_posts/2020-08-05-a-practical-approach-to-building-a-network-ci-cd-pipeline.md
@@ -0,0 +1,49 @@
+---
+title: "A practical approach to building a network CI/CD pipeline"
+date: "2020-08-05"
+author: Samir Parikh
+tags:
+ - "ansible"
+ - "batfish"
+ - "network-cicd"
+ - "networkautomation"
+ - "networkvalidation"
+ - "predeployment"
+---
+
+Continuous integration and continuous deployment (CI/CD) is the practice of automatically packaging, testing, and deploying code, generally in small increments. This modern DevOps practice has made software development agile and reliable, and it holds the same promise for networking as more environments transition to the infrastructure-as-code (IaC) model.
+
+In this post, we’ll outline a practical network CI/CD pipeline similar to the ones we’ve helped build for our customers and other Batfish users. A demo of this pipeline is on [YouTube](https://www.youtube.com/watch?v=ORFiReqaUzY) and the code is on [GitHub](https://github.com/batfish/af19-demo). Reading our earlier blogs isn’t a prerequisite for reading this one, but if you’re curious about basic concepts behind network validation and CI/CD, you may find it useful to check out our posts on [validation types](/2019/01/16/the-what-when-and-how-of-network-validation.html) (including verification versus testing) and [CI/CD pipeline structure options](/2019/03/15/designing-a-network-validation-pipeline.html).
+
+
+
+
+
+Let’s get started. Our CI/CD pipeline, shown in the figure above, has three main inputs:
+
+1. **Source of Truth (SoT)**: The SoT stores your network configuration constants such as IP addresses, NTP servers, VLANs, and DNS servers. It also stores behavior related to these constants, such as the list of flows that should be permitted or denied, and zones within the network that are allowed or disallowed access from the internet. You can have multiple SoTs in a variety of formats, as long as there is only one authoritative source for any piece of information. You can capture the SoT in text files, which allows it to be stored in Git, or in a database like NetBox. Our demo uses multiple YAML files—one for the base system configuration, and one for packet filtering information to generate firewall zone policies via [Capirca](https://github.com/google/capirca), a multi-platform access control list (ACL) generation system.
+2. **Configuration templates**: The SoT is combined with configuration templates to build network config files. It helps to separate the boilerplate configuration structure into multiple templates that can generate the config files when combined with the SoT, enabling maximum re-use across devices. Our demo uses multiple Jinja2 templates for each device role.
+3. **Policies**: These define good things that should happen in the network — for example, all border gateway protocol (BGP) sessions should come up or all pairs of routers should be able to communicate — or bad things that should never happen, like packets from the Internet being able to communicate with a router. Our demo uses a combination of Ansible playbooks and pytest files to express policy. (Using both isn’t necessary. We did that to illustrate the options.)
+
+The demo pipeline combines these inputs into a four-step workflow:
+
+1. **Propose change**: The workflow begins with you or another network engineer/operator proposing a change to the network, or by the system initiating a script in response to, say, a ServiceNow ticket. Whichever method you choose, proposing the change entails modifying either the SoT or the templates. You should consider scripting all common network changes, such as adding a new flow to the packet filters or bringing down a device for maintenance. The scripting approach is helpful not only for consistency and speed but also because it allows you to evolve the SoT format without having to modify the mechanism by which changes are proposed. Our demo uses Ansible playbooks as change scripts for changing packet filters at the border firewall and for provisioning new leaf routers in the fabric. For any changes you don’t initiate using scripts, you can modify the SoT or templates manually. These scripted or manual modifications are committed to a new Git branch. This commit triggers the pipeline file, which in turn orchestrates the successive workflow steps below as long as the prior one hasn’t failed.
+2. **Generate configs**: The second step generates config files from the templates and the SoT. This can be as simple as calling the render function on the template, or it can be more involved with custom data manipulation. Our demo uses custom logic to convert Capirca definitions in the SoT to device-specific ACL syntax, and then merges that with the templates we created using Jinja2.
+3. **Validate change**: The third step examines the generated configs and policies to determine which policies pass or fail. Our demo uses Batfish for validation.
+4. **Deploy change**: This final step occurs only after the validation step judges that no critical policy is failing and, optionally, you conduct a manual review. Our demo does not include deployment scripts (or network devices), but you can create these based on systems like Ansible, Nornir, or NAPALM.
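
The generation step (Step 2) can be sketched in a few lines. Below is a minimal, hypothetical stand-in for the demo's Jinja2 templating, using only the Python standard library; the SoT keys and config lines are illustrative, not taken from the demo repo:

```python
from string import Template

# Hypothetical slice of the SoT (the demo keeps this in YAML files).
sot = {"hostname": "leaf1", "ntp_server": "10.0.0.123", "asn": "65001"}

# Boilerplate shared by all devices of a role. The demo uses Jinja2;
# stdlib string.Template is enough to show the render step.
template = Template(
    "hostname $hostname\n"
    "ntp server $ntp_server\n"
    "router bgp $asn\n"
)

config = template.substitute(sot)
print(config)
```

In the real pipeline, this step loops over every device, picks the templates for its role, and writes the rendered configs to a directory that the validation step then analyzes.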
+
+### Tips and tricks
+
+So far, we’ve described the ideal operation of our demo pipeline from a high-level view. The reality, of course, is that you’ll probably have to resolve additional setup issues as they arise. By looking at our [code](https://github.com/batfish/af19-demo), you can see many of the issues we encountered, warts and all. Let’s list a few of them here that aren’t specific to our network, and tell you how we suggest you work around them if they apply to you.
+
+- **Multiple SoT formats**. Using multiple SoT formats can be a useful pattern. Because all SoT information isn’t required to be in the same format or database, you can store different elements of your configuration in whatever method they’re best suited to. For example, you might use separate databases to store NTP servers and BGP peers. As mentioned earlier, our demo pipeline uses two SoT formats: Capirca definitions for ACLs and firewall rules, and a YAML-based format for all other data. We chose these formats for convenience and flexibility. Using Capirca meant that we didn’t have to write vendor-specific configuration templates for packet filters, and we can more easily swap out our firewall vendor if needed. Using multiple SoT formats does complicate the configuration generation script slightly, because you have the additional step of merging them together. In this case, it means running Capirca first and then merging its results with what was produced via the Jinja2 rendering. For our demo pipeline, we deemed this additional step to be a worthwhile tradeoff given the advantages of using Capirca.
+- **Local GitLab setup**. If you’re using GitLab CI, it helps when prototyping your pipeline to run the GitLab community edition container locally with the Shell executor as the runner. This enables rapid iteration and will free you from worrying about permissions and remote environment setup for the runner. This setup is fairly easy, as our [instructions](https://github.com/batfish/af19-demo#local-gitlab-setup-on-a-mac) show. If any port conflicts arise, as they commonly do with this method, GitLab has [instructions](https://docs.gitlab.com/omnibus/docker/#expose-gitlab-on-different-ports) for resolving those.
+- **Debugging the pipeline**. Debugging a CI/CD pipeline is not as straightforward as debugging locally running code, but it is not difficult either with the right approach. Two useful tools, available in all CI systems, are the _pipeline_ _output_ and _job artifacts_. The pipeline output shows the stdout and stderr of the scripts that run in the pipeline. However, not everything you want to see after the CI pipeline has run is suitable for printing to stdout. For example, you might want to see the configs generated in Step 2 above in order to debug the generation or validation scripts. That’s where job artifacts come in. These are files and folders created when the pipeline runs and saved for later viewing. Our demo pipeline saves generated configs as artifacts, but you can add anything you need to the artifact set. GitLab provides instructions on creating and viewing artifacts [here](https://docs.gitlab.com/ee/ci/pipelines/job_artifacts.html).
+
+### Summary
+
+Hopefully we’ve convinced you that a CI/CD pipeline is within reach for your network, and that our related resources ([code](https://github.com/batfish/af19-demo), [demo](https://www.youtube.com/watch?v=ORFiReqaUzY)) are helpful as you embark on that journey.
+
+The goal of this post was to lay down the basics of a network CI/CD pipeline. Stay tuned for future discussions focused on creating network policies for CI/CD and integrating CI/CD into your existing “brownfield” network.
+
+We love hearing about your experience. For any questions or comments, reach us on [Batfish Slack](https://join.slack.com/t/batfish-org/shared_invite/enQtMzA0Nzg2OTAzNzQ1LTcyYzY3M2Q0NWUyYTRhYjdlM2IzYzRhZGU1NWFlNGU2MzlhNDY3OTJmMDIyMjQzYmRlNjhkMTRjNWIwNTUwNTQ).
diff --git a/_posts/2020-10-09-pre-deployment-validation-of-bgp-route-policies.md b/_posts/2020-10-09-pre-deployment-validation-of-bgp-route-policies.md
new file mode 100644
index 00000000..eff03e13
--- /dev/null
+++ b/_posts/2020-10-09-pre-deployment-validation-of-bgp-route-policies.md
@@ -0,0 +1,147 @@
+---
+title: "Pre-deployment validation of BGP route policies"
+date: "2020-10-09"
+author: Todd Millstein
+excerpt: We discuss how validating route policies prior to deployment can prevent outages big and small.
+tags:
+ - "batfish"
+ - "bgp"
+ - "networkautomation"
+ - "networkvalidation"
+ - "predeployment"
+ - "route-leak"
+ - "route-policy"
+ - "verification"
+---
+
+Misconfigured BGP route policies are a common culprit behind some of the biggest outages on the Internet. For example:
+
+- [BGP Leak Causing Internet Outages in Japan and Beyond](https://www.bgpmon.net/bgp-leak-causing-internet-outages-in-japan-and-beyond/)
+- [How a Tiny Error Shut Off the Internet for Parts of the US](https://www.wired.com/story/how-a-tiny-error-shut-off-the-internet-for-parts-of-the-us/)
+- [Telia engineer error to blame for massive net outage](https://www.theregister.com/2016/06/20/telia_engineer_blamed_massive_net_outage/)
+
+
+
+Such outages typically occur when route policy configuration changes end up accidentally leaking routes or accepting routes they shouldn't.
+
+That these events occur regularly and across a wide swath of networks reinforces our belief that getting network configuration right is a tools problem, not an experience or training problem. Validating network configuration in general, and BGP route policies in particular, is a highly complex task for which engineers need better tools. While network engineers often know what their route policies should or should not do (e.g., see [MANRS guidelines](https://www.manrs.org/)), ensuring that the policy implementation matches their intent is notoriously hard.
+
+[Batfish](https://github.com/batfish/batfish) solves this problem by providing two ways to analyze routing policies:
+
+1. Test the policy against a specific set of input routes with _testRoutePolicies_
+2. Search for input routes that trigger a specific action by the policy with _searchRoutePolicies_ (new!).
+
+These questions bring to route policies similar capabilities that you may already know and love from Batfish for [analyzing ACLs and firewall rules](https://github.com/batfish/pybatfish/blob/master/jupyter_notebooks/Analyzing%20ACLs%20and%20Firewall%20Rules.ipynb). They make it easy to find bugs in routing policies and get strong correctness guarantees, all before you deploy changes to the network.
+
+Here are just a few examples of the kinds of intents that you can validate with these analyses:
+
+- Deny all incoming routes with private addresses
+- Only permit incoming routes if they have the correct origin AS
+- Tag incoming routes from a neighbor with a specific community
+- Set the local preference for all customer routes to 300
+- Advertise only prefixes that we (and our customers) own
+
+### Testing Route Policy Behavior
+
+The _testRoutePolicies_ question enables you to test the behavior of a route policy for specific routes of interest. You can find out:
+
+- if the route will be permitted or denied by the policy.
+- if permitted, how attributes such as communities are transformed.
+
+For example, to test the "_deny all incoming routes with private addresses_" intent you would run _testRoutePolicies_ on routes with prefixes in the private address space and check that all of them are denied.
+
+Let’s take a look at an example route-map, from\_customer, and evaluate its behavior with _testRoutePolicies_.
+
+```
+route-map from_customer deny 100
+ match ip address prefix-list private-ips
+!
+route-map from_customer permit 200
+ match ip address prefix-list from44
+ match as-path origin44
+ set community 20:30
+ set local-preference 300
+!
+route-map from_customer deny 300
+ match ip address prefix-list from44
+!
+route-map from_customer permit 400
+ set community 20:30
+ set local-preference 300
+
+ip prefix-list private-ips seq 5 permit 10.0.0.0/8 ge 8
+ip prefix-list private-ips seq 10 permit 172.16.0.0/28 ge 28
+ip prefix-list private-ips seq 15 permit 192.168.0.0/16
+!
+ip prefix-list from44 seq 10 permit 5.5.5.0/24 ge 24
+!
+ip as-path access-list origin44 permit _44$
+```
+
+```
+inRoute1 = BgpRoute(network="10.0.0.0/24", originatorIp="4.4.4.4", originType="egp", protocol="bgp")
+result = bfq.testRoutePolicies(policies="from_customer", direction="in",
+                               inputRoutes=[inRoute1]).answer().frame()
+print(result)
+
+      Node    Policy_Name  Input_Route                                                                                                                                                               Action  Output_Route  Difference
+
+0  border1  from_customer  BgpRoute(network='10.0.0.0/24', originatorIp='4.4.4.4', originType='egp', protocol='bgp', asPath=[], communities=[], localPreference=0, metric=0, sourceProtocol=None)    DENY    None          None
+1  border2  from_customer  BgpRoute(network='10.0.0.0/24', originatorIp='4.4.4.4', originType='egp', protocol='bgp', asPath=[], communities=[], localPreference=0, metric=0, sourceProtocol=None)    DENY    None          None
+```
+
+As you can see, Batfish correctly determines that the 10.0.0.0/24 route advertisement will get denied by the policy.
+
+This capability is extremely useful when designing (or changing) your routing policy. For a concrete set of routes, you can determine the specific behavior of the routing policy. The _testRoutePolicies_ question achieves this by simulating the behavior of the route policy on input routes.
+
+### Searching for Route Policy Misbehaviors (Verification)
+
+Testing is extremely useful for debugging route policies, but it can only guarantee that the policy behaves correctly on the specific routes that are tested. The space of potential input routes is so large that it would be infeasible to test each one individually. This is where _searchRoutePolicies_ comes into play. It allows you to verify the policy against your intent, across all possible routes. The _searchRoutePolicies_ question was recently added to Batfish and can be used to analyze a host of common route policy behaviors.
+
+The _searchRoutePolicies_ question provides comprehensive guarantees by searching for routes that cause a route policy to behave in a particular way. You start by describing a space of potential input routes---using any combination of prefix ranges, a list of allowed communities, an AS-path regular expression, etc.---along with an action (permit or deny). Batfish will search this space of potential input routes and identify a route, if one exists, for which the route policy you are evaluating takes the specified action.
+
+For example, to verify the "_deny all incoming routes with private addresses_" intent, you would specify the space of interest as all routes with private addresses and search for anything in that space that is permitted. If Batfish returns any route, the routing policy violates your intent. Conversely, if there are no results, then you can be sure that the intent is satisfied, and that all routes with private addresses are indeed denied.
+
+```
+# Define the space of private addresses and route announcements
+privateIps = ["10.0.0.0/8:8-32", "172.16.0.0/28:28-32", "192.168.0.0/16:16-32"]
+inRoutes1 = BgpRouteConstraints(prefix=privateIps)
+
+# Verify that no such announcement is permitted by our policy
+result = bfq.searchRoutePolicies(policies="from_customer",
+                                 inputConstraints=inRoutes1,
+                                 action="permit").answer().frame()
+
+print(result.loc[0])
+
+Node           border2
+Policy_Name    from_customer
+Input_Route    BgpRoute(network='192.168.0.0/32', originatorIp='0.0.0.0', originType='igp', protocol='bgp', asPath=[], communities=[], localPreference=0, metric=0, sourceProtocol=None)
+Action         PERMIT
+Output_Route   BgpRoute(network='192.168.0.0/32', originatorIp='0.0.0.0', originType='igp', protocol='bgp', asPath=[], communities=['20:30'], localPreference=300, metric=0, sourceProtocol=None)
+Difference     BgpRouteDiffs(diffs=[BgpRouteDiff(fieldName='communities', oldValue='[]', newValue='[20:30]'), BgpRouteDiff(fieldName='localPreference', oldValue='0', newValue='300')])
+```
+
+Batfish has found a route advertisement, 192.168.0.0/32, that will be allowed by the routing policy, despite our intent being for it to be denied. There may be multiple route advertisements that violate our intent; Batfish picks one as an example to highlight the error. If you look closely at the routing policy, the route-map from\_customer is going to deny routes that match the prefix-list private-ips. The last entry in that prefix-list is incorrect. It is missing the "ge 16" option. As defined, that entry only matches the exact route 192.168.0.0/16, which means any other prefix from that 192.168.0.0/16 space will not be matched and therefore not be denied by the route-map.
+
+```
+route-map from_customer deny 100
+ match ip address prefix-list private-ips
+
+ip prefix-list private-ips seq 5 permit 10.0.0.0/8 ge 8
+ip prefix-list private-ips seq 10 permit 172.16.0.0/28 ge 28
+ip prefix-list private-ips seq 15 permit 192.168.0.0/16
+```
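
For completeness, a corrected version of the faulty entry (following the same ge convention the other entries in this list already use) would match all prefix lengths from /16 through /32 within 192.168.0.0/16:

```
ip prefix-list private-ips seq 15 permit 192.168.0.0/16 ge 16
```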
+
+You can also use _searchRoutePolicies_ to ensure that your routing policy is correctly transforming routes it accepts. To do this, you specify a space of output routes, using a combination of prefix ranges, a list of communities, AS-path regular expressions, etc., along with the space of input routes. Batfish will return any input route that, after being transformed by the routing policy, falls in the space of output routes. This capability can be used to validate an intent like "_set the local preference for all customer routes to 300_" by searching for input customer routes that do not land in the output space of routes with a local preference of 300.
+
+You may be curious how this magic works under the hood--after all, the space of routes can be huge, representing billions of potential routes. This is where the power of Batfish comes in. Batfish encodes the route policy, which is essentially a function that maps input routes to output routes, as a mathematical equation with a series of constraints. Using an algorithm similar to the one we use to search for packets that meet specific criteria, Batfish solves this equation.
+
+If you have any questions, or have complex routing policies that you need help analyzing, get in touch via [GitHub](https://github.com/batfish/batfish) or [Slack](https://join.slack.com/t/batfish-org/shared_invite/enQtMzA0Nzg2OTAzNzQ1LTcyYzY3M2Q0NWUyYTRhYjdlM2IzYzRhZGU1NWFlNGU2MzlhNDY3OTJmMDIyMjQzYmRlNjhkMTRjNWIwNTUwNTQ).
+
+### Examples and More Information
+
+To learn more, check out these resources:
+
+1. Our new [Jupyter notebook](https://github.com/batfish/pybatfish/blob/master/jupyter_notebooks/Analyzing%20Routing%20Policies.ipynb), which provides examples of using both _testRoutePolicies_ and _searchRoutePolicies_ to validate route policies.
+2. A [NANOG talk](https://www.youtube.com/watch?v=rxqEe7vztRE) on using _testRoutePolicies_ for pre-deployment validation.
+3. [Documentation](https://pybatfish.readthedocs.io/en/latest/notebooks/routingProtocols.html) for the questions.
diff --git a/_posts/2020-10-16-three-ways-to-break-a-network-and-one-to-save-it.md b/_posts/2020-10-16-three-ways-to-break-a-network-and-one-to-save-it.md
new file mode 100644
index 00000000..d16f11e3
--- /dev/null
+++ b/_posts/2020-10-16-three-ways-to-break-a-network-and-one-to-save-it.md
@@ -0,0 +1,80 @@
+---
+title: "Three ways to break a network (and one to save it)"
+date: "2020-10-16"
+author: Dan Halperin
+---
+
+When people mention network configuration bugs, the first thing that comes to your mind is likely typos–or if you prefer technical terms, “fat fingers.” Of course, if you are an experienced network engineer, you know there is more to config bugs than keyboard gremlins.
+
+At Intentionet, we work on a network validation engine called [Batfish](https://www.batfish.org/) which has helped engineers find and squash countless configuration bugs. I thought it’d be fun to share some examples we've seen over the years. While bugs are unique like snowflakes, I think of them as falling into three broad categories.
+
+### Fat Fingers (when there is a gap between the brain and hands)
+
+This category of bugs is as old as typewriters. Some entertaining examples off the top of my head:
+
+- Incorrectly assigned address (a public IP!) to a router interface: _100.x.y.z_ instead of _10.x.y.z_
+- Mistyped prefix in a prefix list: **prefix-list PFX-LIST 10.1O0.10.128/25** (the letter ‘O’ instead of the number ‘0’ in the prefix)
+- Mistyped access-list for a SNMP community: _SNMP-ACCESS-LIST_ instead of _SNMP\_ACCESS\_LIST_ (‘-’ vs ‘\_’)
+- Mistyped route-map name: _BLEAD-TRAFFIC_ instead of _BLEED-TRAFFIC_
+- Mistyped keyword: **no bgp defaults ipv4-unicast** instead of **no bgp default ipv4-unicast**
+- Wrong BGP neighbor IP: **neighbor 169.254.127.3 activate** instead of **neighbor 169.54.127.1 activate**
+- Duplicate BGP local AS number on leaf routers in a data center (which effectively disconnected their racks)
+
+These mistakes all come from real production configurations, which device operating systems of course accepted happily.
+
+### Misunderstanding configuration semantics (when you forget what you learned in school)
+
+In this category of bugs, engineers typed exactly what they intended. However, what they wanted was incorrect because they misunderstood configuration semantics. As an example, check out the address group definition below on a Cisco IOS device.
+
+```
+ipv4 address object-group MY-SOURCES
+ 10 10.17.0.0 255.255.0.0
+ 20 10.30.137.190 255.255.255.255
+```
+
+This group was being used in an ACL guarding which sources could access a sensitive resource. So, which sources can access the resource?
+
+- 10.17.1.1? The right answer is yes.
+- 10.30.137.190? Yes
+- 10.30.137.191? Still yes
+- 10.10.10.10? Still yes!
+- 8.8.8.8? Still yes!!
+
+If you are confused, you are not the only one. In fact, we’ve seen multiple engineers get tripped up by such a bug. In Cisco IOS, the part after the IP address is expected to be a wildcard (reverse netmask), not a netmask. So the engineer should have used 0.0.255.255 on Line 10 and 0.0.0.0 on Line 20.
+
+The use of 255.255.255.255 on Line 20 was essentially allowing all source IPs! Maybe the engineers in question had been spending their time with Cisco ASA – which is, of course, a different Cisco operating system with the exact opposite behavior!
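
You can see the effect for yourself with a few lines of Python (a hypothetical helper built on the standard ipaddress module; the addresses are the ones from the snippet above):

```python
import ipaddress

def wildcard_to_network(addr: str, wildcard: str) -> ipaddress.IPv4Network:
    """Interpret an IOS address/wildcard pair: wildcard bits set to 1 are don't-care."""
    netmask = ipaddress.IPv4Address(int(ipaddress.IPv4Address(wildcard)) ^ 0xFFFFFFFF)
    return ipaddress.IPv4Network(f"{addr}/{netmask}", strict=False)

# What was typed on Line 20 (a netmask where a wildcard belongs):
typed = wildcard_to_network("10.30.137.190", "255.255.255.255")
print(typed)                                      # 0.0.0.0/0 -- every IPv4 source!
print(ipaddress.IPv4Address("8.8.8.8") in typed)  # True

# What was intended (wildcard 0.0.0.0 = match exactly this host):
intended = wildcard_to_network("10.30.137.190", "0.0.0.0")
print(intended)                                   # 10.30.137.190/32
```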
+
+As another example, check out the snippet below.
+
+```
+ip prefix-list MY-LIST
+ seq 10 permit 192.0.1.0/24
+!
+route-map MY-RT-MAP permit 10
+ match ip address MY-LIST
+```
+
+Will the route map permit routes for 192.0.1.0/24? Well, you cannot tell from the information I provided. The gotcha is that the syntax in the snippet refers to an ACL, not a prefix list. For matching on the prefix-list, the command should be **match ip address prefix-list MY-LIST**.
+
+This category of bugs is particularly nefarious. You might stare long and hard at your configs after seeing unexpected network behavior and not spot the bug even when it’s right under your nose. In your mind, the config is correct. Batfish users found such bugs when Batfish-predicted behavior differed from their expectation (e.g., arbitrary source IPs being allowed in the first case). Some users initially chalked it up to a Batfish bug—and sometimes, so did I—until they discovered the bug was in human interpretation of the config at hand.
+
+### Missed interactions (when the enemy is someplace else)
+
+Network configuration is complex because portions of the configuration within a device and even across devices interact with each other. To ensure intended network-wide behavior, engineers need to reason about many configuration lines collectively, sometimes on the same device and sometimes across multiple devices. This often exceeds the limit of human abilities.
+
+A common case of such interactions, on a single device, relates to ACLs and firewall rules. We have seen numerous bugs where traffic was not being permitted/denied as intended even though the line meant to do that task was correct. The problem was that the line was shadowed by earlier lines. Some interesting examples:
+
+- 32 different /28s early in the ACL collectively shadowing a /23 prefix
+- a **permit any any** inserted as the first line in the ACL for debugging rendered the remainder useless
+- **permit 10.23.34.16/30 any** shadowed **deny 10.23.34.19 any** which appeared 10 lines later
+
+It is easy to miss these types of errors when the ACL or firewall policy has 100s or 1000s of lines.
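
The simplest cases of shadowing, where a later rule's prefix is entirely covered by a single earlier rule, can be sketched with Python's ipaddress module. (This pairwise check is illustrative only; catching the 32-/28s-covering-a-/23 case requires the more general coverage analysis a tool like Batfish performs.)

```python
import ipaddress

def shadowed_rules(acl):
    """Return (later, earlier) index pairs where a later rule's prefix lies fully
    inside an earlier rule's prefix, so the later rule can never match
    (first-match-wins evaluation)."""
    pairs = []
    for i, (_, net_i) in enumerate(acl):
        for j in range(i):
            if net_i.subnet_of(acl[j][1]):
                pairs.append((i, j))
                break
    return pairs

acl = [
    ("permit", ipaddress.ip_network("10.23.34.16/30")),
    ("deny",   ipaddress.ip_network("10.23.34.19/32")),  # host inside the /30 above
]
print(shadowed_rules(acl))  # [(1, 0)]
```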
+
+Here is an example of interactions across devices. This network had two border routers and preferred to use one of them for some external services. The design intent was for traffic to fail over to the second one when the first router or its uplink failed. Simulating this failure in Batfish revealed that routing was indeed correctly configured to move the traffic to the second router, but there was an ACL entry on it that denied access to some of the services. Thus, the network was not redundant as intended.
+
+I have seen many other examples of network redundancy not working as intended because of a configuration bug on a different device. The root causes vary from not advertising a necessary next-hop into IGP, to not putting a BGP neighbor in the right peer group, to missing network statements, to misconfigured iBGP session, and so on. Redundancy problems are notoriously hard to find because the bug could lie dormant on a device that has not been changed in a while and the network functions normally until something fails (or is taken down for maintenance). Batfish helps find such bugs by simulating the impact of failures.
+
+### Squashing configuration bugs
+
+Rooting out configuration bugs is key to network security and reliability because they account for 50-80% of network outages and breaches. More automation certainly helps, but it is not the complete answer. Some of the bugs above happened in automated environments because the input to automation was wrong or incorrect semantics were encoded in the automation pipeline.
+
+Testing network changes before deployment is thus critical. Batfish enables you to do that at production-scale. Its testing is based on how the network will behave, as opposed to what the configuration text looks like, allowing you to catch all types of bugs. This of course assumes that you have the right tests in place. We will cover that in a future post. Stay tuned!
diff --git a/_posts/2020-10-28-lesson-from-a-network-outage-networks-need-automated-reasoning.md b/_posts/2020-10-28-lesson-from-a-network-outage-networks-need-automated-reasoning.md
new file mode 100644
index 00000000..70eeef64
--- /dev/null
+++ b/_posts/2020-10-28-lesson-from-a-network-outage-networks-need-automated-reasoning.md
@@ -0,0 +1,61 @@
+---
+title: "Lesson from a network outage: Networks need automated reasoning"
+date: "2020-10-28"
+author: Ratul Mahajan
+tags:
+ - "batfish"
+ - "batfish-enterprise"
+ - "change-testing"
+ - "change-validation"
+ - "networkoutage"
+ - "networkvalidation"
+ - "predeployment"
+ - "validation"
+---
+
+In the afternoon of October 23, within a few minutes of each other, two friends sent me a link to the recently-released “[June 15, 2020 T-Mobile Network Outage Report](https://docs.fcc.gov/public/attachments/DOC-367699A1.docx)” by the Public Safety and Homeland Security Bureau (PSHB) of the FCC. Given what Intentionet does, the report sounded interesting and I started reading it immediately.
+
+The report details the massive impact of T-Mobile’s network outage. It lasted over 12 hours, and “at least 41% of all calls that attempted to use T-Mobile’s network during the outage failed, including at least 23,621 failed calls to 911.” The report also shares some individual stories behind the staggering numbers. For example, “One commenter noted that his mother, who has dementia, could not reach him after her car would not start and her roadside-assistance provider could not call her to clarify her location; she was stranded for seven hours but eventually contacted her son via a friend’s WhatsApp.”
+
+> _**“The outage was initially caused by an equipment failure and then exacerbated by a network routing misconfiguration that occurred when T-Mobile introduced a new router into its network.”**_
+
+As someone who is working to eliminate such outages, parts of the report that discuss its root causes were illuminating. I learned that “the outage was initially caused by an equipment failure and then exacerbated by a network routing misconfiguration that occurred when T-Mobile introduced a new router into its network.”
+
+Reading through the sequence of events that caused the outage, I could not help but conclude that this outage was completely preventable. T-Mobile should have known that their network was vulnerable to the failure, and should have known that the configuration change was erroneous before making it.
+
+### Anatomy of the outage
+
+Let me explain using the same example network as the report, shown below. The network runs the Open Shortest Path First (OSPF) routing protocol, in which each link is configured with a weight and traffic uses the least-weight path to the destination. The left diagram shows such a path from Seattle to Miami when all links are working, and the right diagram shows the path when the Seattle-Los Angeles link fails.
+
+![](/assets/images/outage-anatomy.png){:width="800px"}
+
+The notable thing is that the paths taken by the traffic are deterministic and knowable ahead of time, before any link weight is (re)configured or any failure occurs. For a large network, this computation tends to be too complex for humans, but computers are great for this type of thing.
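
That determinism is easy to see in code. The sketch below computes least-weight OSPF paths with Dijkstra's algorithm over a toy topology; the city names follow the report's example, but the link weights are made up for illustration:

```python
import heapq

def ospf_costs(graph, src):
    """Least-cost distances from src over configured link weights (Dijkstra)."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

# Toy topology; link weights are hypothetical
graph = {
    "Seattle":    {"LosAngeles": 10, "Denver": 30},
    "LosAngeles": {"Seattle": 10, "Dallas": 15},
    "Denver":     {"Seattle": 30, "Dallas": 10},
    "Dallas":     {"LosAngeles": 15, "Denver": 10, "Miami": 10},
    "Miami":      {"Dallas": 10},
}
print(ospf_costs(graph, "Seattle")["Miami"])  # 35, via Los Angeles and Dallas

# Simulate the Seattle-Los Angeles link failing: traffic reroutes via Denver
del graph["Seattle"]["LosAngeles"], graph["LosAngeles"]["Seattle"]
print(ospf_costs(graph, "Seattle")["Miami"])  # 50
```

Both the normal path and the post-failure path fall out of the same computation, before any change is pushed or any failure occurs.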
+
+For the T-Mobile outage, the first trigger was a link weight misconfiguration, which caused traffic to take unintended paths that were not suitably provisioned, thus causing an outage. This error could have been completely prevented by analyzing the impact of the planned change and ensuring that it did the right thing.
+
+The second trigger was a link failure, which caused even more traffic to take unintended paths. This trigger was not preventable—equipment failures are a fact of life in any large network—but networks are designed to tolerate such failures. However, the fault-tolerance that T-Mobile thought existed in their network did not because of a configuration error. Whether this was due to the same misconfiguration as the first trigger or a different one is not clear from the report. In any case, the adverse impact of the failure could have been prevented by simulating failures and ensuring that the network responds correctly to failures.
+
+Another factor behind the outage was a latent bug in the call routing software used by T-Mobile. It appears that the software had not been tested under the conditions induced by the events above.
+
+Finally, T-Mobile’s attempts to alleviate the situation made it worse. They deactivated a link that they thought would divert the traffic to better paths but instead worsened the congestion. This also could have been prevented by analyzing the impact of link deactivation before deactivating it.
+
+### Automated reasoning to the rescue
+
+While the outage was preventable, it would be unfair to pin blame on T-Mobile’s network engineers. Reasoning about the behavior of large networks is an incredibly complex task. Large networks have hundreds to thousands of devices, each with thousands of configuration lines. Judging the correctness of the network configuration requires network engineers to reason about the collective impact of all these configuration lines. Further, they need to do this reasoning not only for the “normal” case but also for all plausible failure cases, and there are a large number of such cases in a network like T-Mobile’s. It is simply unreasonable to expect humans, no matter how skilled, to be able to judge the correctness of configuration or predict the impact of changes.
+
+To overcome these limits of human abilities, we must employ an approach based on software and algorithms, where the correctness of configuration and its response to failures is automatically analyzed. Fortunately, the technology to do this already exists. Tools such as [Batfish](http://www.batfish.org), to which we at Intentionet contribute, can easily do this type of reasoning at the scale and complexity of real-world networks. As an example, see a blueprint we published last year on how failures can be automatically analyzed: [Analyzing the Impact of Failures (and letting loose a Chaos Monkey)](https://pybatfish.readthedocs.io/en/latest/notebooks/linked/analyzing-the-impact-of-failures-and-letting-loose-a-chaos-monkey.html).
+
+### Industry-wide change is needed
+
+I certainly do not mean to single out T-Mobile. The problem is systemic and similar outages have happened at other companies as well. To list a few from this summer alone:
+
+- [IBM Cloud suffers prolonged outage](https://techcrunch.com/2020/06/09/ibm-cloud-suffers-prolonged-outage/)
+- [Inadvertent Routing Error Causing Major Outage](https://securityboulevard.com/2020/09/inadvertent-routing-error-causing-major-outage/)
+- [CenturyLink outage led to a 3.5% drop in global web traffic](https://www.zdnet.com/article/centurylink-outage-led-to-a-3-5-drop-in-global-web-traffic/)
+- [Cloudflare DNS goes down, taking a large piece of the internet with it](https://techcrunch.com/2020/07/17/cloudflare-dns-goes-down-taking-a-large-piece-of-the-internet-with-it/)
+
+Based on my understanding of these outages, each was avoidable if effective and automatic configuration analysis using tools like Batfish were employed. In fact, studies show that 50-80% of all network outages can be attributed to configuration errors.
+
+Modern society relies on computer networks, and this reliance has increased manifold this year as many of us have started working remotely. The networking community must respond by building robust infrastructure where outages are exceedingly rare. We cannot continue to rely on human expertise alone to prevent outages and must start augmenting it systematically with automated reasoning. The hardware and software industries made this leap many years ago and experienced a dramatic improvement in reliability.
+
+We have the technology and the ability, and we should have plenty of motivation. All we need is a collective will to fortify our defenses. It is time.
diff --git a/_posts/2020-12-18-validating-the-validator.md b/_posts/2020-12-18-validating-the-validator.md
new file mode 100644
index 00000000..7669a2a0
--- /dev/null
+++ b/_posts/2020-12-18-validating-the-validator.md
@@ -0,0 +1,66 @@
+---
+title: "Validating the validator"
+date: "2020-12-18"
+author: Victor Heohiardi
+tags:
+ - "batfish"
+ - "intent"
+ - "networkautomation"
+ - "networkvalidation"
+ - "validation"
+---
+
+Batfish provides a unique power to its users: validate network configurations before pushing them to the network. Its analysis is production-scale—unlike with emulation, you don’t need to build a trimmed-down version of your network. It is also comprehensive—it considers all traffic, not just a few flows. These abilities help network engineers proactively fix the errors that are responsible for 50-80% of outages.
+
+Batfish finds these errors by modeling and predicting network behavior given its configuration. The higher the fidelity of Batfish models, the better Batfish is at flagging errors.
+
+So, the question is: _How do we build and validate Batfish models?_
+
+As any network engineer will testify, accurately predicting network behavior based on configuration is super challenging. Device behaviors differ substantially across vendors (Cisco vs. Juniper), across platforms of the same vendor (IOS vs. IOS-XR), and sometimes even across versions of the same platform. Further, it is impossible to build high-fidelity models based solely on reading RFCs or vendor docs. RFCs are silent about vendor config syntax, and vendor docs are often incomplete, ambiguous, and sometimes even misleading. And don’t even get me started on how wrong the Internet is—to see what I mean, try using it to figure out EIGRP metric computation.
+
+To appreciate the need to go beyond RFCs and docs, consider the following FRR configuration.
+
+```
+1 !
+2 ip community-list 14 permit 65001:4
+3 ip community-list 24 permit 65002:4
+4 !
+5 route-map com_update permit 10
+6 match community 14
+7 on-match goto 20
+8 set community 65002:4 additive
+9 !
+10 route-map com_update permit 20
+11 match community 65002:4
+12 set community 65002:5 additive
+13 !
+14 route-map com_update permit 30
+15 match community 24
+16 set community 65002:6 additive
+17 !
+```
+
+If a route with community 65001:4 is processed by this route map, which communities will be attached in the end?
+
+- Will 65002:5 be attached? If you answered ‘yes’, you’d be wrong. The community list referenced on line 11 (65002:4) is not defined and hence the match will not occur.
+- Will 65002:6 be attached? If you answered ‘no’, you’d be wrong. Line 15 will match because the community 65002:4, which was attached earlier in Line 8, matches list ‘24’.
+
+Correctly predicting the behavior of this route map requires that you know 1) that FRR expects defined community lists in ‘match community’ statements, 2) what happens when an undefined list is mentioned, and 3) if ‘match community’ statements can match on communities attached by earlier statements in the route map or only on the original set of communities. It is not easy to glean all this information from the docs.
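
To internalize the two gotchas, here is a toy Python model of this route map (my own simplification, not FRR or Batfish code). The goto on line 7 falls through to sequence 20 anyway, so it is omitted:

```python
def run_route_map(route_comms, clauses, comm_lists):
    """Toy model of the behavior described: an undefined community list never
    matches, and each clause matches against the communities accumulated so far
    (not just the route's original set)."""
    comms = set(route_comms)
    for clause in clauses:
        lst = comm_lists.get(clause["match"])   # None if the list is undefined
        if lst is not None and lst & comms:
            comms |= clause["set"]
    return comms

comm_lists = {"14": {"65001:4"}, "24": {"65002:4"}}  # no list named "65002:4" exists
clauses = [
    {"match": "14",      "set": {"65002:4"}},  # seq 10
    {"match": "65002:4", "set": {"65002:5"}},  # seq 20: undefined list, never matches
    {"match": "24",      "set": {"65002:6"}},  # seq 30: matches 65002:4 added by seq 10
]
print(sorted(run_route_map({"65001:4"}, clauses, comm_lists)))
# ['65001:4', '65002:4', '65002:6']
```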
+
+That is why Batfish models are guided by actual device behaviors. Benchmarking these behaviors in our labs and observing them in many real networks helps us build the models and continuously improve their fidelity.
+
+To build models for a feature on a particular platform and version, we use three types of benchmarks to capture the device behavior in detail.
+
+- **Feature-level benchmarking**. We start by lighting up the device of interest in our lab. Commonly, we use virtual images but need physical devices when that is not an option. We then configure the feature of interest in various ways and observe its behavior for different inputs. For FRR route maps, for instance, we would configure the route map with defined/undefined community lists and inject routes with different communities.
+- **Device-level benchmarking.** Network configuration would be a lot simpler if features didn’t interact with each other, but the reality is different. For instance, for a firewall that supports both network address translation (NAT) and zone filters, one needs to fully understand whether filters are applied after NAT operations and whether they operate on pre- or post-NAT headers. To account for feature interactions, we create more complex scenarios where the features are jointly configured in different ways and benchmark the device behavior in each of those scenarios.
+- **Network-level benchmarking.** We also construct larger network topologies with multiple devices (possibly of different vendors) configured in common patterns, to help validate end-to-end behaviors.
+
+Batfish models faithfully mimic all the behaviors in our benchmarks. Model building and the benchmarking steps are not executed as a strict waterfall. Rather, we follow an iterative process, where we refine the models in successive iterations. For example, network-level benchmarking may uncover a modeling gap, for which we’d go back to Step 1 to fully understand the gap.
+
+The most challenging part of this process is devising interesting test cases. We can do it mainly because of the experience of our engineers and help from the Batfish community members who contribute test cases and report issues.
+
+Ensuring model fidelity is not a one-time process. It is possible that we missed a feature interaction or that model fidelity is compromised when we extend Batfish to more platforms and features. Two activities help us guard against these risks. First, when onboarding a new network and then periodically after that, we compare the show data (interfaces, RIBs, FIBs, etc.) from the network to what is predicted by Batfish. This helps flag any feature interactions that are not modeled. This way, as Batfish encounters more networks, its fidelity keeps improving.
+
+Second, as part of our nightly CI (continuous integration) runs, we check that the network state computed by the latest Batfish code base continues to match the show data from real networks and our benchmarking labs. This helps quickly catch unintended regressions in model fidelity.
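
Conceptually, both checks reduce to a set difference between predicted and observed state. A trivial sketch (illustrative only; the real pipeline first normalizes vendor-specific show output into comparable entries):

```python
def state_diff(predicted, observed):
    """Compare predicted entries (e.g., RIB rows as (prefix, next_hop) tuples)
    against entries parsed from the device's show data."""
    return {
        "missing_in_prediction": sorted(observed - predicted),
        "extra_in_prediction": sorted(predicted - observed),
    }

predicted = {("10.0.0.0/24", "192.0.2.1"), ("10.0.1.0/24", "192.0.2.2")}
observed  = {("10.0.0.0/24", "192.0.2.1"), ("10.0.2.0/24", "192.0.2.3")}
print(state_diff(predicted, observed))
```

A non-empty diff in either direction points at a modeling gap (or a regression) worth investigating.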
+
+All said and done, are Batfish models guaranteed to be perfect? No, but neither are humans tasked with checking configuration correctness today. Think of Batfish as a lethal config-error-killing force that combines the knowledge of the most knowledgeable network engineer you have met for each platform that you analyze. Even those engineers might miss an error. But unlike them, Batfish will never forget the platform idiosyncrasies it has learned over time, and it will always catch situations where your Seattle change interacts poorly with your Chicago configuration. However, Batfish will not go with you to grab a drink to celebrate the complex change you executed without a hitch.
diff --git a/_posts/2021-03-10-dont-be-afraid-of-network-change.md b/_posts/2021-03-10-dont-be-afraid-of-network-change.md
new file mode 100644
index 00000000..15d150af
--- /dev/null
+++ b/_posts/2021-03-10-dont-be-afraid-of-network-change.md
@@ -0,0 +1,53 @@
+---
+title: "Don't be afraid of (network) change"
+date: "2021-03-10"
+author: Ratul Mahajan
+tags:
+ - "batfish"
+ - "batfish-enterprise"
+ - "change-automation"
+ - "change-testing"
+ - "change-validation"
+ - "network-automation"
+ - "network-change-automation"
+ - "network-ci-cd"
+ - "network-validation"
+ - "network-cicd"
+ - "predeployment-validation"
+---
+
+Companies large and small crave agile, resilient networks. They crave infrastructure that adapts rapidly to business needs without outages or security breaches. But changing the network is a risky proposition today, be it adding a firewall rule or provisioning a new rack. 50-80% of network outages are caused by bad network configuration changes. This high level of risk forces networking teams to tread carefully (and slowly) and prevents them from automating network changes.
+
+No wonder Dilbert does not want to be the one updating the firewall.
+
+
+
+![Dilbert Firewall Cartoon](/assets/images/AML-25765_2942121_mutable_color.jpg)
+
+Network changes today center around MOPs (method of procedure) that outline the steps to implement a change, for instance:
+
+- Run pre-change test (e.g. “show” commands) to test that the network is ready for the change
+- Enter configuration commands that implement the change
+- Run post-change tests to confirm that the change worked
+- If something went wrong, roll back the change
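
The steps above can be sketched as a generic skeleton (hypothetical function names, not Batfish Enterprise APIs):

```python
def execute_mop(pre_tests, apply_change, post_tests, rollback):
    """Generic MOP flow: verify readiness, apply, verify outcome, roll back on failure."""
    if not all(test() for test in pre_tests):
        return "aborted: pre-change tests failed"
    apply_change()
    if all(test() for test in post_tests):
        return "success: post-change tests passed"
    rollback()
    return "rolled back: post-change tests failed"

# Example run with stub tests standing in for "show" command checks
log = []
result = execute_mop(
    pre_tests=[lambda: True],             # e.g., route does not exist yet
    apply_change=lambda: log.append("configured"),
    post_tests=[lambda: False],           # e.g., route failed to appear
    rollback=lambda: log.append("rolled back"),
)
print(result, log)  # rolled back: post-change tests failed ['configured', 'rolled back']
```

The skeleton is mechanical; the hard part, as discussed next, is whether the humans who wrote the tests anticipated every special case.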
+
+MOPs are built by network architects and engineers based on their knowledge of the network and are reviewed by Change Approval Boards (CAB).
+
+In any large network that has been operational for months, and is managed by multiple engineers, there invariably are special cases (“snowflakes”) that are all-too-easy for engineers and the CAB to miss. The inability of humans to consider all such cases is what creates the risk of change-induced outages.
+
+For some assurance of correctness, MOPs for complex changes are tested in physical and virtual labs, and the test reports are included in the review process. While labs are helpful, they rarely mimic the full production network (especially the snowflakes). It is also impossible in a lab to run all the test cases needed to fully validate a change—fully validating a firewall change to isolate two network segments must consider billions of test packets. The gap between the lab and production networks, and the incompleteness of tests, means that the CAB does not have a complete view of the impact of the change. The risk of approving an incorrect change remains high.
+
+**Today, it is almost impossible to guarantee that network changes won’t disrupt critical services or open security holes.**
+
+We built the Change Review workflow in Batfish Enterprise for provably safe network changes. This feature enables network engineers and operators to comprehensively validate MOPs. You specify the change commands and pre- and post-change tests (interactively or via API calls). Batfish Enterprise then simulates your full production network (with all of its snowflakes), predicts the full impact of the change (for all traffic), and flags any test failures.
+
+You can now be confident that the planned change is correct and can be safely deployed to the network. You can also attach the proof-of-correctness test reports to change management tickets, making CAB reviews easier and faster.
+
+**With the new Change Review workflow in Batfish Enterprise, you can ensure that the security and availability of the network is never compromised by a configuration change.**
+
+The rigorous validation of the MOP and full visibility into the impact of the change will enable you to reduce outages and dramatically increase change velocity. These correctness guarantees are also the foundation upon which you can automate the network change workflow.
+
+See the solution in action in [these videos](https://youtube.com/playlist?list=PLUXUN_5CNTWLbcrA0m37TBVM_ccfrGiYt).
+
+
+
diff --git a/_posts/2021-03-31-network-test-automation-rock-paper-scissors-lizard-or-fish.md b/_posts/2021-03-31-network-test-automation-rock-paper-scissors-lizard-or-fish.md
new file mode 100644
index 00000000..db0350c5
--- /dev/null
+++ b/_posts/2021-03-31-network-test-automation-rock-paper-scissors-lizard-or-fish.md
@@ -0,0 +1,80 @@
+---
+title: "Network test automation: Rock, Paper, Scissors, Lizard, or Fish?"
+date: "2021-03-31"
+author: Chirag Vyas
+tags:
+ - "batfish"
+ - "change-automation"
+ - "change-testing"
+ - "change-validation"
+ - "emulation"
+ - "gns3"
+ - "network-automation"
+ - "network-change-automation"
+ - "network-ci-cd"
+ - "network-validation"
+ - "network-cicd"
+ - "predeployment-validation"
+ - "simulation"
+---
+
+When building a network automation pipeline, one of the most important questions to consider is: How do you test network changes to prove that they will work as intended and won’t cause an outage or open a security hole? In a world without automation, this burden falls on network engineers and approval boards. **But in a world where network changes are automated, testing of changes must be automated as well.**
+
+There are multiple types of testing that you should consider, such as:
+
+- Data validation (e.g., the input for an IP address is a valid IP address)
+
+- Syntax validation (e.g., configuration commands are syntactically valid)
+
+- Network behavior validation (e.g., firewall rule change will permit intended flows)
+
+I will focus on network behavior validation in this blog. It provides the strongest form of protection by validating the end-to-end impact of changes.
+
+How can you validate network behavior that a change will produce _before_ the change is deployed to the production network? You have two options.
+
+- You can build a lab that **emulates** the production network, using physical or virtual devices, and apply and test the changes there.
+
+- You can **simulate** the change using models of network devices.
+
+GNS3 is a popular emulation tool, and Batfish is a comprehensive, multi-vendor simulation tool. The title of this blog post is a play on their logos (and of course [Big Bang Theory](https://bigbangtheory.fandom.com/wiki/Rock,_Paper,_Scissors,_Lizard,_Spock)), though the discussion below applies equally to other emulation tools such as EVE-NG and VIRL.
+
+As Batfish developers, we are frequently asked if engineers need both tools. The answer is: Both tools should be part of your testing toolkit, as they are built to solve different problems. The table summarizes our view, which I’ll discuss in more detail below. Before proceeding, I should add that GNS3 is an excellent tool and [we use it extensively to build and test high-fidelity device models in Batfish](/2020/12/18/validating-the-validator.html).
+
+| | Batfish | GNS3 |
+| --- | --- | --- |
+| Correctness guarantees | ✔ | ✘ |
+| Configuration compliance and drift | ✔ | ✘ |
+| High-level, vendor-neutral APIs | ✔ | ✘ |
+| Embed in CI | ✔ | ⚠ (Slow, resource heavy) |
+| Analyze production network’s twin | ✔ | ⚠ (Rarely possible) |
+| Test new software versions and features | ✘ | ✔ |
+| Test performance | ✘ | ✔ (Some scenarios) |
+
+Our recommendation: Use the fish for day-to-day configuration changes, and use the lizard for qualifying new software images and lighting up new features.
+
+### Unique Batfish strengths
+
+Batfish has three unique strengths. First, only Batfish can provide correctness guarantees that span all possible flows. For instance, when opening access to a new /24 prefix, you may want to know that no port to that destination prefix is blocked from any source, or that you have not accidentally impacted any other destination. Such guarantees are not possible in GNS3 but are almost trivial in Batfish.
+
+Second, with Batfish you can not only test that the configuration produces the right behavior, but also that it complies with your site standards and has not drifted from its desired state. Batfish builds a vendor-neutral configuration model that you can query to validate, for instance, that the TACACS servers are correct and that the correct route map is attached to each BGP peer. This is not possible with GNS3.
+
+Finally, while both tools will let you check network behaviors by running traceroutes and examining RIBs, only Batfish offers simple, vendor-neutral APIs. These APIs make it trivial, for instance, to check that packets take all expected paths (ECMP) and to understand why a traceroute path was taken. Doing such inference in GNS3 would take lots of careful test generation, vendor-specific parsing of RIBs and “show” data, and manual correlation.
+
+> **Thus, Batfish enables comprehensive testing with strong correctness guarantees and vendor-neutral APIs. As we’ll see next, it also enables you to mimic your full production network and embed testing in your CI framework.**
+
+### Overlapping capabilities but advantage Batfish
+
+There are two testing goals for which you may consider either tool in theory, though Batfish is the pragmatic choice. First, you can invoke either tool in your CI (continuous integration) framework such that the analysis is run automatically for each change. Because of the time it takes to run GNS3 tests and the amount of resources they consume, such usage is only a theoretical possibility for most networks. With Batfish today, many users run a wide array of tests that finish within minutes, even for networks with thousands of devices.
+
+Second, ideally, you want the tests to be run in an environment that closely mimics your production network. Creating an exact replica of the production network is nearly impossible with GNS3. Software images may not be available for some devices; they are certainly not available for cloud gateways. Or, you may run into other limitations such as the number of allowed ports or different interface names. You will need to get around such limitations by creating a smaller network and modifying your network configs, which leaves testing gaps. Batfish can easily mimic your entire production environment, including cloud deployments.
+
+### Unique GNS3 strengths
+
+A unique strength of GNS3 is that it can help qualify software images that you are going to run in production. It can test that the new versions of device software run in a stable manner and that features that you haven’t used before work as expected.
+
+GNS3 can also help you test some aspects of performance. You can test that the device can process large routing tables (though not that it does not drop packets under high load, for which you need real hardware). You can also emulate different link delays and loss rates to evaluate the impact of network conditions on, for instance, event monitoring systems.
+
+One observation I make is that these types of tests do not need to be done frequently. They are needed only when you upgrade software or change network design to use new features. They are not needed for day-to-day changes such as adding firewall rules, updating route policies, or provisioning new racks.
+
+### Summing it up
+
+The choice of network testing tool depends on testing goals. To flag potential bugs in vendor software, you need GNS3. To find errors in your network configuration, you need Batfish. Both testing goals are important. **Thus, we recommend using the lizard to qualify software images and the fish for day-to-day network configuration changes.** That way you will lower the risk of software bugs causing network outages, and you will have configuration change testing that is comprehensive and production-scale.
+
+
+
+_Acknowledgement: Thanks to [Titus Cheung](https://www.linkedin.com/in/titus-cheung-ba62a58/) (HSBC Equities Regional Infrastructure Manager and Architect) for sharing his insights on where each tool applies and providing feedback on this article._
diff --git a/_posts/2021-04-30-test-drive-network-change-mops-without-a-lab.md b/_posts/2021-04-30-test-drive-network-change-mops-without-a-lab.md
new file mode 100644
index 00000000..fc539d69
--- /dev/null
+++ b/_posts/2021-04-30-test-drive-network-change-mops-without-a-lab.md
@@ -0,0 +1,127 @@
+---
+title: "Test drive network change MOPs without a lab"
+date: "2021-04-30"
+author: Matt Brown
+tags:
+ - "batfish"
+ - "batfish-enterprise"
+ - "change-automation"
+ - "change-testing"
+ - "change-validation"
+ - "network-automation"
+ - "network-validation"
+ - "predeployment-validation"
+---
+
+Imagine that you could predict and test the full impact of every single change to the network. Imagine also being able to do this within minutes, for the production network itself (not a small-scale replica), and without having to set up and maintain a test lab. Will this capability enable you to reduce the risk of outages and breaches? Will it enable you to be more responsive to the changing business needs of your organization?
+
+**Change Reviews in Batfish Enterprise enable you to realize these agility and resiliency benefits by letting you test drive your change MOP (method of procedure) on a twin of your production network.** If you like the results, you can confidently push the change to production.
+
+Let us consider an example change: adding a new subnet to the data center fabric. Your MOP may look like the following.
+
+**MOP for adding a new subnet to a leaf in the DC fabric**
+
+**User inputs**
+
+- 10.250.89.0/24: The subnet prefix to add
+- leaf89: The leaf router where to add the subnet
+
+**Pre-change tests**
+
+- Route to 10.250.89.0/24 should NOT already exist in the fabric. Log into a spine router, run “show ip route 10.250.89.0/24”, and check that the subnet prefix is not present.
+
+**Change commands**
+
+Log into leaf89 and enter the following commands after double-checking that Ethernet7 is shut down and vlan 389 is unused.
+
+    interface Ethernet7
+      switchport mode access
+      switchport access vlan 389
+    interface Vlan389
+      ip address 10.250.89.1/24
+      no shutdown
+    router bgp 65089
+      address-family ipv4 unicast
+        network 10.250.0.0/16
+
+**Post-change tests**
+
+- Route to 10.250.89.0/24 should exist in the fabric. Log into a spine router, run “show ip route 10.250.89.0/24”, and check that the subnet prefix exists.
+- Route to the 10.250.0.0/16 aggregate that covers the subnet should exist at the border routers and the firewall. Log into a border router, run “show ip route 10.250.0.0/16”, and check that the aggregate route exists.
+
+**Rollback commands and tests**
+
+Skipped for this article; Batfish Enterprise can help test the rollback procedure as well.
+
+
+To test drive this MOP in Batfish Enterprise, you would first specify the planned implementation. Simply select the device from the list and enter the configuration commands planned for it. When you do that, Batfish Enterprise will check that the commands are valid. In the screenshot below, for instance, it is warning that the interface address is not correctly specified.
+
+![](/assets/images/Screen-Shot-2021-04-29-at-7.30.23-AM.png){:width="800px"}
+
+You would next specify your tests. Batfish Enterprise test templates allow you to check all manner of network behaviors. For our example change, as shown below, you would use the **Devices Have Routes** template to test that the subnet route is not present before the change (a pre-change test) and is present after the change (a post-change test).
+
+![](/assets/images/Screen-Shot-2021-04-29-at-7.33.55-AM.png){:width="800px"}
+
+Compared to what you might do during the maintenance window, this test will check for the presence of the route on all devices, not just one or two you might log into while making the change. You would similarly specify the second post-change test to ensure that the /16 aggregate is present on the border routers and the firewall.
+
+After entering the configuration commands and tests, you tell Batfish Enterprise to evaluate the change. If a test fails, Batfish Enterprise will show you which devices are failing and why. Below, we see that the second test (about the aggregate) is failing: the aggregate prefix is not present on the border routers and the firewall.
+
+![](/assets/images/Screen-Shot-2021-04-29-at-7.37.15-AM.png){:width="800px"}
+
+If you had applied this change during the maintenance window, you’d have had to roll back the change, debug it, and then schedule it for a future maintenance window. With Batfish Enterprise, you make such discoveries ahead of time, and the change can be successfully executed in one maintenance window.
+
+Based on the information provided by Batfish Enterprise, you’d quickly realize that the test is failing because the subnet prefix (**10.250.89.0/24**) is not covered by any of the existing aggregates announced by the border leafs. Past subnet additions succeeded because those prefixes were drawn from existing aggregates. The fix is easy once you make this discovery. You would add another aggregate to the list announced by the two border leafs and update the change specification within Batfish Enterprise. The screenshot below shows the change commands for **bl02**, one of the border leafs.
+
+![](/assets/images/Screen-Shot-2021-04-29-at-7.38.21-AM.png){:width="800px"}
+
+When you run validation again now, Batfish Enterprise will consider the combined impact of changes across all devices and tell you if any test is still failing.
+
+Batfish Enterprise offers two additional ways for you to gain confidence in the change. If you have defined network-level policies—behaviors that must always be true of the network (e.g., the corporate Website should always be accessible from the Internet)—it will check that the change does not violate any of those.
+
+Further, it enables you to see the full impact of the change, including changes in RIBs, FIBs, and end-to-end connectivity. The two screenshots below show that 1) two new prefixes (the subnet prefix and the aggregate) will be added to leaf devices, and 2) new flows will be allowed from the Internet to **leaf89** (where the subnet was added). The connectivity between the Internet and other leafs is unchanged. **Using such views, you can easily verify that the change has exactly the intended impact—no more, no less.**
+
+![](/assets/images/Screen-Shot-2021-04-29-at-7.38.38-AM.png){:width="800px"}
+
+![](/assets/images/Screen-Shot-2021-04-29-at-7.39.09-AM.png){:width="800px"}
+
+You are now done. **Within a matter of minutes, you have validated that your planned change passes your tests, does not violate your network policy, and has exactly the intended impact. You can now enter the maintenance window with confidence!**
+
+* * *
+
+Check out these related resources:
+
+- [Demo video](https://www.youtube.com/watch?v=I_3N72LTj3c&ab_channel=Intentionet) that shows this specific change example in action.
+- [Demo video](https://www.youtube.com/watch?v=K-2WYakenxI&ab_channel=Intentionet) that shows even automated changes can be fully tested before deployment.
diff --git a/_posts/2021-05-18-automating-the-long-pole-of-network-changes.md b/_posts/2021-05-18-automating-the-long-pole-of-network-changes.md
new file mode 100644
index 00000000..0445bf92
--- /dev/null
+++ b/_posts/2021-05-18-automating-the-long-pole-of-network-changes.md
@@ -0,0 +1,114 @@
+---
+title: "Automating the long pole of network changes"
+date: "2021-05-18"
+author: Matt Brown
+tags:
+ - "batfish"
+ - "batfish-enterprise"
+ - "change-automation"
+ - "change-testing"
+ - "change-validation"
+ - "network-automation"
+ - "network-validation"
+ - "predeployment-validation"
+---
+
+When it comes to automating network changes, most network engineers want to start with automatic config generation and deployment. It feels like the heart of the challenge, and it is certainly a fun thing to do.
+
+**But assume for a moment that you’ve automated config generation and deployment. Have you now significantly reduced the time it takes to service change requests?**
+
+If your network is like the many that we (or [the good folks at NetworkToCode](https://www.youtube.com/watch?v=qw6jKa7yLBQ)) work with, chances are that the answer is no. There are several critical and critical-path tasks that have not been automated, including:
+
+- Ensuring that the generated change is correct
+- Reviewing and approving the change
+- Testing the impact of the change post deployment (post-change testing)
+
+Per [Amdahl’s law](https://en.wikipedia.org/wiki/Amdahl%27s_law), unless you automate these tasks too, your end-to-end gains from network automation will be limited. By automating config generation and deployment, you have likely shaved off only tens of minutes from the time it takes you to service a request, without making a big dent in the end-to-end time. This effect is illustrated in the figure below.
+
+![](/assets/images/Screen-Shot-2021-05-11-at-1.40.06-PM.png){:width="800px"}
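
Amdahl's law is easy to see with a quick, back-of-the-envelope calculation. The numbers below are assumed for illustration only (they are not from any measurement in this post):

```python
# Back-of-the-envelope Amdahl's law illustration with assumed numbers: if config
# generation + deployment is 30 minutes of a 10-hour end-to-end change process,
# automating only that step barely moves the needle.

def overall_speedup(total_hours, automated_hours):
    """Speedup of the whole process when one portion is reduced to ~zero."""
    return total_hours / (total_hours - automated_hours)

end_to_end = 10.0  # hours to service a change request (assumed)
gen_deploy = 0.5   # hours spent on config generation + deployment (assumed)

print(f"End-to-end speedup: {overall_speedup(end_to_end, gen_deploy):.2f}x")
```

Even infinitely fast generation and deployment cannot beat this bound; the remaining manual tasks dominate the service time.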
+
+At the same time, without automating testing, the risk of a bad change making it to the network is still high. You cannot count on the change generation script always generating correct changes. It is all too easy for this script to overlook a legacy snowflake in the network or interact poorly with some recent changes (perhaps made manually). You thus still need to validate that the auto-generated change is correct and, for big changes, have a colleague review it for added assurance. This is an error-prone task given the complexity of modern networks.
+
+**Change Reviews in Batfish Enterprise enable you to fully automate change testing, attack the long poles in your change workflows, and make network changes more reliable.**
+
+Let us illustrate how they work via an example: Allowing access to a new service from the outside. The service request may look like the following:
+
+    ticket_id: tkt1234
+    service_name: NewService
+    destination_prefixes: 10.100.40.0/24, 10.200.50.0/24
+    ports: tcp/80, tcp/8080
+
+
+Your change generation script will use the request parameters to generate the configuration commands for one or more devices. For example, it may generate the following change to the Palo Alto firewall at the edge of the network:
+
+    set service S_TCP_80 protocol tcp port 80
+    set service-group SG_NEWSERVICE members S_TCP_80
+    set service S_TCP_8080 protocol tcp port 8080
+    set service-group SG_NEWSERVICE members S_TCP_8080
+
+    set address tkt123-dst1 ip-netmask 10.100.40.0/24
+    set address-group tkt123-dst static tkt123-dst1
+    set address tkt123-dst2 ip-netmask 10.200.50.0/24
+    set address-group tkt123-dst static tkt123-dst2
+
+    set rulebase security rules tkt123 from OUTSIDE
+    set rulebase security rules tkt123 to INSIDE
+    set rulebase security rules tkt123 source any
+    set rulebase security rules tkt123 destination tkt123-dst
+    set rulebase security rules tkt123 application any
+    set rulebase security rules tkt123 service SG_NEWSERVICE
+    set rulebase security rules tkt123 action allow
+
+This change may be generated using Jinja2 templates, an internal source-of-truth like Netbox, or the Palo Alto Ansible module. Regardless of how it is generated, you can submit it to Batfish Enterprise and analyze it using three criteria.
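
As a hypothetical sketch of such a generator, the snippet below derives the Palo Alto "set" commands above from the service-request parameters using plain Python (a Jinja2 template or Ansible module would play the same role). The function and dictionary names are invented for illustration:

```python
# Hypothetical change generator: turns service-request parameters into the
# firewall "set" commands shown above. Plain Python stands in for a Jinja2
# template; all names here are illustrative, not a real Batfish Enterprise API.

request = {
    "ticket_id": "tkt123",
    "service_name": "NEWSERVICE",
    "destination_prefixes": ["10.100.40.0/24", "10.200.50.0/24"],
    "ports": [("tcp", 80), ("tcp", 8080)],
}

def generate_change(req):
    cmds = []
    group = f"SG_{req['service_name']}"
    for proto, port in req["ports"]:
        svc = f"S_{proto.upper()}_{port}"
        cmds.append(f"set service {svc} protocol {proto} port {port}")
        cmds.append(f"set service-group {group} members {svc}")
    addr_group = f"{req['ticket_id']}-dst"
    for i, prefix in enumerate(req["destination_prefixes"], start=1):
        cmds.append(f"set address {addr_group}{i} ip-netmask {prefix}")
        cmds.append(f"set address-group {addr_group} static {addr_group}{i}")
    rule = req["ticket_id"]
    cmds += [
        f"set rulebase security rules {rule} from OUTSIDE",
        f"set rulebase security rules {rule} to INSIDE",
        f"set rulebase security rules {rule} source any",
        f"set rulebase security rules {rule} destination {addr_group}",
        f"set rulebase security rules {rule} application any",
        f"set rulebase security rules {rule} service {group}",
        f"set rulebase security rules {rule} action allow",
    ]
    return cmds

print("\n".join(generate_change(request)))
```

The same request dictionary can also drive test generation, which is what makes the workflow described below possible.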
+
+### Criterion 1: The change should have the intended behavior
+
+The first criterion is that the change should result in the intended behavior. In our example, that means the firewall should allow the traffic to the service and external sources should be able to reach the service. Batfish Enterprise provides a variety of test templates that enable you to validate the network behavior before and after the change (pre- and post-change tests).
+
+For our example, you would use the **Service Accessibility** template to test that the service is not accessible from the Internet before the change and is accessible after the change. This test can be expressed using the following YAML:
+
+![](/assets/images/Screen-Shot-2021-05-11-at-2.43.29-PM.png){:width="600px"}
+
+You would also use the **Cross-Zone Firewall Policy** template to test firewall behavior for service traffic from the outside zone to inside zones. The firewall should not allow this traffic before the change and should allow it after. Both end-to-end service accessibility and firewall-focused tests are useful because they can yield different results. The firewall-focused test may pass while the end-to-end test still fails if the traffic is blocked elsewhere on the path.
+
+You would generate these change-behavior tests as part of change generation. Thus, based on the service request parameters, you are generating the change and the tests for the change.
+
+### Criterion 2: The change should not violate network policy
+
+In addition to the change meeting its behavioral specification, you must also ensure that it does not violate any network policy. Network policies are behaviors that must always be true, independent of the change. For example, certain subnets must never be accessible from the Internet, and access to the main corporate website must never be blocked. It is possible for change behavior tests to pass while a key network policy is violated, for instance, if request parameters are wrong. Such changes must not reach the network.
+
+With Batfish Enterprise, once you’ve configured the network policies, they are evaluated automatically for each change. Policies use the same basic templates as change behavior tests. A policy that certain services must never be accessible from the Internet will use the **Service Protection** template as follows:
+
+![](/assets/images/Screen-Shot-2021-05-11-at-2.45.28-PM.png){:width="600px"}
+
+### Criterion 3: The change should not cause collateral damage
+
+A final criterion to build confidence in the change is testing that it does not do collateral damage, e.g., accidentally allowing more traffic than intended. Batfish Enterprise can predict the full impact of the change, including how the routing tables will differ and how traffic will be permitted or blocked by it. The screenshot below shows how the network connectivity is impacted by our example change. We see that the Internet can now reach **leaf40** and **leaf50**, where the two prefixes are hosted, and no other traffic will be allowed or denied as a result of the change.
+
+![](/assets/images/Screen-Shot-2021-05-11-at-2.50.54-PM.png){:width="800px"}
+
+These differences enable you to determine if the change has exactly the impact you want—no more, no less. Using Batfish Enterprise APIs, you can assert that the change does not allow traffic to any leaf routers other than the ones corresponding to the service request, or that the routing tables are not altered on any router.
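
The kind of assertion you might write looks roughly like the sketch below. This is not the actual Batfish Enterprise API; the DataFrame and its columns are made up, standing in for a "newly allowed flows" result:

```python
# Illustrative sketch only (not the Batfish Enterprise API): given a table of
# flows newly allowed by the change, assert that they reach only the intended
# leafs. The DataFrame and its column names are invented for this example.
import pandas as pd

new_flows = pd.DataFrame(
    [
        {"dst_device": "leaf40", "dst_ip": "10.100.40.10", "dst_port": 80},
        {"dst_device": "leaf50", "dst_ip": "10.200.50.10", "dst_port": 8080},
    ]
)

intended = {"leaf40", "leaf50"}  # leafs hosting the requested prefixes
unexpected = new_flows[~new_flows["dst_device"].isin(intended)]
assert unexpected.empty, f"Collateral damage detected:\n{unexpected}"
```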
+
+### Rapid iteration
+
+When you evaluate a change per the criteria above, you may find that one or more tests fail. Batfish Enterprise helps you debug such failures quickly. For our example, we may find that while we successfully opened the firewall, end-to-end connectivity is still missing. Batfish Enterprise will then output (screenshot below — the same information is available programmatically) an example flow (within the traffic we intended to allow) that cannot reach NewService. It will also show that this traffic is being dropped at the border routers and that there is in fact a different flow that can reach the service. We can see that the difference between these two flows is the destination port: **tcp/8080** versus **tcp/80**.
+
+![](/assets/images/Screen-Shot-2021-05-11-at-2.54.51-PM.png){:width="800px"}
+
+What is happening here is that port 8080 is not open at the border routers. Thus, we have a situation where some traffic can reach NewService and some cannot. Without the comprehensive analysis provided by Batfish Enterprise, you may incorrectly infer that the change is correct.
+
+**Importantly, Batfish Enterprise finds such problems with the planned change prior to the maintenance window.** Without it, you may discover such problems only during the maintenance window. At that point, you’d have to roll back the change, debug it, and schedule it for another maintenance window. That would significantly stretch the service request time.
+
+After determining the root cause, you could modify your change generation script and rerun all tests with its new output. Or, because script modifications take time, you may want to first determine the correct change manually to close the service request in a timely manner. Batfish Enterprise lets you mix automatic and manual changes and test their combined impact.
+
+### Simplified reviewing
+
+To facilitate change reviews, you can attach the results of all the change behavior tests, the results of network policy evaluation, and the full impact report of the change to the service request ticket. This information makes it super easy for the reviewer to approve the change. Once approved, you deploy the change to the production network with the confidence that it will work exactly as intended.
+
+### Summary
+
+Chances are that your team spends more time on ensuring that the config change is correct than on generation and deployment. To realize the end-to-end gains from automation, you must automate not just config generation and deployment, but also testing. Batfish Enterprise enables you to build an end-to-end automation pipeline that automates testing and simplifies reviews. The result is a network that moves fast and does not break.
+
+* * *
+
+Check out these related resources:
+
+- [Demo video](https://www.youtube.com/watch?v=aupJMkfuHR8&ab_channel=Intentionet) that shows this specific example in action
diff --git a/_posts/2021-05-26-closing-the-loop-on-testing-network-changes.md b/_posts/2021-05-26-closing-the-loop-on-testing-network-changes.md
new file mode 100644
index 00000000..b184285d
--- /dev/null
+++ b/_posts/2021-05-26-closing-the-loop-on-testing-network-changes.md
@@ -0,0 +1,144 @@
+---
+title: "Closing the loop on testing network changes"
+date: "2021-05-26"
+author: Dinesh Dutt and Ratul Mahajan
+tags:
+ - "batfish"
+ - "change-automation"
+ - "change-testing"
+ - "change-validation"
+ - "network-automation"
+ - "network-validation"
+ - "predeployment-validation"
+ - "suzieq"
+---
+
+“_..the best way to guard against error is to design systems with layered and overlapping defenses ... like slices of Swiss cheese being layered on top of one another until there were no holes you could see through_” - from The Premonition, Michael Lewis
+
+_This post was co-authored by Dinesh Dutt and Ratul Mahajan. It was also posted on [Medium](https://medium.com/the-elegant-network/closing-the-loop-on-testing-network-changes-44c356c1ca43) and [The Elegant Network](https://elegantnetwork.github.io/posts/closing-the-loop-testing/)._
+
+Network changes, such as adding a rack, adding a VLAN or a BGP peer or upgrading the OS, can easily cause an outage and materially impact your business. Rigorous testing is key to minimizing the chances of change-induced outages. A central tenet of such testing is test automation—a program should do the testing, not (error-prone) humans. Test automation should target all stages of the change process. Prior to deployment, it should test that the change is correct and that the network is ready for it. Post deployment, it should test that change was correctly deployed and had the intended impact. This **“closed-loop test automation”** makes the change process highly resilient and catches problems as early as possible.
+
+But writing the code to automate network testing can be quite complicated. For instance, if you were adding a new leaf, prior to deploying your change, you may want to test that the IP addresses on the new leaf do not overlap with existing ones. So, you may write a script that mines addresses from configurations and then checks for uniqueness. Similarly, post deployment, you may want to test that all spines have the new prefix. So, you may write a script to fetch and process “show” data from all spines. Writing such scripts is fairly complex because you have to know the right commands, know how to extract the data (via textfsm or json queries), and so on. Sometimes, you have to combine information from multiple commands. And of course, this is different for every vendor and, in many cases, across different versions from the same vendor. Your ability to automate network testing increases dramatically if you have tools that take out much of this complexity. In this article, we cover two such open-source tools, Batfish and Suzieq, that help you easily automate closed-loop testing.
+
+### The three stages of closed-loop test automation
+
+Closed-loop network testing has three stages:
+
+1. **Pre-approval testing:** Before you schedule the planned change for a maintenance window, test that it is correct.
+2. **Deployment pre-testing:** Before you deploy the change, test that the network is in a state where the change is safe to make. It is not uncommon that the current device state has drifted from what is assumed to have been configured.
+3. **Deployment post-testing:** Right after you deploy the change to the network, test that the change produced the intended behavior.
+
+![](/assets/images/Screen-Shot-2021-05-27-at-11.46.46-PM.png){:width="800px"}
+
+The figure places these stages within a typical change workflow. Pre-approval testing is done when you are designing and reviewing the change, and deployment pre- and post-testing is done during the maintenance window.
+
+Experienced network engineers will recognize that the three stages map to how manual change validation happens today. Peer-reviews fill in for pre-approval testing, and running show commands before and after the change fill in for deployment pre- and post-tests. We show how you can automate such validation. To be clear, we are not claiming that automatic testing can or should completely supplant human judgement. Rather, automation and humans can work together to make network changes more efficient and robust. This hybrid mode is akin to software, where developers use both a battery of automatic tests and peer reviews, and information produced by automatic tests greatly assists human reviewers.
+
+### Tools of the trade
+
+Let us first introduce the tools of the test automation trade that we’ll use.
+
+#### Batfish
+
+[Batfish](https://batfish.org) analyzes network configuration to validate that the behavior of individual devices and the network as a whole matches the user’s intent. It constructs a model of the devices (Cisco router, Arista router, Palo Alto firewall etc.) and uses the device configuration files to put the model in the state specified by the configuration. It can then use formal verification analysis to comprehensively answer questions such as whether a BGP session is up, if an IP address can communicate with another, and so on. Because of its reliance on only device config files, it is a simple yet powerful tool that doesn’t require you to run slow and expensive emulation tools such as GNS3, EVE-NG or Vagrant to validate network behavior.
+
+#### Suzieq
+
+[Suzieq](https://github.com/netenglabs/suzieq) is the first open source multi-vendor network observability platform that gathers operational data about the network to answer complex questions about the health of the network. It enables both deep data collection and analysis. It can fetch all kinds of data from the network, including routing protocol state, interface state, network overlay state and packet forwarding table state. You can then easily perform sophisticated analysis of the collected data via a GUI, a CLI, or programmatically. Using either a REST API or the native Python interface, you can also write tests that assert that the network is behaving as you intended (e.g., is this interface up? is this route present?).
+
+You may be wondering: why do I need two tools? The short answer is that their testing capabilities are complementary. **Batfish reasons about network behavior based on configuration** (which may not have been deployed yet) and provides comprehensive guarantees for large sets of packets. **Suzieq reasons about actual network behavior based on operational data.** You use Batfish before pushing the change to the network for comprehensive analysis that the change is correct. This analysis assumes that the network is in a certain state at the time of deployment. Suzieq first helps validate those assumptions, and then also helps ensure that the change had the intended impact after it is deployed.
+
+Testing focus aside, there are many similarities between the two tools that make them work well together. Both Batfish and Suzieq are multi-vendor and normalize information across vendors, so your tests are independent of the specific vendors in your network. Both have Python libraries that make it easy to build end-to-end workflows. And they both use the popular Pandas data analysis framework to present and analyze information. Pandas represents data using a DataFrame, a tabular data structure with rows and columns. You can find out the names of the columns in a DataFrame using its columns attribute and use its powerful data analysis methods to inspect and analyze network behavior. A particularly useful method is “query”, which filters rows per user specifications. Some examples of using query are [here](https://suzieq.readthedocs.io/en/latest/pandas-query-examples/).
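
For a hands-on taste of this shared Pandas interface, the snippet below inspects and filters a DataFrame. The rows are made up, standing in for the kind of BGP-session table either tool returns:

```python
# Made-up rows standing in for the kind of DataFrame that Batfish or Suzieq
# returns for BGP sessions; the point is the Pandas inspection/filter pattern.
import pandas as pd

sessions = pd.DataFrame(
    [
        {"node": "leaf01", "peer": "spine01", "state": "Established"},
        {"node": "leaf01", "peer": "spine02", "state": "Established"},
        {"node": "leaf02", "peer": "spine01", "state": "NotEstd"},
    ]
)

print(sessions.columns.tolist())                 # inspect the available columns
down = sessions.query("state != 'Established'")  # "query" filters rows
print(down[["node", "peer"]])                    # the one session that is down
```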
+
+### Implementing closed loop testing
+
+We now illustrate how Batfish and Suzieq combine to enable closed-loop network test automation. We will consider a trivial 2-leaf, 2-spine Clos topology, though the testing workflow and the example tests below apply equally to bigger, real-world networks. For brevity, however, we describe only a handful of tests in this post. A real test suite should have more tests that cover additional concerns. Developing a good test suite is an art form by itself, and we’ll cover this topic in a future post. The configuration fragments and tests used in this post are available in [this GitHub repo](https://github.com/netenglabs/closed-loop-testing-blog).
+
+#### Installing the Software
+
+Download the Docker container for Batfish and launch the service as follows:
+
+    docker pull batfish/batfish
+    docker run --name batfish -v batfish-data:/data -p 9997:9997 -p 9996:9996 batfish/batfish
+
+Install pybatfish (the Python client for Batfish) and Suzieq in a Python virtual environment. You need at least Python version 3.7.1 to use these two packages.
+
+    pip install pybatfish
+    pip install suzieq
+
+A quick introduction to the Pythonic interfaces of Batfish and Suzieq is useful now. The Python APIs of Batfish are documented [here](https://pybatfish.readthedocs.io/en/latest/index.html). Batfish provides a set of questions that return information about your network such as the properties of nodes, interfaces, BGP sessions, and routing tables. For example, to get the status of all BGP sessions, you would use the **bgpSessionStatus** question as follows.
+
+    bfq.bgpSessionStatus().answer().frame()
+
+The **.answer().frame()** part transforms the information returned by the question into a DataFrame that you can inspect and test using Pandas APIs.
+
+Suzieq’s Python interface is defined [here](https://suzieq.readthedocs.io/en/latest/developer/pythonAPI/). Suzieq organizes information in tables. For example, you can get the BGP table via:
+
+    bgp_tbl = get_sqobject('bgp')
+
+Every table contains a set of functions that return a Pandas DataFrame. Two common functions are get() and aver() (because assert is a Python keyword, Suzieq uses aver, an old synonym). Because Suzieq analyzes the operational state of the network, you must first gather this state by running the Suzieq poller for the devices of interest. [These instructions](https://suzieq.readthedocs.io/en/latest/poller/) will help you start the poller on your network.
+
+You are now ready to implement the first stage of closed-loop testing.
+
+#### Pre-approval testing
+
+The goal of pre-approval testing is to ensure that the change is correct. We show how a Python program using Batfish, the tool we’ll use in this stage of testing, helps you catch errors in your change. What exactly you test during pre-approval testing depends on the change. Let’s continue with our example of adding a leaf to the network. Suppose we created the new leaf’s config from the existing config of one of the other leaves, or used a template and Ansible to generate the config for the new device. But due to a cut-and-paste or coding error, one of the interface IP addresses is that of another device, not the one you’re deploying. A string of numbers is easy to miss even with a peer review. Batfish can easily catch such errors.
+
+To start pre-approval testing with Batfish, put the configuration files of each router in a directory as described [here](https://pybatfish.readthedocs.io/en/latest/notebooks/interacting.html#Uploading-configurations). You can add leaf03’s config along with the modified spine configs to the configs subdirectory.
+
+You can write Python programs that use the Batfish API to automate your pre-approval testing. Here is an example of such a program.
+
+![](/assets/images/fig-2.png){:width="800px"}
+
+This program initializes the network snapshot (with planned config modifications) in **init\_bf()** and defines two tests. **test\_bgp\_status()** uses the **bgpSessionStatus** question to validate that all BGP sessions will be established after the change. **test\_all\_svi\_prefixes\_are\_on\_all\_leafs()** verifies that the SVI prefixes will be reachable on all nodes. It uses the **interfaceProperties** question to retrieve all SVI prefixes and verifies that each is reachable on all nodes. You retrieve the list of nodes using the **nodeProperties** question.
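
Since the program itself appears only as a screenshot, here is a rough, runnable approximation of **test\_bgp\_status()**. The small hand-made DataFrame stands in for what **bfq.bgpSessionStatus().answer().frame()** would return from a live Batfish service:

```python
# Rough approximation of the screenshotted test_bgp_status(); the hand-made
# DataFrame stands in for bfq.bgpSessionStatus().answer().frame(), so this
# runs without a Batfish service. Real answers carry more columns.
import pandas as pd

def get_bgp_session_status():
    # In the real program: bfq.bgpSessionStatus().answer().frame()
    return pd.DataFrame(
        [
            {"Node": "leaf01", "Remote_Node": "spine01", "Established_Status": "ESTABLISHED"},
            {"Node": "leaf01", "Remote_Node": "spine02", "Established_Status": "ESTABLISHED"},
            {"Node": "leaf02", "Remote_Node": "spine01", "Established_Status": "ESTABLISHED"},
        ]
    )

def test_bgp_status():
    sessions = get_bgp_session_status()
    down = sessions[sessions["Established_Status"] != "ESTABLISHED"]
    assert down.empty, f"Sessions not established:\n{down}"

test_bgp_status()  # raises AssertionError if any session is down
```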
+
+**TIP:** The first time you use Batfish on your network, take a look at the output of **bfq.initIssues().answer().frame()** to confirm that Batfish understands it well. The output of this command is also a good thing to check when a test fails because problems such as syntax errors are also reported in it.
+
+Hopefully, you now see the power of automated testing with tools like Batfish and Suzieq. A few lines of code can validate complex end-to-end behaviors across your entire network. When you add another leaf or spine, you can run this test suite as is. In fact, you can run the same test suite across different vendors. Our example network uses Arista EOS. You won’t have to change even a single line if it used Cisco or Juniper or Cumulus or a mix.
+
+You can even use pytest, the Python testing framework, to run the tests and make full use of an advanced testing framework. If any of the assertions fail, pytest will report them, and you can investigate the error, fix the config change, and rerun the test suite.
+
+Good testing tools also make it easy to debug test failures. How you do that depends on the test. For example, if we had assigned an incorrect interface IP address on the new leaf, **test\_bgp\_status()** would fail because not all sessions would be in **ESTABLISHED** state. You may then look at the output of the **bgpSessionStatus** question, which for this example will show that the sessions on leaf03 and spine01 are incompatible. To understand why, you can run the **bgpSessionCompatibility** question as follows.
+
+![](/assets/images/fig-3.png){:width="1000px"}
+
+This output tells you that you likely have the wrong IP address on leaf03 (**NO\_LOCAL\_IP**), and that spine01 expects to establish a session to 10.127.0.5 but no such IP is present in the snapshot (**UNKNOWN\_REMOTE**). If you fix the configs, and rerun the tests, they should all pass now, and you can be confident that your change is ready to be scheduled for deployment.
+
+#### Deployment pre-testing
+
+Pre-approval testing happened against the network snapshot that existed then. When the time comes to deploy the change during the maintenance window, the network may be in a different state. Some links may have failed and the planned change could interact with the failures in unexpected ways. Or, the network’s configurations may have drifted in an incompatible way. We must thus test that the change is safe to deploy right before deploying it.
+
+A combination of Batfish and Suzieq enables deployment pre-testing. Suzieq will fetch the latest network configs and state. You can feed those new configs to Batfish along with the planned config change and re-run the tests that you ran before. This re-run confirms that the change is still correct and is compatible with the current network configuration.
+
+Suzieq helps you test that the network is in a state that is ready for the change you’re about to make. For example, if one of the spines is down, then attempts to configure it will fail. Similarly, you must verify that the spine configuration change is using a port that is not already in use. It is important to double-check our assumptions about the state of the network. Measure twice, cut once, as the adage goes. If there’s an unexpected surprise, you can abort the change before making it (no rollback needed).
+
+As in the case of Batfish, your automated test suite will be a Python program. The following snippet shows how you can use Suzieq to test that the spines are alive, that port Ethernet3 being provisioned to connect to the new leaf is free, and that the SVI prefix being allocated is unused.
+
+![](/assets/images/fig-4.png){:width="800px"}
+
+Each test uses **get\_sqobject()** to get the relevant tables, then uses the **get()** function to retrieve the rows and columns of interest, and finally checks that a specific column has an expected value on all nodes. The **.all()** checks that the field has that value on all rows of the retrieved dataset. Thus, the test to check that all spines are alive uses the “device” table to retrieve information about the spines, and checks that the “status” column has the value “alive” in all rows. **test\_spine\_port\_is\_free()** assumes that the spine ports have been cabled up and uses the lack of an LLDP peer to confirm that the port connecting to the new leaf is unused.
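To make the shape of these checks concrete, here is a minimal sketch of the same logic over hardcoded rows standing in for Suzieq's "device" and "lldp" tables. All hostnames, interface names, and statuses below are invented; in a real test the rows would come from `get_sqobject(...)().get(...)`:

```python
# Invented rows standing in for Suzieq's "device" and "lldp" tables.
devices = [
    {"hostname": "spine01", "status": "alive"},
    {"hostname": "spine02", "status": "alive"},
]
lldp = [  # LLDP neighbors currently seen on the spines
    {"hostname": "spine01", "ifname": "Ethernet1", "peerHostname": "leaf01"},
    {"hostname": "spine01", "ifname": "Ethernet2", "peerHostname": "leaf02"},
]

SPINE_IFNAME = "Ethernet3"  # hypothetical port being provisioned for the new leaf

def all_spines_alive(device_rows):
    """Mirror of the 'status == alive on all rows' check."""
    return all(r["status"] == "alive" for r in device_rows)

def spine_port_is_free(lldp_rows, ifname):
    """A port with no LLDP peer is treated as unused."""
    return not any(r["ifname"] == ifname for r in lldp_rows)
```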
+
+Like Batfish, this code is vendor-agnostic and works for any Suzieq-supported vendor. If you add additional leafs, you just need to change the values of SPINE\_IFNAME and NEW\_SVI\_PREFIX. This is the power of writing tests using frameworks like Suzieq.
+
+If all deployment pre-tests pass, you can confidently deploy the change. But before you declare victory, you still need to test that the deployment went as planned. So, let's do that next.
+
+#### Deployment post-testing
+
+Deployment post-testing aims to verify that the change was successful. For our example change, a simple list of things to test includes: the spines are now peering correctly with the new leaf, the new SVI prefix is correctly assigned, and the SVI prefixes of all leafs are reachable via all other leafs.
+
+The Python program to test all this looks as follows.
+
+![](/assets/images/fig-5.png){:width="800px"}
+
+Before running deployment post-tests, make sure that you have added the new leaf to the Suzieq inventory and restarted the poller to gather data from the new leaf as well.
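One of the post-tests, that every leaf has learned the new SVI prefix, reduces to a simple set difference. A minimal sketch with invented hostnames and an invented prefix, standing in for rows from Suzieq's "routes" table:

```python
# Invented rows standing in for Suzieq's "routes" table after the change.
routes = [
    {"hostname": "leaf01", "prefix": "172.16.3.0/24"},
    {"hostname": "leaf02", "prefix": "172.16.3.0/24"},
    {"hostname": "leaf03", "prefix": "172.16.3.0/24"},
]

NEW_SVI_PREFIX = "172.16.3.0/24"  # hypothetical SVI prefix for the new leaf

def leafs_missing_prefix(route_rows, leafs, prefix):
    """Return the leafs that did not learn the new SVI prefix."""
    have = {r["hostname"] for r in route_rows if r["prefix"] == prefix}
    return sorted(set(leafs) - have)

# A pytest assertion would be:
#     assert leafs_missing_prefix(routes, ALL_LEAFS, NEW_SVI_PREFIX) == []
```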
+
+Suzieq can also perform a battery of tests using a table’s **aver** method. For example, if you had accidentally deployed the config with the incorrect IP address on leaf03, you can use the interface aver, which checks for consistency of addresses, MTUs, VLANs, etc. The output would look as follows:
+
+![](/assets/images/fig-6.png){:width="1000px"}
+
+Now you can examine the interface IP addresses of leaf03/Ethernet1 and spine01/Ethernet3 to determine which of the two has the incorrect IP address and fix it.
+
+### Summary
+
+As in the software domain, rigorous testing is key to evolving the network rapidly and safely. Closed-loop testing leads to the least surprise with the most reliability. It is done in three stages—pre-approval testing, deployment pre-testing, and deployment post-testing—and catches errors at the appropriate point during the deployment. However, writing automated tests can be a complex process without the help of appropriate tools. Fortunately, open source tools like Batfish and Suzieq greatly simplify writing and maintaining automated tests. Give them a try and make your network changes robust and error-free.
diff --git a/_posts/2021-06-16-the-networking-test-pyramid.md b/_posts/2021-06-16-the-networking-test-pyramid.md
new file mode 100644
index 00000000..3a9784d0
--- /dev/null
+++ b/_posts/2021-06-16-the-networking-test-pyramid.md
@@ -0,0 +1,131 @@
+---
+title: "The networking test pyramid"
+date: "2021-06-16"
+author: Ratul Mahajan
+tags:
+ - "batfish"
+ - "change-automation"
+ - "change-testing"
+ - "change-validation"
+ - "network-automation"
+ - "network-validation"
+ - "predeployment-validation"
+ - "test-pyramid"
+---
+
+An automated test suite is the key to continuous integration (CI), the DevOps practice of rapidly integrating changes into mainline. The test suite is run on every change to check that individual modules and the full system continue to behave as expected as developers add new features or modify existing ones. A high-quality test suite gives developers and reviewers the confidence that the changes are correct and do not cause collateral damage.
+
+**As networks become more like software (“Infrastructure-as-Code”), automated test suites—sometimes also called “policies” or “policies-as-code”—are essential to safely and rapidly evolving networks to meet new business needs.** However, little guidance is available for network engineers toward creating a good test suite. What are all the types of tests that one should consider to cover all the bases? What is the purpose of each type and how do different types relate to each other?
+
+To help create high-quality test suites, we present the _networking test pyramid_. This pyramid is adapted from the well-known software test pyramid and groups tests into increasing granularity levels from checking individual elements of the network to checking end-to-end behavior. While this concept applies to all types of network testing, our focus is on testing network configurations—and changes to them—because most outages are related to configuration changes. The tested configurations and changes may be generated manually or automatically (e.g., using templates and a source-of-truth).
+
+In addition to a sound conceptual framework, creating a high-quality test suite also needs a practical testing framework. As anyone who has written automatic tests will testify, the choice of this framework is critical and directly influences what you can or cannot test. **An ideal test framework enables you to easily express tests at multiple granularities and frees you from worrying about low level details such as syntax and semantics of various device vendors.** We will show how [Batfish](https://www.batfish.org), an advanced network configuration analysis framework, rises to this challenge.
+
+### The Software Test Pyramid
+
+Before discussing the networking test pyramid, let us review the software test pyramid. Mike Cohn introduced the software test pyramid in his book “Succeeding with Agile” in 2009. Several versions of the pyramid have been proposed since. We use [Martin Fowler's version](https://martinfowler.com/articles/practical-test-pyramid.html) as our reference.
+
+![](/assets/images/Screen-Shot-2021-06-04-at-11.41.06-PM.png){:width="600px"}
+
+At the bottom, we have unit tests that check individual modules; in the middle, we have integration tests that check interactions between two or more modules; and at the top, we have end-to-end tests that check end-to-end behaviors such as responses to user requests. As we move from bottom to top, the focus changes from testing isolated aspects of the system to testing complex interactions.
+
+Because there are fewer interactions at the unit test level, these tests tend to be faster to execute and their failures easier to debug. Unit tests can also check modules for a broader range of inputs that may be hard to reproduce as part of an integration or E2E test. However, these advantages of unit tests do not imply that higher-level tests can be ignored. They are closer to what users and applications experience and validate interactions that may have been left untested by unit tests because of a missing test case, the difficulty of testing, or incorrect assumptions about how other modules behave. Many an Internet meme drives home this point.
+
+
+Given the unique strengths of different pyramid levels, a good test suite contains tests at all levels. That helps you find and fix errors easily, achieve better test coverage, and directly validate user experience. By borrowing the test pyramid concept, we can develop networking test suites with similar advantages.
+
+### The Test Pyramid applied to Networking
+
+Our adaptation of the test pyramid is shown below.
+
+![](/assets/images/Screen-Shot-2021-06-07-at-3.22.50-PM.png){:width="600px"}
+
+While it has a different number of levels and uses different terminology, it retains the essence of the software test pyramid. As we move from bottom to top, the focus changes from testing localized, isolated aspects to complex interactions. Listing the levels from bottom to top:
+
+1. **Configuration content:** At this level, you test if device configurations contain what you expect. It includes checks such as whether interface IP addresses are correct, whether BGP peers are assigned to peer groups, whether access control list (ACL) names follow site standards, and whether certain lines appear in the config. You are not validating behavior at this level (which will happen later) but operating at the textual layer of configurations. You are primarily checking that human, coding, or source-of-truth error (depending on how you generate configs) has not led to improper configuration content.
+2. **Device behavior:** At this level, you test the behavior of individual devices, such as which packets their ACLs permit or deny and how their route maps process BGP announcements. When you tested configuration content, you did not directly test behaviors that result from the interaction of many configuration lines. For instance, the behavior of an ACL depends on the order of its lines and on the definitions of the address groups it uses; ACL behavior testing ensures that all of these aspects combine to produce the expected behavior. As the debate about unit-vs-integration tests illustrates, it is important to test higher-level behaviors even when lower-level aspects are tested well.
+3. **Device adjacencies:** At this level, you test if devices can establish the right adjacencies such as BGP sessions, GRE tunnels, and VPNs. These adjacencies form the foundation of network behavior, so testing them directly is important. It is possible to have a test suite where all configuration-content and device-behavior tests pass but protocol adjacency tests fail due to incompatibility between devices.
+4. **End-to-end:** At this level, you test the end-to-end behavior of the network’s control and data planes: whether the network propagates and filters routing information as expected and whether it forwards and drops traffic as expected. These tests are powerful because they directly test application experience and exercise a broad range of underlying aspects. When these tests pass, applications are more likely to experience a functioning network, and you gain confidence that many of the underlying aspects are correct and interacting as expected.
+
+If you wanted to map these levels to the software pyramid, you could think of a device as a module. The device behavior, device adjacencies, and end-to-end networking levels then map, respectively, to the unit, integration, and E2E software levels. The software counterpart of configuration content testing would be analyses such as linting and checking compliance with formatting and variable-naming rules. These non-behavioral checks are not in the software test pyramid, though software does undergo them (often as part of the build process).
+
+### Putting the Networking Test Pyramid to Practice
+
+Now that we have learned the theory of the networking test pyramid, it is time to put it to practice. We do that by developing a test suite using [pybatfish](https://pybatfish.readthedocs.io/en/latest/index.html), the Python client of Batfish, for the data center network below. It is a multi-vendor network with an eBGP-based leaf-spine fabric (Arista), along with a firewall (Palo Alto), and border routers (Juniper MX) that connect to the outside world.
+
+![](/assets/images/Screen-Shot-2021-06-06-at-9.02.31-PM.png){:width="600px"}
+
+In this data center, there are many aspects you’d want to test to get confidence in its behavior and configuration changes. For brevity, we will focus on testing aspects related to connectivity between leafs and to the outside world. A comprehensive test suite will check many other aspects as well, such as NTP servers, interface MTUs, connectivity to management devices, and so on. [This GitHub repository](https://github.com/intentionet/test-pyramid) has the code for all the tests below, implemented in the pytest framework.
+
+#### Configuration content
+
+We start with the lowest level of the pyramid, where we check configuration content. Given our focus on basic connectivity, we may write the following tests.
+
+1. All interface IPs must have expected values. The expected values depend on the setup. The exact value for each interface could come from an IPAM, or the requirement could simply be that all values are drawn from certain prefixes and are unique. Our example tests use the second criterion.
+2. All local ASNs must have expected values. Again, the expected values depend on the setup. Our example tests assume that each layer of the DC must use local ASNs within a range and that ASNs are globally unique.
+3. All configured BGP peers must have expected remote ASNs. This test is based on the ASN allocation above, so the remote ASNs of spine-facing peers on leafs must be within the range used for the spines.
+4. All leaf routers must have EBGP multipath enabled, to load balance via ECMP.
+
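The first of these checks (unique interface IPs drawn from approved prefixes) reduces to a few lines of stdlib Python once the interface IPs are in hand. A minimal sketch, with invented addresses and an invented allowed prefix in place of the values a Batfish interface-properties query would return:

```python
import ipaddress

# Invented interface IPs and allowed prefix; in the real test the IPs come
# from Batfish's interfaceProperties question, not a hardcoded list.
interface_ips = ["10.127.0.1", "10.127.0.2", "10.127.0.5"]
ALLOWED_PREFIXES = [ipaddress.ip_network("10.127.0.0/16")]

def ips_are_valid(ips, allowed):
    """True iff all IPs are unique and fall inside an approved prefix."""
    addrs = [ipaddress.ip_address(ip) for ip in ips]
    unique = len(set(addrs)) == len(addrs)            # no duplicates
    in_range = all(any(a in net for net in allowed)   # every IP in a prefix
                   for a in addrs)
    return unique and in_range
```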
+Pybatfish implementations of all these tests are [here](https://github.com/intentionet/test-pyramid/blob/main/test_suite/test_config_content.py). The eBGP multipath test is:
+
+![](/assets/images/Screen-Shot-2021-06-06-at-8.02.00-PM.png){:width="800px"}
+
+Line 96 uses Batfish’s [bgpProcessConfiguration](https://pybatfish.readthedocs.io/en/latest/notebooks/configProperties.html#BGP-Process-Configuration) question to extract information about all BGP processes on leaf nodes. This information is returned as a Pandas DataFrame—Pandas is a popular data analysis framework and a DataFrame is a tabular representation of the data—in which rows correspond to BGP processes and columns to different settings of the process. Line 97 extracts BGP processes for which **Multipath\_EBGP** is False. Finally, Line 98 checks that no such processes were found.
+
+We see that tests take only a few lines of Python, and nowhere did we need to account for vendor-specific syntax or defaults. This simplicity stems from the structured, vendor-neutral data model that Batfish builds from device configs.
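The extract-filter-assert pattern can be sketched without a Batfish server by building the answer DataFrame by hand. The rows below (and the deliberate leaf03 violation) are invented; in the real test the frame comes from the **bgpProcessConfiguration** question:

```python
import pandas as pd

# Invented stand-in for the bgpProcessConfiguration answer frame.
frame = pd.DataFrame([
    {"Node": "leaf01", "Multipath_EBGP": True},
    {"Node": "leaf02", "Multipath_EBGP": True},
    {"Node": "leaf03", "Multipath_EBGP": False},  # deliberate violation
])

def multipath_violations(df):
    """Rows whose BGP process does not have eBGP multipath enabled."""
    return df[df["Multipath_EBGP"] == False]  # noqa: E712 (pandas filter idiom)

# The pytest assertion would be:
#     assert multipath_violations(frame).empty, multipath_violations(frame)
```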
+
+#### Device behavior
+
+Having tested important configuration content, you can now test that various elements of the configuration combine to produce intended device-level behaviors. You may write the following tests.
+
+1. The access control lists (ACLs) at the border router must block incoming traffic from private address spaces and from the set of malicious sources that we have identified.
+2. The route map at the border router must filter routing announcements for private address spaces.
+3. The route map at the border leafs must aggregate SVI prefixes before propagating them upwards in the data center.
+
+Pybatfish implementations of these tests are [here](https://github.com/intentionet/test-pyramid/blob/main/test_suite/test_device_behavior.py). The ACL test is:
+
+![](/assets/images/Screen-Shot-2021-06-06-at-9.17.26-PM.png){:width="800px"}
+
+Line 13 uses the [searchFilters](https://pybatfish.readthedocs.io/en/latest/notebooks/filters.html#Search-Filters) question of Batfish to find any traffic from the blocked IP space that is permitted by the ACL, and then Line 16 asserts that no such traffic is found. In contrast to the alternative of grep’ing for blocked prefixes in the device configuration, this test actually validates behavior. Grep-based validation is fragile because a line that permits some or all of the blocked traffic may also appear in the configuration, leading you to falsely conclude that the traffic is blocked.
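A toy first-match ACL evaluator (invented rules, stdlib only) makes the fragility of grep concrete: the deny line is present in the text, so a grep check would pass, yet an earlier permit line shadows part of the blocked space.

```python
import ipaddress

# Toy ACL, evaluated first-match like real device ACLs. The deny for
# 10.0.0.0/8 is present (grep would find it), but the earlier permit
# shadows part of that space. All rules here are invented.
acl = [
    ("permit", "10.1.0.0/16"),
    ("deny",   "10.0.0.0/8"),
    ("permit", "0.0.0.0/0"),
]

def acl_action(acl_rules, src_ip):
    """Return the action of the first line matching the source IP."""
    addr = ipaddress.ip_address(src_ip)
    for action, prefix in acl_rules:
        if addr in ipaddress.ip_network(prefix):
            return action  # first matching line wins
    return "deny"          # implicit deny at the end
```

Behaviorally, traffic from 10.1.2.3 is permitted even though "deny 10.0.0.0/8" appears in the config, which is exactly the class of error a searchFilters-style behavioral test catches and a grep does not.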
+
+#### Device adjacencies
+
+Now that we have validated the behavior of individual devices, we can validate different types of adjacencies between devices using these tests:
+
+1. All pairs of interfaces that we expect to be connected (based on LLDP data or a source of truth like Netbox) must be in the same subnet.
+2. All (and only) expected BGP adjacencies must be present.
+
+Pybatfish implementations of both tests are [here](https://github.com/intentionet/test-pyramid/blob/main/test_suite/test_device_adjacencies.py). The second test is:
+
+![](/assets/images/Screen-Shot-2021-06-06-at-9.55.52-PM.png){:width="800px"}
+
+This code uses the [bgpEdges](https://pybatfish.readthedocs.io/en/latest/notebooks/routingProtocols.html#BGP-Edges) question of Batfish to extract all established BGP edges in the network. BGP adjacencies that do not get established due to, say, incompatible peer configuration settings, will not be returned. The question returns one edge per DataFrame row, which Line 19 transforms into a set of all node pairs. Line 21 asserts that this set is identical to what we expect based on the source of truth.
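The core of the test is the transformation from directed edge rows to an unordered set of node pairs. A minimal sketch with invented rows standing in for the bgpEdges answer:

```python
# Invented rows standing in for the bgpEdges answer frame; each row
# names the two endpoints of an established BGP session. Batfish
# reports each session in both directions.
edges = [
    {"Node": "leaf01",  "Remote_Node": "spine01"},
    {"Node": "spine01", "Remote_Node": "leaf01"},
    {"Node": "leaf02",  "Remote_Node": "spine01"},
    {"Node": "spine01", "Remote_Node": "leaf02"},
]

def established_pairs(rows):
    """Collapse directed edges into an unordered set of node pairs."""
    return {frozenset((r["Node"], r["Remote_Node"])) for r in rows}

# Expected pairs would come from the source of truth (e.g., Netbox).
expected = {frozenset(p) for p in [("leaf01", "spine01"), ("leaf02", "spine01")]}
# The pytest assertion: assert established_pairs(edges) == expected
```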
+
+#### End-to-end
+
+With the lower-level tests in place, you are ready to test important end-to-end aspects of the network's control and data planes. You may write the following tests:
+
+1. The default route must be present on all routers. This route comes into the data center from outside. We want to test that it is propagated everywhere. Traffic to the Internet will be dropped if the default route is accidentally filtered somewhere.
+2. Each leaf router must have the route to all SVI prefixes. Because end hosts are in these prefixes, this test checks that routing within the data center is working well and all pairs of hosts have a path to each other.
+3. All public services hosted in the data center must be reachable from the Internet. No valid connection request should be dropped by the network.
+4. No private service may be reachable from the Internet. This security property must hold no matter how an adversary crafts their packet.
+5. Key external services, such as Google DNS and AWS, must be accessible from all leaf routers.
+
+Pybatfish implementations of all these tests are [here](https://github.com/intentionet/test-pyramid/blob/main/test_suite/test_e2e.py). The public services test is:
+
+![](/assets/images/Screen-Shot-2021-06-06-at-11.06.46-PM.png){:width="800px"}
+
+For each public service, it uses the [reachability](https://pybatfish.readthedocs.io/en/latest/notebooks/forwarding.html#Reachability) question of Batfish to find valid flows that will fail. Valid flows are defined as those that start at the Internet (Line 11), have a source IP that is not among blocked prefixes (Line 13), have a source port among ports that are not blocked anywhere in the network (Line 14), and have a destination IP and port corresponding to the service (Lines 15, 16). It then asserts that no valid flow fails.
+
+This Batfish test is exhaustive. It considers billions of possibilities in the space of valid flows, and it will find and report any flow that cannot go all the way from the Internet to the service. Such strong guarantees are not possible if you test reachability of public services via traceroute.
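Back-of-the-envelope arithmetic shows why exhaustiveness matters. With invented numbers (say one blocked /8 of source space and 1,000 blocked source ports), the valid-flow space per service is far too large to enumerate or sample:

```python
# Rough size of the valid-flow space for one service, with invented numbers.
total_src_ips = 2**32            # all IPv4 source addresses
blocked_src_ips = 2**24          # e.g., one blocked /8
usable_src_ports = 65536 - 1000  # e.g., 1,000 source ports blocked somewhere

valid_flows = (total_src_ips - blocked_src_ips) * usable_src_ports
# Hundreds of trillions of candidate flows per service: hopeless for
# traceroute-style sampling, tractable for symbolic analysis.
```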
+
+### Wrap up
+
+That wraps up our example test suite. Hopefully, you can see how the pyramid helps you think comprehensively about network tests and how Batfish helps you implement those tests. Once you have defined a test suite for your network, you can run it for every planned change. Imagine how rapidly and confidently you’d then be able to change the network. That is the power of a good automated test suite!
+
+* * *
+
+Check out these related resources:
+
+- Read “[A practical approach to building a network CI/CD pipeline](/2020/08/05/a-practical-approach-to-building-a-network-ci-cd-pipeline.html)” to learn how to embed your test suite in an automation pipeline.
diff --git a/_posts/2021-11-30-incrementally-automating-your-network.md b/_posts/2021-11-30-incrementally-automating-your-network.md
new file mode 100644
index 00000000..9cc7acaf
--- /dev/null
+++ b/_posts/2021-11-30-incrementally-automating-your-network.md
@@ -0,0 +1,53 @@
+---
+title: "Incrementally automating your network"
+date: "2021-11-30"
+author: Ratul Mahajan
+tags:
+ - "batfish"
+ - "change-automation"
+ - "change-testing"
+ - "change-validation"
+ - "network-automation"
+ - "network-validation"
+ - "predeployment-validation"
+---
+
+Network automation can significantly benefit your organization. [Gartner found that](https://www.networkcomputing.com/networking/five-things-you-need-do-avoid-network-automation-failures) automating 70% of the network changes reduces outages by 50% and speeds service delivery by 50%. But achieving these results is elusive for most organizations—they never get to the point where a substantial fraction of changes are successfully automated. A key hurdle is creating a reliable SoT (Source of Truth), a herculean task, especially for brownfield networks. This article outlines an incremental approach to network change automation that is not gated on a fully-fleshed-out SoT.
+
+Let us begin by discussing the traditional “SoT-first” approach to network automation and its limitations. This approach proceeds as follows:
+
+1. Build an SoT using something like Nautobot or NetBox.
+2. Build jinja2 templates that can generate device configurations using the SoT.
+3. Iterate on Steps 1 and 2 until generated configs are “close enough” to running configs.
+4. Develop scripts to modify the SoT and templates to support common network changes.
+5. Develop scripts to deploy configuration and run pre- and post-checks.
+
+The promise here is that, once you have done these steps, you’d be able to automatically update the network for common change requests.
+
+This SoT-first approach is incredibly difficult to execute, however. Any long-running network will have variations and snowflakes, and it can be difficult to capture the true values for various settings and develop the right configuration templates. If you have three vintages of leaf configurations with different settings in each, how do you determine the correct SoT settings? Which attributes are device level vs group level? Which templates apply to which devices? There is also the challenge that not all configuration elements are easy to represent in the SoT. Access control lists, firewall rules, and route maps are particularly challenging.
+
+![](/assets/images/Screen-Shot-2021-11-18-at-8.20.44-PM.png){:width="600px"}
+
+**The difficulty of building the right SoT and templates means that it can take a long while to see the RoI (return on investment). Your organization may lose focus and give up along the way.**
+
+Is there a better approach to network automation—one that provides immediate returns and incremental value with incremental effort? Fortunately, the answer is yes. It is a change-focused approach that directly automates network changes, without requiring a detailed SoT that can generate full configs.
+
+Suppose you want to automate firewall changes that open access to new services. The inputs to the change are IP addresses, protocols, and ports of the service. As part of executing this change, you need to validate that the requested access is not already there (pre-check), generate the rulebase update to permit access, and validate that the update has the intended impact (post-check). If you can generate and validate this change directly, you do not need to be able to generate the firewall configuration from scratch.
+
+Thus, the change-focused approach to network automation works as follows:
+
+1. Select the type of change you want to automate—favor common or risky ones.
+2. Develop scripts to generate the configuration change.
+3. Develop scripts to deploy configuration and run pre- and post-checks.
+
+**This approach will provide RoI as soon as you have automated the first change. After automating the top ten changes, you would likely experience almost all of the benefits of network change automation.**
+
+![](/assets/images/Screen-Shot-2021-11-18-at-8.21.09-PM.png){:width="600px"}
+
+To successfully execute the change-focused approach, your change MOPs (methods of procedure) are a good place to start. As they are based on your network structure and business logic, they already have the information you need, including the configuration commands and pre- and post-checks. The right tools can help you effectively automate what the MOP is doing. For change generation, you may use Jinja2 templates rendered with request parameters or use Ansible resource modules.
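As a concrete (if simplified) illustration, the change-generation step is just template rendering over the request parameters. The sketch below uses the stdlib `string.Template` as a stand-in for Jinja2, with an invented, vendor-neutral firewall-rule snippet:

```python
from string import Template

# Invented rule template; a real MOP would use a Jinja2 template with
# vendor-appropriate syntax for the target firewall.
RULE_TEMPLATE = Template("permit $protocol from any to $dest_ip port $port")

def generate_rule(protocol, dest_ip, port):
    """Render the config line for an 'open access to new service' change."""
    return RULE_TEMPLATE.substitute(protocol=protocol, dest_ip=dest_ip, port=port)
```

The pre- and post-checks from the MOP then bracket this generated snippet, rather than requiring the ability to generate the whole firewall configuration from scratch.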
+
+For change validation, you need assurance about end-to-end network behavior. When network configs are not generated from scratch, you cannot just assume that devices are configured in a certain way; snowflakes and gaps in your assumptions can cause the change to not have its intended impact, or worse, cause collateral damage. Ideally, you would validate the impact of the change before deploying the change to production. Simulation tools like Batfish or emulation tools like GNS3 enable such validation. See [our blog that discusses differences between these methods](/2021/03/31/network-test-automation-rock-paper-scissors-lizard-or-fish.html).
+
+For examples of change-focused automation using Batfish Enterprise, see these blogs: [Firewall rule changes](/2021/05/18/automating-the-long-pole-of-network-changes.html), [Provisioning a new subnet](/2021/04/30/test-drive-network-change-mops-without-a-lab.html).
+
+Change-focused automation is not incompatible with SoTs and can be augmented with whatever SoT you have. Even if you have partial SoT, with data that is not detailed enough to generate full device configs, you can use it to generate some types of changes and to generate automatic tests (e.g., no firewall change should open access to sensitive services defined in the SoT). However, unlike the SoT-first approach, change-focused automation is not gated on the availability of a fully-fleshed-out SoT. It can leverage even a minimal SoT and allows you to incrementally improve the SoT. It is this incremental nature that makes change-focused automation an attractive, practical approach to network automation.
diff --git a/_posts/2022-01-03-stopping-network-outages-before-they-start.md b/_posts/2022-01-03-stopping-network-outages-before-they-start.md
new file mode 100644
index 00000000..eddd8cfc
--- /dev/null
+++ b/_posts/2022-01-03-stopping-network-outages-before-they-start.md
@@ -0,0 +1,33 @@
+---
+title: "Stopping Network Outages Before They Start"
+author: Todd Millstein
+date: "2022-01-03"
+tags:
+ - "batfish"
+ - "change-automation"
+ - "change-testing"
+ - "change-validation"
+ - "network-automation"
+ - "network-validation"
+ - "predeployment-validation"
+---
+
+How do you detect buggy network configuration changes? My guess is that you use post-deployment checks and monitoring systems. And you should! But if that’s the only thing you’re doing, then you are unnecessarily risking network outages, breaches, and more. Those tools help you cure incidents after they occur, but they do nothing to prevent buggy changes from being deployed in the first place.
+
+**Of course, we all know that an ounce of prevention is worth a pound of cure, and we live by that motto in many walks of life.** The medical domain is an obvious example, where preventative measures -- physical exams, vaccinations, blood tests -- are the norm and save lives every day. The software industry likewise puts enormous effort into preventing errors: each code change must pass a suite of behavioral tests and static checks before it is deployed to production. These tools prevent critical software bugs every day.
+
+**Like software, network automation pipelines can and should use behavioral tests and static checks to prevent configuration errors from reaching the production network.** There are a variety of tools available to support these tasks. Network emulation (e.g., [GNS3](https://www.gns3.com/) and [containerlab](https://containerlab.srlinux.dev/)) and simulation (e.g., [Batfish](https://www.batfish.org)) enable behavioral testing. Configuration checkers (e.g., [Batfish](https://www.batfish.org) again) perform static checks.
+
+Adding pre-deployment prevention into your workflow has huge benefits over relying solely on post-deployment cures:
+
+- **Incident Prevention:** Well, this first one is obvious but is also the most important -- you can now prevent incidents from ever occurring, and hence completely avoid the damage that they would otherwise cause. Even with the best post-deployment checks, some incidents will cause significant damage. This damage may be due to a delay in detecting the problem, and even if the problem is detected quickly, it may still take time to mitigate successfully. A prominent recent example is [Facebook’s global outage](https://engineering.fb.com/2021/10/04/networking-traffic/outage) in October 2021 due to a buggy configuration change. While the problem was detected soon after the change was deployed to the network, it took six hours to bring the network back to a fully functional state.
+- **Root-cause Diagnosis:** Root-cause analysis is much easier to do for configuration bugs identified through pre-deployment checks. As an example, consider the lowly “fat finger” error, where a name is misspelled. This error can manifest in myriad ways in the live network, depending on the kind of name that is misspelled, the context in which it occurs, and the particular vendor. Batfish recently found a misspelled route-map name in a customer network that would have caused a forwarding loop. If the loop had been detected in the live network, diagnosing its root cause would have been very difficult. In contrast, a pre-deployment check that looks for undefined route-map names in the configuration can directly identify the bug independent of how it will affect the network’s behavior.
+- **Incident Remediation:** Once a problem is discovered and diagnosed, it has to be fixed. Again, the pre-deployment setting dramatically simplifies this task. When an issue arises in the live network, it must be mitigated as soon as possible to minimize (further) damage. Commonly this is done by rolling back to an old state. However, rollbacks can be expensive and they also must be done with great care for fear of introducing a new problem into the network. In the pre-deployment setting, the buggy configuration change hasn’t been deployed yet, so there is nothing to roll back. Further, once an appropriate update to the configuration change has been made, that version is itself validated with pre-deployment checks, ensuring that it truly fixes the problem and introduces no new problems.
+
+In summary, pre-deployment validation is a must-have tool for any network automation pipeline. Most network incidents are caused by buggy configuration changes, and catching them before deployment is orders-of-magnitude cheaper than detecting and remediating them in the live network. Add a critical layer of prevention to your network – reach out and we’ll help you get started today!
+
+Related resources:
+
+- [Test drive network change MOPs without a lab](/2021/04/30/test-drive-network-change-mops-without-a-lab.html): Learn how to test the impact of changes using Batfish Enterprise
+- [The networking test pyramid](/2021/06/16/the-networking-test-pyramid.html): Learn how to develop a comprehensive test suite for network configurations
+- [Network test automation: Rock, Paper, Scissors, Lizard, or Fish?](/2021/03/31/network-test-automation-rock-paper-scissors-lizard-or-fish.html) Learn the differences between simulation and emulation for behavior testing
diff --git a/_sass/bootstrap/_code.scss b/_sass/bootstrap/_code.scss
index 9de20fa0..896dfd87 100755
--- a/_sass/bootstrap/_code.scss
+++ b/_sass/bootstrap/_code.scss
@@ -13,7 +13,7 @@ code {
word-break: break-word;
// Streamline the style when inside anchors to avoid broken underline and more
- a > & {
+ a>& {
color: inherit;
}
}
@@ -40,6 +40,7 @@ pre {
display: block;
font-size: $code-font-size;
color: $pre-color;
+ background-color: #f5f5f5;
// Account for some code outputs that place code tags in pre tags
code {
diff --git a/_sass/bootstrap/_images.scss b/_sass/bootstrap/_images.scss
index 2bce02f6..724beee0 100755
--- a/_sass/bootstrap/_images.scss
+++ b/_sass/bootstrap/_images.scss
@@ -40,3 +40,8 @@
font-size: $figure-caption-font-size;
color: $figure-caption-color;
}
+
+.img-background {
+ background-color: #ffffff;
+ display: inline-block;
+}
diff --git a/_sass/bootstrap/_variables.scss b/_sass/bootstrap/_variables.scss
index 1e174294..29466e6e 100755
--- a/_sass/bootstrap/_variables.scss
+++ b/_sass/bootstrap/_variables.scss
@@ -9,7 +9,7 @@
//
// stylelint-disable
-$white: #fff !default;
+$white: #fff !default;
$gray-100: #f8f9fa !default;
$gray-200: #e9ecef !default;
$gray-300: #dee2e6 !default;
@@ -19,93 +19,93 @@ $gray-600: #6c757d !default;
$gray-700: #495057 !default;
$gray-800: #343a40 !default;
$gray-900: #212529 !default;
-$black: #000 !default;
-
-$grays: () !default;
-$grays: map-merge((
- "100": $gray-100,
- "200": $gray-200,
- "300": $gray-300,
- "400": $gray-400,
- "500": $gray-500,
- "600": $gray-600,
- "700": $gray-700,
- "800": $gray-800,
- "900": $gray-900
-), $grays);
-
-$blue: #007bff !default;
-$indigo: #6610f2 !default;
-$purple: #6f42c1 !default;
-$pink: #e83e8c !default;
-$red: #dc3545 !default;
-$orange: #fd7e14 !default;
-$yellow: #ffc107 !default;
-$green: #28a745 !default;
-$teal: #20c997 !default;
-$cyan: #17a2b8 !default;
-
-$colors: () !default;
-$colors: map-merge((
- "blue": $blue,
- "indigo": $indigo,
- "purple": $purple,
- "pink": $pink,
- "red": $red,
- "orange": $orange,
- "yellow": $yellow,
- "green": $green,
- "teal": $teal,
- "cyan": $cyan,
- "white": $white,
- "gray": $gray-600,
- "gray-dark": $gray-800
-), $colors);
-
-$primary: $blue !default;
-$secondary: $gray-600 !default;
-$success: $green !default;
-$info: $cyan !default;
-$warning: $yellow !default;
-$danger: $red !default;
-$light: $gray-100 !default;
-$dark: $gray-800 !default;
-
-$theme-colors: () !default;
-$theme-colors: map-merge((
- "primary": $primary,
- "secondary": $secondary,
- "success": $success,
- "info": $info,
- "warning": $warning,
- "danger": $danger,
- "light": $light,
- "dark": $dark
-), $theme-colors);
+$black: #000 !default;
+
+$grays: (
+ ) !default;
+$grays: map-merge(("100": $gray-100,
+ "200": $gray-200,
+ "300": $gray-300,
+ "400": $gray-400,
+ "500": $gray-500,
+ "600": $gray-600,
+ "700": $gray-700,
+ "800": $gray-800,
+ "900": $gray-900), $grays
+);
+
+$blue: #007bff !default;
+$indigo: #6610f2 !default;
+$purple: #6f42c1 !default;
+$pink: #e83e8c !default;
+$red: #dc3545 !default;
+$orange: #fd7e14 !default;
+$yellow: #ffc107 !default;
+$green: #28a745 !default;
+$teal: #20c997 !default;
+$cyan: #17a2b8 !default;
+
+$colors: (
+ ) !default;
+$colors: map-merge(("blue": $blue,
+ "indigo": $indigo,
+ "purple": $purple,
+ "pink": $pink,
+ "red": $red,
+ "orange": $orange,
+ "yellow": $yellow,
+ "green": $green,
+ "teal": $teal,
+ "cyan": $cyan,
+ "white": $white,
+ "gray": $gray-600,
+ "gray-dark": $gray-800), $colors
+);
+
+$primary: $blue !default;
+$secondary: $gray-600 !default;
+$success: $green !default;
+$info: $cyan !default;
+$warning: $yellow !default;
+$danger: $red !default;
+$light: $gray-100 !default;
+$dark: $gray-800 !default;
+
+$theme-colors: (
+ ) !default;
+$theme-colors: map-merge(("primary": $primary,
+ "secondary": $secondary,
+ "success": $success,
+ "info": $info,
+ "warning": $warning,
+ "danger": $danger,
+ "light": $light,
+ "dark": $dark), $theme-colors
+);
// stylelint-enable
// Set a specific jump point for requesting color jumps
-$theme-color-interval: 8% !default;
+$theme-color-interval: 8% !default;
// The yiq lightness value that determines when the lightness of color changes from "dark" to "light". Acceptable values are between 0 and 255.
-$yiq-contrasted-threshold: 150 !default;
+$yiq-contrasted-threshold: 150 !default;
// Customize the light and dark text colors for use in our YIQ color contrast function.
-$yiq-text-dark: $gray-900 !default;
-$yiq-text-light: $white !default;
+$yiq-text-dark: $gray-900 !default;
+$yiq-text-light: $white !default;
// Options
//
// Quickly modify global styling by enabling or disabling optional features.
-$enable-caret: true !default;
-$enable-rounded: true !default;
-$enable-shadows: false !default;
-$enable-gradients: false !default;
-$enable-transitions: true !default;
-$enable-hover-media-query: false !default; // Deprecated, no longer affects any compiled CSS
-$enable-grid-classes: true !default;
-$enable-print-styles: true !default;
+$enable-caret: true !default;
+$enable-rounded: true !default;
+$enable-shadows: false !default;
+$enable-gradients: false !default;
+$enable-transitions: true !default;
+$enable-hover-media-query: false !default; // Deprecated, no longer affects any compiled CSS
+$enable-grid-classes: true !default;
+$enable-print-styles: true !default;
// Spacing
@@ -116,48 +116,48 @@ $enable-print-styles: true !default;
// stylelint-disable
$spacer: 1rem !default;
-$spacers: () !default;
-$spacers: map-merge((
- 0: 0,
- 1: ($spacer * .25),
- 2: ($spacer * .5),
- 3: $spacer,
- 4: ($spacer * 1.5),
- 5: ($spacer * 3)
-), $spacers);
+$spacers: (
+ ) !default;
+$spacers: map-merge((0: 0,
+ 1: ($spacer * .25),
+ 2: ($spacer * .5),
+ 3: $spacer,
+ 4: ($spacer * 1.5),
+ 5: ($spacer * 3)), $spacers
+);
// This variable affects the `.h-*` and `.w-*` classes.
-$sizes: () !default;
-$sizes: map-merge((
- 25: 25%,
- 50: 50%,
- 75: 75%,
- 100: 100%,
- auto: auto
-), $sizes);
+$sizes: (
+ ) !default;
+$sizes: map-merge((25: 25%,
+ 50: 50%,
+ 75: 75%,
+ 100: 100%,
+ auto: auto), $sizes
+);
// stylelint-enable
// Body
//
// Settings for the `` element.
-$body-bg: $white !default;
-$body-color: $gray-900 !default;
+$body-bg: $white !default;
+$body-color: $gray-900 !default;
// Links
//
// Style anchor elements.
-$link-color: theme-color("primary") !default;
-$link-decoration: none !default;
-$link-hover-color: darken($link-color, 15%) !default;
-$link-hover-decoration: underline !default;
+$link-color: theme-color("primary") !default;
+$link-decoration: none !default;
+$link-hover-color: darken($link-color, 15%) !default;
+$link-hover-decoration: underline !default;
// Paragraphs
//
// Style p element.
-$paragraph-margin-bottom: 1rem !default;
+$paragraph-margin-bottom: 1rem !default;
// Grid breakpoints
@@ -170,10 +170,10 @@ $grid-breakpoints: (
sm: 576px,
md: 768px,
lg: 992px,
- xl: 1200px
-) !default;
+ xl: 1200px) !default;
-@include _assert-ascending($grid-breakpoints, "$grid-breakpoints");
+@include _assert-ascending($grid-breakpoints, "$grid-breakpoints"
+);
@include _assert-starts-at-zero($grid-breakpoints);
@@ -185,45 +185,45 @@ $container-max-widths: (
sm: 540px,
md: 720px,
lg: 960px,
- xl: 1140px
-) !default;
+ xl: 1140px) !default;
-@include _assert-ascending($container-max-widths, "$container-max-widths");
+@include _assert-ascending($container-max-widths, "$container-max-widths"
+);
// Grid columns
//
// Set the number of columns and specify the width of the gutters.
-$grid-columns: 12 !default;
-$grid-gutter-width: 30px !default;
+$grid-columns: 12 !default;
+$grid-gutter-width: 30px !default;
// Components
//
// Define common padding and border radius sizes and more.
-$line-height-lg: 1.5 !default;
-$line-height-sm: 1.5 !default;
+$line-height-lg: 1.5 !default;
+$line-height-sm: 1.5 !default;
-$border-width: 1px !default;
-$border-color: $gray-300 !default;
+$border-width: 1px !default;
+$border-color: $gray-300 !default;
-$border-radius: .25rem !default;
-$border-radius-lg: .3rem !default;
-$border-radius-sm: .2rem !default;
+$border-radius: .25rem !default;
+$border-radius-lg: .3rem !default;
+$border-radius-sm: .2rem !default;
-$box-shadow-sm: 0 .125rem .25rem rgba($black, .075) !default;
-$box-shadow: 0 .5rem 1rem rgba($black, .15) !default;
-$box-shadow-lg: 0 1rem 3rem rgba($black, .175) !default;
+$box-shadow-sm: 0 .125rem .25rem rgba($black, .075) !default;
+$box-shadow: 0 .5rem 1rem rgba($black, .15) !default;
+$box-shadow-lg: 0 1rem 3rem rgba($black, .175) !default;
-$component-active-color: $white !default;
-$component-active-bg: theme-color("primary") !default;
+$component-active-color: $white !default;
+$component-active-bg: theme-color("primary") !default;
-$caret-width: .3em !default;
+$caret-width: .3em !default;
-$transition-base: all .2s ease-in-out !default;
-$transition-fade: opacity .15s linear !default;
-$transition-collapse: height .35s ease !default;
+$transition-base: all .2s ease-in-out !default;
+$transition-fade: opacity .15s linear !default;
+$transition-collapse: height .35s ease !default;
// Fonts
@@ -231,354 +231,370 @@ $transition-collapse: height .35s ease !default;
// Font, line-height, and color for body text, headings, and more.
// stylelint-disable value-keyword-case
-$font-family-sans-serif: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, "Helvetica Neue", Arial, sans-serif, "Apple Color Emoji", "Segoe UI Emoji", "Segoe UI Symbol" !default;
-$font-family-monospace: SFMono-Regular, Menlo, Monaco, Consolas, "Liberation Mono", "Courier New", monospace !default;
-$font-family-base: $font-family-sans-serif !default;
+$font-family-sans-serif: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, "Helvetica Neue", Arial, sans-serif, "Apple Color Emoji", "Segoe UI Emoji", "Segoe UI Symbol" !default;
+$font-family-monospace: SFMono-Regular, Menlo, Monaco, Consolas, "Liberation Mono", "Courier New", monospace !default;
+$font-family-base: $font-family-sans-serif !default;
// stylelint-enable value-keyword-case
-$font-size-base: 1rem !default; // Assumes the browser default, typically `16px`
-$font-size-lg: ($font-size-base * 1.25) !default;
-$font-size-sm: ($font-size-base * .875) !default;
+$font-size-base: 1rem !default; // Assumes the browser default, typically `16px`
+$font-size-lg: (
+ $font-size-base * 1.25) !default;
+$font-size-sm: (
+ $font-size-base * .875) !default;
-$font-weight-light: 300 !default;
-$font-weight-normal: 400 !default;
-$font-weight-bold: 700 !default;
+$font-weight-light: 300 !default;
+$font-weight-normal: 400 !default;
+$font-weight-bold: 700 !default;
-$font-weight-base: $font-weight-normal !default;
-$line-height-base: 1.5 !default;
+$font-weight-base: $font-weight-normal !default;
+$line-height-base: 1.5 !default;
-$h1-font-size: $font-size-base * 2.5 !default;
-$h2-font-size: $font-size-base * 2 !default;
-$h3-font-size: $font-size-base * 1.75 !default;
-$h4-font-size: $font-size-base * 1.5 !default;
-$h5-font-size: $font-size-base * 1.25 !default;
-$h6-font-size: $font-size-base !default;
+$h1-font-size: $font-size-base * 2.5 !default;
+$h2-font-size: $font-size-base * 2 !default;
+$h3-font-size: $font-size-base * 1.75 !default;
+$h4-font-size: $font-size-base * 1.5 !default;
+$h5-font-size: $font-size-base * 1.25 !default;
+$h6-font-size: $font-size-base !default;
-$headings-margin-bottom: ($spacer / 2) !default;
-$headings-font-family: inherit !default;
-$headings-font-weight: 500 !default;
-$headings-line-height: 1.2 !default;
-$headings-color: inherit !default;
+$headings-margin-bottom: (
+ $spacer / 2) !default;
+$headings-font-family: inherit !default;
+$headings-font-weight: 500 !default;
+$headings-line-height: 1.2 !default;
+$headings-color: inherit !default;
-$display1-size: 6rem !default;
-$display2-size: 5.5rem !default;
-$display3-size: 4.5rem !default;
-$display4-size: 3.5rem !default;
+$display1-size: 6rem !default;
+$display2-size: 5.5rem !default;
+$display3-size: 4.5rem !default;
+$display4-size: 3.5rem !default;
-$display1-weight: 300 !default;
-$display2-weight: 300 !default;
-$display3-weight: 300 !default;
-$display4-weight: 300 !default;
-$display-line-height: $headings-line-height !default;
+$display1-weight: 300 !default;
+$display2-weight: 300 !default;
+$display3-weight: 300 !default;
+$display4-weight: 300 !default;
+$display-line-height: $headings-line-height !default;
-$lead-font-size: ($font-size-base * 1.25) !default;
-$lead-font-weight: 300 !default;
+$lead-font-size: (
+ $font-size-base * 1.25) !default;
+$lead-font-weight: 300 !default;
-$small-font-size: 80% !default;
+$small-font-size: 80% !default;
-$text-muted: $gray-600 !default;
+$text-muted: $gray-600 !default;
-$blockquote-small-color: $gray-600 !default;
-$blockquote-font-size: ($font-size-base * 1.25) !default;
+$blockquote-small-color: $gray-600 !default;
+$blockquote-font-size: (
+ $font-size-base * 1.25) !default;
-$hr-border-color: rgba($black, .1) !default;
-$hr-border-width: $border-width !default;
+$hr-border-color: rgba($black, .1) !default;
+$hr-border-width: $border-width !default;
-$mark-padding: .2em !default;
+$mark-padding: .2em !default;
-$dt-font-weight: $font-weight-bold !default;
+$dt-font-weight: $font-weight-bold !default;
-$kbd-box-shadow: inset 0 -.1rem 0 rgba($black, .25) !default;
-$nested-kbd-font-weight: $font-weight-bold !default;
+$kbd-box-shadow: inset 0 -.1rem 0 rgba($black, .25) !default;
+$nested-kbd-font-weight: $font-weight-bold !default;
-$list-inline-padding: .5rem !default;
+$list-inline-padding: .5rem !default;
-$mark-bg: #fcf8e3 !default;
+$mark-bg: #fcf8e3 !default;
-$hr-margin-y: $spacer !default;
+$hr-margin-y: $spacer !default;
// Tables
//
// Customizes the `.table` component with basic values, each used across all table variations.
-$table-cell-padding: .75rem !default;
-$table-cell-padding-sm: .3rem !default;
+$table-cell-padding: .75rem !default;
+$table-cell-padding-sm: .3rem !default;
-$table-bg: transparent !default;
-$table-accent-bg: rgba($black, .05) !default;
-$table-hover-bg: rgba($black, .075) !default;
-$table-active-bg: $table-hover-bg !default;
+$table-bg: transparent !default;
+$table-accent-bg: rgba($black, .05) !default;
+$table-hover-bg: rgba($black, .075) !default;
+$table-active-bg: $table-hover-bg !default;
-$table-border-width: $border-width !default;
-$table-border-color: $gray-300 !default;
+$table-border-width: $border-width !default;
+$table-border-color: $gray-300 !default;
-$table-head-bg: $gray-200 !default;
-$table-head-color: $gray-700 !default;
+$table-head-bg: $gray-200 !default;
+$table-head-color: $gray-700 !default;
-$table-dark-bg: $gray-900 !default;
-$table-dark-accent-bg: rgba($white, .05) !default;
-$table-dark-hover-bg: rgba($white, .075) !default;
-$table-dark-border-color: lighten($gray-900, 7.5%) !default;
-$table-dark-color: $body-bg !default;
+$table-dark-bg: $gray-900 !default;
+$table-dark-accent-bg: rgba($white, .05) !default;
+$table-dark-hover-bg: rgba($white, .075) !default;
+$table-dark-border-color: lighten($gray-900, 7.5%) !default;
+$table-dark-color: $body-bg !default;
-$table-striped-order: odd !default;
+$table-striped-order: odd !default;
-$table-caption-color: $text-muted !default;
+$table-caption-color: $text-muted !default;
// Buttons + Forms
//
// Shared variables that are reassigned to `$input-` and `$btn-` specific variables.
-$input-btn-padding-y: .375rem !default;
-$input-btn-padding-x: .75rem !default;
-$input-btn-line-height: $line-height-base !default;
+$input-btn-padding-y: .375rem !default;
+$input-btn-padding-x: .75rem !default;
+$input-btn-line-height: $line-height-base !default;
-$input-btn-focus-width: .2rem !default;
-$input-btn-focus-color: rgba($component-active-bg, .25) !default;
-$input-btn-focus-box-shadow: 0 0 0 $input-btn-focus-width $input-btn-focus-color !default;
+$input-btn-focus-width: .2rem !default;
+$input-btn-focus-color: rgba($component-active-bg, .25) !default;
+$input-btn-focus-box-shadow: 0 0 0 $input-btn-focus-width $input-btn-focus-color !default;
-$input-btn-padding-y-sm: .25rem !default;
-$input-btn-padding-x-sm: .5rem !default;
-$input-btn-line-height-sm: $line-height-sm !default;
+$input-btn-padding-y-sm: .25rem !default;
+$input-btn-padding-x-sm: .5rem !default;
+$input-btn-line-height-sm: $line-height-sm !default;
-$input-btn-padding-y-lg: .5rem !default;
-$input-btn-padding-x-lg: 1rem !default;
-$input-btn-line-height-lg: $line-height-lg !default;
+$input-btn-padding-y-lg: .5rem !default;
+$input-btn-padding-x-lg: 1rem !default;
+$input-btn-line-height-lg: $line-height-lg !default;
-$input-btn-border-width: $border-width !default;
+$input-btn-border-width: $border-width !default;
// Buttons
//
// For each of Bootstrap's buttons, define text, background, and border color.
-$btn-padding-y: $input-btn-padding-y !default;
-$btn-padding-x: $input-btn-padding-x !default;
-$btn-line-height: $input-btn-line-height !default;
+$btn-padding-y: $input-btn-padding-y !default;
+$btn-padding-x: $input-btn-padding-x !default;
+$btn-line-height: $input-btn-line-height !default;
-$btn-padding-y-sm: $input-btn-padding-y-sm !default;
-$btn-padding-x-sm: $input-btn-padding-x-sm !default;
-$btn-line-height-sm: $input-btn-line-height-sm !default;
+$btn-padding-y-sm: $input-btn-padding-y-sm !default;
+$btn-padding-x-sm: $input-btn-padding-x-sm !default;
+$btn-line-height-sm: $input-btn-line-height-sm !default;
-$btn-padding-y-lg: $input-btn-padding-y-lg !default;
-$btn-padding-x-lg: $input-btn-padding-x-lg !default;
-$btn-line-height-lg: $input-btn-line-height-lg !default;
+$btn-padding-y-lg: $input-btn-padding-y-lg !default;
+$btn-padding-x-lg: $input-btn-padding-x-lg !default;
+$btn-line-height-lg: $input-btn-line-height-lg !default;
-$btn-border-width: $input-btn-border-width !default;
+$btn-border-width: $input-btn-border-width !default;
-$btn-font-weight: $font-weight-normal !default;
-$btn-box-shadow: inset 0 1px 0 rgba($white, .15), 0 1px 1px rgba($black, .075) !default;
-$btn-focus-width: $input-btn-focus-width !default;
-$btn-focus-box-shadow: $input-btn-focus-box-shadow !default;
-$btn-disabled-opacity: .65 !default;
-$btn-active-box-shadow: inset 0 3px 5px rgba($black, .125) !default;
+$btn-font-weight: $font-weight-normal !default;
+$btn-box-shadow: inset 0 1px 0 rgba($white, .15),
+ 0 1px 1px rgba($black, .075) !default;
+$btn-focus-width: $input-btn-focus-width !default;
+$btn-focus-box-shadow: $input-btn-focus-box-shadow !default;
+$btn-disabled-opacity: .65 !default;
+$btn-active-box-shadow: inset 0 3px 5px rgba($black, .125) !default;
-$btn-link-disabled-color: $gray-600 !default;
+$btn-link-disabled-color: $gray-600 !default;
-$btn-block-spacing-y: .5rem !default;
+$btn-block-spacing-y: .5rem !default;
// Allows for customizing button radius independently from global border radius
-$btn-border-radius: $border-radius !default;
-$btn-border-radius-lg: $border-radius-lg !default;
-$btn-border-radius-sm: $border-radius-sm !default;
+$btn-border-radius: $border-radius !default;
+$btn-border-radius-lg: $border-radius-lg !default;
+$btn-border-radius-sm: $border-radius-sm !default;
-$btn-transition: color .15s ease-in-out, background-color .15s ease-in-out, border-color .15s ease-in-out, box-shadow .15s ease-in-out !default;
+$btn-transition: color .15s ease-in-out,
+ background-color .15s ease-in-out,
+ border-color .15s ease-in-out,
+ box-shadow .15s ease-in-out !default;
// Forms
-$label-margin-bottom: .5rem !default;
+$label-margin-bottom: .5rem !default;
-$input-padding-y: $input-btn-padding-y !default;
-$input-padding-x: $input-btn-padding-x !default;
-$input-line-height: $input-btn-line-height !default;
+$input-padding-y: $input-btn-padding-y !default;
+$input-padding-x: $input-btn-padding-x !default;
+$input-line-height: $input-btn-line-height !default;
-$input-padding-y-sm: $input-btn-padding-y-sm !default;
-$input-padding-x-sm: $input-btn-padding-x-sm !default;
-$input-line-height-sm: $input-btn-line-height-sm !default;
+$input-padding-y-sm: $input-btn-padding-y-sm !default;
+$input-padding-x-sm: $input-btn-padding-x-sm !default;
+$input-line-height-sm: $input-btn-line-height-sm !default;
-$input-padding-y-lg: $input-btn-padding-y-lg !default;
-$input-padding-x-lg: $input-btn-padding-x-lg !default;
-$input-line-height-lg: $input-btn-line-height-lg !default;
+$input-padding-y-lg: $input-btn-padding-y-lg !default;
+$input-padding-x-lg: $input-btn-padding-x-lg !default;
+$input-line-height-lg: $input-btn-line-height-lg !default;
-$input-bg: $white !default;
-$input-disabled-bg: $gray-200 !default;
+$input-bg: $white !default;
+$input-disabled-bg: $gray-200 !default;
-$input-color: $gray-700 !default;
-$input-border-color: $gray-400 !default;
-$input-border-width: $input-btn-border-width !default;
-$input-box-shadow: inset 0 1px 1px rgba($black, .075) !default;
+$input-color: $gray-700 !default;
+$input-border-color: $gray-400 !default;
+$input-border-width: $input-btn-border-width !default;
+$input-box-shadow: inset 0 1px 1px rgba($black, .075) !default;
-$input-border-radius: $border-radius !default;
-$input-border-radius-lg: $border-radius-lg !default;
-$input-border-radius-sm: $border-radius-sm !default;
+$input-border-radius: $border-radius !default;
+$input-border-radius-lg: $border-radius-lg !default;
+$input-border-radius-sm: $border-radius-sm !default;
-$input-focus-bg: $input-bg !default;
-$input-focus-border-color: lighten($component-active-bg, 25%) !default;
-$input-focus-color: $input-color !default;
-$input-focus-width: $input-btn-focus-width !default;
-$input-focus-box-shadow: $input-btn-focus-box-shadow !default;
+$input-focus-bg: $input-bg !default;
+$input-focus-border-color: lighten($component-active-bg, 25%) !default;
+$input-focus-color: $input-color !default;
+$input-focus-width: $input-btn-focus-width !default;
+$input-focus-box-shadow: $input-btn-focus-box-shadow !default;
-$input-placeholder-color: $gray-600 !default;
-$input-plaintext-color: $body-color !default;
+$input-placeholder-color: $gray-600 !default;
+$input-plaintext-color: $body-color !default;
-$input-height-border: $input-border-width * 2 !default;
+$input-height-border: $input-border-width * 2 !default;
-$input-height-inner: ($font-size-base * $input-btn-line-height) + ($input-btn-padding-y * 2) !default;
-$input-height: calc(#{$input-height-inner} + #{$input-height-border}) !default;
-
-$input-height-inner-sm: ($font-size-sm * $input-btn-line-height-sm) + ($input-btn-padding-y-sm * 2) !default;
-$input-height-sm: calc(#{$input-height-inner-sm} + #{$input-height-border}) !default;
-
-$input-height-inner-lg: ($font-size-lg * $input-btn-line-height-lg) + ($input-btn-padding-y-lg * 2) !default;
-$input-height-lg: calc(#{$input-height-inner-lg} + #{$input-height-border}) !default;
-
-$input-transition: border-color .15s ease-in-out, box-shadow .15s ease-in-out !default;
-
-$form-text-margin-top: .25rem !default;
-
-$form-check-input-gutter: 1.25rem !default;
-$form-check-input-margin-y: .3rem !default;
-$form-check-input-margin-x: .25rem !default;
-
-$form-check-inline-margin-x: .75rem !default;
-$form-check-inline-input-margin-x: .3125rem !default;
-
-$form-group-margin-bottom: 1rem !default;
-
-$input-group-addon-color: $input-color !default;
-$input-group-addon-bg: $gray-200 !default;
-$input-group-addon-border-color: $input-border-color !default;
-
-$custom-control-gutter: 1.5rem !default;
-$custom-control-spacer-x: 1rem !default;
-
-$custom-control-indicator-size: 1rem !default;
-$custom-control-indicator-bg: $gray-300 !default;
-$custom-control-indicator-bg-size: 50% 50% !default;
-$custom-control-indicator-box-shadow: inset 0 .25rem .25rem rgba($black, .1) !default;
-
-$custom-control-indicator-disabled-bg: $gray-200 !default;
-$custom-control-label-disabled-color: $gray-600 !default;
-
-$custom-control-indicator-checked-color: $component-active-color !default;
-$custom-control-indicator-checked-bg: $component-active-bg !default;
-$custom-control-indicator-checked-disabled-bg: rgba(theme-color("primary"), .5) !default;
-$custom-control-indicator-checked-box-shadow: none !default;
-
-$custom-control-indicator-focus-box-shadow: 0 0 0 1px $body-bg, $input-btn-focus-box-shadow !default;
-
-$custom-control-indicator-active-color: $component-active-color !default;
-$custom-control-indicator-active-bg: lighten($component-active-bg, 35%) !default;
-$custom-control-indicator-active-box-shadow: none !default;
-
-$custom-checkbox-indicator-border-radius: $border-radius !default;
-$custom-checkbox-indicator-icon-checked: str-replace(url("data:image/svg+xml;charset=utf8,%3Csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 8 8'%3E%3Cpath fill='#{$custom-control-indicator-checked-color}' d='M6.564.75l-3.59 3.612-1.538-1.55L0 4.26 2.974 7.25 8 2.193z'/%3E%3C/svg%3E"), "#", "%23") !default;
-
-$custom-checkbox-indicator-indeterminate-bg: $component-active-bg !default;
-$custom-checkbox-indicator-indeterminate-color: $custom-control-indicator-checked-color !default;
-$custom-checkbox-indicator-icon-indeterminate: str-replace(url("data:image/svg+xml;charset=utf8,%3Csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 4 4'%3E%3Cpath stroke='#{$custom-checkbox-indicator-indeterminate-color}' d='M0 2h4'/%3E%3C/svg%3E"), "#", "%23") !default;
-$custom-checkbox-indicator-indeterminate-box-shadow: none !default;
-
-$custom-radio-indicator-border-radius: 50% !default;
-$custom-radio-indicator-icon-checked: str-replace(url("data:image/svg+xml;charset=utf8,%3Csvg xmlns='http://www.w3.org/2000/svg' viewBox='-4 -4 8 8'%3E%3Ccircle r='3' fill='#{$custom-control-indicator-checked-color}'/%3E%3C/svg%3E"), "#", "%23") !default;
-
-$custom-select-padding-y: .375rem !default;
-$custom-select-padding-x: .75rem !default;
-$custom-select-height: $input-height !default;
-$custom-select-indicator-padding: 1rem !default; // Extra padding to account for the presence of the background-image based indicator
-$custom-select-line-height: $input-btn-line-height !default;
-$custom-select-color: $input-color !default;
-$custom-select-disabled-color: $gray-600 !default;
-$custom-select-bg: $input-bg !default;
-$custom-select-disabled-bg: $gray-200 !default;
-$custom-select-bg-size: 8px 10px !default; // In pixels because image dimensions
-$custom-select-indicator-color: $gray-800 !default;
-$custom-select-indicator: str-replace(url("data:image/svg+xml;charset=utf8,%3Csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 4 5'%3E%3Cpath fill='#{$custom-select-indicator-color}' d='M2 0L0 2h4zm0 5L0 3h4z'/%3E%3C/svg%3E"), "#", "%23") !default;
-$custom-select-border-width: $input-btn-border-width !default;
-$custom-select-border-color: $input-border-color !default;
-$custom-select-border-radius: $border-radius !default;
-
-$custom-select-focus-border-color: $input-focus-border-color !default;
-$custom-select-focus-box-shadow: inset 0 1px 2px rgba($black, .075), 0 0 5px rgba($custom-select-focus-border-color, .5) !default;
-
-$custom-select-font-size-sm: 75% !default;
-$custom-select-height-sm: $input-height-sm !default;
-
-$custom-select-font-size-lg: 125% !default;
-$custom-select-height-lg: $input-height-lg !default;
-
-$custom-range-track-width: 100% !default;
-$custom-range-track-height: .5rem !default;
-$custom-range-track-cursor: pointer !default;
-$custom-range-track-bg: $gray-300 !default;
-$custom-range-track-border-radius: 1rem !default;
-$custom-range-track-box-shadow: inset 0 .25rem .25rem rgba($black, .1) !default;
-
-$custom-range-thumb-width: 1rem !default;
-$custom-range-thumb-height: $custom-range-thumb-width !default;
-$custom-range-thumb-bg: $component-active-bg !default;
-$custom-range-thumb-border: 0 !default;
-$custom-range-thumb-border-radius: 1rem !default;
-$custom-range-thumb-box-shadow: 0 .1rem .25rem rgba($black, .1) !default;
-$custom-range-thumb-focus-box-shadow: 0 0 0 1px $body-bg, $input-btn-focus-box-shadow !default;
-$custom-range-thumb-active-bg: lighten($component-active-bg, 35%) !default;
-
-$custom-file-height: $input-height !default;
-$custom-file-focus-border-color: $input-focus-border-color !default;
-$custom-file-focus-box-shadow: $input-btn-focus-box-shadow !default;
-
-$custom-file-padding-y: $input-btn-padding-y !default;
-$custom-file-padding-x: $input-btn-padding-x !default;
-$custom-file-line-height: $input-btn-line-height !default;
-$custom-file-color: $input-color !default;
-$custom-file-bg: $input-bg !default;
-$custom-file-border-width: $input-btn-border-width !default;
-$custom-file-border-color: $input-border-color !default;
-$custom-file-border-radius: $input-border-radius !default;
-$custom-file-box-shadow: $input-box-shadow !default;
-$custom-file-button-color: $custom-file-color !default;
-$custom-file-button-bg: $input-group-addon-bg !default;
+$input-height-inner: (
+ $font-size-base * $input-btn-line-height) + ($input-btn-padding-y * 2) !default;
+$input-height: calc(#{$input-height-inner} + #{$input-height-border}) !default;
+
+$input-height-inner-sm: (
+ $font-size-sm * $input-btn-line-height-sm) + ($input-btn-padding-y-sm * 2) !default;
+$input-height-sm: calc(#{$input-height-inner-sm} + #{$input-height-border}) !default;
+
+$input-height-inner-lg: (
+ $font-size-lg * $input-btn-line-height-lg) + ($input-btn-padding-y-lg * 2) !default;
+$input-height-lg: calc(#{$input-height-inner-lg} + #{$input-height-border}) !default;
+
+$input-transition: border-color .15s ease-in-out,
+ box-shadow .15s ease-in-out !default;
+
+$form-text-margin-top: .25rem !default;
+
+$form-check-input-gutter: 1.25rem !default;
+$form-check-input-margin-y: .3rem !default;
+$form-check-input-margin-x: .25rem !default;
+
+$form-check-inline-margin-x: .75rem !default;
+$form-check-inline-input-margin-x: .3125rem !default;
+
+$form-group-margin-bottom: 1rem !default;
+
+$input-group-addon-color: $input-color !default;
+$input-group-addon-bg: $gray-200 !default;
+$input-group-addon-border-color: $input-border-color !default;
+
+$custom-control-gutter: 1.5rem !default;
+$custom-control-spacer-x: 1rem !default;
+
+$custom-control-indicator-size: 1rem !default;
+$custom-control-indicator-bg: $gray-300 !default;
+$custom-control-indicator-bg-size: 50% 50% !default;
+$custom-control-indicator-box-shadow: inset 0 .25rem .25rem rgba($black, .1) !default;
+
+$custom-control-indicator-disabled-bg: $gray-200 !default;
+$custom-control-label-disabled-color: $gray-600 !default;
+
+$custom-control-indicator-checked-color: $component-active-color !default;
+$custom-control-indicator-checked-bg: $component-active-bg !default;
+$custom-control-indicator-checked-disabled-bg: rgba(theme-color("primary"), .5) !default;
+$custom-control-indicator-checked-box-shadow: none !default;
+
+$custom-control-indicator-focus-box-shadow: 0 0 0 1px $body-bg,
+ $input-btn-focus-box-shadow !default;
+
+$custom-control-indicator-active-color: $component-active-color !default;
+$custom-control-indicator-active-bg: lighten($component-active-bg, 35%) !default;
+$custom-control-indicator-active-box-shadow: none !default;
+
+$custom-checkbox-indicator-border-radius: $border-radius !default;
+$custom-checkbox-indicator-icon-checked: str-replace(url("data:image/svg+xml;charset=utf8,%3Csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 8 8'%3E%3Cpath fill='#{$custom-control-indicator-checked-color}' d='M6.564.75l-3.59 3.612-1.538-1.55L0 4.26 2.974 7.25 8 2.193z'/%3E%3C/svg%3E"), "#", "%23") !default;
+
+$custom-checkbox-indicator-indeterminate-bg: $component-active-bg !default;
+$custom-checkbox-indicator-indeterminate-color: $custom-control-indicator-checked-color !default;
+$custom-checkbox-indicator-icon-indeterminate: str-replace(url("data:image/svg+xml;charset=utf8,%3Csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 4 4'%3E%3Cpath stroke='#{$custom-checkbox-indicator-indeterminate-color}' d='M0 2h4'/%3E%3C/svg%3E"), "#", "%23") !default;
+$custom-checkbox-indicator-indeterminate-box-shadow: none !default;
+
+$custom-radio-indicator-border-radius: 50% !default;
+$custom-radio-indicator-icon-checked: str-replace(url("data:image/svg+xml;charset=utf8,%3Csvg xmlns='http://www.w3.org/2000/svg' viewBox='-4 -4 8 8'%3E%3Ccircle r='3' fill='#{$custom-control-indicator-checked-color}'/%3E%3C/svg%3E"), "#", "%23") !default;
+
+$custom-select-padding-y: .375rem !default;
+$custom-select-padding-x: .75rem !default;
+$custom-select-height: $input-height !default;
+$custom-select-indicator-padding: 1rem !default; // Extra padding to account for the presence of the background-image based indicator
+$custom-select-line-height: $input-btn-line-height !default;
+$custom-select-color: $input-color !default;
+$custom-select-disabled-color: $gray-600 !default;
+$custom-select-bg: $input-bg !default;
+$custom-select-disabled-bg: $gray-200 !default;
+$custom-select-bg-size: 8px 10px !default; // In pixels because image dimensions
+$custom-select-indicator-color: $gray-800 !default;
+$custom-select-indicator: str-replace(url("data:image/svg+xml;charset=utf8,%3Csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 4 5'%3E%3Cpath fill='#{$custom-select-indicator-color}' d='M2 0L0 2h4zm0 5L0 3h4z'/%3E%3C/svg%3E"), "#", "%23") !default;
+$custom-select-border-width: $input-btn-border-width !default;
+$custom-select-border-color: $input-border-color !default;
+$custom-select-border-radius: $border-radius !default;
+
+$custom-select-focus-border-color: $input-focus-border-color !default;
+$custom-select-focus-box-shadow: inset 0 1px 2px rgba($black, .075),
+ 0 0 5px rgba($custom-select-focus-border-color, .5) !default;
+
+$custom-select-font-size-sm: 75% !default;
+$custom-select-height-sm: $input-height-sm !default;
+
+$custom-select-font-size-lg: 125% !default;
+$custom-select-height-lg: $input-height-lg !default;
+
+$custom-range-track-width: 100% !default;
+$custom-range-track-height: .5rem !default;
+$custom-range-track-cursor: pointer !default;
+$custom-range-track-bg: $gray-300 !default;
+$custom-range-track-border-radius: 1rem !default;
+$custom-range-track-box-shadow: inset 0 .25rem .25rem rgba($black, .1) !default;
+
+$custom-range-thumb-width: 1rem !default;
+$custom-range-thumb-height: $custom-range-thumb-width !default;
+$custom-range-thumb-bg: $component-active-bg !default;
+$custom-range-thumb-border: 0 !default;
+$custom-range-thumb-border-radius: 1rem !default;
+$custom-range-thumb-box-shadow: 0 .1rem .25rem rgba($black, .1) !default;
+$custom-range-thumb-focus-box-shadow: 0 0 0 1px $body-bg,
+ $input-btn-focus-box-shadow !default;
+$custom-range-thumb-active-bg: lighten($component-active-bg, 35%) !default;
+
+$custom-file-height: $input-height !default;
+$custom-file-focus-border-color: $input-focus-border-color !default;
+$custom-file-focus-box-shadow: $input-btn-focus-box-shadow !default;
+
+$custom-file-padding-y: $input-btn-padding-y !default;
+$custom-file-padding-x: $input-btn-padding-x !default;
+$custom-file-line-height: $input-btn-line-height !default;
+$custom-file-color: $input-color !default;
+$custom-file-bg: $input-bg !default;
+$custom-file-border-width: $input-btn-border-width !default;
+$custom-file-border-color: $input-border-color !default;
+$custom-file-border-radius: $input-border-radius !default;
+$custom-file-box-shadow: $input-box-shadow !default;
+$custom-file-button-color: $custom-file-color !default;
+$custom-file-button-bg: $input-group-addon-bg !default;
$custom-file-text: (
en: "Browse"
-) !default;
+ ) !default;
// Form validation
-$form-feedback-margin-top: $form-text-margin-top !default;
-$form-feedback-font-size: $small-font-size !default;
-$form-feedback-valid-color: theme-color("success") !default;
-$form-feedback-invalid-color: theme-color("danger") !default;
+$form-feedback-margin-top: $form-text-margin-top !default;
+$form-feedback-font-size: $small-font-size !default;
+$form-feedback-valid-color: theme-color("success") !default;
+$form-feedback-invalid-color: theme-color("danger") !default;
// Dropdowns
//
// Dropdown menu container and contents.
-$dropdown-min-width: 10rem !default;
-$dropdown-padding-y: .5rem !default;
-$dropdown-spacer: .125rem !default;
-$dropdown-bg: $white !default;
-$dropdown-border-color: rgba($black, .15) !default;
-$dropdown-border-radius: $border-radius !default;
-$dropdown-border-width: $border-width !default;
-$dropdown-divider-bg: $gray-200 !default;
-$dropdown-box-shadow: 0 .5rem 1rem rgba($black, .175) !default;
+$dropdown-min-width: 10rem !default;
+$dropdown-padding-y: .5rem !default;
+$dropdown-spacer: .125rem !default;
+$dropdown-bg: $white !default;
+$dropdown-border-color: rgba($black, .15) !default;
+$dropdown-border-radius: $border-radius !default;
+$dropdown-border-width: $border-width !default;
+$dropdown-divider-bg: $gray-200 !default;
+$dropdown-box-shadow: 0 .5rem 1rem rgba($black, .175) !default;
-$dropdown-link-color: $gray-900 !default;
-$dropdown-link-hover-color: darken($gray-900, 5%) !default;
-$dropdown-link-hover-bg: $gray-100 !default;
+$dropdown-link-color: $gray-900 !default;
+$dropdown-link-hover-color: darken($gray-900, 5%) !default;
+$dropdown-link-hover-bg: $gray-100 !default;
-$dropdown-link-active-color: $component-active-color !default;
-$dropdown-link-active-bg: $component-active-bg !default;
+$dropdown-link-active-color: $component-active-color !default;
+$dropdown-link-active-bg: $component-active-bg !default;
-$dropdown-link-disabled-color: $gray-600 !default;
+$dropdown-link-disabled-color: $gray-600 !default;
-$dropdown-item-padding-y: .25rem !default;
-$dropdown-item-padding-x: 1.5rem !default;
+$dropdown-item-padding-y: .25rem !default;
+$dropdown-item-padding-x: 1.5rem !default;
-$dropdown-header-color: $gray-600 !default;
+$dropdown-header-color: $gray-600 !default;
// Z-index master list
@@ -586,343 +602,349 @@ $dropdown-header-color: $gray-600 !default;
// Warning: Avoid customizing these values. They're used for a bird's eye view
// of components dependent on the z-axis and are designed to all work together.
-$zindex-dropdown: 1000 !default;
-$zindex-sticky: 1020 !default;
-$zindex-fixed: 1030 !default;
-$zindex-modal-backdrop: 1040 !default;
-$zindex-modal: 1050 !default;
-$zindex-popover: 1060 !default;
-$zindex-tooltip: 1070 !default;
+$zindex-dropdown: 1000 !default;
+$zindex-sticky: 1020 !default;
+$zindex-fixed: 1030 !default;
+$zindex-modal-backdrop: 1040 !default;
+$zindex-modal: 1050 !default;
+$zindex-popover: 1060 !default;
+$zindex-tooltip: 1070 !default;
// Navs
-$nav-link-padding-y: .5rem !default;
-$nav-link-padding-x: 1rem !default;
-$nav-link-disabled-color: $gray-600 !default;
+$nav-link-padding-y: .5rem !default;
+$nav-link-padding-x: 1rem !default;
+$nav-link-disabled-color: $gray-600 !default;
-$nav-tabs-border-color: $gray-300 !default;
-$nav-tabs-border-width: $border-width !default;
-$nav-tabs-border-radius: $border-radius !default;
-$nav-tabs-link-hover-border-color: $gray-200 $gray-200 $nav-tabs-border-color !default;
-$nav-tabs-link-active-color: $gray-700 !default;
-$nav-tabs-link-active-bg: $body-bg !default;
+$nav-tabs-border-color: $gray-300 !default;
+$nav-tabs-border-width: $border-width !default;
+$nav-tabs-border-radius: $border-radius !default;
+$nav-tabs-link-hover-border-color: $gray-200 $gray-200 $nav-tabs-border-color !default;
+$nav-tabs-link-active-color: $gray-700 !default;
+$nav-tabs-link-active-bg: $body-bg !default;
$nav-tabs-link-active-border-color: $gray-300 $gray-300 $nav-tabs-link-active-bg !default;
-$nav-pills-border-radius: $border-radius !default;
-$nav-pills-link-active-color: $component-active-color !default;
-$nav-pills-link-active-bg: $component-active-bg !default;
+$nav-pills-border-radius: $border-radius !default;
+$nav-pills-link-active-color: $component-active-color !default;
+$nav-pills-link-active-bg: $component-active-bg !default;
-$nav-divider-color: $gray-200 !default;
-$nav-divider-margin-y: ($spacer / 2) !default;
+$nav-divider-color: $gray-200 !default;
+$nav-divider-margin-y: (
+ $spacer / 2) !default;
// Navbar
-$navbar-padding-y: ($spacer / 2) !default;
-$navbar-padding-x: $spacer !default;
+$navbar-padding-y: (
+ $spacer / 2) !default;
+$navbar-padding-x: $spacer !default;
-$navbar-nav-link-padding-x: .5rem !default;
+$navbar-nav-link-padding-x: .5rem !default;
-$navbar-brand-font-size: $font-size-lg !default;
+$navbar-brand-font-size: $font-size-lg !default;
// Compute the navbar-brand padding-y so the navbar-brand will have the same height as navbar-text and nav-link
-$nav-link-height: ($font-size-base * $line-height-base + $nav-link-padding-y * 2) !default;
-$navbar-brand-height: $navbar-brand-font-size * $line-height-base !default;
-$navbar-brand-padding-y: ($nav-link-height - $navbar-brand-height) / 2 !default;
-
-$navbar-toggler-padding-y: .25rem !default;
-$navbar-toggler-padding-x: .75rem !default;
-$navbar-toggler-font-size: $font-size-lg !default;
-$navbar-toggler-border-radius: $btn-border-radius !default;
-
-$navbar-dark-color: rgba($white, .5) !default;
-$navbar-dark-hover-color: rgba($white, .75) !default;
-$navbar-dark-active-color: $white !default;
-$navbar-dark-disabled-color: rgba($white, .25) !default;
-$navbar-dark-toggler-icon-bg: str-replace(url("data:image/svg+xml;charset=utf8,%3Csvg viewBox='0 0 30 30' xmlns='http://www.w3.org/2000/svg'%3E%3Cpath stroke='#{$navbar-dark-color}' stroke-width='2' stroke-linecap='round' stroke-miterlimit='10' d='M4 7h22M4 15h22M4 23h22'/%3E%3C/svg%3E"), "#", "%23") !default;
-$navbar-dark-toggler-border-color: rgba($white, .1) !default;
-
-$navbar-light-color: rgba($black, .5) !default;
-$navbar-light-hover-color: rgba($black, .7) !default;
-$navbar-light-active-color: rgba($black, .9) !default;
-$navbar-light-disabled-color: rgba($black, .3) !default;
-$navbar-light-toggler-icon-bg: str-replace(url("data:image/svg+xml;charset=utf8,%3Csvg viewBox='0 0 30 30' xmlns='http://www.w3.org/2000/svg'%3E%3Cpath stroke='#{$navbar-light-color}' stroke-width='2' stroke-linecap='round' stroke-miterlimit='10' d='M4 7h22M4 15h22M4 23h22'/%3E%3C/svg%3E"), "#", "%23") !default;
+$nav-link-height: (
+ $font-size-base * $line-height-base + $nav-link-padding-y * 2) !default;
+$navbar-brand-height: $navbar-brand-font-size * $line-height-base !default;
+$navbar-brand-padding-y: (
+ $nav-link-height - $navbar-brand-height) / 2 !default;
+
+$navbar-toggler-padding-y: .25rem !default;
+$navbar-toggler-padding-x: .75rem !default;
+$navbar-toggler-font-size: $font-size-lg !default;
+$navbar-toggler-border-radius: $btn-border-radius !default;
+
+$navbar-dark-color: rgba($white, .5) !default;
+$navbar-dark-hover-color: rgba($white, .75) !default;
+$navbar-dark-active-color: $white !default;
+$navbar-dark-disabled-color: rgba($white, .25) !default;
+$navbar-dark-toggler-icon-bg: str-replace(url("data:image/svg+xml;charset=utf8,%3Csvg viewBox='0 0 30 30' xmlns='http://www.w3.org/2000/svg'%3E%3Cpath stroke='#{$navbar-dark-color}' stroke-width='2' stroke-linecap='round' stroke-miterlimit='10' d='M4 7h22M4 15h22M4 23h22'/%3E%3C/svg%3E"), "#", "%23") !default;
+$navbar-dark-toggler-border-color: rgba($white, .1) !default;
+
+$navbar-light-color: rgba($black, .5) !default;
+$navbar-light-hover-color: rgba($black, .7) !default;
+$navbar-light-active-color: rgba($black, .9) !default;
+$navbar-light-disabled-color: rgba($black, .3) !default;
+$navbar-light-toggler-icon-bg: str-replace(url("data:image/svg+xml;charset=utf8,%3Csvg viewBox='0 0 30 30' xmlns='http://www.w3.org/2000/svg'%3E%3Cpath stroke='#{$navbar-light-color}' stroke-width='2' stroke-linecap='round' stroke-miterlimit='10' d='M4 7h22M4 15h22M4 23h22'/%3E%3C/svg%3E"), "#", "%23") !default;
$navbar-light-toggler-border-color: rgba($black, .1) !default;
// Pagination
-$pagination-padding-y: .5rem !default;
-$pagination-padding-x: .75rem !default;
-$pagination-padding-y-sm: .25rem !default;
-$pagination-padding-x-sm: .5rem !default;
-$pagination-padding-y-lg: .75rem !default;
-$pagination-padding-x-lg: 1.5rem !default;
-$pagination-line-height: 1.25 !default;
+$pagination-padding-y: .5rem !default;
+$pagination-padding-x: .75rem !default;
+$pagination-padding-y-sm: .25rem !default;
+$pagination-padding-x-sm: .5rem !default;
+$pagination-padding-y-lg: .75rem !default;
+$pagination-padding-x-lg: 1.5rem !default;
+$pagination-line-height: 1.25 !default;
-$pagination-color: $link-color !default;
-$pagination-bg: $white !default;
-$pagination-border-width: $border-width !default;
-$pagination-border-color: $gray-300 !default;
+$pagination-color: $link-color !default;
+$pagination-bg: $white !default;
+$pagination-border-width: $border-width !default;
+$pagination-border-color: $gray-300 !default;
-$pagination-focus-box-shadow: $input-btn-focus-box-shadow !default;
-$pagination-focus-outline: 0 !default;
+$pagination-focus-box-shadow: $input-btn-focus-box-shadow !default;
+$pagination-focus-outline: 0 !default;
-$pagination-hover-color: $link-hover-color !default;
-$pagination-hover-bg: $gray-200 !default;
-$pagination-hover-border-color: $gray-300 !default;
+$pagination-hover-color: $link-hover-color !default;
+$pagination-hover-bg: $gray-200 !default;
+$pagination-hover-border-color: $gray-300 !default;
-$pagination-active-color: $component-active-color !default;
-$pagination-active-bg: $component-active-bg !default;
-$pagination-active-border-color: $pagination-active-bg !default;
+$pagination-active-color: $component-active-color !default;
+$pagination-active-bg: $component-active-bg !default;
+$pagination-active-border-color: $pagination-active-bg !default;
-$pagination-disabled-color: $gray-600 !default;
-$pagination-disabled-bg: $white !default;
-$pagination-disabled-border-color: $gray-300 !default;
+$pagination-disabled-color: $gray-600 !default;
+$pagination-disabled-bg: $white !default;
+$pagination-disabled-border-color: $gray-300 !default;
// Jumbotron
-$jumbotron-padding: 2rem !default;
-$jumbotron-bg: $gray-200 !default;
+$jumbotron-padding: 2rem !default;
+$jumbotron-bg: $gray-200 !default;
// Cards
-$card-spacer-y: .75rem !default;
-$card-spacer-x: 1.25rem !default;
-$card-border-width: $border-width !default;
-$card-border-radius: $border-radius !default;
-$card-border-color: rgba($black, .125) !default;
-$card-inner-border-radius: calc(#{$card-border-radius} - #{$card-border-width}) !default;
-$card-cap-bg: rgba($black, .03) !default;
-$card-bg: $white !default;
+$card-spacer-y: .75rem !default;
+$card-spacer-x: 1.25rem !default;
+$card-border-width: $border-width !default;
+$card-border-radius: $border-radius !default;
+$card-border-color: rgba($black, .125) !default;
+$card-inner-border-radius: calc(#{$card-border-radius} - #{$card-border-width}) !default;
+$card-cap-bg: rgba($black, .03) !default;
+$card-bg: $white !default;
-$card-img-overlay-padding: 1.25rem !default;
+$card-img-overlay-padding: 1.25rem !default;
-$card-group-margin: ($grid-gutter-width / 2) !default;
-$card-deck-margin: $card-group-margin !default;
+$card-group-margin: (
+ $grid-gutter-width / 2) !default;
+$card-deck-margin: $card-group-margin !default;
-$card-columns-count: 3 !default;
-$card-columns-gap: 1.25rem !default;
-$card-columns-margin: $card-spacer-y !default;
+$card-columns-count: 3 !default;
+$card-columns-gap: 1.25rem !default;
+$card-columns-margin: $card-spacer-y !default;
// Tooltips
-$tooltip-font-size: $font-size-sm !default;
-$tooltip-max-width: 200px !default;
-$tooltip-color: $white !default;
-$tooltip-bg: $black !default;
-$tooltip-border-radius: $border-radius !default;
-$tooltip-opacity: .9 !default;
-$tooltip-padding-y: .25rem !default;
-$tooltip-padding-x: .5rem !default;
-$tooltip-margin: 0 !default;
+$tooltip-font-size: $font-size-sm !default;
+$tooltip-max-width: 200px !default;
+$tooltip-color: $white !default;
+$tooltip-bg: $black !default;
+$tooltip-border-radius: $border-radius !default;
+$tooltip-opacity: .9 !default;
+$tooltip-padding-y: .25rem !default;
+$tooltip-padding-x: .5rem !default;
+$tooltip-margin: 0 !default;
-$tooltip-arrow-width: .8rem !default;
-$tooltip-arrow-height: .4rem !default;
-$tooltip-arrow-color: $tooltip-bg !default;
+$tooltip-arrow-width: .8rem !default;
+$tooltip-arrow-height: .4rem !default;
+$tooltip-arrow-color: $tooltip-bg !default;
// Popovers
-$popover-font-size: $font-size-sm !default;
-$popover-bg: $white !default;
-$popover-max-width: 276px !default;
-$popover-border-width: $border-width !default;
-$popover-border-color: rgba($black, .2) !default;
-$popover-border-radius: $border-radius-lg !default;
-$popover-box-shadow: 0 .25rem .5rem rgba($black, .2) !default;
+$popover-font-size: $font-size-sm !default;
+$popover-bg: $white !default;
+$popover-max-width: 276px !default;
+$popover-border-width: $border-width !default;
+$popover-border-color: rgba($black, .2) !default;
+$popover-border-radius: $border-radius-lg !default;
+$popover-box-shadow: 0 .25rem .5rem rgba($black, .2) !default;
-$popover-header-bg: darken($popover-bg, 3%) !default;
-$popover-header-color: $headings-color !default;
-$popover-header-padding-y: .5rem !default;
-$popover-header-padding-x: .75rem !default;
+$popover-header-bg: darken($popover-bg, 3%) !default;
+$popover-header-color: $headings-color !default;
+$popover-header-padding-y: .5rem !default;
+$popover-header-padding-x: .75rem !default;
-$popover-body-color: $body-color !default;
-$popover-body-padding-y: $popover-header-padding-y !default;
-$popover-body-padding-x: $popover-header-padding-x !default;
+$popover-body-color: $body-color !default;
+$popover-body-padding-y: $popover-header-padding-y !default;
+$popover-body-padding-x: $popover-header-padding-x !default;
-$popover-arrow-width: 1rem !default;
-$popover-arrow-height: .5rem !default;
-$popover-arrow-color: $popover-bg !default;
+$popover-arrow-width: 1rem !default;
+$popover-arrow-height: .5rem !default;
+$popover-arrow-color: $popover-bg !default;
-$popover-arrow-outer-color: fade-in($popover-border-color, .05) !default;
+$popover-arrow-outer-color: fade-in($popover-border-color, .05) !default;
// Badges
-$badge-font-size: 75% !default;
-$badge-font-weight: $font-weight-bold !default;
-$badge-padding-y: .25em !default;
-$badge-padding-x: .4em !default;
-$badge-border-radius: $border-radius !default;
+$badge-font-size: 75% !default;
+$badge-font-weight: $font-weight-bold !default;
+$badge-padding-y: .25em !default;
+$badge-padding-x: .4em !default;
+$badge-border-radius: $border-radius !default;
-$badge-pill-padding-x: .6em !default;
+$badge-pill-padding-x: .6em !default;
// Use a higher than normal value to ensure completely rounded edges when
// customizing padding or font-size on labels.
-$badge-pill-border-radius: 10rem !default;
+$badge-pill-border-radius: 10rem !default;
// Modals
// Padding applied to the modal body
-$modal-inner-padding: 1rem !default;
+$modal-inner-padding: 1rem !default;
-$modal-dialog-margin: .5rem !default;
-$modal-dialog-margin-y-sm-up: 1.75rem !default;
+$modal-dialog-margin: .5rem !default;
+$modal-dialog-margin-y-sm-up: 1.75rem !default;
-$modal-title-line-height: $line-height-base !default;
+$modal-title-line-height: $line-height-base !default;
-$modal-content-bg: $white !default;
-$modal-content-border-color: rgba($black, .2) !default;
-$modal-content-border-width: $border-width !default;
-$modal-content-border-radius: $border-radius-lg !default;
-$modal-content-box-shadow-xs: 0 .25rem .5rem rgba($black, .5) !default;
-$modal-content-box-shadow-sm-up: 0 .5rem 1rem rgba($black, .5) !default;
+$modal-content-bg: $white !default;
+$modal-content-border-color: rgba($black, .2) !default;
+$modal-content-border-width: $border-width !default;
+$modal-content-border-radius: $border-radius-lg !default;
+$modal-content-box-shadow-xs: 0 .25rem .5rem rgba($black, .5) !default;
+$modal-content-box-shadow-sm-up: 0 .5rem 1rem rgba($black, .5) !default;
-$modal-backdrop-bg: $black !default;
-$modal-backdrop-opacity: .5 !default;
-$modal-header-border-color: $gray-200 !default;
-$modal-footer-border-color: $modal-header-border-color !default;
-$modal-header-border-width: $modal-content-border-width !default;
-$modal-footer-border-width: $modal-header-border-width !default;
-$modal-header-padding: 1rem !default;
+$modal-backdrop-bg: $black !default;
+$modal-backdrop-opacity: .5 !default;
+$modal-header-border-color: $gray-200 !default;
+$modal-footer-border-color: $modal-header-border-color !default;
+$modal-header-border-width: $modal-content-border-width !default;
+$modal-footer-border-width: $modal-header-border-width !default;
+$modal-header-padding: 1rem !default;
-$modal-lg: 800px !default;
-$modal-md: 500px !default;
-$modal-sm: 300px !default;
+$modal-lg: 800px !default;
+$modal-md: 500px !default;
+$modal-sm: 300px !default;
-$modal-transition: transform .3s ease-out !default;
+$modal-transition: transform .3s ease-out !default;
// Alerts
//
// Define alert colors, border radius, and padding.
-$alert-padding-y: .75rem !default;
-$alert-padding-x: 1.25rem !default;
-$alert-margin-bottom: 1rem !default;
-$alert-border-radius: $border-radius !default;
-$alert-link-font-weight: $font-weight-bold !default;
-$alert-border-width: $border-width !default;
+$alert-padding-y: .75rem !default;
+$alert-padding-x: 1.25rem !default;
+$alert-margin-bottom: 1rem !default;
+$alert-border-radius: $border-radius !default;
+$alert-link-font-weight: $font-weight-bold !default;
+$alert-border-width: $border-width !default;
-$alert-bg-level: -10 !default;
-$alert-border-level: -9 !default;
-$alert-color-level: 6 !default;
+$alert-bg-level: -10 !default;
+$alert-border-level: -9 !default;
+$alert-color-level: 6 !default;
// Progress bars
-$progress-height: 1rem !default;
-$progress-font-size: ($font-size-base * .75) !default;
-$progress-bg: $gray-200 !default;
-$progress-border-radius: $border-radius !default;
-$progress-box-shadow: inset 0 .1rem .1rem rgba($black, .1) !default;
-$progress-bar-color: $white !default;
-$progress-bar-bg: theme-color("primary") !default;
-$progress-bar-animation-timing: 1s linear infinite !default;
-$progress-bar-transition: width .6s ease !default;
+$progress-height: 1rem !default;
+$progress-font-size: (
+ $font-size-base * .75) !default;
+$progress-bg: $gray-200 !default;
+$progress-border-radius: $border-radius !default;
+$progress-box-shadow: inset 0 .1rem .1rem rgba($black, .1) !default;
+$progress-bar-color: $white !default;
+$progress-bar-bg: theme-color("primary") !default;
+$progress-bar-animation-timing: 1s linear infinite !default;
+$progress-bar-transition: width .6s ease !default;
// List group
-$list-group-bg: $white !default;
-$list-group-border-color: rgba($black, .125) !default;
-$list-group-border-width: $border-width !default;
-$list-group-border-radius: $border-radius !default;
+$list-group-bg: $white !default;
+$list-group-border-color: rgba($black, .125) !default;
+$list-group-border-width: $border-width !default;
+$list-group-border-radius: $border-radius !default;
-$list-group-item-padding-y: .75rem !default;
-$list-group-item-padding-x: 1.25rem !default;
+$list-group-item-padding-y: .75rem !default;
+$list-group-item-padding-x: 1.25rem !default;
-$list-group-hover-bg: $gray-100 !default;
-$list-group-active-color: $component-active-color !default;
-$list-group-active-bg: $component-active-bg !default;
-$list-group-active-border-color: $list-group-active-bg !default;
+$list-group-hover-bg: $gray-100 !default;
+$list-group-active-color: $component-active-color !default;
+$list-group-active-bg: $component-active-bg !default;
+$list-group-active-border-color: $list-group-active-bg !default;
-$list-group-disabled-color: $gray-600 !default;
-$list-group-disabled-bg: $list-group-bg !default;
+$list-group-disabled-color: $gray-600 !default;
+$list-group-disabled-bg: $list-group-bg !default;
-$list-group-action-color: $gray-700 !default;
-$list-group-action-hover-color: $list-group-action-color !default;
+$list-group-action-color: $gray-700 !default;
+$list-group-action-hover-color: $list-group-action-color !default;
-$list-group-action-active-color: $body-color !default;
-$list-group-action-active-bg: $gray-200 !default;
+$list-group-action-active-color: $body-color !default;
+$list-group-action-active-bg: $gray-200 !default;
// Image thumbnails
-$thumbnail-padding: .25rem !default;
-$thumbnail-bg: $body-bg !default;
-$thumbnail-border-width: $border-width !default;
-$thumbnail-border-color: $gray-300 !default;
-$thumbnail-border-radius: $border-radius !default;
-$thumbnail-box-shadow: 0 1px 2px rgba($black, .075) !default;
+$thumbnail-padding: .25rem !default;
+$thumbnail-bg: $body-bg !default;
+$thumbnail-border-width: $border-width !default;
+$thumbnail-border-color: $gray-300 !default;
+$thumbnail-border-radius: $border-radius !default;
+$thumbnail-box-shadow: 0 1px 2px rgba($black, .075) !default;
// Figures
-$figure-caption-font-size: 90% !default;
-$figure-caption-color: $gray-600 !default;
+$figure-caption-font-size: 90% !default;
+$figure-caption-color: $gray-600 !default;
// Breadcrumbs
-$breadcrumb-padding-y: .75rem !default;
-$breadcrumb-padding-x: 1rem !default;
-$breadcrumb-item-padding: .5rem !default;
+$breadcrumb-padding-y: .75rem !default;
+$breadcrumb-padding-x: 1rem !default;
+$breadcrumb-item-padding: .5rem !default;
-$breadcrumb-margin-bottom: 1rem !default;
+$breadcrumb-margin-bottom: 1rem !default;
-$breadcrumb-bg: $gray-200 !default;
-$breadcrumb-divider-color: $gray-600 !default;
-$breadcrumb-active-color: $gray-600 !default;
-$breadcrumb-divider: quote("/") !default;
+$breadcrumb-bg: $gray-200 !default;
+$breadcrumb-divider-color: $gray-600 !default;
+$breadcrumb-active-color: $gray-600 !default;
+$breadcrumb-divider: quote("/") !default;
-$breadcrumb-border-radius: $border-radius !default;
+$breadcrumb-border-radius: $border-radius !default;
// Carousel
-$carousel-control-color: $white !default;
-$carousel-control-width: 15% !default;
-$carousel-control-opacity: .5 !default;
+$carousel-control-color: $white !default;
+$carousel-control-width: 15% !default;
+$carousel-control-opacity: .5 !default;
-$carousel-indicator-width: 30px !default;
-$carousel-indicator-height: 3px !default;
-$carousel-indicator-spacer: 3px !default;
-$carousel-indicator-active-bg: $white !default;
+$carousel-indicator-width: 30px !default;
+$carousel-indicator-height: 3px !default;
+$carousel-indicator-spacer: 3px !default;
+$carousel-indicator-active-bg: $white !default;
-$carousel-caption-width: 70% !default;
-$carousel-caption-color: $white !default;
+$carousel-caption-width: 70% !default;
+$carousel-caption-color: $white !default;
-$carousel-control-icon-width: 20px !default;
+$carousel-control-icon-width: 20px !default;
-$carousel-control-prev-icon-bg: str-replace(url("data:image/svg+xml;charset=utf8,%3Csvg xmlns='http://www.w3.org/2000/svg' fill='#{$carousel-control-color}' viewBox='0 0 8 8'%3E%3Cpath d='M5.25 0l-4 4 4 4 1.5-1.5-2.5-2.5 2.5-2.5-1.5-1.5z'/%3E%3C/svg%3E"), "#", "%23") !default;
-$carousel-control-next-icon-bg: str-replace(url("data:image/svg+xml;charset=utf8,%3Csvg xmlns='http://www.w3.org/2000/svg' fill='#{$carousel-control-color}' viewBox='0 0 8 8'%3E%3Cpath d='M2.75 0l-1.5 1.5 2.5 2.5-2.5 2.5 1.5 1.5 4-4-4-4z'/%3E%3C/svg%3E"), "#", "%23") !default;
+$carousel-control-prev-icon-bg: str-replace(url("data:image/svg+xml;charset=utf8,%3Csvg xmlns='http://www.w3.org/2000/svg' fill='#{$carousel-control-color}' viewBox='0 0 8 8'%3E%3Cpath d='M5.25 0l-4 4 4 4 1.5-1.5-2.5-2.5 2.5-2.5-1.5-1.5z'/%3E%3C/svg%3E"), "#", "%23") !default;
+$carousel-control-next-icon-bg: str-replace(url("data:image/svg+xml;charset=utf8,%3Csvg xmlns='http://www.w3.org/2000/svg' fill='#{$carousel-control-color}' viewBox='0 0 8 8'%3E%3Cpath d='M2.75 0l-1.5 1.5 2.5 2.5-2.5 2.5 1.5 1.5 4-4-4-4z'/%3E%3C/svg%3E"), "#", "%23") !default;
-$carousel-transition: transform .6s ease !default; // Define transform transition first if using multiple transitons (e.g., `transform 2s ease, opacity .5s ease-out`)
+$carousel-transition:               transform .6s ease !default; // Define transform transition first if using multiple transitions (e.g., `transform 2s ease, opacity .5s ease-out`)
// Close
-$close-font-size: $font-size-base * 1.5 !default;
-$close-font-weight: $font-weight-bold !default;
-$close-color: $black !default;
-$close-text-shadow: 0 1px 0 $white !default;
+$close-font-size: $font-size-base * 1.5 !default;
+$close-font-weight: $font-weight-bold !default;
+$close-color: $black !default;
+$close-text-shadow: 0 1px 0 $white !default;
// Code
-$code-font-size: 87.5% !default;
-$code-color: $pink !default;
+$code-font-size: 87.5% !default;
+$code-color: $pink !default;
-$kbd-padding-y: .2rem !default;
-$kbd-padding-x: .4rem !default;
-$kbd-font-size: $code-font-size !default;
-$kbd-color: $white !default;
-$kbd-bg: $gray-900 !default;
+$kbd-padding-y: .2rem !default;
+$kbd-padding-x: .4rem !default;
+$kbd-font-size: $code-font-size !default;
+$kbd-color: $white !default;
+$kbd-bg: $gray-900 !default;
-$pre-color: $gray-900 !default;
-$pre-scrollable-max-height: 340px !default;
+$pre-color: $gray-900 !default;
+$pre-scrollable-max-height: 340px !default;
// Printing
-$print-page-size: a3 !default;
-$print-body-min-width: map-get($grid-breakpoints, "lg") !default;
+$print-page-size: a3 !default;
+$print-body-min-width: map-get($grid-breakpoints, "lg") !default;
diff --git a/assets/images/AML-25765_2942121_mutable_color.jpg b/assets/images/AML-25765_2942121_mutable_color.jpg
new file mode 100644
index 00000000..2f445d7c
Binary files /dev/null and b/assets/images/AML-25765_2942121_mutable_color.jpg differ
diff --git a/assets/images/AWS-Topology-View-2-1024x701.png b/assets/images/AWS-Topology-View-2-1024x701.png
new file mode 100644
index 00000000..59ee5921
Binary files /dev/null and b/assets/images/AWS-Topology-View-2-1024x701.png differ
diff --git a/assets/images/Batfish_ansible-300x127.png b/assets/images/Batfish_ansible-300x127.png
new file mode 100644
index 00000000..c3fe35af
Binary files /dev/null and b/assets/images/Batfish_ansible-300x127.png differ
diff --git a/assets/images/Net_Diagram.jpg b/assets/images/Net_Diagram.jpg
new file mode 100644
index 00000000..0048eebf
Binary files /dev/null and b/assets/images/Net_Diagram.jpg differ
diff --git a/assets/images/Pictofigo_Frustration.png b/assets/images/Pictofigo_Frustration.png
new file mode 100644
index 00000000..88e494a2
Binary files /dev/null and b/assets/images/Pictofigo_Frustration.png differ
diff --git a/assets/images/Picture1.png b/assets/images/Picture1.png
new file mode 100644
index 00000000..7482b16d
Binary files /dev/null and b/assets/images/Picture1.png differ
diff --git a/assets/images/Screen-Shot-2020-02-16-at-2.41.00-PM-300x170.png b/assets/images/Screen-Shot-2020-02-16-at-2.41.00-PM-300x170.png
new file mode 100644
index 00000000..c03e623d
Binary files /dev/null and b/assets/images/Screen-Shot-2020-02-16-at-2.41.00-PM-300x170.png differ
diff --git a/assets/images/Screen-Shot-2020-02-16-at-2.41.33-PM-300x152.png b/assets/images/Screen-Shot-2020-02-16-at-2.41.33-PM-300x152.png
new file mode 100644
index 00000000..634f08ed
Binary files /dev/null and b/assets/images/Screen-Shot-2020-02-16-at-2.41.33-PM-300x152.png differ
diff --git a/assets/images/Screen-Shot-2020-10-14-at-3.42.20-PM-300x185.png b/assets/images/Screen-Shot-2020-10-14-at-3.42.20-PM-300x185.png
new file mode 100644
index 00000000..6ae0339b
Binary files /dev/null and b/assets/images/Screen-Shot-2020-10-14-at-3.42.20-PM-300x185.png differ
diff --git a/assets/images/Screen-Shot-2021-04-29-at-7.30.23-AM.png b/assets/images/Screen-Shot-2021-04-29-at-7.30.23-AM.png
new file mode 100644
index 00000000..5da92006
Binary files /dev/null and b/assets/images/Screen-Shot-2021-04-29-at-7.30.23-AM.png differ
diff --git a/assets/images/Screen-Shot-2021-04-29-at-7.33.55-AM.png b/assets/images/Screen-Shot-2021-04-29-at-7.33.55-AM.png
new file mode 100644
index 00000000..b67b3021
Binary files /dev/null and b/assets/images/Screen-Shot-2021-04-29-at-7.33.55-AM.png differ
diff --git a/assets/images/Screen-Shot-2021-04-29-at-7.37.15-AM.png b/assets/images/Screen-Shot-2021-04-29-at-7.37.15-AM.png
new file mode 100644
index 00000000..558e32c9
Binary files /dev/null and b/assets/images/Screen-Shot-2021-04-29-at-7.37.15-AM.png differ
diff --git a/assets/images/Screen-Shot-2021-04-29-at-7.38.21-AM.png b/assets/images/Screen-Shot-2021-04-29-at-7.38.21-AM.png
new file mode 100644
index 00000000..50ef1f82
Binary files /dev/null and b/assets/images/Screen-Shot-2021-04-29-at-7.38.21-AM.png differ
diff --git a/assets/images/Screen-Shot-2021-04-29-at-7.38.38-AM.png b/assets/images/Screen-Shot-2021-04-29-at-7.38.38-AM.png
new file mode 100644
index 00000000..4b433cd4
Binary files /dev/null and b/assets/images/Screen-Shot-2021-04-29-at-7.38.38-AM.png differ
diff --git a/assets/images/Screen-Shot-2021-04-29-at-7.39.09-AM.png b/assets/images/Screen-Shot-2021-04-29-at-7.39.09-AM.png
new file mode 100644
index 00000000..b6906adc
Binary files /dev/null and b/assets/images/Screen-Shot-2021-04-29-at-7.39.09-AM.png differ
diff --git a/assets/images/Screen-Shot-2021-05-11-at-1.40.06-PM.png b/assets/images/Screen-Shot-2021-05-11-at-1.40.06-PM.png
new file mode 100644
index 00000000..d292fb0a
Binary files /dev/null and b/assets/images/Screen-Shot-2021-05-11-at-1.40.06-PM.png differ
diff --git a/assets/images/Screen-Shot-2021-05-11-at-2.43.29-PM.png b/assets/images/Screen-Shot-2021-05-11-at-2.43.29-PM.png
new file mode 100644
index 00000000..15ca448b
Binary files /dev/null and b/assets/images/Screen-Shot-2021-05-11-at-2.43.29-PM.png differ
diff --git a/assets/images/Screen-Shot-2021-05-11-at-2.45.28-PM.png b/assets/images/Screen-Shot-2021-05-11-at-2.45.28-PM.png
new file mode 100644
index 00000000..2ca1e6e2
Binary files /dev/null and b/assets/images/Screen-Shot-2021-05-11-at-2.45.28-PM.png differ
diff --git a/assets/images/Screen-Shot-2021-05-11-at-2.50.54-PM.png b/assets/images/Screen-Shot-2021-05-11-at-2.50.54-PM.png
new file mode 100644
index 00000000..14f2c3fb
Binary files /dev/null and b/assets/images/Screen-Shot-2021-05-11-at-2.50.54-PM.png differ
diff --git a/assets/images/Screen-Shot-2021-05-11-at-2.54.51-PM.png b/assets/images/Screen-Shot-2021-05-11-at-2.54.51-PM.png
new file mode 100644
index 00000000..a684f59b
Binary files /dev/null and b/assets/images/Screen-Shot-2021-05-11-at-2.54.51-PM.png differ
diff --git a/assets/images/Screen-Shot-2021-05-27-at-11.46.46-PM.png b/assets/images/Screen-Shot-2021-05-27-at-11.46.46-PM.png
new file mode 100644
index 00000000..f78b02ae
Binary files /dev/null and b/assets/images/Screen-Shot-2021-05-27-at-11.46.46-PM.png differ
diff --git a/assets/images/Screen-Shot-2021-06-04-at-11.41.06-PM.png b/assets/images/Screen-Shot-2021-06-04-at-11.41.06-PM.png
new file mode 100644
index 00000000..6be27668
Binary files /dev/null and b/assets/images/Screen-Shot-2021-06-04-at-11.41.06-PM.png differ
diff --git a/assets/images/Screen-Shot-2021-06-06-at-11.06.46-PM.png b/assets/images/Screen-Shot-2021-06-06-at-11.06.46-PM.png
new file mode 100644
index 00000000..e2e822ab
Binary files /dev/null and b/assets/images/Screen-Shot-2021-06-06-at-11.06.46-PM.png differ
diff --git a/assets/images/Screen-Shot-2021-06-06-at-8.02.00-PM.png b/assets/images/Screen-Shot-2021-06-06-at-8.02.00-PM.png
new file mode 100644
index 00000000..78a8ee0a
Binary files /dev/null and b/assets/images/Screen-Shot-2021-06-06-at-8.02.00-PM.png differ
diff --git a/assets/images/Screen-Shot-2021-06-06-at-9.02.31-PM.png b/assets/images/Screen-Shot-2021-06-06-at-9.02.31-PM.png
new file mode 100644
index 00000000..5b338df5
Binary files /dev/null and b/assets/images/Screen-Shot-2021-06-06-at-9.02.31-PM.png differ
diff --git a/assets/images/Screen-Shot-2021-06-06-at-9.17.26-PM.png b/assets/images/Screen-Shot-2021-06-06-at-9.17.26-PM.png
new file mode 100644
index 00000000..80e03caa
Binary files /dev/null and b/assets/images/Screen-Shot-2021-06-06-at-9.17.26-PM.png differ
diff --git a/assets/images/Screen-Shot-2021-06-06-at-9.55.52-PM.png b/assets/images/Screen-Shot-2021-06-06-at-9.55.52-PM.png
new file mode 100644
index 00000000..909f917e
Binary files /dev/null and b/assets/images/Screen-Shot-2021-06-06-at-9.55.52-PM.png differ
diff --git a/assets/images/Screen-Shot-2021-06-07-at-3.22.50-PM.png b/assets/images/Screen-Shot-2021-06-07-at-3.22.50-PM.png
new file mode 100644
index 00000000..fa0c50aa
Binary files /dev/null and b/assets/images/Screen-Shot-2021-06-07-at-3.22.50-PM.png differ
diff --git a/assets/images/Screen-Shot-2021-11-18-at-8.20.44-PM.png b/assets/images/Screen-Shot-2021-11-18-at-8.20.44-PM.png
new file mode 100644
index 00000000..5149665e
Binary files /dev/null and b/assets/images/Screen-Shot-2021-11-18-at-8.20.44-PM.png differ
diff --git a/assets/images/Screen-Shot-2021-11-18-at-8.21.09-PM.png b/assets/images/Screen-Shot-2021-11-18-at-8.21.09-PM.png
new file mode 100644
index 00000000..e241aee3
Binary files /dev/null and b/assets/images/Screen-Shot-2021-11-18-at-8.21.09-PM.png differ
diff --git a/assets/images/Screenshot-2023-12-17-at-9.01.32-PM.png b/assets/images/Screenshot-2023-12-17-at-9.01.32-PM.png
new file mode 100644
index 00000000..5a2bb8fc
Binary files /dev/null and b/assets/images/Screenshot-2023-12-17-at-9.01.32-PM.png differ
diff --git a/assets/images/april-comic.png b/assets/images/april-comic.png
new file mode 100644
index 00000000..62a2f783
Binary files /dev/null and b/assets/images/april-comic.png differ
diff --git a/assets/images/balancing_coverage.png b/assets/images/balancing_coverage.png
new file mode 100644
index 00000000..298ea276
Binary files /dev/null and b/assets/images/balancing_coverage.png differ
diff --git a/assets/images/batfish-full-500px.png b/assets/images/batfish-full-500px.png
new file mode 100644
index 00000000..5be031b7
Binary files /dev/null and b/assets/images/batfish-full-500px.png differ
diff --git a/assets/images/bgp-route-leak-e1602110402996.png b/assets/images/bgp-route-leak-e1602110402996.png
new file mode 100644
index 00000000..d694cbd6
Binary files /dev/null and b/assets/images/bgp-route-leak-e1602110402996.png differ
diff --git a/assets/images/ci_cd_pipeline.png b/assets/images/ci_cd_pipeline.png
new file mode 100644
index 00000000..98da2d08
Binary files /dev/null and b/assets/images/ci_cd_pipeline.png differ
diff --git a/assets/images/compare-reachability.gif b/assets/images/compare-reachability.gif
new file mode 100644
index 00000000..27cdf666
Binary files /dev/null and b/assets/images/compare-reachability.gif differ
diff --git a/assets/images/cyber-security-3400657_1920.jpg b/assets/images/cyber-security-3400657_1920.jpg
new file mode 100644
index 00000000..ac970f50
Binary files /dev/null and b/assets/images/cyber-security-3400657_1920.jpg differ
diff --git a/assets/images/emulation.png b/assets/images/emulation.png
new file mode 100644
index 00000000..20b9de75
Binary files /dev/null and b/assets/images/emulation.png differ
diff --git a/assets/images/female_construction_worker_on_her_way_to_work_at_a_server_room.png b/assets/images/female_construction_worker_on_her_way_to_work_at_a_server_room.png
new file mode 100644
index 00000000..be578ade
Binary files /dev/null and b/assets/images/female_construction_worker_on_her_way_to_work_at_a_server_room.png differ
diff --git a/assets/images/fig-2.png b/assets/images/fig-2.png
new file mode 100644
index 00000000..db84df74
Binary files /dev/null and b/assets/images/fig-2.png differ
diff --git a/assets/images/fig-3.png b/assets/images/fig-3.png
new file mode 100644
index 00000000..bec279c9
Binary files /dev/null and b/assets/images/fig-3.png differ
diff --git a/assets/images/fig-4.png b/assets/images/fig-4.png
new file mode 100644
index 00000000..7d613933
Binary files /dev/null and b/assets/images/fig-4.png differ
diff --git a/assets/images/fig-5.png b/assets/images/fig-5.png
new file mode 100644
index 00000000..30e5e5ad
Binary files /dev/null and b/assets/images/fig-5.png differ
diff --git a/assets/images/fig-6.png b/assets/images/fig-6.png
new file mode 100644
index 00000000..9f5d9df1
Binary files /dev/null and b/assets/images/fig-6.png differ
diff --git a/assets/images/gns3-logo.png b/assets/images/gns3-logo.png
new file mode 100644
index 00000000..dd5b331c
Binary files /dev/null and b/assets/images/gns3-logo.png differ
diff --git a/assets/images/how-is-validation-done.png b/assets/images/how-is-validation-done.png
new file mode 100644
index 00000000..87fa6f70
Binary files /dev/null and b/assets/images/how-is-validation-done.png differ
diff --git a/assets/images/model-based-analysis.png b/assets/images/model-based-analysis.png
new file mode 100644
index 00000000..42795905
Binary files /dev/null and b/assets/images/model-based-analysis.png differ
diff --git a/assets/images/model-based-emulation.png b/assets/images/model-based-emulation.png
new file mode 100644
index 00000000..9b7e07d5
Binary files /dev/null and b/assets/images/model-based-emulation.png differ
diff --git a/assets/images/monitor-design-validate-deploy.png b/assets/images/monitor-design-validate-deploy.png
new file mode 100644
index 00000000..1ff1d8c6
Binary files /dev/null and b/assets/images/monitor-design-validate-deploy.png differ
diff --git a/assets/images/multi_spec_lang_validation.png b/assets/images/multi_spec_lang_validation.png
new file mode 100644
index 00000000..3820317c
Binary files /dev/null and b/assets/images/multi_spec_lang_validation.png differ
diff --git a/assets/images/network_engineering_workflow3.png b/assets/images/network_engineering_workflow3.png
new file mode 100644
index 00000000..8e141fa8
Binary files /dev/null and b/assets/images/network_engineering_workflow3.png differ
diff --git a/assets/images/network_engineering_workflow_controlplane_validation2.png b/assets/images/network_engineering_workflow_controlplane_validation2.png
new file mode 100644
index 00000000..7e03d4cf
Binary files /dev/null and b/assets/images/network_engineering_workflow_controlplane_validation2.png differ
diff --git a/assets/images/network_engineering_workflow_dataplane_validation2-1.png b/assets/images/network_engineering_workflow_dataplane_validation2-1.png
new file mode 100644
index 00000000..4b381092
Binary files /dev/null and b/assets/images/network_engineering_workflow_dataplane_validation2-1.png differ
diff --git a/assets/images/nmap-subnet-expansion-v2-1024x252.png b/assets/images/nmap-subnet-expansion-v2-1024x252.png
new file mode 100644
index 00000000..47971985
Binary files /dev/null and b/assets/images/nmap-subnet-expansion-v2-1024x252.png differ
diff --git a/assets/images/outage-anatomy.png b/assets/images/outage-anatomy.png
new file mode 100644
index 00000000..09c5ca9a
Binary files /dev/null and b/assets/images/outage-anatomy.png differ
diff --git a/assets/images/pre-post-deploy.png b/assets/images/pre-post-deploy.png
new file mode 100644
index 00000000..c864d830
Binary files /dev/null and b/assets/images/pre-post-deploy.png differ
diff --git a/assets/images/promo-batfish.png b/assets/images/promo-batfish.png
new file mode 100644
index 00000000..cb3b7135
Binary files /dev/null and b/assets/images/promo-batfish.png differ
diff --git a/assets/images/propane_compiler.png b/assets/images/propane_compiler.png
new file mode 100644
index 00000000..97ee1d81
Binary files /dev/null and b/assets/images/propane_compiler.png differ
diff --git a/assets/images/ratul_keynote_2017.jpg b/assets/images/ratul_keynote_2017.jpg
new file mode 100644
index 00000000..84f957d0
Binary files /dev/null and b/assets/images/ratul_keynote_2017.jpg differ
diff --git a/assets/images/speak_validation.svg b/assets/images/speak_validation.svg
new file mode 100644
index 00000000..6a11b642
--- /dev/null
+++ b/assets/images/speak_validation.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/assets/images/stacked-umbrellas.png b/assets/images/stacked-umbrellas.png
new file mode 100644
index 00000000..3227eff6
Binary files /dev/null and b/assets/images/stacked-umbrellas.png differ
diff --git a/assets/images/traceroute-lpv-v2-1024x640.png b/assets/images/traceroute-lpv-v2-1024x640.png
new file mode 100644
index 00000000..85e71729
Binary files /dev/null and b/assets/images/traceroute-lpv-v2-1024x640.png differ
diff --git a/assets/images/two-cars-odometer.png b/assets/images/two-cars-odometer.png
new file mode 100644
index 00000000..03c1cc84
Binary files /dev/null and b/assets/images/two-cars-odometer.png differ
diff --git a/assets/images/what-is-the-scope-of-validation.png b/assets/images/what-is-the-scope-of-validation.png
new file mode 100644
index 00000000..e4a24f40
Binary files /dev/null and b/assets/images/what-is-the-scope-of-validation.png differ
diff --git a/assets/images/when-is-validation-done.png b/assets/images/when-is-validation-done.png
new file mode 100644
index 00000000..8c53eafd
Binary files /dev/null and b/assets/images/when-is-validation-done.png differ
diff --git a/blogs.html b/blogs.html
new file mode 100644
index 00000000..984090db
--- /dev/null
+++ b/blogs.html
@@ -0,0 +1,28 @@
+---
+layout: page
+title: Blogs
+permalink: /blogs
+---
+
+These blog posts were originally hosted on intentionet.com.
+
+{%- if site.posts.size > 0 -%}
+