Our setup is an OpenBGPD edge router that is multihomed with two upstream ISPs within our autonomous system. The two ISP links provide redundancy for the Ubuntu mirror behind the router. The mirror receives traffic through both links, though not evenly distributed; the following diagram gives an overview of the topology.

Multihoming is one of the most common uses of BGP. It gives maximum redundancy through independent uplinks to different ISPs. To use BGP multihoming, however, certain requirements need to be satisfied, so we made sure we have them:

  1. An Autonomous System Number
  2. An independent IP Address Space
  3. Minimum of two ISP uplinks
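With those requirements in place, the edge router's configuration boils down to announcing our own address space to two upstream neighbors. A minimal bgpd.conf sketch of such a setup is shown below; the AS numbers, prefixes, and neighbor addresses are hypothetical placeholders, not the values from our live setup.

```
# /etc/bgpd.conf -- minimal multihoming sketch (all numbers hypothetical)
AS 65001
router-id 192.0.2.1

# announce our independent IP address space
network 198.51.100.0/24

# first upstream ISP
neighbor 192.0.2.254 {
	remote-as 65100
	descr "ISP-A"
}

# second upstream ISP
neighbor 203.0.113.254 {
	remote-as 65200
	descr "ISP-B"
}
```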

With multihoming active in our setup, if one of the ISPs fails, the other ISP link is unaffected. BGP recalculates the full routing table based on the view of the remaining uplink: routes learned from the failed ISP are no longer considered, and traffic flows via the other available link.

My challenge is to test a successful failover between the ISP links when one of the uplinks fails, and also to test different configuration changes for OpenBGPD in our autonomous system. Obviously I cannot touch the existing setup as it is live, so I started to emulate the setup in GNS3.

Before we look into the emulation of our physical setup on GNS3, we shall have a brief introduction to OpenBGPD.

Overview of OpenBGPD

OpenBGPD is a free-software implementation of the BGP version 4 protocol. It is a combination of three processes:

  1. Parent Process
  2. Session Engine
  3. Route Decision Engine

The Parent Process deals with the configuration and interacts with the kernel routing table (FIB).
The Session Engine handles all BGP sessions to neighbors and all timers (keepalive, hold and idle).
The Route Decision Engine takes care of the BGP routing table, the RIB (prefixes and paths), and generates the updates.
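This three-process split is visible on a running OpenBSD system: the parent keeps its privileges while the Session Engine and Route Decision Engine run as separate, unprivileged processes. A quick way to see this (assuming bgpd has been started) is:

```shell
# list the bgpd processes; with privilege separation there should be
# three entries -- the parent plus the two unprivileged child engines
ps aux | grep '[b]gpd'
```

The exact process titles vary between OpenBGPD versions, but three cooperating bgpd processes should appear.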

Source: https://www.openbsd.org/papers/linuxtag06-network/mgp00008.png


Simulation on GNS3

  • GNS3 is a graphical network simulator with which we can design complex network topologies. We can run simulations and configure both virtual devices (VirtualBox and VMware VMs) and real devices (e.g. Cisco).
  • Using GNS3 in this scenario gives us the ability to break or attach links at will, as well as a visual representation of the network topology.
  • I have chosen a setup consisting of 4 VirtualBox VMs running the OpenBSD operating system, each configured with OpenBGPD.
  • The image below depicts the network topology I’ve used as the testbed in the GNS3 simulator.

  • For a successful multihoming scenario in the above testbed, which uses OpenBGPD on the edge router, the destination AS should remain reachable through the other available ISP uplink despite the failure of any one ISP link.
  • We can bring down a BGP session either by removing one of the ISP links or by taking down one of the BGP peers using the “bgpctl” utility.
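For the bgpctl variant, the session to one upstream can be taken down and brought back administratively. A sketch, where the peer address 192.0.2.254 is a hypothetical stand-in for one of the ISP neighbors:

```shell
# administratively take down the session to one upstream peer
bgpctl neighbor 192.0.2.254 down

# check session states for all peers (the downed peer should
# leave the Established state)
bgpctl show summary

# bring the session back up after the failover test
bgpctl neighbor 192.0.2.254 up
```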


When one of the ISP links goes down, keepalives from the peer on the failed link stop arriving; once the hold timer expires, the OpenBGPD Session Engine knows that the session is dead. All routes in the RIB which point to the failed ISP are removed, which we can confirm by inspecting the RIB to make sure our failover has been successful. The new routes are then selected based on the view of the available ISP link and stored in the FIB, which we can check as well.
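Both tables can be inspected with bgpctl on the edge router; after a failover, routes via the failed peer should be absent from the RIB and the FIB should only contain next hops via the surviving uplink:

```shell
# dump the BGP routing table (RIB); paths learned from the
# failed peer should no longer be listed
bgpctl show rib

# dump the forwarding table (FIB) as seen by bgpd; remaining
# routes should point at the surviving ISP link
bgpctl show fib
```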