In the previous article, we introduced Ansible, showed how it can be used to manage network devices, and explained how NAPALM provides a common API for managing multi-vendor equipment. In this article, we will outline how to use Ansible and NAPALM to build a framework that automates the generation of router configurations and the deployment of those configurations to the nodes in the network. With this framework, we can cover the full lifecycle of a typical network deployment: design, deployment, and verification. The diagram below outlines these three main stages and how we can leverage Ansible in each of them.



This model can be used in different scenarios, such as:

  • Building a new Data Center Fabric and validating the deployment.
  • Building a new Enterprise Network and validating the deployment.
  • Automating the build of lab environments to test new features as part of a CI/CD pipeline.

The diagram below outlines the lab setup that we will use to illustrate the use of Ansible and NAPALM. The setup is built using Vagrant, with VirtualBox as the hypervisor running all of the VMs.



vSRX is running in packet mode, so it should be considered a router, not a firewall. It is used in this lab because its footprint is small (512 MB RAM), while XRv requires at least 2 GB RAM.

The routers are connected to form the very simple topology below, where vSRX1 simulates a P router and vSRX2, vSRX3, and XR4 are PE routers.




Setting Up Ansible and NAPALM

Ansible will be installed on a Linux machine running Ubuntu 16.04. Ansible can be installed on different Linux distributions; however, in this deployment we will be using Ubuntu.


sudo apt-get update && sudo apt-get install software-properties-common
sudo apt-add-repository ppa:ansible/ansible
sudo apt-get update
sudo apt-get install ansible

Next, we will install the NAPALM Ansible plugin and configure Ansible to look for the NAPALM modules in the directory where they are installed:


  1. pip install napalm-ansible
  2. touch ~/.ansible.cfg
  3. Add the below lines to the file ~/.ansible.cfg using any editor (vi or nano); note that the library setting belongs under the [defaults] section:

[defaults]
library = /usr/local/lib/python2.7/dist-packages/napalm_ansible


How Ansible identifies managed nodes (node inventory)

Ansible uses a simple INI-like file to identify the nodes that will be managed by the Ansible server and against which it will execute the different playbooks. For simple setups, host and group variables (such as the username and password for SSH and NETCONF) can be associated with the hosts directly in the inventory file. The snippet below is from the inventory file (hosts) that identifies the nodes in this topology.

[junos]
vSRX1 ansible_host=
vSRX2 ansible_host=
vSRX3 ansible_host=

[iosxr]
XR4 ansible_host=




Below are some highlights describing the Ansible inventory file:

  • The nodes are grouped into two groups (junos, iosxr), since this helps us assign different values to the variables based on the grouping.
  • Each group has different variables assigned to it, as shown above under [junos:vars] and [iosxr:vars].
  • ansible_user and ansible_ssh_pass specify the username and password for the SSH session.
  • ansible_host specifies the IP address of the managed host.
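To make the grouping concrete, a complete inventory in this layout might look like the following sketch; the management IP addresses and credentials here are placeholders for illustration, not values from the lab:

```ini
[junos]
vSRX1 ansible_host=192.168.100.11
vSRX2 ansible_host=192.168.100.12
vSRX3 ansible_host=192.168.100.13

[iosxr]
XR4 ansible_host=192.168.100.14

[junos:vars]
ansible_user=admin
ansible_ssh_pass=admin123
dev_os=junos

[iosxr:vars]
ansible_user=admin
ansible_ssh_pass=admin123
dev_os=iosxr
```

The dev_os group variable is what the playbook later uses to pick the vendor-specific template directory.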


Generating the configuration of the devices

Ansible's approach to generating any templated document (whether a router configuration, an HTML document, or any other text-based document) is the same: it uses its template module, which takes two inputs to generate the output text document.

  1. Variables for the data that will be rendered into the template to generate the output text file. This data can be specified in a YAML or JSON document (YAML is more human-readable) that outlines all the data required to build the output router configuration.
  2. A JINJA2 template which holds the vendor-specific (JunOS and IOS-XR in our case) configuration template.

Ansible takes these two inputs and outputs a configuration file for each node in its inventory. The diagram below outlines the process that Ansible executes to generate the per-node router configuration.


The network topology that we will build has the following parameters and design

  • IPv4 addresses with a /24 subnet mask will be used on all core links between the nodes.
  • OSPF is the IGP routing protocol, using Area 0, and all the links are configured as point-to-point.
  • iBGP is used, with vSRX1 acting as the route reflector (RR) for all the routers; all the routers are in AS 65000.

The above requirements are translated into a YAML document that describes the desired network setup. This YAML document, which will be combined with the JINJA2 template, is the data model: it holds all the parameters that describe the network topology, abstracted from any per-vendor syntax. Combining this data model with the JINJA2 template produces the per-vendor router configuration. The snippet below outlines a YAML document describing the network topology outlined at the beginning.
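As an illustration only, a core data model along these lines could look like the sketch below; the field names, addresses, and values are assumptions, not the article's actual document:

```yaml
# Hypothetical core data model: field names and values are illustrative only
nodes:
  - name: vSRX1
    mgmt_ip: 192.168.100.11      # placeholder management address
    router_id: 10.0.0.1
  - name: vSRX2
    mgmt_ip: 192.168.100.12
    router_id: 10.0.0.2
  # ... vSRX3 and XR4 follow the same pattern

links:
  - left: vSRX1
    right: vSRX2
    vlan: 12                     # placeholder VLAN ID
    cost: 10                     # OSPF cost on the link
    subnet: 10.1.12.0/24         # /24 core link, per the design
  # ... one entry per core link

bgp:
  asn: 65000
  rr: vSRX1                      # vSRX1 acts as the route reflector
  rr_clients: [vSRX2, vSRX3, XR4]
  address_families: [inet-unicast]
```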


The above YAML document defines two main objects that outline the network topology: the nodes and the links connecting them. We define each node with its main attributes, such as name, mgmt ipaddr, and router-id. The links are described as point-to-point connections between a left node and a right node, with attributes such as VLAN, cost, and IP address. Finally, the BGP setup is outlined by specifying the route reflector, its clients, and the address families that will be enabled on all the routers. We can think of this YAML document as the LLD that describes the overall architecture of the network.

YAML, JSON, and XML are just data representation/serialization formats that define data structures and how the data relates to each other. So in the above YAML document, nodes is a list in which each item is a dictionary, and the same goes for links. Using these data structures, we can access the information with for loops defined either in Ansible or in the JINJA2 template.

Although the above YAML document describes the network design very well, extracting the data from it to populate the JINJA2 template is hard. Thus we need a per-node data structure, in a YAML document, that describes the same network setup from the perspective of a single node (we can think of it as a per-node LLD). The resulting YAML document describing a single node is shown below.
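A per-node document in this spirit could look like the following sketch (field names and values are hypothetical):

```yaml
# Hypothetical per-node data model for vSRX2: the same design seen from one node
name: vSRX2
router_id: 10.0.0.2
interfaces:
  - name: ge-0/0/0.0
    ipv4: 10.1.12.2/24         # core link towards vSRX1 (placeholder addressing)
    ospf_cost: 10
ospf:
  area: 0
  interface_type: p2p          # all links are point-to-point, per the design
bgp:
  asn: 65000
  neighbors:
    - 10.0.0.1                 # iBGP session to the route reflector (vSRX1)
```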


A snippet of the JINJA2 template used to create the configuration for all JunOS nodes is shown below. This snippet only outlines the template's OSPF configuration for a JunOS device and how it is rendered into the actual JunOS configuration based on the input data model above.
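As a hedged illustration of what such a template can look like, the sketch below loops over a hypothetical interfaces list (the variable names are assumptions, not the article's) and emits JunOS-style OSPF configuration:

```jinja
protocols {
    ospf {
        area 0.0.0.0 {
{% for intf in interfaces %}
            interface {{ intf.name }} {
                interface-type p2p;
                metric {{ intf.ospf_cost }};
            }
{% endfor %}
        }
    }
}
```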

Similarly, below is the IOS-XR JINJA2 template and the resulting router configuration.
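For comparison, an equivalent IOS-XR-flavoured sketch, again with hypothetical variable names, might be:

```jinja
router ospf 1
 router-id {{ router_id }}
 area 0
{% for intf in interfaces %}
  interface {{ intf.name }}
   network point-to-point
   cost {{ intf.ospf_cost }}
{% endfor %}
```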


Below is the Ansible playbook (core-deploy.yml) that takes both the per-node YAML document (nodes.yml), which holds all the variables for each node, and the JINJA2 template (core.j2), and generates the resulting configuration files.
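As a sketch of how such a playbook can be structured (the task names and some module options are assumptions; the file names and tags follow the article):

```yaml
---
# Play 1: generate the per-node data model (nodes.yml) from the core model
- name: Generate per-node data model
  hosts: localhost
  connection: local
  gather_facts: no
  tags: model
  vars_files:
    - core/core-model.yml          # the core (network-wide) data model
  tasks:
    - name: Render the per-node variables from the core data model
      template:
        src: core/core-to-nodes.j2
        dest: nodes.yml

# Play 2: render the per-node router configuration
- name: Generate per-node configuration
  hosts: all
  connection: local
  gather_facts: no
  tags: template
  vars_files:
    - nodes.yml                    # per-node variables generated by play 1
  tasks:
    - name: Create the directory that holds the rendered configurations
      file:
        path: core_config
        state: directory
      run_once: yes

    - name: Render the per-node configuration from the vendor template
      template:
        src: "{{ dev_os }}/core.j2"            # junos/core.j2 or iosxr/core.j2
        dest: "core_config/{{ inventory_hostname }}-config.txt"
```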


The Ansible playbook is simply a YAML-formatted document containing two plays. First, let's focus on the second play and break down each line:

  • name: the name of the play.
  • gather_facts: no: by default, Ansible tries to gather facts about any managed node so that these facts can be used during play execution; however, this works for Linux machines and is not applicable to network nodes.
  • connection: local: as described before, Ansible by default SSHes to the remote nodes, runs Python code there, and reports the output. When we specify connection: local, we instruct Ansible to run the Python code locally on the Ansible server.
  • hosts: all: specifies which hosts to execute the tasks on; in our case, we execute this play on all the hosts in the inventory file.
  • This play includes two tasks:
    • Creating the directory that stores the router configurations (using the file module); this directory is called core_config.
    • Creating the per-node configuration using the supplied JINJA2 template. We choose which template to use (JunOS or IOS-XR) based on the dev_os variable that was set in the inventory file.

Below is the organization of the files in the directory that hosts the Ansible play-book.

├── core
│   ├── core-model.yml
│   └── core-to-nodes.j2
├── core_config
│   ├── vSRX1-config.txt
│   ├── vSRX2-config.txt
│   ├── vSRX3-config.txt
│   └── XR4-config.txt
├── core-deploy.yml
├── diff
│   ├── vSRX1-diff.txt
│   ├── vSRX2-diff.txt
│   ├── vSRX3-diff.txt
│   └── XR4-diff.txt
├── hosts
├── iosxr
│   └── core.j2
├── junos
│   └── core.j2
└── nodes.yml



The first play in this playbook generates the per-node data model from the core data model and stores it in a YAML document called nodes.yml. We import all these variables in the second play to generate the per-node configuration.

We execute the below command to generate the per-node YAML data model:

ansible-playbook -i hosts core-deploy.yml --tags model

We execute the below command to run the playbook and generate the required per-node router configuration:

ansible-playbook -i hosts core-deploy.yml --tags template

After this play is executed, the core_config directory is populated with all the router configurations. Once we have the per-node configuration, we can move forward with deploying it using NAPALM.


Deploying the configuration


The last part is to push the generated router configuration to each node. This is done using the napalm_install_config module, which can push configuration to different platforms using an input file containing all the configuration to be injected into the managed node. The playbook to push the configuration is shown below.
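A sketch of such a play is below; the module options follow the napalm-ansible module's documented parameters, while the task names and the exact commit expression are assumptions:

```yaml
- name: Deploy the configuration with NAPALM
  hosts: all
  connection: local
  gather_facts: no
  tags: deploy
  tasks:
    - name: Create the directory that holds the configuration diffs
      file:
        path: diff
        state: directory
      run_once: yes

    - name: Push the rendered configuration to the node
      napalm_install_config:
        hostname: "{{ ansible_host }}"
        username: "{{ ansible_user }}"
        password: "{{ ansible_ssh_pass }}"
        dev_os: "{{ dev_os }}"
        config_file: "core_config/{{ inventory_hostname }}-config.txt"
        replace_config: false                  # merge, do not replace
        get_diffs: true
        diff_file: "diff/{{ inventory_hostname }}-diff.txt"
        commit_changes: "{{ commit == '1' }}"  # commit only when -e "commit=1"
```

Note that extra-vars passed on the command line arrive as strings, which is why the sketch compares commit against '1'.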


This play can perform two actions, based on a supplied parameter called commit:

  • If the commit variable is 0, we push the configuration without committing it and build a configuration diff that is stored in the directory called diff.
  • If the commit variable is 1, we push and commit the configuration, also storing the configuration diff in the above-mentioned directory.

The below command pushes the configuration without committing it:

ansible-playbook -i hosts core-deploy.yml --tags deploy -e "commit=0"

The below command pushes and commits the configuration:

ansible-playbook -i hosts core-deploy.yml --tags deploy -e "commit=1"

After executing the above commands, the configuration is pushed by Ansible to each router. In the next article, we will outline how to verify the configuration that was deployed and validate the current state of the network.


Below is the link to the complete playbook on GitHub.