This is the first part of a series of blog posts that are meant to serve as a Quick Start Guide for getting up and running with streaming real-time telemetry data from Juniper devices and collecting it using an open-source tool called OpenNTI.
In this post, we will discuss some high-level concepts around the Junos Telemetry Interface as well as the steps needed to install OpenNTI. In the next post, we will take a look at how to enable telemetry streaming from Junos devices and visualize the data using a dashboarding tool called Grafana, which is bundled with OpenNTI.
Push vs Pull Models for Network Monitoring
The traditional model for monitoring the health of a network is based on a so-called “pull” model, using SNMP and CLI to periodically poll network elements. These methods have inherent scalability limitations and are resource intensive, particularly when polling a large number of metrics at a very high frequency.
The Junos Telemetry Interface (JTI) flips this around entirely and eliminates the need for polling, by relying instead on a “push” model to asynchronously deliver the telemetry data as a stream to a downstream collector. This approach is much more scalable and supports the monitoring of thousands of objects in a network with granular resolution.
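To make the contrast concrete, here is a rough sketch of the two models from the collector's point of view, using standard tools rather than anything specific to OpenNTI. The hostname, community string and port below are placeholders:

# Pull model: the collector polls the device for a counter over SNMP
snmpget -v2c -c public router1.example.com IF-MIB::ifHCInOctets.1

# Push model: the device streams telemetry on its own schedule; the collector just listens
# (here we simply watch for UDP datagrams arriving on a telemetry port)
tcpdump -ni eth0 udp port 50000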
Collecting telemetry data, however, is not a trivial task as it typically involves three types of functions:
- Collection: Collecting and parsing the telemetry data using an appropriate data collection engine, such as FluentD, Telegraf, Logstash, etc.
- Persistence: Persisting the collected telemetry data in some type of datastore, whether it be a file, a time-series database (like InfluxDB), or even a distributed streaming platform (like Apache Kafka). A minimal example of this step is sketched just after this list.
- Visualization: Displaying the collected telemetry using some type of data visualization or dashboarding tool, such as Grafana or Kibana.
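As a minimal illustration of the persistence step, a single metric can be written to InfluxDB over its HTTP write API with nothing more than curl. The database name, measurement and tags below are placeholders for illustration, not what OpenNTI itself uses:

curl -i -XPOST 'http://localhost:8086/write?db=telemetry' --data-binary 'interface_counters,device=mx1,interface=xe-0/0/0 in_octets=123456'

OpenNTI wires this kind of plumbing together for you, which is the point of the next section.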
OpenNTI For One-Stop Shopping
OpenNTI is an open source project seeded by Juniper Networks. It is a collection of various tools that address all three functions described above, namely collecting, persisting and visualizing time series data from network devices, all contained in one package.
For data visualization and data persistence, OpenNTI comes bundled with Grafana and InfluxDB, respectively. On the data collection side, as shown in Figure 1 below, OpenNTI supports both “push” and “pull” operations on network telemetry data. This blog post focuses primarily on push operations, which come in two flavours, each corresponding to one of the two telemetry formats that Juniper supports:
- Native Streaming: This format uses a proprietary data model defined by Juniper, using Google protocol buffers (gpb) as a means of serializing and structuring telemetry data messages. Data is transported via UDP and is exported close to the source, such as directly from a line card or network processing unit (NPU).
- OpenConfig Streaming: This format utilizes OpenConfig data models and key/value pair based Google protocol buffer messages for streaming telemetry data. Unlike the native format, data is transported via gRPC over HTTP/2 and is exported to the collector centrally from the Routing Engine (RE). (A sample OpenConfig path is shown just below.)
[Figure 1: OpenNTI architecture, showing both push and pull data collection]
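To give a sense of what the OpenConfig flavour carries, each key/value message is keyed by an OpenConfig path. An interface counter, for example, is identified by a path along these lines (the interface name is just an example):

/interfaces/interface[name='xe-0/0/0']/state/counters/in-octets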
Installing OpenNTI
Until recently, OpenNTI only supported Native Streaming, using the FluentD collector. It has since been retrofitted with a special build of the Telegraf collector that supports gRPC, which means the OpenNTI master branch now supports both Native and OpenConfig Streaming.
OpenNTI can be found at the following GitHub repository:
https://github.com/Juniper/open-nti
For the example shown in this blog post, OpenNTI was installed on an Ubuntu 16.04 server, but it can be installed on any Linux or Mac machine. As a first step, a few prerequisite packages need to be installed, namely ‘make‘, ‘docker‘ and ‘docker-compose‘:
apt install make
apt install docker.io
apt install docker-compose
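It is worth confirming the tools installed correctly before going further, for example:

make --version
docker --version
docker-compose --version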
Next, we pull the master branch for OpenNTI from GitHub:
root@ubuntu:~# git clone https://github.com/Juniper/open-nti
Cloning into 'open-nti'...
remote: Counting objects: 2474, done.
remote: Total 2474 (delta 0), reused 0 (delta 0), pack-reused 2473
Receiving objects: 100% (2474/2474), 5.79 MiB | 0 bytes/s, done.
Resolving deltas: 100% (1479/1479), done.
Checking connectivity... done.
Cloning the repository creates an ‘open-nti‘ subdirectory in your current working directory. To kick off the installation process, ‘cd’ into the ‘open-nti‘ subdirectory and enter ‘make start’ (note, this will take a few minutes to complete):
root@ubuntu:~# ls
open-nti
root@ubuntu:~# cd open-nti
root@ubuntu:~/open-nti# make start
Use docker compose file: docker-compose.yml
IMAGE_TAG=latest docker-compose -f docker-compose.yml up -d
Pulling opennti (juniper/open-nti:latest)...
latest: Pulling from juniper/open-nti
6ffe5d2d6a97: Pull complete
f4e00f994fd4: Pull complete
[... CONTENT OMITTED FOR BREVITY ...]
Successfully built 78650f7cfc81
WARNING: Image for service input-internal was built because it did not already exist. To rebuild this image you must use `docker-compose build` or `docker-compose up --build`.
Creating opennti_con
Creating opennti_input_snmp
Creating opennti_input_syslog
Creating opennti_input_jti
Creating opennti_input_oc
Creating opennti_input_internal
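If ‘make start‘ fails or one of the containers exits right away, the container logs are the first place to look. For example, all containers (including stopped ones) can be listed, and the logs of the main OpenNTI container (named opennti_con in the output above) inspected, with:

root@ubuntu:~/open-nti# docker ps -a
root@ubuntu:~/open-nti# docker logs --tail 50 opennti_con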
Verifying Installation
When the installation completes, issue the ‘docker ps‘ command to see what Docker containers are running and what ports have been opened by default.
root@ubuntu:~/open-nti# docker ps
CONTAINER ID   IMAGE                                  PORTS                                                                                                                NAMES
74d4ff78b44e   juniper/open-nti-input-syslog:latest   5140/tcp, 24220/tcp, 24224/tcp, 0.0.0.0:6000->6000/udp                                                               opennti_input_syslog
81d43efac6f4   juniper/open-nti-input-jti:latest      5140/tcp, 24224/tcp, 0.0.0.0:50000->50000/udp, 24284/tcp, 0.0.0.0:50020->50020/udp                                   opennti_input_jti
5d7c7eba6a01   opennti_input-internal                                                                                                                                      opennti_input_internal
f2f2ed7ede84   opennti_input-oc                       0.0.0.0:50051->50051/udp                                                                                             opennti_input_oc
2e0ce61d258c   opennti_input-snmp                     0.0.0.0:162->162/udp                                                                                                 opennti_input_snmp
aceaa028eb2b   juniper/open-nti:latest                0.0.0.0:80->80/tcp, 0.0.0.0:3000->3000/tcp, 0.0.0.0:8083->8083/tcp, 0.0.0.0:8086->8086/tcp, 0.0.0.0:8125->8125/udp   opennti_con
There are a few ports in the output above worth taking note of. Here is a summary of what these ports are used for (a quick way to check them from the shell follows the table):
Port | Description |
---|---|
50000 | The port that the FluentD collector listens on. |
50051 | The port that the Telegraf collector listens on. |
8083 | The port for the InfluxDB web admin UI. |
8086 | The port for the InfluxDB northbound (HTTP) API. |
80 or 3000 | The port that the Grafana web UI listens on. |
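Assuming the default port mappings shown above, a quick sanity check that these ports are actually listening on the host can be done with ss (or netstat):

ss -lntu | grep -E '50000|50051|8083|8086|3000'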
Verify that Grafana is up and running by pointing your browser to one of the following URLs. The login page shown in Figure 2 below should be displayed. Note that the default username/password for Grafana is “admin”/”admin”.
http://<your_server_ip>:80
http://<your_server_ip>:3000
[Figure 2: Grafana login page]
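If you prefer to verify from the command line instead of a browser, fetching the Grafana login page with curl should return an HTTP 200 once the container is up (replace the placeholder with your server's IP):

curl -I http://<your_server_ip>:3000/login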
Verify that InfluxDB is up and running by pointing your browser to the following URL. The InfluxDB web admin UI page should be displayed.
http://<your_server_ip>:8083

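InfluxDB can likewise be checked from the command line. With a stock InfluxDB 1.x install, the ping endpoint returns HTTP 204 when the database is healthy, and the query endpoint can list the databases that have been created (if authentication is enabled in your setup, append u=<user> and p=<password> query parameters):

curl -i http://<your_server_ip>:8086/ping
curl -s 'http://<your_server_ip>:8086/query?q=SHOW+DATABASES'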
In Part 2 of this blog series, we will take a look at the Junos configuration needed to enable telemetry streaming in Native format, as well as how to collect and visualize the data using OpenNTI.