Publishing an API using NGINX Controller


API management is the complex process of governing the design and implementation of APIs. This article introduces a solution for implementing an API management system based on the market-leading NGINX Plus platform.

The NGINX solution contains two main components: NGINX Plus and NGINX Controller. NGINX Plus is the data processing unit that handles the API traffic. NGINX Controller manages NGINX Plus instances and provides a human-consumable interface for handling the API lifecycle.

Managing a single NGINX Plus instance and its configuration is relatively straightforward. However, for managing multiple NGINX Plus instances, a management system is necessary. NGINX Controller allows administrators to centrally configure, monitor, and analyze telemetry from multiple NGINX Plus instances regardless of their location. Instances can be deployed on-premises or in any public cloud infrastructure.

Architecture and Network Topology

NGINX Controller manages multiple NGINX Plus instances that act as API gateways. In the diagram below, data-plane communication flows are shown in blue and control-plane communications are shown in green.

Picture 1. “Controller to NGINX Plus interactions”

The NGINX Plus instances run the Controller agent, which registers each instance with NGINX Controller using an API key issued by the Controller. The key is also used to authenticate control-plane data in transit between the NGINX Plus instance and the Controller. Once an NGINX Plus instance is registered, the Controller fully controls it: the Controller pushes configuration to the instance and monitors its telemetry.

IP connectivity is provided by the networking stack of the underlying operating systems where the NGINX Plus and the NGINX Controller instances run. Those systems need to be able to reach each other over a network.

Note: The following ports need to be open to allow communication between NGINX Plus, NGINX Controller, and the database:

  • DB: port 5432 TCP (incoming to DB from NGINX Controller host)
  • NGINX Controller: 80 TCP (incoming from NGINX Plus instances)
  • NGINX Controller: 443 TCP (incoming from wherever you access the Controller with a browser, for example, an internal network)
  • NGINX Controller: 8443 TCP (incoming from NGINX Plus instances)
  • NGINX Controller: 6443 TCP (incoming requests to the Kubernetes master node; used for the Kubernetes API server)
  • NGINX Controller: 10250 TCP (incoming requests to the Kubernetes worker node; used for the Kubelet API)
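As a pre-flight check, the port requirements above can be verified from each host before installing the agent. The sketch below is illustrative: the host names are placeholders, and `port_open` is a hypothetical helper, not part of any NGINX tooling.

```python
# Hypothetical pre-flight check: verify that the control-plane ports
# listed above accept TCP connections. Host names are placeholders.
import socket

# Ports from the note above, keyed by the host that must accept them.
REQUIRED_PORTS = {
    "controller.example.internal": [80, 443, 8443, 6443, 10250],
    "db.example.internal": [5432],
}

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, unreachable, DNS failure, or timeout
        return False
```

Running `port_open(host, port)` for every entry in `REQUIRED_PORTS` from an NGINX Plus host (and from the Controller host toward the database) quickly surfaces firewall problems before agent registration fails with a less obvious error.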


NGINX Plus uses the underlying operating system’s networking stack to accept and forward data-plane traffic. Running as a daemon on a Linux system, it listens on the IP interfaces and ports (sockets) specified in its configuration. NGINX Plus can reuse a socket to deliver traffic to many different applications that sit behind it. As an example, assume NGINX Plus listens on a single network socket and receives requests addressed to several different applications. NGINX Plus is configured with a virtual server for each of the applications it serves. When a request arrives, NGINX Plus examines the Host header and matches it to the appropriate virtual server. This feature makes it possible to host multiple applications behind a single socket instead of running each on some random port that is not native to most web apps. Thus multiple applications can be served on the same machine through a single socket, rather than allocating a different port for each application.
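The Host-header dispatch described above can be sketched in a few lines. This is a conceptual model only, not NGINX's implementation; the hostnames and handler functions are illustrative.

```python
# Minimal sketch of Host-header virtual-server dispatch: one listening
# socket shared by many applications, selected per request by Host header.

def app_a(path: str) -> str:
    """Stand-in for the first backend application."""
    return f"app-a handled {path}"

def app_b(path: str) -> str:
    """Stand-in for the second backend application."""
    return f"app-b handled {path}"

# One shared "socket" (e.g. *:443) serving several virtual servers,
# keyed by server name. Hostnames here are illustrative.
VIRTUAL_SERVERS = {
    "api.example.com": app_a,
    "www.example.com": app_b,
}

def dispatch(host_header: str, path: str) -> str:
    """Route a request to the virtual server matching its Host header."""
    handler = VIRTUAL_SERVERS.get(host_header.lower())
    if handler is None:
        return "404: no matching virtual server"
    return handler(path)
```

Both applications share one listener; only the Host header decides which one serves the request, which is exactly why a single socket suffices for many apps.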

Publishing an API

Once the NGINX Plus and NGINX Controller instances are deployed and installed on the target systems, they can be configured to handle API traffic.

This article doesn’t contain step-by-step instructions for registering NGINX Plus instances with the Controller. Administrators are welcome to use the official documentation, which is available online: link.

Once the registration process is complete, an administrator can access a list of all registered instances and review graphs created from the telemetry data sent back to the controller.

Picture 2. Controller dashboard lists managed NGINX Plus instances

The system is now ready to define APIs and publish them through selected NGINX Plus instances regardless of their location.

The diagram below visually describes a scenario where a company publishes and maintains both ‘test’ and ‘production’ API deployments.

Picture 3. Deployment layout

As an example API, I use the Httpbin app. It provides a number of API endpoints that generate all kinds of responses depending on the request.

The following steps describe how to publish the 'test' version of the API using NGINX Controller.

1) Create an environment. An environment is a logical container that aggregates all kinds of resources (certificates, gateways, apps, etc.) for a particular deployment. For example, all resources that belong to the testing deployment go to the 'test' environment, and resources for production use go to the 'prod' environment. Such segregation makes configuration less error-prone.

2) Add a certificate to publish the API over a secure channel.

3) Create a gateway. It is similar to the virtual server concept and defines HTTP listener properties.

4) Create an application. It provides a logical abstraction for an application; an application may include multiple components, including APIs.

5) Create an API definition. A logical container for an API.

6) Create an API version. An API version enumerates all endpoints for an API.

7) Create a published API. A published API represents an API version deployed to a particular gateway, forwarding API calls to a backend.
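The seven steps above build a small object hierarchy. The sketch below models that hierarchy as plain dictionaries to show how the pieces reference each other; the field names are illustrative and do not reproduce the actual NGINX Controller API schema.

```python
# Illustrative model of the objects created in steps 1-7 and how they
# reference one another. Field names are made up for clarity; they are
# not the NGINX Controller API schema.

environment = {"name": "test"}  # step 1: logical container per deployment

gateway = {                      # step 3: HTTP listener properties
    "name": "test-gw",
    "environment": environment["name"],
    "listen": {"host": "test.httpbin.internet.lab", "ports": [80, 443]},
    "tls": {"certificate": "test-cert"},  # step 2: certificate reference
}

app = {"name": "httpbin", "environment": environment["name"]}  # step 4

api_definition = {"name": "httpbin-api"}  # step 5: note, no environment key

api_version = {                  # step 6: enumerates the API's endpoints
    "definition": api_definition["name"],
    "version": "v1",
    "endpoints": ["/uuid", "/get", "/status/{code}"],
}

published_api = {                # step 7: version + gateway + backend
    "apiVersion": f'{api_definition["name"]}:{api_version["version"]}',
    "gateway": gateway["name"],
    "app": app["name"],
}
```

Notice that `api_definition` and `api_version` carry no environment reference, while the gateway, app, and published API do; this is the structural reason the same API version can later be published into a different environment.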

Once the API is published, NGINX Controller pushes the configuration to the corresponding NGINX Plus instances.

user@nginx-plus-2$ cat /etc/nginx/nginx.conf | grep -ie "server {" -A 7

    server {
        listen 80;
        listen 443 ssl;
        server_name test.httpbin.internet.lab;
        status_zone test.httpbin.internet.lab;
        set $apimgmt_entry_point 3;
        ssl_certificate /etc/controller-agent/configurator/auxfiles/cert.crt;
        ssl_certificate_key /etc/controller-agent/configurator/auxfiles/cert.key;

Now NGINX Plus is ready to process API calls and forward them to the backend.

user@client-vm$ http -v https://test.httpbin.internet.lab/uuid

GET /uuid HTTP/1.1
Accept: */*
Accept-Encoding: gzip, deflate
Connection: keep-alive
Host: test.httpbin.internet.lab
User-Agent: HTTPie/0.9.2

HTTP/1.1 200 OK
Access-Control-Allow-Credentials: true
Access-Control-Allow-Origin: *
Connection: keep-alive
Content-Length: 53
Content-Type: application/json
Date: Thu, 12 Dec 2019 22:27:59 GMT
Server: nginx/1.17.6
{
    "uuid": "08232fcb-1e41-4433-adc3-2818a971647f"
}

As you may have noticed, the API definition and API version abstractions don't belong to an environment. This means that exactly the same definition and version of an API may be published to any environment. For example, once all tests are complete in the 'test' environment, it is easy to re-publish to production by creating another published API in the 'prod' environment. In this way, NGINX Controller significantly simplifies API lifecycle management.

Published Jan 07, 2020
Version 1.0
