F5 in AWS Part 5 - Cloud-init, Single-NIC, and Auto Scale Out in BIG-IP

Updated for Current Versions and Documentation

The following article covers features and examples in the 12.1 AWS Marketplace release, discussed in the following documentation:

Amazon Web Services: Single NIC BIG-IP VE
Amazon Web Services: Auto Scaling BIG-IP VE

You can find the BIG-IP Hourly and BYOL releases in the Amazon marketplace here.

BIG-IP utility billing images are available, which makes it a great time to talk about some of the functionality.  So far in Chris’s series, we have discussed some of the highly-available deployment footprints of BIG-IP in AWS and how these might be orchestrated. Several of these footprints leverage BIG-IP's Device Service Clustering (DSC) technology for configuration management across devices and also lend themselves to multi-app or multi-tenant configurations in a shared-service model. But what if you want to deploy BIG-IP on a per-app or per-tenant basis, in a horizontally scalable footprint that plays well with the concepts of elasticity and immutability in cloud? Today we have just the option for you. Before highlighting these scalable deployment models in AWS, we need to cover cloud-init and single-NIC configurations: two important additions to BIG-IP that enable an Auto Scaling topology.


Elasticity is obviously one of the biggest promises/benefits of cloud. By leveraging cloud, we are essentially tapping into the "unlimited" (at least relative to our own datacenters) resources that large cloud providers have built. In practice, this means adopting new methodologies and approaches to truly deliver it.


In the traditional datacenter operational model, everything was "actively" managed. Physical infrastructure still tends to lend itself to active management, but even virtualized services and the applications running on top of the infrastructure were actively managed: servers were patched, code was upgraded live and in place, etc. However, achieving true elasticity, where things spin up or down and are more ephemeral in nature, required a new approach. Instead of trying to patch or upgrade individual instances, the approach was to treat them as disposable, which meant focusing more on the build process itself.

See, for example, Netflix's famous "Building with Legos" approach.

Yes, the concept of golden images/snapshots has existed since the early days of virtualization, but cloud, with its self-service, automation, and auto scale functionality, forced this to a new level.  Operations focus shifted toward a consistent and repeatable "build" or "packaging" effort, with the final goal of creating instances that didn't need to be touched, logged into, etc.

In the specific context of AWS Auto Scale groups, that means modifying the Auto Scale group's "launch config". Creating the new instances then involves either referencing an entirely new image ID or modifying the cloud-init configuration.


Cloud-init: What is it?

First, let’s talk about cloud-init as it is used with most Linux distributions.  Most of you who are evaluating or operating in the cloud have heard of it. For those who haven’t, cloud-init is an industry standard for bootstrapping machines at startup.  It provides a simple domain-specific language for common infrastructure provisioning tasks. You can read the official docs here.  For the average Linux or systems engineer, cloud-init is used to perform tasks such as installing a custom package, updating yum repositories, or installing certificates to perform final customizations on a "base" or “golden” image.  For example, the Security team might create an approved, hardened base image, and various Dev teams would use cloud-init to customize the image so it booted up with an ‘identity’, if you will: an application server with an Apache webserver running, or a database server with MySQL provisioned.

Let’s start with the most basic "Hello World" use of cloud-init: passing in User Data (in this case a simple bash script). If launching an instance via the AWS Console, on the Configure Instance page, navigate down to the “Advanced Details” section:

Figure 1: User Data input field  - bash
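Such a User Data script might look like the following (a minimal sketch; the marker-file path is hypothetical):

```shell
#!/bin/bash
# "Hello World" User Data: cloud-init runs this once, as root, on first boot.
# Writes a marker file you can check after launch to confirm it ran.
echo "Hello World from user-data at $(date)" > /tmp/hello-userdata.log
```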

However, User Data is limited to 16KB, and one of the real powers of cloud-init comes from extending functionality past this boundary and providing a standardized way to provision or, ahem, "initialize" instances. Instead of using unwieldy bash scripts that probed whether an instance was Ubuntu or Red Hat and used this or that OS-specific method (e.g., apt-get vs. rpm) to install a package, configure users, set DNS settings, mount a drive, etc., you could pass a YAML file starting with #cloud-config that did a lot of this heavy lifting for you.

Figure 2: User Data input field - cloud-config
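A minimal #cloud-config illustrating the kind of cross-distro heavy lifting described above might look like this (a sketch; the package and file contents are hypothetical, but the `packages`, `write_files`, and `runcmd` modules are standard cloud-init directives):

```
#cloud-config
package_update: true
packages:
  - nginx
write_files:
  - path: /etc/motd
    content: |
      Provisioned by cloud-init
runcmd:
  - systemctl enable --now nginx
```

Cloud-init translates `packages` into the right package-manager call (apt-get, yum, etc.) for whatever distribution the image is based on.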

Similar to one of the benefits of Chef, Puppet, Salt, or Ansible, it provides a reliable OS or distribution abstraction, but where those approaches require external orchestration to configure instances, this is orchestrated internally from the very first boot, which is more conducive to the "immutable" workflow.  NOTE: cloud-init also complements and helps bootstrap those tools for more advanced or sophisticated workflows (e.g., installing Chef/Puppet to keep long-running, non-immutable services under operation/management and to prevent configuration drift).

This brings us to another important distinction. Cloud-init originated as a project from Canonical (Ubuntu) and was designed for general-purpose OSs. The BIG-IP's OS (TMOS), however, is a highly customized, hardened OS, so most of the modules don't strictly apply. Much of the BIG-IP's configuration is consumed via its APIs (TMSH, iControl REST, etc.) and stored in its MCPD database.  We can still achieve some of the benefits of cloud-init, but instead we will mostly leverage its simple bash processor.

So when Auto Scaling BIG-IPs, there are a couple of approaches.

1) Creating a custom image as described in the official documentation.
2) Providing a cloud-init configuration
  • This is a little lighter weight approach in that it doesn't require the customization work above.  
3) Using a combination of the two, creating a custom image and leveraging cloud-init.
  • For example, you may create a custom image with ASM provisioned, SSL certs/keys installed, and use cloud-init to configure additional environment specific elements.

Disclaimer: Packaging is an art; just look at the rise of Docker and new operating systems. Generally, the more you bake into the image upfront, the more predictable it will be and the faster it deploys; the less you build in, the more flexible you can be. Things like installing libraries and compiling are usually worth building into the image upfront. However, the BIG-IP is already a hardened image, and installing libraries is neither required nor recommended, so the task is more about addressing the last, lighter-weight configuration steps. Still, depending on your priorities and objectives, installing sensitive keying material, setting credentials, pre-provisioning modules, etc. might make good candidates for investing in custom images.

Using Cloud-init with CloudFormation templates

Remember how, in a previous post, we used CloudFormation templates to set up BIG-IP in a standard way? Because the CloudFormation service by itself only gave us the ability to lay down the EC2/VPC infrastructure, we were still left with the remaining 80% of the work: we needed an external agent or service (in our case, Ansible) to configure the BIG-IP and application services. Now, with cloud-init on the BIG-IP (version 12.0 or later), that remaining 80% can be performed for you.

Using Cloud-init with BIG-IP

As you can imagine, there’s quite a lot you can do with just that one simple bash script shown above. More interestingly, however, we also installed AWS’s CloudFormation helper scripts to extend cloud-init and unlock a larger, more powerful set of AWS functionality.

So when used with CloudFormation, our User Data simply changes to executing the following AWS CloudFormation helper script instead.

"UserData": {
  "Fn::Base64": {
      "Fn::Join": [
        "/opt/aws/apitools/cfn-init-1.4-0.amzn1/bin/cfn-init -v -s ",
        "Ref": "AWS::StackId"
        " -r ",
        " --region ",
        "Ref": "AWS::Region"

This allows us to do things like obtaining variables passed in from the CloudFormation environment, grabbing information from the metadata service, creating or downloading files, and running a particular sequence of commands, so that once BIG-IP has finished booting, our entire application delivery service is up and running.

For more information, this page discusses how meta-data is attached to an instance using CloudFormation templates: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-init.html#aws-resource-init-commands.

Example BYOL and Utility CloudFormation Templates

We’ve posted several examples on github to get you started.  


In just a few short clicks, you can have an entire BIG-IP deployment up and running. The two examples below will launch an entire reference stack complete with VPCs, Subnets, Routing Tables, sample webserver, etc. and show the use of cloud-init to bootstrap a BIG-IP. 

Cloud-init is used to configure interfaces, Self-IPs, database variables, a simple virtual server, and, in the case of the BYOL instance, to license BIG-IP.  Let’s take a closer look at the BIG-IP resource created in one of these templates to see what’s going on:

  "Bigip1Instance ": {
    "Metadata ": {
     "AWS::CloudFormation::Init ": {
      "config ": {
       "files ": {
        "/tmp/firstrun.config ": {
         "content ": {
          "Fn::Join ": [
           " ",
            "#!/bin/bash\n ",
            "HOSTNAME=`curl`\n ",
            "TZ='UTC'\n ",
            "BIGIP_ADMIN_USERNAME=' ",
             "Ref ":  "BigipAdminUsername "
            "'\n ",
            "BIGIP_ADMIN_PASSWORD=' ",
             "Ref ":  "BigipAdminPassword "
            "'\n ",
            "MANAGEMENT_GUI_PORT=' ",
             "Ref ":  "BigipManagementGuiPort "
            "'\n ",
            "GATEWAY_MAC=`ifconfig eth0 | egrep HWaddr | awk '{print tolower($5)}'`\n ",
            "GATEWAY_CIDR_BLOCK=`curl${GATEWAY_MAC}/subnet-ipv4-cidr-block`\n ",
            "GATEWAY_NET=${GATEWAY_CIDR_BLOCK%/*}\n ",
            "GATEWAY=`echo ${GATEWAY_NET} | awk -F. '{ print $1\ ".\ "$2\ ".\ "$3\ ".\ "$4+1 }'`\n ",
            "VPC_CIDR_BLOCK=`curl${GATEWAY_MAC}/vpc-ipv4-cidr-block`\n ",
            "VPC_NET=${VPC_CIDR_BLOCK%/*}\n ",
            "VPC_PREFIX=${VPC_CIDR_BLOCK#*/}\n ",
            "NAME_SERVER=`echo ${VPC_NET} | awk -F. '{ print $1\ ".\ "$2\ ".\ "$3\ ".\ "$4+2 }'`\n ",
            "POOLMEM=' ",
             "Fn::GetAtt ": [
              "Webserver ",
              "PrivateIp "
            "'\n ",
            "POOLMEMPORT=80\n ",
            "APPNAME='demo-app-1'\n ",
            "VIRTUALSERVERPORT=80\n ",
            "CRT='default.crt'\n ",
            "KEY='default.key'\n "
         "group ":  "root ",
         "mode ":  "000755 ",
         "owner ":  "root "
        "/tmp/firstrun.utils ": {
         "group ":  "root ",
         "mode ":  "000755 ",
         "owner ":  "root ",
         "source ":  "http://cdn.f5.com/product/templates/utils/firstrun.utils "
        "/tmp/firstrun.sh ": {
         "content ": {
          "Fn::Join ": [
           " ",
            "#!/bin/bash\n ",
            ". /tmp/firstrun.config\n ",
            ". /tmp/firstrun.utils\n ",
            "FILE=/tmp/firstrun.log\n ",
            "if [ ! -e $FILE ]\n ",
            " then\n ",
            "     touch $FILE\n ",
            "     nohup $0 0<&- &>/dev/null &\n ",
            "     exit\n ",
            "fi\n ",
            "exec 1<&-\n ",
            "exec 2<&-\n ",
            "exec 1<>$FILE\n ",
            "exec 2>&1\n ",
            "date\n ",
            "checkF5Ready\n ",
            "echo 'starting tmsh config'\n ",
            "tmsh modify sys ntp timezone ${TZ}\n ",
            "tmsh modify sys ntp servers add { 0.pool.ntp.org 1.pool.ntp.org }\n ",
            "tmsh modify sys dns name-servers add { ${NAME_SERVER} }\n ",
            "tmsh modify sys global-settings gui-setup disabled\n ",
            "tmsh modify sys global-settings hostname ${HOSTNAME}\n ",
            "tmsh modify auth user admin password \ "'${BIGIP_ADMIN_PASSWORD}'\ "\n ",
            "tmsh save /sys config\n ",
            "tmsh modify sys httpd ssl-port ${MANAGEMENT_GUI_PORT}\n ",
            "tmsh modify net self-allow defaults add { tcp:${MANAGEMENT_GUI_PORT} }\n ",
            "if [[ \ "${MANAGEMENT_GUI_PORT}\ " != \ "443\ " ]]; then tmsh modify net self-allow defaults delete { tcp:443 }; fi \n ",
            "tmsh mv cm device bigip1 ${HOSTNAME}\n ",
            "tmsh save /sys config\n ",
            "checkStatusnoret\n ",
            "sleep 20 \n ",
            "tmsh save /sys config\n ",
            "tmsh create ltm pool ${APPNAME}-pool members add { ${POOLMEM}:${POOLMEMPORT} } monitor http\n ",
            "tmsh create ltm policy uri-routing-policy controls add { forwarding } requires add { http } strategy first-match legacy\n ",
            "tmsh modify ltm policy uri-routing-policy rules add { service1.example.com { conditions add { 0 { http-uri host values { service1.example.com } } } actions add { 0 { forward select pool ${APPNAME}-pool } } ordinal 1 } }\n ",
            "tmsh modify ltm policy uri-routing-policy rules add { service2.example.com { conditions add { 0 { http-uri host values { service2.example.com } } } actions add { 0 { forward select pool ${APPNAME}-pool } } ordinal 2 } }\n ",
            "tmsh modify ltm policy uri-routing-policy rules add { apiv2 { conditions add { 0 { http-uri path starts-with values { /apiv2 } } } actions add { 0 { forward select pool ${APPNAME}-pool } } ordinal 3 } }\n ",
            "tmsh create ltm virtual /Common/${APPNAME}-${VIRTUALSERVERPORT} { destination${VIRTUALSERVERPORT} mask any ip-protocol tcp pool /Common/${APPNAME}-pool policies replace-all-with { uri-routing-policy { } } profiles replace-all-with { tcp { } http { } }  source source-address-translation { type automap } translate-address enabled translate-port enabled }\n ",
            "tmsh save /sys config\n ",
            "date\n ",
            "# typically want to remove firstrun.config after first boot\n ",
            "# rm /tmp/firstrun.config\n "
         "group ":  "root ",
         "mode ":  "000755 ",
         "owner ":  "root "
       "commands ": {
        "b-configure-Bigip ": {
         "command ":  "/tmp/firstrun.sh\n "
    "Properties ": {
     "ImageId ": {
      "Fn::FindInMap ": [
       "BigipRegionMap ",
        "Ref ":  "AWS::Region "
        "Ref ":  "BigipPerformanceType "
     "InstanceType ": {
      "Ref ":  "BigipInstanceType "
     "KeyName ": {
      "Ref ":  "KeyName "
     "NetworkInterfaces ": [
       "Description ":  "Public or External Interface ",
       "DeviceIndex ":  "0 ",
       "NetworkInterfaceId ": {
        "Ref ":  "Bigip1ExternalInterface "
     "Tags ": [
       "Key ":  "Application ",
       "Value ": {
        "Ref ":  "AWS::StackName "
       "Key ":  "Name ",
       "Value ": {
        "Fn::Join ": [
         " ",
          "BIG-IP:  ",
           "Ref ":  "AWS::StackName "
     "UserData ": {
      "Fn::Base64 ": {
       "Fn::Join ": [
        " ",
         "#!/bin/bash\n ",
         "/opt/aws/apitools/cfn-init-1.4-0.amzn1/bin/cfn-init -v -s  ",
          "Ref ":  "AWS::StackId "
         " -r  ",
         "Bigip1Instance ",
         " --region  ",
          "Ref ":  "AWS::Region "
         "\n "
    "Type ":  "AWS::EC2::Instance "

The above may look like a lot at first, but at a high level we start by creating some files "inline" as well as “sourcing” some files from a remote location.

/tmp/firstrun.config - Here we create a file inline, laying down variables from the CloudFormation stack deployment itself and even from the metadata service (take a look at the “Ref” stanzas). When this file is laid down on the BIG-IP disk itself, those variables will be interpolated and contain the actual values. The idea here is to try to keep config and execution separate.
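As a worked example of the address arithmetic in firstrun.config (the CIDR value here is hypothetical; on a real instance it comes from the metadata service): AWS reserves the first usable address of each subnet for the VPC router, so the default gateway is the subnet's network address plus one, and the VPC DNS resolver sits at the VPC's network address plus two.

```shell
#!/bin/bash
# Example value standing in for what the metadata service would return
GATEWAY_CIDR_BLOCK="10.0.1.0/24"
GATEWAY_NET=${GATEWAY_CIDR_BLOCK%/*}    # strip the prefix length -> 10.0.1.0
# Increment the last octet to get the VPC router / default gateway
GATEWAY=$(echo ${GATEWAY_NET} | awk -F. '{ print $1"."$2"."$3"."$4+1 }')
echo "gateway: ${GATEWAY}"              # -> gateway: 10.0.1.1
```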

/tmp/firstrun.utils – These are just some helper functions for initial provisioning. We use them to determine when the BIG-IP is ready for a particular configuration step (e.g., after licensing or provisioning). Note that instead of creating the file inline like the config file above, we simply “source” or download it from a remote location.

/tmp/firstrun.sh – This file is created inline as well, and it is where it all comes together. The first thing it does is load the config variables from firstrun.config and the helper functions from firstrun.utils. It then creates a separate log file (/tmp/firstrun.log) to capture the output of the script; capturing the output of these various commands helps with debugging runs. Next it runs a function called checkF5Ready (loaded from that helper file) to make sure BIG-IP’s database is up and ready to accept a configuration. The rest may look more familiar, and it is where most of the user customization takes place: we use variables from the config file to configure the BIG-IP using familiar methods like TMSH and iControl REST.  Technically, you could lay down an entire config file (like an SCF) and load it instead; we use tmsh here for simplicity. The possibilities are endless, though.
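The logging setup in firstrun.sh can be sketched in isolation like this (the log path is changed here so the sketch is self-contained; firstrun.sh uses /tmp/firstrun.log):

```shell
#!/bin/bash
# Reopen stdout/stderr on a log file so that every subsequent
# command's output is captured for later debugging.
FILE=/tmp/firstrun-demo.log
exec 1<>$FILE    # reopen stdout read/write on the log (creates it if missing)
exec 2>&1        # send stderr to the same log
date
echo "starting tmsh config"
```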

Disclaimer: the specific implementation above will certainly be optimized and will evolve, but the most important takeaway is that we can now leverage cloud-init and AWS's helper libraries to bootstrap the BIG-IP into a working configuration from the very first boot!

Debugging Cloud-init

What if something goes wrong?  Where do you look for more information? 

The first place you might look is in various cloud-init logs in /var/log (cloud-init.log, cfn-init.log, cfn-wire.log):

Below is example output from one of the CFTs above:

[admin@ip-10-0-0-205:NO LICENSE:Standalone] log # tail -150 cfn-init.log
2016-01-11 10:47:59,353 [DEBUG] CloudFormation client initialized with endpoint https://cloudformation.us-east-1.amazonaws.com
2016-01-11 10:47:59,353 [DEBUG] Describing resource BigipEc2Instance in stack arn:aws:cloudformation:us-east-1:452013943082:stack/as-testing-byol-bigip/07c962d0-b893-11e5-9174-500c217b4a62
2016-01-11 10:47:59,782 [DEBUG] Not setting a reboot trigger as scheduling support is not available
2016-01-11 10:47:59,790 [INFO] Running configSets: default
2016-01-11 10:47:59,791 [INFO] Running configSet default
2016-01-11 10:47:59,791 [INFO] Running config config
2016-01-11 10:47:59,792 [DEBUG] No packages specified
2016-01-11 10:47:59,792 [DEBUG] No groups specified
2016-01-11 10:47:59,792 [DEBUG] No users specified
2016-01-11 10:47:59,792 [DEBUG] Writing content to /tmp/firstrun.config
2016-01-11 10:47:59,792 [DEBUG] No mode specified for /tmp/firstrun.config
2016-01-11 10:47:59,793 [DEBUG] Writing content to /tmp/firstrun.sh
2016-01-11 10:47:59,793 [DEBUG] Setting mode for /tmp/firstrun.sh to 000755
2016-01-11 10:47:59,793 [DEBUG] Setting owner 0 and group 0 for /tmp/firstrun.sh
2016-01-11 10:47:59,793 [DEBUG] Running command b-configure-BigIP
2016-01-11 10:47:59,793 [DEBUG] No test for command b-configure-BigIP
2016-01-11 10:47:59,840 [INFO] Command b-configure-BigIP succeeded
2016-01-11 10:47:59,841 [DEBUG] Command b-configure-BigIP output:  % Total   % Received % Xferd  Average Speed
Time   Time   Time  Current
                                  Dload  Upload  Total  Spent   Left  Speed
0   40   0   40   0   0  74211    0 --:--:-- --:--:-- --:--:-- 40000
2016-01-11 10:47:59,841 [DEBUG] No services specified
2016-01-11 10:47:59,844 [INFO] ConfigSets completed
2016-01-11 10:47:59,851 [DEBUG] Not clearing reboot trigger as scheduling support is not available
[admin@ip-10-0-0-205:NO LICENSE:Standalone] log #

If trying out the example templates above, you can inspect the various files mentioned. 

In addition to checking for their general presence:

  • /tmp/firstrun.config – make sure variables were passed as you expected.
  • /tmp/firstrun.utils – make sure it exists and was downloaded.
  • /tmp/firstrun.log – see if any obvious errors were output.

It may also be worth checking AWS Cloudformation Console to make sure you passed the parameters you were expecting.


Single-NIC BIG-IP

Another important building block introduced with the 12.0 Virtual Editions on AWS and Azure is the ability to run BIG-IP with just a single network interface. Typically, BIG-IPs were deployed in a multi-interface model, with interfaces attached to an out-of-band management network and one or more traffic (or "data-plane") networks. But, as we know, cloud architectures scale through simplicity, especially at the network level. To this day, some clouds can only support instances with a single IP on a single NIC. In AWS’s case, although multiple NICs and multiple IPs are supported, some services like ELB only point to the first IP address of the first NIC. So the Single-NIC configuration makes it not only possible but also dramatically easier to deploy in these various architectures.

How this works:

We can now attach just one interface to the instance, and BIG-IP will start up, recognize this, and use DHCP to configure the necessary settings on that interface.

Under the hood, the following DB keys are set:

admin@(ip-10-0-1-65)(cfg-sync Standalone)(Active)(/Common)(tmos)# list sys db provision.1nic one-line
sys db provision.1nic { value "enable" }
admin@(ip-10-0-1-65)(cfg-sync Standalone)(Active)(/Common)(tmos)# list sys db provision.1nicautoconfig one-line
sys db provision.1nicautoconfig { value "enable" }

provision.1nic – allows both management and data-plane traffic to use the same interface.
provision.1nicautoconfig – uses the address from DHCP to configure a VLAN, Self-IP, and default gateway.
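These keys are normally set for you by the Marketplace image; if you ever need to set them by hand (a sketch, using standard tmsh db syntax):

```
tmsh modify sys db provision.1nic value enable
tmsh modify sys db provision.1nicautoconfig value enable
tmsh save /sys config
```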

Ex. network objects automatically configured

admin@(ip-10-0-1-65)(cfg-sync Standalone)(Active)(/Common)(tmos)# list net vlan
net vlan internal {
    if-index 112
    interfaces {
        1.0 { }
    }
    tag 4094
}
admin@(ip-10-0-1-65)(cfg-sync Standalone)(Active)(/Common)(tmos)# list net self
net self self_1nic {
    allow-service { ... }
    traffic-group traffic-group-local-only
    vlan internal
}
admin@(ip-10-0-1-65)(cfg-sync Standalone)(Active)(/Common)(tmos)# list net route
net route default {
    network default
}

Note: Traffic Management Shell and the Configuration Utility (GUI) are still available on ports 22 and 443 respectively.  If you want to run the management GUI on a higher port (for instance if you don’t have the BIG-IPs behind a Port Address Translation service (like ELB) and want to run an HTTPS virtual on 443), use the following commands:

tmsh modify sys httpd ssl-port 8443
tmsh modify net self-allow defaults add { tcp:8443 }
tmsh modify net self-allow defaults delete { tcp:443 }

WARNING:  Now that management and dataplane run on the same interface, make sure to modify your Security Groups to restrict access to SSH and whatever port you use for the Mgmt GUI port to trusted networks.

UPDATE: On Single-Nic, Device Service Clustering currently only supports Configuration Syncing (Network Failover is restricted for now due to BZ-606032). 

In general, the single-NIC model lends itself better to single-tenant or per-app deployments, where you need advanced services from BIG-IP, like content routing policies, iRules scripting, WAF, etc., but don’t necessarily care to maintain a management subnet in the deployment and are just optimizing or securing a single application. By single tenant we also mean single management domain, as you're typically running everything through a single wildcard virtual vs. giving each tenant its own virtual server (usually with its own IP and configuration) to manage.

However, you can still technically run multiple applications behind this virtual with a policy or iRule, making traffic steering decisions based on L4-L7 content (SNI, hostname headers, URIs, etc.). In addition, if the BIG-IPs are sitting behind a Port Address Translation service, it is also possible to stack virtual services on ports instead.

Ex. one port per virtual service: Virtual Service 1, Virtual Service 2, Virtual Service 3, and so on.

 We’ll let you get creative here….
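For illustration, stacking virtual services on ports in tmsh might look like this (a sketch; the pool and virtual names are hypothetical):

```
tmsh create ltm virtual app1-80 { destination 0.0.0.0:80 mask any ip-protocol tcp pool app1-pool profiles replace-all-with { tcp { } http { } } source-address-translation { type automap } }
tmsh create ltm virtual app2-8080 { destination 0.0.0.0:8080 mask any ip-protocol tcp pool app2-pool profiles replace-all-with { tcp { } http { } } source-address-translation { type automap } }
```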

BIG-IP Auto Scale

Finally, the last component of Auto Scaling BIG-IPs involves building scaling policies via CloudWatch alarms. In addition to the built-in EC2 metrics in CloudWatch, BIG-IP can report its own set of metrics, shown below, which can be used to scale BIG-IPs based on traffic load:

Figure 3: Cloudwatch metrics

This can be configured with the following TMSH commands on any version 12.0 or later build:

tmsh modify sys autoscale-group autoscale-group-id ${BIGIP_ASG_NAME}
tmsh load sys config merge file /usr/share/aws/metrics/aws-cloudwatch-icall-metrics-config

These commands tell BIG-IP to push the above metrics to a custom “Namespace”, on which we can roll up data via standard aggregation functions (min, max, average, sum).  This namespace (based on the Auto Scale group name) will appear as a row in the “Custom metrics” dropdown in the left sidebar of the CloudWatch console (left side of Figure 3).  Once these BIG-IP or EC2 metrics have been populated, CloudWatch alarms can be built, and these alarms are available for use in Auto Scaling policies for the BIG-IP Auto Scale group. (Amazon provides some nice instructions here.)

Auto Scaling Pool Members and Service Discovery

If you are scaling your ADC tier, you are likely also going to be scaling your application as well.  There are two options for discovering pool members in your application's auto scale group.

1) FQDN Nodes

For example, in a typical sandwich deployment, your application members might also be sitting behind an internal ELB so you would simply point your FQDN node at the ELB's DNS. For more information, please see:


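An FQDN-node configuration of this kind might look like the following in tmsh (a sketch; the node, pool, and ELB DNS names are hypothetical, and `autopopulate` keeps the pool in sync as DNS answers change):

```
tmsh create ltm node internal-elb fqdn { name internal-elb-1234.us-east-1.elb.amazonaws.com autopopulate enabled }
tmsh create ltm pool app-pool members add { internal-elb:80 } monitor http
```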
2) BIG-IP's AWS Auto Scale Pool Member Discovery feature (introduced v12.0)

This feature polls the Auto Scale Group via API calls and populates the pool based on its membership. For more information, please see:


Putting it all together

The high-level steps for Auto Scaling BIG-IP include the following:

  1. Optionally* creating an ElasticLoadBalancer which will direct traffic to the BIG-IPs in your Auto Scale group once they become operational.  Otherwise, you will need Global Server Load Balancing (GSLB).
  2. Creating a launch configuration in EC2 (referencing either custom image id and/or using Cloud-init scripts as described above)
  3. Creating an Auto Scale group using this launch configuration
  4. Creating CloudWatch alarms using the EC2 or custom metrics reported by BIG-IP.
  5. Creating scaling policies for your BIG-IP Auto Scale group using the alarms above.  You will want to create both scale up and scale down policies.
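The launch-config/ASG/alarm steps above can be sketched with the AWS CLI as follows. This is illustrative only: every name, AMI ID, subnet, and threshold here is hypothetical, the metric name is a stand-in for whichever BIG-IP custom metric you choose, and the alarm must reference the policy ARN returned by put-scaling-policy.

```
# 2. Launch configuration referencing a BIG-IP image and cloud-init User Data
aws autoscaling create-launch-configuration \
    --launch-configuration-name bigip-lc-v1 \
    --image-id ami-xxxxxxxx --instance-type m4.xlarge \
    --key-name my-key --security-groups sg-xxxxxxxx \
    --user-data file://bigip-cloud-init.sh

# 3. Auto Scale group using that launch configuration (behind an ELB)
aws autoscaling create-auto-scaling-group \
    --auto-scaling-group-name bigip-asg \
    --launch-configuration-name bigip-lc-v1 \
    --min-size 1 --max-size 4 \
    --vpc-zone-identifier "subnet-aaaa,subnet-bbbb" \
    --load-balancer-names bigip-elb

# 5. Scale-up policy (create a matching scale-down policy as well)
aws autoscaling put-scaling-policy \
    --auto-scaling-group-name bigip-asg \
    --policy-name bigip-scale-up \
    --scaling-adjustment 1 --adjustment-type ChangeInCapacity

# 4. Alarm on a BIG-IP custom metric that triggers the policy above
aws cloudwatch put-metric-alarm \
    --alarm-name bigip-high-throughput \
    --namespace bigip-asg --metric-name throughput-in \
    --statistic Average --period 300 --threshold 100000000 \
    --comparison-operator GreaterThanThreshold --evaluation-periods 2 \
    --alarm-actions <scale-up-policy-arn>
```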

Here are some things to keep in mind when Auto Scaling BIG-IP

● BIG-IP must run in a single-interface configuration with a wildcard listener (as we talked about earlier). This is required because we don't know what IP address the BIG-IP will get.
● Auto Scale groups themselves consist of utility instances of BIG-IP.
● The scale-up time for BIG-IP is about 12-20 minutes, depending on what is configured or provisioned.  While this may seem like a long time, creating the right scaling policies (polling intervals, thresholds, unit of scale) makes this a non-issue.
● This deployment model lends itself to the themes of stateless, horizontal scalability and immutability embraced by the cloud. Currently, the config on each device is updated once and only once at device startup. The only way to change the config is through the creation of a new image or a modification to the launch configuration.  Stay tuned for a clustered deployment which is managed in a more traditional operational approach.

If interested in exploring Auto Scale further, we have put together some examples of scaling the BIG-IP tier here:


* The GitHub repository above also provides some examples of incorporating BYOL instances in the deployment, to help you leverage BYOL instances for your static load and Auto Scale (utility) instances for your dynamic load. See the READMEs for more information.

CloudFormation templates with Auto Scaled BIG-IP and Application

Need some ideas on how you might leverage these solutions?  Now that you can deploy a complete solution with a single template, how about building service catalog items for your business units that deploy different BIG-IP services (LB, WAF), or a managed service built on top of AWS that can be easily deployed each time you onboard a new customer?

Updated Jun 06, 2023
Version 3.0
