Forum Discussion

PhilJones (Nimbostratus)
Jun 14, 2021

Active/Active Advanced WAF behind Azure LB Best Practice

Hi

Hope someone can help me.

I'm trying to work out the best configuration for our use case: 50+ web applications bound to SSL on an active/active Advanced WAF cluster behind an Azure Load Balancer, built on the single-NIC deployment from F5's supported ARM template (https://github.com/F5Networks/f5-azure-arm-templates/tree/main/supported/autoscale/waf/via-lb).

Should I separate every application into its own Virtual Server, either on a separate port or a separate IP binding?

If IP binding - is it even possible to share Self IPs between both active BIG-IPs in a single-arm configuration behind an ALB (to reduce the admin overhead of creating Virtual Servers twice on both BIG-IPs)?

Or should I bind more internal IPs directly to both BIG-IPs independently and duplicate the Virtual Server config based on that?

Or should I go for a 2- or 3-NIC configuration, and would that allow me to configure shared IPs?

If port binding, is it efficient to create multiple virtual servers on the same IP with different ports?

Should that be an IP binding on multiple ports, or a wildcard destination?

I'm struggling to find a definitive guide for my use case that goes beyond a single Virtual Server setup.

I'm sure I've misunderstood some of these concepts!

thanks in advance


  • On a greenfield deployment, I'd probably use the F5 DNS Load Balancer Cloud Service to load balance between two AdvWAF instances.

    For the choice between single-NIC and n-NIC deployments, I'd suggest a single-NIC deployment and an LTM Traffic Policy that assigns a Security Policy, Pool, and whatever else is needed based on the Host Header value - or, for more granularity, on Host Header and URI. A rough sketch of that is below.
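
    Hedging heavily here since I don't know your naming or TMOS version, but a sketch of that approach in tmsh could look something like the following. Every name in it (the app hostnames, pools, ASM policy names and the 10.0.1.10 self IP) is made up for illustration, and the exact keyword order can differ slightly between versions:

        # Draft, fill and publish a policy that picks the ASM policy and pool per Host header
        tmsh create ltm policy Drafts/host_routing strategy first-match requires add { http }
        tmsh modify ltm policy Drafts/host_routing rules add { \
            app1 { ordinal 1 \
                conditions add { 0 { http-host host values { app1.example.com } } } \
                actions add { 0 { asm request enable policy /Common/asm_policy_app1 } \
                              1 { forward select pool pool_app1 } } } \
            app2 { ordinal 2 \
                conditions add { 0 { http-host host values { app2.example.com } } } \
                actions add { 0 { asm request enable policy /Common/asm_policy_app2 } \
                              1 { forward select pool pool_app2 } } } }
        tmsh publish ltm policy Drafts/host_routing

        # One HTTPS virtual for all apps on the (example) self IP; the websecurity profile
        # is there because the policy carries ASM actions, client-ssl comes via SNI below
        tmsh create ltm virtual vs_https destination 10.0.1.10:443 ip-protocol tcp \
            profiles add { tcp http websecurity } policies add { host_routing }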

    For SSL profiles you should use SNI, so a single virtual can serve all the hostnames (a sketch is below).
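
    A rough example of the SNI side (again, the cert, key and hostname values are placeholders):

        # Per-app client-ssl profiles selected by the SNI hostname the client sends
        tmsh create ltm profile client-ssl clientssl_app1 defaults-from clientssl \
            cert-key-chain add { app1 { cert app1.crt key app1.key } } \
            server-name app1.example.com
        tmsh create ltm profile client-ssl clientssl_app2 defaults-from clientssl \
            cert-key-chain add { app2 { cert app2.crt key app2.key } } \
            server-name app2.example.com
        # Exactly one profile should be the fallback for clients that send no SNI
        tmsh modify ltm profile client-ssl clientssl_app1 sni-default true
        # Attach them all to the single HTTPS virtual from the sketch above
        tmsh modify ltm virtual vs_https profiles add { clientssl_app1 clientssl_app2 }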

    An important question to solve is: how do you keep the config in sync?


    I'd not bother users with learning multiple ports for multiple web apps. That's torture.


    And I strongly recommend against using F5 Cloud Failover Extension. In my opinion there is no good reason to run an expensive instance in Azure or AWS in standby. That stuff is expensive, hence it should be active.

  • It's unfortunate that the mechanism available to get F5 native clustering working in Azure, which relies on API calls, is so slow to fail over. It would make things simpler otherwise.

    For that reason, I am still using a load balancer sandwich approach. This article is useful:

    F5 High Availability - Public Cloud Guidance (DevCentral)


    The port address translation ability of the Azure load balancer is handy. Instead of binding a secondary IP to each of my F5s for each new virtual, I can use a single pair of IPs and differing ports - a rough example is below.
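
    To illustrate what I mean on the BIG-IP side (the 10.0.1.10 self IP, the ports and all object names below are just examples), each app gets its own virtual on the shared self IP with a distinct port, and the Azure LB load-balancing rule for that app translates frontend :443 to the matching backend port:

        # Per-app virtuals on the same (example) self IP, differing only by port;
        # the Azure LB rule maps frontend 443 for each app to these backend ports
        tmsh create ltm virtual vs_app1 destination 10.0.1.10:8443 ip-protocol tcp \
            profiles add { tcp http clientssl_app1 } pool pool_app1
        tmsh create ltm virtual vs_app2 destination 10.0.1.10:8444 ip-protocol tcp \
            profiles add { tcp http clientssl_app2 } pool pool_app2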


    I too would value some best practice input, as the documented approaches seem far too 'workaround' for an enterprise-grade product.


    • PhilJones (Nimbostratus)

      Thanks Jim, I'll take a look through that article. Are you managing SSL termination on the BIG-IPs in the middle of your sandwich? I'm not sure how I would port translate from the load balancer to each separate SSL-profile VS, or whether I just have to do a single SSL VS with multiple profiles.