F5 Distributed Cloud - Customer Edge Site - Deployment & Routing Options
F5 Distributed Cloud Customer Edge (CE) software deployment models for scale and routing for enterprises deploying multi-cloud infrastructure. Today's service delivery environments span multiple clouds in hybrid arrangements. How your multi-cloud solution attaches to your existing on-prem and cloud networks can be the difference between a successful overlay fabric and one that leaves you wanting more out of your solution. Learn your options with F5 Distributed Cloud Customer Edge software.

Kubernetes architecture options with F5 Distributed Cloud Services
Summary

F5 Distributed Cloud Services (F5 XC) can both integrate with your existing Kubernetes (K8s) clusters and/or host a K8s workload itself. Within these distinctions, we have multiple architecture options. This article explores four major architectures in ascending order of sophistication and advantages.

Architecture #1: External Load Balancer (Secure K8s Gateway)
Architecture #2: CE as a pod (K8s site)
Architecture #3: Managed Namespace (vK8s)
Architecture #4: Managed K8s (mK8s)

Kubernetes Architecture Options

As K8s continues to grow, options for how we run K8s and integrate with existing K8s platforms continue to grow. F5 XC can both integrate with your existing K8s clusters and/or run a managed K8s platform itself. Multiple architectures exist within these offerings too, so I was thoroughly confused when I first heard about these possibilities.

A colleague recently laid it out for me in a conversation: "Michael, listen up: XC can either integrate with your K8s platform, run inside your K8s platform, host virtual K8s (Namespace-aaS), or run a K8s platform in your environment."

I replied, "That's great. Now I have a mental model for differentiating between architecture options."

This article will overview these architectures and provide 101-level context: when, how, and why would you implement these options?

Side note 1: F5 XC concepts and terms

F5 XC is a global platform that can provide networking and app delivery services, as well as compute (K8s workloads). We call each of our global PoPs a Regional Edge (RE). REs are highly meshed to form the backbone of the global platform. They connect your sites, they can expose your services to the Internet, and they can run workloads. This platform is extensible into your data center by running one or more XC Nodes in your network, also called a Customer Edge (CE). A CE is a compute node in your network that registers to our global control plane and is then managed by the customer as SaaS. The registration of one or more CEs creates a customer site in F5 XC. A CE can run on a hypervisor (VMware/KVM/etc.), a hyperscaler (AWS, Azure, GCP, etc.), bare metal, or even as a K8s pod, and can be deployed in HA clusters.

XC Mesh functionality provides connectivity between sites, security services, and observability. Optionally, XC App Stack functionality allows a large and arbitrary number of managed clusters to be logically grouped into a virtual site with a single K8s mgmt interface. So where Mesh services provide the networking, App Stack services provide the Kubernetes compute mgmt. Our first two architectures require Mesh services only, and our last two require App Stack.

Side note 2: Service-to-service communication

I'm often asked how to allow services between clusters to communicate with each other. This is possible and easy with XC. Each site can publish services to every other site, including K8s sites. This means that any K8s service can be reachable from other sites you choose. And this can be true in any of the architectures below, although more granular controls are possible with the more sophisticated architectures. I'll explore this common question more in a separate article.

Architecture 1: External Load Balancer (Secure K8s Gateway)

In a Secure Kubernetes Gateway architecture, you have integration with your existing K8s platform, using the XC node as the external load balancer for your K8s cluster. In this scenario, you create a ServiceAccount and kubeconfig file to configure XC.
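Since the article doesn't reproduce the manifest itself, here is a minimal sketch of what that ServiceAccount and its read-only RBAC might look like. All names here are arbitrary, and the exact list of resources XC's discovery needs to watch should be confirmed against the F5 XC service discovery documentation.

```yaml
# Illustrative only: names are arbitrary; verify the resource list
# against the F5 XC service discovery docs before using.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: xc-discovery
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: xc-discovery-read
rules:
  - apiGroups: [""]
    resources: ["services", "endpoints", "pods", "nodes"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: xc-discovery-read
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: xc-discovery-read
subjects:
  - kind: ServiceAccount
    name: xc-discovery
    namespace: kube-system
```

A token for that ServiceAccount (for example, from `kubectl create token xc-discovery -n kube-system` on K8s 1.24+) is then embedded in the kubeconfig you upload to the XC Console as the discovery credential.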
The XC node then performs service discovery against your K8s API server. I've covered this process in a previous article, but the advantage is that you can integrate with existing K8s platforms. This allows exposing both NodePort and ClusterIP services via the XC node. XC is not hosting any workloads in this architecture, but it is exposing your services to your local network, remote sites, or the Internet. In the diagram above, I show a web application being accessed from a remote site (and/or the Internet) where the origin pool is a NodePort service discovered in a K8s cluster.

Architecture 2: Run a site within a K8s cluster (K8s site type)

Creating a K8s site is easy - just deploy a single manifest found here. This file deploys multiple resources in your cluster, and together these resources work to provide the services of a CE and create a customer site. I've heard this referred to as "running a CE inside of K8s" or "running your CE as a pod". However, when I say "CE node" I'm usually referring to a discrete compute node like a VM or piece of hardware; this architecture is actually a group of pods and related resources that run within K8s to create an XC customer site.

With XC running inside your existing cluster, you can expose services within the cluster by DNS name because the site will resolve these from within the cluster. Your service can then be exposed anywhere by the F5 XC platform. This is similar to Architecture 1 above, but with this model, your site is simply a group of pods within K8s. An advantage here is the ability to expose services of other types (e.g. ClusterIP). A site deployed into a K8s cluster will only support Mesh functionality and does not support App Stack functionality (i.e., you cannot run a cluster within your cluster). In this architecture, XC acts as a K8s ingress controller with built-in application security. It also enables Mesh features, such as publishing other sites' services on this site, and publishing this site's discovered services on other sites.

Architecture 3: vK8s (Namespace-as-a-Service)

If the services you use include App Stack capabilities, then architectures #3 and #4 are possible for you. In these scenarios, our XC nodes actually run your K8s workloads. We are no longer integrating XC with your existing K8s platform. XC is the platform.

A simple way to run K8s workloads is to use a virtual K8s (vK8s) architecture. This could be referred to as a "managed Namespace" because by creating a vK8s object in XC you get a single namespace in a virtual cluster. Your Namespace can be fully hosted (deployed to REs), run on your VMs (CEs), or both. Your kubeconfig file will allow access to your Namespace via the hosted API server. Via your regular kubectl CLI (or via the web console) you can create/delete/manage K8s resources (Deployments, Services, Secrets, ServiceAccounts, etc.) and view application resource metrics.

This is great if you have workloads that you want to deploy to remote regions where you do not have infrastructure and would prefer to run in F5's REs, or if you have disparate clusters across multiple sites and you'd like to manage multiple K8s clusters via a single centralized, virtual cluster.

Best practice guard rails for vK8s

With a vK8s architecture, you don't have your own cluster, but rather a managed Namespace. So there are some restrictions (for example, you cannot run a container as root, bind to a privileged port, or attach to the host network).
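To make those guard rails concrete, here is a sketch of a Deployment that stays within them: non-root, no privilege escalation, and an unprivileged port. It is ordinary K8s YAML applied with your vK8s kubeconfig; the image and names are just examples.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-vk8s          # example name; lands in your managed Namespace
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-vk8s
  template:
    metadata:
      labels:
        app: hello-vk8s
    spec:
      containers:
        - name: web
          # unprivileged nginx variant: listens on 8080 and runs as non-root
          image: nginxinc/nginx-unprivileged:stable
          ports:
            - containerPort: 8080   # ports below 1024 would violate the guard rails
          securityContext:
            runAsNonRoot: true
            allowPrivilegeEscalation: false
```

If you need to pin where the pods run, vK8s also supports selecting target sites via a workload annotation (the docs describe a ves.io virtual-sites annotation for this); check the current vK8s documentation for the exact key and value format.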
Beyond those pod-level limits, you cannot create CRDs, ClusterRoles, PodSecurityPolicies, or Namespaces, so K8s Operators are not supported. In short, you don't have a managed cluster, but a managed Namespace on a virtual cluster.

Architecture 4: mK8s (Managed K8s)

In a managed K8s (mK8s, also known as physical K8s or pK8s) deployment, we have an enterprise-level K8s distribution that is run at your site. This means you can use XC to deploy/manage/upgrade the K8s infrastructure, but you manage the Kubernetes resources. The benefits include what is typical for 3rd-party K8s mgmt solutions, but also some key differentiators:

multi-cloud, with automation for Azure, AWS, and GCP environments
consumed by you as SaaS
enterprise-level traffic control
natively allows a large and arbitrary number of managed clusters to be logically managed with a single K8s mgmt interface

You can enable kubectl access against your local cluster and disable the hosted API server, so your kubeconfig file can point to a global URL or a local endpoint on-prem. Another benefit of mK8s is that you are running a full K8s cluster at your site, not just a Namespace in a virtual cluster. The restrictions that apply to vK8s (see above) do not apply to mK8s, so you could run privileged pods if required, use Operators that make use of ClusterRoles and CRDs, and perform other tasks that require cluster-wide access.

Traffic management controls with mK8s

Because your workloads run in a cluster managed by XC, we can apply more sophisticated and native policies to K8s traffic than we can in the non-managed clusters of the earlier architectures:

Service isolation can be enforced within the cluster, so that pods in a given namespace cannot communicate with services outside of that namespace, by default.
More service-to-service controls exist so that you can decide which services can reach other services with more granularity.
Egress control can be natively enforced for outbound traffic from the cluster, by namespace, labels, IP ranges, or other methods. E.g.: Svc A can reach myapi.example.com but no other Internet service. (A sketch of such a policy follows this list.)
WAF policies, bot defense, L3/4 policies, etc.: all of these policies that you have typically applied with network firewalls, WAFs, and so on can be applied natively within the platform.

This architecture took me a long time to understand, and longer to fully appreciate. But once you have run your workloads natively on a managed K8s platform that is connected to a global backbone and capable of performing network and application delivery within the platform, the security and traffic mgmt benefits become very compelling.
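As an illustration of the egress-control bullet above, a label-driven policy conceptually looks like the sketch below. The field names are approximations of the XC object model (metadata plus spec), not the exact Enhanced Firewall Policy schema, so treat it as pseudocode in YAML form.

```yaml
# Conceptual sketch only -- field names approximate the XC object
# model and are NOT the exact schema.
metadata:
  name: svc-a-egress
  namespace: system
spec:
  rules:
    - name: allow-myapi
      action: allow
      source:
        label_selector: "app == svc-a"   # select the workload by label, not IP
      destination:
        fqdn: myapi.example.com
    - name: deny-other-internet
      action: deny
      source:
        label_selector: "app == svc-a"
      destination:
        any: true                        # everything else from Svc A is dropped
```

The design point is that intent is expressed against labels and FQDNs, so the policy keeps working as pods churn and IPs change.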
Conclusion

As K8s continues to expand, management solutions for your clusters make it possible to secure your K8s services, whether they are managed by XC or exist in disparate clusters. With F5 XC as a global platform consumed as a service, not a discrete installation managed by you, the available architectures here are unique and therefore can accommodate the diverse (and changing!) ways we see K8s run today.

Related Articles

Securely connecting Kubernetes Microservices with F5 Distributed Cloud
Multi-cluster Multi-cloud Networking for K8s with F5 Distributed Cloud - Architecture Pattern
Multiple Kubernetes Clusters and Path-Based Routing with F5 Distributed Cloud

A complete Multi-Cloud Networking walkthrough with F5 Distributed Cloud

F5 Distributed Cloud – Multi-Cloud Networking

F5 Distributed Cloud (F5 XC) provides a Software-as-a-Service based platform to connect, deliver, secure, and operate your networks and applications across any environment. This walkthrough contains two sections. The first section uses F5 Distributed Cloud Network Connect to network across cloud locations and providers with simplified provisioning and end-to-end security. The second section uses F5 Distributed Cloud App Connect and shows how to securely connect distributed workloads across cloud and edge locations with integrated app security.

Distributed Cloud Network Connect

Network Connect helps customers establish a multi-cloud networking fabric with end-to-end cloud orchestration, a gateway that implements L3-L7 functions to enforce network connectivity and security, and a unified policy with central visibility for collaboration across NetOps & SecOps.

1. Deploy F5 XC Customer Edge Site(s)

Step 1: Establish a multi-cloud networking fabric by deploying F5 XC Customer Edge (CE) sites (cloud, edge, on-prem).

➡️ See the following article and connected video to learn how to use the Distributed Cloud Console to deploy a CE in AWS and in Azure, and then how to route traffic between each of the sites.
Using F5 Distributed Cloud Network Connect to transit, route, & secure private cloud environments

➡️ F5 XC can orchestrate private connectivity, including AWS PrivateLink, Azure CloudLink, and many other private transport providers. The following article covers this capability in greater detail.
Using F5 Distributed Cloud private connectivity orchestration for secure multi-cloud infrastructure

Step 2: Customers onboard required VPCs/VNets to the F5 XC CE sites to participate in the multi-cloud fabric. F5 XC then orchestrates cloud networking constructs to attract traffic from these VPCs (termed spokes) and then enforce L3-L7 network services. Cloud orchestration includes tasks such as creating an AWS TGW, updating route tables, setting up Azure VNet peering, and configuring AWS Direct Connect or Azure ExpressRoute and related resources to establish private connectivity, among many more.

➡️ See the following series of articles to learn how to use the Infrastructure as Code utility Terraform to deploy and connect Distributed Cloud CEs in AWS, Azure, and Google Cloud.
Overview & AWS Deployment with F5 Distributed Cloud Multi-Cloud Networking
AWS to Azure via Layer 3 & Global Network with F5 Distributed Cloud Multi-Cloud Networking
Demo Guide: A step-by-step walkthrough using Terraform with Distributed Cloud Network Connect in AWS

MCN 1: Deploy a F5 XC CE Site
MCN 2: Cookie cutter architecture - fully orchestrated: attach spoke VPCs/VNets seamlessly.
MCN 3: Sites deployed across the globe to establish a multi-cloud networking fabric.

2. Configure Network Segments in Distributed Cloud

Step 1: Configure Network Segments. These Network Segments will provide an end-to-end, globally isolated network.

MCN 4: Configure a global Network Segment

Step 2: Associate F5 XC CE sites (incl. VLANs/interfaces for on-prem/edge sites) and onboarded VPCs/VNets to these network segments to create an isolated network within the multi-cloud networking fabric.

➡️ Steps 4, 6, and 10+ in the following article show how to connect the Distributed Cloud Global Network and use it to route traffic between different CE sites.
Using F5 Distributed Cloud Network Connect to transit, route, & secure private cloud environments
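Segments are configured in the Console, but conceptually the object is small. The sketch below shows the rough shape, metadata plus an isolated address block, with illustrative field names rather than the exact XC schema.

```yaml
# Rough shape of a global Network Segment object; illustrative only,
# not the exact XC schema.
metadata:
  name: prod-segment
  namespace: system
spec:
  # an isolated address block reserved for this end-to-end segment (example value)
  segment_cidr: 100.64.8.0/21
# Sites, VLANs/interfaces, and onboarded VPCs/VNets are then associated
# to the segment (Step 2 above) to join this isolated network.
```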
3. Define Security Policies

Step 1: Define security policies such as forward proxy policies, network security policies, and traffic policers for your entire multi-cloud networking fabric, using the power of labels to easily express intent without complexities such as IP addresses.

MCN 5: Enhanced Firewall Policy with the power of labels

4. Integrate with 3rd Party NFV services such as Palo Alto Networks Firewall

Step 1: Seamlessly provision NFV services, such as BIG-IP AWAF or Palo Alto Networks Firewall, into any F5 XC CE site.

MCN 6: Orchestrate 3rd party firewalls like Palo Alto

Step 2: Use the power of labels to easily express the intent to steer traffic to these 3rd party NFV appliances.

MCN 7: Seamlessly steer traffic towards 3rd party NFV services such as the PAN firewall

➡️ Learn how to deploy a Palo Alto firewall using Distributed Cloud and a Palo Alto Panorama server, and then redirect traffic to the firewall using Enhanced Firewall Policies.
Easily Deploy Your Palo Alto NGFW with F5 Distributed Cloud Services

5. Monitor & Troubleshoot your Network

NetOps and SecOps can collaborate using a single platform to monitor & troubleshoot networking issues across the multi-cloud fabric.

MCN 8: Powerful monitoring dashboards & troubleshooting tools for your entire secure multi-cloud network fabric.

Distributed Cloud App Connect

App Connect helps customers simply deliver applications across their multi-cloud networking fabric, including the internet, without worrying about the underlying networking, via a distributed proxy architecture with full self-service capability and application isolation via namespaces.

1. Establish a Secure Multi-Cloud Network Fabric

Utilize Multi-Cloud Network Connect to deploy F5 XC CE sites in the environments that host your applications.

2. Discover Any App Running Anywhere

Step 1: Simply discover all apps running across your environments by configuring service discoveries. Use DNS-based service discovery to discover legacy apps and K8s/Consul-based service discovery to discover modern apps.

MCN 9: Discover apps in any environment - sample showing apps discovered in a K8s cluster.

3. Deliver Any App Anywhere, incl. the Public Internet

Step 1: Configure a Load Balancer which will connect apps (Origins) discovered in any environment and then deliver them (Advertise) to any environment.

MCN 10: Leverage the distributed proxy architecture to connect an app running in Azure to AWS - without configuring ANY networking.

Step 2: Apps can be delivered (Advertised) directly to the internet using F5 XC's performant anycast global backbone, with DNS delegation & TLS cert management, by simply selecting VIP advertisement as 'Internet'.

MCN 11: Live traffic graph showing seamlessly connecting an app in Azure -> AWS and then delivering the app in AWS to the public internet.

➡️ Navigate each step of the process, from deploying CEs to using App Connect to connect app services locally and advertise the frontend to the Internet. The following collection of articles uses the Distributed Cloud Console to facilitate the deployment and demonstrates how to automate the process using the Infrastructure as Code utility Terraform to orchestrate everything.
Use F5 Distributed Cloud to Connect Apps Running in Multiple Clusters and Sites
Azure & Layer 7 Networking with F5 Distributed Cloud Multi-Cloud Networking
Demo Guide: Using Terraform to connect backend services via Distributed Cloud App Connect in Azure
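Tying steps 2 and 3 together, the sketch below shows roughly what a discovered origin plus an Internet-advertised HTTP Load Balancer look like as objects. The field names are illustrative approximations of the XC object model, and the site, service, and domain names are hypothetical.

```yaml
# Illustrative only: approximates the XC object model, not the exact schema.
# Origin pool: a K8s service discovered on a CE site in Azure
metadata:
  name: inventory-pool
  namespace: my-app-ns
spec:
  origin_servers:
    - k8s_service:
        service_name: inventory.default   # <service>.<namespace> from discovery
        site: azure-ce-site               # hypothetical CE site name
  port: 8080
---
# HTTP Load Balancer: fronts the pool and advertises on the global anycast VIP
metadata:
  name: inventory-lb
  namespace: my-app-ns
spec:
  domains: ["inventory.example.com"]
  default_route_pools:
    - pool: inventory-pool
  advertise_on_public_default_vip: {}     # the 'Internet' VIP advertisement option
```

Note that nothing in the load balancer references subnets, peerings, or routes; the distributed proxy architecture hides all of that underneath the two objects.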
4. Secure your Apps

Step 1: Secure apps with industry-leading application security services such as WAF, bot defense, L7 DoS, API security, client-side defense, and many more with a single click.

MCN 12: One click application security for all your applications - anywhere

➡️ The following demo guide shows how to deploy a web app globally and secure it.
Distributed Cloud WAAP + CDN Demo Guide

5. Monitor & Troubleshoot your Apps

SecOps, NetOps, and DevOps can collaborate using a single platform to monitor & troubleshoot application issues across the multi-cloud fabric.

MCN 13: Performance & security dashboards for every application namespace - each namespace contains many load balancers.
MCN 14: Performance & security dashboard for each Load Balancer
MCN 15: Various other security & performance tools to help maintain a healthy, secure, performant multi-cloud application fabric.

Conclusion

Using the Network Connect and App Connect services in Distributed Cloud, it's easy to deploy, connect, and secure apps that run in multiple clouds. The F5 platform automatically handles the connectivity and routing and allows customized access, enabling apps to be deployed globally or privately in just a few clicks.

Additional Resources

Distributed Cloud Network Connect
Distributed Cloud App Connect
Demo Guide: F5 XC MCN

F5 Distributed Cloud - Regional Decryption with Virtual Sites
In this article we discuss how F5 Distributed Cloud can be configured to support regulatory demands for TLS termination of traffic in specific regions around the world. The article provides insight into the F5 Distributed Cloud global backbone and application delivery network (ADN). It goes on to inspect how F5 Distributed Cloud is able to achieve these custom topologies in a multi-tenant architecture while adhering to the "rules of the internet" for route summarization. Read on to learn about the flexibility of F5's SaaS platform, which provides application delivery and security solutions for your applications.

Multi-Cluster, Multi-Cloud Networking for K8S with F5 Distributed Cloud – Architecture Pattern
Applications are the center of attention for most organizations and are among a business's most important assets. Applications have evolved from simple programs into complex, distributed systems with a multitude of integrated components, and that complexity is a real threat to the business from both a security and an operations perspective. Security, clouds, multi-cloud networking, and so on exist because of applications; those technologies would cease to exist without them. F5 Distributed Cloud is designed to address the business's most important asset, the application, and brings F5's vision to reality for our customers: "Secure, Deliver and Optimize every app and API anywhere."

F5 Hybrid Security Architectures (Part 2 - F5's Distributed Cloud WAF and NGINX App Protect WAF)
Here in this example solution, we will be using Terraform to deploy an AWS Elastic Kubernetes Service cluster running the Arcadia Finance test web application, serviced by F5 NGINX Kubernetes Ingress Controller and protected by NGINX App Protect WAF. We will supplement this with F5 Distributed Cloud Web App and API Protection to provide complementary security at the edge. Everything will be tied together using GitHub Actions for CI/CD and Terraform Cloud to maintain state.

Use F5 Distributed Cloud to Connect Apps Running in Multiple Clusters and Sites
Introduction

Modern apps are comprised of many smaller components and can take advantage of today's agile computing landscape. One of the challenges IT Admins and Security Operations face is securely controlling access to all the components of distributed apps while business development grows or changes hands with mergers and acquisitions, or as contracts change. F5 Distributed Cloud (F5 XC) makes it very easy to provide uniform access to distributed apps regardless of where the components live.

Solution Overview

Arcadia Finance is a distributed app with modules that run in multiple Kubernetes clusters and in multiple locations. To expedite development in a key part of the Arcadia Finance distributed app, the business has decided to outsource work on the Refer A Friend module. IT Ops must now relocate the Refer A Friend module to a separate location exclusive to the new contractor, where its team of developers has access to work on it. Because the app is modular, IT has shared a copy of the Refer A Friend container with the contractor, and now that it is up and running in the new site, traffic to the module needs to transition away from the one that had been developed in house to the one now managed by the contractor.

Logical Topology and Distributed App Overview

The Refer A Friend endpoint is called by the Arcadia Finance frontend pod in Kubernetes (K8s) when a user of the service wants to invite a friend to join. The pod does this by making an HTTP request to the location "refer-a-friend.demo.internal/app3/". The endpoint "refer-a-friend.demo.internal" is registered to the K8s cluster with an F5 XC HTTP Load Balancer policy, with its VIP advertised internally to specific sites, including the K8s cluster. F5 XC uses the cluster's K8s API to register services and make them available anywhere within the customer tenant's configured global network.

Three sites are used by the company that owns Arcadia Finance to deliver the distributed app. The core of the app lives in a K8s cluster in Azure, and the administration and monitoring of the app is in the customer's legacy site in AWS. To maintain security, the new contractor only has access to GCP, where they'll continue developing the Refer A Friend module. An F5 XC global virtual network connects all three sites, and all three sites are in a site mesh group to streamline communication between the different app modules.

Steps to deploy

To reach the app externally, an HTTP Load Balancer policy is configured using an origin pool that connects to the K8s "frontend" service, and the origin pool uses a Kubernetes Site in F5 XC to access the frontend service. A second HTTP Load Balancer policy is configured with its origin pool, a static IP that lives in Azure and is accessed via a registered Azure VNET Site. When the Refer A Friend module is needed, a pod in the K8s cluster connects to the Refer A Friend internal VIP advertised by the HTTP Load Balancer policy. This connection is then tunneled by F5 XC to an endpoint where the module runs.

With development of the Refer A Friend module turned over to the contractor, we only need to change the HTTP Load Balancer policy to use an origin pool located in the contractor's Cloud GCP VPC Site. The origin pool for the GCP-located module is nearly identical to the one used in Azure. Now when a user of the Arcadia app goes to refer a friend, the callout the app makes is routed to the new location, where it is managed and run by the new contractor.
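To show how small the cutover really is, here is a before/after sketch of the Refer A Friend origin pool. The field names approximate the XC object model rather than the exact schema, and the IPs and site names are hypothetical; the point is that only the origin pool changes, while the internal VIP the frontend pods call stays the same.

```yaml
# Illustrative sketch, not the exact XC schema.
# Before: module reached via the Azure VNET site
spec:
  origin_servers:
    - private_ip:
        ip: 10.0.2.50                  # hypothetical module address in Azure
        site_locator:
          site: azure-vnet-site
---
# After: same LB and VIP; the origin pool now points at the contractor's GCP site
spec:
  origin_servers:
    - private_ip:
        ip: 172.16.2.50                # hypothetical address in GCP; even an IP
        site_locator:                  # overlap with Azure wouldn't matter, since
          site: gcp-vpc-site           # routing is by site, not by address
```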
Demo

Watch the following video for information about this solution and a walkthrough using the steps above in the F5 Distributed Cloud Console.

Conclusion

Using F5 Distributed Cloud with modern day distributed apps, it's almost too easy to route requests intended for a specific module to a new location, regardless of the provider, the provider-specific requirements, or the IP space the new module runs in. This is the true power of using F5 Distributed Cloud to glue together modern day distributed apps.

What is Multi-Cloud Networking?
What is Multi-Cloud Networking?

Multi-cloud networking (MCN), as a technology, aims to provide easy network connectivity between cloud environments. For the purpose of our definition, we need to imagine our datacenter as a cloud. You can loosely define a cloud environment as 'anywhere you run workloads.' The concept is nebulous… literally. Clouds come in all shapes and sizes, from 'running on Pi' to AWS / GCP / Azure. MCN is to clouds as Internet is to network.

AWS's Direct Connect, Azure ExpressRoute, and GCP's Cloud Interconnect were early forms of MCN, aimed at joining portions of their own clouds together with customer datacenters. Insertion of transport virtual appliances in clouds has become another mechanism for MCN over time; its strength is its flexibility and agility. One other notable MCN concept is the transport provider. Some circuit providers offer 'short-hop' transport to various cloud providers by routing. This option offers significant throughput versus the SDN router but lacks the agility, and it is a popular option for hybrid cloud enterprises. With all of these options, you can make individual connections to each cloud, potentially in a hub-and-spoke fashion or a full mesh.

Challenges With Multi-Cloud Networking

The top-most concern should be scalability, in every way. You need to be concerned about scale in routing, licensing, and metered cloud costs, not to mention the knowledge required to understand all of the nuanced features of each cloud provider, and so many more things. All of this is operational overhead, which can be significant.

Another serious challenge is IP addressing. The sheer volume of it is one thing; anyone who works with modern applications today can tell you that it's hard to even find a workload sometimes, with how massive things can get. DNS is one possible option to assist, but you've got to account for all of the native cloud workloads too, with their different DNS interfaces. Another common challenge is IP overlap. If you're curious what I mean, let's say your employer acquires a piece of software that lives in GCP, but you're already in AWS. You start going down the path of routing when you suddenly notice that both cloud environments are 10.1.x.x/16. This means localized routing all over the place, and we know how much router people love one-offs, am I right?

The next challenge is one I've already hinted at: how many in-depth nerd knobs do you want to learn, and from how many security vendors? You've got to strategize to minimize this sort of potential sprawl and standardize on the vendors that can do the most for you.

Advantages of Multi-Cloud Networking

The greatest advantage is really multi-cloud transit. Understanding so many different and new technologies is a daunting task. With multi-cloud transit, data centers route through the same SDN routers as your cloud application flows, allowing you to see each cloud provider as a metered resource for app consumption. No need to worry about addressing, DNS, or routing for each environment.

Another substantial benefit is the enablement of a shared security model. When you can route between these environments, you can also easily aggregate logs, integrate with SIEMs, and manage automated security policies with ease.

Network fluidity is another substantial benefit. When your COO comes to you and says that you need to integrate a newly acquired network segment, you have no problems. One of the very cool benefits of SDN is the ability to route by software object.
When we think of routing in traditional networks, we want our packet to get to 10.10.10.4 by way of 192.168.3.1, but an SDN router sends our packet to 10.10.10.4 by way of f5xc_gcp_router4. This also means that your app developer can stamp out their app in AWS to send another packet to 10.10.10.4 by way of f5xc_aws_router16, or such. Overlap no longer matters when you route through an SDN core.

Conclusion

Giving your modern application networks the flexibility to grow on demand, to assimilate new application network segments in minutes instead of months... Ultimately, I really believe that MCN, when done right, is like Chuck Mangione said (well, with a flugelhorn): 'Feels So Good.' The designs you can build with it are SO much more scalable and translate everything from physical data centers to clouds in a clean, easy-to-manage fashion.

Understanding Modern Application Architecture - Part 1
This is part 1 of a series. Here are the other parts:
Understanding Modern Application Architecture - Part 2
Understanding Modern Application Architecture - Part 3

Over the past decade, there has been a change taking place in how applications are built. As applications become more expansive in capabilities and more critical to how a business operates (or in many cases, the application is the business itself), a new style of architecture has allowed for increased scalability, portability, resiliency, and agility. To support the goals of a modern application, the surrounding infrastructure has had to evolve as well. Platforms like Kubernetes have played a big role in unlocking the potential of modern applications and are a new paradigm in themselves for how infrastructure is managed and served. To help our community transition the skillset they've built to deal with monolithic applications, we've put together a series of videos to drive home concepts around modern applications. This article highlights some of the details found within the video series.

In these first three videos, we break down the definition of a modern application. One might think that, by name only, a modern application is simply an application that is current. But we're actually speaking in comparison to a monolithic application. Monolithic applications are made up of a single piece, or just a few pieces. They are rigid in how they are deployed and fragile in their dependencies. Modern applications will instead incorporate microservices: where a monolithic application might have all functions built into one broad encompassing service, microservices break down the service into smaller functions that can be worked on separately.

A modern application will also incorporate four main pillars. Scalability ensures that the application can handle the needs of a growing user base, both for surges and for long-term growth. Portability ensures that the application can be transported from its underlying environment while still maintaining all of its functionality and management plane capabilities. Resiliency ensures that failures within the system go unnoticed or pose minimal disruption to users of the application. Agility ensures that the application can accommodate rapid changes, whether to code or to infrastructure.

There are also six design principles of a modern application. Being agnostic gives the application the freedom to run on any platform. Leveraging open source software where it makes sense can often allow you to move quickly with an application, while still being able to adopt commercial versions of that software later when full support is needed. Defining by code allows for more uniformity of configuration and a move away from rigid interfaces that require specialized knowledge. Automated CI/CD processes ensure the quick integration and deployment of code, so that improvements are constantly happening while any failures are minimized and contained. Secure development ensures that application security is integrated into the development process and code is tested thoroughly before being deployed into production. Distributed storage and infrastructure ensures that applications are not bound by any physical limitations and components can be located where they make the most sense.

These videos should help set the foundation for what a modern application is. The next videos in the series will start to define the fundamental technical components for the platforms that bring together a modern application.
Continued in Part 2.

Easily Deploy Your Palo Alto NGFW with F5 Distributed Cloud Services
Introduction

In this article, I will show you how to easily deploy your Palo Alto firewall in a Security Services VPC using F5 Distributed Cloud (XC) Security Service Insertion. Security service insertion from F5 Distributed Cloud Network Connect simplifies the deployment and operation of Palo Alto NGFW security services across hybrid and multi-cloud environments. Deploying security software in the public cloud, and especially in multiple public clouds, is more complicated than deploying it in a private cloud or on-premises, because the virtualized infrastructure is explicitly designed to operate as multiple independent instances, easily leading to instance sprawl and policy skew. SecOps and NetOps teams struggle to install, configure, and maintain security solutions that work consistently.

Key Benefits

Automated deployment and repeatable traffic-steering policies.
Customers can leverage the same security solution they use in their data centers for the cloud, and easily integrate it with native cloud networking constructs.
Gain granular visibility and manage the security posture of applications and network traffic across multiple clouds and networks.

Enhanced Firewall Policy is an intent-based network policy supported on the Distributed Cloud Platform. Just like Network Policy, an Enhanced Firewall Policy can be applied at the site level, and it can use flexible and dynamically abstracted data to make decisions. For example, the tags or labels belonging to a source or destination VPC on a deployed site can be used to allow, deny, or steer traffic. Using the new Enhanced Firewall Policy object, network admins can steer traffic to an external service.

Use Cases

I am listing six different use cases that can easily be configured in the XC Console to enable traffic steering with our newly released Enhanced Firewall Policies. This article will highlight the (1) East-West and (4) North-South scenarios below.

1. Application to Application Traffic through PAN (East-West)
2. Application in a Different Site through PAN (Site-to-Site)
3. Application to the Internet through PAN (Egress Traffic)
4. Ingress Traffic from the Internet to an Application (North-South)
5. Ingress Traffic from F5 Distributed Cloud Regional Edge to Application (North-South)
6. Ingress Traffic from On-Premises to Application (AWS Direct Connect)

In addition, different types of traffic can be individually steered to the PAN firewall, potentially offloading the firewall from having to inspect traffic that can be blocked by Distributed Cloud:

L3 traffic between VPCs
L7 traffic between VPCs
L5 TLS traffic, which can be decrypted on the TGW site to securely send decrypted traffic to the firewall for complete inspection and to offload compute-intensive SSL operations

Prerequisites

The following prerequisites apply:

A Distributed Cloud Services account. If you do not have an account, see Create an Account.
An AWS account. See Required Access Policies for permissions needed to deploy an AWS TGW site.
Resources required per node: minimum 4 vCPUs and 14 GB RAM.
No pre-existing Site Local Outside, Site Local Inside, or Workload subnet association when attaching an existing VPC.
If an Internet Gateway (IGW) is attached to the VPC, at least one route should point to the IGW in any route table of the VPC.
A Palo Alto firewall license.

The steps below are what is required to set up Service Insertion. I will not cover every step, as I will assume most have some experience with VPCs and related cloud concepts.
I will highlight where Distributed Cloud simplifies building this environment and changing traffic policies:

1. Create or use an existing AWS TGW site
2. Attach spoke VPCs
3. Add an External Service
4. Configure Enhanced Firewall Policies

F5 Distributed Cloud Console

Log in, then select Multi-Cloud Network Connect. Navigate to Manage > Site Management > AWS TGW Sites, then click Add AWS TGW Site or select a TGW site that has already been built for your organization. Note: Any time you need additional information, click the Tech Docs link.

On this initial page, you need to supply the metadata Name, Label, and Description. I will cover each additional section in detailed screenshots:

AWS Resources
Associate Spoke VPCs
Site Network and Security
Direct Connect
Software Version
Advanced

AWS Resources

Click Configure under AWS Resources.

AWS Credentials: either select your existing credentials in the XC Console or create and store valid AWS credentials that will be used to configure AWS resources.

Region and Services VPC: select the AWS Region for your TGW site, then either create a new services VPC or select an existing one.

Transit Gateway: select the Transit Gateway; again, this can be a new or an existing gateway.

Site Node Parameters: select the appropriate AWS instance type (t3.xlarge) and click Add Item. Configure your Ingress/Egress Gateway Nodes (inside/outside interfaces): give the site node a name, then select the workload subnet, the subnet of the outside interface, and the subnet for the inside interface. Click Apply.

You are returned to the previous screen. Enter the public SSH key that you will use to access your AWS instances. Worker Nodes and Advertise VIPs keep their default value of Disabled. Click Apply.

Associate Spoke VPCs

Now configure your spoke VPCs. Click Configure, supply the appropriate VPC ID you are connecting along with labels, and click Apply. Continue adding additional VPCs if needed, clicking Apply again as needed.

Site Network and Security

Under Site Network and Security, you will have to select Configure under both areas, but the default settings are all correct. Click Apply.

Direct Connect

Keep the default, Disabled.

Software Version

You can choose the latest software versions or specify a particular version if needed.

Advanced

The only setting here that needs to be configured is the latitude and longitude. Click Save and Exit.

You have now successfully set up all the requirements for a functioning TGW site, which uses Enhanced Firewall Policies with the attached VPCs to steer and secure traffic to your Palo Alto NGFW.
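The same wizard flow can also be captured as configuration (the Terraform guides linked earlier in this compilation do exactly that). As a rough YAML sketch of the object you just built, with illustrative field names and example values only:

```yaml
# Rough shape of the AWS TGW site object; field names are illustrative,
# not the exact XC schema.
metadata:
  name: aws-tgw-services-site
  namespace: system
spec:
  aws_region: us-east-1
  instance_type: t3.xlarge
  new_vpc:
    primary_ipv4: 10.250.0.0/16      # services VPC orchestrated for you
  new_tgw: {}                        # or reference an existing transit gateway
  az_nodes:
    - aws_az_name: us-east-1a
      workload_subnet: 10.250.1.0/24
      outside_subnet: 10.250.2.0/24
      inside_subnet: 10.250.3.0/24
  ssh_key: "ssh-rsa AAAA..."         # public key for instance access
```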
Add an External Service

Navigate to Multi-Cloud Network Connect > Manage > External Services and click Add External Service. Supply a Name, Label, and Description.

External Service Provider (defaults to BIG-IP; a previous article is linked below): select Palo Alto Networks VM on AWS, then select Configure.

Select the AWS instance type for your configuration. Note: Instance types vary by region. More details about AWS instance types are available here, and specific Palo Alto VM requirements here.

Select the AMI Choice. Note: Only Palo Alto AWS bundles 1 and 2 are currently available. Click here for more details.

Configure the public SSH key, then select the AWS Transit Gateway site created in the steps above.

Under AZ Nodes, select Add Item. Give the service node a name, the AWS AZ name, and the subnet for the management interface. Note: Click here for information about AWS Availability Zones; the name choices are unique to your AWS subscription. The subnet and CIDR block for the management interface can be autogenerated by Distributed Cloud, created manually at this step in the process, or taken from an existing subnet. This step determines the IP address that the firewall uses for its lifespan. Click Apply.

You will be returned to the previous screen. If you are integrating Panorama, you would do that here; we are not covering that in this article. Select the PA version (at the time of this article's publishing, only 11.0.0 is available) and click Apply.

Depending on the configuration, you will either enable or disable HTTPS management of the firewall, choose the domain name suffix to complete the URL that will be used to access the firewall, and decide whether the firewall will be available publicly on the Internet or through select locations and networks connected by Distributed Cloud. Click Save and Exit.

Distributed Cloud now deploys the Palo Alto firewall instance(s) and builds the Geneve tunnels.

Configure Enhanced Firewall Policy

This brings us to the final configuration and the most powerful feature of Service Insertion. You can manipulate traffic going to the external service in the six key use case scenarios by making simple changes to the F5 XC Enhanced Firewall Policy and reordering rules. Here are five different policies that were built; let's look at one policy and then see how to change it to manipulate traffic. Note that the Enhanced Firewall Policy only controls what traffic goes to the external service; it doesn't control what happens to the traffic on the external service itself. To see the flexibility provided for building policies, notice the firewall options available to set up and control traffic.

Select Custom Enhanced Firewall Policy Rule Selection and click Configure. In the following screenshots, I will show all the items in the Source Traffic Filter, the Destination Traffic Filter, the Type of Traffic to Match, and the Action. This rule sends all traffic to the external service in one direction; because the firewall is stateful and the connection path is symmetric, a corresponding rule to redirect traffic in the reverse direction is not needed.

Source Traffic Filter: All Sources
Destination Traffic Filter: All Destinations
Types of Traffic to Match: Match All Traffic
Action: Insert an External Service

Here is where the Distributed Cloud magic happens. Select Insert an External Service, and select the Palo Alto External Service you created previously. A final and optional step could be to add keys/labels to further restrict the selection criteria for routing and controlling traffic. For example, if the origin site routes traffic for multiple VPCs, each VPC having its own unique key value, then entering a key here further restricts which VPC the rule applies to, i.e. prod, staging, or dev.
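Expressed as an object rather than Console clicks, the all-traffic rule above looks roughly like the sketch below. As before, the field names approximate the XC object model rather than the exact Enhanced Firewall Policy schema, and the service name refers to the External Service created earlier.

```yaml
# Conceptual sketch of an Enhanced Firewall Policy rule; not the exact schema.
metadata:
  name: steer-all-to-pan
  namespace: system
spec:
  rules:
    - name: all-traffic-to-pan
      source: { any: true }            # All Sources
      destination: { any: true }       # All Destinations
      traffic: match_all               # Match All Traffic
      action:
        insert_external_service: pan-ngfw   # the External Service created above
        # optional: scope with keys/labels to steer only prod/staging/dev VPCs
```

Swapping the source/destination filters or reordering rules is all it takes to move between the six use cases; the firewall itself never has to be re-plumbed.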
Demo

In the following video, I use the Distributed Cloud Console to configure an NFV service, provision an HA pair of Palo Alto VM-Series firewalls, and configure Distributed Cloud to use Panorama to complete the configuration on the firewalls.

Closing

You have now completed all the steps to integrate your Palo Alto firewall into F5 Distributed Cloud Network Connect. This enables you to route traffic through or around your firewall based on the architecture and design of your network. With these simple steps, you have granular control over all your traffic and how you handle it across multiple clouds.

Related Material

F5 Distributed Cloud Platform
F5 Distributed Cloud Network Connect
F5 Distributed Cloud Security Service Insertion With BIG-IP Advanced WAF
Real-World Use Case Simulator
Demo Video