Enterprise Apps are Not Written for Speed
#fasterapp #ccevent

They're written for readability, for integration, for business function, and for long-term maintenance. When I was first entering IT I had the good (or bad, depending on how you look at it) fortune to be involved in some of the first Internet-facing projects at a global transportation organization. We made mistakes and learned lessons and eventually got down to the business of architecting a framework that would span the entire IT portfolio.

One of the lessons I learned early on was that maintainability always won over performance, especially at the code level. Oh, some basic tenets of optimization in the code could be followed – choosing between while, for, and do..until loops based on performance-related concerns – but for the most part, many of the tricks used to improve performance were verboten, often for no reason other than readability. The introduction of local scope for an if…then…else statement, for example, was required for readability, even though in terms of performance this introduces many unnecessary clock ticks that under load can have a negative impact on overall capacity and response time. Microseconds of delay add up to seconds of delay, after all.

But coding standards in the enterprise lean heavily toward the reality that (1) code lives for a long time and (2) someone other than the original developer will likely be maintaining it. This means readability is paramount to ensuring the long-term success of any development project. Thus performance suffers, and "rewriting the application" is not an option: it's costly, and the changes necessary would likely conflict with the overriding need to ensure long-term maintainability. Even modern web-focused organizations like Twitter and Facebook have run into performance issues rooted in architectural decisions made early in the lifecycle.
Many no doubt recall the often very technical discussions regarding Twitter's design and its interaction with its database as a source of performance woes, with hundreds of experts offering advice and criticism. Applications are not often designed with performance in mind. They are architected and designed to perform specific functions and tasks, usually business-related, and they are developed with long-term maintenance in mind. This leads to the problem of performance, which can rarely be addressed by the developers due to the constraints placed upon them, not least of which may be an active and very vocal user base.

APPLICATION DELIVERY PUTS THE FAST BACK IN APPLICATIONS

This is a core reason the realm of application delivery exists: to compensate for issues within the application that cannot – for whatever reason – be addressed through modification of the application itself. Application acceleration, WAN optimization, and load balancing services combine to form a powerful tier of application delivery services within the data center through which performance-related issues can be addressed. This tier allows load balancing services, for example, to be leveraged as a means to scale out an application, which effectively yields performance gains similar to (and often greater than) simply scaling up to redress inherent performance constraints within the application. Application acceleration techniques improve the delivery of application-related content and objects through caching, compression, transformation, and concatenation. And WAN optimization services address bandwidth constraints that may inhibit delivery of the application, especially for applications heavy on data and content. While developers certainly could modify applications to rearrange content or reduce the size of data being delivered, it is rarely practical or cost-effective to do so.
Similarly, it is not cost-effective or practical to ask developers to modify applications to remove processing bottlenecks if doing so would result in unreadable code. Enterprise applications are not written for speed, but that is exactly what their users demand of them. Both needs must be met, and the introduction of an application delivery tier into the architecture can provide the balance between performance and maintenance by applying acceleration services dynamically. In this way applications need not be modified, yet performance and scale are greatly improved.

I'll be at CloudConnect 2012 and we'll discuss the subject of cloud and performance a whole lot more at the show!

What is server offload and why do I need it?
One of the tasks of an enterprise architect is to design a framework atop which developers can implement and deploy applications consistently and easily. The consistency is important for internal business continuity and reuse; common objects, operations, and processes can be reused across applications to make development and integration with other applications and systems easier. Architects also often decide where functionality resides and design the base application infrastructure framework. Application server, identity management, messaging, and integration are all often part of such architecture designs. Rarely does the architect concern him/herself with the network infrastructure, as that is the purview of "that group"; the "you know who I'm talking about" group. And for the most part there's no need for architects to concern themselves with network-oriented architecture. Applications should not need to know on which VLAN they will be deployed or what their default gateway might be. But what architects might need to know – and probably should know – is whether the network infrastructure supports "server offload" of some application functions or not, and how that can benefit their enterprise architecture and the applications which will be deployed atop it.

WHAT IT IS

Server offload is a generic term used by the networking industry to indicate functionality designed to improve the performance or security of applications. We use the term "offload" because the functionality is "offloaded" from the server and moved to an application network infrastructure device instead. Server offload works because the application network infrastructure is, these days, almost always deployed in front of the web/application servers and is in fact acting as a broker (proxy) between the client and the server. Server offload is generally offered by load balancers and application delivery controllers. You can think of server offload like a relay race.
The application network infrastructure device runs the first leg and then hands off the baton (the request) to the server. When the server is finished, the application network infrastructure device runs another leg, and then the race is done as the response is sent back to the client. There are basically two kinds of server offload functionality: protocol processing offload and application-oriented offload.

Protocol processing offload

Protocol processing offload includes functions like SSL termination and TCP optimizations. Rather than enable SSL communication on the web/application server, it can be "offloaded" to an application network infrastructure device and shared across all applications requiring secured communications. Offloading SSL to an application network infrastructure device improves application performance because the device is generally optimized to handle the complex calculations involved in encryption and decryption of secured data, and web/application servers are not.

TCP optimization is a little different. We say TCP session management is "offloaded" from the server, but that's not quite what happens, as TCP connections are obviously still opened, closed, and managed on the server as well. Offloading TCP session management means that the application network infrastructure manages the connections between itself and the server in such a way as to reduce the total number of connections needed without impacting the capacity of the application. This is more commonly referred to as TCP multiplexing, and it "offloads" the overhead of TCP connection management from the web/application server to the application network infrastructure device, with the server effectively giving up control over those connections. By allowing an application network infrastructure device to decide how many connections to maintain and which ones to use to communicate with the server, it can manage thousands of client-side connections using merely hundreds of server-side connections.
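The thousands-into-hundreds arithmetic can be sketched in miniature. This is a toy model, not any real ADC's API; the pool class, burst sizes, and connection names are all illustrative:

```python
# A toy model of TCP multiplexing: many client-side requests are carried
# over a small, reused pool of server-side connections.

class ServerConnectionPool:
    def __init__(self, max_idle):
        self.max_idle = max_idle
        self.idle = []       # reusable server-side connections
        self.opened = 0      # total server-side connections ever opened

    def acquire(self):
        # Reuse an idle server-side connection when one exists;
        # open a new one only when the pool is empty.
        if self.idle:
            return self.idle.pop()
        self.opened += 1
        return f"conn-{self.opened}"

    def release(self, conn):
        # Keep the connection warm for the next client request.
        if len(self.idle) < self.max_idle:
            self.idle.append(conn)

def serve_burst(pool, concurrent_clients):
    # Each concurrent client request borrows a server-side connection
    # for one request/response exchange, then returns it for reuse.
    conns = [pool.acquire() for _ in range(concurrent_clients)]
    for conn in conns:
        pool.release(conn)

pool = ServerConnectionPool(max_idle=100)
for _ in range(50):             # 5,000 client requests in bursts of 100
    serve_burst(pool, 100)
print(pool.opened)              # 100 — hundreds of server-side connections
```

Fifty bursts of one hundred concurrent clients are satisfied with exactly one hundred server-side sockets, because every connection opened during the first burst is reused for the rest.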
Reducing the overhead associated with opening and closing TCP sockets on the web/application server improves application performance and actually increases the user capacity of servers. TCP offload is beneficial to all TCP-based applications, but is particularly beneficial for Web 2.0 applications making use of AJAX and other near real-time technologies that maintain one or more connections to the server for their functionality. Protocol processing offload does not require any modifications to the applications.

Application-oriented offload

Application-oriented offload includes the ability to implement shared services on an application network infrastructure device. This is often accomplished via a network-side scripting capability, but some functionality has become so commonplace that it is now built into the core features available on application network infrastructure solutions. Application-oriented offload can include functions like cookie encryption/decryption, compression, caching, URI rewriting, HTTP redirection, DLP (Data Leak Prevention), selective data encryption, application security functionality, and data transformation. When network-side scripting is available, virtually any kind of pre- or post-processing can be offloaded to the application network infrastructure and thereafter shared with all applications. Application-oriented offload works because the application network infrastructure solution is mediating between the client and the server and has the ability to inspect and manipulate the application data. The benefits of application-oriented offload are that the services implemented can be shared across multiple applications, and in many cases the functionality removes the need for the web/application server to handle a specific request at all. For example, HTTP redirection can be fully accomplished on the application network infrastructure device.
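A redirect handled at the delivery tier never reaches the server pool at all. Here's a rough sketch of the idea in Python; the rule table, paths, and function name are hypothetical, not a real device's configuration:

```python
# Hypothetical sketch of HTTP redirection answered entirely at the
# application delivery tier: the device responds to the client itself
# and never forwards the request to a web/application server.

REDIRECT_RULES = {
    "/old-app": "/new-app",   # application upgrade
    "/hom": "/home",          # commonly mistyped URI
}

def handle_at_edge(path):
    """Return a (status, location) redirect if the delivery tier can
    answer on its own, or None to pass the request to the servers."""
    if path in REDIRECT_RULES:
        return (301, REDIRECT_RULES[path])
    return None

print(handle_at_edge("/old-app"))   # (301, '/new-app') — server never sees it
print(handle_at_edge("/reports"))   # None — forwarded to the server pool
```

Every request the table answers is one request the web/application servers never have to parse, route, or log.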
HTTP redirection is often used as a means to handle application upgrades, commonly mistyped URIs, or as part of the application logic when certain conditions are met. Application security offload usually falls into this category because it is application-specific – or at least application-data-specific. Application security offload can include scanning URIs and data for malicious content, validating the existence of specific cookies/data required for the application, and so on. This kind of offload improves server efficiency and performance, but a bigger benefit is consistent, shared security across all applications for which the service is enabled. Some application-oriented offload can require modification to the application, so it is important to design such features into the application architecture before development and deployment. While it is certainly possible to add such functionality into the architecture after deployment, it is always easier to do so at the beginning.

WHY YOU NEED IT

Server offload is a way to increase the efficiency of servers and improve application performance and security. Server offload increases the efficiency of servers by alleviating the need for the web/application server to consume resources performing tasks that can be performed more efficiently on an application network infrastructure solution. The two best examples of this are SSL encryption/decryption and compression. Both are CPU-intense operations that can consume 20-40% of a web/application server's resources. By offloading these functions to an application network infrastructure solution, servers "reclaim" those resources and can use them instead to execute application logic, serve more users, handle more requests, and do so faster.
Server offload improves application performance by allowing the web/application server to concentrate on what it is designed to do – serve applications – while putting the onus for performing ancillary functions on a platform that is better optimized to handle those functions. Server offload provides these benefits whether you have a traditional client-server architecture or have moved (or are moving) toward a virtualized infrastructure. Applications deployed on virtual servers still use TCP connections and SSL and run applications, and therefore will benefit the same as those deployed on traditional servers.

I do not think that word means what you think it means
Greg Ferro over at My Etherealmind has a, for lack of a better word, interesting entry in his Network Dictionary on the term "Application Delivery Controller." He says:

"Application Delivery Controller (ADC) - Historically known as a 'load balancer', until someone put a shiny chrome exhaust and new buttons on it and so it needed a new marketing name. However, the Web Application Firewall and Application Acceleration / Optimisation that are in most ADC are not really load balancing so maybe its alright. Feel free to call it a load balancer when the sales rep is on the ground, guaranteed to upset them."

I take issue with this definition primarily because an application delivery controller (ADC) is different from a load-balancer in many ways, and most of them aren't just "shiny chrome exhaust and new buttons". He's right that web application firewalls and web application acceleration/optimization features are also included, but application delivery controllers do more than just load-balancing these days. Application delivery controller is not just a "new marketing name"; it's a new name because "load balancing" doesn't properly describe the functionality of the products that fall under the ADC moniker today.

First, load-balancing is not the same as layer 7 switching. The former is focused on distribution of requests across a farm or pool of servers, whilst the latter is about directing requests based on application-layer data such as HTTP headers or application messages. An application delivery controller is capable of performing layer 7 switching, something a simple load-balancer is not. When the two are combined you get layer 7 load-balancing, which is a very different beast than the simple load-balancing offered in the past and often offered today by application server clustering technologies, ESB (enterprise service bus) products, and solutions designed primarily for load-balancing.
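The distinction fits in a few lines of Python. The pools, paths, and routing rule below are illustrative, not any vendor's configuration syntax:

```python
import itertools

# A simple load balancer only distributes (round-robin here); layer 7
# switching first inspects application-layer data (the URI) to pick a
# pool, then load-balances within it — layer 7 load balancing.

API_POOL = ["api-1", "api-2"]
WEB_POOL = ["web-1", "web-2", "web-3"]

rr_api = itertools.cycle(API_POOL)
rr_web = itertools.cycle(WEB_POOL)

def simple_lb(pool_cycle):
    # Layer 4 view: no knowledge of the request content at all.
    return next(pool_cycle)

def l7_load_balance(request):
    # Layer 7 view: direct the request based on application-layer data,
    # then distribute within the selected pool.
    if request["path"].startswith("/api/"):
        return simple_lb(rr_api)
    return simple_lb(rr_web)

print(l7_load_balance({"path": "/api/orders"}))  # api-1
print(l7_load_balance({"path": "/index.html"}))  # web-1
```

The round-robin half is all a simple load balancer has; the content-aware half is what makes it layer 7 switching, and composing them is what makes it layer 7 load balancing.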
Layer 7 load balancing is the purview of application delivery controllers, not load-balancers, because it requires application fluency and run-time inspection of application messages - not packets, mind you, but messages. That's an important distinction, but one best left for another day. The core functionality of an application delivery controller is load-balancing, as this is the primary mechanism through which high availability and failover are provided. But a simple load-balancer does little more than take requests and distribute them based on simple algorithms; it does not augment the delivery of applications by offering additional features such as L7 rate shaping, application security, acceleration, message security, and dynamic inspection and manipulation of application data.

Second, a load balancer isn't a platform; an application delivery controller is. It's a platform to which tasks generally left to the application can be offloaded, such as cookie encryption and decryption, input validation, transformation of application messages, and exception handling. A load balancer can't dynamically determine the client link speed and then determine whether compression would improve or degrade performance, and apply it or not based on that decision. A simple load balancer can't inspect application messages and determine whether a message is a SOAP fault, and then, once it has determined that it is, execute logic that handles the exception.

An application delivery controller is the evolution of load balancing to something more: to application delivery. If you really believe that an application delivery controller is just a marketing name for a load-balancer then you haven't looked into the differences, or into how an ADC can be an integral part of a secure, fast, and available application infrastructure in a way that load-balancers never could. Let me 'splain. No, there is too much. Let me sum up. A load balancer is a paper map.
An ADC is a Garmin or a TomTom.

The Treachery of Hyperlinks
With apologies to René Magritte. Did you know you could stop the treachery that is rickrolling with an iRule? Just search your outbound HTML for the appropriate YouTube URLs (you may need a data group to store them all) and strip them out, or search your inbound posts for the URLs and refuse to post them. Of course you could also write an iRule that automatically changes every submitted URL to be a rickroll, but man, that's evil! Maybe you just want to do it to a specific user. You can do that with iRules too, if your site uses cookies to identify users by id or name: just check the cookie and, if you find the right user, fire off the appropriate iRule code to replace the URLs before the content is posted. It would still be evil. But it would be funny evil, if you know what I mean.
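iRules themselves are written in Tcl, but the stripping logic reads roughly like this Python sketch. The URL set stands in for the data group mentioned above, and the anchor-rewriting regex is illustrative, not production-grade HTML parsing:

```python
import re

# Sketch of the "strip rickroll links from outbound HTML" idea: replace
# any anchor pointing at a known rickroll URL with its bare link text.

RICKROLL_URLS = {
    "https://www.youtube.com/watch?v=dQw4w9WgXcQ",  # the canonical one
}

def strip_rickrolls(html):
    for url in RICKROLL_URLS:
        # Match the whole <a> tag for this URL and keep only its text.
        pattern = r'<a[^>]*href="' + re.escape(url) + r'"[^>]*>(.*?)</a>'
        html = re.sub(pattern, r"\1", html)
    return html

page = ('<p>Click <a href="https://www.youtube.com/watch?v=dQw4w9WgXcQ">'
        'here</a> for the report!</p>')
print(strip_rickrolls(page))  # <p>Click here for the report!</p>
```

The inbound-post variant is the same check in reverse: run the match against submitted content and reject the post when a known URL appears, instead of rewriting the response.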