Don't do this with regular expressions. Remember, you're not just protecting against valid HTML; you're protecting against the DOM that web browsers build from it. Browsers can quite easily be tricked into producing a valid DOM from invalid HTML.
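To see why, here's a minimal sketch of the kind of blacklist filter people write; the pattern and payload are purely illustrative, not from any real codebase:

import java.util.regex.Pattern;

public class NaiveFilter {
    // Blacklist approach: strip anything that looks like a <script> element
    private static final Pattern SCRIPT = Pattern.compile(
            "<script.*?>.*?</script>",
            Pattern.CASE_INSENSITIVE | Pattern.DOTALL);

    public static String strip(String html) {
        return SCRIPT.matcher(html).replaceAll("");
    }

    public static void main(String[] args) {
        // Each embedded <script>x</script> is stripped, and the leftover
        // fragments reassemble into exactly the tag we tried to ban.
        String payload = "<scr<script>x</script>ipt>alert('XSS')</scr<script>x</script>ipt>";
        System.out.println(strip(payload)); // <script>alert('XSS')</script>
    }
}

Any filter that operates on the raw text rather than on the parsed DOM can be beaten this way.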
For real specimens, see this list of obfuscated XSS attacks. Are you prepared to tailor a regex to prevent this real-world attack on Yahoo and Hotmail on IE6/7/8?
<HTML><BODY>
<?xml:namespace prefix="t" ns="urn:schemas-microsoft-com:time">
<?import namespace="t" implementation="#default#time2">
<t:set attributeName="innerHTML" to="XSS<SCRIPT DEFER>alert("XSS")</SCRIPT>">
</BODY></HTML>
How about this attack that works on IE6?
<TABLE BACKGROUND="javascript:alert('XSS')">
How about attacks that are not listed on this site? The problem with Jeff's approach is that it's not a whitelist, as claimed. As someone on that page adeptly notes:
The problem with it is that the HTML must be clean. There are cases where you can pass in hacked HTML, and it won't match it, in which case it'll return the hacked HTML string, as it won't match anything to replace. This isn't strictly whitelisting.
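In other words, the filter only acts on substrings its regex recognizes as a tag; anything too mangled to match passes through untouched, and a lenient browser parser will still execute it. Here's a sketch of that failure mode (a hypothetical pattern, not Jeff's actual code):

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class PseudoWhitelist {
    // Only well-formed tags are recognized at all
    private static final Pattern TAG = Pattern.compile("</?(\\w+)[^<>]*>");

    public static String sanitize(String html) {
        Matcher m = TAG.matcher(html);
        StringBuffer out = new StringBuffer();
        while (m.find()) {
            String name = m.group(1).toLowerCase();
            // Keep whitelisted tags, delete every other tag we recognize
            m.appendReplacement(out, name.matches("b|i|p")
                    ? Matcher.quoteReplacement(m.group()) : "");
        }
        m.appendTail(out);
        return out.toString();
    }

    public static void main(String[] args) {
        // A half-open tag never matches TAG, so it survives verbatim,
        // yet IE-era parsers would still render and execute it.
        System.out.println(sanitize("<IMG SRC=\"javascript:alert('XSS')\""));
    }
}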
I would suggest a purpose-built tool like AntiSamy. It works by actually parsing the HTML, then traversing the DOM and removing anything that's not in the configurable whitelist. The major difference from a regex approach is that it handles malformed HTML gracefully.
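The whitelist itself lives in an XML policy file. As a rough idea of what a rule looks like (an illustrative fragment; see the sample policies bundled with AntiSamy for the real schema):

<tag-rules>
  <!-- Keep <b>, but validate its attributes against the policy -->
  <tag name="b" action="validate"/>
  <!-- Remove <script> elements and their contents outright -->
  <tag name="script" action="remove"/>
</tag-rules>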
The best part is that it actually has unit tests for all the XSS attacks on the above site. And what could be easier than this API call:
import org.owasp.validator.html.*;

public String toSafeHtml(String html) throws ScanException, PolicyException {
    // Load the whitelist policy (an XML file defining the allowed tags and attributes)
    Policy policy = Policy.getInstance(POLICY_FILE);
    AntiSamy antiSamy = new AntiSamy();
    // Parse the input into a DOM and strip everything the policy doesn't allow
    CleanResults cleanResults = antiSamy.scan(html, policy);
    return cleanResults.getCleanHTML().trim();
}
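A quick way to convince yourself: assuming POLICY_FILE points at one of the sample policies that ship with AntiSamy, feed the obfuscated attack from above through it. This is a sketch; the exact output depends on the policy you load.

String dirty = "<t:set attributeName=\"innerHTML\" "
        + "to=\"XSS<SCRIPT DEFER>alert('XSS')</SCRIPT>\">";
// Neither the unknown t:set element nor the embedded SCRIPT is on
// the whitelist, so neither survives the scan.
System.out.println(toSafeHtml(dirty));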
He who fights with monsters for too long becomes a monster himself; and if you gaze long into the abyss, the abyss gazes back into you…