What technique are you using to deduce what the content of the page is? A straight port of Arc90's open source code or some magic of your own? (I'm asking because I'm keen to improve their technique for a library I'm working on.)
I'd be really interested in their method for extracting just the content of a page for "reading". Is it a set of custom templates for the major sites, or something smarter?
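For anyone curious about the general approach: Arc90's Readability scores block elements on signals like text length, comma count, link density, and suspicious class/id names, then keeps the best-scoring candidate. Here's a minimal TypeScript sketch of that kind of heuristic (illustrative only, not Arc90's or Apple's actual code, and the weights are made up):

```typescript
// Sketch of a Readability-style content scorer. Scores the parents of
// paragraph elements and returns the most "article-like" container.
function findMainContent(doc: Document): HTMLElement | null {
  const scores = new Map<HTMLElement, number>();

  for (const p of Array.from(doc.querySelectorAll("p"))) {
    const parent = p.parentElement;
    if (!parent) continue;

    const text = p.textContent ?? "";
    if (text.length < 25) continue; // ignore short fragments

    let score = 1;
    score += text.split(",").length;                      // commas suggest prose
    score += Math.min(Math.floor(text.length / 100), 3);  // reward longer text

    // Penalise containers whose class/id look like page chrome, not content.
    if (/comment|sidebar|footer|ad/i.test(parent.className + " " + parent.id)) {
      score -= 25;
    }

    scores.set(parent, (scores.get(parent) ?? 0) + score);
  }

  // Downweight candidates that are mostly links (navigation blocks).
  let best: HTMLElement | null = null;
  let bestScore = -Infinity;
  for (const [el, raw] of scores) {
    const linkChars = Array.from(el.querySelectorAll("a"))
      .reduce((n, a) => n + (a.textContent?.length ?? 0), 0);
    const totalChars = (el.textContent?.length ?? 0) || 1;
    const adjusted = raw * (1 - linkChars / totalChars);
    if (adjusted > bestScore) {
      bestScore = adjusted;
      best = el;
    }
  }
  return best;
}
```

From what I remember of the source, the real thing also pulls in the winning node's siblings and strips whatever chrome is left inside it, which is where most of the fiddly work is.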
None of the UI stuff, but the JS improvements are there. It's about the same speed as Chrome (roughly 30-40% faster than Safari 4, according to the SunSpider benchmark).
I'm hoping there will be some additional customisability of the search box: I want to use google.com.au, not google.com, and I'd really prefer not to install more Safari add-ons (I've had some issues with memory leaks in the past).
I can't think of a reliable way to filter out "malicious" code without also generating a lot of false positives.
Without having seen their solution, I feel the browser is the wrong place to fix this kind of problem anyway. It's much like how PHP tried to prevent SQL injection attacks with "magic quotes", and we all know how that went.
Sadly, no: http://www.quackit.com/html_5/tags/html_ruby_tag.cfm