A couple of months ago I published some results from running Arabica against part of the OASIS XSLT conformance test suite. I've done a bit of work since then, so it's time to update the numbers.
Since the last published results, I have one more skip and 20 fewer failures. My little spreadsheet (the first I have ever constructed, career fact fans) says I'm running 1328 tests altogether, with a pass rate of 86.9%.
A failure means the test ran but did the wrong thing. An error means it threw an exception, didn't compile the XSLT, or something similarly unexpected. A skip means the test was deliberately not run because of some known deficiency in my code: it might exercise a feature I haven't implemented, be just plain wrong (there are a couple of these), be Xalan-specific, or some other thing. Skips come in three flavours - don't bother at all, shouldn't compile, or shouldn't run. If a test that's not expected to compile does, or one that shouldn't run suddenly starts working, that's flagged as a failure. There aren't any tests doing that in these results.
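For the curious, the bookkeeping amounts to something like the following sketch. The names here (Expectation, Outcome, classify) are made up for this post rather than lifted from the actual harness - it's just to illustrate the pass/fail/error/skip distinction and the rule that an unexpected success counts as a failure.

    // Hypothetical classification logic - not the real harness code.
    enum Expectation { RunNormally, SkipEntirely, CompileShouldFail, RunShouldFail };
    enum Outcome { Pass, Fail, Error, Skip };

    // Classify one test given what was expected of it and what actually happened.
    Outcome classify(Expectation expected, bool compiled, bool ran, bool outputMatches)
    {
      if (expected == SkipEntirely)
        return Skip;                      // don't bother at all
      if (expected == CompileShouldFail)
        return compiled ? Fail            // compiled when it shouldn't - flag it
                        : Skip;
      if (expected == RunShouldFail)
        return ran ? Fail                 // ran when it shouldn't - flag it
                   : Skip;
      if (!compiled || !ran)
        return Error;                     // exception, compile failure, or similar
      return outputMatches ? Pass : Fail; // ran, so did it do the right thing?
    }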
Not every failure represents a unique bug, and similarly not every skip represents a unique deficiency. I haven't investigated the biggest set of failed tests, the 78 output failures, in depth, but I suspect many of them relate either to HTML output (which I don't do) or to text output (which the test harness can't currently compare).
These results are from the current Subversion head, built on Windows XP using Visual Studio 8 and expat.