This should be a comment, but it's a bit long.
While I've not (yet) tested various webservers on my Pi, I have done a lot of testing of webservers running on x86 server hardware. What I know from that experience is:
Most people get confused about the difference between performance and capacity. You'll see lots of posts claiming Nginx is faster than (pre-fork) Apache; that isn't true except under heavy load. Nginx (and lighty) are much better on capacity, i.e. how much concurrent load they can sustain before degrading. And that's at the most trivial level of analysis.
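To make that distinction concrete, here's a rough Python sketch (standard library only, with a placeholder URL) that measures the two things separately: the latency of a single uncontended request, and the sustained request rate under heavy concurrency. Proper tools like ab or wrk do this far better; this is only to illustrate what each number means.

```python
# Rough sketch of the performance-vs-capacity distinction, not a rigorous
# benchmark. The URL is a placeholder for whatever server you're testing.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://192.168.1.50/index.html"  # hypothetical test target

def fetch(_):
    start = time.perf_counter()
    with urllib.request.urlopen(URL) as resp:
        resp.read()
    return time.perf_counter() - start

# "Performance": latency of a single request with no contention.
print(f"single-request latency: {fetch(0) * 1000:.1f} ms")

# "Capacity": how many requests complete per second under heavy concurrency.
CONCURRENCY, REQUESTS = 100, 2000
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    latencies = list(pool.map(fetch, range(REQUESTS)))
elapsed = time.perf_counter() - start
print(f"throughput under load: {REQUESTS / elapsed:.0f} req/s, "
      f"mean latency under load: {sum(latencies) / len(latencies) * 1000:.1f} ms")
```

A server can have excellent single-request latency and still fall over at a concurrency level another server handles comfortably; that second behaviour is what capacity describes.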
Few people serve exclusively static content from their webservers (in that scenario, TUX and G-Wan leave the servers you've mentioned in the dust). The performance profile is highly dependent on the logic-tier technology and how it integrates with the webserver.
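As a toy illustration of why the logic tier matters, here's a minimal WSGI app (Python standard library, wsgiref, with a hypothetical port and a stand-in workload) contrasting a static-style response with a "dynamic" one. The dynamic path usually dominates the cost regardless of which webserver sits in front.

```python
# Minimal sketch: static bytes vs. a response that does real application work.
from wsgiref.simple_server import make_server

STATIC_BODY = b"<html><body>hello</body></html>"  # stands in for a file on disk

def app(environ, start_response):
    if environ["PATH_INFO"] == "/static":
        # Static content: the server just shovels bytes off disk or RAM.
        start_response("200 OK", [("Content-Type", "text/html")])
        return [STATIC_BODY]
    # "Dynamic" content: the cost is in the application logic, not the server.
    total = sum(i * i for i in range(200_000))  # placeholder for DB/template work
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [f"computed {total}\n".encode()]

if __name__ == "__main__":
    with make_server("", 8000, app) as httpd:  # hypothetical port
        httpd.serve_forever()
```

Benchmark /static and / separately and you'll see very different numbers from the same server; that gap is the logic tier, and how well the webserver hands work to it matters more than raw static-file speed.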
Performance (and capacity) also depends on everything else running on the device.
There are lots of features of a datacentre server which are easy to live without if you have proper cluster-level redundancy (dual PSUs, dual network, remote console...). However, a Raspberry Pi doesn't make much sense as a web-serving platform because of its slow disk I/O: you really need SATA, [i]SCSI, AoE or InfiniBand connectivity to your storage. The Pi has no SATA interface, only one Ethernet port, and I'm not aware of an InfiniBand or SCSI interface for it.
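If you want to put a number on the storage problem, a quick-and-dirty sequential write test like the sketch below (standard library Python, with a placeholder path) will show how far an SD card is from proper server storage. Tools like fio or hdparm give far better data; this is just a way to see the order of magnitude.

```python
# Crude sequential write test. Point PATH at the SD card, a USB disk,
# or whatever storage you care about; /tmp here is only a placeholder
# (and may even be tmpfs on some systems, so choose a real on-disk path).
import os
import time

PATH = "/tmp/io_test.bin"        # hypothetical target path
BLOCK = b"\0" * (1024 * 1024)    # 1 MiB blocks
BLOCKS = 256                     # 256 MiB total

start = time.perf_counter()
with open(PATH, "wb") as f:
    for _ in range(BLOCKS):
        f.write(BLOCK)
    f.flush()
    os.fsync(f.fileno())         # make sure the data actually hit the device
elapsed = time.perf_counter() - start

print(f"sequential write: {BLOCKS / elapsed:.1f} MiB/s")
os.remove(PATH)
```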
(There are small single-board computers which are a more sensible choice for building web-serving capability on, and a cluster of these can make good economic sense; but in that scenario you are looking at multiple nodes with layered responsibilities for SSL termination, HTTP caching, web serving, application logic and data management.)
The question of which is "fastest" is hard to define, different for every case and, in general, impossible to answer.
However, the biggest mistake I see again and again in IT is people picking products based on a single attribute rather than considering the wider impact, both on the technology and on the people involved.