The concept of centrally shared storage is nothing new, with roots reaching back to the 1980s as a way to let mainframes share a common physical repository of disk drives, which, at the time, cost more per MEGAbyte than the currently proposed U.S. healthcare overhaul. From this world, Fibre Channel was born. It was fast, flexible, and complicated enough to keep the egghead mainframe guys happy.
As Open Systems (think Microsoft and PCs) grew in business adoption, Fibre Channel was able to shift to that world as well, giving system administrators centralized, scalable shared storage for the applications that lived there (e.g. databases and e-mail). While the technology had a tendency to drift to the expensive side, it was well received by those who could afford it, proprietary media and deployment model and all.
By the time we bid adieu to the 20th century, a growing legion of system administrators needed the benefits of centralized, shared storage but didn't want to pay the premium price for the infrastructure associated with Fibre Channel. Thanks to a committee of nerds at the IETF, iSCSI was born, and boy was it SWEET. Instead of using proprietary protocols riding on proprietary hardware to access storage, you could now do the same thing by executing SCSI commands encapsulated in TCP/IP packets and delivered over a network fabric everybody already had installed and loved: Ethernet. Now the Common Man could get shared storage like all of the cool enterprise folks and glean its benefits.
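The encapsulation trick is simple enough to sketch in a few lines. The toy framing below is illustrative only; real iSCSI PDUs (defined in RFC 3720) carry a 48-byte header with session and sequencing fields. It just shows the core idea: wrap a standard SCSI Command Descriptor Block in a length-prefixed frame that any ordinary TCP socket can carry over Ethernet.

```python
import struct

def frame_scsi_command(cdb: bytes, lun: int) -> bytes:
    """Wrap a SCSI Command Descriptor Block (CDB) in a toy
    length-prefixed frame suitable for sending over TCP.
    NOT the real iSCSI PDU layout -- just the encapsulation idea."""
    # opcode (1 byte), LUN (2 bytes), CDB length (4 bytes), big-endian
    header = struct.pack(">BHI", 0x01, lun, len(cdb))
    return header + cdb

def unframe(frame: bytes):
    """Peel the toy header back off and recover the original CDB."""
    opcode, lun, length = struct.unpack(">BHI", frame[:7])
    return opcode, lun, frame[7:7 + length]

# A 6-byte READ(6) CDB: opcode 0x08, LBA 0, transfer length 1 block
read6 = bytes([0x08, 0x00, 0x00, 0x00, 0x01, 0x00])
frame = frame_scsi_command(read6, lun=0)
opcode, lun, cdb = unframe(frame)
```

The point of the sketch is that nothing exotic is required: the SCSI command itself is untouched, and the transport is plain old TCP over commodity Ethernet, which is exactly why iSCSI was so cheap to roll out.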
In fact, the concept worked so well that all the major enterprise storage players adopted it into their offerings: EMC, Network Appliance, EqualLogic, and others rapidly jumped up and said, "Me too!", opening themselves to a new set of customers and revenue while bringing shared storage to a much broader set of IT operations.
Today, it's hard NOT to find iSCSI in many corners of most datacenter operations, mainly because it just works and is easily implemented on existing Ethernet-based switches and NICs. Even better, the introduction of 10Gbps Ethernet is making iSCSI shine brighter than Fibre Channel in terms of speed (FC's peak throughput right now is 8Gbps). What's not to love?
Now, there is an initiative afoot to run the Fibre Channel protocol over Ethernet (FCoE), and I am left scratching my head as to why I would want to adopt it when iSCSI is already working. Granted, there are some performance efficiencies built into how the FC protocol works, but I struggle to understand adopting it if 10Gbps iSCSI (or even NFS for file-level access) is working just fine.
Just because you CAN do something doesn't mean you SHOULD do something, right? What am I missing?