August 2, 2011
The optical communications community has been waiting for wide-scale deployment of 40 Gbps transport systems for a decade. Now, with 40G on its third technology iteration, it's not surprising that the excitement is wearing off and valuations of 40 Gbps module vendors are coming down. Recent acquisitions averaged only two times sales, near or below VC-invested capital.
As Niall Robinson, Mintera's VP of Marketing, commented in one of his presentations, the road for 40 Gbps products has been a long and winding one. Striving to optimize the performance of 40 Gbps transmission systems, vendors developed several generations of transponder modules: DPSK, DQPSK, and coherent. Mintera was on the leading edge of the product development cycle for DPSK modules but fell behind as the industry moved to DQPSK and coherent modulation formats. CoreOptics and Stratalight also had trouble building a profitable business, as sales volumes of 40 Gbps modules remained limited and the economic downturn of late 2008 and early 2009 slowed deployments of 40 Gbps systems. Market data collected by LightCounting shows that sales of 40 Gbps DWDM transponders reached only $54 million in 2009.
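To put the format evolution in perspective, here is a minimal back-of-the-envelope sketch in Python (ours, not from the original note) of the symbol rate each format needs to carry a 40 Gbps payload, ignoring FEC overhead:

```python
# Rough symbol rates for a 40 Gbps line rate under the modulation
# formats mentioned above (FEC overhead ignored for simplicity).

LINE_RATE_GBPS = 40

# (format, bits per symbol, polarization multiplexing factor)
FORMATS = [
    ("DPSK",             1, 1),  # 1 bit/symbol, single polarization
    ("DQPSK",            2, 1),  # 2 bits/symbol, single polarization
    ("coherent DP-QPSK", 2, 2),  # 2 bits/symbol on each of 2 polarizations
]

for name, bits_per_symbol, pol_factor in FORMATS:
    symbol_rate = LINE_RATE_GBPS / (bits_per_symbol * pol_factor)
    print(f"{name:18s} -> {symbol_rate:5.1f} Gbaud")
```

Each step halves the symbol rate (40, 20, then 10 Gbaud), which narrows the optical spectrum and relaxes chromatic dispersion and PMD tolerances; that is a large part of why each generation displaced the one before it.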
August 2, 2011
On June 22, 2011, HP announced ten new ProLiant G7 servers (three rack servers and seven blade servers) and an Ethernet/Fibre Channel module that plugs into its BladeSystem; together, these enable the first mass-volume converged fabric in a blade server.
Five of the seven blade servers announced now have a 10GigE LAN on Motherboard (LOM) design that enables IP, FC, and iSCSI traffic to run on the two embedded 10GigE ports on each blade motherboard. The LOM is sourced from Emulex through its announced acquisition of ServerEngines. All three rack servers can be outfitted with the same technology via an HP add-in adapter, built by Emulex, with an SFP+ interface that accepts either an SFP+ optical module or an SFP+ Twinax copper cable. With HP's 56% share of blades and 38% share of x86 servers, the number of 10GigE ports shipping in servers later this year and next will grow rapidly.
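As a rough illustration of the aggregate bandwidth this converged design puts behind a single enclosure, a quick sketch; the 16-blade enclosure size is our assumption (typical of HP's c7000 BladeSystem), not a figure from this note:

```python
# Rough sizing of the converged fabric described above: two embedded
# 10GigE LOM ports per blade carrying LAN (IP), FC, and iSCSI traffic.

PORTS_PER_BLADE = 2        # embedded 10GigE LOM ports per blade motherboard
PORT_SPEED_GBPS = 10       # 10 Gigabit Ethernet
BLADES_PER_ENCLOSURE = 16  # assumed half-height blade count (c7000-class)

per_blade = PORTS_PER_BLADE * PORT_SPEED_GBPS
per_enclosure = per_blade * BLADES_PER_ENCLOSURE

print(f"Converged bandwidth per blade:     {per_blade} Gbps")
print(f"Converged bandwidth per enclosure: {per_enclosure} Gbps")
```

Under these assumptions a single enclosure presents 320 Gbps of converged server-edge bandwidth, which is why a high-volume blade vendor can move the 10GigE port count so quickly.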
August 1, 2011
This development is consistent with LightCounting's analysis of the market for LightPeak published in December 2009. LightCounting believes there is clearly no need for 10 Gbps speeds in consumer electronics at this time, other than to reduce the number of cables attached to a PC by combining USB, HDMI, DisplayPort, VGA, SATA, and other I/O technologies into one optical cable sharing the bandwidth. PC disk drives barely approach 1 Gbps; even the fastest Flash drives operate at 2.5 Gbps, and "professional-level" 1080p HDTV video runs at 135 Mbps. In other words, nothing in a PC even approaches the 10 Gbps transfer speed of LightPeak.
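The note's comparison is easy to check numerically; the sketch below uses only the data rates quoted above:

```python
# The note's comparison in numbers: typical consumer data rates versus
# LightPeak's 10 Gbps link. Figures are those quoted in the text above.

LIGHTPEAK_GBPS = 10.0

workloads_gbps = {
    "PC disk drive":                 1.0,    # "barely approach 1 Gbps"
    "fastest Flash drive":           2.5,
    "professional 1080p HDTV video": 0.135,  # 135 Mbps
}

for name, rate in workloads_gbps.items():
    print(f"{name:30s} {rate:6.3f} Gbps "
          f"({rate / LIGHTPEAK_GBPS:5.1%} of a LightPeak link)")

# Even all three running at once fill just over a third of the link:
total = sum(workloads_gbps.values())
print(f"{'all three combined':30s} {total:6.3f} Gbps "
      f"({total / LIGHTPEAK_GBPS:5.1%})")
```

Even summed, these workloads come to about 3.6 Gbps, roughly 36% of one LightPeak link, which is the quantitative version of the note's point that only cable consolidation justifies the speed.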
August 1, 2011
Sandy Bridge is Intel's next-generation architecture with PCI Express 3.0 for desktop PCs; Romley is its Xeon (server) platform counterpart. The introduction of PCI Express 3.0 addresses an important bottleneck in network traffic growth. Enterprise networking traffic often starts at the server, and with new servers moving from two or four cores to eight or twelve (with 50 cores on the horizon) and 400 GBytes of DRAM, higher-speed interconnects are sorely needed, especially in light of trends to use virtualization to increase server utilization from 20% to 90%. PCI Express 2.0, the Gen2 version, is the backbone bus for Intel/AMD-based servers and has been a severe I/O speed-limiting factor for many switch/router systems currently shipping. At IDF, Intel announced that the Romley platform, which interfaces the CPU to the PCI Express 3.0 bus, will begin shipping in 2011, with data center servers and HPC systems reaching the market in early 2012.
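A quick calculation shows why Gen3 relieves that bottleneck; the transfer rates and line-coding overheads below are the published PCI Express figures, not numbers from this note:

```python
# Why PCI Express 3.0 matters for I/O: effective per-lane throughput,
# accounting for each generation's line-coding overhead.

GENERATIONS = {
    # name: (transfer rate in GT/s, payload bits, total bits on the wire)
    "PCIe 2.0": (5.0, 8, 10),     # 8b/10b coding: 20% overhead
    "PCIe 3.0": (8.0, 128, 130),  # 128b/130b coding: ~1.5% overhead
}

LANES = 8  # a common slot width for high-speed network adapters

for name, (gt_per_s, payload, total) in GENERATIONS.items():
    per_lane_gbps = gt_per_s * payload / total  # usable Gbps per lane
    slot_gbps = per_lane_gbps * LANES           # per direction
    print(f"{name}: {per_lane_gbps:.2f} Gbps/lane, "
          f"{slot_gbps:.1f} Gbps in an x{LANES} slot (each direction)")
```

Gen3 nearly doubles usable bandwidth per lane (about 7.9 Gbps versus 4 Gbps), so an x8 Gen3 slot delivers roughly 63 Gbps each way instead of 32 Gbps, enough headroom for multiple 10GigE ports or a 40G uplink without the bus becoming the choke point.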