Is Linux capable of handling the mission-critical, high-volume demands of the world’s biggest financial institutions? Speakers at the recent sixth annual Linux on Wall Street conference offered solid evidence in the affirmative, despite unresolved issues concerning real-time Linux and hypervisor interoperability.
Vinod Kutty, head of distributed computing R&D at the Chicago Mercantile Exchange (CME), now the CME Group, and a conference speaker, believes that Linux has matured as a platform and is ready for mission-critical financial services workloads.
“I’m starting to see Linux companies focusing on enterprise customer needs” and beefing up their staff of talented Unix developers, Kutty said after the conference. “These are signs that they are ready to play in the enterprise market.”
Speakers like Kutty presented plenty of evidence that Linux is running mission-critical applications, including high-performance, real-time systems. In financial systems, real time generally refers to a transaction time measured in microseconds or milliseconds. The faster the transaction time, the more orders traders can execute and the faster those orders can be filled. Kutty explained how the Chicago-based CME converted its systems from Solaris SPARC servers to Red Hat Enterprise Linux and achieved better speed and reliability at lower cost, all while handling a large increase in electronic trade volume.

Completed in late 2004, the 18-month migration gave CME the backbone to grow from 250 million commodity contracts traded in 2003 to 1.2 billion contracts in 2007. In addition, transaction time fell sharply, from 200 milliseconds to 10 to 15 milliseconds, the closest to real time that is achievable today, he said.
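For a sense of what measuring latency at this scale involves, the sketch below times a single operation against Linux's monotonic clock, which is immune to wall-clock adjustments and is the usual basis for latency measurement. It is a minimal, hypothetical illustration; the process_order() function is a placeholder, not anything from CME's actual systems.

#include <stdio.h>
#include <time.h>

/* Hypothetical stand-in for submitting an order and awaiting the fill. */
static void process_order(void) {
    /* ... network round trip, matching, acknowledgment ... */
}

int main(void) {
    struct timespec start, end;

    /* CLOCK_MONOTONIC never jumps backward, so the difference between
       two readings is a reliable elapsed-time measurement.
       (On older glibc, link with -lrt.) */
    clock_gettime(CLOCK_MONOTONIC, &start);
    process_order();
    clock_gettime(CLOCK_MONOTONIC, &end);

    double ms = (end.tv_sec - start.tv_sec) * 1e3
              + (end.tv_nsec - start.tv_nsec) / 1e6;
    printf("order latency: %.3f ms\n", ms);
    return 0;
}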
IBM brings real-time Linux to U.S. Navy
Keith Bright, program director of IBM's Linux Technology Center, discussed another successful real-time Linux project: the creation of a centralized shipboard computing infrastructure for U.S. Navy destroyers under contract with Raytheon. IBM's Total Ship Computing Environment will run all Zumwalt-class destroyer applications, from weapons and command-and-control to radar and navigation. The centralized system will run on IBM BladeCenter and IBM x86 servers with real-time Linux and real-time Java.

For the Navy's project, IBM assembled a Linux team that first had to improve the quality of the kernel, fixing patches, debugging code and testing, and then integrate it with the Red Hat stack. In turn, this improved kernel became the foundation for IBM's Java Real Time, now called WebSphere Real Time, all on the open source model, he said.
“In 2005 [when IBM was working on the Navy’s real-time project], nobody wanted to play this game,” he added, referring to the distros’ reluctance to add real-time features to the Linux kernel and environment. “But open source has come a long way. It’s exciting to see Linux move into the mainstream.”

The first version of the computing platform was delivered in mid-2006, on time and on budget, Bright said. The greatest project challenge was its tight schedule, he said. No ships have yet been launched under the program, which is ongoing.

“Real time is pretty exciting in open source,” Bright said. “We had guaranteed real time and better throughput. It’s often one or the other. But real-time goals were achieved with minimum impact to performance as planned.” Real time is not about high performance but about determinism (the ability to prioritize tasks) and guaranteed execution, he noted.

Head Bubba, vice president of IT research and development at Credit Suisse, said he has validated a 40% performance improvement with the real-time Linux kernel. “This is an extreme case, but if you architect it correctly, you can see a performance boost,” he said. “But there is a tradeoff, because the real-time kernel could affect throughput. [The real-time kernel] is very stable, but it’s up to you to decide. Eventually, this will go mainstream.”
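Bright's distinction between raw speed and determinism has a direct expression in Linux's standard API: the POSIX SCHED_FIFO scheduling class lets a latency-critical task preempt all normal (SCHED_OTHER) tasks, which is the prioritization he describes. The sketch below is illustrative only, not from the Navy project; the priority value of 50 is an arbitrary assumption, and the call requires root privileges or the CAP_SYS_NICE capability.

#include <stdio.h>
#include <string.h>
#include <sched.h>

int main(void) {
    struct sched_param param;
    memset(&param, 0, sizeof(param));

    /* A SCHED_FIFO task runs ahead of every normal task until it
       blocks or yields, giving deterministic scheduling rather than
       best-effort throughput. Priority 50 (assumed here) sits
       mid-range between the limits of 1 and 99. */
    param.sched_priority = 50;

    if (sched_setscheduler(0, SCHED_FIFO, &param) != 0) {
        perror("sched_setscheduler");  /* likely EPERM without root */
        return 1;
    }

    printf("running with deterministic FIFO scheduling\n");
    /* ... latency-sensitive work goes here ... */
    return 0;
}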
Microsoft, Novell hypervisor interoperability
In a separate workshop, Microsoft and Novell representatives gave an update on their ongoing efforts, following the companies' 2006 agreement, to make their systems interoperable in areas including virtualization, directory and identity interoperability, and document formats.
Of particular interest was the two companies' ongoing effort to make their respective virtualization hypervisors work interchangeably on each other's operating systems. (Novell's SUSE Linux Enterprise uses the open source Xen hypervisor, and Microsoft uses its own Hyper-V hypervisor.) Interoperability will be achieved with special adapters for each system.
As a young data center player, Linux still has plenty of catch-up challenges ahead, including interoperability, better vendor support and additional management tools. And because computing itself is changing quickly, with added capabilities such as virtualization and cloud computing, that challenge involves some shifting ground. But conference speakers agreed that Linux has definitely arrived as a platform and is generating more than its share of innovation. The bottom line: proprietary vendors had better take Linux seriously.
By Pam Derringer, News Writer
14 Apr 2008 | SearchEnterpriseLinux.com