The Blade Server Of The Future


By Kevin Houston

What does the future hold for blade servers? This is a great question that I get asked often, and it makes me wish I had a crystal ball. Of course, x86 CPUs designed for blade servers (and rack servers alike) will continue to get smaller, run faster and pack in more and more cores, but beyond the processors, what will forthcoming models look like? With 14 years in the IT industry and nearly 10 years in the blade industry, I have some ideas about what upcoming generations of blade servers will look like.

Looking at the trend toward converged networking (combining Ethernet and Fibre Channel protocols on the same fabric), we will see more vendors offer a converged network adapter (CNA) as standard on the blade server motherboard. With a converged fabric standard on each blade server, I expect demand for additional expansion cards, or mezzanine cards, to decline. Currently, blade servers average 2 CPUs and 12 to 18 memory slots. As fewer mezzanine slots are needed, CPU manufacturers and blade server vendors will gain internal space on the blade to offer more CPUs or more memory slots within a single blade server.
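
To make that space trade-off concrete, here is a minimal Python sketch of the board-space math. The slot areas are purely illustrative assumptions on my part, not vendor measurements:

    # Hypothetical sketch: rough board-space accounting for a single blade.
    # All numbers are illustrative assumptions, not vendor specifications.

    MEZZ_SLOT_AREA_CM2 = 40      # assumed board area per mezzanine slot
    DIMM_SLOT_AREA_CM2 = 10      # assumed board area per DIMM slot

    def extra_dimm_slots(mezz_slots_freed: int) -> int:
        """Estimate how many DIMM slots fit in the space freed by
        removing mezzanine slots once a CNA is standard on the board."""
        return (mezz_slots_freed * MEZZ_SLOT_AREA_CM2) // DIMM_SLOT_AREA_CM2

    # A blade with 2 mezzanine slots made redundant by an onboard CNA:
    print(extra_dimm_slots(2))   # -> 8 additional DIMM slots, in this rough model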

If you understand blade server architecture, you know that a blade without a mezzanine expansion card does not need the matching I/O modules in the blade chassis. If additional I/O modules are not required, blade server vendors can reduce the number of I/O module bays, which could lead to a chassis that draws less power than current designs or that frees up space for new features like local storage.
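
A rough sketch of the power argument, assuming hypothetical wattages for the chassis and its I/O modules (none of these numbers come from a real spec sheet):

    # Illustrative chassis power model; wattages are assumed, not spec values.

    IO_MODULE_WATTS = 60          # assumed draw per blade I/O module
    BASE_CHASSIS_WATTS = 4500     # assumed draw for blades, fans, management

    def chassis_power(io_modules: int) -> int:
        """Total chassis draw as a function of populated I/O module bays."""
        return BASE_CHASSIS_WATTS + io_modules * IO_MODULE_WATTS

    # Dropping from 8 I/O module bays to 4 once mezzanine traffic folds onto CNAs:
    saved = chassis_power(8) - chassis_power(4)
    print(f"Estimated savings: {saved} W per chassis")   # -> 240 W in this model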

[Image: Shared storage on the HP BladeSystem c7000 (http://bladesmadesimple.com/wp-content/uploads/2010/11/Shared-Storage-on-BladeSystem-c7000.jpg)]

As virtualization becomes the de facto standard in x86 environments, I also believe a more modular blade architecture will become a requirement. Separating I/O from CPU and memory is a necessity for future generations of blade servers, especially as CPU chipsets evolve and demand more blade server real estate. Standardizing a protocol that lets all blade vendors attach external I/O drawers to their chassis would allow vendors to provide additional I/O capacity on demand for each blade server.

[Image: Shared I/O (http://bladesmadesimple.com/wp-content/uploads/2010/06/Shared-IO.jpg)]
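
To illustrate the idea of enclosure-level I/O mapping, here is a minimal Python sketch of a shared I/O drawer that grants port capacity to identical blades on demand. The SharedIODrawer class and its assign method are hypothetical names of my own; no vendor ships this API:

    # Minimal sketch: blades are identical and request I/O capacity from a
    # shared external drawer, mapped logically at the enclosure level.

    class SharedIODrawer:
        def __init__(self, total_ports: int):
            self.free_ports = total_ports
            self.assignments = {}          # blade slot -> ports granted

        def assign(self, blade_slot: int, ports: int) -> bool:
            """Grant I/O ports to a blade if the drawer has capacity left."""
            if ports > self.free_ports:
                return False
            self.free_ports -= ports
            self.assignments[blade_slot] = self.assignments.get(blade_slot, 0) + ports
            return True

    drawer = SharedIODrawer(total_ports=32)
    drawer.assign(blade_slot=1, ports=4)   # blade 1 gets 4 ports
    drawer.assign(blade_slot=2, ports=2)   # blade 2 gets 2 ports
    print(drawer.free_ports)               # -> 26 ports still available on demand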

As mentioned above, I speculate that future CPUs will need more memory DIMM slots, and with that will come the need for more power. With power supply efficiency already approaching its practical limit, liquid cooling may eventually become a requirement. Introducing liquid into a datacenter brings great challenges, both physical and political, so this is probably a long shot, but if vendors can make it happen, it would allow more to be done with blade servers in a lower power envelope.
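
Some back-of-the-envelope math shows why the power problem grows with DIMM count. The per-DIMM wattage below is an assumed average, not a measured figure:

    # Back-of-the-envelope DIMM power math; per-DIMM wattage is an assumption.

    WATTS_PER_DIMM = 5            # assumed average draw per populated DIMM

    def memory_power(dimm_slots: int) -> int:
        """Memory subsystem draw for a fully populated blade."""
        return dimm_slots * WATTS_PER_DIMM

    # Growing from 18 to 32 DIMM slots per blade:
    growth = memory_power(32) - memory_power(18)
    print(f"Added memory load: {growth} W per blade")   # -> 70 W in this model

    # A power supply already in the mid-90s percent efficiency can only claw
    # back a few more points, so added load of this size has to be addressed
    # by better cooling rather than better power conversion.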

No one has a crystal ball and no one can predict the future, but hopefully some of the ideas I’ve written about above will someday become a reality.

Kevin Houston is the founder of BladesMadeSimple.com and is a panelist for COMDEXvirtual's session on Next-Gen Server Architectures. He can be reached at BladesMadeSimple@gmail.com.

Comments

  • I see three patterns emerging over the next few years.

    1. Massively dense systems built from low-cost components, used by Microsoft, Amazon, Google, and similar-sized companies.

    2. Large companies' internal products and mid-sized hosting companies.

    3. Low-cost entry systems for companies that can't use clouds.

    Long term (2015-2020), I see a move to massive timesharing environments where hardware is only sold to large hosting providers, as the power of the systems has outgrown 99% of all corporate needs.

    Type one will be fast to deploy in large chunks of capacity (as in shipping-container sizes) and will be replaced, not repaired, in the same chunks of capacity. Each chunk will have built-in replacement capacity, like the pattern used in solid state drives today, where x capacity is not reported in the usable size of the drive to allow for swapping out bad storage cells. The unit of capacity will be large and will be replaced when x% of that capacity is out of compliance with operating tolerances.

    Types two and three will differ in scale and level of automation, but both will be able to scale easily from single-CPU systems to multi-CPU systems using 40-100 Gbps Converged Enhanced Ethernet carrying iSCSI and FCoE, both of which may be replaced by SATA over Ethernet around 2016 (the most likely pattern). I agree that the blades need to move interface circuitry to the enclosure so all blades are identical and are logically mapped to specific types of I/O at the enclosure level.
