Cisco UCS

by Ryan 6. March 2012 13:15

Let's talk about Cisco UCS - Unified Computing System.

I help stand up new IT infrastructure all over the world, and I have been seeing a lot more of these lately. It's a pretty impressive system. In most small to mid-size shops you tend to see an onsite server closet, or maybe a small cage in a datacenter, full of HP ProLiants and Dell PowerEdges that are two or three generations old. But for the largest-scale enterprise operations, nothing beats the density and manageability of blades. (And their ability to lock you in with a particular vendor. ;)) Blade systems essentially do for hardware what hypervisors did for operating systems. Not only are you packing more into less and increasing your compute density, you're centralizing the management of your entire datacenter and simplifying the deployment process by orders of magnitude. What do I mean by that? Well, have some pictures worth a thousand words:

(Try right-clicking the images and opening in a new tab and you might get a better view.)

Turn this...ucs

... into this.

(The above image is courtesy of dalgeek - knightfoo.wordpress.com.)

Turn this...ucs wiring

... into this.

(I took that picture on the left myself a couple years ago, from a place I used to work.)

Now, when we talk about Cisco UCS, we're actually talking about a few discrete components that come together to form the UCS. First, we have the fabric interconnect. We'll use the Cisco UCS 6120XP 20-Port Fabric Interconnect as an example.

6120xp

It's a specialized 1U 10Gb (ten gigabit) switch that supports up to 160 servers or 20 chassis as a single, seamless system. (And remember each "server" can have dozens of VMs on it.) This particular switch is capable of 520Gbps of throughput. (I keep feeling like I'm making typos when I type numbers that large.)

The next piece is the blade chassis itself. Take the Cisco UCS 5108 Blade Server Chassis, for example. This thing is 6 rack units, making the entire solution so far 7U for what could potentially house hundreds of VMs. Those smaller bays along the bottom of the chassis are for the power supplies. Note that you can fill this chassis with either half-width blades (up to eight) or full-width blades (up to four). A full-width blade looks a little more like the traditional pizza box we're used to, and obviously has room for more stuff in it, but I think the extra flexibility offered by half-width blades is probably the reason they're the only ones I really see out in the wild.
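Just to put numbers on that density talk, here's a quick back-of-the-envelope sketch in Python. The blades-per-chassis figure is the 5108's eight half-width slots; the VMs-per-blade consolidation ratio is purely an assumption of mine for illustration:

```python
# Back-of-the-envelope density math for one fully scaled-out UCS domain.
# The VMs-per-blade number below is an illustrative assumption, not an
# official figure.

CHASSIS_PER_DOMAIN = 20     # max chassis behind a 6120XP fabric interconnect
BLADES_PER_CHASSIS = 8      # half-width B200s in a 5108 chassis
VMS_PER_BLADE = 20          # assumed (fairly conservative) consolidation ratio

blades = CHASSIS_PER_DOMAIN * BLADES_PER_CHASSIS
vms = blades * VMS_PER_BLADE
rack_units = CHASSIS_PER_DOMAIN * 6 + 2 * 1   # 6U per chassis plus two 1U interconnects

print(f"{blades} blades, ~{vms} VMs, in roughly {rack_units}U of rack space")
# -> 160 blades, ~3200 VMs, in roughly 122U of rack space
```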

Cisco UCS 5100 series

And lastly we have the blades themselves. Take the Cisco UCS B200 M2, a half-width blade, for example:

UCS Guts *Why yes, that is 192GB of DDR3 RAM, thanks for noticing*

And here's a little artist's depiction of what an entirely fleshed out "Unified Computing System" would look like. Note that you'd probably want some SAN storage somewhere for this to be considered a complete solution, beyond just the couple of disks that you can stick into each blade. I wonder how much storage you could get up there in the top 4 to 6 U of each cabinet...

racks of ucses

Hardware identities such as MAC addresses, WWNs, and UUIDs aren't burned into the blade; they live in a "service profile" that gets associated at the chassis slot level, so if a blade fails you can swap in a new blade and not have to reconfigure anything. You can also do things like automatically bring a host back up on a spare blade if one fails, and so on.
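If the "identity lives in the slot, not the blade" idea sounds abstract, here's a toy sketch of the concept. This is purely illustrative Python, not the actual UCS Manager API, and the names and MAC/WWN values are made up:

```python
# Toy model of a UCS-style service profile: the MAC, WWN, and UUID belong to a
# profile that is associated with a chassis slot, not to the physical blade.

class ServiceProfile:
    def __init__(self, name, mac, wwn, uuid):
        self.name, self.mac, self.wwn, self.uuid = name, mac, wwn, uuid

# Profiles keyed by (chassis, slot). Values here are made-up examples.
slot_profiles = {
    ("chassis-1", "slot-3"): ServiceProfile(
        name="esx-host-07",
        mac="00:25:B5:00:00:07",
        wwn="20:00:00:25:B5:00:00:07",
        uuid="0000-0007",
    ),
}

def identity_for(chassis, slot):
    """Whatever blade is sitting in this slot boots with the slot's identity."""
    return slot_profiles[(chassis, slot)]

# Swap a dead blade for a spare in the same slot and nothing upstream changes:
# the replacement inherits the same MAC/WWN/UUID, so no SAN zoning, DHCP
# reservations, or monitoring entries need to be touched.
print(identity_for("chassis-1", "slot-3").mac)   # 00:25:B5:00:00:07
```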

And lastly, it's all managed from a single web interface, UCS Manager. (I hope you like Java.)

So that all looks pretty amazing, right? There may be a couple of cons to going with Cisco UCS, however, and there are alternative blade systems to consider as well. You just have to weigh these pros and cons for yourself and your enterprise's situation. One of these possible cons is cost. The old adage goes that nobody in IT ever got in trouble for buying Cisco. They do make great stuff, but they also make practically the most expensive equipment in existence. Exact pricing is complicated and of course depends on exactly how you configure your equipment, but list price is somewhere in the ballpark of $20,000 per blade. Don't worry though, no one pays list price. Especially if you were to make a huge order like this, Cisco can be expected to have their discount pen at the ready. $10,000 - $12,000 per blade might be a more realistic figure. I count 288 blades in the picture above, putting your budgetary needs at somewhere around $2.9 - $3.46 million USD. (And we still don't have storage or networking yet... but you are well on your way to having one of the densest datacenters in the world.)
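For the curious, that budget figure is just blade count times the assumed discounted street price. The per-blade numbers are my rough guesses from above, not actual quotes:

```python
# Rough budgetary math from above: 288 blades at an assumed $10k-$12k each
# after discount. These prices are guesses, not quotes.
blades = 288
low_price, high_price = 10000, 12000

print(f"${blades * low_price / 1e6:.2f}M - ${blades * high_price / 1e6:.2f}M")
# -> $2.88M - $3.46M, before storage and upstream networking
```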

In the types of environments I'm most used to, I see one or two UCS chassis per datacenter, each behind a pair of clustered fabric interconnects for redundancy. In contrast, you might decide to go with HP c7000 blade enclosures filled with BL465c's. I see some of these as well, especially as things like cloud technologies drive aggressive IT expansion and the need to do more with your budget. You would almost certainly save a substantial amount of cash if you did go with HP or Dell; however, I think Cisco still has a very compelling price per blade, because the solution scales out extremely well and you only pay for the management stack once. (Or twice, if you're really reaching for the stars like we did above.)

So in conclusion, I'll just leave you with a couple of last things. Here is Cisco's UCS In A Nutshell documentation if I've whetted your appetite and you want more information. And here is a Cisco UCS emulator, if you'd like to play around with what it feels like to administer one of these things. And lastly, here are some tutorials to go along with that emulator software.

'Till next time!

