Thursday, June 26, 2014

Cisco Nexus 7000 - Basic Design Case Study and Lessons Learned

As a senior-level engineer with my company, I have the opportunity to do some basic system design. It’s not the kind of experience I would get with a VAR or a larger enterprise, but I count my blessings every chance I get to install and play with new gear.

We deployed a Nexus 7000 in our main datacenter three years ago for 10Gbps connectivity, and we’re now getting around to doing the same thing in our colocated DR site. Thanks to advancements in the technology, though, it doesn’t make sense for us to use identical hardware for the DR location. Here I’ll compare the old to the new and share some of the lessons learned while getting the new one set up.

Our older Nexus 7010 uses Sup1 supervisor engines and M1-series line cards. We started with a single-VDC (virtual device context) model, then later added an L2-only VDC to put in-line firewalls in the traffic path at scale. Having all M1 line cards made this really easy. We’re still running NX-OS v5.1.5 because we’ve had no particular reason to upgrade. Installation was made easier with help from our Cisco partner.
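
For illustration, here’s roughly what carving out that second context looks like from the default VDC’s CLI. This is a minimal sketch: the VDC name, ID, and port range are hypothetical placeholders, not our actual config.

    ! Minimal sketch; "fw-inline", ID 2, and Ethernet2/1-8 are made-up placeholders
    configure terminal
     vdc fw-inline id 2
      ! M1 ports can be allocated freely across VDCs, which made this split easy
      allocate interface Ethernet2/1-8
     exit
    exit
    ! Then jump into the new context and configure it as pure L2
    switchto vdc fw-inline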

Now there are M1, M2, F1, F2, F2e, and F3 line card models, each with a different architecture. I’ve been reading entire slide decks from Cisco Live explaining which features are supported with which combinations of line cards. Distilling that plethora of information down against our requirements is a formidable challenge. Add in the fact that we MAY WANT to do certain things in the future (OTV, for instance) and it gets even more interesting.

Our new N7K, which is also a 7010, has dual Sup2 supervisors along with M1 and F2e line cards. The M1 cards (model M148GT-11L) provide 48 copper 1Gbps RJ45 ports, and the F2e cards (model F248XP-25E) provide 1/10Gbps ports that take either fiber-optic transceivers or twinax cables. One key thing I’ve learned in my cram course on N7K modules is that we will need NX-OS v6.2 to mix M1 and F2e cards in the same VDC and keep the VDC model we already use in production. In this “proxy routing” mode, the F2e ports defer L3 forwarding decisions to the M1 cards in the same VDC. There’s also a key takeaway for my design: we cannot connect other routers to F2e ports while they rely on M1 cards for proxy routing.
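
To make that concrete, here’s a minimal sketch of what allowing both card families in one VDC looks like on 6.2. The VDC name, ID, and slot numbers are assumptions for illustration, not our actual config.

    configure terminal
     vdc prod-core id 2
      ! Permit both module families in this context (requires NX-OS 6.2)
      limit-resource module-type m1 f2e
      ! Hypothetical slots: M1 in slot 3, F2e in slot 4; once both are in the
      ! same VDC, the F2e ports proxy their L3 lookups to the M1 forwarding engines
      allocate interface Ethernet3/1-48
      allocate interface Ethernet4/1-48

A quick show vdc membership afterward confirms which ports landed in which context.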

All our existing routers at that location are 1Gbps only, so they can be connected to the M1 cards, but we’ll have to keep this limitation in mind for future connections. We may need to create an F2e-only VDC down the road if we want to terminate 10Gbps routers, as sketched below. I welcome your comments if you have experience with this.
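
If we do go that route, here’s a rough sketch of what an F2e-only VDC might look like; again, the name, ID, and ports are hypothetical.

    configure terminal
     vdc edge-10g id 3
      ! Restrict this context to F2e modules; with no M1 cards present, the
      ! F2e SoCs do their own L3 forwarding, so routers can terminate here
      limit-resource module-type f2e
      allocate interface Ethernet4/1-4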

The resources I’ve been using include some very smart folks on Twitter such as Ron Fuller (@ccie5851) and David Jansen (@ccie5952). Ron and David, as well as countless others, referred me to the F2e and M Series Design Guide for NX-OS 6.2. Honestly, I might not have known about this doc had it not been for Ron’s apparent omnipresence on Twitter. Many also pointed me to http://ciscolive.com/online and the great presentations there. Also, here’s a relevant discussion on Cisco’s Support Forums site: https://supportforums.cisco.com/discussion/11673636/nexus-f2e-series-modules

As always, hit me up on Twitter @swackhap if you have questions or comments. Or leave them below this post.
