Thursday, June 27, 2013

Cisco Live Thursday Lessons Learned

My first session today was BRKRST-3114, The Art of Network Architecture, presented by Denise Donohue (@denise_donohue), Russ White, and Scott Morris (@ScottMorrisCCIE). They talked about how architecture is "the intersection of business and technology" and went into detail about how to better understand a customer by doing a SWOT analysis (Strengths, Weaknesses, Opportunities, and Threats). Having been in the Air Force for over 5 years, I really appreciated that Russ, who is also an Air Force veteran, introduced the audience to the concept of an OODA loop (Observe, Orient, Decide, Act). In the military, we were taught that you want to shrink your OODA loop to be smaller than your enemy's in order to defeat them. Similarly, in business you want to shrink your OODA loop to be smaller than your competition's by best employing IT resources to help your customer succeed.
 
I was able to spend some more time in the World of Solutions expo, where I visited some areas of the Cisco booth. I'm working on a project to replace some access switches as well as their aggregation point. When I mentioned the plan to use Catalyst 3750X switches for access, I was asked "why not 3850s?" Based on my conversation with the engineer, the Catalyst 3850s (see data sheet here) come in 24- and 48-port variants and offer three uplink module options: 4x1G, 2x10G, and 4x10G. The 3850 is the same price as the 3750X and has better performance capabilities, with these caveats:
  1. Can only stack up to 4 currently (should be updated in Fall 2013)
  2. Not every feature supported by 3750X is supported by 3850 yet
  3. The 3850 runs IOS XE whereas the 3750X runs IOS
For the aggregation, I believe the best option to support 27 network closets, each with 2x10Gbps uplinks, would be a pair of 4500X switches (see data sheet here) configured as a VSS pair. Each 4500X can be ordered with either 16 or 32 onboard 10G ports and includes an expansion slot for an additional 8x10G ports, for a maximum of 40 10G ports. Each 4500X would be ordered with 32 ports (and no expansion module): one uplink from each of the 27 closets lands on each VSS member, plus 2x10G uplinks to the core Nexus 7k, which uses 29 of the 32 ports per switch. This is another great example of how spending 10 minutes at Cisco Live can save literally hours of research online and/or discussion with my account team.
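For the curious, here's roughly what the VSS side of that design would look like before conversion. This is just a sketch from memory; the domain number, port-channel number, and 10G interface numbers are placeholders to verify against the 4500X configuration guide.

    ! On the first 4500X (the second uses "switch 2")
    switch virtual domain 100
     switch 1
    !
    ! Dedicate a pair of 10G ports to the virtual switch link (VSL)
    interface Port-channel63
     switch virtual link 1
     no shutdown
    interface range TenGigabitEthernet1/31 - 32
     channel-group 63 mode on
     no shutdown
    !
    ! Then, from exec mode on both switches:
    ! switch convert mode virtual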
 
My last session of Cisco Live was the annual end-of-the-week panel presentation and discussion with the NOC team. Session PNLNMS-3000, titled Cisco Live Network and NOC, was moderated by Jimmy-Ray Purser (@JimmyRay_Purser) of Techwise TV. I took the opportunity to live-blog the event using the hashtags #clus and #noc. Below is a transcript of the live tweets in reverse chronological order. (Sorry, I couldn't figure out an easy way to reverse them.) This year's show went VERY well for the NOC team, particularly for wireless. Well done Cisco Live! Thanks to Keith Parsons (@KeithRParsons) for referring me to http://allmytweets.net to easily copy and paste them here.
  • .@JimmyRay_Purser did a great job moderating this panel #clus #noc 
  • Applause for question managers that have been answering questions in the #clus app #noc 
  • Q: How many boxes got stolen this year? A: 1 classroom switch and an AP and switch loaned to vendor #clus #noc 
  • Question: was there a noticeable uptick in HTTPS over HTTP over last year? Answer: Yes #clus #noc 
  • They used @Splunk to help with security analysis of firewall logs, etc. #clus #noc 
  • The esteemed #clus #noc panel http://t.co/ho2jPjDpZg 
  • Mobile app developed outside of Cisco, delay due to CA cert used and not the network (maybe a cert check?) #clus #noc 
  • Things were rushed with the mobile app, lessons learned, they plan to make experience smoother next year #clus #noc 
  • HTTP data is still being processed for top websites used, NetMan might publish blogpost about it when done #clus #noc 
  • All other controllers for session rooms and hallways ran v7.3MR #clus #noc 
  • WoS controllers started on v7.3, needed more tweaks based on devices seen, so moved down to v7.2, which gave the “knob” needed #clus #noc 
  • They have months of WebEx sessions in advance to prep for show #clus #noc 
  • Collaboration done over Google Docs in many cases to share IP address info, etc; used Push-to-talk radio to communicate on-site #clus #noc 
  • IPv4 used exclusively for NetMan, IPv6 only used for DHCP #clus #noc 
  • no IPv6 was provided in WoS wireless to ensure stability and reduce the load that would have been needed for IPv6 multicast #clus #noc 
  • Jimmy-Ray is taking questions. Anybody? #clus #noc 
  • “Thank you for exercising our network and attending Cisco Live” #clus #noc 
  • Network was 100% reliable for the duration of the show #clus #noc #applause 
  • video streaming exceeded HTTP for traffic breakdown #clus #noc 
  • Vendors would sometimes shut off things, including switches in rooms, to help save power #oops #clus #noc 
  • Intelligent Automation - allowed users to use web portal to switch a port to a particular vlan without knowing details #clus #noc 
  • switches would use EEM to figure out themselves what VLAN they were on by pinging all possible gateways then self-configure #clus #noc 
  • Used EEM to set port descriptions based on CDP neighbors plugged in (embedded automation) #clus #noc 
  • used Cisco Prime LMS to help provision IDF and room switches #clus #noc 
  • …Prime Infrastructure, StealthWatch, Plixer; syslog also sent to FreeBSD and forwarded to interested parties #clus #noc 
  • Flex Netflow sent from 6500 core and dist switches to FreeBSD VM “exploder” which forwarded to other collectors… #clus #noc 
  • SNMPv3 authPriv (SHA/DES) with ACLs, NAM 2304 appliance used for traffic volume and utilization #clus #noc 
  • Joe Clarke - Network Mgmt - very impressed with a lot of Network Academy folks he worked with #clus #noc 
  • peak 10k IOPs, peak data rate 140MB/s #clus #noc 
  • Colo storage: Sunnyvale NetApp FAS2240-4 26 TB total cap, mirrored to it from local DC each night for backups #clus #noc 
  • 12 TB provisioned to VMware x2 mirrored to HA partner, 28% saved on dedup, 8.6TB used on disk #clus #noc 
  • 18TB provisioned to VMs (mostly thick provisioned); 6TB saved by thin provisioning; 14TB physical capacity avail #clus #noc 
  • Self-paced labs used virtual desktops running on NetApp storage with UCS #clus #noc 
  • All recordings from all sessions go to this storage, higher workload than last year, video surveillance stored on UCS local disk #clus #noc 
  • NetApp FAS31240 HA Pair, 2x DS2246 Disk Shelves, same equipment as last year #clus #noc 
  • Patrick Strick - NetApp in Datacenter #clus #noc 
  • Physical safety and security - 6001 events consumed, 12 physec tickets, monitoring based on motion detection #clus #noc 
  • security analytics: 1.2B events sysloged; 12 events resulted in FW blocks #clus #noc 
  • Adam Baines - remote monitoring services: core fault mgmt, security event, physical safety and security video #clus #noc 
  • Bus cams used DMVPN over LTE, worked very well #clus #noc 
  • He has some interesting footage of us coming back from CAE last night on the buses #clus #noc 
  • Able to analyze lines of people to help optimize for future events #clus #noc 
  • 6TB data storage consumed for video surveillance, 35 mobile cams on hotel shuttles, running on UCS in DC #clus #noc 
  • Physical Security with Lionel Hunt, worked with John Chambers' head of security, 45 cameras deployed, 2Mbps per camera #clus #noc 
  • Some people doing call-home to botnets - check your stuff #clus #noc 
  • maxed around 1000 conns/sec, FWs never passed 7% CPU #clus #noc 
  • 26.5 TB transferred through firewalls through the week #clus #noc 
  • No firewall failover even when cables were removed and replaced during full production at 800Mbps of throughput #clus #noc 
  • Secure Edge Architecture, ASAs deployed in transparent mode active/standby HA, failover only occurs when 2 ints failed #clus #noc 
  • ASA5585X SSP-60, 2 pair, IPS-SSP-60 (4) for IPv4; ASA5585-X SSP-20, 1 pair, IPS-SSP-20 (2), for IPv6 #clus #noc 
  • Security - Per Hagen; CSM 4.4, Cisco Cyber Threat Defense #clus #noc 
  • Apple 6K clients, Intel 2k clients, Samsung 953 clients total for week #clus #noc 
  • 60% clients on 2.4GHz, 1 on 802.11b, 171 802.11a, 300 802.11g #noc #clus 
  • Peaked at 13.4K clients Tues and Wed, today crossed 10K clients on wireless, 293 per AP for the big rooms #clus #noc 
  • 180x3502P w/Air-ANT25137NP-R stadium antennas to cover keynote and WoS #clus #noc 
  • 300x3602 APs in hallways/session rooms in OCCC, 110x3602 APs in Peabody, 87 in-house APs for some coverage in OCCC #clus #noc 
  • 7x58 controllers for session rooms, hallways, and Peabody; 3x5508 controllers for Keynote and WoS areas; 4xMSE 7.5 for Location #clus #noc 
  • Mir Alami - wireless - TME, very happy about how well things went this year #clus #noc 
  • EEM scripts and Twitter’s API were used to tweet from @CiscoLive2013 account from the distribution switch #clus #noc 
  • Quad redundancy with Quad Sup SSO, new feature as of May, 15.7K unique IPv4 macs, 7.8K unique IPv6 macs #clus #noc 
  • …Flex Netflow on Sup2T for IPv4 and IPv6 traffic; 1TB of multicast traffic during show #clus #noc 
  • VSS Quad-Sup SSO and Multichassis Etherchannel, OSPF and BGP for IPv4 and IPv6, SNMPv3, CoPP, Syslog, etc for NetMan…#clus #noc 
  • Connection was also provided to Peabody’s 4500 switch(es) for their meeting rooms #clus #noc 
  • 2x6509E VSS, Sup2T, 40G backbone; Dist: 2x6513E + 2x6504E, Sup2T, 40G Ethernet #clus #noc 
  • Divya has done several shows last few years including Interop core #clus #noc 
  • Next up: Divya Rao, Switching Backbone #clus #noc 
  • Multi-hop FCOE used in DC with N7004 pair but ran into problems…solution was multiple VDC #clus #noc cc/ @drjmetz @ccie5851 
  • IPv4 220K PPS Denver, 74K PPS Sunnyvale; IPv6 12.7K PPS…8% traffic was IPv6 on avg #clus #noc 
  • Local AS 64726…”thank you for stressing my network”…940Mbps from Denver, 615Mbps from Sunnyvale peaks #clus #noc 
  • RPKI validation tested this year with SoBGP for IPv4 and IPv6 for full Internet routing table #clus #noc 
  • Sunnyvale, Denver uplink sites for Centurylink #clus #noc 
  • Networking Academy had 40 people here all week #clus #noc 
  • CenturyLink ISP had rep on-site all week. Savvis provided DC services #clus #noc 
  • Routing and DC: Patrick Warichet #clus #noc 
  • 8 panelists will each present for 7 mins #clus #NOC 
  • PNLNMS-3000 Cisco Live Network and NOC, with Jimmy-Ray Purser #clus 

Cisco Live Wednesday Lessons Learned

My first session today was BRKARC-3472, NX-OS Routing Architecture and Best Practices presented by Arkady Shapiro, Technical Marketing Engineer (TME) for NX-OS and Nexus 7000. I thought Arkady was very entertaining and engaging as he delved into the depths of L3 on the N7K. Some of my key takeaways (may or may not be important in your line of work):
  1. Routes can be leaked between VRFs by enabling "feature pbr" and setting up route-maps with "match ip" statements and linking them with "set vrf" commands. (ref: slide 50)
  2. Routes can be leaked with VRF-lite without an MPLS license by redistributing the IGP into BGP and using "route-target export" and "route-target import" commands under the BGP routing configuration of each VRF (see the sketch after this list). (ref: slide 52)
  3. Auto-cost reference bandwidth by default is 100Mbps in IOS but 40Gbps in NX-OS.
  4. A BGP best practice is to use "aggregate-address a.b.0.0/16" under the BGP routing configuration. Do NOT use "network a.b.0.0/16" under the BGP routing configuration, and do NOT use "ip route a.b.0.0/16 Null0" under the VRF; if a "network" statement matches a static route to Null0, MPLS traffic to that route may be dropped. (ref: slide 92)
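To make takeaways 2 and 4 a little more concrete, here's a rough NX-OS sketch of the VRF-lite route leak plus the aggregate-address approach. The VRF names, OSPF instance, AS number, route-targets, and prefix are all made up for illustration, so treat this as a starting point rather than a verified config.

    feature bgp
    !
    route-map OSPF-TO-BGP permit 10
    !
    vrf context BLUE
     address-family ipv4 unicast
      route-target export 65000:1
      route-target import 65000:2
    vrf context RED
     address-family ipv4 unicast
      route-target export 65000:2
      route-target import 65000:1
    !
    router bgp 65000
     vrf BLUE
      address-family ipv4 unicast
       ! Pull the IGP routes into BGP so they can be leaked (NX-OS wants a route-map)
       redistribute ospf 1 route-map OSPF-TO-BGP
       ! Takeaway 4: advertise the summary with aggregate-address,
       ! not a "network" statement backed by a static route to Null0
       aggregate-address 10.1.0.0/16 summary-only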
For lunch I had the opportunity to spend time with some of Solarwinds' Head Geeks (@headgeeks) for two lunch-and-learn style presentations. The first session, called "Don't Forget The Superglue," was introduced by Carlos Carvajal (Market Strategy) and presented mainly by Patrick Hubbard (The Head Geek). The reference to "superglue" alluded to the tools that Solarwinds offers to help in the day-to-day running of the network and IT in general. Tools mentioned included:
  1. Web Help Desk - automated ticketing, asset management, knowledge base, communication
  2. Network Configuration Manager (NCM) - automatic config backup, realtime change alerts, compliance reporting
  3. Firewall Security Manager (FSM) - Java-based, runs on workstation, automated security and compliance audits, firewall change impact modeling, rule/object cleanup and optimization, can download configs from firewalls directly or from NCM
  4. Network Topology Mapper (NTM) - successor to LanSurveyor - network discovery, mapping, reporting, can export maps to Orion and open them in Orion Atlas
The second session covered some recent updates to Orion Network Performance Monitor (NPM) v10.5. Again introduced by Carlos Carvajal, this was presented by Michal Hrncirik, Product Manager for several of Solarwinds' applications. A couple key items that interested me:
  1. Interface discovery can be filtered for import - for instance, you can tell it to only select trunk ports and not access ports on switches, then it will show you a list of all ports and the devices they belong to so you can manually uncheck ones you don't want to import.
  2. Route monitoring - NPM will poll routes from the routing table. Although Michal said EIGRP isn't yet supported, I have actually seen EIGRP routes pulled from my IOS and NX-OS routers. The IOS routers showed them labeled as EIGRP (I think) and NX-OS showed them as "Cisco IGRP" in Orion. I'm pretty excited about the possible alerts we can set up with this type of monitoring.
Many thanks to Kellen Christensen (@ChrisTekIT) for taking the time to talk with me about his experience with Palo Alto firewalls. 

Tuesday, June 25, 2013

Cisco Live Tuesday Lessons Learned

My first session today was BRKRST-2336, EIGRP Deployment in Modern Networks. This was a new session presented by Don Slice and Donnie Savage (@diivious), who have been managing EIGRP since 1995. I've attended Don's "Care and Feeding of EIGRP" in past years at Cisco Live, and it's always a pleasure to attend his presentations. My key takeaways:
  1. EIGRP is no longer proprietary. Cisco has published an IETF Open-EIGRP Informational Draft. This means other companies can now implement EIGRP into their products if/when customers demand it.
  2. Neighbor authentication done with MD5 is no longer secure enough, so they've implemented SHA2-256 Hash-based Message Authentication Code (HMAC) to protect EIGRP messages exchanged between routers (see the sample config after this list).
  3. The advent of 10Gbps links made it necessary to change the formula used to compute EIGRP metrics, now referred to as Wide Metric Support. They mentioned this was supported as of EIGRP release 8 and that the "show eigrp plugin" command would show the version, but I tried on an NX-OS and an IOS router in my network and those commands didn't seem valid.
  4. How many of us enterprise customers use EIGRP in the LAN and have to redistribute with BGP for MPLS circuits? The problems inherent in this redistribution (which I have personally experienced, sometimes painfully) led them to create a new feature called Over the Top (OTP), which uses LISP to bridge two EIGRP-speaking "CE" routers across a provider's MPLS cloud. One of the CE routers acts as a "route reflector" (term stolen from BGP) to consolidate route sharing amongst multiple CE routers connected to the MPLS cloud. OTP is shipping this month or next for IOS XE, then IOS in November.
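For reference, here's what the SHA-256 authentication from takeaway 2 looks like in named-mode EIGRP on IOS. This is a sketch only; the process name, AS number, interface, and key are placeholders.

    router eigrp CAMPUS
     address-family ipv4 unicast autonomous-system 100
      af-interface GigabitEthernet0/1
       ! HMAC-SHA-256 replaces the older MD5 key-chain approach
       authentication mode hmac-sha-256 MySecretKey
      exit-af-interface
     exit-address-family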
The Opening Keynote this morning was hosted by Cisco Chief Marketing Officer Blair Christie (@blairchristie) and featured the perennial presenter John Chambers as well as Cisco CTO Padmasree Warrior (@padmasree) and Cisco's "Chief Futurist" Dave Evans (@davethefuturist). The presentation focused on the evolution of the "Internet of Everything" or IoE. As sensors shrink and become wearable, we will continue to be surrounded more and more by connected devices that will, according to Dave, eventually become self-aware. The obvious comparison to Skynet (http://en.wikipedia.org/wiki/Skynet_(Terminator)) was shared amongst the folks I was sitting next to. I for one WELCOME our new robot overlords. ;-)
 
I also attended BRKVIR-2019 Hypervisor Networking: Best Practices for Interconnecting with Cisco Switches. This was an excellent overview of basic networking terms and what they mean from the perspective of the VMware vSphere, Microsoft Hyper-V, and Citrix XenServer hypervisors. This session helps translate the terminology used by the hypervisor vendors to the terminology that Cisco uses for switch connections.
 
I was able to spend a bit more time on the expo floor, a.k.a. the "World of Solutions" (WoS). Some awesome TAC engineers in the Technical Solutions Clinic were able to help me figure out something with a Nexus 7000 that had been puzzling to me for quite some time. I popped my laptop open, connected to my company's network, and got on the N7K while the TAC folks watched over my shoulder. (By the way, I'm very impressed with the CiscoLive2013 conference wireless which, in past years, hasn't worked at all on the show floor.) I can't overemphasize how AWESOME it is to have these TAC folks here. Just being near them makes me feel smarter through osmosis.
 
As I have been researching IPAM vendors, I also visited BlueCat Networks and Infoblox and got to geek out with an engineer at each of their booths while they showed me their respective products. Both seem solid, intuitive, and easy to use, and even though BlueCat has a plugin for VMware automation, I've heard a lot more about how well integrated Infoblox is with VMware's vCenter Orchestrator and vCloud Director. In addition, Infoblox seems to have a unique way to visualize the IP networks as well as the subnets and IP ranges within them that are available, assigned via static or DHCP lease, etc. I would need to see significant savings or other benefits compared to Infoblox to be convinced that BlueCat is the way to go, at least for my company.
 
It almost goes without saying at this point that I met more fantastic folks today, both in sessions and through Twitter, that continue to make this an amazing and rewarding experience. 

Monday, June 24, 2013

Cisco Live Monday Lessons Learned

I attended a great session today on Cisco's Overlay Transport Virtualization (OTV), supported on the Nexus 7k and ASR 1k platforms (BRKDCT-2049 - click here if you have a CiscoLive365 account). OTV is an L2 datacenter interconnect (DCI) technology proprietary to Cisco that is meant to help solve certain problems of traditional L2 VPNs, including pseudo-wire maintenance, and to better support multi-homing. In my enterprise role, it's important to understand how we might be able to use this kind of tech for upcoming projects and be able to present supportable ideas to my partners in IT as well as the business we support.
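For context, a minimal OTV setup on a Nexus 7000 looks something like the sketch below. The multicast groups, VLAN range, site identifier, and interface names are placeholders, and this assumes a multicast-enabled transport between sites.

    feature otv
    !
    otv site-vlan 99
    otv site-identifier 0x1
    !
    ! The join interface faces the DCI transport and needs IGMPv3
    interface Ethernet1/10
     ip igmp version 3
    !
    interface Overlay1
     otv join-interface Ethernet1/10
     otv control-group 239.1.1.1
     otv data-group 232.1.1.0/28
     ! VLANs to extend between data centers
     otv extend-vlan 100-150
     no shutdown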
 
Also on my schedule was Virtual Device Context (VDC) Design and Implementation Considerations with Nexus 7000 (BRKDCT-2121) by Ron Fuller (@ccie5851). I've had the good fortune of meeting with Ron in the past and continue to interact with him on Twitter, and he's especially helpful in answering questions (sometimes almost in real-time). The material went into great detail and is important for me since I helped install and continue to support a Nexus 7k routed core. A key takeaway is that VDCs on the Nexus 7k are industry certified under FIPS 140-2 and the Common Criteria Evaluation and Validation Scheme (Cert #10349). NSS Labs has also certified it as PCI compliant. The bottom line is that many customers can now collapse their Internet Edge, DMZ, and Core switching requirements into a single pair of N7Ks. There's also support for FCoE to help converge storage and IP traffic in the datacenter.
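As a quick illustration of what that collapse looks like operationally, carving out a separate context on the N7K is only a few commands. The VDC name and interface range here are made up, and interfaces generally have to be allocated in whole port groups depending on the line card.

    vdc DMZ
     allocate interface Ethernet1/1-8
    !
    ! Jump into the new context and configure it like a standalone switch
    switchto vdc DMZ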
 
Thanks to the power of Twitter (once again), I arranged a real-life meet-up with Phillip James (@security_freak) and Jake Snyder (@jsnyder81) to discuss 802.1x and NAC. Kellen Christensen (@ChrisTekIT) joined the discussion to learn from Phillip and Jake what it takes to implement 802.1x. It sounds like it's much easier to do with wireless than with wired! The statistic "95% of wired 802.1x implementations fail" was thrown out, which certainly grabbed my attention. My key takeaways from this conversation, some based on my own (feeble) knowledge:
  1. Go slow. Start with Monitor Mode, then Low Impact Mode, then eventually work your way to High Security Mode (see the sample port config after this list).
  2. Be realistic and up-front with all critical players (desktop support, printer support, help desk, key users, management, etc). Partner with them and help them understand that this "may hurt a little" (my words).
  3. Cisco's NAC appliance was replaced by Cisco Identity Services Engine (ISE), which supports RADIUS (basic as well as advanced functions defined in multiple RFCs). Cisco Secure ACS Server v5 is the current product that supports TACACS+. ISE doesn't currently support TACACS+.
  4. Aruba ClearPass supports RADIUS and TACACS+ as well as similar functions compared to ISE (security policy, endpoint identification/profiling). 
  5. I need to research what exact features are supported on the 3750/3750E/3750X access switches we're looking to deploy this on as well as what exact features and RFCs are supported by ISE and ClearPass.
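To make item 1 a bit more concrete, a monitor-mode access port on a Catalyst switch might look something like this (classic pre-IBNS syntax; the interface is a placeholder and the RADIUS/ISE plumbing is left out, so treat it as a sketch):

    interface GigabitEthernet1/0/10
     switchport mode access
     ! Monitor mode: authenticate and log results, but never block traffic
     authentication open
     ! Allow a phone plus a PC connected behind it
     authentication host-mode multi-domain
     authentication port-control auto
     mab
     dot1x pae authenticator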
Another highlight of my day was meeting more Tweeps IRL (in real life) such as Matthew Norwood (@matthewnorwood). And many thanks to Amy Lewis (@commsninja) and her Cisco Datacenter team for hosting Waffle Club (ssh…the first rule about Waffle Club is don't talk about Waffle Club). Lots of great discussions there and I look forward to many more!
 

Sunday, June 23, 2013

Cisco Live Sunday Lessons Learned

Sunday was Day 1 for me at Cisco Live. Here are my key takeaways.
 
I attended the 4-hour morning session LTRSEC-2014 "Basic Network Threat Defense, Countermeasures, and Controls" with Randy Ivener and Joe Karpenko. Whether you're an Enterprise or Service Provider, unicast reverse-path forwarding (uRPF) checks can enhance security and clean up logs on edge routers. Rather than using an ACL to block packets sourced from undesired address ranges (e.g. RFC 1918 and RFC 5735) or spoofed from your own addresses, you can implement uRPF to black-hole the traffic in CEF. Benefits include cleaner logs and lower processor overhead (depending on hardware) because the uRPF check is done in CEF. You still need the ACL, but uRPF can help.
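For what it's worth, strict-mode uRPF is a one-liner per interface (swap "rx" for "any" to get loose mode); the interface name is just a placeholder:

    interface GigabitEthernet0/0
     ! Drop packets whose source address isn't reachable back out this interface per CEF
     ip verify unicast source reachable-via rx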
 
Many thanks to Ed Wheadon (@avalonhawk) for weighing in on my IPAM self-task (see my Cisco Live To Do List here). I didn't know Windows Server 2012 includes IPAM functionality. I'll have to check that out.
 
Kathleen Mudge (@kathleenmudge) and her crack Social Media team did a GREAT job this year putting together a beautiful and functional Social Media Hub that was accessible from Day 1, and they continue to promote the Cisco Live conversation through building online relationships among attendees. Oh, and the Scavenger Hunt was a blast (and it's only just begun)!
 
So many great folks here! Looking forward to meeting so many more smart people.

Saturday, June 22, 2013

Swack's Cisco Live To-Do List


My company pays a lot of money to send me here to Cisco Live. That's likely the case for you as well (if you're also here). I've had a list at past conferences of what I wanted to accomplish but never really published it outside my head. This year I'm holding myself more accountable and putting it here. Many are things I could do quite easily back in the office if I didn't have distractions. Now I can focus AND talk to the smartest folks in the industry about how they do business. Here are some of the many things I hope to accomplish this year.

1. Better understand the Catalyst 4500 series and how I can use them as an aggregation point for 10-gig connected closet switches. I've never really worked with them so getting a better idea of how they work, benefits and drawbacks, and deployment options is key. How else could I provide resilient aggregation for 27 network closets with 2x10G links each?

2. Learn AMAP (as much as possible) about 802.1x and how Cisco switches and phones handle it. What are the deployment methods and models? How can we use certificates or other methods like MAC Authentication Bypass (MAB) for Cisco VoIP phones where we have a client connected behind the phone? What are the capabilities of Cisco Secure ACS and Cisco Identity Services Engine (ISE), and how do they compare with other RADIUS options such as Aruba Networks ClearPass Policy Manager (CPPM) or just a simple Windows RADIUS server?

3. Talk more in detail with Solarwinds Head Geeks and other smart engineers about how the latest version of Orion NPM Route Polling works. How can we map over 1200 locations using Orion so our retail support teams can better take advantage of Orion's power and knowledge? How can we use Orion NPM and NCM to possibly replace our existing legacy Linux-based config generation tool for store routers and provision them in an automated way?

4. How should I troubleshoot high received errors on ASA and router interfaces (specifically 7200 series)?

5. What are my options for expanding a pair of 5548UP Nexus switches as I keep adding FEX and running out of ports? If I add another pair I add another point of management (boo!). If I replace with 5596s how do I handle the transition and what can I get for trading in the 5548s?

6. How can I get our NX-OS gear properly sending syslogs to our syslog server? (I already know this is a great question for the TAC folks that are here.)

7. Learn more about how IP Address Management (IPAM) vendors can prepare us for an 802.1x deployment, especially in terms of learning our existing MAC addresses for a MAB table. I've heard of Infoblox and BlueCat. Any others worth looking at?

8. Get familiar with Cisco's Next Gen Firewall capabilities and how it compares to certain competitors, particularly Palo Alto Networks.

I welcome your comments/feedback below or directly on Twitter (@swackhap).

-Swack

Saturday, June 1, 2013

VMware View Problems with 64-bit Windows 7 Virtual Desktop

We've been growing our Virtual Desktop Infrastructure (VDI) quite a bit lately, and as a result I've taken ownership of a shiny new Windows 7 64-bit virtual desktop.  Unlike the 32-bit Win7 VM I used before, though, this one has been giving me trouble.

The trouble starts when I am trying to reconnect to the already booted VM from a machine other than the last one I was on.  Specifically, I use Windows 8 64-bit at work on a Dell tower with 4 monitors (two dual-monitor graphics cards).  I use my VDI VM all the time from that machine on all four monitors.  I also have a Macbook Pro (MBP) that I take to meetings and use outside the office.  

Sometimes (not always) when I reconnect to my VM from my MBP I get a black screen with a mouse cursor and nothing else. After waiting a minute, I either disconnect or quit the View application and re-launch. Reconnecting the second time gives me an error indicating that desktop resources are busy. When this happens I cannot even connect via RDP, let alone through the usual way via the View broker. When I attempt to restart the guest OS through vCenter it never actually reboots; I have to power cycle the VM instead.

I worked with VMware Support but unfortunately haven't been able to fully solve the problem.  The View support folks have looked thoroughly at our setup and don't see anything that could be causing problems.  They handed me off to another group that was able to analyze a crash dump of my VM after the problem occurred, but they could only tell me that it appeared the VM was trying to use 3D rendering services of some sort (if I remember correctly).  

As a workaround, I now re-size my View window on my desktop before disconnecting so it is intentionally smaller than the screen of the laptop from which I usually connect. This seems to have helped, but it's rather frustrating. No other users have reported the same issue, but there are currently no other VDI users with more than 2 screens. I should also point out that I've observed the same behavior when I connect from my home Windows 7 machine. It doesn't seem to matter if I'm connecting to the internal View servers that only use AD authentication or if I use the Secure Gateway View server that requires 2-factor authentication and tunnels secure PCoIP.

Based on all the evidence it seems my problem is related to having 4 monitors, but VMware support has been unable to identify the root cause and neither have I.  If you have ideas, I'd love to hear them. Hit me up on Twitter (@swackhap).