The iPhone wars, concluded

This happened not too long after I posted my last iPhone update, but I forgot to blog it until now.

I made the decision to get rid of the iPhone. There were a few things I liked about it, but overall, I found the user experience for core behavior and integration was nowhere near the level of excellence provided by Windows Phone. Yes, I probably could have solved the problems I found by purchasing additional apps – I noticed that, for the most part, the better apps are not the free ones – but that wouldn’t have solved the larger problem of each piece being just a piece, not part of a larger whole.

So, I ditched it and replaced the necessary functionality with a 4G access point. I still have tethering when I need it, but now it’s not draining my phone battery, I only have one device to handle – one that I like – and I still don’t need to give out my personal cell number: my customers simply call my main Lync number and I forward the call to my cell.

So it was interesting, but ultimately…iPhones aren’t for me.


Let go of Windows XP, Office 2003, and Exchange 2003

The day has come. It’s the end of an era, one that many people do not want to let go of. I can understand that.

I drove my last car, a 2000 Ford Focus, until it died in the summer of 2010. I loved that car, and we seriously considered replacing the engine (which would have been a considerable chunk of money we didn’t have) so we could keep it. In the end, though, we had to take a long, hard look at our finances and our family requirements, and we moved on to a new vehicle. It was the requirements portion that was the key. It was certainly cheaper to fix the immediate problem – the blown engine – and we had friends who could do it for us professionally but inexpensively.

However, our kids were getting older. The four-door mini-sedan model wasn’t roomy enough for us and all of our stuff if we wanted to take a longer road trip like we’d been talking about. If we wanted to get a new sofa, we had to ask a friend with a truck. It would be nice, we thought, to have some additional carrying capacity for friends, family, groceries, and the occasional find from Craigslist. We’d been limiting our activities to those that were compatible with our car. With the new vehicle, we found we had far greater options.

On the road again

Two years ago we took the entire family on a 2-week road trip across the United States, camping along the way. Last summer, we took our family down to Crater Lake, the California Redwoods, and the Oregon Coast. We’ve been to the Olympic Rain Forest. I’ve hauled Scouts and their gear home from Jamboree shakedowns. We’ve moved. We’ve hauled furniture. In short, we’ve found that our forced upgrade, although more expensive, also gave us far more opportunity in the long run.

I know many of you like Windows XP. For some crazy reason, I know there are still quite a few of you out there who love Office 2003 and refuse to let it go. I even still run across Exchange 2003 on a regular basis. I know that there is a certain mindset that says, “We paid for it, it’s not going to wear out, so we’re just going to keep using it.” Consider, if you will, the following points:

  • Software doesn’t wear out, per se, but it does age out. You have probably already seen this in action. It’s not limited to software – new cars get features the old cars don’t. However, when a part for an old car breaks down, it’s a relatively simple matter for a company to make replacement parts (either by reverse-engineering the original, or licensing it from the original car-maker). In the software world, there is a significant amount of work involved in back-porting code from the new version and keeping it running across several older versions. There’s programming time, there’s testing time, and there’s support time. Ten years of support is more than just about any other software company offers (try getting a paid Linux support company to give you 10-year support for one up-front price). Microsoft is not trying to scam more money out of you. They want you to move on and stay relatively current with the rest of the world.
  • You are a safety hazard for others. There has been plenty written about the dangers of running XP past its end of life. There are some really good guides on how to mitigate the dangers. But make no mistake – you’re only mitigating them. And in a networked office or home, you risk exposing other people to danger as well. Don’t be surprised if, a couple of months from now, after one or two well-publicized large-scale malware outbreaks targeting these ancient editions, your business partners, ISP, and other vendors take strong steps to protect their networks by shutting down your access. When people won’t vaccinate and get sick, quarantine is a reasonable and natural response. If you don’t want to be the attack vector or the weakest link, get off your moral high ground and upgrade your systems.
  • This is why you can’t have nice things. Dude, you’re still running Windows XP. The best you have to look forward to is Internet Explorer 8, unless you download Firefox, Chrome, or some other browser. And even those guys are only going to put up with jumping through the hoops required to make XP work for so long. News flash: few software companies like supporting their applications on an operating system (or application platform) that itself is unsupported. You’re not going to find better anti-virus software for that ancient Exchange 2003 server. You’re going to be lucky to continue getting updates. And Office 2003 plug-ins and files? Over the next couple of years, you’re going to enjoy more and more cases of files that don’t work as planned with your old suite. Don’t even think about trying to install new software and applications on that old boat. You’ve picked your iceberg.

Look, I realize there are reasons why you’ve chosen to stay put. They make sense. They make financial sense. But Microsoft is not going to relent, and this choice is not going to go away, and it’s not going to get cheaper. Right now you still have a small window of time when you will have tools to help you get your data to a newer system. That opportunity is going away faster than you think. It will probably, if past experience serves, cost you more to upgrade at this time next year than it does now.

So do the right thing. Get moving. If you need help, you know where to find us. Don’t think about all the things the new stuff does that you don’t need; think about all the ways you could be making your life easier.


The enemy’s gate is down: lessons in #Lync

Sometimes what you need is a change in perspective.

I started my IT career as a technician: desktops and peripherals, printers, and the parts of networks not involving the actual building and deployment of servers. I quickly moved into the systems and network administration role. After 9/11 and a 16-month gap in my employment, I met these guys and moved my career onto a radically different trajectory – one that would take me to places I’d never dreamed of. From there, I moved into traditional consulting.

There is a different mindset between systems administration (operations) and consulting (architecture): the latter designs and builds the system, while the former keeps it running. Think of it like building a house. The contracting team are the experts in current building code, how to get a crew going and keep them busy, how to navigate the permit process, and all the other things you need when designing and building a house. The people who buy the house and live there, though, don’t need that same body of knowledge. They may be able to do basic repairs and maintenance, but for remodels they may need to get some expert help. However, they’re also the people who have to live day in and day out with the compromises the architect and builders made. A particular design decision may be played out over tens of houses, with neither the designer nor the builder aware that it’s ultimately a poor choice and that a different set of decisions would have been better.

I personally find it helpful to have feet in both worlds. One of the drawbacks of working at Trace3 was that I was moving steadily away from my roots in systems administration. With Cohesive Logic, I’m getting to step back somewhat into the systems role. What I’m remembering is that there is a certain mindset good systems administrators have: when faced with a problem, they will work to synthesize a solution, even if it means going off the beaten path. The shift from “working within the limitations” to “creatively working around the limitations” is a mental reorientation much like the one described in Ender’s Game: in a zero-G battle arena, the title character realizes that carrying his outside orientation into battle is a liability. By re-visualizing the enemy’s gate as being “down”, Ender changed the entire axis of the conflict in ways both subtle and profound – and turned his misfit team into an unstoppable army.


Case in point: I wanted to get my OCS/Lync Tanjay devices working with our Lync Server 2013 deployment. This involved getting the firmware upgraded, which ended up being quite a challenge. In the end, I managed to do something pretty cool – get a Tanjay device running 1.0.x firmware to upgrade to 4.0.x in one jump against a native Lync Server 2013 deployment – something many Lync people said wasn’t possible.

Here’s how I did it.

All it took was a mental adjustment. Falling is effortless – so aim yourself to fall toward success.


Windows 2012 R2 and #MSExchange: not so fast

In the couple of months since Windows Server 2012 R2 dropped, a few of my customers have asked about rolling out new domain controllers on this version – in part because they’re using it for other services and want to standardize their new build-outs as much as they can.

My answer right now? Not yet.

Whenever I get a compatibility question like this, the first place I go is the Exchange Server Supportability Matrix on TechNet. Now, don’t let the relatively old “last update” time dismay you; the support matrix is generally only updated when major updates to Exchange (a service pack or new version) come out. (In case you haven’t noticed, Update Rollups don’t change the base compatibility requirements.)

Not that kind of matrix…

If we look at the matrix under the Supported Active Directory Environments heading, we’ll see that as of right now, Windows Server 2012 R2 isn’t even on the list! So what does this tell us? The same thing I tell my kids instead of the crappy old “No means No” chestnut: only Yes means Yes. Unless the particular combination you’re looking for is listed, the answer is that it’s not supported at this time.

I’ve confirmed this by talking to a few folks at Microsoft – at this time, the Exchange requirements and pre-requisites have not changed. Are they expected to? No official word, but I suspect if there is a change we’ll see it when Exchange 2013 SP1 is released; that seems a likely time given they’ve already told us that’s when we can install Exchange 2013 on Windows 2012 R2.

In the meantime, if you have Exchange, hold off from putting Windows 2012 R2 domain controllers in place. Will they work? Probably, but you’re talking about untested schema updates and an untested set of domain controllers against a very heavy consumer of Active Directory. I can’t think of any compelling reasons to rush this one.
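
If you want a quick read on whether any Windows Server 2012 R2 domain controllers have already crept into your forest, a minimal PowerShell sketch like the one below will list each DC and the OS it runs. It assumes the ActiveDirectory RSAT module is available wherever you run it, and it’s just an inventory check – not an official supportability test.

# List every domain controller and its operating system so any 2012 R2 boxes stand out
Import-Module ActiveDirectory
Get-ADDomainController -Filter * |
    Select-Object Name, Site, OperatingSystem, OperatingSystemVersion |
    Sort-Object OperatingSystem | Format-Table -AutoSize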


The iPhone Wars, Day 121

120 days later and I figured it was time for an update on the war.

First: I still hate this thing.

Somewhere along the way with one of the iOS updates, the battery life started going to crap, even when I’m barely using the device. When I use it as a personal hotspot, I can practically watch the battery meter race to zero.

I’ve nailed down what it is about the email client that I don’t like, and it’s the same thing I don’t like about many of the apps: the user interfaces are inconsistent and cramped. Navigating my way through a breadcrumb trail that is near (but not quite) at the top just feels clunky. This is where the contrast with Windows Phone really, really hurts the iPhone in my experience; the Metro (I know, we’re not supposed to call it that anymore, but they can bite me) user interface principles are clean and clear. Figuring out simple tasks like how to get the iPhone to actually resync is more complex than necessary. Having the “new message” icon down on the bottom when the navigation is up top is stupid.

The one thing that impresses me consistently is even though the screen is small, the on-screen keyboard is really good at figuring out which letter I am trying to hit. On my Windows Phone I mistype things all the time. This rarely happens on the iPhone. Even though the on-screen keys are much smaller, the iPhone typing precision is much higher. Microsoft, take note – I’m tired of what feels like pressing on one key only to have another key grab the focus.

Even the few custom apps I do use on this iPhone fail to impress. Thanks to a lack of consistent design language, learning one doesn’t help me with the rest, and I have discovered that iPhone developers are just as bad as Windows Phone developers when it comes to inexplicable gaps in functionality.

I guess no one knows how to write good mobile software yet.


The iPhone Wars, Day 1

Part of the fun of settling into a new job is the new tools. In this trade, that’s the laptop and the cell phone. Now, I already have a perfectly good laptop and cell phone, so I probably could have just gone on using those, but since so much of what I do is from home, I find it important to have a clear break between personal business and work. Having separate devices helps me define that line.

My current cell phone is a Nokia Lumia 1020 (Windows Phone 8), which I definitely enjoy. I haven’t had a good chance to take the camera for a full spin, but I’m looking forward to it. I’ve had a lot of PDAs and smart phones in my time: Palm Pilot, Handspring Visor, Windows Mobile, BlackBerry, Windows Phone 7, even an Android. The one I’ve never had, though, is an iPhone.

And it’s not that I hate Apple. My favorite past laptop was my MacBook Pro (Apple has ruined me for any other touchpad). Granted, I’m that weird bastard who loaded Vista SP1 into Boot Camp and hardly ever booted back into Mac OS X again, but ever since then I’ve usually had a spare Apple computer around the house, if only for Exchange interop testing. OS X is a good operating system, but it’s not my favorite, so my main device is always a Windows machine. My current favorite is my Surface Pro.

In all of that, though, I’ve never had an iOS device. Never an iPhone, never an iPad. Yesterday, that all changed.

I needed a business smart phone that runs a specific application, one that hasn’t yet been ported to Windows Phone. I’ve long been an advocate that “apps matter first; pick your OS and platform after you know what apps you need.” Here was my opportunity not to be a shining hypocrite! After discussion with Jeremy, I finally settled on an iPhone 5, as Android was going to be less suitable for reasons too boring to go into.

So now I have an iPhone, and I have just one question for you iPhone-lovers of the world: You really like this thing? Honest to goodness, no one is putting a gun to your head?

I can’t stand this bloody thing! First, it’s too damn small! I mean, yes, I like my smart phones somewhat large, but I have big hands and I have pockets. The iPhone 5 is a slim, flat little black carbon slab with no heft. I’ve taken to calling it the CSD – the Carbon Suppository of Death. Now, if it were just the form factor, I could get used to it, but there’s so much more that I can’t stand:

  • I didn’t realize how much I love the Windows Phone customizable menu until I wasn’t using it. I forget who once called the iPhone (and Android) menu “Program Manager Reborn” but it’s totally apt. Plus, all the chrome (even in iOS 7) just feels cluttered and junky now.
  • Speaking of cluttered, Apple sometimes takes the minimalist thing too far. One button is not enough. This, I think, Windows Phone nails perfectly. Android’s four buttons feel extraneous, but Apple’s “let there be one” approach feels like dogma that won’t bow to practicality.
  • The last time I used an iPod, it was still black & white. I can’t stand iTunes as a music manager, and I don’t like the device-side interface – so I won’t be putting any music on the CSD. No advantage there.
  • Likewise, you think I’m going to dink around with the camera on the CSD when I have the glorious Lumia camera to use? Get real, human.
  • The on-screen keyboard sucks. Part of this is because the device is so much smaller, but part of it is that Apple doesn’t seem to understand little touches that improve usability. On Windows and Android, when you touch the shift key, the case of the letters on the keys changes correspondingly; Apple is all, “LOL…NOPE!”
  • The Mail client irritates me too, even though I haven’t managed to put my finger on exactly why yet.

So is there anything I like about the device? Sure! I’m not a total curmudgeon:

  • Build quality looks impressive. If the CSD weren’t as flimsy as a communion wafer, I would be blown away by the feel of the device. It’s got good clean lines and understated elegance, like that suit from the expensive Savile Row tailors.
  • Power usage. The CSD goes through battery very slowly. Now part of that is because I’m not using it, but Apple has had time to optimize their game, and they do it very well indeed.
  • The simple little physical switch to put the CSD into silent mode. This is exactly the kind of physical control EVERY smart phone should have, just like every laptop should have a physical switch to disable the radios (not just a hotkey combination).

This is where I’m at, with a fistful of suck. Even an Android phone would be better than this. I’ve got no-one to blame but myself, and it’s not going to get any better. So look forward to more of these posts from time to time as I find yet another aspect of the CSD that drives me crazy.

“But Devin,” I hear some of you Apple-pandering do-gooders say, “You’re just not used to it yet. Give it time. You’ll grow to love it.”

CHALLENGE ACCEPTED.


Meet the New Corporate Overlords @CohesiveLogic

Just a brief announcement (you’ll be hearing more about it later) to let everyone know that I’ve found new employment with Cohesive Logic as a Principal Consultant. Jeremy and the rest are good people and I’m happy to be hanging my hat under their shingle. We’ve got some exciting stuff coming down the pipe, and while I’ll still be focusing on Exchange, I’ll have the opportunity to broaden my skill set.


A Keenly Stupid Way To Lock Yourself Out of Windows 8

Ready for this amazing, life-changing technique? Let’s go!

  1. Take a domain-joined Windows 8 computer.
  2. Log on as domain user 1.
  3. Notice that the computer name is a generic name and decide to rename it.
  4. Don’t reboot yet, because you have other tasks you want to do first.
  5. Switch users to domain user 2.
  6. Perform more tasks.
  7. Go to switch back to user 1. You can’t!
  8. Try to log back in as user 2. You can’t!

Good for hours of fun!


Defending a Bad Decision

It’s already started.

A bit over 12 hours after MSL’s cowardly decision to announce the end of the MCM program (see my previous blog post), we’re already starting to see a reaction from Microsoft on the Labor Day holiday weekend.

SQL Server MVP Jen Stirrup created an impassioned “Save MCM” plea on the Microsoft Connect site this morning at 6:19. Now, 7.5 hours later, it already has almost 200 votes of support. More importantly, she’s already gotten a detailed response from Microsoft’s Tim Sneath:

Thank you for the passion and feedback. We’re reading your comments and take them seriously, and as the person ultimately responsible for the decision to retire the Masters program in its current form, I wanted to provide a little additional context.

Firstly, you should know that while I’ve been accused of many things in my career, I’m not a “bean counter”. I come from the community myself; I co-authored a book on SQL Server development, I have been certified myself for nearly twenty years, I’ve architected and implemented several large Microsoft technology deployments, my major was in computer science. I’m a developer first, a manager second.

Deciding to retire exams for the Masters program was a painful decision – one we did not make lightly or without many months of deliberation. You are the vanguard of the community. You have the most advanced skills and have demonstrated it through a grueling and intensive program. The certification is a clear marker of experience, knowledge and practical skills. In short, having the Masters credential is a huge accomplishment and nobody can take that away from the community. And of course, we’re not removing the credential itself, even though it’s true that we’re closing the program to new entrants at this time.

The truth is, for as successful as the program is for those who are in it, it reaches only a tiny proportion of the overall community. Only a few hundred people have attained the certification in the last few years, far fewer than we would have hoped. We wanted to create a certification that many would aspire to and that would be the ultimate peak of the Microsoft Certified program, but with only ~0.08% of all MCSE-certified individuals being in the program across all programs, it just hasn’t gained the traction we hoped for.

Sure, it loses us money (and not a small amount), but that’s not the point. We simply think we could do much more for the broader community at this level – that we could create something for many more to aspire to. We want it to be an elite community, certainly. But some of the non-technical barriers to entry run the risk of making it elitist for non-technical reasons. Having a program that costs candidates nearly $20,000 creates a non-technical barrier to entry. Having a program that is English-only and only offered in the USA creates a non-technical barrier to entry. Across all products, the Masters program certifies just a couple of hundred people each year, and yet the costs of running this program make it impossible to scale out any further. And many of the certifications currently offered are outdated – for example, SQL Server 2008 – yet we just can’t afford to fully update them.

That’s why we’re taking a pause from offering this program, and looking to see if there’s a better way to create a pinnacle, WITHOUT losing the technical rigor. We have some plans already, but it’s a little too early to share them at this stage. Over the next couple of months, we’d like to talk to many of you to help us evaluate our certifications and build something that will endure and be sustainable for many years to come.

We hate having to do this – causing upset amongst our most treasured community is far from ideal. But sometimes in order to build, you have to create space for new foundations. I personally have the highest respect for this community. I joined the learning team because I wanted to grow the impact and credibility of our certification programs. I know this decision hurts. Perhaps you think it is wrong-headed, but I wanted to at least explain some of the rationale. It comes from the desire to further invest in the IT Pro community, rather than the converse. It comes from the desire to align our programs with market demand, and to scale them in such a way that the market demand itself grows. It comes from the desire to be able to offer more benefits, not fewer. And over time I hope we’ll be able to demonstrate the positive sides of the changes we are going through as we plan a bright future for our certifications.

Thank you for listening… we appreciate you more than you know.

First, I want to thank Tim for taking the time to respond on a holiday Saturday. I have no reason to think ill of him or disbelieve him in any way. That said, it won’t keep me from respectfully calling bullshit – not on the details of Tim’s response (such as they are), nor on the tone of his message, but rather on the worldview that it comes from.

First, this is the way the decision should have been announced to begin with, not that ham-fisted, mealy-mouthed, thinly-disguised “sod off” piece of tripe that poor Shelby Grieve sent late last night. This announcement should have been released by the person who made the decision, taking full accountability for it, in the light of day – not pawned off on an underling who was allowed to sneak it out at midnight Friday on a three-day holiday weekend.

Second, despite Tim’s claims of being a developer first, manager second, I believe he’s failing to account for the seductive echo-chamber mentality that permeates management levels at Microsoft. The fatal weakness of making decisions by metrics is choosing the wrong metrics. When the Exchange program started the Ranger program (what later morphed to become the first MCM certification), their goal wasn’t reach into the community. It was reducing CritSits on deployments. It was increasing the quality of deployments to reduce the amount of downtime suffered by customers. This is one of the reasons I have been vocal in the past that having MSL take on 100% responsibility for the MCM program was a mistake, because we slowly but surely began losing the close coupling with the product group. Is the MCM program a failure by those metrics? Does the number of MCMs per year matter more than the actual impact that MCMs are making on Microsoft’s customers? This is hard stuff. Maybe, just maybe, having no more than a tenth of a percent of all MCPs achieve this certification is the right thing if you’re focusing on getting the right people to earn it.

Third, MSL has shown us in the recent past that it knows how to transition from one set of certifications to another. When the MCITP and MCTS certifications were retired, there was a beautiful, coordinated wave of information that came out showing exactly what the roadmap was, why things were changing, and what the new path would look like for people. We knew what to expect from the change. Shelby’s announcement gave us no hint of anything coming in the future. It was an axe, not a roadmap. It left no way for people who had just signed up (and paid money for the course fees, airplane tickets, etc.) to reach out and get answers to their questions. As far as we know, there may not be any refunds in the offing. I think it’s a bit early to be talking about lawyers, but several of my fellow MCMs don’t. All of this unpleasantness could have been avoided by making this announcement with even a mustard seed of compassion and foresight. Right now, we’re left with promises that something will come to replace MCM. Those promises rank right up there with the promises made to us in recent months about new exams, new testing centers, and all the other promises the MCM program has made. This one decision and its badly wrought communication have destroyed credibility and trust.

Fourth, many of the concerns Tim mentioned have been brought up internally in the MCM program before. The MCMs I went through my rotation with had lots of wonderful suggestions on how to approach solutions to these problems. The MCMs in my community have continued to offer advice and feedback. Most of this feedback has gone nowhere. It seems that somebody between the trainers and face people we MCMs interact with and the folks at Tim’s level has been gumming up the communication. Ask any good intelligence analyst – sometimes you need to see the raw data instead of just the carefully processed work from the people below you in the food chain. Somewhere in that mass of ideas are good suggestions that probably could have been made to work to break down some of those non-technical barriers long before now, if only they’d gotten to the right level of management where someone had the power to do something about it. Again, in a metrics-driven environment, data that doesn’t light up the chosen metrics usually gets ignored or thrown out. There’s little profit in taking the risk of challenging assumptions. Combine that with a distinct “not invented here” syndrome, and it feels like MSL has had a consistent pattern of refusing to even try to solve these problems. Other tech companies have Master-level exams that don’t suffer too badly from brain dumps and other forms of cheating. Why can’t Microsoft follow what they are doing and improve incrementally from there? I believe it’s because that would require investing even more money and time, something that won’t give back the appropriate blips on the metrics within a single financial year.

So while I appreciate the fact that Tim took the time to respond (and I will be emailing him to let him know this post exists), I don’t believe that the only option MSL had was to do things in this fashion. And right now, that’s the impression I believe this response is going to generate among an already angry MCM community.


Ain’t Nobody [at Microsoft Learning] Got Time For That

If you track other people in the Microsoft Certified Master blogosphere you’ve probably already heard about the shot to the face the MCM/MCSM/MCA/MCSA program (which I will henceforth refer to just as MCM for simplicity) took last night: a late Friday night email announcing the cancellation of these programs.

"Wait for it...wait for it..."

“Wait for it…wait for it…”

I was helping a friend move at the time, so I checked the email on my phone, pondered it just long enough to get pissed off, and then put it away until I had the time and energy to deal with it today.

This morning, a lot of my fellow members of the Microsoft IT Pro community – Microsoft employees, MCM trainers, MCMs, and MCM candidates among them – are reacting publicly.

Others have already made all of the comments I could think to make — the seemingly deliberately bad timing, the total disconnect of this announcement with recent actions and announcements regarding the MCM availability, the shock and anger, all of it.

The only unique insight I seem to have to share is that this does *not* seem to be something that the product groups are on board with — it seems to be coming directly from Microsoft Learning and the higher-ups in that chain. Unfortunately, those of us who resisted and distrusted the move of MCM from being run by the product groups in partnership with MSL to the new regime of MSL owning all the MCM marbles (which inevitably led to less and less interaction with the actual product groups, with the predictable results) now seem to be vindicated.

I wish I’d been wrong. But even this move was called out by older and wiser heads than mine, and I discounted them at the time. Boy, was I wrong about that.

I’m really starting to think that as Microsoft retools itself to try to become a services and devices company, we’re going to see even more of these kinds of measures (TechNet subs, MCM certs) that alienate the highly trained end of the IT Pro pool. After all, we’re the people who know how to design and implement on-premises solutions that you folks can run more cheaply than Microsoft’s cloud offerings. Many of the competitors to Microsoft Consulting or to Microsoft hosted services had one or more MCMs on staff, and MCM training was a great viewpoint into how Office 365 was running its deployments. In essence, what had once been a valuable tool for helping sell Microsoft software licenses and reduce Microsoft support costs has now become, in the Cloud era, a way for competitors and customers to knowledgeably and authoritatively derail the Cloud business plans.

From that angle, these changes make a certain twisted sort of short-term sense — and with the focus on stock price and annual revenues, short-term sense is all corporate culture knows these days.

For what it’s worth, SQL Server MVP Jen Stirrup has started this Connect petition to try to save the MCM program. I wish her luck.


The Case for TechNet

By now, those of you who are my IT readers almost certainly know about Microsoft’s July 1st decision to retire the TechNet subscription offerings for IT professionals. In turn, Cody Skidmore put together a popular site to petition Microsoft to save TechNet subscriptions. Cody and many others have blogged about their reasons why they think TechNet subscriptions need to be revived, rather than sticking with Microsoft’s current plans to push Azure services, trial software, and expensive MSDN subscriptions as reasonable alternatives. I have put my name to this petition, as I feel that the loss of TechNet subscriptions is going to have a noticeable impact on the Microsoft ecosystem in the next few years.

I also hear a few voices loudly proclaiming that everything is fine. They generally make a few good points, but they all make a solitary, monumental mistake: they assume that everyone using TechNet subscriptions uses them for the same things they do, in the same ways they do. Frankly, this myopia is insulting and stupid, because none of these arguments even begins to address why I personally find the impending loss of TechNet subscriptions to be not only irritating, but actively threatening to my ability to perform at my peak as an IT professional.

As a practicing consultant, I have to be an instant expert on every aspect of my customers’ Exchange environments, including the things that go wrong. Even when I’m on-site (which is rare), I usually don’t have unlimited access to the system; security rules, permissions, change control processes, and the need for uptime are all ethical boundaries that prevent me from running amok and troubleshooting wildly to my heart’s content. I can’t go cowboy and make whatever changes I need to (however carefully researched they may be) until I have worked out that those changes will in fact fix the problem and what the rollback process is going to be if things don’t work as expected.

Like many IT pros, I don’t have a ton of money to throw around at home. Because I have been working from home for most of the last few years, I have not even had access to my employer’s labs for hardware or software. I’ve been able to get around this with TechNet and virtualization. The value that TechNet provides at a reasonable price point is full access to current and past versions of Microsoft software, updates, and patches, so I can replicate the customer’s environment in its essence, reduce the problem to the minimum steps for reproduction, and explore fixes or call in Microsoft Support for those times when it’s an actual bug with a workaround I can’t find. Demo versions of current software don’t help when I’m debugging interactions with legacy software, much of which rapidly becomes unavailable or at least extremely hard to find.

Microsoft needs to sit up and take notice; people like me save them money. Per-incident support pricing is not heinous, and it only takes a handful of hours going back and forth with the support personnel before it’s paid for itself from the customer’s point of view (I have no visibility into the economics on Microsoft’s side, but I suspect it is subsidized via software and license pricing overall). The thing is, though, Microsoft is a metric-driven company. If consultants and systems administrators no longer have a cost-effective source for replicating and simplifying problems, the obvious consequence I see is that Microsoft will see a rise in support cases, as well as a rise in the average time to resolve support cases, with the corresponding decrease in customer satisfaction.

Seriously, Microsoft – help us help you. Bring back TechNet subscriptions. They have made your software ecosystem one of the richest and healthiest of any commercial software company. Losing them won’t stem piracy of your products and won’t keep companies from using your software, but it will threaten to continue the narrative that Microsoft doesn’t care about its customers. And today more than ever, there are enough viable alternatives that you cannot take customer loyalty for granted.

Taking TechNet subscriptions away is a clear statement that Microsoft doesn’t trust its customers; customers will respond in kind. As the inevitable backlash to cloud services spreads in the wake of the NSA revelations, Microsoft will need all of the trust it can get. This is penny-wise, pound-foolish maneuvering at precisely the wrong time.


Newsflash: Sexuality Is Already in Scouting

I ran across this article from May, from yet another conservative Christian Scouter who seems to be convinced that by accepting gay Scouts into the BSA, the end is near for all morality in the Scouting program. As comments are closed, my responses are here. I hope the author’s blog registers the trackback and he sees it.

First, the condescending tone in the post (see the paragraph about liberals and “the choir”) makes it clear that he thinks there is absolutely no discussion that can be had, nothing to learn from an alternate point of view. This is the kind of closed mind that is the most dangerous to any youth program anywhere. Leaders need to balance between a firm and strong sense of what their pillars are and the willingness to learn new insight from those of opposing views. The only people that Jesus didn’t waste his time with were the Pharisees – the ones whose minds were rigid and unyielding.

Second, I agree 100% with him about the need to ensure that no overt acts of sexuality (regardless of orientation) have any place in the Scouting program. However, if he really thinks that sexuality is not already in Scouting, I have unwelcome news for him:

  • I remember from my own time as a Scout: when the Scoutmasters aren’t around, there’s a large amount of sexual humor and indoctrination that gets passed around from boy to boy. From fairly benign (calling Scout camp “memories without mammaries”) to merely inappropriate (streaking through a camp site) to more potentially unhealthy activities and peer pressure, these activities were there when I was a boy. From what my son tells me, they’re still there today. He’s in a great troop with a lot of amazing leaders, but no matter how great the parents/leaders/boys, when you push a prevalent and powerful aspect of humanity under the carpet, it will find ways to express itself. Adolescence is exactly the time when humans are dealing with powerful feelings of sexuality for the first time, and it is confusing. Scouters are often trusted adults, especially when the boys don’t have a good relationship with their parents.
  • As Scouters we have to model responsible behavior to our Scouts, including appropriate forms of sexuality. Sexuality is far more than physical intimacy; it includes our attitudes on gender, orientation, sexual roles, and more. Our Scouts watch us closely; if we are disrespectful of women and dismissive of non-masculine men (as many Scouters frequently are), they will learn that behavior is appropriate and they will indulge in it too.
  • Alaric attended the recent National Scout Jamboree. The Jamboree selection process, if you’re not familiar, limits the number of Scouts per council; there’s an interview and recommendation process that in theory ensures the Scouts who went to Jamboree are living the Scout Oath and Law. Yet despite all the precautions, they had problems with Scouts treating the female youth attendees (American Venture Scouts and international Scouts) with a marked lack of respect, including peeping tom incidents at their showers. Sexuality (of the heterosexual nature) is alive and well in Scouting. The answer is not to ignore sex; it is to address it in the appropriate context and with the appropriate limits and boundaries for Scouting activities.

Third, expecting gay Scouts to be silent about their orientation, *even when they are following Scouting guidelines about sexual activities*, is explicitly unequal.

After all, how many times have you heard of a heterosexual Boy Scout declaring for all to hear that, “I’m a heterosexual and I’m sexually active and I lust after girls?” Why is it that the GLBT crowd needs to publicly share their sexual preferences? And why on earth would a parent go on national television, or go into a court of law, to show support for their teenage son’s sexual preference for other boys?

Heterosexual Scouts make that declaration (or have it made for them) on a regular basis. When there’s an adolescent joke about boobs, or Scouts ogle another Scout’s sister, heterosexual Scouts are non-verbally (but nevertheless clearly and loudly) making the declaration that they are heterosexual beings who are attracted to girls. That doesn’t mean that they are sexually active. It is not a hallmark of “the LGBT agenda” that parents don’t want their boys to be forced to assume a mantle of silence or be assumed to be sexually prolific just because they aren’t attracted to girls. Again, I’ll use poor Alaric as an example; I know he likes girls and I know what types he likes, as do his friends and members of his troop, but he feels no need (I like to think in part because he can be open with us) to become a sexually active fourteen year-old. This is because of his character, not because he likes girls.

The mistake the blog author makes here, and he makes it consistently, is to conflate “gay” with “sexually active.” There is no reason to assume that homosexual teenagers will be any more sexually active than heterosexual teenagers (and if he wants to dispute that, I’ll be happy to point him to the studies showing increased rates of sexually transmitted diseases among Christian, abstinence-only youth who engage in risky alternatives to vaginal intercourse, because their rigid upbringing gives no thought to failure modes). In fact, homosexual youth who have access to a variety of caring, responsible adult role models are more likely to make informed, intelligent choices about sex. If the author really wants to keep kids from acts he believes are immoral, he can do far worse than encourage them to get into Scouting where they can be around leaders and other boys who will help reinforce the desired standard of behavior.

Finally, the author needs to drop the martyr complex he displays in his last paragraph. Although it was never a prominent or prevalent practice, historical research shows that the Christian church throughout the ages has at times and places supported homosexual members, including through the celebration of marriage for homosexual couples. Many of us who support Scouting’s long-overdue change in policy do so from our own religious principles. Though we differ from the author on this issue, there are many aspects of character that we do agree on, including the points of the Scout Oath and Law, even if we do not see eye-to-eye on every point of interpretation.

Scouting is a worldwide movement. American Scouting and our particular struggles over the interpretation of Christian doctrine are not the acme of the Scouting ideals. If this one issue is really so important to him that he feels he has no choice but to cede his involvement in Scouting in the event of a legal challenge, I for one will miss the richness and depth he brings to the overall tapestry of Scouting. However, that tapestry has to be a living tapestry. Scouting is supposed to be inclusive enough to be an umbrella for multiple religions and views, to adapt and grow as our society changes. I refuse to believe that this one issue is the one that will destroy Scouting.

That’s not the vibrant Scouting program I know.


Finding Differences in Exchange objects (#DoExTip)

Many times when I’m troubleshooting Exchange issues, I need to compare objects (such as user accounts in Active Directory, or mailboxes) to figure out why there is a difference in behavior. Often the difference is tiny and hard to spot. It may not even be visible through the GUI.

To do this, I first dump the objects to separate text files. How I do this depends on the type of object I need to compare. If I can output the object using Exchange Management Shell, I pipe it through Format-List and dump it to text there:

Get-Mailbox -Identity Devin | fl > Mailbox1.txt

If it’s a raw Active Directory object I need, I use the built-in Windows LDP tool and copy and paste the text dump to separate files in a text editor.

Once the objects are in text file format, I use a text comparison tool, such as the built-in comparison tool in my preferred text editor (UltraEdit) or the standalone tool WinDiff. The key here is to quickly highlight the differences. Many of those differences aren’t important (metadata such as last time updated, etc.), but I can spend my time quickly looking over the properties that are different, rather than brute-force comparing everything about the different objects.
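
For example, here’s a minimal sketch of the whole round trip, assuming two mailboxes (the identities are placeholders) and a copy of WinDiff on the machine:

# Dump each object to its own text file, one property per line
Get-Mailbox -Identity UserA | Format-List > MailboxA.txt
Get-Mailbox -Identity UserB | Format-List > MailboxB.txt
# Compare the dumps and scan the highlighted differences
windiff.exe MailboxA.txt MailboxB.txt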

I can hear many of you suggesting other ways of doing this:

  • Why are you using text outputs even in PowerShell? Why not export to XML or CSV?
    If I dump to text, PowerShell displays the values of multi-value properties and other property types that it doesn’t show if I export the object to XML or CSV. This is very annoying, as the missing values are typically the source of the key difference. Also, text files are easy for my customers to generate, bundle, and email to me without any worries that virus scanners or other security policies might intercept them.
  • Why do you run PowerShell cmdlets through Format-List?
    To make sure I have a single property per line of text file. This helps ensure that the text file runs through WinDiff properly.
  • Why do you run Active Directory dumps through LDP?
    Because LDP will dump practically any LDAP property and value as raw text as I access a given object in Active Directory. I can easily walk a customer through using LDP and pasting the results into Notepad while browsing to the objects graphically, much as they would in ADSIEdit. There are command-line tools that will export in other formats such as LDIF, but those are typically overkill and harder to use while browsing for what you need (you typically have to specify object DNs).
  • PowerShell has a Compare-Object cmdlet. Why don’t you use that for comparisons instead of WinDiff or text editors?
    First, it only works for PowerShell objects, and I want a consistent technique I can use for anything I can dump to text in a regular format. Second, Compare-Object changes its output depending on the object format you’re comparing, potentially making the comparison useless. Third, while Compare-Object is wildly powerful because it can hook into the full PowerShell toolset (sorting, filters, etc.) this complexity can eat up a lot of time fine-tuning your command when the whole point is to save time. Fourth, WinDiff output is easy to show customers. For all of these reasons, WinDiff is good enough.

Using Out-GridView (#DoExTip)

My second tip in this series is going to violate the ground rules I laid out for it, because they’re my rules and I want to. This tip isn’t a tool or script. It’s a pointer to an insanely awesome feature of Windows PowerShell that just happens to nicely solve many problems an Exchange administrator runs across on a day-to-day basis.

I only found out about Out-GridView two days ago, the day that Tony Redmond’s Windows IT Pro post about the loss of the Message Tracking tool hit the Internet. A Twitter conversation started up, and UK Exchange MCM Brian Reid quickly chimed in with a link to a post from his blog introducing us to using the Out-GridView control with the message tracking cmdlets in Exchange Management Shell.

This is a feature introduced in PowerShell 2.0, so Exchange 2007 admins won’t have it available. What it does is simple: take a collection of objects (such as message tracking results, mailboxes, public folders — the output of any Get-* cmdlet, really) and display it in a GUI gridview control. You can sort, filter, and otherwise manipulate the data in-place without having to export it to CSV and get it to a machine with Excel. Brian’s post walks you through the basics.
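
Here’s a minimal sketch of the basic pattern, using message tracking the way Brian does (the sender address and result size are placeholder values):

# Pull some tracking results and browse them in a sortable, filterable grid
Get-MessageTrackingLog -Sender "user@contoso.com" -ResultSize 5000 | Out-GridView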

In just two days, I’ve already started changing how I interact with EMS. There are a few things I’ve learned from Get-Help Out-GridView:

  • On PowerShell 2.0 systems, Out-GridView is the endpoint of the pipeline. However, if you’re running it on a system with PowerShell 3.0 installed (Windows Server 2012), Out-GridView can be used to interactively filter down a set of data and then pass it on in the pipeline to other commands. Think about being able to grab a set of mailboxes, fine-tune the selection, and pass them on to make modifications without having to get all the filtering syntax correct in PowerShell (see the sketch after this list).
  • Out-GridView is part of the PowerShell ISE component, so it isn’t present if you don’t have ISE installed or are running on Server Core. Exchange can’t run on Server Core, but if you want to use this make sure the ISE feature is installed.
  • Out-GridView allows you to select and copy data from the gridview control. You can then paste it directly into Excel, a text editor, or some other program.
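
Here’s a minimal sketch of that PowerShell 3.0 behavior from the first bullet; the quota value is purely illustrative, so adjust it before pointing this at anything real:

# -PassThru (PowerShell 3.0 and later) emits only the rows you select in the grid,
# so you can eyeball-filter a set of mailboxes and then act on just that selection
Get-Mailbox -ResultSize Unlimited |
    Out-GridView -PassThru |
    Set-Mailbox -IssueWarningQuota 2GB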

This is a seriously cool and useful tip. Thanks, Brian!


Exchange Environment Report script (#DoExTip)

My inaugural DoExTip is a script I have been rocking out to and enthusiastically recommending to customers for over a year: the fantastic Exchange Environment Report script by UK Exchange MVP Steve Goodman. Apparently Microsoft agrees, because they highlight it in the TechNet Gallery.

It’s a simple script: run it and you get a single-page HTML report that gives you a thumbnail overview of your servers and databases, whether standalone or DAG. It’s no substitute for monitoring, but as a regular status update posted to a web page or emailed to a group (easily done from within the script) it’s a great touch point for your organization. Run it as a scheduled task and you’ll always have the 50,000 foot view of your Exchange health.
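
As a sketch of the scheduled-task idea – the task name, path, and file name below are placeholders for wherever you’ve saved Steve’s script, and any mail settings live inside the script itself – something like this registers a daily 6 AM run:

# Register a daily 06:00 run of the report script (task name and path are placeholders)
schtasks /Create /TN "Exchange Environment Report" /SC DAILY /ST 06:00 /TR "powershell.exe -ExecutionPolicy Bypass -File C:\Scripts\Get-ExchangeEnvironmentReport.ps1"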

I’ve used it for migrations in a variety of organizations, from Exchange 2003 (it must be run on Exchange 2007 or higher) on up. I now consider this script an essential part of my Exchange toolkit.


Introducing DoExTips

At my house, we try to live our life by a well-known saying attributed to French philosopher Voltaire: “The perfect is the enemy of the good.” This is a translation from the second line of his French poem La Bégueule, which itself is quoting a more ancient Italian proverb. It’s a common idea that perfection is a trap. You may be more used to modern restatements such as the 80/20 rule (the last 20% of the work takes 80% of the effort).

I’ve had an idea for several years to fill what I see is a gap in the Exchange community. I’ve been toying with this idea for a while, trying to figure out the perfect way to do it. Today, I had a Voltaire moment: forget perfect.

So, without further ado, welcome to Devin on Exchange Tips (or #DoExTips for short). These are intended to be small posts that occur frequently, highlighting free scripts and tools that members of the global Exchange community have written and made available. There’s a lot of good stuff out there, and it doesn’t all come from Microsoft, and you don’t have to pay for it.

The tools and scripts I’ll highlight in DoExTips are not going to be finished or polished products. In many cases, they’ll take work to adapt to your environment. My goal is to quickly show you something I’ve found and used as a starting point or springboard, not to solve all your problems.

So, if you’ve got something you think should be highlighted as a DoExTip, let me know. (Don’t like the name? Blame Tom Clancy. I’ve been re-reading his Jack Ryan techno-thrillers and so military naming is on the brain.)


Let’s Test It!

I’ve been studying karate for nearly five years now, and I don’t think I’ve shared this story before. When we’re sparring, students are required to wear the appropriate protective gear. No head shots, for example, if you’re not wearing head protection. For males, a sports cup is mandatory, for reasons that probably don’t require elaboration.

When I was buying a cup, I had no clue what to get. The only sports I’d done as a kid were one season of track in high school and some Pee-Wee/Little League baseball. I’d never had to deal with a cup before. I’d heard lots of horror stories about them: they were uncomfortable, didn’t fit, and didn’t really keep blows from hurting as much as they reduced the pain to manageable levels.

No, thanks. This geek did some research and came up with the Nutty Buddy. This was a cup whose inventor stood by his product by taking 90mph fast balls from a pitching machine to his crotch. After reading around, I was sold. It was more expensive, but hey, not feeling soul-crushing pain is worth it, right?

Here’s what happened next, as I sent it to Nutty Buddy:

My order arrived on the day of a sparring class. That night, I prepped for class a little early so I could figure out how to get my Nutty Buddy put in place. Having bought the “Build Your Own Package” option, I had everything I needed, and soon I was all dressed in my gi, ready to go. I walked out from my bedroom to the living room to pick up my gear bag and was met by my son, then 11 years old. “Do you have it on?” he asked eagerly and I nodded. “Great, let’s test it!” he said as he executed a perfect front snap-kick to the boys. It was a great kick, too – one of those kind you can’t be thinking about, you just have to let it rip. He immediately realized what he’d done and started apologizing, but was shocked when I laughed. The only thing I’d felt was the shock. The Nutty Buddy lived up to the hype, and I knew it was worth every penny.

No matter how prepared you are for life, sometimes you only know whether something’s going to work by just doing it.


#MSExchange 2010 and .NET 4.0

Oh, Microsoft. By now, one might think that you’d learn not to push updates to systems without testing them thoroughly. One would be wrong. At least this one classifies as a minor annoyance and not outright breakage…

Windows Update offers up .NET 4.0 to Windows 2008 R2 systems as an Important update (and has been for a while). This is fine and good – various versions of the .NET framework can live in parallel. The problem, however, comes when you accept this update on an Exchange 2010 server with the CAS role.

If you do this, you may notice that the /exchange, /exchweb, and /public virtual directories (legacy directories tied to the /owa virtual directory) suddenly aren’t redirecting to /owa like they’re supposed to. Now, people aren’t normally using these directories in their OWA URLs anymore, but if someone does attempt to hit one of them, it spams your event logs with a gnarly error message.

This is occurring because when .NET 4.0 is installed and the ASP.NET 4.0 components are tied into IIS, the Default Application Pool is reconfigured to use ASP.NET 4.0 instead of ASP.NET 2.0 (the version used by the .NET 3.5 runtime on Windows 2008 R2). What exactly it is about this that breaks these legacy virtual directories, I have no idea, but break them it does.

The fix for this is relatively simple: uninstall .NET 4.0 and hide the update from the machine so it doesn’t come back. If you don’t want to do that, follow this process outlined in TechNet to reset the Default Application Pool back to .NET 2.0. Be sure to run IISRESET afterwards.
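
If you’d rather script the application pool change than click through IIS Manager, a minimal sketch (run from an elevated prompt; “DefaultAppPool” is the stock pool name) looks like this – treat it as a shortcut version of the TechNet steps, not a substitute for reading them:

# Point the Default Application Pool back at the .NET 2.0 runtime
& "$env:windir\system32\inetsrv\appcmd.exe" set apppool "DefaultAppPool" /managedRuntimeVersion:v2.0
# Restart IIS so the change takes effect
iisreset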


Blues Brother

Right at the end of December, I decided that January 2013 would be my year of just saying “Do it.” The first thing I said “do it” to was getting my hair dyed blue, like I’ve been wanting to for over a decade. That Saturday, I walked into my hairstylist for my normal haircut, and came out with a little more.

My blue-green hair in December

I loved the cut and the color (a blue-green-silver mix), and after two weeks it had faded to a soft cotton-candy color of blue. However, it just kept on fading. Time for a refresh, so back in to my fantastic hairstylist, Liz!

My partner in crime

This time, we dropped the green and mixed the blue and silver in nearly equal proportions. The result is vivid now, but we think it’s going to be fantastic after some fading!

Move over, IBM

The best part of this experiment is that if I ever get tired of looking like a dry-erase marker, I can simply shave it off. It’s not like that’s a new look for me. The plan, though, is to keep experimenting with fun colors and settle down on a few favorites.

0  

A Few Bullet Points on American Gun Culture

I’m a gun owner. I hold a concealed pistol license in the state of Washington and I own a pistol and a rifle, which I have taken reasonable and prudent steps to keep locked up and safe when they are not in use. Although I have not taken a formal gun safety class, I have had firearms training and have taken steps to ensure that my family is also provided with training. My kids have enjoyed the carefully supervised events when they have been taken shooting by me and other qualified adults.

I’ve had some thoughts stirring around for a while on the topic of America and the 2nd Amendment, but it wasn’t until today that I pulled them together enough to start the process of writing a blog post.

Note 1: I’m going to do my level best to be polite and respectful to all parties, regardless of their political position on this subject, and I request that all commenters do the same. People crossing the line of civility may get a warning or I may just delete their comment, depending on the severity.

The Ground Rules

Today, on Facebook, one of my friends posted this picture:

Figure 1: Constitutional law qualifications (“Who knows more about the Constitution?”)
(I can’t find the original source for this – if you know it, please let me know.)

As you can imagine, this prompted (as do almost all gun control threads on the Internet) a barrage of comments. Sadly, these types of discussions tend to quickly be dominated by one of two vocal extremes:

  • The gun enthusiast (pejoratively known as the “gun nut” or “right-wing whackjob”), who often gives the impression that she won’t be happy until she can personally and privately own any weapon system ever made, up to and including ICBMs, aircraft carriers, Abrams tanks, and F-22 Raptors. She is typically, but not always, aligned with the more extreme conservative end of the political spectrum.
  • The gun worrier (pejoratively known as the “gun grabber” or “bleeding-heart liberal”), who commonly and frequently opines that mankind will know nothing but a wretched existence devoid of any light, joy, or hope until every last physical instance of, drawing of, reference to, or even mental concept of a weapon is wiped from existence. He is typically, but not always, aligned with the more extreme liberal end of the spectrum.

Note 2: if you fit into one of these two extremes, I will give you good advice: stop reading now, and move on. You won’t like what I have to say; I refuse to validate your unreasonably narrow and exclusionary viewpoint. I won’t let other people call you names should you choose to ignore my advice and comment, but I will redact your extremist attempts to redirect a civil conversation into your own flavor of lunacy. Be warned – my blog, my rules. You want to post your own screed? Go burn your own storage and bandwidth to do it.

Almost immediately, a good point was made: while Obama’s credentials are accurately stated, this picture attempts to make a point through blatant use of stereotypes. We know nothing about the gentleman in the red box – he might also be an Ivy League Constitutional scholar, or a distinguished judge, or even a talented and knowledgeable amateur scholar. We don’t know and we’re not told. This is the good old “guilt by association” propaganda ploy – if you like big scary guns, you’re probably ignorant just based on your appearance. Not a great way for liberals to make a point.

At the same time, conservatives are guilty of blatantly false propaganda too:


Figure 2: One of these things is not like the other (found on rashmanly.com)

Really? A democratically elected (twice, now, even!) federal executive, in a country with some of the most extensive checks and balances, who for at least half of his time in office has had to deal with a Congress (you know, the branch of government that actually makes the laws) controlled by his political opponents, is magically a dictator on par with some of the worst tyrants of recorded history? Because his biggest political acts have been to try to keep our country from plunging into a hyper-inflationary depression, to make sure poor people have access to medical care, and to try to maybe do something to reduce the number of innocent people killed by guns in this country every year? Remember, this is the President who pissed off many in his own party because he didn’t bother to dismantle many of the incentives put in place by his predecessor.

Note 3: Don’t even think of heading down the “Democrats just want to take away guns and Republicans are protecting gun rights” path. Remember the assault rifle ban that expired in 2004? The one that was enacted in 1994, which would have been during the (Democratic) Clinton administration? The one that was lobbied for by Ronald Reagan?

Finding Middle Ground

Okay, now that I’ve unilaterally declared extremes off the table, let’s dig into the meat of the original graphic – which is the fact that Obama has a background in Constitutional law, so unlike many politicians and political wonks, he might actually have a more than passing familiarity with some of the issues involved.

Obama is using executive orders to make changes within the framework of existing law, as well as working to introduce legislation to accomplish additional goals such as reintroducing the expired assault rifle ban. Some of these changes are likely to be polarizing, but outside of the echo chambers and spin factories, there’s actually a large amount of support for many of these proposals – and this according to a poll of 945 gun owners conducted last July by Republican party pollster Frank Luntz, before the events of Newtown. After Newtown, support for stricter laws on the sale of firearms has increased overall, including increased support for passing new laws, although support for renewal of the assault rifle ban is still just shy of a majority. Yet somehow, any discussion of changes provokes an immediate, hostile response.

It’s also inevitable that someone will trot out the argument that since cars kill far more people than guns, we should be regulating cars instead. Um, hello? We do. Car manufacturers have to regularly participate in studies and make changes to cars to reduce the deaths cars cause, and over the decades, it’s worked. We do the same thing for other forms of violence — we study it, and we make intelligent changes to reduce the impact. But the current climate and talking points (such as the historically inaccurate charge that gun control led to the Holocaust) have kept us at a virtual standstill on dealing with gun violence of any type.

Thanks to a careful and prolonged lobbying and political spending campaign by the NRA and the gun manufacturers, we don’t even have credible research that would tell us why American gun deaths are so much higher than those of comparable nations. Let me be clear: the NRA does a lot of good, but they are a human institution, and over the past couple of decades they’ve transformed themselves from a simple society to promote scientific rifle shooting into a lobbying organization. I think this dichotomy can at times drive the NRA leadership out of sync with their members’ concerns and lead them to try to drive policy and dictate their members’ beliefs rather than represent them.

At this point, I think it’s obvious that some sort of changes need to be made. The USA has a gun homicide rate that is 4.5 times higher (or more) than that of the other G-8 countries. When confronted with these facts, many people respond with talking points about how countries that have enacted gun control laws see a rise in crimes such as violent assault (Australia is a frequently featured talking point). However true these points may be, I can’t help but think that’s an invalid comparison. If I were to be the victim of a crime, I would rather be injured than outright killed. I would rather have my stuff stolen than lose my wife or one of my kids. And overall, the crime rate in the US is dropping.

Like many Americans, I’m in favor of extending background checks and doing more to ensure that people with a history of violent mental illness and misdemeanor violence have reduced access to guns. Without comprehensive studies, I’m not convinced that renewing the assault rifle ban will actually help anything (are extended magazines actually useful in genuine self-defense scenarios, or would regular magazines do the trick?). But there are a number of potential steps I’ve thought of that I’ve seen no discussion of:

  • I’m disturbed by the fact that when I take a free CPR or First Aid class, I face more stringent requirements than I do for my CPL. When I get CPR training I have to demonstrate that I am up to date in my training and technique and recertify every year or two at the most; when I applied for my concealed pistol license, all I had to do was not currently be a felon, and I got a five-year license. Different states have different requirements; maybe it’s time to get a more consistent framework in place that requires more frequent check-ins and more frequent training?
  • While we’re talking about training, let’s hit another popular talking point: that armed private citizens are likely to stop mass shootings. While there are incidents of gun owners (typically store clerks) stopping an attempted robbery, the private citizens that have stopped instances of mass shootings all turn out to be private or off-duty security personnel who have substantially higher levels of firearms training than the average citizen (such as the Clackamas Mall shooting in Portland, OR).
  • One of the claimed benefits of having less restrictive firearms statutes is crime reduction. More armed citizens, it is said, equals lower crime. However, in order to have this kind of deterrent effect, don’t the criminals have to either know that people are carrying, or at least have a reasonable suspicion that people are carrying? Concealed carry would seem to be counter-productive; open carry would actually allow criminals to know what they’re about to get into. Is American culture ready for open carry? Again, this is an area we’d need more research on.
  • What about on-site gun safe inspections as part of the permit approval process? If one of the big concerns is people getting inappropriate access to guns, we should be making sure guns are being appropriately stored and locked away.

There’s a horrible patchwork of laws in place and there are some loopholes that should be closed, as long as we can do so without heading down the path of a gun registry. Come on – yes, there are some screwballs who want to take all guns away, just as there are some screwballs who think that they should be able to own fully operable RPGs and tanks and fighter jets. Most of us are somewhere in the middle, although not in the same part of the middle, but we can’t even have a realistic, reasoned discussion on this because the people who benefit financially from the status quo make sure we can’t.

At this point in time, we can’t have a meaningful conversation on what the “well-regulated” clause in the 2nd Amendment is supposed to mean. All of our other liberties have been slowly and carefully re-interpreted over time – sometimes overly so, usually with corrections in the long run – as the times changed, as the nation changed, and (yes) as we saw the fruits of some of the Founders’ mistakes. They were human; of course they made mistakes. They knew they would make mistakes and that we would have to adjust for situations they could never have foreseen. And yet anything but a strict reading of the 2nd Amendment is somehow off the table for even reasonable discussion? Why must we hew strictly to the Founding Fathers’ intentions in this one area when we willingly ignore them in other areas? (Check out what they had to say about professional politicians, lobbyists, and a two-party system.)

So, yes, sometimes it takes a Constitutional scholar to understand not only the original context of our Constitution, but also remember that the Founding Fathers always intended this Constitution to grow and live and adapt as our country did. It’s time for us to open the doors to a reasoned discussion on all areas of the 2nd Amendment, including the precise definition of which weapons it makes sense to allow citizens to have and what sorts of controls might be prudent to put in place to balance the right to self-defense with the reasonable safety of those around us.

2  

Attached To You: Exchange 2010 Storage Essays, part 3

[2100 PST 11/5/2012: Edited to fix some typos and missing words/sentences.]

So, um…I knew it was going to take me a while to write this third part of the Exchange 2010 storage saga…but over two years? Damn, guys. I don’t even know what to say, other than to get to it.

So, we have this lovely little three-axis storage model I’ve been talking about in parts 1 (JBOD vs. RAID) and 2 (SATA vs. SAS/FC). Part 3 addresses the third axis: SAN vs. DAS.

Exchange Storage DAS vs. SAN

What’s in a name?

It used to be that everyone agreed on the distinction between DAS, NAS, and SAN:

  • DAS typically meant dumb or entry-level storage arrays that connected to a single server (or at most two or three) via SCSI, SATA/SAS, or some other storage-specific cabling/protocol. DAS arrays had very little on-board smarts, other than the ability to run RAID configurations and present each RAID volume to the connected server as if it were a single disk.
  • NAS was file-level storage presented over a network connection to servers. The two common protocols were NFS (for Unix machines) and SMB/CIFS (for Windows machines). NAS solutions often included more functionality, such as direct interfaces with backup solutions, snapshots of the data volumes, replication of data to other units, and dynamic addition of storage.
  • SAN was high-end, expensive block-level storage presented over a separate network infrastructure such as FC or iSCSI over Ethernet. SAN systems offered even more features aimed at enterprise markets, including sophisticated disk partitioning and access mechanisms designed to achieve incredibly high levels of concurrency and performance.

As time passed and most vendors figured out that providing support for both file-level and block-level protocols made their systems more attractive by allowing them to be reconfigured and repurposed by their customers, the distinction between NAS and SAN began to blur. DAS, however, was definitely dumb storage. Heck, if you wanted to share it with multiple systems, you had to have multiple physical connections! (Anyone other than me remember those lovely days of using SCSI DAS arrays for poor man’s clustering by connecting two SCSI hosts – one with a non-default host ID – to the same SCSI chain?)

At any rate, it was all good. For Exchange 2003 and early Exchange 2007 deployments, storage vendors were happy because if you had more than a few hundred users, you almost certainly needed a NAS/SAN solution to consolidate the number of spindles required to meet your IOPS targets.

The heck you say!

In the middle of the Exchange 2007 era, Microsoft upset the applecart. It turns out that with the ongoing trend of larger mailboxes, plus Exchange 2007 SP1, CCR, and SCR, many customers were able to do something pretty cool: decrease the mailbox/database density to the point where (with Exchange 2007’s reduced IO profile) their databases no longer required a sophisticated storage solution to provide the requisite IOPS. In general, disks for SAN/NAS units have to be of higher quality and speed than those in DAS arrays, so they typically offer better performance but lower capacity – and a much higher price – than consumer-grade drives.

This trend only got more noticeable and deliberate in Exchange 2010, when Microsoft unified CCR and SCR into the DAG and moved replication to the application layer (as we discussed in Part 1). Microsoft specifically designed Exchange 2010 to be deployable on a direct-attached, RAID-less 2TB SATA 7200 RPM drive holding both a database and its log files, so they could scale hosted Exchange deployments up in an affordable fashion. Suddenly, Exchange no longer needed SAN/NAS units for most deployments – as long as your mailboxes were large enough (and therefore few enough per database) to keep the IOPS load per database below what a single drive could deliver.
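To make that last point concrete, here’s a back-of-the-envelope sketch of the math involved. Every number in it is an assumption for illustration only – your per-mailbox IOPS profile and per-spindle numbers need to come from your own sizing work, not from this post:

    # Rough sizing sketch - all values are illustrative assumptions
    $spindleIops    = 75      # assumed random IOPS from one 7200 RPM SATA drive
    $iopsPerMailbox = 0.10    # assumed per-mailbox IOPS profile
    $headroom       = 0.20    # assumed reserve for maintenance and activity spikes

    $usableIops = $spindleIops * (1 - $headroom)
    $mailboxesPerDisk = [math]::Floor($usableIops / $iopsPerMailbox)
    "{0} mailboxes per database disk at {1} IOPS each" -f $mailboxesPerDisk, $iopsPerMailbox

At those made-up numbers, a single spindle supports around 600 mailboxes, and 600 mailboxes sharing a 2TB drive works out to roughly 3GB apiece – which is exactly the “bigger mailboxes, fewer of them per database” trade-off described above.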

Needless to say, storage vendors have taken this about as light-heartedly as a coronary.

How many of you have heard in the past couple of years the message that “SAN and DAS are the same thing, just different protocols”?

Taken literally, DAS and SAN are only differences in connectivity.

The previous quote is from EMC, but I’ve heard the same thing from NetApp and other SAN vendors. Ever notice how it’s only the SAN vendors who are saying this?

I call shenanigans.

If they were the same thing, storage vendors wouldn’t be spending so much money on whitepapers and marketing to try to convince Exchange admins (more accurately, their managers) that there was really no difference and that the TCO of a SAN just happens to be a better bet.

What SAN vendors now push are features like replication, thin provisioning, virtualization and DR integration, backup and recovery – not to mention the traditional benefits of storage consolidation and centralized management. Here’s the catch, though. From my own experience, their models only work IF and ONLY IF you continue to deploy Exchange 2010 the same way you deployed Exchange 2003 and Exchange 2007:

  • deploying small mailboxes that concentrate IOPS in the same mailbox database
  • grouping mailboxes based on criteria meant to maximize single instance storage (SIS)
  • planning Exchange deployments around existing SAN features and backup strategies
  • relying on third-party functionality for HA and DR
  • deploying Exchange 2010 DAGs as if they were a shared copy cluster

When it comes right down to it, both SAN and DAS deployments are technically (and financially) feasible solutions for Exchange deployments, as long as you know exactly what your requirements are and let your requirements drive your choice of technology. I’ve had too many customers who started with the technology and insisted that they had to use that specific solution. Inevitably, by designing around technological elements, you either have to compromise requirements or spend unnecessary energy, time, and money solving unexpected complications.

So if both technologies are viable solutions, what factors should you consider to help decide between DAS and SAN?

Storage Complexity

You’ve probably heard a lot of other Exchange architects and pros talk about complexity – especially if they’re also Certified Masters. There’s a reason for this – more complex systems, all else being equal, are more prone to system outages and support calls. So why do so many Exchange “pros” insist on putting complexity into the storage design for their Exchange systems when they don’t even know what that complexity is getting them? Yes, that’s right, Exchange has millennia of man-hours poured into optimizing and testing the storage system so that your critical data is safe under almost all conditions, and then you go and design storage systems that increase the odds the fsck-up fairy[1] will come dance with your data in the pale moonlight.

SANs add complexity. They add more system components and drivers, extra bits of configuration, and additional systems with their own operating system, firmware, and maintenance requirements. I’ll pick on NetApp for a moment because I’m most familiar with their systems, but the rest of the vendors have their own stories that hit most of the same high points:

  • I have to pick either iSCSI or FC and configure the appropriate HBA/NICs plus infrastructure, plus drivers and firmware. If I’m using FC I get expensive FC HBAs and switches to manage. If I go with iSCSI I get additional GB or 10GB Ethernet interfaces in my Exchange servers and the joy of managing yet another isolated set of network adapters and making sure Exchange doesn’t perform DAG replication over them.
  • I have to install the NetApp Storage Tools.
  • I have to install the appropriate MPIO driver.
  • I have to install the SnapDrive service, because if I don’t, the NetApp snapshot capability won’t interface with Windows VSS, and if I’m doing software VSS why the hell am I even using a SAN?
  • I *should* install SnapManager for Exchange (although I don’t have to) so that my hardware VSS backups happen and I can use it as an interface to the rest of the NetApp protection products and offerings.
  • I need to make sure my NetApp guy has the storage controllers installed and configured. Did I want redundancy on the NetApp controller? Upgrades get to be fun and I have to coordinate all of that to make sure they don’t cause a system outage. I get to have lovely arguments with the NetApp storage guys about why they can’t just treat my LUNs the same way they treat the rest of them, yes I need my own aggregates and volumes, and no, please don’t give me the really expensive 15K RPM SAS drives that hold a thimbleful of data, because you’re going to make your storage guys pass out when they find out how many you need for all those LUNs and volumes (x2 because of your redundant DAG copies).[2]

Here’s the simple truth: SANs can be very reliable and stable. SANs can also be a single point of failure, because they are wicked expensive and SAN administrators and managers get put out with Exchange administrators who insist on daft restrictions like “give Exchange dedicated spindles” and “don’t put multiple copies of the same database on the same controller” and other party-pooping ways to make their imagined cost savings dwindle away to nothing. The SAN people have their own deployment best practices, just like Exchange people; those practices are designed to consolidate data for applications that don’t manage redundancy or availability on their own.

Every SAN I’ve ever worked with wants to treat all data the same way, so to make it reliable for Exchange you’re going to need to rock boats. This means more complexity (and money), and the SAN people don’t want complexity in their domain any more than you want it in yours. Unless you know exactly what benefits your solution will give you (and I’m not talking general marketing spew, I’m talking specific, realistic, quantified benefits), why in the world would you want to add complexity to your environment, especially if it’s going to start a rumble between the Exchange team and the SAN team that not even Jackie Chan and a hovercraft can fix?

Centralization and Silos

Over the past several years, IT pros and executives have heard a lot of talk about centralization. The argument for centralization is that instead of having “silos” or autonomous groups spread out, all doing the same types of things and repeating effort, you reorganize your operation so that all the storage stuff is handled by a single group, all the network stuff is handled by another group, and so on and so forth. This is another one of those principles and ideas that sounds great in theory, but can fall down in so many ways once you try to put it into practice.

The big flaw I’ve seen in most centralization efforts is that they end up creating artificial dependencies and decreasing overall service availability. Exchange already has a number of dependencies that you can’t do anything about, such as Active Directory, networking, and other external systems. It is not wise to create even more dependencies when the Exchange staff doesn’t have the authority to deal with the problems those dependencies create but is still on the hook for them, because the new SLAs look just like the old SLAs from the pro-silo regime.

Look, I understand that you need to realign your strategic initiatives to fully realize your operational synergies, but you can’t go do it half-assed, especially when you’re messing with business critical utility systems like corporate email. Deciding that you’re going to arbitrarily rearrange operations patterns without making sure those patterns match your actual business and operational requirements is not a recipe for long-term success.

Again, centralization is not automatically incompatible with Exchange. Doing it correctly, though, requires communication, coordination, and cross-training. It requires careful attention to business requirements, technical limitations, and operational procedures – and making sure all of these elements align. You can’t have a realistic 1-hour SLA for Exchange services when one of the potential causes for failure itself has a 4-hour SLA (and yes, I’ve seen this; holding Exchange metrics hostage to a virtualization group that has incompatible and competing priorities and SLAs makes nobody happy). If Exchange is critical to your organization, pulling the Exchange dependencies out of the central pool and back to where your Exchange team can directly operate on and fix them may be a better answer for your organization’s needs.

The centralization/silo debate is really just capitalism vs. socialism; strict capitalism makes nobody happy except hardcore libertarians, and strict socialism pulls the entire system down to the least common denominator[3]. The real answer is a blend and compromise of both principles, each where they make sense. In your organization, DAS and an Exchange silo just may better fit your business needs.

Management and Monitoring

In most Exchange deployments I’ve seen, this is the one area that is consistently neglected, so it doesn’t surprise me that it isn’t more of a factor in the SAN vs. DAS debate. Exchange 2010 does a lot to make sure the system stays up and operational, but it can’t manage everything. You need to have a good monitoring system in place, and you need automation or well-written, thorough processes for dealing with common warnings and low-level errors.

One of the advantages of a SAN is that (at least on the storage level) much of this is taken care of for you. Every SAN system I’ve worked with not only has built-in monitoring of the state of the disks and the storage hardware, but also has extensive integration with external monitoring systems. It’s really nice when, at the same time you get notification of a disk failure in the SAN, the SAN vendor gets notified too, so you know a spare will show up via FedEx the next day (possibly even brought by a technician who will replace it for you). This kind of service is not normally associated with DAS arrays.

However, even the SAN’s luscious – nay, sybaritic – level of notification luxury only protects you against SAN-level failures. SAN monitoring doesn’t know anything about Exchange 2010 database copy status or DAG cluster issues or Windows networking or RPC latency or CAS arrays or load balancer errors. Whether you deploy Exchange 2010 on a SAN or a DAS offering, you need a monitoring solution that provides this kind of end-to-end view of your system. Low-end applications that rely on system-agnostic IP pings and protocol endpoint probes are better than nothing, but they aren’t a substitute for an application-aware system such as Microsoft System Center Operations Manager or some other equivalent that understands all of the components in an Exchange DAG and queries them all for you.
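As a tiny illustration of the application-aware checks I mean, here’s a sketch using two Exchange 2010 cmdlets from the Exchange Management Shell. The server name and the “anything that didn’t pass” filter are my own assumptions about what you’d want to alert on:

    $server = "MBX01"   # hypothetical mailbox server name

    # Copy and replay queue status for every database copy on this server
    Get-MailboxDatabaseCopyStatus -Server $server |
        Select-Object Name, Status, CopyQueueLength, ReplayQueueLength, ContentIndexState

    # Built-in DAG replication health checks; show only the ones that didn't pass
    Test-ReplicationHealth -Server $server | Where-Object { $_.Result -ne "Passed" }

A real monitoring system wraps checks like these in scheduling, alerting, and trending – which is precisely the part that a SAN’s disk-level monitoring will never give you.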

You also need to think about your management software and processes. Many environments don’t like having changes made to centralized, critical dependency systems like a SAN without going through a well-defined (and relatively lengthy) change management process. In these environments, I have found it difficult to get emergency disk allocations pushed through in a timely fashion.

Why would we need emergency disk allocations in an Exchange 2010 system? Let me give you a few real examples:

  • Exchange-integrated applications[4] cause database-level corruption that drives server I/O and RPC latency up to levels that affect other users.
  • Disk-level firmware errors cause disk failure or drops in data transfer rates. Start doing wide-scale disk replacements on a SAN and you’re going to drive system utilization through the roof because of all the RAID group rebuilds going on. Be careful which disks you pull at one time, too – don’t want to pull two or three disks out of the same RAID group and have the entire thing drop offline.
  • Somebody else’s application starts having disk problems. You have to move the upper management’s mailboxes to new databases on unaffected disks until the problems are identified and resolved.
  • A routine maintenance operation on one SAN controller goes awry, taking out half of the database copies. There’s a SAN controller with some spare capacity, but databases need to be temporarily consolidated so there is enough room for two copies of all the databases during the repair on the original controller.

Needless to say, with DAS arrays, you don’t have to tailor your purchasing, management, and operations of Exchange storage around other applications. Yes, DAS arrays have failures too, but managing them can be simpler when the Exchange team is responsible for operations end-to-end.

Backup, Replication, and Resilience

The big question for you is this: what protection and resilience strategy do you want to follow? A lot of organizations are just going on auto-pilot and using backups for Exchange 2010 because that’s how they’ve always done it. But do you really, actually need them?

No, seriously, you need to think about this.

Why do you keep backups for Exchange? If you don’t have a compelling technical reason, find the people who are responsible for the business reason and ask them what they really care about – is it having tapes or a specific technology, or is it the ability to recover information within a specific time window? If it’s the latter, then you need to take a hard look at the Exchange 2010 native data protection regime:

  • At least three database copies
  • Increased deleted item/deleted mailbox recovery limits
  • Recoverable items and hold policies
  • Personal archives and message retention
  • Lagged database copies

If this combination of functionality meets your needs, you need to take a serious look at a DAS solution. A SAN solution is going to be a lot more expensive for the storage options to begin with, and it’s going to be even more expensive for more than two copies. None of my customers deployed more than two copies on a SAN, because not only did they have to budget for the increased per-disk cost, but they would have to deploy additional controllers and shelves to add the appropriate capacity and redundancy. Otherwise, they’d have had multiple copies on the same hardware, which really defeats the purpose. At that point, DAS becomes rather attractive when you start to tally up the true costs of the native data protection solution.
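For reference, most of the items in that list are just Exchange Management Shell settings. The sketch below shows a couple of them; the database name, server name, and retention windows are placeholder assumptions, not recommendations:

    # Stretch the deleted item / deleted mailbox recovery windows on a database
    Set-MailboxDatabase "DB01" -DeletedItemRetention "30.00:00:00" -MailboxRetention "60.00:00:00"

    # Add a third, lagged copy on another DAG member as a point-in-time safety net
    Add-MailboxDatabaseCopy -Identity "DB01" -MailboxServer "MBX03" -ReplayLagTime "7.00:00:00" -ActivationPreference 3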

So what do you do if the native data protection isn’t right for you and you need traditional backups? In my experience, one of the most compelling reasons for deploying Exchange on a SAN is the fantastic backup and recovery experience you get. In particular, NetApp’s snapshot-based architecture and SME backup application top my list. SME includes a specially licensed version of the Ontrack PowerControls utility to permit single mailbox recovery, all tied back into NetApp’s kick-ass snapshots. Plus, the backups happen more quickly because the VSS provider is the NetApp hardware, not a software driver in the NTFS file system stack, and you can run the ESE verification on a separate SME server to offload CPU from the mailbox servers. Other SAN vendors offer integrated backup options of roughly equivalent capability.

The only way you’re going to get close to that via DAS is if you deploy Data Protection Manager. And honestly, if you’re still relying on tape (or cloud) backups, I really recommend that you use something like DPM to stage everything to disk first, so that backups from your production servers land on a fast disk system. Get those VSS locks dealt with as quickly as possible and offload the ESE checks to the DPM system. Then do your tape backups off of the DPM server, and your backup windows are no longer coupled to your user-facing Exchange servers. That doesn’t even mention DPM’s 15-minute log synchronization and its use of deltas to minimize storage space in its own storage pool. DPM has a lot going for it.

A lot of SANs do offer synchronous and asynchronous replication options, often at the block level. These sound like good options, especially for enhancing site resiliency, and for other applications they often are. Don’t get suckered into using them for Exchange, though, unless they are certified to work with Exchange (and if it’s asynchronous replication, it won’t be). A DAS solution doesn’t offer this functionality, but that’s no loss in this column; whether you’re on SAN or DAS, you should be replicating via Exchange. Replicating using SAN block-level replication means the replication happens without Exchange being aware of it, which means that depending on when a failure happens, you could in the worst case end up with a corrupted database replica volume. Best case, your SAN-replicated database will not be in a consistent state, so you will have to run ESEUTIL to perform a consistency check and play log files forward before mounting that copy. If you’re going to do that, why are you running Exchange 2010?
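If you ever do land in that worst case, the tool in question is ESEUTIL. Here’s a hedged sketch of what that manual cleanup looks like – the paths, database name, and E00 log prefix are all assumptions for illustration:

    # Check whether the replica database file is in a Clean or Dirty Shutdown state
    eseutil /mh D:\DB01\DB01.edb

    # If it reports Dirty Shutdown, soft recovery replays the log files forward
    eseutil /r E00 /l D:\DB01\Logs /d D:\DB01

That is exactly the kind of manual babysitting that the DAG’s application-level replication exists to spare you.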

Now if you need a synchronous replication option, Exchange 2010 includes an API to allow a third-party provider to replace the native continuous replication capability. As far as I know, only one SAN vendor (EMC) has taken advantage of this option, so your options are pretty clear in this scenario.

Conclusion

We’ve covered a lot of ground in this post, so if you’re looking for a quick take-away, the answer is this:

Determine what your real requirements are, and pick your technology accordingly. Whenever possible, don’t make choices by technology or cost first without having a clear and detailed list of expected benefits in hand. You will typically find some requirement that makes your direction clear.

If anyone tells you that there’s a single right way to do it, they’re probably wrong. Having said that, though, what I’ve seen over the past couple of years is that the more people deviate from the Microsoft sweet spot, the more design compromises they end up making when perhaps they didn’t have to. Inertia and legacy have their place, but they need to be balanced with innovation and reinvention.

[1] Not a typo, I’m just showing off my Unix roots. The fsck utility (file system check) helps fix inconsistencies in the Unix file systems. Think chkdsk.

[2] Can you tell I’ve been in this rodeo once or twice? But I’m not bitter. And I do love NetApp because of SME, I just realize it’s not the right answer for everyone.

[3] Yes, I did in fact just go there. Blame it on the nearly two years of political crap we’ve suffered in the U.S. for this election season. November 6th can’t come soon enough.

[4] The application in this instance was an older version of Microsoft Dynamics CRM, very behind on its patches. There was a nasty calendar corruption bug that made my customer’s life hell for a while. The solution was to upgrade CRM to the right patch level, then move all of the affected mailboxes (about 40% of the users) to new databases. We didn’t need a lot of new databases, since we could move mailboxes in a swing fashion, but we did need to provision enough LUNs to have enough databases and copies to get the process done in a timely fashion. Each swing cycle took about two weeks because of change management, when we could have gotten it done much sooner.

0  

Alaric’s Fundraising Progress

Just wanted to drop a quick note to you all to keep you updated on Alaric’s progress in raising funds for his 2013 Summer of Awesome. I’ve created a static page that you can go to, and I’ll keep it updated until our goal of $5,000 is met. That’s not to say that I won’t be reminding you all about it here and on Twitter and Facebook on a regular basis, but I wanted to condense all the major details down to one place.

Update: We’re around $1,365 or so, give or take some pending funds from current fundraising efforts and some pledges we’ve not yet received but are expecting. Thank you to everyone who has helped us out so far!

0  

Can You Fix This PF Problem?

Today I got to chat with a colleague who was trying to troubleshoot a weird Exchange public folder replication problem. The environment, which is in the middle of an Exchange 2007 to Exchange 2010 migration, uses public folders heavily – many hundreds of top-level public folders with a lot of sub-folders. Many of these public folders are mail-enabled.

After creating public folder replicas on the Exchange 2010 public folder databases and confirming that the public folders were starting to replicate, my colleague received notice that specific mail-enabled public folders weren’t getting incoming mail content. Lo and behold, the HT queues were full of thousands of public folder replication messages, all queued up.
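As an aside, a quick way to get a feel for that kind of backlog from the Exchange Management Shell is sketched below. The Hub Transport server name is hypothetical, and grouping by subject is just a loose way of spotting replication traffic, not an official diagnostic:

    # Which queues on the Hub Transport server are piling up?
    Get-Queue -Server "HT01" | Sort-Object MessageCount -Descending |
        Select-Object Identity, Status, MessageCount

    # Group the queued messages by subject to see how much of it is PF replication mail
    Get-Message -Server "HT01" -ResultSize Unlimited |
        Group-Object Subject | Sort-Object Count -Descending |
        Select-Object Count, Name -First 10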

After looking at the event logs and turning up the logging levels, my colleague noticed a lot of the 4.3.2 STOREDRV.Deliver; recipient thread limit exceeded error messages mentioned in the Microsoft Exchange team blog post Store Driver Fault Isolation Improvements in Exchange 2010 SP1. Adding the RecipientThreadLimit key and setting it to a higher value helped temporarily, but soon the queues would begin backing up again.

At that point, my colleague called me for some suggestions. We talked over a bunch of things to check and troubleshooting trees to follow depending on what he found. Earlier tonight, I got an email confirming the root cause was identified. I was not surprised to find out that the cause turned out to be something relatively basic. Instead of just telling you what it was though, I want you to tell me which of the following options YOU think it is. I’ll follow up with the answer on Monday, 10/15.

Which of the following options is the root cause of the public folder replication issues?

2  

Alaric’s Summer of Awesome

Some of you might get some cognitive whiplash from the following post, given my recent vocal stance on Intel’s corporate fundraising for Boy Scouts of America. If your own views on Scouting are such that you are not able to entertain helping out or sponsoring a Scout, we understand — this post isn’t for you.

Many of you know that my son Alaric has been involved in Scouting for many years. Despite my own issues with the Scouting organization’s policies[1], we’ve seen a lot of benefits from Alaric’s involvement. There are some really great boys and adults we’ve met through Scouting and my boy has learned and grown a lot. He’s currently a Star Scout and an Ordeal member of the Order of the Arrow, and has been serving as a patrol leader for a year. Alaric is well on his way to Life Scout by the end of the year and has given himself a goal of becoming an Eagle Scout by the end of summer 2013.

[Photo: Alaric receiving four merit badges]

Next summer, Alaric has the opportunity to have the kind of summer adventure that every Boy Scout can only dream of:

  • It’s time for the National Scout Jamboree. This event typically takes place once every 4 years. This year is particularly cool because it will be the first jamboree held on the new Summit grounds in West Virginia’s Bechtel Reserve wilderness area, Scouting’s new permanent jamboree home and high adventure base.
  • Alaric’s troop will be heading to Philmont Scout Ranch in New Mexico, Scouting’s oldest and most famous high adventure backpacking camp. It can take years for troops to get a slot to come to Philmont for a mountain adventure.

Both of these are once-in-a-lifetime events for most Boy Scouts. The fact that Alaric has the chance to go to both is amazing, and it requires an immense amount of commitment and dedication from him (and us).

Unfortunately, my getting laid off in July threw a huge wrench into the fund-raising portion of this adventure. The total cost to participate in both events, including airline tickets and required gear upgrades, is going to be around $5,000 for our family. If I had a steady job, this wouldn’t have been a problem — we’d have covered half, and Alaric could have motored through fundraisers with his troop to get the rest covered. He’s already raised over $600 just through mowing lawns, odd jobs, and even a garage sale.

Even if you don’t want to donate to Scouting — are you willing to invest in my son? The typical Scouting fundraiser is the sale of Trail’s End popcorn products. Trail’s End is an amazing outfit: they make online sales very easy, they produce fantastic popcorn, and they offer the option of making donations to help send popcorn to active-duty military units.

Alaric’s popcorn pitch letter can be seen below:

Dear Dad’s Reader,

Did you know you can help send me to the National Jamboree? Just click here and place an order on my behalf. There are all kinds of products to choose from, and every product has better flavor and is better for you.

Plus, you won’t just be helping me go to Jamboree. 70% of your purchase will benefit Scouting in my area and help more kids experience all the things that make Scouting great. It’s a situation where everyone wins.

Thanks for your support,

Alaric

P.S. If you cannot click on the link above, copy and paste this full URL:
http://www.trails-end.com/shop/scouts/email_referral.jsp?id=3440240

If you would rather donate to Alaric directly, contact me using the form below.

If you’re still with me this far, thanks for reading and for your support.

[1] I have two main issues. The first is that they discriminate against gay boys, girls (several programs for older youth are co-ed), and leaders. The second is that their religious requirements discriminate against boys who are atheists or agnostic yet are willing to investigate a religion in the spirit of understanding and tolerance. Look at Girl Scouts to see how these issues can be dealt with sensibly.

0  

Forced Obsolescence

ZDNet’s David Meyer noted earlier today that Google is about to shut down support for exporting the legacy Microsoft Office file formats (.doc, .xls, and .ppt) from Google Apps as of October 1, 2012. The Google blog notes that Google Apps users will still be able to import data from those formats. However, if they want Office compatibility, they need to export to the Office 2007 formats (.docx, .xlsx, and .pptx).

When Office 2007 was still in beta back in 2006, Microsoft released optional patches for Office 2003 to allow it to open and save the new file formats. Over time, these patches got included in Windows Update, so if you still have Office 2003 but have been updating, you probably have this capability today. Office 2003 can’t open these newer documents with 100% fidelity, but it’s good enough to get the job done. And if you’re on an even earlier version of Office, Microsoft hasn’t forgotten you; Office 2000 and Office XP (2002) users can also download the Compatibility Pack.

What boggles me are some of the comments on the ZDNet article. I can’t understand why anyone would think this was a bad idea:

  • The legacy formats are bloated and ill-defined. As a result, files saved in those formats are more prone to corruption over the document lifecycle, not to mention when moving through various import/export filters. Heck, just opening them in different versions of Word can be enough to break the files.
  • The legacy formats are larger — much larger — than the new formats. Between the use of standard ZIP compression (each new-format document is actually an archive file containing a whole folder/file structure inside – see the sketch after this list) and the smart use of XML rather than proprietary binary data, the new formats can pack a lot more data into the same space. Included picture files, for example, can be stored in compressible formats rather than as space-hogging uncompressed bitmaps.
  • The new formats are safer. Macro information is stored safely away from the actual document data, in clearly marked macro-enabled variants of the formats (.docm, .xlsm, .pptm), and Office (at least) can block the loading and saving of macro content from those files.
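If you want to see that container structure for yourself, here’s a small sketch that cracks a document open using the System.IO.Packaging classes that ship with .NET 3.0 and later (the file path is just a placeholder):

    Add-Type -AssemblyName WindowsBase
    # Open the document read-only and list every part inside the package
    $pkg = [System.IO.Packaging.Package]::Open("C:\temp\example.docx", [System.IO.FileMode]::Open, [System.IO.FileAccess]::Read)
    $pkg.GetParts() | Select-Object Uri, ContentType
    $pkg.Close()

Rename the same file to .zip and any archive tool will show you the identical folder structure.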

For many companies it would simply be cost-prohibitive to convert legacy files into the new formats…but it might not be a bad idea for critical files. Nowadays, I personally try to make sure I’m only writing new format Office files unless the people I am working with specifically ask for one of the legacy formats. I’m glad to see that Google is doing the right thing in helping make these legacy formats nothing more than a historical footnote — and I’d love to see Microsoft remove write support for them in Office 2013.

0  

MEC Day 3

Unfortunately, this is the day that Murphy caught up with me, in the form of a migraine. When these hit, the only thing I can do is try to sleep it off.

I ended up not hitting the conference center until a bit after noon, just in time to brave lunch. What would a Microsoft conference be without the dreaded salmon meal? At that point, my stomach rebelled and my head agreed, so I wandered back to the MVP area and chatted until it was time to head upstairs to my room for my last session at 1pm.

Big thanks to everyone who showed up for the session. I took some of the feedback from Day 2, and combined with my increased mellowness from the migraine, I made some changes to the structure of the session and clarifications to the message I wanted the attendees to walk away with. We had what I thought was a brilliant session. Apparently, I do my best work while in pain.

After that, it was down to the expo floor for a quick round of good-byes, then off to catch my shuttle to Orlando International Airport. I was able to get checked in with more than enough time for a leisurely meal, then on to gate 10 where I met up with various other MEC attendees on their way back home to Seattle.

WHAT AN AMAZING CONFERENCE. I had SO much fun, even with missing essentially all of Day 3 and the wonderful sessions I’d planned to sit in on. My apologies for the missing Twitter stream that day.

We’ll have to do this again next year. I hope you’ll be there!

0  

And @marypcbuk Nails IT

Amid all the bustle of MEC, I’ve not taken a bunch of time to read my normal email, blogs, etc. However, this article from ZDNet caught my eye:

Windows 8: Why IT admins don’t know best by Mary Branscombe

The gist of it is that IT departments spend a lot of time and effort trying to stop users from doing things with technology when they would often be better served enabling users. Users these days are not shy about embracing new technology, and Mary argues that users find creative ways around IT admins who are impediments:

The reality is that users are pushing technology in the workplace — and out of it. The Olympics has done more to advance flexible and remote working than a decade of IT pilot projects.

What got her going is the tale of an IT admin who found a way to disable, via Group Policy, the short tutorial that users are given on navigating Windows 8 the first time they log on.

I see this behavior all the time from admins and users – admins say “No” and users say “Bet me.” Users usually win this fight, too, because they are finding ways to get their work done. A good admin doesn’t say “No” – they say, “Let me help you find the best way to get that done.”

Mary finishes with this timely reminder:

See something new in Windows 8? If your first impulse is to look for a way to turn it off, be aware that you’re training your users to work around you.

What a refreshing dose of common sense.

0  

MEC Day 2

Today was another fun-filled and informative day at MEC:

  • The day started off with a keynote by Microsoft Distinguished Engineer Perry Clarke, head of Exchange Software Development. Perry writes a blog called Ask Perry, which regularly includes a video feature, Geek Out With Perry, and the keynote was done in that format. The latter half was quite good, but the first half was a little slow and (I thought) lightweight for a deeply technical conference such as MEC. However, that could just have been a gradual wake-up for the people still recovering from last night’s attendee party at Universal’s Islands of Adventure theme park.
  • After a short break, we were off to the interactive sessions! I got caught up in a conversation and made it to my first session a few minutes late – and wasn’t able to enter, as the room was at capacity. So I missed Jeff Mealiffe’s session on virtualizing Exchange 2013, much to my annoyance. Instead I headed down to the exhibit floor and hung out in the MVP area, talking with a bunch of folks (including one of my homies from MCM R1).
  • At lunch I caught up with some old friends – one of the best reasons for coming.
  • After lunch, I squeezed (and by squeezed I am being literal; we were crammed into the room like sardines) into Bill Thompson’s session on the Exchange 2013 transport architecture. WOW. Some bold changes made, but I think they’re going to be good changes.
  • At 3:00, my time at the front of the room had come, and I gave the first run of my session on Exchange 2010 virtualization lessons learned. The room was mostly full and there were some good questions. I received some interesting feedback later, so I will be wrapping that into tomorrow’s repeat presentation.
  • My last session of the day was Greg Taylor’s session on Exchange 2013 load balancing. Again, lots of good surprises and changes, and as always watching Greg in action was entertaining and informative. This is, after all, the man who talks about Exchange client access using elephant’s asses.
  • Afterwards, I caught up with former co-workers and enjoyed a couple of beers at MAPI Hour in the lovely central atrium of the Gaylord Palms Hotel, then went out to dinner (fantastic burger at the Wrecker’s Sports Bar). Capped the night off with a sundae.

Two down, one more to go. What a fantastic time I’ve been having!

1  

MEC Day 1

After 10 years of absence, the Microsoft Exchange Conference is back. Yes, that’s right, the last time MEC happened was in 2002. How do I know this? I’ve seen a couple of people today who still had their MEC 2002 badges. HOLY CRAP, DUDES. I’m a serious packrat and not even *I* keep my old conference badges.

I decided to live-tweet the sessions I attended. I did a good job, too – my Twitter statistics tell me I’ve sent 258 tweets! If any of my Facebook friends are still bothering to read my automatic Twitter-to-Facebook updates…shit, sorry. Two more days to go, and I know I can’t be nearly as prolific today or Wednesday because I’m presenting a session each day:

  • E14.310 Three Years In: Looking Back At Virtualizing Exchange 2010
    Tuesday, September 25 2012 3:00 PM – 4:15 PM in Tallahassee 1
  • E14.310R1 Three Years In: Looking Back At Virtualizing Exchange 2010
    Wednesday, September 26 2012 1:00 PM – 2:15 PM in Tallahassee 1

Monday was the “all Microsoft, all Exchange 2013” day with typical keynotes and breakouts. Today, we start the “un-conference” – smaller, more interactive sessions, led by members of the community like myself. Today and tomorrow will be a lot more peer-to-peer…which will be fun.

See you out there! Drop me a note or track me down to let me know if you read my blog or have a question you’d like me to answer!

0  

The BSA Funding Hornet’s Nest

Earlier today I posted a Scouting-related tweet that drew a strong reaction from several people. Here’s the tweet:

Intel Corporation: Pull your financial support until the Boy Scouts pull their anti-gay policy http://www.change.org/petitions/intel-corporation-pull-your-financial-support-until-the-boy-scouts-pull-their-anti-gay-policy … via @change

I was asked if I thought that it was better for Scouting to lose funds. I was asked how doing this would help the boys in Scouting. I was told that it was abusive and manipulative to use funding to try to effect change in Scouting’s policies over what is a relatively minor matter.

I am a former Boy Scout, my son is a Boy Scout, I have just been registered as an adult Scouter, and my daughter is looking at joining a Venture crew sometime in the next year. I think that Scouting is a fantastic youth program. So how can I support Scouts while calling for Intel to defund them?

I have two main reasons to support the petition to Intel.

Reason 1: Choices have consequences

The value of Scouting isn’t just the outdoor skills and learning how to handle yourself in the wilderness; it’s in the character formation that goes along with the outdoor program. Scouting teaches principles and duty. Scouting youth often drop out when they hit a certain age because of the peer pressure they’re getting by being different, by standing up for their beliefs and values. The kids who stay in Scouting learn that making a stand comes with consequences. It is precisely this kind of character formation that many former Scouts go on to say is the most valuable lesson they learn from Scouting.

The national Scouting organization has now said multiple times that they see having gay Scouts and Scouters as somehow being incompatible with Scouting ideals. Intel and the other companies identified in this article by Andy Birkey on The American Independent (linked to from the petition, BTW) have made their policies on charitable donations crystal clear. These policies are not new. These companies need to make sure their house is in order by verifying that their giving is in line with their policies (as the ones in orange have done). However, Scouting has a responsibility here too. By continuing to accept money from organizations such as Intel in violation of their stated donation guidelines, I believe that Scouting is sending the message that money is more important than principles. I’ve heard a lot of justification for accepting the money, but when it comes right down to it, taking donations from these companies when you don’t comply with their guidelines is hypocrisy, plain and simple. I think Scouting is better than that.

Whether I agree with the national organization’s stance on gay Scouts/Scouters or not, I think the unwritten message is doing more harm in the long run than the immediate defunding would do. I’m confident that should Scouting actually have the courage to turn down this money, alternate funding sources would quickly emerge in today’s polarized climate. Look at the Chick-fil-A protests and responses if you doubt me. So no, I’m not worried that there would be long-term financial damage to Scouting.

It’s not like this is a theoretical situation for my family. Our local troop enjoys a high level of funding thanks to Microsoft matching contributions to the men and women who volunteer as our Scouters and committee members, many of whom are full-time Microsoft employees. I suspect that Microsoft’s policies are actually the same as Intel’s, based on their publicly stated policies for software donations to charities. If Microsoft were to stop funding Scouting (or Scouting were to stop taking Microsoft dollars because of this policy) our troop would be directly and severely affected.

I personally know at least two gay Scouters, and I suspect I know more. Scouting would somehow find the money to replace the lost donations. I don’t know how they’d replace the people I’m thinking of.

I’ve talked this over with my son on multiple occasions. When we discussed this particular petition and the fact that I was going to publicly support it, we talked about the implications. I asked him if he had any concerns. His response: “Do it, Dad. Scouting needs a kick in the ass.” (Yes, he’s my kid.)

And if you think I’m somehow being abusive or manipulative for supporting the use of defunding as a tool for policy change, go back to that Birkey article:

In a brief filed in the landmark case of Boy Scouts of America v. Dale, a lawyer for the LDS Church warned that the church would leave the scouts if gays were allowed to be scout leaders.

“If the appointment of scout leaders cannot be limited to those who live and affirm the sexual standards of BSA and its religious sponsors, the Scouting Movement as now constituted will cease to exist,” wrote Von G. Keetch on behalf of the LDS Church and several other religious organizations in 2000. “The Church of Jesus Christ of Latter-day Saints — the largest single sponsor of Scouting units in the United States — would withdraw from Scouting if it were compelled to accept openly homosexual Scout leaders.”

According to the Chartered Organizations and the Boy Scouts of America fact sheet, as of December 31, 2011 there are over 100,000 chartered Scouting units, with nearly 7/10 of them chartered by religious organizations. In the tables in that fact sheet, we see data on the top 25 religious charterers, top 20 civic charterers, and the educational charterers – giving us data on 55,100 units (just over half) and 1,031,240 youth. According to this data, the LDS Church sponsors almost 35% of the Scouting units in the BSA. Yet, according to this same data, they have only 16% of the actual youth in Scouting. The youth-to-unit average for the LDS Church is a mere 11.1, which is the lowest of any organization (or group of organizations) listed in the fact sheet data.

Several of the organizations on that list, including the next largest religious sponsor (the United Methodist Church – 11,078 units, 371,491 youth, 33.5 youth per unit, 10% of the total units, and 14% of the total youth) would support and welcome gay Scouts and Scouters. The LDS Church gets to be vocal about it because of that one-third share of units – that share translates into money for Scouting. This kind of ultimatum is in fact what manipulative behavior (using the threat of defunding) looks like.

Reason 2: People who see a problem need to be part of the solution

I’m continuing to get more involved with Scouting for one simple reason: I believe that if I see something I think is wrong, I need to be part of the solution. I don’t think it’s right that Scouting be in a position where it can have its cake and eat it too. However, I’m not going to throw the baby out with the bathwater; I see the incredible value the Scouting program gives to young men (and the young women who participate in the Venturer program).

My own religious beliefs and principles move me to be more involved precisely because I think Scouting needs more Scouts and Scouters who are open about their support for changing these policies. I know people who gave up on Scouting; I refuse to be one of them.

I want Scouting to change its policies, but I’m willing to keep being a part of it during those changes. I’m not trying to take my bat and ball and go home if the game doesn’t go my way. I want Scouts to continue producing young people of character for future generations.

Want to see the data I’m looking at? I got the fact sheet from the link above, brought the data into Excel, and added formulas for unit/youth ratios and percentages. I’ve put this spreadsheet online publicly via SkyDrive.


TMG? Yeah, you knew me!

Microsoft today officially announced a piece of news that came as very little surprise to anyone who has been paying attention for the last year. On May 25, 2011, Gartner published an unsubstantiated claim that Microsoft had told them there would be no future release of Forefront Threat Management Gateway (TMG).

Microsoft finally confirmed that information. Although the TMG product will receive mainstream support until April 14, 2015 (a little bit more than 2.5 years from time of writing), it will no longer be available for sale come December 1, 2012.

Why do Exchange people care? Because TMG was the simple, no-brainer solution for environments that needed a reverse proxy in a DMZ network. Many organizations can’t allow incoming connections from the Internet to cross into an interior network. TMG provided protocol-level inspection and NAT out of the box, and could be easily configured for service-aware CAS load balancing and pre-authentication. As I said, no-brainer.

TMG had its limitations, though. No IPv6 support, poor NAT support, and an impressively stupid inability to proxy any non-HTTP protocols in a one-armed configuration. The “clustered” enterprise configuration was sometimes a pain in the ass to troubleshoot and work with when the central configuration database broke (and it seemed more fragile than it should be).

The big surprise for me is that TMG shares the chopping block with the on-server Forefront protection products for Exchange, SharePoint, and Lync/OCS. I personally have had more trouble than I care for with the Exchange product — it (as you might expect) eats up CPU like nobody’s business, which made care and feeding of Exchange servers harder than it needed to be. Still, to only offer online service — that’s a telling move.


Duke of URL

Just a quick note to let you know about a change or two I’ve made around the site.

  • Changed the primary URL of the site from www.thecabal.org to www.devinonearth.com. This is actually something I’ve been wanting to do for a long time, to reflect the site’s really awesome branding. Devin on Earth has long been its own entity that has no real connection to my original web site.
  • Added a secondary URL of www.devinganger.com to the site. This is a nod toward the future as I get fiction projects finished and published – author domains are a good thing to have, and I’m lucky mine is unique. Both www.devinganger.com and www.thecabal.org will keep working, so no links will ever go stale.

As a final aside, this is the 600th post on the site. W00t!


My Five Favorite Features of Exchange Server 2013 Preview

Exchange Server 2013 Preview was released a few weeks ago to give us a first look at what the future holds in store for Exchange. I’ve had a couple of weeks to dig into it in depth, so here are my quick impressions of the five changes I like the most about Exchange 2013.

  1. Client rendering is moved from the Client Access role to the Mailbox role. (TechNet) Yes, this means some interesting architectural changes to SMTP, HTTP, and RPC, but I think it will help spread load out to where it should be – the servers that host active users’ mailboxes.
  2. The Client Access role is now a stateless proxy. (TechNet) This means we no longer need an expensive L7 load balancer with all sorts of fancy complicated session cookies in our HTTP/HTTPS sessions. It means a simple L4 load balancer is enough to scale the load for thousands of users based solely on source IP and port. No SSL offload required!
  3. The routing logic now recognizes DAG boundaries. (TechNet) This is pretty boss – members of a DAG that are spread across multiple sites will still act as if they were local when routing messages to each other. It’s almost like the concept of routing groups has come back in a very limited way.
  4. No more MAPI-RPC over TCP. (TechNet) Seriously. Outlook Anywhere (aka RPC over HTTPS) is where it’s at. As a result, Autodiscover for clients is mandatory, not just a really damn good idea. Firewall discussions just got MUCH easier. Believe it or not, this simplifies namespace and certificate planning… (There’s a quick shell sketch after this list showing the two settings to check.)
  5. Public folders are now mailbox content. (TechNet) Instead of having a completely separate back-end mechanism for public folders, they’re now put in special mailboxes. Yes, this means they are no longer multi-master…but honestly, that causes more angst than it solves in most environments. And now SharePoint and other third-party apps can get to public folder content more easily…

There are a few things I’m not as wild about, but this is a preview and there’s no point kvetching about a moving target. We’ll see how things shake down.

I’m looking forward to getting a deeper dive at MEC in a couple of weeks, where I’ll be presenting a session on lessons learned in virtualizing Exchange 2010. Are you planning on attending?

Have you had a chance to play with Exchange 2013 yet, or at least read the preview documentation? What features are your favorite? What changes have you wondering about the implications? Send me an email or comment and I’ll see if I can’t answer you in a future blog post!


Can’t make a bootable USB stick for Windows 8? Join the club!

I was trying to make a bootable USB stick for Windows 8 this morning, using the Windows 7 USB/DVD Download Tool from Microsoft and the process outlined in this Redmond Pie article (the same basic steps can be found in a number of places). Even though the tool originated for Windows 7 and the steps I linked to are for the Windows 8 Consumer Preview, it all still works fine with Windows 8 RTM.

The steps are pretty simple:

  1. Download and install the tool.
  2. Download the ISO image of the version of Windows you want to install (Windows 7 and 8 for sure, I believe it works with Windows Server 2008 R2 and Windows Server 2012 RC as well).
  3. Plug in a USB stick (8GB or larger recommended) that is either blank or has no data on it you want to keep (it will be reformatted as part of the process).
  4. Run the tool and pick the ISO image.
  5. Select the USB drive (note that this tool can also burn the ISO to DVD).
  6. Wait for the tool to reformat the USB stick, copy the ISO contents to the stick, and make it bootable.

Everything was going fine for me until I got to step 6. The tool would format the USB stick, and then it would immediately fail before beginning the file copy:

DownloadToolError

Redmond, we have a problem…

At first I was wondering if it was related to UAC (it wasn’t) or a bad ISO image (it wasn’t). So I plugged the appropriate search terms into Bing and away we went, where I finally found this thread on the TechNet forums, which led me to this comment in the thread (wasn’t even marked as the solution, although it sure should have been):

We ran across this same "Error during backup., Usb; Unable to set active partition. Return code 87" with DataStick Pro 16 GB USB sticks. The Windows 7 DVD/USB Download Tool would format and then fail as soon as the copy started.

We ended up finding that the USB stick has a partition that starts at position 0 according to DiskPart. We used DiskPart to select the disk that was the USB, then ran Clean, then created the partition again. This time it was at position 1024. The USB stick was removed then reinserted and Windows prompted to format the USB stick, answer Yes.

The Windows 7 DVD/USB Download Tool was now able to copy files.

So, here’s the process I followed:

DiskPartUSBFix

Follow my simple step-by-step instructions. I make hacking FUN!

To do it yourself, launch a command window (either legacy CMD or PowerShell, doesn’t matter) with Administrator privileges and type diskpart to fire up the tool:

  1. LIST DISK gives a listing of all the drives attached to the system. At this point, no disk is selected.
  2. I have a lot of disks here, in part because my system includes an always-active 5-in-1 card reader (disks 1 through 5 that say no media). I also have an external USB hard drive (230GB? How cute!) at disk 6. Disk 7, however — that’s the USB stick. Note that the "free" column is *not* showing free space on the drive in terms of file system — it’s showing free space that isn’t allocated to a partition/volume.
  3. Diskpart, like a lot of Microsoft command-line tools, often requires you to select a specific item for focus, at which point other commands that you run will then run against the currently focused object. Use SELECT DISK to set the focus on your USB stick.
  4. Now that the USB stick has focus, the LIST PART command will run against the selected disk and show us the partitions on that disk.
  5. Uh-oh. This is a problem. With a zero-byte offset on that partition (USB sticks typically have only a single partition), there’s not enough room for that partition to be marked bootable and for the boot loader to be put on the disk. The volume starts at the first available byte. Windows needs a little bit of room — typically only one megabyte — for the initial boot loader code (which then jumps into the boot code in the bootable disk partition).
  6. So, let’s use CLEAN to nuke the partitions and restore this USB stick to a fully blank state.
  7. Using LIST PART again (still focused on the disk object) confirms that we’ve removed the offending partition. You can create a new partition in diskpart, but I happened to have the Disk Manager MMC console open already as part of my troubleshooting, so that’s what I used to create the new partition.
  8. Another LIST PART to confirm that everything is the way it should be…
  9. Yup! Notice we have that 1 MB offset in place now. There’s now enough room at the start of the USB stick for the boot loader code to be placed.
  10. Use EXIT to close up diskpart.
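If you’d rather script the fix than click through it interactively, here’s a minimal sketch that feeds the same commands to diskpart through its /s switch. The disk number (7) is what the stick happened to be on my machine and is the one assumption you absolutely must change; run LIST DISK first and double-check, because CLEAN is unforgiving.

    # Hedged sketch: the same fix, scripted. Disk 7 was MY USB stick -- verify yours with
    # LIST DISK first, because CLEAN wipes whatever disk you point it at.
    $fix = 'select disk 7', 'clean', 'create partition primary', 'exit'
    $fix | Set-Content "$env:TEMP\usbfix.txt" -Encoding ASCII
    diskpart /s "$env:TEMP\usbfix.txt"
    # Pull the stick, reinsert it, and let Windows format it when prompted; the Download Tool
    # will reformat it anyway during its own run.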

This time, when I followed the steps with the Download Tool, the bootable USB stick was created without further ado. Off to install Windows 8!


Back on the market

I sent out a brief tweet about this Friday and have received a number of queries, so I figured I should expand on this publicly.

No, I am no longer with Trace3. No, this was not my decision — I was happy with my position and work and was excited about what was happening there. At the same time, this was not a complete shock. I’m not at liberty to go into it (and even if I were, I don’t think I would anyway) but all living organisms (including vibrant corporations) change, and while many of those changes are good for the organism as a whole, they aren’t always so great for individual cells.

I have no hard feelings. I had a fantastic time at Trace3 and have learned a lot. I wish everyone there all the success in the world and am reasonably confident they’ll grab it. At the same time, there were some aspects of my fit at Trace3 that could have been improved on. Always being remote with no local co-workers, for one — that was a definite downer.

I’m feeling confident in my ability to find my next job. I have some exciting opportunities under way. In the meantime, though, if you have a lead or opportunity, let me know — and yes, that does include the potential for 1099 independent consulting work.


Beating Verisign certificate woes in Exchange

I’ve seen this problem in several customers over the last two years, and now I’m seeing signs of it in other places. I want to document what I found so that you can avoid the pain we had to go through.

The Problem: Verisign certificates cause Exchange publishing problems

So here’s the scenario: you’re deploying Exchange 2010 (or some other version, this is not a version-dependent issue with Exchange) and you’re using a Verisign certificate to publish your client access servers. You may be using a load balancer with SSL offload or pass-through, a reverse proxy like TMG 2010, some combination of the above, or you may even be publishing your CAS roles directly. However you publish Exchange, though, you’re running into a multitude of problems:

  • You can’t completely pass ExRCA’s validation checks. You get an error something like:  The certificate is not trusted on any version of Windows Phone device. Root = CN=VeriSign Class 3 Public Primary Certification Authority – G5, OU=”(c) 2006 VeriSign, Inc. – For authorized use only”, OU=VeriSign Trust Network, O=”VeriSign, Inc.”, C=US
  • You have random certificate validation errors across a multitude of clients, typically mobile clients such as smartphones and tablets. However, some desktop clients and browsers may show issues as well.
  • When you view the validation chain for your site certificate on multiple devices, they are not consistent.

These can be very hard problems to diagnose and fix; the first time I ran across it, I had to get additional high-level Trace3 engineers on the call along with the customer and a Microsoft support representative to help figure out what the problem was and how to fix it.

The Diagnosis: Cross-chained certificates with an invalid root

So what’s causing this difficult problem? It’s your basic case of a cross-chained certificate with an invalid root certificate. “Oh, sure,” I hear you saying now. “That clears it right up then.” The cause sounds esoteric, but it’s actually not hard to understand when you remember how certificates work: through a chain of validation. Your Exchange server certificate is just one link in an entire chain. Each link is represented by an X.509v3 digital certificate that is the footprint of the underlying server it represents.

At the base of this chain (aka the root) is the root certificate authority (CA) server. Its digital certificate differs from the others because it’s self-signed – no other CA server has signed this server’s certificate. Now, you can use a root CA server to issue certificates directly to customers, but that’s actually a bad idea for a lot of reasons. So instead, you have one or more intermediate CA servers added into the chain, and if you have multiple layers, then the outermost layer consists of the CA servers that process customer requests. So a typical commercially generated certificate has a validation chain of 3-4 layers: the root CA, one or two intermediate CAs, and your server certificate.

Remember how I said there were reasons to not use root CAs to generate customer certificates? You can probably read up on the security rationales behind this design, but some of the practical reasons include:

  • The ability to offer different classes of service, signed by separate root servers. Instead of having to maintain separate farms of intermediate servers, you can have one pool of intermediate servers that issue certificates for different tiers of service.
  • The ability to retire root and intermediate CA servers without invalidating all of the certificates issued through that root chain, if the intermediate CA servers cross-chain from multiple roots. That is, the first layer intermediate CA servers’ certificates are signed by multiple root CA servers, and the second layer intermediate CA servers’ certificates are signed by multiple intermediate CA servers from the first layer.

So, cross-chaining is a valid practice that helps provide redundancy for certificate authorities and helps protect your investment in certificates. Imagine what a pain it would be if one of your intermediate CAs got revoked and nuked all of your certificates. I’m not terribly fond of having to redeploy certificates for my whole infrastructure without warning.

However, sometimes cross-chained certificates can cause problems, especially when they interact with another feature of the X.509v3 specification: the Authority Information Access (AIA) certificate extension. Imagine a situation where a client (such as a web browser trying to connect to OWA), presented with an X.509v3 certificate for an Exchange server, cannot validate the certificate chain because it doesn’t have the upstream intermediate CA certificate.

If the Exchange server certificate has the AIA extension, the client has the information it needs to try to retrieve the missing intermediate CA certificate – either retrieving it from the HTTPS server, or by contacting a URI from the CA to download it directly. This only works for intermediate CA certificates; you can’t retrieve the root CA certificate this way. So, if you are missing the entire certificate chain, AIA won’t allow you to validate it, but as long as you have the signing root CA certificate, you can fill in any missing intermediate CA certificates this way.

Here’s the catch: some client devices can only request missing certificates from the HTTPS server. This doesn’t sound so bad…but what happens if the server’s certificate is cross-chained, and the certificate chain on the server goes to a root certificate that the device doesn’t have…even if it does have another valid root to another possible chain? What happens is certificate validation failure, on a certificate that tested as validated when you installed it on the Exchange server.

I want to note here that I’ve only personally seen this problem with Verisign certificates, but it’s a potential problem for any certificate authority.
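If you want to see exactly which chain a given Windows machine builds for your certificate, here’s a minimal PowerShell sketch using the .NET X509Chain class; the .cer path is a placeholder for an exported (public-only) copy of your server certificate.

    # Hedged sketch: build and print the certificate chain as this particular machine resolves it.
    # C:\temp\owa.cer is a hypothetical path to an exported copy of the server certificate.
    $cert  = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2 'C:\temp\owa.cer'
    $chain = New-Object System.Security.Cryptography.X509Certificates.X509Chain
    $null  = $chain.Build($cert)
    $chain.ChainElements | ForEach-Object { $_.Certificate.Subject }
    # If the last subject printed is a root you didn't expect, you're looking at the cross-chain issue.

Run it on the Exchange server and again on the reverse proxy or load balancer; if the two lists don’t match, that’s a strong hint you’re in cross-chained territory.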

The Fix: Disable the invalid root

We know the problem and we know why it happens. Now it’s time to fix it by disabling the invalid root.

Step #1 is to find the root. Fire up the Certificates MMC snap-in, find your Exchange server certificate, and view the certificate chain properties. This is what the incorrect chain has looked like on the servers I’ve seen it on:

[Image: The invalid root CA server circled in red]

That’s a not very helpful friendly name on that certificate, so let’s take a look at the detailed properties:

[Image: Meet “VeriSign Class 3 Public Primary Certification Authority – G5”]

Step #2 is also performed in the Certificates MMC snap-in. Navigate to the Third-Party Root Certification Authorities node and find your certificate. Match the attributes above to the certificate below:

[Image: Root CA certificate hide and seek]

Right-click the certificate and select Properties (don’t just open the certificate) to get the following dialog, where you will want to select the option to disable the certificate for all purposes:

[Image: C’mon…you know you want to]

Go back to the server certificate and view the validation chain again. This time, you should see the sweet, sweet sign of victory (if not, close down the MMC and open it up again):

[Image: Working on the chain gang]
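If you’d like to double-check your work from PowerShell rather than squinting at the MMC, this sketch lists the matching certificate in the Third-Party Root Certification Authorities store, which the Cert: drive exposes as AuthRoot; the subject string is the G5 CA from the screenshots above. The disable-for-all-purposes step itself still happens in the MMC.

    # Hedged sketch: confirm which G5 root certificate is sitting in the Third-Party Root store.
    Get-ChildItem Cert:\LocalMachine\AuthRoot |
        Where-Object { $_.Subject -like '*VeriSign Class 3 Public Primary Certification Authority - G5*' } |
        Format-List Subject, Thumbprint, NotBefore, NotAfter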

It’s a relatively easy process…so where do you need to do it? Great question!

The process I outlined is obviously for Windows servers, so you would think that you can fix this just on the Exchange CAS roles in your Internet-facing sites. However, you may have additional work to do depending on how you’re publishing Exchange:

  • If you’re using a hardware load balancer with the SSL certificate loaded, you may not have the ability to disable the invalid root CA certificate on the load balancer. You may simply need to remove the invalid chain, re-export the correct chain from your Exchange server, and reinstall the valid root and intermediate CA certificates.
  • If you’re publishing through ISA/TMG, perform the same process on the ISA/TMG servers. You may also want to re-export the correct chain from your Exchange server onto your reverse proxy servers to ensure they have all the intermediate CA certificates loaded locally.

The general rule is that the outermost server device needs to have the valid, complete certificate chain loaded locally to ensure AIA does its job for the various client devices.

Let me know if this helps you out.


Autism Is Not The New Cool

Pardon, y’all. It’s been a while since I’ve been here <peers at the dust>. I’ve had the best of intentions, but sadly, my blogging client of choice (Windows Live Writer) doesn’t auto-translate those into actual written blog posts yet. Maybe in the next version. <sigh>

I can hear some of you (both of you still reading, thank you loyal fans) asking what finally brought me back, and I have to say it’s a rant. A rant about autism (and Asperger’s, and the rest of the spectrum), how it is perceived, and how trendy equals insensitive. You have been warned.

Hip To Be Square

After karate class tonight on the drive home, Steph was reading through Facebook (something I do but occasionally these days, having overdosed myself on social media some time ago) and came across the following comment on a mutual friend’s post:

[Image: Yes, that really does say that stupid thing]

For some reason, this really punched my buttons. I don’t know much about the person who posted it. I don’t know if they’re a fellow spectrum traveller or not. I don’t know how many close friends or family members they have who have autism. To a certain extent, it really doesn’t matter, because this comment is a textbook illustration of a fallacy that I’m seeing more and more:

If geeks are cool, and a lot of geeks are autistic, they must be cool because they are autistic.

This is a fallacy because it is the living embodiment of failure to grasp proper logic and set theory. This growing "Autism Is The New Cool" meme (AITNC for those of us who adore our acronyms), for lack of a better word, is reaching stupid proportions.

Venn We Dance

Now listen up, because if you’d paid attention in Algebra the first time, I wouldn’t have to be telling you this shit now.

What we are talking about here are properties that people have: the property of being cool, the property of being a geek, and the property of being on the autism spectrum. These are not variables that we can just slam together in a transitive[1] orgy of equation signs, as much as someone might like to be able to write on a whiteboard that A=B=C.

[Image: You get to stay after class and wipe down the whiteboard]

Instead, we need to head over to set theory, which is where we look at groupings (or "sets") of objects, where said sets are organized by a shared trait. Such as being a geek, or being cool, or being on the autism spectrum. We represent these sets by drawing circles. Then we can make useful and interesting (and occasionally even related to real life) observations by seeing where these sets overlap and what that tells us. This is a Venn diagram, and it helps us immediately destroy AITNC, because it reminds us that people (the members of the sets) are not single-value variables like A and B and C and the rest of their letter trash, but complex people who are not in any way entirely equal. This is my AITNC mega-buster Venn diagram, whipped up this evening when I had lots of better stuff to do, just for your edification:

[Image: Filling in the missing names is left as an exercise for the reader[2]]

Note that there are plenty of places where there is no overlap. Note that there are four separate regions where there is overlap. I can think of people who are examples of each of those areas, but I’m not enough of a dick to tell you who they are.

The Big Boy/Girl Panties Are Right Over There

I have, I shit you not, had parents ask me how to get their kid diagnosed with Asperger’s so they can "give him an extra educational advantage" (or some such nonsense). Yeah, I know. Fucked up, right?

I’m no child psychology professional, but I know spoiled, overly sugared kids when I see them. You want your kid to get an extra educational advantage? Don’t let the little bastards play video games and watch TV when they get home from school. Make them do homework and chores. Stop buying them everything they want and make them earn a meager amount of money and prioritize the things they really want over passing whims. Spend time with them and find out what they’re learning. Teach them about things you’re doing, which means you might want to put down the remote and pick up some more books or spend time outdoors or in your shop. Take the time to buy and prepare healthy food instead of boxed-up pre-digested pap. Teach them how to cook and clean, while you’re at it. Get involved with what they’re doing at school and be ruthlessly nosy about their grades and progress. Limit their after-school activities so they have time to study. Make and enforce a reasonable bedtime. In short, be a fucking parent. Stick with that for a year, and I guarantee your kids will have an educational advantage that you can’t believe.

NoYouCannotHaveAPony

Unless you want it in kebabs for dinner

Once you’ve done that for a few years and your kids have adjusted to having the meanest parents on the block like mine have, then you can worry about whether your precious little shit belongs on the autism spectrum, or has ADHD, or whatever other crutch diagnosis you think you need to compensate for being a mere gamete donor instead of a real parent.

People Are Strange (When You’re A Stranger)

I’m not going to sing a litany of woes about how tough it is being Asperger’s. I have fought most of my adult life to keep this thing from defining who I am. Devin != autism, not by a long shot. It’s one of a large number of properties about me, and it’s a mere footnote at that. I refuse to self-identify as an "Aspie" because I see that many of them (not all, but a significant fraction of them) use it as a Get Out Of Life Free card. "Oh, boohoo, I can’t make friends. Boohoo, I can’t have a relationship. Boohoo, my boss doesn’t understand me." I’ll grant it makes things difficult at times, but you know what? I look at so-called "neurotypical" people and they seem to have rough patches too. Life isn’t perfect for anyone. I don’t know how much harder my life is because of Asperger’s, and you don’t either. Anyone who claims to know is full of shit. At best, they’re making wild-ass guesses.

I choose not to play "what-if" games, because there is always something you think of after the fact. This wiring malfunction in my brain does not define or control me unless I choose to let it. The only reason its effects dominated my life through my early adulthood is that I didn’t know. Once I knew…well, I went all G. I. Joe[3] on its ass.

You know what really sucks? That my wife and kids have to be hyper-vigilant about what food they eat because their own immune systems are attacking their bodies. I can tell you exactly how much of a crimp that’s put into their enjoyment of life. One thoughtless dweeb in a restaurant kitchen who doesn’t properly wash bread crumbs off a counter, or clean off that dollop of butter on the knife, can make them miserable for a week. That’s a pretty raw deal, friends. Asperger’s has nothing on that. Try traveling or going out to a restaurant with friends. The number of things you can eat with one of the 8 major food allergies quickly limits your options. Enjoy two of them (like my family) and you can start counting your dining options on one hand.

So if you’re one of those assholes who thinks autism is cool or glamorous, get a life. Seriously. Be thankful for what you have. And recognize that people are cool not because of their afflictions but because they are cool people.

 

[1] You’ll probably have forgotten in five minutes, but transitive means if one thing is equal to a second thing, and a third thing is also equal to the second thing, then the first and third things are equal too. This only usually works in math and quantum mechanics, because how often are two things actually equal in the real world?

[2] Extra credit if you noticed that I really did match the color coding between the two diagrams. Without thinking.

[3] "Knowing is half the battle."


Stop SOPA/PIPA now

If you don’t know what SOPA and PIPA are by now…where have you been?

Here…watch this:

Now, go do something about it.


Exchange 2010 virtualization storage gotchas

There’s a lot of momentum for Exchange virtualization. At Trace3, we do a lot of work with VMware, so the majority of the customers I work with already have VMware deployed strategically into their production operation model. As a result, we see a lot of Exchange 2010 under VMware. With Exchange 2010 SP1 and lots of customer feedback, the Exchange product team has really stepped up to provide better support for virtual environments as well as more detailed guidance on planning for and deploying Exchange 2007 and 2010 in virtualization.

Last week, I was talking with a co-worker about Exchange’s design requirements in a virtual environment. I casually mentioned the “no file-level storage protocols” restriction for the underlying storage and suddenly, the conversation turned a bit more serious. Many people who deploy VMware create large data stores on their SAN and share them to the ESX cluster via the NFS protocol. There are a lot of advantages to doing it this way, and it’s a very flexible and relatively easy way to deploy VMs. However, it’s not supported for Exchange VMs.

The Heck You Say?

“But Devin,” I can hear some of you say, “what do you mean it’s not supported to run Exchange VMs on NFS-mounted data stores? I deploy all of my virtual machines using VMDKs on NFS-mounted data stores. I have my Exchange servers there. It all works.”

It probably does work. Whether or not it works, though, it’s not a supported configuration, and one thing Masters are trained to hate with a passion is letting people deploy Exchange in a way that gives them no safety net. It is an essential tool in your toolkit to have the benefit of Microsoft product support to walk you through the times when you get into a strange or deep problem.

Let’s take a look at Microsoft’s actual support statements. For Exchange 2010, Microsoft has the following to say in http://technet.microsoft.com/en-us/library/aa996719.aspx under virtualization (emphasis added):

The storage used by the Exchange guest machine for storage of Exchange data (for example, mailbox databases or Hub transport queues) can be virtual storage of a fixed size (for example, fixed virtual hard disks (VHDs) in a Hyper-V environment), SCSI pass-through storage, or Internet SCSI (iSCSI) storage. Pass-through storage is storage that’s configured at the host level and dedicated to one guest machine. All storage used by an Exchange guest machine for storage of Exchange data must be block-level storage because Exchange 2010 doesn’t support the use of network attached storage (NAS) volumes. Also, NAS storage that’s presented to the guest as block-level storage via the hypervisor isn’t supported.

Exchange 2007 has pretty much the same restrictions as shown in the http://technet.microsoft.com/en-us/library/bb738146(EXCHG.80).aspx TechNet topic. What about Exchange 2003? Well, that’s trickier; Exchange 2003 was never officially supported under any virtualization environment other than Microsoft Virtual Server 2005 R2.

The gist of the message is this: it is not supported by Microsoft for Exchange virtual machines to use disk volumes that are on file-level storage such as NFS or CIFS/SMB, if those disk volumes hold Exchange data. I realize this is a huge statement, so let me unpack this a bit. I’m going to assume a VMware environment here, but these statements are equally true for Hyper-V or any other hypervisor supported under the Microsoft SVVP.

While the rest of the discussion will focus on VMware and NFS, all of the points made are equally valid for SMB/CIFS and other virtualization systems. (From a performance standpoint, I would not personally want to use SMB for backing virtual data stores; NFS, in my experience, is much better optimized for the kind of large-scale operations that virtualization clusters require. I know Microsoft is making great strides in improving the performance of SMB, but I don’t know if it’s there yet.)

It’s Just Microsoft, Right?

So is there any way to design around this? Could I, in theory, deploy Exchange this way and still get support from my virtualization vendor? A lot of people I talk to point to a whitepaper that VMware published in 2009 that showed the relative performance of Exchange 2007 over iSCSI, FC, and NFS. They use this paper as “proof” that Exchange over NFS is supported.

Not so much, at least not with VMware. The original restriction may come from the Exchange product group (other Microsoft workloads are supported in this configuration), but the other vendors certainly know the limitation and honor it in their guidance. Look at VMware’s Exchange 2010 best practices at http://www.vmware.com/files/pdf/Exchange_2010_on_VMware_-_Best_Practices_Guide.pdf on page 13:

It is important to note that there are several different shared-storage options available to ESX (iSCSI, Fibre Channel, NAS, etc.); however, Microsoft does not currently support NFS for the Mailbox Server role (clustered or standalone). For Mailbox servers that belong to a Database Availability Group, only Fibre Channel is currently supported; iSCSI can be used for standalone mailbox servers. To see the most recent list of compatibilities please consult the latest VMware Compatibility Guides.

According to this document, VMware is even slightly more restrictive! If you’re going to use RDMs (this section is talking about RDMs, so don’t take the iSCSI/FC statement as a limit on guest-level volume mounts), VMware is saying that you can’t use iSCSI RDMs, only FC RDMs.

Now, I believe – and there is good evidence to support me – that this guidance as written is actually slightly wrong:

  • The HT queue database is also an ESE database and is subject to the same limitations; this is pretty clear on a thorough read-through of the Exchange 2010 requirements in TechNet. Many people leave the HT queue database on the same volume they install Exchange to, which means that volume also cannot be presented via NFS. If you follow best practices, you move this queue database to a separate volume (which should be an RDM or guest-mounted iSCSI/FC LUN). There’s a sketch of how to relocate it after this list.
  • NetApp, one of the big storage vendors that supports the NFS-mounted VMware data store configuration, only supports Exchange databases mounted via FC/iSCSI LUNs using SnapManager for Exchange (SME) as shown in NetApp TR-3845. Additionally, in the joint NetApp-VMware-Cisco performance whitepaper on virtualizing Microsoft workloads, the only configuration tested for Exchange 2010 is FC LUNs (TR-3785).
  • It is my understanding that the product group’s definition of Exchange files doesn’t just extend to ESE files and transaction logs, but to all of the Exchange binaries and associated files. I have not yet been able to find a published source to document this interpretation, but I am working on it.
  • I am not aware of any Microsoft-related restriction about iSCSI + DAG. This VMware Exchange 2010 best practices document (published in 2010) is the only source I’ve seen mention this restriction, and in fact, the latest VMware Microsoft clustering support matrix (published in June 2011) lists no such restriction. Microsoft’s guidelines seem to imply that block storage is block storage is block storage when it comes to “SCSI pass-through storage.” I have queries in to nail this one down because I’ve been asking in various communities for well over a year with no clear resolution other than, “That’s the way VMware is doing it.”
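Since the first bullet mentions relocating the Hub Transport queue database, it’s worth noting you don’t have to do that by hand: Exchange 2010 ships a Move-TransportDatabase.ps1 script in its Scripts folder for exactly this purpose. Treat the following as a hedged sketch only; the Q: drive and paths are hypothetical, and you should verify the parameter names against the copy of the script in your own build before running it.

    # Hedged sketch: move the Hub Transport queue database and logs to a dedicated block-storage volume.
    # Run from the Exchange Management Shell on the Hub Transport server; Q:\Queue is a placeholder path.
    Set-Location "$env:ExchangeInstallPath\Scripts"
    .\Move-TransportDatabase.ps1 -QueueDatabasePath 'Q:\Queue' -QueueDatabaseLoggingPath 'Q:\Queue'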

Okay, So Now What?

When I’m designing layouts for customers who are used to deploying Windows VMs via NFS-mounted VMDKs, I have a couple of options. My preferred option, if they’re also using RDMs, is to just have them provision one more RDM for the system drive and avoid NFS entirely for Exchange servers. That way, if my customer does have to call Microsoft support, we don’t have to worry about the issue at all.

However, that’s not always possible. My customer may have strict VM provisioning processes in place, have limited non-NFS storage to provision, or have some other reason why they need to use NFS-based VMDKs. In this case, I have found the following base layout to work well:

  • C: volume – VMDK or RDM. Can be on any type of supported data store. Should be sized to include a static page file of PhysicalRAM + 10 MB.
  • E: volume – RDM or guest iSCSI/FC. All Exchange binaries installed here. Move the IIS files here (there are scripts out on the Internet to do this for you). Create an E:\Exchdata directory and use NTFS mount points to mount each of the data volumes the guest will mount.
  • Data volumes – RDM or guest iSCSI/FC. Any volume holding mailbox/PF database EDB or logs, or HT queue EDB or logs. Mount these separately; NTFS mount points are recommended. Format these NTFS volumes with a 64K block size, not the default.

Note that we have several implicit best practices in use here:

  • Static page file, properly sized for a 64-bit operating system with a large amount of physical RAM. Doing this ensures that you have enough virtual memory for the Exchange memory profile AND that you can write a kernel memory crash dump to disk in the event of a blue screen. (If the page file is not sized properly, or is not on C:, the full dump cannot be written to disk.)
  • Exchange binaries not installed on the system drive. This makes restores much easier. Since Exchange uses IIS heavily, I recommend moving the IIS data files (the inetpub and children folders) off of the system drive and onto the Exchange volume. This helps reduce the rate of change on the system drive and offers other benefits such as making it easier to properly configure anti-virus exclusions.
  • The use of NTFS mount points (which mount the volume to a directory) instead of separate drive letters. For large DAGs, you can easily have a large number of volumes per MB role, making the use of drive letters a limitation on scalability. NTFS mount points work just like Unix mount points and work terribly well – they’ve been supported since Exchange 2003 and recommended since the late Exchange 2003 era for larger clusters. In Exchange 2007 and 2010 continuous replication environments (CCR, SCR, DAG), all copies must have the same pathnames.
  • Using NTFS 64K block allocations for any volumes that hold ESE databases. While not technically necessary for log partitions, doing so does not hurt performance.
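To put the last two bullets into practice, here’s a minimal sketch using the Storage module cmdlets that ship with Windows Server 2012 and later; on Windows Server 2008 R2 you’d reach the same result with diskpart and mountvol. The disk number, label, and mount path are all hypothetical.

    # Hedged sketch: format a database LUN with 64K allocation units and mount it under E:\Exchdata
    # instead of burning a drive letter. Disk 4, the label, and the path are placeholders.
    New-Item -ItemType Directory -Path 'E:\Exchdata\MDB01' | Out-Null

    $part = Get-Disk -Number 4 |
        Initialize-Disk -PartitionStyle GPT -PassThru |
        New-Partition -UseMaximumSize

    $part | Format-Volume -FileSystem NTFS -AllocationUnitSize 65536 -NewFileSystemLabel 'MDB01' -Confirm:$false
    $part | Add-PartitionAccessPath -AccessPath 'E:\Exchdata\MDB01'

Remember that in a continuous replication environment every copy of a database has to live at the same path on every member, so script this once and reuse it on each server.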

So Why Is This Even A Problem?

This is the money question, isn’t it? Windows itself is supported under this configuration. Even SQL Server is. Why not Exchange?

At heart, it comes down to this: the Exchange ESE database engine is a very finely-tuned piece of software, honed for over 15 years. During that time, with only one exception (the Windows Storage Server 2003 Feature Pack 1, which allowed storage solutions running WSS 2003 + FP1 to host Exchange database files over NAS protocols), Exchange has never supported putting Exchange database files over file-level storage. I’m not enough of an expert on ESE to whip up a true detailed answer, but here is what I understand about it.

Unlike SQL Server, ESE is not a general purpose database engine. SQL is optimized to run relational databases of all types. The Exchange flavor of ESE is optimized for just one type of data: Exchange. As a result, ESE has far more intimate knowledge about the data than any SQL Server instance can. ESE provides a lot of performance boosts for I/O hungry Exchange databases and it can do so precisely because it can make certain assumptions. One of those assumptions is that it’s talking to block-level storage.

When a host process commits writes to storage, there’s a very real difference in the semantics of the write operation between block-level protocols and file-level protocols. Exchange, in particular, depends dramatically on precise control over block-level writes – which file protocols like NFS and SMB can mask. The cases under which this can cause data corruption for Exchange are admittedly corner cases, but they do exist and they can cause impressive damage.

Cleaning Up

What should we do about it if we have an Exchange deployment that is in violation of these support guidelines?

Ideally, we fix it. Microsoft’s support stance is very clear on this point, and in the unlikely event that data loss occurs in this configuration, Microsoft support is going to point at the virtualization/storage vendors and say, “Get them to fix it.” I am not personally aware of any cases of a configuration like this causing data loss or corruption, but I am not the Exchange Product Group – they get access to an amazing amount of data.

At the very least, you need to understand and document that you are in an unsupported configuration so that you can make appropriate plans to get into support as you roll out new servers or upgrade to future versions of Exchange. This is where getting a good Exchange consultant to do an Exchange health check can help you get what you need and provide the support you need with your management – we will document this in black and white and help provide the outside validation you might need to get things put right.

One request for the commenters: if all you’re going to do is say, “Well we run this way and have no problems,” don’t bother. I know and stipulate that there are many environments out there running in violation of this support boundary that have not (yet) run into issues. I’ve never said it won’t work. There are a lot of things we can do, but that doesn’t mean we should do them. At the same time, at the end of the day – if you know the issues and potential risks, you have to make the design decision that’s right for your organization. Just make sure it’s an informed (and documented, and signed-off!) decision.


Getaway

We’ve lived in Monroe for over 13 years. In that time, we’ve not taken advantage of many of the opportunities available in this area to get out and see the amazing beauty of the Puget Sound region. Late last summer, we finally started correcting that with hikes and drives to various attractions. Stephanie and I are also closing in on our 15th anniversary, and it’s been a while since we’ve had a getaway for just the two of us that didn’t also serve some other purpose (such as her heading to Las Vegas with me for Exchange Connections); it was time to correct this. This weekend, I combined those two imperatives and planned a Friday night overnight to Whidbey Island, as a slightly-belated celebration of Steph’s birthday.

Whidbey Island

The first thing I did was do a little research to locate a candidate list of reasonable bed and breakfasts for us to stay at. Steph had never before been to one and, frankly, hotels are boring. Ideally, I wanted one that was based out of a Victorian house, since Steph loves them. Potential bed & breakfasts of course would have to be able to handle the no-dairy/no-gluten restrictions. I really wanted to find one on Whidbey Island, which is close off-shore in the Sound, separated from Fidalgo Island by Deception Pass.

Why did I want to go to Whidbey Island for our overnight?

  • Islands are picturesque as hell. On the right island, you’re always close to the water, which I love.
  • We’d only been there once previously, during a quick drive-around last September when we got our new car.
  • Whidbey is one of the bigger islands in the Sound. It hosts several towns and has a high enough population to still offer some great experiences even during the depths of the off-season.
  • You can drive to it (by the bridge to Fidalgo Island, then by the bridge over Deception Pass to the north end of the island) or take the Mukilteo-Clinton ferry at the south end of the island. Transportation flexibility in winter months is a good thing.
  • Our most direct route to Whidbey Island is the ferry route, which runs every half-hour and is a short 20 minute ride. This helps achieve Steph’s goal of riding every ferry route in the Sound at least once. It also indulges my love of being out on the water.

Once I had a couple of candidates and knew what their check-in times were, I could work backwards for travel times and ferry crossings and determine the window of time in which we’d need to leave. This gave me the all-important time: my cut-off for the work day. Armed with this time, I set up an out-of-office calendar appointment and clearly communicated with my co-workers and clients that I had a hard stop at 3pm. As I set up each part of the weekend reservations, I sent Steph appropriate meeting requests in our shared Outlook/Exchange calendar. This let her know what my plans were and gave her the links and information she’d need to poke around and do her own reading. It seemed to work, because I quickly got acceptance notices and by Wednesday, Steph was practically bouncing off the walls in anticipation!

Whidbey-Island-Map

Whidbey Island (from the Whidbey Island Visitors Guide website)

Once Friday came, Steph was obviously eager to be off on our adventure. I think she was packed to go by 10am. At any rate, I was promptly done with work by 3pm, took a few minutes to pack, and we were out the door by 3:45pm as I’d planned. We took a quick detour to run a necessary errand, then headed for the Mukilteo ferry terminal. We arrived in time to queue up and watch (but not participate in) the loading of the 5pm ferry crossing as the sun set; it would be our turn in 30 minutes. The ride across the Sound was quick and cold in the gloaming, and we made our way north up the island until arriving at Coupeville.

The Blue Goose Inn

After doing some homework and reading reviews, it became clear that my #1 choice was going to be The Blue Goose Inn in Coupeville, overlooking Penn Cove in central Whidbey Island. Proprietors Sue and Marty McDaniel offer a fantastic getaway experience out of two lovely restored Victorian historical homes, and during a good portion of the year also operate a pub on-premises (sadly, it was closed during our visit). When I called to inquire, Sue assured me that the dietary restrictions would be no problem. A few minutes later, I had chosen the Captain’s Suite because of the king-size bed, the soaking tub, and the view of Penn Cove; we had our reservation!

steph_and_the_blue_goose

Stephanie in front of The Blue Goose Inn in Coupeville, WA

Even though we arrived after sunset, Stephanie could see enough details that she was delighted by the choice. As we walked in the front door and were greeted by Sue and Marty, we immediately felt welcome. As I’d taken care of payment over the phone when I made the reservations, there was no paperwork to take care of; we chatted for a few minutes, they approved of my choice of venue for dinner, gave us our room key to the Captain’s Suite in the Coupe House, explained the accommodations that were available besides our room, and sent us on our way. We were not disappointed; the room was lovely and tastefully appointed with beautiful and functional furniture. In many older homes, drafts can be a problem, especially on a cold, windy night; this was not a problem here! The room was comfortable without being stuffy or unpleasant. We quickly unpacked, rested for a bit, and prepared for dinner.

Once we’d returned from a fabulous dinner, we again relaxed and settled in for the evening. Other than our Windows phones, we didn’t crack open any computers, so I don’t know how the complimentary Wi-Fi access was. We can both report, however, that the soaking tub was every bit as luxurious as it was claimed. The king bed (which towered off the ground) was one of the most comfortable beds I’ve ever slept in away from home.

Viva Whidbey Island

Looking NNW over Penn Cove from the Captain’s Suite

In the morning, we woke up, brewed tea (for her) and coffee (for me, which is not my normal morning habit), got ready, and packed. As promised, the view of Penn Cove was beautiful and not marred at all by the brief but vigorous attack of hail and rain we enjoyed. Just before 9am, we placed our bags in our car and headed back into the main house for breakfast, where Marty greeted us by name and showed us to our place in the dining room with the rest of the guests. What a treat!

  • Tea and coffee were on offer and Marty was quick to refill any cups that looked like they were thinking about becoming empty.
  • The first course was a green mango fool. Now, I’m not a mango person…or at least, I didn’t think I was. I had a tiny bite of this and it was quite simply divine. I would have promptly devoured my whole serving, but I wasn’t quite awake yet and the morning’s cup of coffee kept me from being hungry yet. Stephanie also got to enjoy this, minus the cream.
  • Our next course was buttermilk scones with currants. Again, these were very tasty, and again, my stomach wasn’t quite open for business yet. Sue made sure that Steph was supplied with gluten/dairy-free banana muffins, which Steph devoured.
  • The final course was a three-cheese omelet and a serving of oven-roasted seasoned Yukon Gold potatoes; Steph got scrambled eggs. Now, Steph’s not a scrambled egg person, but you’d never have known that — just as you’d never have known that I never eat breakfast potatoes unless they’re hash browns. The eggs came in separate porcelain oval bowls that kept them hot and tasty.

We lingered over our breakfast until the other guests left. At that point, we chatted a few minutes more with Marty (and said goodbye to Sue when she stuck her head out of the kitchen). After purchasing a Blue Goose Inn mug for me, we promised we’d be back during pub season, then hit the road back to the ferry terminal and points east. We had to head home, unpack, relax, and get the family ready for the afternoon’s plans: a visit to the Seattle Art Museum.

Christopher’s on Whidbey

Since Friday evening dinner wasn’t provided by The Blue Goose, this was the other major logistical challenge I faced in my planning. Dining is now much more exciting than it was back when I was the pickiest eater in the family, and it can be a significant source of stress for Steph. This was supposed to be a relaxing night away and I didn’t want her to have to worry about anything. Was I up to the task? As I said before, one of the reasons I chose Whidbey Island is that there are several towns on the island. Even if I couldn’t find anything near our lodgings, I was confident I’d be able to find a nice place for an intimate evening meal that could offer Steph not just one dinner option, but a choice of meals. They would also need to have food I’d eat — I still don’t like too much food with my food, if you know what I mean.

Coupeville turned out to be perfect because it’s also home to Christopher’s on Whidbey, a small and unassuming restaurant that boasts exquisite food and wine at amazingly affordable prices. During my planning, I’d called them up, explained our requirements, and in a few moments had an 8pm dinner reservation set up. They assured me that not only would Stephanie’s needs be taken care of, but that she would have a number of items to choose from. They asked me all the right questions to give me confidence that they actually did understand how to properly cook her meals without overlooking anything or putting her in danger of cross-contamination.

It was just a short drive from the Blue Goose to Christopher’s; if the weather had been better and we had still had light, we’d have walked the few blocks. When we arrived, the interior of the restaurant was well-lit, warm, and comfortably elegant without being pretentious or snobby. They greeted us by name, reassured me that they had Steph’s dietary restrictions on file, and showed us to a quiet table in the corner.

Pinot Blanc

Albrecht 2008 Pinot Blanc from Alsace, France

They had an interesting and eclectic wine selection, with offerings from a number of sources. Unlike many wine lists, they seemed to focus on offering affordable, enjoyable wines, mainly from local and regional wineries. Stephanie and I both favor white wines, and I noticed they offered a pinot blanc from Alsace. I’ve heard good things about Alsace wine but have never had it, so I enquired about it; apparently, this was a good thing, because this wine turned out to be a favorite of their wine expert. We ordered a bottle and found it to be delicate and satisfying both chilled and warm; it boasted a fantastic balance of dry vs. sweet with an unassuming and crisp fruity taste. A lot of whites taste like alcohol mixed with simple syrup; this one barely tasted like alcohol at all, and went well with both our dishes. While we waited for our entrees, Steph enjoyed a salad and I attacked a basket of bread with butter.

Stephanie chose the king salmon with raspberry barbeque sauce, served with greens and mixed vegetables. I went with something a little less adventuresome: linguine alfredo with chicken; in my defense, I don’t get cream-based sauces at home any more thanks to our Glorious New Dietary Regime. Our food was served rather quickly and was presented with a simple elegance that could easily have commanded double the price tag in another establishment. I’ll let Stephanie speak for her meal if she chooses, but I will note that she told me at least once that she could eat it every day and be happy. My linguine was simply fantastic; the pasta was perfectly al dente, the sauce was light and creamy and in perfect proportion to enhance the pasta without smothering it, and the chicken was tender and full of flavor. It was easily the best pasta I’ve had in my life, and the entire meal ranks in my top three dining experiences. The service, of course, was quick, cheerful, and unassuming. We will happily come back and acquaint ourselves with the rest of the menu.

Picasso at the Seattle Art Museum

Upon arriving back at home around 12:30pm on Saturday, we unpacked, grabbed an informal lunch with the family, and planned out the rest of the day. For Christmas, the kids had purchased Stephanie a family membership in the Seattle Art Museum, in part so we could all head to the Picasso exhibit they have running through January 17. We had our tickets to get into the Picasso exhibit for 5pm, and with the Seahawks kicking off in Seattle at 1:30pm, we decided to wait for traffic to die down and head into town later in the afternoon. That gave us time to locate several alternatives for dinner after we’d been to the museum.

Once we got into Seattle and were parked at the garage underneath the SAM — a much trickier proposition now that we have a Ford Freestyle — we went up to Member Services and got our temporary membership cards. At that point, we had about 75 minutes to fill before we could enter the Picasso exhibit. We therefore broke up into groups and wandered around the museum’s various levels. Much of what I saw made little impression on me; a few of the pieces provoked a strong response (usually strong incredulity). I very much enjoyed the European and Italian galleries; in particular, they had a recreation of an Italian room, full of dark carved wood, that I found particularly intriguing.

Soon enough, 5pm approached and we queued up to enter the Picasso exhibit. I’m afraid I’m the wrong person to comment on it — I find most of Picasso’s work to be unapproachable. I tended to concentrate, instead, on the other people viewing the exhibit. There were a lot of very serious people there who apparently found all sorts of serious things to ponder. They were no fun. I liked watching the people who were totally blown away by what they were seeing; even if I didn’t share their reaction, I couldn’t help but be happy they were having a great time. These people invariably talked about how the art made them feel; the former types tended to pontificate on how it should make others feel and think. That’s an interesting lesson, don’t you think?

Once we had our fill of Picasso — or at least of walking around on the hard floors and dueling our way through the maddening crowds — we headed down to the waterfront to the Old Spaghetti Factory. I hadn’t dined there in many years — not since Stephanie and I were first married and I was working down on Pier 70. I’d really enjoyed it then and was looking forward to introducing my kids, especially because they offered gluten-free/dairy-free options. Instead, Stephanie and I found it to be one of the most disappointing dining experiences we’ve ever had. Maybe we were spoiled by still being on a high from the previous evening’s dining, but the restaurant felt crowded and dark, our table was noisy and drafty, and our server, while personable enough, couldn’t hit the right balance between competence and comedy. I can make better pasta than the half-hearted attempt I received. The best thing we can say is that Mom enjoyed it, as did the kids, although even the kids said that Steph would have made a better meal.

Wrapping Up

So, now it’s time for me to get off the computer and go spend the rest of the day with my family. I think we’ve got a board game or two on deck, maybe a family movie. Or, I could always pull out the copy of Enchanter’s Endgame that we’ve slowly been working through and read another chapter out loud. At any rate, we’ll have a good evening and get prepared to throw ourselves back into school, work, and life come Monday morning.

2  

Solving The Problems You See

Somewhere along the way, I picked up an unusual philosophy: problems are meant to be solved by those who see them. Time after time, I have watched various friends and acquaintances become aware of a cause or injustice, get involved, and find that they had the right combination of talents and drive to become actively engaged in the solution in ways they never could have previously imagined. It’s the same phenomenon that can make churches and charitable organizations far more effective at solving particular problems than government programs could ever be. There’s something transformative about passion, more so when you’re directly involved in changing lives instead of working through some faceless proxy organization.

Right now, I’d like to introduce you to a friend of mine by the name of Chris. Chris and I became acquainted lo these many moons ago when I got involved in the community for the online PC game Starsiege: Tribes back at the end of the 90s. A week after we met, Chris was in a horrible motorcycle accident that changed his life forever. It’s a miracle he’s still alive. Stephanie and I have kept in touch with him over the years, and have had the privilege of having him fly out from Vermont for three extended visits with our family, including two memorable Christmas holidays. He’s been placed in our lives for a reason, and we’ve drawn him into our family-of-choice.

Chris at the Gangers for Christmas 2007

Chris’s medical condition is deteriorating; his doctor now estimates that he has approximately five years at the outside until he will need to live in assisted care. We were able to help him out a couple of years ago by putting him up on the awesome Select Comfort air bed that Steph had scrounged up for our guest room. The difference it made during his four-week visit that year was amazing — by the end of the visit, he was regularly going without an entire pain medication dose and was still more active and healthy than he’d been since the accident. His doctor worked all year to get the State of Vermont health services to purchase a Select Comfort bed for him — wrote the prescription, jumped through hoops to show how the cost of the bed would easily repay itself in the reduced medication costs, etc. — and some bureaucratic organization killed the whole idea. Why? Good question — we still don’t know. After a year of struggling, we sent the bed home with him after the next Christmas visit. (Screw you, nameless Vermont functionaries!)

We’ve been working on getting him moved from Vermont to Washington — specifically out to be near us — but it’s been an uphill battle. It has been extremely frustrating hearing him tell us over and over how he gets a good phone interview for a perfect part-time job but then once they meet him in person, game over. Now Chris has a plan. It may not be the best plan, but it’s better than what we’ve been able to come up with and we’re going to help.


Chris working on my Lego Star Destroyer

Those of you who read my blog, whether directly, through some feed, through Twitter, or through Facebook: I’m hoping that you might be able and willing to give some help as well. Please go read his site and background — we’re going to scrounge up the pictures we have of him and send them so he can include them in updates and allow folks to get to know him. If you can, donate. If you can, spread this further. We’d love to get Chris relocated this spring and summer once the weather turns good and get him out here where we can provide in-person assistance. It won’t take much: $1, $2, maybe even $5. Then pass the word on.

3  

Devin’s Load Balancer for Exchange 2010

Overview

One of the biggest differences I’m seeing when deploying Exchange 2010 compared to previous versions is that for just about all of my customers, load balancing is becoming a critical part of the process. In Exchange 2003 FE/BE, load balancing was a luxury unheard of for all but the largest organizations with the deepest pockets. Only a handful of outfits offered load balancing products, and they were expensive. For Exchange 2007 and the dedicated CAS role, it started becoming more common.

For Exchange 2003 and 2007, you could get all the same benefits of load balancing (as far as Exchange was concerned) by deploying an ISA server or ISA server cluster using Windows Network Load Balancing (WNLB). ISA included the concept of a “web farm” so it would round-robin incoming HTTP connections to your available FE servers (and Exchange 2007 CAS servers). Generally, your internal clients would directly talk to their mailbox servers, so this worked well. Hardware load balancers were typically used as a replacement for publishing with an ISA reverse proxy (and more rarely to load balance the ISA array instead of WNLB). Load balancers could perform SSL offloading, pre-authentication, and many of the same tasks people were formerly using ISA for. Some small shops deployed WNLB for Exchange 2003 FEs and Exchange 2007 CAS roles.

In Exchange 2010, everything changes. Outlook RPC connections now go to the CAS servers in the site, not the MB server that hosts the active copy of the database. Mailbox databases now have an affiliation with either a specific CAS server or a site-specific RPC client access array, which you can see in the RpcClientAccessServer property returned by the Get-MailboxDatabase cmdlet (and change with the corresponding parameter on Set-MailboxDatabase). If you have two or more servers, I recommend you set up the RPC client access array as part of the initial deployment and get some sort of load balancer in place.
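
For example, here’s how I check (and, if necessary, change) those affiliations in EMS. Treat this as a sketch; the database name and FQDN are placeholders:

# See which CAS server or array each mailbox database is affiliated with
Get-MailboxDatabase | Format-Table Name, RpcClientAccessServer -AutoSize

# Point a database at the site's RPC client access array (placeholder names)
Set-MailboxDatabase 'DB01' -RpcClientAccessServer 'outlook.contoso.com'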

Load Balancing Options

At Trace3, we’re an F5 reseller, and F5 is one of the few load balancer companies out there that has really made an effort to understand and optimize Exchange 2010 deployments. However, I’m not on the sales side; I have customers using a variety of load balancing solutions for their Exchange deployments. At the end of the day, we want the customer to do what’s right for them. For some customers, that’s an F5. Others require a different solution. In those cases, we have to get creative – sometimes they don’t have budget, sometimes the networking team has their own plans, and on some rare occasions, the plans we made going in turned out not to be a good fit after all and now we have to come up with something on the fly.

If you’re not in a position to use a high-end hardware load balancer like an F5 BIG-IP or a Cisco ACE solution, and can’t look at some of the lower-cost (and correspondingly lower-feature) solutions that are now on the market, there are a few alternatives:

  • WNLB. To be honest, I have attempted to use this in several environments now and even when I spent time going over the pros and cons, it failed to meet expectations. If you’re virtualizing Exchange (like many of my customers) and are trying to avoid single points of failure, WNLB is so clearly not the way to go. I no longer recommend this to my customers.
  • DNS round robin. This method at least has the advantage of in theory driving traffic to all of the CAS instances. However, in practice it gets in the way of quickly resolving problems when they come up. It’s better than nothing, but not by much.
  • DAG cluster IP. Some clever people came up with this option for instances where you are deploying multi-role servers with MB+HT+CAS on all servers and configuring them in a DAG. DAG = cluster, these smart people think, and clusters have a cluster IP address. Why can’t we just use that as the IP address of the RPC client access array? Sure enough, this works, but it’s not tested or supported by Microsoft and it isn’t a perfect solution. It’s not load balancing at all; the server holding the cluster IP address gets all the CAS traffic. Server sizing is important!

The fact of the matter is, there are no great alternatives if you’re not going to use hardware load balancing. You’re going to have to compromise something.

Introducing Devin’s Load Balancer

For many of my customers, the situation ends up looking something like this:

  • The CAS/HT roles are co-located on one set of servers, while MB (and the DAG) is on another. This rules out the DAG cluster IP option.
  • They don’t want users to complain excessively when something goes wrong with one of the CAS/HT servers. This rules out DNS round robin.
  • They don’t have the budget for a hardware solution yet, or one is already in the works but not ready because of schedule. They need a temporary, low-impact solution. This effectively rules out WNLB.

I’ve come up with a quick and dirty fix I call Devin’s Load Balancer, or DLB for short. It looks like this:

  1. Pick one CAS server that can handle all the traffic for the site. This is our target server.
  2. Pick an IP address for the RPC client access array for the site. Create the DNS A record for the RPC client access array FQDN, pointing to the IP address.
  3. Create the RPC client access array in EMS, setting the name, FQDN, and site (see the sketch after this list).
  4. On the main network interface of the target server, add the IP address. If this IP address is on the same subnet as the main IP address, there is no need to create a secondary interface! Just add it as a secondary IP address/subnet mask.
  5. Make sure the appropriate mailbox databases are associated with the RPC client access array.
  6. Optionally, point the internal HTTP load balance array DNS A record to this IP address as well (or publish this IP address using ISA).
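
Here’s roughly what steps 3 through 5 look like in EMS and netsh (step 2, the DNS A record, is up to your DNS admin). This is only a sketch; the array name, FQDN, site, interface name, server name, and IP address are all placeholders:

# Step 3: create the RPC client access array object
New-ClientAccessArray -Name 'CASArray-Site1' -Fqdn 'outlook.contoso.com' -Site 'Site1'

# Step 4: add the array's IP address as a secondary address on the target CAS server
netsh interface ipv4 add address "Local Area Connection" 10.1.1.50 255.255.255.0

# Step 5: associate the site's mailbox databases with the array
Get-MailboxDatabase -Server MBX01 | Set-MailboxDatabase -RpcClientAccessServer 'outlook.contoso.com'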

You may have noticed that this sends all traffic to the target server; it doesn’t really load balance. DLB also stands for Doesn’t Load Balance!

This configuration, despite its flaws, gives me what I believe are several important benefits:

  • It’s extremely easy to switch over or fail over. If something happens to my target server, I simply add the RPC client access array IP address as a secondary IP address to my next CAS instance. There are no DNS cache entries to wait on, no switch configurations to modify, and no DNS records I have to update. If this is a planned switchover, clients get disrupted but can immediately reconnect. I can make the update as soon as I get warning that something happened, and my clients can reconnect without any further action on their part.
  • It isolates what I do with the other CAS instances. Windows and Exchange no longer have any clue they’re in a load balanced pseudo-configuration. With WNLB, if I make any changes to the LB cluster (like add or remove a member), all connections to the cluster IP addresses are dropped!
  • It makes it very easy to upgrade to a true load balancing solution. I set the true solution up in parallel with an alternate, temporary IP address. I use local HOSTS file entries on my test machines while I’m getting everything tested and validated. And then I simply take the RPC client access array IP address off the target server and put it on the load balancer. Existing connections are dropped, but new ones immediately connect with no timeouts – and now we’re really load balancing.

Note that you do not need the CAS SSL certificate to contain the FQDN of the RPC client access array as a SAN entry. RPC doesn’t use SSL for encryption (it’s not based on HTTP).

Even in a deployment where the customer is putting all roles into a single-server configuration, if there’s any thought at all that they might want to expand to an HA configuration in the future, I’m now in the habit of configuring this. The RPC client access array is configured and somewhat isolated from the CAS configuration, so future upgrades are easier and less disruptive.

13  

Review: Cooking for Geeks (O’Reilly)

Edit 1/1/2013: (Belatedly) updated the author’s website per his request.

Writing books is a ton of work. Making them appealing is even more so, especially when your audience is geeks. You have to know your stuff, you have to present it well, and it doesn’t hurt if you can make it entertaining. In the technical field, I think O’Reilly is the one publisher that hits this bar more consistently than any other. Getting to co-write my first book for them was a great experience; if they ever came asking me to work on another book for them, I would seriously think about it (more importantly, my wife wouldn’t automatically say no).

Back at the end of August, I had the opportunity, thanks to the @OReillyMedia twitter feed, to get my hands on a review copy of Cooking for Geeks (CfG) in e-book format. As part of the review agreement, I was supposed to:

  • Select a recipe from the book,
  • Prepare it,
  • Photograph it,
  • Write a review and post it,
  • Post the photograph on the O’Reilly Facebook page,
  • and all by September 6th.

Oops. Obviously, I’ve missed the precise timing here, but a bit belatedly, here’s the review I owe.

Why this cooking book?

There’s a lot of information on cooking out there. Stephanie has a metric ton of cookbooks and collected recipes in our house, and there are large chunks of old-growth forest bound up in the cookbooks you can find in any bookstore. Thanks to the celebrity chef craze on TV, cooking (never an unpopular subject) has grown leaps and bounds beyond the good old Betty Crocker Cookbook that many of us grew up with[1]. Popular TV chefs now write and sell cookbooks on just about any specialty and niche you can imagine. I’ve even indulged in the recipe fetish myself once or twice, most notably to snag and perfect my favorite dish, the Cheesecake Factory’s Spicy Cashew Chicken.

What caught my attention (other than this being an O’Reilly book) about CfG was that my household has been slowly and steadily moving into the exciting world of food allergens. We recently flung ourselves off the cliffs of insanity this summer when blood tests revealed that Steph and Treanna tested positive for gluten antibodies. Add that to the existing dairy-free regime, and it was clear that menu planning at Chez Ganger had just started a new, exciting, but potentially very limited and boring chapter.

We’ve got a lot of friends who are gluten-free, dairy-free, vegetarian, vegan, some other regime, or even combinations of the above, so Steph’s no stranger to the issues involved. What is doable as an occasional thing, though, can become overwhelming when it’s a sudden lifestyle change that comes hard on the heels of a long, exhausting summer – just in time for the new school year. Understandably, Steph was struggling to cope – and we weren’t exactly the most helpful crew she could hope for.

After a few weeks of the same basic dishes being served over and over again, I was ready for any lifesaver that I could find. That’s when the fateful tweet caught my eye. After a few rounds of back and forth e-mail, I discovered that CfG included a chapter on cooking to accommodate allergens. The rest, as they say, is fate.

Torturing Chickens For Fun and Noms

Although I could go into great detail about the recipe my family ended up selecting – butterflied roasted chicken – my wife has already done so. Like a good writer, I will steal her efforts and link to her blog post instead. She even took pictures! Go, read, and salivate!

Back already?

Under the Cover

CfG is written by Jeff Potter, whose geek credentials appear to be genuine. The book has a fantastic companion site, which is essentially a link fest to the related blog and Twitter stream (as well as to the various places you can go on the Internet to purchase a copy of the book).

My lovely wife handled the “cooking” and “presentation” parts well, so I’m going to move on to our thoughts about the book itself:

  • Content. If you want a book that explores the science and the art behind cooking, this is your book. It’s not a college textbook; it’s a great middle school or high school-level overview of the science of cooking that seems more interested in sharing Jeff’s love of cooking with you than in creating cooking’s equivalent of the CCIE. Jeff writes with a very informal, personable voice and isn’t afraid to show off his mastery of the physics behind good and bad dishes, sharing them in a way that’s part Bill Nye the Science Guy and part Ferris Bueller. I have never before laughed while reading a book on cooking. However, if you’re expecting a cookbook, check your expectations at the door. If this book has a weakness, it’s that talking about all this food will make you want a lot of recipes to try out, and I was surprised by how relatively few recipes there actually are. What is there provides an interesting cross-section across different types of dishes and ingredients, but it’s not a comprehensive reference guide. This is not “Cooking in a Nutshell” or cooking’s Camel Book; it is instead a not-to-scale map of the CfG theme park. If you find something that entrances you, you should be able to walk away with enough exposure to be able to knowledgeably pick out some other more detailed work for a given area. CfG is the culinary equivalent of Jerome K. Jerome’s immortal Three Men in a Boat (To Say Nothing of the Dog); you’re going to get a fantastic lazy summer day punt trip down the river of Jeff’s epicurean experiences.
  • Format. We used the PDF format (like all of O’Reilly’s e-books, unencumbered by DRM). Steph already made a comment about how useful she found the e-book format. With a sturdy tablet, I think an e-book cookbook would be great in the kitchen, especially if there were some great application that could handle browsing and organizing recipes from multiple sources. As I already said, though, this book is not a cookbook and I’d probably just make a quick copy of (or retype) the recipes I was interested in so that I didn’t have to use the physical book in the kitchen. Having said that, though, we’re going to purchase a physical copy of the book to facilitate quick browsing. If you’ve already made the switch to casual e-reading (we have not yet), you probably won’t have this same issue.
  • Organization. Whether you like the book’s organization will depend on what you wanted out of it. If you wanted cooking’s Camel Book, you will find the book to be dismayingly unorganized. The structure of the book (and the recipes within) are based around the physics of cooking. Here, Jeff reveals himself to be a Lego Master of building blocks – you will find yourself introduced to one scientific concept after another, and each chapter will build on that knowledge by concentrating on a particular theme or technique rather than on a specific type of food or course. It really will help you to think of it as a novel (a romance, actually, between Jeff and food) and read the book from cover to cover rather than jump around in typical O’Reilly reference format. This is passion, not profession; calling, not career.
  • Utility. I’m pretty much a dunce when it comes to cooking, so I found this book to be extremely useful. I hate following the typical magical thinking approach to cooking: put ingredient A into slot B and pull on tab C for 30 minutes until you screw it all up because you didn’t know that your masterpiece was afraid of loud noises. I want to know why I’m putting nasty old cream of tartar into my mixing bowl; what purpose does it serve? How can I usefully strike out into the scary wilderness of trying to adapt existing favorite recipes to a gluten-free, dairy-free existence? CfG doesn’t answer all my questions, but it answers a hell of a lot more of them than any other cooking book I’ve picked up. It didn’t talk down to me, but it didn’t assume I was already a lifelong member of the Secret and Worshipful Order of Basters, Bakers, and Broilers. What it didn’t do, though, is give me a large number of variations on a theme to go and try. At times the recipe selection – while eclectic and representative – felt somewhat sparse and even unrelated to what was being talked about in the main text. It seemed like someone on the team had written a badly behaved random recipe widget[2] to insert a recipe every so often. I would love, in the second edition, to see a little bit more connection between the theory and the practice, even though I recognize this isn’t a textbook.

We found our payoff in the chapter on cooking around allergens. Of all the chapters, this is the one that most felt like a reference work — a concise but thorough reference work. Jeff explains why (for example) taking gluten out of a recipe and merely substituting some non-gluten flour is probably not going to produce edible results, and then explains some of the common approaches for dealing with the problem. He’s trusting us, the readers, to be able and willing to do some experimentation and find our own way without having a GPS to lead us by the nose. While it’s initially tempting to have the comfort of specific substitution steps, in the end, CfG will help you know how to make substitutions on your own and quickly dial in to an acceptable solution rather than sit around waiting for someone to write the HOWTO.

In the end, Jeff’s approach is empowerment. We liked it a lot; thank you, Jeff and O’Reilly!

[1] Not only did I grow up with one and spend a lot of time browsing it, Steph has one. I’ll have you know, however, that I’ve only flipped through it once for auld lang syne.

[2] Probably written in Ruby or PHP.

3  

Offered without comment or context

double rainbow cool

0  

Moving to Exchange Server 2010 Service Pack 1

Microsoft recently announced that Service Pack 1 (SP1) for Exchange Server 2010 had been released to web, prompting an immediate upgrade rush for all of us Exchange professionals. Most of us maintain at least one home/personal lab environment, the better to pre-break things before setting foot on a customer site. Before you go charging out to do this for production (especially if you’re one of my customers, or don’t want to run the risk of suddenly becoming one of my customers), take a few minutes to learn about some of the current issues with SP1.

Easy Installation and Upgrade Slipstreaming

One thing that I love about Exchange service packs is that from Exchange 2007 on, they’re full installations in their own right. Ready to deploy a brand new Exchange 2010 SP1 server? Just run setup from the SP1 binaries – no more fiddling around with the original binaries, then applying your service packs. Of course, the Update Rollups now take the place of that, but there’s a mechanism to slipstream them into the installer (and here is the Exchange 2007 version of this article).

Note: If you do make use of the slipstream capabilities, remember that Update Rollups are both version-dependent (tied to the particular RTM/SP release level) and are cumulative. SP1 UR4 is not the same thing as RTM UR4! However, RTM UR4 will include RTM UR3, RTM UR2, and RTM UR1…just as SP1 UR4 will contain SP1 UR3, SP1 UR2, and SP1 UR1.

The articles I linked to say not to slipstream the Update Rollups with a service pack, and I’ve heard some confusion about what this means. It’s simple: you can use the Updates folder mechanism to slipstream the Update Rollups when you are performing a clean install. You cannot use the slipstream mechanism when you are applying a service pack to an existing Exchange installation. In the latter situation, apply the service pack, then the latest Update Rollup.

It’s too early for any Update Rollups for Exchange 2010 SP1 to exist at the time of writing, but if there were (for the sake of illustration, let’s say that SP1 UR X just came out), consider these two scenarios:

  • You have an existing Exchange 2010 RTM UR4 environment and want to upgrade directly to SP1 UR X. You would do this in two steps on each machine: run the SP1 installer, then run the latest SP1 UR X installer.
  • You now want to add a new Exchange 2010 server into your environment and want it to be at the same patch level. You could perform the installation in a single step from the SP1 binaries by making sure the latest SP1 UR X installer was in the Updates folder.

If these scenarios seem overly complicated, just remember back to the Exchange 2003 days…and before.
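
To make the second scenario concrete, the install would look something like this (a sketch; the folder paths and rollup file name are invented for illustration):

# Drop the latest SP1 Update Rollup package into the Updates folder of the
# extracted SP1 binaries, then run a normal install from those binaries
Copy-Item 'C:\Downloads\Exchange2010-SP1-URx.msp' 'C:\Ex2010SP1\Updates\'
& 'C:\Ex2010SP1\setup.com' '/mode:Install' '/roles:ClientAccess,HubTransport,Mailbox'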

Third Party Applications

This might surprise you, but in all of the current Exchange 2010 projects I’m working on, I’ve not even raised the question of upgrading to SP1 yet. Why would I not do that? Simple – all of these environments have dependencies on third-party software that is not yet certified for Exchange 2010 SP1. In some cases, the software has only just been certified for Exchange 2010 RTM! If the customer brings it up, I always encourage them to start examining SP1 in the lab, but for most production environments, supportability is a key requirement.

Make sure you’re not going to break any applications you care about before you go applying service packs! Exchange service packs always make changes – some easy to see, some harder to spot. You may need to upgrade your third-party applications, or you may simply need to make configuration changes ahead of time – but if you blindly apply service packs, you’ll find these things out the hard way. If you have a critical issue or lack of functionality that Exchange 2010 SP1 will address, get it tested in your lab and make sure things will work.

Key applications I encourage my customers to test include:

Applications that use SMTP submission are typically pretty safe, and there are other applications that you might be okay living without if something does break. Figure out what you can live with, test them (or wait for certifications), and go from there.

Complications and Gotchas

Unfortunately, not every service pack goes smoothly. For Exchange 2010 SP1, one of the big gotchas that early adopters are giving strong feedback about is the number of hotfixes you must download and apply to Windows and the .NET Framework before applying SP1 (a variable number, depending on which base OS your Exchange 2010 server is running).

Having to install hotfixes wouldn’t be that bad if the installer told you, “Hey, click here and here and here to download and install the missing hotfixes.” Exchange has historically not done that (citing boundaries between Microsoft product groups) even though other Microsoft applications don’t seem to be quite as hobbled. However, this instance of (lack of) integration is particularly egregious because of two factors.

Factor #1: hotfix naming conventions. Back in the days of Windows 2000, you knew whether a hotfix was meant for your system, because whether you were running Workstation or Server, it was Windows 2000. Windows XP and Windows 2003 broke that naming link between desktop and server operating systems, often confusingly so once 64-bit versions of each were introduced (32-bit XP and 32-bit 2003 had their own patch versions, but 64-bit XP applied 64-bit 2003 hotfixes).

Then we got a few more twists to deal with. For example, did you know that Windows Vista and Windows Server 2008 are the same codebase under the hood? Or that Windows 7 and Windows Server 2008 R2, likewise, are BFFs? It’s true. However, the logic behind the naming of Windows Server 2003 R2 and Windows Server 2008 R2 was very different; Windows Server 2003 R2 was basically Windows Server 2003 with an SP and a few additional components, while Windows Server 2008 R2 has some substantially different code under the hood than Windows Server 2008 with SP. (I would guess that Windows Server 2008 R2 got the R2 moniker to capitalize on Windows 2008’s success, while Windows 7 got a new name to differentiate itself from the perceived train wreck that Vista had become, but that’s speculation on my part.)

At any rate, figuring out which hotfixes you need – and which versions of those hotfixes – is less than easy. Just remember that you’re always downloading the 64-bit patch, and that Windows 2008=Vista while Windows 2008 R2=Windows 7 and you should be fine.
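
If you keep your own checklist of required KB articles, a quick way to see which ones are still missing from a server is something like this (a sketch; the KB IDs shown are placeholders, not the real required list):

# Replace these placeholder IDs with the KB numbers from your own checklist
$required = 'KB0000001', 'KB0000002', 'KB0000003'
$installed = Get-HotFix | Select-Object -ExpandProperty HotFixID
$required | Where-Object { $installed -notcontains $_ }    # whatever prints is still missing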

Factor #2: hotfix release channels. None of these hotfixes show up under Windows Update. There’s no easy installer or tool to run that gets them for you. In fact, at least two of the hotfixes must be obtained directly from Microsoft Customer Support Services. All of these hotfixes include scary legal boilerplate about not being fully regression tested and thereby not supported unless you were directly told to install them by CSS. This has caused quite a bit of angst out in the Exchange community, enough so that various people are collecting the various hotfixes and making them available off their own websites in one easy package to download[1].

I know that these people mean well and are trying to save others from a frustrating experience, but in this case, the help offered is a bad idea. That same hotfix boilerplate means that everyone who downloads those hotfixes agrees not to redistribute them. There’s no exception for good intentions. If you think this is bogus, let me give you two things to think about:

  • You need to be able to verify that your hotfixes are legitimate and haven’t been tampered with. Do you really want to trust production mission-critical systems to hotfixes you scrounged from some random Exchange pro you only know through blog postings? Even if the pro is trustworthy, is their web site? Quite frankly, I trust Microsoft’s web security team to prevent, detect, and mitigate hotfix-affecting intrusions far more quickly and efficiently than some random Exchange professional’s web host. I’m not disparaging any of my colleagues out there, but let’s face it – we have a lot more things to stay focused on. Few of us (if any) have the time and resources the Microsoft security guys do.
  • Hotfixes in bundles grow stale. When you link to a KB article or Microsoft Download offering to get a hotfix, you’re getting the most recent version of that hotfix. Yes, hotfixes may be updated behind the scenes as issues are uncovered and testing results come in. In the case of the direct-from-CSS hotfixes, you can get them for free through a relatively simple process. As part of that process, Microsoft collects your contact info so they can alert you if any issues later come up with the hotfix that may affect you. Downloading a stale hotfix from a random bundle increases the chances of getting an old hotfix version that may cause issues in your environment, costing you a support incident. How many of these people are going to update their bundles as new hotfix versions become available? How quickly will they do it – and how will you know?

The Exchange product team has gotten an overwhelming amount of feedback on this issue, and they’ve responded on their blog. Not only do they give you a handy table rounding up links to get the hotfixes, they also collect a number of other potential gotchas and advice to learn from before beginning your SP1 deployment. Go check it out, then start deploying SP1 in your lab.

Good luck, and have fun! SP1 includes some killer new functionality, so take a look and enjoy!

[1] If you’re about to deploy a number of servers in a short period of time, of course you should cache these downloaded hotfixes for your team’s own use. Just make sure that you check back occasionally for updated versions of the hotfixes. The rule of thumb I’d use is about a week – if I’m hitting my own hotfix cache and it’s older than a week, it’s worth a couple of minutes to make sure it’s still current.

2  

Manually creating a DAG FSW for Exchange 2010

I just had a comment from Chris on my Busting the Exchange Trusted Subsystem Myth post that boiled down to this question: what do you do when you have to create the FSW for an Exchange 2010 DAG manually?

In order for this to be necessary, the following conditions have to be true:

  1. You have no other Exchange 2010 servers in the AD site. This implies that at least one of your DAG nodes is multi-role — remember that you need to have a CAS role and an HT role in the same site as your MB roles, preferably two or more of each for redundancy and load. If you have another Exchange 2010 server, then it’s already got the correct permissions — let Exchange manage the FSW automatically.
  2. If the site in question is part of a DAG that stretches sites, there are more DAG nodes in this site than in the second site. If you’re trying to place the FSW in the site with fewer members, you’re asking for trouble[1].
  3. You have no other Windows 2003 or 2008 servers in the site that you consider suitable for Exchange’s automatic FSW provisioning[2]. By this, I mean you’re not willing to add the Exchange Trusted Subsystem security group to the server’s local Administrators group so that Exchange can create, manage, and repair the FSW on its own. If your only other server in the site is a DC, I can understand not wanting to add the group to the Domain Admins group.

If that’s the case, and you’re dead set on doing it this way, you will have to manually create the FSW yourself. A FSW consists of two pieces: the directory, and the file share. The process for doing this is not documented anywhere on TechNet that I could find with a quick search, but happily, one Rune Bakkens blogs the following process:

To pre-create the FSW share you need the following:
- Create a folder, e.g. D:\FileWitness\DAGNAME
- Make the Exchange Trusted Subsystem group the owner of the folder
- Give the Exchange Trusted Subsystem group Full Control (NTFS)
- Share the folder as DAGNAME.FQDN (if you try a different share name, it won’t work; this is apparently required)
- Give the DAGNAME$ computer account Full Control (Share)

When you’ve done this, you can run: Set-DatabaseAvailabilityGroup -Identity DAGNAME -WitnessServer CLUSTERSERVER -WitnessDirectory D:\FileWitness\DAGNAME

You’ll get the following warning message:

WARNING: Specified witness server Cluster.fqdn is not an Exchange server, or part of the Exchange Servers security group.
WARNING: Insufficient permission to access file shares on witness server Cluster.fqdn. Until this problem is corrected, the database availability group may be more vulnerable to failures. You can use the set-databaseavailabilitygroup cmdlet to try the operation again. Error: Access is denied

This is expected, since the cmdlet tries to create the folder and share but doesn’t have the permissions to do so.

When this is done, the FSW should be configured correctly. To verify this, the following files should be created:

- VerifyShareWriteAccess
- Witness

Just for the record, I have not tested this process yet. However, I’ve had to do some FSW troubleshooting lately, and this matches what I’ve seen for naming conventions and permissions, so I’m fairly confident it should get you most of the way there. Thank you, Rune!
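
If you’d rather script Rune’s steps than click through them, the commands would look roughly like this. Like the process above, this is untested; the drive letter, domain (CONTOSO), and DAG name (DAG01) are placeholders:

# Create the witness directory and hand it over to Exchange Trusted Subsystem
New-Item -ItemType Directory -Path 'D:\FileWitness\DAG01'
icacls 'D:\FileWitness\DAG01' /setowner 'CONTOSO\Exchange Trusted Subsystem'
icacls 'D:\FileWitness\DAG01' /grant 'CONTOSO\Exchange Trusted Subsystem:(OI)(CI)F'

# Share it as DAGNAME.FQDN and give the DAG's computer account Full Control (Share)
net share 'DAG01.contoso.com=D:\FileWitness\DAG01' '/GRANT:CONTOSO\DAG01$,FULL'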

Don’t worry, I haven’t forgotten the next installment of my Exchange 2010 storage series. It’s coming, honest!

[1] Consider the following two-site DAG scenarios:

  • If there’s an odd number of MB nodes, Exchange won’t use the FSW.
  • An even number (n) of nodes in each site. The FSW is necessary for there to even be a quorum (you have 2n+1 votes, so a simple majority is n+1). If you lose the FSW plus enough nodes to drop below n+1 surviving votes, you’ll lose quorum. If you lose the link between sites, the site that can still reach the FSW keeps quorum and the other site loses it.
  • A number (n) of nodes in site A, with at least one fewer (m) in site B. If n+m is odd, you have an odd number of nodes — our first case. Even if m is only 1 fewer than n, putting the FSW in site B is meaningless — if you lose site A, B will never have quorum (in this case, m+1 = n, and n is only half — one less than quorum).

I am confident in this case that if I’ve stuffed up the math here, someone will come along to correct me. I’m pretty sure I’m right, though, and now I’ll have to write up another post to show why. Yay for you!

[2] You do have at least one other Windows server in that site, though, right — like your DC? Exchange doesn’t like not having a DC in the local site — and that DC should also be a GC.

7  

On Patriotism

Patriotism is being committed to making things better for those around me no matter how good I personally have it. No government, political system, or economic theory is perfect; there will always be people who fall through the cracks. As a patriot, I have a responsibility to identify those cracks and work to mitigate them. Dedication to capitalism or socialism should not deaden me to the suffering of those who are not as fortunate as I am. In helping my fellow Americans, I am strengthening my country.

Patriotism is holding my elected officials, their political appointees, and the news media accountable for the choices and actions they take in my name. As a patriot, I have a responsibility to ensure that my representatives are conducting the business of government according to the values and principles they represented during election time. I need accurate and timely information on their performance and actions. I need to understand the difference between news and entertainment and know when each is appropriate.

Patriotism is acknowledging my country’s flaws with integrity and honesty instead of trying to cover them up or excuse them. When my government and policies fail – and being human institutions, they will fail – I will be tempted to downplay or minimize the impact of these failures. Instead, I must face these failures and their consequences forthrightly, make every reasonable effort to keep them from occurring again, and encourage my fellow Americans to do the same.

Patriotism is respecting the offices and institutions of my government even when expressing my disagreement with its policies and actions. Whether I am Democrat, Independent, Libertarian, Republican, some other party, or a member of none, I choose to discuss government and politics with civility and grace. I do not have to vilify political opponents in order to successfully engage their ideas and point out the failures of their actions. I can condemn bad choices and actions without hatred or unnecessary anger towards those who make them.

Patriotism is placing untainted personal ethics and morality ahead of my politics. I will not spread racism, classism, sexism, or other institutionalized forms of hatred. I have a responsibility to ensure that the voice of every American can be heard and that America provides as level of a playing field as possible. I have a personal stake in making America an ideal of compassionate, reasoned behavior to Americans and to the people of the world. I understand that my country will not be truly great if her citizens are not also great.

Patriotism is patient and compassionate. It is not jealous or blind. It does not covet or boast. Patriotism builds up and exhorts. It does not destroy or belittle. It does not promote lies or avoid the truth. Patriotism does not demand perfection, but asks you to always give your best.

May we all strive to be better patriots.

1  

How To Develop Patience

“Lord, give me patience, and give it to me now!” I’m willing to bet most of us have heard that joke (or some variant) at some point in our lives, but it underscores a serious question: how does one go about learning to exercise patience?

I’m no guru or saint, so I can’t answer the question for you, but for me it turns out the answer comes from a combination of two life experiences: my six and a half years at 3Sharp, and the nearly two years I’ve been studying karate. At 3Sharp, I learned how to do a lot of things that were beyond my initial comfort zone: developing deep technical presentations (and delivering them to large audiences), scoping and producing large technical projects such as books and whitepapers, and doing a large variety of work from hands-on consulting to research projects.

I’ve talked in previous posts about the physical benefits I’ve seen from karate. However, two weeks ago I tested for my 5th kyu belt (the second of my three green belts) and that experience made me aware of some deep changes in my personality and character. The step from 6th kyu to 5th kyu was particularly hard for me, and it took some time to sort out the two reasons why.

The obvious cause was schedule. I took two months off of karate at the beginning of the year, due to a combination of factors. That’s a hard gap to come back from; I had problems after the three-week hiatus I took because of the MCM class. After two months, I just didn’t feel that my presence in class was doing any good. Then I had the privilege of watching two of my friends from the Mukilteo dojo earn their black belts one Saturday morning in February. I walked away from that experience feeling a new level of commitment to karate. After all, I told myself at that point, if I study hard, I’ll get to 5th kyu sooner or later, and that’s half-way to black belt!

The other cause was technical. The test kata for 5th kyu (Pinan Shodan) is the karateka’s first introduction[1] to a well-known and complicated set of katas, and while most of it seems to be straightforward, there’s a lot boiling up from underneath the surface. Carlos Sensei began introducing us to a series of drills based around Pinan Shodan that unpack a lot of useful theory and practice from the first eight moves of the kata. There’s this very difficult pivot/kick/double punch move right in there (I dub it UberHardMove) that is a key element of the kata, and I was having a hard time getting the pivot, kick, and punches all coordinated together and working the right way without falling on my ass. In fact, I had such a difficult time with it that I can remember sometime around the end of December thinking that maybe I’d found the wall beyond which my lack of coordination was not going to let me pass. In addition, there’s some pretty gnarly tuite that goes along with all of this and I found that I felt horribly weak on my tuite all around, let alone with the techniques I was supposed to be able to demonstrate some proficiency at.

What ended up happening, though, was that the two-month time-out did me unexpected good. I didn’t go to class during that period, but I kept practicing karate around the house. (Just ask Steph and the kids; they’ll tell you that it can be difficult to get me to knock it off and stop interfering with whatever they’re trying to do.) And what I did during that time was to take UberHardMove and break it down into components, the way I had previously been shown as a blue belt[2]. I combined that with specific suggestions given to me by both Carlos Sensei and Liam Sensei and picked UberHardMove down to bare bones.

When I finally came back to class, I came back finally believing that the whole concept of me one day earning my black belt wasn’t the world’s best joke. I came back believing that I’d already invested nearly two years and I was willing to invest even more. I didn’t have to be perfect; I gave myself permission to suck. I knew that I was going to make stupid mistakes that I wouldn’t make (like mixing up techniques in lower level katas) if I’d been in class the whole time. I knew that my endurance was going to be awful. I knew that there was a lot of rust to scrub off and deal with and that it wasn’t going to happen immediately. I knew that I needed to let my instructors know that I desperately needed help with my tuite techniques. I knew that I was going to have to have them explain the same things about UberHardMove multiple times until I finally grokked it. In short, I accepted failure without accepting being a failure.

That was March. I tested near the end of May. Somewhere in there, I became proficient with my tuite. I learned a measure of peace with UberHardMove; I’m still not great at it, but I mastered it enough to move on to the next lessons[3]. Perfection is, in fact, a bad word in our household. We think the concept of perfection is one of the worst lies that the Adversary ever got humans to accept.

When you stop trying to be perfect – when you give yourself permission to have flaws and failings and determine to be honest about them and learn from them rather than try to cover them up – something amazing tends to happen. You accept “doing your best” instead of “doing it better than everyone else.” You accept “that’s enough for now” instead of “that’s not good enough yet.” You develop a sense of faith that over time, your progress will trend upwards. With that faith, you can draw valuable lessons from your mistakes and missteps. You stop fighting the basic physical and neurological limits of how your body and mind acquire new proficiencies and start working within your limits to expand them instead of struggling against them to tear them down with brute force. You acquire patience – new and fledgling, but the seed of something that starts to affect how you deal with all of your life.

I’m no paragon of patience, but I can see clear changes. For example, I’ve been spending far less time playing Call of Duty on the Xbox in the last month or so. I have a better understanding of how that experience has been frustrating instead of fun and relaxing and I’m less willing to give in to that anymore.

I don’t know where this will go ultimately or at what pace. I can honestly say, though, that I’m okay with that. Will I get my black belt? I don’t know; there are many circumstances that could prevent or delay that. However, I certainly want to, and I finally know I’m capable of doing it, so I wouldn’t bet against me. But I also know that’s just another waypoint on the journey. It’s not an end. It’s a marker where I can say, “See what I’ve done so far? That’s pretty cool. Now I’ve learned enough that I can get serious about learning this stuff and helping pass it on to others.”

Two months ago, I’d have said I couldn’t wait for that day. You know what? That’s not true. I can wait. I will wait. And I will do so profitably.

[1] In our style, at least. There are other styles that place another Pinan kata before Pinan Shodan.

[2] In a nice twist of synchronicity, the person who showed me was, at the time, a helpful brown belt from Mukilteo who ended up being one of the two black belts I got to watch test. He has continued to be an amazing source of inspiration for me through what is now a large number of discouraging situations. Hi, Max!

[3] It’s not going away; I still practice it, and I know that it will get better as I learn more. In fact, those final four moves in Pinan Nidan where I’m in a cat stance might be helpful here, hmmm…

1  

The Disk’s The Thing! Exchange 2010 Storage Essays, part 2

Greetings, readers! When I first posted From Whence Redundancy? (part 1 of this series of essays on Exchange 2010 storage) I’d intended to follow up with other posts a bit faster than I have been. So much for intentions; let us carry on.

In part 1, I began the process of talking about how I think the new Exchange 2010 storage options will play out in live Exchange deployments over the next several years. The first essay in this series discussed what I believe is the fundamental question at the heart of an Exchange 2010 storage design: at what level will you ensure the redundancy of your Exchange mailbox databases? The traditional approach has used RAID at the disk level, but Exchange 2010 DAGs allow you to deploy mailbox databases in JBOD configurations. While I firmly believe that’s the central question, answering it requires us to dig under the hood of storage.

With Exchange 2010, Microsoft specifically designed Exchange mailbox servers to be capable of using the lowest common denominator of server storage: a directly attached storage (DAS) array of 7200 RPM SATA disks in a Just a Box of Disks (JBOD) configuration (what I call DJS). Understanding why they’ve made this shift requires us to understand more about disk drive technology. In this essay, part 2 of this series, let’s talk about disk technology and find out how Fibre Channel (FC), Serial Attached SCSI (SAS), and Serial Advanced Technology Attachment (SATA) disk drives are the same – and more importantly, what slight differences they have and what that means for your Exchange systems.

Exchange Storage SATA vs SAS

So here’s the first dirty little secret: for the most part, all disks are the same. Regardless of what type of bus they use, what form factor they are, what capacity they are, and what speed they rotate at, all modern disks use the same construction and principles:

  • They all have one or more thin rotating platters coated with magnetic media; the exact number varies by form factor and capacity. Platters look like mini CD-ROM disks, but unlike CDs, platters are typically double-sided. Platters have a rotational speed measured in revolutions per minute (RPMs).
  • Each side of a platter has an associated read-write head. All of the heads are mounted on a single actuator arm that moves them in toward the hub of the platter or out toward the rim. The heads do not touch the platter, but float very close to the surface. It takes a measurable fraction of a second for the heads to relocate from one position to another; this is called the seek time.
  • The circle described by the head’s position on the platter is called a track. In a multi-platter disk, the heads move in synchronization (there’s no independent tracking per platter or side). As a result, each head is on the same track at the same time, describing a cylinder.
  • Each drive unit has embedded electronics that implement the bus protocol, control the rotational speed of the platters, and translate I/O requests into the appropriate commands to the heads. Even though there are different flavors, they all perform the same basic functions.

If you would like a more in-depth primer on how disks work, I recommend starting with this article. I’ll wait for you.

Good? Great! So that’s how all drives are the same. It’s time to dig into the differences. They’re relatively small, but small differences have a way of piling up. Take a look at Table 1 which summarizes the differences between various FC, SATA, and SAS disks, compared with legacy PATA 133 (commonly but mistakenly referred to as IDE) and SCSI Ultra 320 disks:

Table 1: Disk parameter differences by disk bus type

Type | Max wire bandwidth (Mbit/s) | Max data transfer (MB/s)
PATA 133 | 1,064 | 133.5
SCSI Ultra 320 | 2,560 | 320
SATA-I | 1,500 | 150
SATA-II | 3,000 | 300
SATA 6 Gb/s | 6,000 | 600
SAS 150 | 1,500 | 150
SAS 300 | 3,000 | 300
FC (copper) | 4,000 | 400
FC (optic) | 10,520 | 2,000

As of this writing, the most common drive types you’ll see for servers are SATA-II, SAS 300, and FC over copper. Note that while SCSI Ultra 320 drives in theory have a maximum data transfer higher than either SATA-II or SAS 300, in reality that bandwidth is shared among all the devices connected to the SCSI bus; both SATA and SAS have a one-to-one connection between disk and controller, removing contention. Also remember that SATA is only a half-duplex protocol, while SAS is a full-duplex protocol. SAS and FC disks use the full SCSI command set to allow better performance when multiple I/O requests are queued for the drive, whereas SATA uses the ATA command set. Both SAS and SATA implement tagged queuing, although they use two different standards (each of which has its pros and cons).

The second big difference is the average access time of the drive, which is the sum of multiple factors:

  • The average seek time of the heads. The actuator motors that move the heads from track to track are largely the same from drive to drive and thus the time contributed to the drive’s average seek time by just the head movements is roughly the same from drive to drive. What varies is the length of the head move; is it moving to a neighboring track, or is it moving across the entire surface? We can average out small track changes with large track changes to come up with idealized numbers.
  • The average latency of the platter. How fast the platters are spinning determines how quickly a given sector containing the data to be read (or where new data will be written) will move into position under the head once it’s in the proper track. This is a simple calculation based on the RPM of the platter: on average, a given sector will move into position in no more than half a rotation, so the average rotational latency is 30,000 ms (half of the 60,000 ms in a minute) divided by the drive’s RPM (see the quick calculation after this list).
  • The overhead caused by the various electronics and queuing mechanisms of the drive electronics, including any power saving measures such as reducing the spin rate of the drive platters. Although electricity is pretty fast and on-board electronics are relatively small circuits, there may be other factors (depending on the drive type) that may introduce delays into the process of fulfilling the I/O request received from the host server.
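
Spelled out, that latency math looks like this (the RPM values match Table 2 below):

# Average rotational latency = half a rotation = (60,000 ms per minute / RPM) / 2 = 30,000 / RPM
foreach ($rpm in 7200, 10000, 12000, 15000) {
    '{0,6} RPM -> {1:N2} ms' -f $rpm, (30000 / $rpm)
}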

What has the biggest impact is how fast the platter is spinning, as shown in Table 2:

Table 2: Average latency caused by rotation speed

Platter RPM | Average latency (ms)
7,200 | 4.17
10,000 | 3
12,000 | 2.5
15,000 | 2

(As an exercise, do the same math on the disk speeds for the average laptop drives. This helps explain why laptop drives are so much slower than even low-end 7,200 RPM SATA desktop drives.)

Rather than painfully take you through the result of all of these tables and calculations step by step, I’m simply going to refer you to work that’s already been done. Once we know the various averages and performance metrics, we can figure out how many I/O operations per second (IOPS) a given drive can sustain on average, according to the type, RPMs, and nature of the I/O (sequential or random). Since Microsoft has already done that work for us as part of the Exchange 2010 Mailbox Role Calculator (version 6.3 as of this writing), I’m going to simply use the values from it. Let’s take a look at how all this plays out in Table 3 by selecting some representative values.

Table 3: Drive IOPS by type and RPM

Size | Type | RPM | Average Random IOPS
3.5” | SATA | 5,400 | 50
2.5” | SATA | 5,400 | 55
3.5” | SAS | 5,400 | 52.5
3.5” | SAS | 5,900 | 52.5
3.5” | SATA | 7,200 | 55
2.5” | SATA | 7,200 | 60
3.5” | SAS | 7,200 | 57.5
2.5” | SAS | 7,200 | 62.5
3.5” | FC/SCSI/SAS | 10,000 | 130
2.5” | SAS | 10,000 | 165
3.5” | FC/SCSI/SAS | 15,000 | 180
2.5” | SAS | 15,000 | 230

There are three things to note about Table 3.

  1. These numbers come from Microsoft’s Exchange 2010 Mailbox Role Calculator and are validated across vendors through extensive testing in an Exchange environment. While there may be minor variances between drive models and manufacturers, and while these numbers may seem pessimistic compared to the calculated IOPS figures published for individual drives, they are good figures to use in the real world. Using calculated IOPS numbers can lead both to a wide range of figures, depending on the specific drive model and manufacturer, and to overestimating the IOPS the drive will actually provide to Exchange.
  2. For the most part, SAS and FC are indistinguishable from an IOPS point of view. Regardless of the difference between the electrical interfaces, the drive mechanisms and I/O behaviors are comparable.
  3. Sequential IOPS are not listed; they will be quite a bit higher than the random IOPS (that same 7,200 RPM SATA drive can provide 300+ IOPS for sequential operations). The reason is simple: although a lot of Exchange 2010 I/O has been converted from random to sequential, there’s still some random I/O going on. That’s going to be the limiting factor.

The IOPS listed are per-drive IOPS. When you're measuring your drive system, remember that the various RAID configurations have their own IOPS overhead factor that will consume a certain number of those raw IOPS, especially for writes.

There are of course some other factors that we need to consider, such as form factor and storage capacity. We can address these according to some generalizations:

  • Since SAS and FC tend to have the same performance characteristics, the storage enclosure tends to be what determines which technology gets used. SAS enclosures can often take SATA drives as well, giving the operator more flexibility. SAN vendors are increasingly offering SAS/SATA disk shelves for their systems because paying the FC toll can be a deal-breaker for new storage systems.
  • SATA disks tend to have a larger storage capacity than SAS or FC disks. There are reasons for this, but the easiest one to understand is that SATA, being traditionally a consumer technology, has a lower duty cycle and therefore lower quality-control specifications that must be met.
  • SATA disks tend to be offered at lower RPMs than SAS and FC disks. Again, quality control plays a part here: the faster a platter spins, the more stringently the drive components need to meet their specifications, and keep meeting them over the life of the drive.
  • 2.5” drives tend to have lower capacity than their 3.5” counterparts. This makes sense – they have smaller platters (and may have fewer platters in the drive).
  • 2.5” drives tend to use less power and generate less heat than equivalent 3.5” drives. This too makes sense – the smaller platters have less mass, requiring less energy to sustain rotation.
  • 2.5” drives tend to permit a higher drive density in a given storage chassis while using only fractionally more power. Again, this makes sense based on the previous two points; I can physically fit more drives into a given space, sometimes dramatically so.

Let’s look at an example. A Supermicro SC826 chassis holds 12 3.5” drives with a minimum of 800W of power, while the equivalent Supermicro SC216 chassis holds 24 2.5” drives with a minimum of 900W in the same 2U of rack space. Doubling the number of drives makes up for the capacity difference between the 2.5” and 3.5” drives, provides twice as many spindles (and therefore greater aggregate IOPS for the array), and only requires 12.5% more power.

The careful reader will have noticed that I've had very little to say about capacity in this essay, other than the observations above that SATA drives tend to have larger capacities and that 3.5” drives tend to be larger than 2.5” drives. From what I've seen in the field, the majority of shops are only now starting to look at 2.5” drive shelves, so it's safe to assume 3.5” is the norm. As a result, the 3.5” 7,200 RPM SATA drive represents the lowest common denominator for server storage, and that's why the Exchange product team chose that drive as the performance bar for DJS configurations.

Exchange has been limited by performance (IOPS) requirements for most of its lifetime; by going after DJS, the product team has been able to take advantage of the fact that these big, cheap drives are the first to grow in capacity. This is why I think Microsoft is betting that you're going to want to simplify your deployment, aim for big, cheap, slow disks, and let Exchange DAGs do the work of replicating your data.

Now that we’ve talked about RAID vs. JBOD and SATA vs. SAS/FC, we’ll need to examine the final topic: SAN vs. DAS. Look for that discussion in Part 3, which will be forthcoming.


A Psalm for Karatekas

Last night I went to my first karate class in several weeks. On the way, my brain reinterpreted Psalm 23 from the viewpoint of a karateka. Enjoy.

1 The LORD is my sensei; I shall not fear.

2 He makes me work out with white belts; he leads me through katas.

3 He perfects my form. He leads me in the path of new techniques for the sake of advancement.

4 Though I walk through the valley of the shadow of death, I fear no evil, for his teachings are with me; my kama and bo staff comfort me.

5 He prepares testing for me in the presence of my fellow karateka; he adorns my waist with new obi, my gi fits better.

6 Surely discipline and health shall follow me all the days of my studies, and I will dwell in the dojo of the LORD forever.
