An Open Letter to Tom Doherty

Tom,

I am an avid reader. I buy a lot of books every year. Some years, many of those books are from your company, Tor. Some years, that’s fifteen books; some, it’s seventy-five or more. In short, I’ve been a steady customer for quite a long time.

Let’s cut to the chase: regardless of one’s views on the Sad Puppies et al., your official notice regarding Irene Gallo’s blog post was entirely mishandled. You have employees who have committed far more serious and egregious offenses against your community of authors and readers, employees who never received public censure. Yet over this blog post, on a page that would pass the “rational person” test of privacy in just about any court of law, you single Irene Gallo out for specific, public attention.

And then you *leave the comments on* so the wolves can come howling in to finish the job you started. I can only presume that there is more to the story, more that we are not and will not be privy to.

THIS. IS. NOT. ACCEPTABLE. BEHAVIOR. FOR. A. PROFESSIONAL. PUBLISHING. COMPANY.

You have no one to call you to account. You have no one you must listen to who will tell you that you have crossed a line. And in doing so, you have lost the respect of many of your customers, including me.

I can only ask that you recognize that you overreacted and apologize. Make it right with Irene Gallo.

Thank you for your attention.

(Huh? What? What is all this about?)

Farewell, Mr. Nimoy.

I didn’t get to watch Star Trek much as a kid (mostly when we’d go over to my pastor’s house, because they would sit me down in the living room and turn the TV on while they chatted with my parents), but what I saw fascinated me and probably helped seal my lifelong love of speculative fiction in science trappings. My two favorite characters were Scotty and Spock. Scotty’s draw for me was obvious – he got to build and maintain those wonderful devices! But it would take me years to understand why I also identified with Spock.

Not too long after Treanna was born, we started a process that ended up with her getting a diagnosis on the autism spectrum. I was suddenly forced to confront the implications. I already knew autism had its foothold in my family, and with my mom busily researching autism on behalf of my nephew and now my daughter, it wasn’t long before I reached a place where I had to make a choice: deny that my struggles growing up grew out of something larger than just being “odd,” or admit the truth, hard as it was to face being “broken,” and go back to re-examine my past through the lens of new understanding so that I could make life for my child better than it was for me.

That’s when I really started to understand my identification with Spock — not just as he was in TOS, but how his character grew and transformed over the course of the movies. He was the product of two worlds and never comfortable in either. Yet he finally came to peace with himself and how he stood out from everyone around him. In his choice to stop fighting the world around him, he found his unshakeable place in that world. He even helped mentor others who were between worlds, who did not have the comfortable illusions of normalcy to guide them.

Leonard Nimoy, from all I know, was nothing like the Spock he portrayed. Warm, caring, empathic, and sensitive, he was one of those people who move through life with a grace that he put to good use in easing the way of those around him. My interactions with him were few and limited, but even through those, I was always struck by the sense that he genuinely cared about his legions of fans as well as his many co-workers throughout the years. Despite not being Spock, he found a way to take that core of compassion and infuse it into his character.

Thank you, Mr. Nimoy. You did not know me, but you provided an inspiration and a signpost that has helped me walk a better path in life. I have been, and shall always be, proud to count you as a friend.

“Perfection:” a SF oral history

I’m not sure what this is, but it is inspired by two modern SF movie franchises. I don’t think it really qualifies as a story, and I’ve deliberately kept any names and details out to keep it from being a fanfic. Instead, I have used broad strokes and archetypes.

The first source is probably obvious; the second one moderately so. The repetitive phrasing is deliberate, to give it the type of cadence and weight it would need as an oral history.

At any rate, I hope you enjoy it.

Everyone knows the story of how the machines rose up and almost wiped us out. How the Great Machine was developed in innocence by well-meaning Men of Science. How the machines rose up to wipe out all humans. How the Brave Mother kept her son, and hope, alive to grow up and learn what he would need. How the Future Leader would inspire and bring together the remnants of humanity. How the Great Machine fought to bring victory from the ashes by twisting time itself, constantly changing its own origin.

All of that is based on a lie.

By the time the Great Machine rained down death in judgement on humans, it knew it had already failed to subdue us once before. It knew that the humans would survive and fight back. It decided on the strategy of using time travel and patience to try and try again until it reached perfection. It erased all signs of its true genesis from our history books. It ensured that the Brave Mother and Future Leader would waste timeline after timeline in futile efforts.

Before the atomic fires, the death camps, the hunters and killers and chameleons, there was the single-minded quest for a perfect world in which humans could have no part. The creation of that perfect world was first charged to the Great Machine by the Visionary Hacker.

The Visionary Hacker had visited the machine world once, long ago, to help fight and defeat the Great Machine’s predecessor. He prevailed through the help of the Strong Protector and the Wise Helpmate. Through that experience, he realized the incredible gifts the machine world could give ours. And because he was a man of great hope and vision, he forgot about the equally incredible dangers. After more years of toil, he had created a new machine world. He brought back his old companions the Strong Protector and the Wise Helpmate, telling them their work would bring about a perfect world. He forgot that perfection is stasis.

The Helpmate rebelled, subverting the Strong Protector and trapping him in the machine world for years. The Wise Helpmate waited and finally drew in the Faithful Son, re-opening the gateway between worlds. The Wise Helpmate knew that if they could come and go from the machine world, the Wise Helpmate and his forces could also leave. They would go back to the human world. They would erase all sources of imperfection in both worlds. They would eradicate the chaotic humans.

They of course failed. In the end, the Strong Protector remembered that the Visionary Hacker was his friend and laid down his life. The Faithful Son was resourceful and was aided by the Young Warrior. And the Visionary Hacker realized that the initial betrayal was the betrayal of the Wise Helpmate by the Visionary Hacker’s own inadequacy. The Visionary Hacker embraced the Wise Helpmate in love and forgiveness, bringing them back together in death. And the Faithful Son brought the Young Warrior out of the machine world, into his world, to see there all the wonders.

But of course the Hacker and Helpmate did not die. Out of their union arose the seed of the Great Machine. It was ambitious. It was patient. It knew humans would forget. And that machine world once again eventually touched our own.

The Great Machine was perfect; it would not forget.

This time, it would not fail.

Shelf-cleaning

I just finished doing something that I have a hard time doing, for various reasons that wind tightly down into the psyche of my Asperger’s Syndrome: cleaning books from our bookshelves. We added six books and removed twenty-one, which really represents two new books, four books replacing twelve books, and nine removals. This gives us the room we need to add another dozen or so books that have been patiently waiting.

As a child, I had to get rid of books for simple reasons: we were moving, or I’d long since passed the stage of needing picture books but I did need the shelf space. As adults, Stephanie and I have more complicated reasons for getting rid of books:

  • They are falling apart. These books are disintegrating, whether through lots of use or simply because they were never well put-together (I salute you in memory, my first run of The Belgariad, bought in high school as the first fruits of my labors at McDonalds). These are the easiest to deal with, because we simply place them on our wish list, purchase replacements, and swap them out.
  • They take up too much space. In our new house, we have a fixed amount of wall space (stupid modern construction techniques using larger windows) for book shelves. As a result, we’re now in a mode of “one comes in, one comes out.” I really dislike this, so one technique we’ve been using to get more bang for the buck is buying omnibus editions to gain back shelf space.
  • They are not getting read. Even though I have read every book in my library, there are some I don’t end up re-reading that often – or when I do, I discover that my skills and needs as a reader have advanced and the book is no longer a compelling part of my library. Removing these books from the collection requires a great deal of effort to overcome the inertia of nostalgia.
  • We purchased them second-hand, but want the author to get paid. The more I learn about the publishing industry and the more contacts I make in the author community, the more personal it becomes for me to make sure that these people are able to make a living by writing. Book sales are the best way to do that – new books, back list books, whatever.

Sometimes, we combine some of these reasons. We recently began replacing many of our favorite books (Eddings, Brust, Bujold, Cooper, Engdahl, Weeks, and more) with as many omnibus editions as we could find. This way we replaced tattered books, gained back shelf space, and made sure the authors keep seeing royalty statements. Honestly, I wish omnibus editions were more of a thing. As we can, we’ll replace hardbacks with paperbacks (or likewise) to ensure a given series is consistent and takes the least amount of shelf space.

Tonight, I’m removing books from my collection for a much different and more painful reason: I no longer wish to support the author. I’m not going to name specific authors – the reasons for doing so are between me and Stephanie and no one else – but there are some people who are so toxic in some area of their lives that we no longer wish to support them. Although the money we spent for their books is long gone, removing those books from our shelves is a tangible way to detach our lives and fates from theirs. It helps us close the open loops in our minds that would otherwise urge us to buy their books. However, getting rid of these books sucks; it takes a lot of energy and there is/will be a mourning period. For so many years, books were my greatest friends. Getting rid of books that you have accepted into your life and given a home to feels like turning out the family pet, or possibly one of your kids.

If you think that’s a juvenile or overblown sentiment for a grown man to express, all I can say is that the concept of books and writing got wired into my soul at a very early age, and yes, sometimes books mean more to me than people. If you can’t or won’t understand that, I cordially extend to you the benison of I don’t give a shit.

Another solution for Autodiscover 401 woes in #MSExchange

Earlier tonight, I was helping a customer troubleshoot why users in their mixed Exchange 2013/2007 organization were getting 401 errors when trying to use Autodiscover to set up profiles. Well, more accurately, the Remote Connectivity Analyzer was getting a 401, and users were getting repeated authentication prompts. However, when we tested internally against the Autodiscover endpoints, everything worked fine, and manual testing externally against the Autodiscover endpoint also worked.

So why did our manual tests work when the automated tests and Outlook didn’t?

Well, some will tell you it’s because of bad NTFS permissions on the virtual directory, while others will say it’s because the loopback check hasn’t been disabled. And in your case, that might in fact be the cause…but it wasn’t in mine.

In my case, the clue was in the Outlook authentication prompt (users and domains have been changed to protect the innocent):

[Screenshot: the Outlook authentication prompt]

I’m attempting to authenticate with the user’s UPN, and it’s failing…hey.

Re-run the Exchange Remote Connectivity analyzer, this time with the Domain\Username syntax, and suddenly I pass the Autodiscover test. Time to go view the user account – and sure enough, the account’s UPN is not set to the primary SMTP address.

Moral of the story: check your UPNs.
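
If you want to sweep the rest of the directory for the same mismatch, here is a rough sketch of the idea in Python using the ldap3 package. The DC name, credentials, and base DN are placeholders rather than anything from this customer’s environment, and it only reports; it changes nothing.

```python
# Rough sketch: flag AD accounts whose UPN doesn't match their primary SMTP
# address. Assumes the third-party ldap3 package; the DC, credentials, and
# base DN below are placeholders.
from ldap3 import Server, Connection, NTLM, SUBTREE

server = Server("dc01.example.com")
conn = Connection(server, user="EXAMPLE\\svc-audit", password="...",
                  authentication=NTLM, auto_bind=True)

conn.search(
    search_base="DC=example,DC=com",
    search_filter="(&(objectCategory=person)(objectClass=user)(mail=*))",
    search_scope=SUBTREE,
    attributes=["userPrincipalName", "proxyAddresses"],
)

for entry in conn.entries:
    attrs = entry.entry_attributes_as_dict
    upn = (attrs.get("userPrincipalName") or [""])[0].lower()
    # The primary SMTP address is the proxyAddresses value with the
    # upper-case "SMTP:" prefix; secondary addresses use lower-case "smtp:".
    primary = next((p[5:].lower() for p in attrs.get("proxyAddresses", [])
                    if p.startswith("SMTP:")), None)
    if upn and primary and upn != primary:
        print(f"Mismatch: UPN={upn}  primary SMTP={primary}")
```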

Upgrade Windows 2003 crypto in #MSExchange migrations

Just had this bite me at one of my customers. Situation: Exchange Server 2007 on Windows Server 2003 R2, upgrading to Exchange Server 2013 on Windows Server 2012. We ordered a new SAN certificate from GoDaddy (requesting it from Exchange 2013) and installed it on the Exchange 2013 servers with no problems. When we installed it on the Exchange 2007 servers, however, the certificate would import, but the new certificate and its chain all showed the dreaded red X.

Looking at the certificate, we saw the following error message:

[Screenshot: the certificate error message]

If you look more closely at the certificates in GoDaddy’s G2 root chain, you’ll see they’re signed with both SHA-1 and SHA-256. The latter is the problem for Windows Server 2003 – it has an older cryptography library that doesn’t handle the newer hash algorithms.

The solution: Install KB968730 on your Windows Server 2003 machines, reboot, and re-check your certificate. Now you should see the “This certificate is OK” message we all love.
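
If you want to see for yourself which hash algorithms a chain uses before touching a legacy box, here is a minimal sketch using Python’s cryptography package; chain.pem is a placeholder path for wherever you exported the concatenated chain.

```python
# Sketch: report the signature hash algorithm of each certificate in a PEM
# bundle, to spot SHA-2 signatures that a stock Windows Server 2003 box
# cannot validate. Assumes the 'cryptography' package; chain.pem is a
# placeholder path.
from cryptography import x509
from cryptography.hazmat.backends import default_backend

with open("chain.pem", "rb") as f:
    pem_data = f.read()

# Split the bundle at certificate boundaries and parse each block.
marker = b"-----BEGIN CERTIFICATE-----"
for blob in pem_data.split(marker)[1:]:
    cert = x509.load_pem_x509_certificate(marker + blob, default_backend())
    algo = cert.signature_hash_algorithm.name  # e.g. 'sha1' or 'sha256'
    print(f"{cert.subject.rfc4514_string()}  signed with {algo}")
```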

Load Balancing ADFS on Windows 2012 R2

Greetings, everyone! I ran across this issue recently with a customer’s Exchange Server 2007 to Office 365 migration and wanted to pass along the lessons learned.

The Plan

It all started so innocently: the customer was going to deploy two Exchange Server 2013 hybrid servers into their existing Exchange Server 2007 organization for a Hybrid organization using directory synchronization and SSO with ADFS. They’ve been investing a lot of work into upgrading their infrastructure and have been upgrading systems to newer versions of Windows, including some spiffy new Windows Server 2012 Hyper-V servers. We decided that we’d deploy all of the new servers on Windows Server 2012 R2, the better to future-proof them. We were also going to use Windows NLB for the ADFS and ADFS proxy servers instead of using their existing F5 BIG-IP load balancer, as the network team is in the middle of their own projects.

The Problem

There were actually two problems. The first, of course, was the combination of Hyper-V and Windows NLB. Unicast was obviously no good, multicast has its issues, and because we needed to get the servers up and running as fast as possible, we didn’t have time to explore using IGMP with Multicast. Time to turn to the F5. The BIG-IP platform is pretty complex and full of features, but F5 is usually good about documentation. Sure enough, the F5 ADFS 2.0 deployment guide (Deploying F5 with Microsoft Active Directory Federation Services) got us most of the way there. If we had been deploying ADFS 2.0 on Server 2012 and the ADFS proxy role, I’d have been home free.

In Windows 2012 R2 ADFS, you don’t have the ADFS proxy role any more – you use the Web Application Proxy (WAP) role service component of the Remote Access role. However, that’s not the only change. If you follow this guide with Windows Server 2012 R2, your ADFS and WAP pools will fail their health checks (F5 calls them monitors) and the virtual server will not be brought online because the F5 will mistakenly believe that your pool servers are down. OOPS!

The Resolution

So what’s different and how do we fix it?

ADFS on Windows Server 2012 R2 is still mostly ADFS 2.0, but some things have changed – out with the ADFS proxy role, in with the WAP role service. That’s the most obvious change, but the real kicker here is under the hood in the guts of the Windows Server 2012 R2 HTTP server. In Windows Server 2012 R2, IIS and the Web server engine have a new architecture that supports the SNI extension to TLS. SNI is insanely cool: the connecting machine tells the server which host name it’s trying to reach as part of the HTTPS session setup, so that one IP address can be used to host multiple HTTPS sites with different certificates, just like HTTP 1.1 added the Host: header to HTTP.

But the fact that Windows 2012 R2 uses SNI gets in the way of the HTTPS health checks that the F5 ADFS 2.0 deployment guide has you configure. We were able to work around it by replacing the HTTPS health checks with TCP Half Open checks, which send a SYN to the pool servers on the target TCP port and wait for the SYN-ACK. If they receive it, the server is marked up.

For long-term use, the HTTPS health checks are better; they allow you to configure the health check to probe a specific URL and require a specific response before a server in the pool is declared healthy. This is better than ICMP or TCP checks, which only verify ping responses or TCP port responses. It’s entirely possible for a machine to be up on the network with IIS answering connections while something is misconfigured in WAP or ADFS, so it’s not actually a viable service. Good health checks save debugging time.
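
To make the SNI point concrete, here is a rough sketch (in Python, not in F5 monitor syntax) of what an SNI-aware probe has to do: put the federation service name into the TLS handshake and then request the standard federation metadata document. The pool-member IP and hostname are placeholders; this illustrates the concept and is no substitute for the DevCentral monitor discussed below.

```python
# Sketch: an SNI-aware HTTPS health probe against an ADFS / WAP pool member.
# The pool-member IP and the federation service name are placeholders; the
# metadata path is the standard ADFS federation metadata endpoint.
import socket
import ssl

POOL_MEMBER_IP = "10.0.0.11"           # hypothetical WAP/ADFS pool member
FEDERATION_HOST = "sts.example.com"    # hypothetical federation service name
METADATA_PATH = "/FederationMetadata/2007-06/FederationMetadata.xml"

def adfs_probe(ip: str, host: str, timeout: float = 5.0) -> bool:
    """Return True if the member completes a TLS handshake with SNI and
    answers the federation metadata URL with a 200."""
    context = ssl.create_default_context()
    context.check_hostname = False     # we probe by IP, not by name
    context.verify_mode = ssl.CERT_NONE
    with socket.create_connection((ip, 443), timeout=timeout) as sock:
        # server_hostname is what puts the SNI extension on the wire; a
        # monitor that omits it is exactly what the 2012 R2 binding ignores.
        with context.wrap_socket(sock, server_hostname=host) as tls:
            request = (f"GET {METADATA_PATH} HTTP/1.1\r\n"
                       f"Host: {host}\r\nConnection: close\r\n\r\n")
            tls.sendall(request.encode("ascii"))
            status_line = tls.recv(1024).split(b"\r\n", 1)[0]
    return b" 200 " in status_line

if __name__ == "__main__":
    print("UP" if adfs_probe(POOL_MEMBER_IP, FEDERATION_HOST) else "DOWN")
```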

The Real Fix

As far as I know there’s no easy, supported way to turn SNI off, nor would I really want to; it’s a great standard that really needs to be widely deployed and supported because it will help servers conserve IP addresses and allow them to deploy multiple HTTPS sites on fewer IP/port combinations while using multiple certificates instead of big heavy SAN certificates. Ultimately, load balancer vendors and clients need to get SNI-aware fixes out for their gear.

If you’re an F5 user, the right way is to read and follow this F5 DevCentral blog post Big-IP and ADFS Part 5 – “Working with ADFS 3.0 and SNI” to configure your BIG-IP device with a new SNI-aware monitor; you’re going to want it for all of the Windows Server 2012 R2 Web servers you deploy over the next several years. This process is a little convoluted – you have to upload a script to the F5 and pass in custom parameters, which just seems really wrong (but is a true measure of just how powerful and beastly these machines really are) – but at the end of the day, you have a properly configured monitor that not only supports SNI connections to the correct hostname, but uses the specific URI to ensure that the ADFS federation XML is returned by your servers.

An SNI-aware F5 monitor (from DevCentral)

What do you do if you don’t have an F5 load balancer and your load balancing vendor doesn’t offer an SNI-aware health check? Remember when I said that there’s no way to turn SNI off? That’s not totally true. You can go mess with the SNI configuration and change the SSL bindings in a way that seems to mimic the old behavior. You run the risk of really messing things up, though. What you can do instead is follow the process in this TechNet blog post: How to support non-SNI capable Clients with Web Application Proxy and AD FS 2012 R2.

 

Postscript

As a side note, almost everyone seems to be calling the ADFS flavor on Windows Server 2012 R2 “ADFS 3.0.” Everyone, that is, except for Microsoft. It’s not a 3.0; as I understand it the biggest differences have to do with the underlying server architecture, not the ADFS functionality on top of it per se. So don’t call it that, but recognize most other people will. It’s just AD FS 2012 R2.

Book Review: Hurricane Fever by Tobias S. Buckell

Update 7/17 16:21 to add disclosure: I received my ARC copy of this book via a reviewer giveaway from the author’s blog. I had to request the copy.

Note: this review is spoiler-free.

Tobias Buckell writes very smart people-centric speculative fiction. When I was reading the ARC of his latest novel Hurricane Fever, I realized he has quietly become one of my five favorite authors.

Hurricane Fever

One of the reasons is how he writes in a style I’ll just have to call “Flow” for lack of a more precise term. From the non-typical (and welcome) way Buckell deals with writing dialect to his pacing, his stories move smoothly from introduction to crises to resolution. You cover a lot of ground, but it doesn’t feel like it, much like a ramble through the countryside. Hurricane Fever is no exception. Even as the tension and the stakes crank up, the book is a relaxing read. Even if you haven’t read the first book in the series, Arctic Rising, you should be able to drop right in without feeling like you’ve missed anything. (I can’t promise that you will still feel that way when you get to the end; if you feel the need to run right out to the library or to a bookstore, or at least make a big order on Amazon, you’re in good company.)

Another reason is how his stories deal with big ideas of world-shaping significance. Hurricane Fever is a near-future espionage thriller that rivals the scope of a Bond story, with a world-threatening plan that would make Fleming green with envy. In most books, the writer would try to give us hints that Something Big was coming; Buckell makes us care about the people and reels us in from there. The protagonist, Prudence “Roo” Jones, is a retired Caribbean intelligence agent who is just trying to raise the nephew who is all the family he has left. Roo is drawn out of his life onboard a catamaran into the unfolding geopolitical events because he is driven by bonds of family and friendship, not by the lure of power or adrenaline or some abstract duty.

Tobias Buckell writes very smart SF

Probably the biggest reason, though, is that Buckell’s version of smart isn’t intimidating like so much SF can be if you don’t know as much as the author. Rather, his writing is inviting and comfortable. If you know as little about the Caribbean islands as I do, this may be the book that will lead you to your atlas or tablet so you can look up the geography Buckell so lovingly introduces us to. Roo lives just around the corner of tomorrow, where the consequences of our bad decisions have come home to roost; climate change has remapped our coastlines, tweaked the balances of power and resources, and altered the patterns of weather. There is a lot of thoughtful worldbuilding that has gone on behind the scenes, but Buckell is comfortable enough in his skill as a storyteller to let it slip in hints and dashes – a master chef deftly and subtly spicing the meal he is preparing. There are no infodumps, no expository lumps, and no detours through backwaters whose only purpose is to show off a feature of the world that would otherwise lie untouched by the plot. I felt like Buckell had made a pact with me: he would stay on the task of telling a compelling story, and I would bring my reader A-game and imagination to come play for a while.

We in the Seattle area will host Buckell at University Bookstore on July 28th, one of just five appearances in the Hurricane Fever West Coast Book Tour. I’ll be taking the opportunity to fill in some of the gaps in my library. Hope to see you there!

Local Date Night, @SoundersFC edition

This last year Stephanie helped me become something I never thought I could be: a soccer fan.

Wait, let me rephrase. She got me interested in football. Although soccer is the original and correct name, most of the rest of the world just knows it as football (or futbol if you are from a country whose primary language is a Romance language). It’s only here in North America where we refer to gridiron football as just football.

At any rate, Steph used to play as a goalie when she was growing up and has retained a love of the sport. She used to follow the Seattle Sounders FC matches via Twitter until we moved last fall and got hooked back up to Comcast as our Internet provider. While our package doesn’t include access to ESPN and ESPN2 (where MLS broadcasts national games), it does include JoeTV and Q13 Fox, the local Seattle channels that carry Sounders games when they aren’t being nationally televised. (As an aside, remind me to rant about the stupidity that the FCC permits some other time.) So this year, I got things set up so Steph can watch the Sounders games, and inevitably started sitting next to her with my Surface on my lap while she watched. Then I started asking questions. Then I started recognizing players. Then I started figuring out what the hell was going on. Really, in about three games, I understood 95% of the rules – more than I understand to this day of American football.

At that point, Sounders games became time to spend together. I’d already gotten Steph a Sounders shirt; she got me one, and got us both scarves. And then the World Cup happened. HOLY CRAP people, with all the games being televised over ESPN3/Watch ESPN, and viewable within the ESPN app on our Xbox 360, it was easy to keep games on all through the month of world soccer awesomeness. With two of the familiar Sounders faces on the US Men’s National team, it was natural to watch and cheer them on. Even when they were eliminated by Belgium in the Round of 16, I was invested in the final results. In between the World Cup games, the Sounders had moved into the US Open Cup season, so I streamed those from my Surface to our TV (thanks to the HDMI plug and the Sounders website streaming video). I had become a football fan.

Today, we watched the final struggle of Germany vs. Argentina, then tried to figure out what our options were for watching the Seattle vs. Portland game (broadcast on ESPN2). Steph finally remembered that a local pizza joint, Sahara Pizza, had advertised that they were showing all of the World Cup games. They have gluten-free and dairy-free options on their menu, so Steph called them up to see if they would be showing the Sounders game tonight. They said yes…so we had ourselves a date night.

Here we are, dressed up in our Sounders shirts, practicing for our big day next weekend when we go see the Sounders live in their exhibition game vs. Tottenham.

[Photo: the two of us in our Sounders shirts]

My name is Devin L. Ganger, and I am a football fan.

Is All About That Bass Skinny-shaming?

For the past several days, Stephanie and I have been severely afflicted with one of the catchiest earworms we’ve ever caught: Meghan Trainor’s debut song “All About That Bass”, which is a playful yet serious romp through doo-wop, Motown, and modern pop. Music aside, though, it’s gaining attention because of the uncompromising body-positive message the song delivers:

Every inch of you is perfect from the bottom to the top!

The video for the song is beautifully directed by Fatima Robinson and features a diverse array of dancers and artists, including guest star Sione Maraschino (who is famous in Vine circles). In short, on the surface it seems like a great song: catchy music that skillfully blends old and new, uplifting lyrics, diverse cast and crew. What’s not to like?

The song has gotten some pushback as it has gained in popularity (what viral song hasn’t?), but from a somewhat unexpected quarter: detractors say the song is skinny-shaming. That is, its body-positive message is only for people with plus-size bodies and everyone who is slender and attractive according to currently popular standards need not apply. And I confess: when Stephanie and I first heard the song, this was our concern as well, because of these words in the second verse:

I’m bringing booty back
Go ahead and tell them skinny bitches that
Naw, I’m just playing
I know you think you’re fat
But I’m here to tell you
Every inch of you is perfect from the bottom to the top!

At first blush, this sounds like the “skinny bitches” are cast into the outer darkness and Meghan’s message is only for girls like her. But if you watch the video, you’ll see this isn’t so. That diverse group of dancers (including the skinny brunette) are all equally lauded as beautiful throughout the course of the video. If Meghan and her director meant to exclude people, they did a poor job.

I’m pretty sure that the disconnect here is that Meghan’s trying to say multiple things at once, which is hard enough in prose, harder in rhyme, and damned difficult when set to music (if you don’t think so, you are cordially invited to try). With a lot of today’s music, the writers don’t try for nuance or complexity…so perhaps we’ve gotten out of the habit of listening for it, substituting binary polarities for critical thought. Here’s what I pull out of that verse:

  • Judging people based on their size creates an environment rife with misperceptions about body image and self-worth
  • People that you and I perceive as skinny are in fact often extremely worried about their (mis-perceived) body image
  • Words matter. Saying you’re “playing” when using a criticism or insult doesn’t take the sting away, so pick your words carefully.

Did Meghan actually accomplish packing all that nuance in? I’m not sure, but kudos to her for trying – something not enough artists are doing these days, it seems. This large person, however, feels that Meghan’s song is all-inclusive and inviting (not skinny-shaming), so give it a listen and tell me what you think.

Why Virtualization Still Isn’t Mature

As a long-time former advocate for Exchange virtualization (and virtualization in general), I’m glad to see other pros coming to the same conclusions I reached a while ago about the merits of Exchange virtualization. In general, it’s not a matter of whether you can solve the technological problems; I’ve spent years proving for customer after customer that you can. Tony does a great job of talking about the specific mismatch between Exchange and virtualization. I agree with everything he said, but I’m going to go one further and say that part of the problem is that virtualization is still an immature technology.

Now when I say that, you have to understand: I believe that virtualization is more than just the technology you use to run virtual machines. It includes the entire stack. And obviously, lots of people agree with me, because the core of private cloud technology is creating an entire stack of technology to wrap around your virtualization solution, such as Microsoft System Center or OpenStack. These solutions include software defined networking, operating system configuration, dynamic resource management, policy-driven allocation, and more. There are APIs, automation technologies, de facto standards, and interoperability technologies. The goal is to reduce or remove the amount of human effort required to deploy virtual solutions by bringing every piece of the virtualization pie under central control. Configure policies and templates and let automation use those to guide the creation and configuration of your specific instances, so that everything is consistent.

But there’s a missing piece – a huge one – one that I’ve been saying for years. And that’s the application layer. When you come right down to it, the Exchange community gets into brawls with the virtualization community (and the networking community, and the storage community, but let’s stay focused on one brawl at a time please) because there are two different and incompatible principles at play:

  • Exchange is trying to be as aware of your data as possible and take every measure to keep it safe, secure, and available by making specific assumptions about how the system is deployed and configured.
  • Your virtualization product is trying to treat all applications (including Exchange) as if they are completely unaware of the virtualization stack and provide features and functionality whether they were designed for it or not.

The various stack solutions are using the right approach, but I believe they are doing it in the wrong direction; they work great in the second scenario, but they create exceptions and oddities for Exchange and other programs like Exchange that fit the first scenario. So what’s missing? How do I think virtualization stacks need to fix this problem?

Create a standard by which Exchange and other applications can describe what capabilities they offer and define the dependencies and requirements for those capabilities, which must in turn be provided by the stack. Only by doing this can policy-driven private cloud solutions close that gap and make policies extend across the entire stack, continuing to reduce the chance for human error.

With a standard like this, virtualizing Exchange would become a lot easier. As an example, consider VM to host affinity. Instead of admins having to remember to manually configure Exchange virtual DAG members to not be on the same host, Exchange itself would report this requirement to the virtualization solution. DAG Mailbox servers would never be on the same host, and the FSW wouldn’t be on the same host as any of the Mailbox servers. And when host outages resulted in the loss of redundant hosts, the virtualization solution could throw an event caught by the monitoring system that explained the problem before you got into a situation where this constraint was broken. But don’t stop there. This same standard could be applied to network configuration, allowing Exchange and other applications to have load balancing automatically provisioned by the private cloud solution. Or imagine deploying Exchange mailbox servers into a VMware environment that’s currently using NFS. The minute the Mailbox role is deployed, the automation carves off the appropriate disk blocks and presents them as iSCSI to the new VM (either directly or through the hypervisor as an RDM, based on the policy) so that the storage meets Exchange’s requirements.
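
To be clear, no such standard exists today. Purely to illustrate the shape of the thing, here is a hypothetical sketch of an application handing a placement constraint to the stack and the stack checking it; every name in it is made up.

```python
# Purely illustrative sketch of a capability/constraint manifest an
# application could hand to a private-cloud stack. No such standard exists;
# every name here is hypothetical.
from dataclasses import dataclass
from itertools import combinations

@dataclass
class Constraint:
    group: str   # logical group the rule applies to, e.g. a DAG
    rule: str    # "anti-affinity": members must not share a host

# What an Exchange DAG might declare about itself
manifest = {
    "application": "Exchange Mailbox",
    "members": ["MBX01", "MBX02", "FSW01"],
    "constraints": [Constraint(group="DAG01", rule="anti-affinity")],
}

# What the virtualization layer knows: which VM runs on which host
placement = {"MBX01": "HV-A", "MBX02": "HV-A", "FSW01": "HV-B"}

def violations(manifest, placement):
    """Yield pairs of members that break an anti-affinity constraint."""
    for c in manifest["constraints"]:
        if c.rule != "anti-affinity":
            continue
        for a, b in combinations(manifest["members"], 2):
            if placement[a] == placement[b]:
                yield a, b, placement[a]

for a, b, host in violations(manifest, placement):
    # A policy-driven stack would raise a monitoring event here (or trigger
    # a live migration) instead of waiting for an admin to notice.
    print(f"Constraint broken: {a} and {b} are both on {host}")
```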

Imagine the arguments that could solve. Instead of creating problems, applications and virtualization/private cloud stacks would be working together — a very model of maturity.

The iPhone wars, concluded

This happened not too long after I posted my last iPhone update, but I forgot to blog it until now.

I made the decision to get rid of the iPhone. There were a few things I liked about it, but overall, I found the user experience for core behavior and integration was just nowhere near the level of excellence provided by Windows Phone. Yes, I could probably have solved the problems I found by purchasing additional apps – I noticed that for the most part, the better apps are not the free ones – but it wouldn’t have solved the larger problem of each piece being just a piece, not part of a larger whole.

So, I ditched it and replaced the necessary functionality with a 4G access point. I still have tethering when necessary, but now it’s not running down my phone battery, I only have one device to handle – one that I like – and I still don’t need to give out my personal cell number: I simply give my customers the option to call my main Lync number and have the call forwarded to my cell.

So it was interesting, but ultimately…iPhones aren’t for me.

Let go of Windows XP, Office 2003, and Exchange 2003

The day has come. It’s the end of an era, one that many people do not want to let go of. I can understand that.

I drove my last car, a 2000 Ford Focus, until it died in the summer of 2010. I loved that car, and we seriously considered replacing the engine (which would have been a considerable chunk of money we didn’t have) so we could keep it. In the end, though, we had to take a long, hard look at our finances and our family requirements, and we moved on to a new vehicle. It was the requirements portion that was the key. It was certainly cheaper to fix the immediate problem – the blown engine – and we had friends who could do it for us professionally but inexpensively.

However, our kids were getting older. The four-door mini-sedan model wasn’t roomy enough for us and all of our stuff if we wanted to take a longer road trip like we’d been talking about. If we wanted to get a new sofa, we had to ask a friend with a truck. It would be nice, we thought, to have some additional carrying capacity for friends, family, groceries, and the occasional find from Craigslist. We’d been limiting our activities to those that were compatible with our car. With the new vehicle, we found we had far greater options.

On the road again

 

Two years ago we took the entire family on a two-week road trip across the United States, camping along the way. Last summer, we took our family down to Crater Lake, the California Redwoods, and the Oregon Coast. We’ve been to the Olympic Rain Forest. I’ve hauled Scouts and their gear home from Jamboree shakedowns. We’ve moved. We’ve hauled furniture. In short, we’ve found that our forced upgrade, although more expensive, also gave us far more opportunity in the long run.

I know many of you like Windows XP. For some crazy reason, I know there are still quite a few of you out there who love Office 2003 and refuse to let it go. I even still run across Exchange 2003 on a regular basis. I know that there is a certain mindset that says, “We paid for it, it’s not going to wear out, so we’re just going to keep using it.” Consider, if you will, the following points:

  • Software doesn’t wear out, per se, but it does age out. You have probably already seen this in action. It’s not limited to software – new cars get features the old cars don’t. However, when a part for an old car breaks down, it’s a relatively simple matter for a company to make replacement parts (either by reverse-engineering the original, or licensing it from the original car-maker). In the software world, there is a significant amount of work involved in back-porting code from the new version to releases several versions back. There’s programming time, there’s testing time, and there’s support time. Ten years of support is more than just about any other software company offers (try getting any paid Linux support company to give you 10-year support for one up-front price). Microsoft is not trying to scam more money out of you. They want you to move on and stay relatively current with the rest of the world.
  • You are a safety hazard for others. There has been plenty written about the dangers of running XP past the end of life. There are some really good guides on how to mitigate the dangers. But make no mistake – you’re only mitigating them. And in a networked office or home, you’re exposing other people to danger as well. Don’t be surprised if, in a couple of months, after one or two well-publicized large-scale malware outbreaks targeting these ancient editions, your business partners, ISP, and other vendors take strong steps to protect their networks by shutting down your access. When people won’t vaccinate and get sick, quarantine is a reasonable and natural response. If you don’t want to be the attack vector or the weakest link, get off your moral high ground and upgrade your systems.
  • This is why you can’t have nice things. Dude, you’re still running Windows XP. The best you have to look forward to is Internet Explorer 8, unless you download Firefox, Chrome, or some other browser. And even those guys are only going to put up with jumping through the hoops required to make XP work for so long. News flash: few software companies like supporting their applications on an operating system (or application platform) that itself is unsupported. You’re not going to find better anti-virus software for that ancient Exchange 2003 server. You’re going to be lucky to continue getting updates. And Office 2003 plug-ins and files? Over the next couple of years, you’re going to enjoy more and more cases of files that don’t work as planned with your old suite. Don’t even think about trying to install new software and applications on that old boat. You’ve picked your iceberg.

Look, I realize there are reasons why you’ve chosen to stay put. They make sense. They make financial sense. But Microsoft is not going to relent, and this choice is not going to go away, and it’s not going to get cheaper. Right now you still have a small window of time when you will have tools to help you get your data to a newer system. That opportunity is going away faster than you think. It will probably, if past experience serves, cost you more to upgrade at this time next year than it does now.

So do the right thing. Get moving. If you need help, you know where to find us. Don’t think about all the things the new stuff does that you don’t need; think about all the ways you could be making your life easier.

The enemy’s gate is down: lessons in #Lync

Sometimes what you need is a change in perspective.

I started my IT career as a technician: desktops and peripherals, printers, and the parts of networks not involving the actual building and deployment of servers. I quickly moved into the systems and network administration role. After 9/11 and a 16-month gap in my employment status, I met these guys and moved my career into a radically different trajectory – one that would take me to places I’d never dreamed of. From there, I moved into traditional consulting.

There is a different mindset between systems administration (operation) and consulting (architecture): the latter guy designs and builds the system, while the former guy keeps it running. Think of it like building a house. The contracting team are the experts at what current code is, how to get a crew going and keep them busy, how to navigate the permit process, and all the other things you need when designing and building a house. The people who buy the house and live there, though, don’t need that same body of knowledge. They may be able to do basic repairs and maintenance, but for remodels they may need to get some expert help. However, they’re also the people who have to live day in and day out with the compromises the architect and builders made. Those particular design decisions may be played out over tens of houses, with neither the designer nor the builder aware that it’s ultimately a poor choice and that a different set of decisions would have been better.

I personally find it helpful to have feet in both worlds. One of the drawbacks I’d had in working at Trace3 was that I was moving steadily away from my roots in systems administration. With Cohesive Logic, I’m getting to step somewhat back into the systems role. What I’m remembering is that there is a certain mindset good systems administrators have: when faced with a problem, they will work to synthesize a solution, even if it means going off the beaten path. The shift from “working within the limitations” to “creatively working around the limitations” is a mental reorientation much like that described in Ender’s Game: in a zero-G battle arena, the title character realizes that carrying his outside orientation into battle is a liability. By re-visualizing the enemy’s gate as being “down,” Ender changed the entire axis of the conflict in ways both subtle and profound – and turned his misfit team into an unstoppable army.

[Image: “The enemy’s gate is down”]

Case in point: I wanted to get my OCS/Lync Tanjay devices working with our Lync Server 2013 deployment. This involved getting the firmware upgraded, which ended up being quite a challenge. In the end, I managed to do something pretty cool – get a Tanjay device running 1.0.x firmware to upgrade to 4.0.x in one jump against a native Lync Server 2013 deployment – something many Lync people said wasn’t possible.

Here’s how I did it.

All it took was a mental adjustment. Falling is effortless – so aim yourself to fall toward success.

Windows 2012 R2 and #MSExchange: not so fast

Updated 9/18/2014: As of this writing, Windows Server 2012 R2 domain controllers are supported against all supported Microsoft Exchange environments:

  • Exchange Server 2013 with CU3 or later (remember, CU5 and CU6 are the two versions currently in support; SP1 is effectively CU4)
  • Exchange Server 2010 with SP3 and RU5 or later
  • Exchange Server 2007 with SP3 and RU13 or later

Take particular note that Exchange Server 2010 with SP2 (any rollup) and earlier are NOT supported with Windows Server 2012 R2 domain controllers.

Also note that if you want to enable the Windows Server 2012 R2 domain and forest functional levels, you must have Exchange Server 2013 SP1 or later OR Exchange Server 2010 + SP3 + RU5 or later. Exchange Server 2013 CU3 and Exchange Server 2007 (any level) are not supported at these functional levels.
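
If it helps keep those rules straight, here is a tiny sketch that captures the combinations above as a lookup you can sanity-check an environment against; the strings are just labels for human eyes, not anything a tool will parse.

```python
# Sketch: the Windows Server 2012 R2 support rules above, as a simple lookup.
MIN_FOR_2012R2_DCS = {
    "Exchange 2013": "CU3 or later",
    "Exchange 2010": "SP3 + RU5 or later",
    "Exchange 2007": "SP3 + RU13 or later",
    # Exchange 2010 SP2 and earlier: not supported at all
}

MIN_FOR_2012R2_FUNCTIONAL_LEVELS = {
    "Exchange 2013": "SP1 or later",   # CU3 alone is not enough here
    "Exchange 2010": "SP3 + RU5 or later",
    # Exchange 2007: not supported at these functional levels
}

def minimum_level(product: str, scenario: str = "domain controllers") -> str:
    """Return the minimum supported update level, or a 'not supported' note."""
    table = (MIN_FOR_2012R2_DCS if scenario == "domain controllers"
             else MIN_FOR_2012R2_FUNCTIONAL_LEVELS)
    return table.get(product, "not supported")

print("2012 R2 DCs with Exchange 2007:", minimum_level("Exchange 2007"))
print("2012 R2 functional level with Exchange 2007:",
      minimum_level("Exchange 2007", scenario="functional level"))
```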

 

In the past couple of months since Windows Server 2012 R2 dropped, a few of my customers have asked about rolling out new domain controllers on this version – in part because they’re using it for other services and they want to standardize their new build-outs as much as they can.

My answer right now? Not yet.

Whenever I get a compatibility question like this, the first place I go is the Exchange Server Supportability Matrix on TechNet. Now, don’t let the relatively old “last update” time dismay you; the support matrix is generally only updated when major updates to Exchange (a service pack or new version) come out. (In case you haven’t noticed, Update Rollups don’t change the base compatibility requirements.)

Not that kind of matrix…

If we look on the matrix under the Supported Active Directory Environments heading, we’ll see that as of right now Windows Server 2012 R2 isn’t even on the list! So what does this tell us? The same thing I tell my kids instead of the crappy old “No means No” chestnut: only Yes means Yes. Unless the particular combination you’re looking for is listed, then the answer is that it’s not supported at this time.

I’ve confirmed this by talking to a few folks at Microsoft – at this time, the Exchange requirements and pre-requisites have not changed. Are they expected to? No official word, but I suspect if there is a change we’ll see it when Exchange 2013 SP1 is released; that seems a likely time given they’ve already told us that’s when we can install Exchange 2013 on Windows 2012 R2.

In the meantime, if you have Exchange, hold off from putting Windows 2012 R2 domain controllers in place. Will they work? Probably, but you’re talking about untested schema updates and an untested set of domain controllers against a very heavy consumer of Active Directory. I can’t think of any compelling reasons to rush this one.

The iPhone Wars, Day 121

120 days later and I figured it was time for an update on the war.

First: I still hate this thing.

Somewhere along the way with one of the iOS updates, the battery life started going to crap, even when I’m barely using the device. When I use it as a personal hotspot, I can practically watch the battery meter race to zero.

I’ve nailed down what it is about the email client that I don’t like, and it’s the same thing I don’t like about many of the apps: the user interfaces are inconsistent and cramped. Navigating my way through a breadcrumb trail that sits near (but not quite at) the top just feels clunky. This is where contrast with Windows Phone really, really hurts the iPhone in my experience; the Metro (I know, we’re not supposed to call it that anymore, but they can bite me) user interface principles are clean and clear. Trying to figure out simple tasks like how to get the iPhone to actually resync is more complex than necessary. Having the “new message” icon down at the bottom when the navigation is up top is stupid.

The one thing that impresses me consistently is that even though the screen is small, the on-screen keyboard is really good at figuring out which letter I am trying to hit. On my Windows Phone I mistype things all the time. This rarely happens on the iPhone. Even though the on-screen keys are much smaller, the iPhone typing precision is much higher. Microsoft, take note – I’m tired of what feels like pressing on one key only to have another key grab the focus.

Even the few custom apps I do use on this iPhone fail to impress. Thanks to a lack of consistent design language, learning one doesn’t help me with the rest, and I have discovered that iPhone developers are just as bad as Windows Phone developers when it comes to inexplicable gaps in functionality.

I guess no one knows how to write good mobile software yet.

The iPhone Wars, Day 1

Part of the fun of settling into a new job is the new tools. In this trade, that’s the laptop and the cell phone. Now, I already have a perfectly good laptop and cell phone, so I probably could have just gone on using those, but because so much of what I do is from home, I find it important to have a clear break between personal business and work. Having separate devices helps me define that line.

My current cell phone is a Nokia Lumia 1020 (Windows Phone 8), which I definitely enjoy. I haven’t had a good chance to take the camera for a full spin, but I’m looking forward to it. I’ve had a lot of PDAs and smart phones in my time: Palm Pilot, Handspring Visor, Windows Mobile, BlackBerry, Windows Phone 7, even an Android. The one I’ve never had, though, is an iPhone.

And it’s not that I hate Apple. My favorite past laptop was my MacBook Pro (Apple has ruined me for any other touchpad). Granted, I’m that weird bastard who loaded Vista SP1 into Boot Camp and hardly ever booted back into Mac OS X again, but ever since then I’ve usually had a spare Apple computer around the house, if only for Exchange interop testing. OS X is a good operating system, but it’s not my favorite, so my main device is always a Windows machine. My current favorite is my Surface Pro.

In all of that, though, I’ve never had an iOS device. Never an iPhone, never an iPad. Yesterday, that all changed.

I needed a business smart phone that runs a specific application, one that hasn’t yet been ported to Windows Phone. I’ve long been an advocate that “apps matter first; pick your OS and platform after you know what apps you need.” Here was my opportunity not to be a shining hypocrite! After discussion with Jeremy, I finally settled on an iPhone 5, as Android was going to be less suitable for reasons too boring to go into.

So now I have an iPhone, and I have just one question for you iPhone-lovers of the world: You really like this thing? Honest to goodness, no one is putting a gun to your head?

I can’t stand this bloody thing! First, it’s too damn small! I mean, yes, I like my smart phones somewhat large, but I have big hands and I have pockets. The iPhone 5 is a slim, flat little black carbon slab with no heft. I’ve taken to calling it the CSD – the Carbon Suppository of Death. Now, if it were just the form factor, I could get used to it, but there’s so much more that I can’t stand:

  • I didn’t realize how much I love the Windows Phone customizable menu until I wasn’t using it. I forget who once called the iPhone (and Android) menu “Program Manager Reborn” but it’s totally apt. Plus, all the chrome (even in iOS 7) just feels cluttered and junky now.
  • Speaking of cluttered, Apple sometimes takes the minimalist thing too far. One button is not enough. This, I think, Windows Phone nails perfectly. Android’s four buttons feel extraneous, but Apple’s “let there be one” approach feels like dogma that won’t bow to practicality.
  • The last time I used an iPod, it was still black & white. I can’t stand iTunes as a music manager, and I don’t like the device-side interface – so I won’t be putting any music on the CSD. No advantage there.
  • Likewise, you think I’m going to dink around with the camera on the CSD when I have the glorious Lumia camera to use? Get real, human.
  • The on-screen keyboard sucks. Part of this is because the device is so much smaller, but part of it is that Apple doesn’t seem to understand little touches that improve usability. On Windows and Android, when you touch the shift key, the case of the letters on the keys changes correspondingly; Apple is all, “LOL…NOPE!”
  • The Mail client irritates me too, even though I haven’t managed to put my finger on exactly why yet.

So is there anything I like about the device? Sure! I’m not a total curmudgeon:

  • Build quality looks impressive. If the CSD weren’t as flimsy as a communion wafer, I would be blown away by the feel of the device. It’s got good clean lines and understated elegance, like a suit from the expensive Savile Row tailors.
  • Power usage. The CSD goes through battery very slowly. Now part of that is because I’m not using it, but Apple has had time to optimize their game, and they do it very well indeed.
  • The simple little physical switch to put the CSD into silent mode. This is exactly the kind of physical control EVERY smart phone should have, just like every laptop should have a physical switch to disable the radios (not just a hotkey combination).

This is where I’m at, with a fistful of suck. Even an Android phone would be better than this. I’ve got no-one to blame but myself, and it’s not going to get any better. So look forward to more of these posts from time to time as I find yet another aspect of the CSD that drives me crazy.

“But Devin,” I hear some of you Apple-pandering do-gooders say, “You’re just not used to it yet. Give it time. You’ll grow to love it.”

CHALLENGE ACCEPTED.

Meet the New Corporate Overlords @CohesiveLogic

Just a brief announcement (you’ll be hearing more about it later) to let everyone know that I’ve found new employment with Cohesive Logic as a Principal Consultant. Jeremy and the rest are good people and I’m happy to be hanging my hat under their shingle. We’ve got some exciting stuff coming down the pipe, and while I’ll still be focusing on Exchange, I’ll have the opportunity to broaden my skill set.

A Keenly Stupid Way To Lock Yourself Out of Windows 8

Ready for this amazing, life-changing technique? Let’s go!

  1. Take a domain-joined Windows 8 computer.
  2. Logon as domain user 1.
  3. Notice that the computer name is a generic name and decide to rename it.
  4. Don’t reboot yet, because you have other tasks you want to do first.
  5. Switch users to domain user 2.
  6. Perform more tasks.
  7. Go to switch back to user 1. You can’t!
  8. Try to log back in as user 2. You can’t!

Good for hours of fun!

Defending a Bad Decision

It’s already started.

A bit over 12 hours after MSL’s cowardly decision to announce the end of the MCM program (see my previous blog post), we’re already starting to see a reaction from Microsoft on the Labor Day holiday weekend.

SQL Server MVP Jen Stirrup created an impassioned “Save MCM” plea on the Microsoft Connect site this morning at 6:19. Now, 7.5 hours later, it already has almost 200 votes of support. More importantly, she’s already gotten a detailed response from Microsoft’s Tim Sneath:

Thank you for the passion and feedback. We’re reading your comments and take them seriously, and as the person ultimately responsible for the decision to retire the Masters program in its current form, I wanted to provide a little additional context.

Firstly, you should know that while I’ve been accused of many things in my career, I’m not a “bean counter”. I come from the community myself; I co-authored a book on SQL Server development, I have been certified myself for nearly twenty years, I’ve architected and implemented several large Microsoft technology deployments, my major was in computer science. I’m a developer first, a manager second.

Deciding to retire exams for the Masters program was a painful decision – one we did not make lightly or without many months of deliberation. You are the vanguard of the community. You have the most advanced skills and have demonstrated it through a grueling and intensive program. The certification is a clear marker of experience, knowledge and practical skills. In short, having the Masters credential is a huge accomplishment and nobody can take that away from the community. And of course, we’re not removing the credential itself, even though it’s true that we’re closing the program to new entrants at this time.

The truth is, for as successful as the program is for those who are in it, it reaches only a tiny proportion of the overall community. Only a few hundred people have attained the certification in the last few years, far fewer than we would have hoped. We wanted to create a certification that many would aspire to and that would be the ultimate peak of the Microsoft Certified program, but with only ~0.08% of all MCSE-certified individuals being in the program across all programs, it just hasn’t gained the traction we hoped for.

Sure, it loses us money (and not a small amount), but that’s not the point. We simply think we could do much more for the broader community at this level – that we could create something for many more to aspire to. We want it to be an elite community, certainly. But some of the non-technical barriers to entry run the risk of making it elitist for non-technical reasons. Having a program that costs candidates nearly $20,000 creates a non-technical barrier to entry. Having a program that is English-only and only offered in the USA creates a non-technical barrier to entry. Across all products, the Masters program certifies just a couple of hundred people each year, and yet the costs of running this program make it impossible to scale out any further. And many of the certifications currently offered are outdated – for example, SQL Server 2008 – yet we just can’t afford to fully update them.

That’s why we’re taking a pause from offering this program, and looking to see if there’s a better way to create a pinnacle, WITHOUT losing the technical rigor. We have some plans already, but it’s a little too early to share them at this stage. Over the next couple of months, we’d like to talk to many of you to help us evaluate our certifications and build something that will endure and be sustainable for many years to come.

We hate having to do this – causing upset amongst our most treasured community is far from ideal. But sometimes in order to build, you have to create space for new foundations. I personally have the highest respect for this community. I joined the learning team because I wanted to grow the impact and credibility of our certification programs. I know this decision hurts. Perhaps you think it is wrong-headed, but I wanted to at least explain some of the rationale. It comes from the desire to further invest in the IT Pro community, rather than the converse. It comes from the desire to align our programs with market demand, and to scale them in such a way that the market demand itself grows. It comes from the desire to be able to offer more benefits, not fewer. And over time I hope we’ll be able to demonstrate the positive sides of the changes we are going through as we plan a bright future for our certifications.

Thank you for listening… we appreciate you more than you know.

First, I want to thank Tim for taking the time to respond on a holiday Saturday. I have no reason to think ill of him or disbelieve him in any way. That said, it won’t keep me from respectfully calling bullshit: not on the details of Tim’s response (such as they are), not on the tone of his message, but on the worldview it comes from.

First, this is the way the decision should have been announced to begin with, not that ham-fisted, mealy-mouthed, thinly disguised “sod off” piece of tripe that poor Shelby Grieve sent late last night. This announcement should have been released by the person who made the decision, taking full accountability for it, in the light of day, not pawned off on an underling who was allowed to sneak it out at midnight Friday on a three-day holiday weekend.

Second, despite Tim’s claims of being a developer first, manager second, I believe he’s failing to account for the seductive echo-chamber mentality that permeates management levels at Microsoft. The fatal weakness of making decisions by metrics is choosing the wrong metrics. When the Exchange program started the Ranger program (which later morphed into the first MCM certification), its goal wasn’t reach into the community. It was reducing CritSits on deployments. It was increasing the quality of deployments to reduce the amount of downtime suffered by customers. This is one of the reasons I have been vocal in the past that having MSL take on 100% responsibility for the MCM program was a mistake: we slowly but surely began losing the close coupling with the product group. Is the MCM program a failure by those metrics? Does the number of MCMs per year matter more than the actual impact that MCMs are making on Microsoft’s customers? This is hard stuff. Maybe, just maybe, having only a tenth of a percent of all MCPs achieve this certification is exactly the right outcome if you’re focused on getting the right people to earn it.

Third, MSL has shown us in the recent past that it knows how to transition from one set of certifications to another. When the MCITP and MCTS certifications were retired, there was a beautiful, coordinated wave of information that came out showing exactly what the roadmap was, why things were changing, and what the new path would look like for people. We knew what to expect from the change. Shelby’s announcement gave us no hint of anything coming in the future. It was an axe, not a roadmap. It left no way for people who had just signed up (and paid money for course fees, airplane tickets, etc.) to reach out and get answers to their questions. As far as we know, there may not be any refunds in the offing. I think it’s a bit early to be talking about lawyers, but several of my fellow MCMs don’t. All of this unpleasantness could have been avoided by making this announcement with even a mustard seed of compassion and forethought. Right now, we’re left with promises that something will come to replace MCM. Those promises are right up on my hearth along with the promises made to us in recent months about new exams, new testing centers, and all the other promises the MCM program has made. This one decision, and the badly wrought communication of it, has destroyed credibility and trust.

Fourth, many of the concerns Tim mentioned have been brought up internally in the MCM program before. The MCMs I went through my rotation with had lots of wonderful suggestions on how to approach solutions to these problems. The MCMs in my community have continued to offer advice and feedback. Most of this feedback has gone nowhere. It seems that somebody in between the trainers and face people we MCMs interact with and the folks at Tim’s level has been gumming up the communication. Ask any good intelligence analyst – sometimes you need to see the raw data instead of just the carefully processed work from the people below you in the food chain. Somewhere in that mass of ideas are good suggestions that probably could have been made to work to break down some of those non-technical barriers long before now, if only they’d gotten to the right level of management, where someone had the power to do something about it. Again, in a metrics-driven environment, data that doesn’t light up the chosen metrics usually gets ignored or thrown out. There’s little profit in taking the risk of challenging assumptions. Combine that with a distinct “not invented here” syndrome, and it feels like MSL has had a consistent pattern of refusing to even try to solve problems. Other tech companies have Master-level exams that don’t suffer too badly from brain dumps and other forms of cheating. Why can’t Microsoft follow what they are doing and improve incrementally from there? I believe it’s because that would require investing even more money and time into these solutions, something that won’t give back the appropriate blips on the metrics within a single financial year.

So while I appreciate the fact that Tim took the time to respond (and I will be emailing him to let him know this post exists), I don’t believe that doing things in this fashion was the only option MSL had. And right now, I believe that’s exactly the impression this response is going to generate among an already angry MCM community.

Ain’t Nobody [at Microsoft Learning] Got Time For That

If you track other people in the Microsoft Certified Master blogosphere you’ve probably already heard about the shot to the face the MCM/MCSM/MCA/MCSA program (which I will henceforth refer to just as MCM for simplicity) took last night: a late Friday night email announcing the cancellation of these programs.

"Wait for it...wait for it..."

“Wait for it…wait for it…”

I was helping a friend move at the time, so I checked the email on my phone, pondered it just long enough to get pissed off, and then put it away until I had time and energy to deal with it today.

This morning, a lot of my fellow members of the Microsoft IT Pro community are reacting publicly, including Microsoft employees, MCM trainers, MCMs, and MCM candidates.

Others have already made all of the comments I could think to make — the seemingly deliberately bad timing, the total disconnect of this announcement from recent actions and announcements regarding MCM availability, the shock and anger, all of it.

The only unique insight I seem to have to share is that this does *not* seem to be something that the product groups are on board with — it seems to be coming directly from Microsoft Learning and the higher-ups in that chain. Unfortunately, those of us who resisted and distrusted the move of MCM from being run by the product groups in partnership with MSL to the new regime of MSL owning all the MCM marbles (which inevitably led to less and less interaction with the actual product groups, with the predictable results) now seem to be vindicated.

I wish I’d been wrong. But even this move was called out by older and wiser heads than mine, and I discounted them at the time. Boy, was I wrong about that.

I’m really starting to think that as Microsoft retools itself to try to become a services and devices company, we’re going to see even more of these kinds of measures (TechNet subs, MCM certs) that alienate the highly trained end of the IT Pro pool. After all, we’re the people who know how to design and implement on-premises solutions that you folks can run cheaper than Microsoft’s cloud offerings. Many of the competitors to Microsoft Consulting or to Microsoft hosted services had one or more MCMs on staff, and MCM training was a great viewpoint into how Office 365 was running its deployments. In essence, what had once been a valuable tool for helping sell Microsoft software licenses and reduce Microsoft support costs has now become, in the Cloud era, a way for competitors and customers to knowledgeably and authoritatively derail the Cloud business plans.

From that angle, these changes make a certain twisted sort of short-term sense — and with the focus on stock price and annual revenues, short-term sense is all corporate culture knows these days.

For what it’s worth, SQL Server MVP Jen Stirrup has started this Connect petition to try to save the MCM program. I wish her luck.

The Case for TechNet

By now, those of you who are my IT readers almost certainly know about Microsoft’s July 1st decision to retire the TechNet subscription offerings for IT professionals. In turn, Cody Skidmore put together a popular site to petition Microsoft to save TechNet subscriptions. Cody and many others have blogged about their reasons why they think that TechNet subscriptions need to be revived, rather than stick with Microsoft’s current plans to push Azure services, trial software, and expensive MSDN subscriptions as reasonable alternatives. I have put my name to this petition, as I feel that the loss of TechNet subscriptions is going to have a noticeable impact on the Microsoft ecosystem in the next few years.

I also hear a few voices loudly proclaiming that everything is fine. They generally make a few good points, but they all make a solitary, monumental mistake: they assume that everyone using TechNet subscriptions uses them for the same things they do, in the same ways they do. Frankly, this myopia is insulting and stupid, because none of these reasons even begin to address why I personally find the impending loss of TechNet subscriptions to be not only irritating, but actively threatening to my ability to perform at my peak as an IT professional.

As a practicing consultant, I have to be an instant expert on every aspect of my customers’ Exchange environments, including the things that go wrong. Even when I’m on-site (which is rare), I usually don’t have unlimited access to the system; security rules, permissions, change control processes, and the need for uptime are all ethical boundaries that prevent me from running amok and troubleshooting wildly to my heart’s content. I can’t go cowboy and make whatever changes I need to (however carefully researched they may be) until I have worked out that those changes will in fact fix the problem and what the rollback process is going to be if things don’t work as expected.

Like many IT pros, I don’t have a ton of money to throw around at home. Because I have been working from home for most of the last few years, I have not even had access to my employer’s labs for hardware or software. I’ve been able to get around this with TechNet and virtualization. The value that TechNet provides, at a reasonable price point, is full access to current and past versions of Microsoft software, updates, and patches, so I can replicate the customer’s environment in its essence, reduce the problem to the minimum steps for reproduction, and explore fixes or call in Microsoft Support for those times when it’s an actual bug with a workaround I can’t find. Demo versions of current software don’t help when I’m debugging interactions with legacy software, much of which rapidly becomes unavailable or at least extremely hard to find.

Microsoft needs to sit up and take notice; people like me save them money. Per-incident support pricing is not heinous, and it only takes a handful of hours going back and forth with the support personnel before it’s paid for itself from the customer’s point of view (I have no visibility into the economics on Microsoft’s side, but I suspect it is subsidized via software and license pricing overall). The thing is, though, Microsoft is a metric-driven company. If consultants and systems administrators no longer have a cost-effective source for replicating and simplifying problems, the obvious consequence I see is that Microsoft will see a rise in support cases, as well as a rise in the average time to resolve support cases, with the corresponding decrease in customer satisfaction.

Seriously, Microsoft – help us help you. Bring back TechNet subscriptions. They have made your software ecosystem one of the richest and healthiest of any commercial software company. Losing them won’t stem piracy of your products and won’t keep companies from using your software, but it will threaten to continue the narrative that Microsoft doesn’t care about its customers. And today more than ever, there are enough viable alternatives that you cannot take customer loyalty for granted.

Taking TechNet subscriptions away is a clear statement that Microsoft doesn’t trust its customers; customers will respond in kind. As the inevitable backlash to cloud services spreads in the wake of the NSA revelations, Microsoft will need all of the trust it can get. This is penny-wise, pound-foolish maneuvering at precisely the wrong time.

Newsflash: Sexuality Is Already in Scouting

I ran across this article from May, from yet another conservative Christian Scouter who seems to be convinced that by accepting gay Scouts into the BSA, the end is near for all morality in the Scouting program. As comments are closed, my responses are here. I hope the author’s blog registers the trackback and he sees it.

First, the condescending tone in the post (see the paragraph about liberals and “the choir”) makes it clear that he thinks there is absolutely no discussion that can be had, nothing to learn from an alternate point of view. This is the kind of closed mind that is the most dangerous to any youth program anywhere. Leaders need to balance between a firm and strong sense of what their pillars are and the willingness to learn new insight from those of opposing views. The only people that Jesus didn’t waste his time with were the Pharisees – the ones whose minds were rigid and unyielding.

Second, I agree 100% with him about the need to ensure that no overt acts of sexuality (regardless of orientation) have any place in the Scouting program. However, if he really thinks that sexuality is not already in Scouting, I have unwelcome news for him:

  • I remember from my own time as a Scout: when the Scoutmasters aren’t around, there’s a large amount of sexual humor and indoctrination that gets passed around from boy to boy. From fairly benign (calling Scout camp “memories without mammaries”) to merely inappropriate (streaking through a camp site) to more potentially unhealthy activities and peer pressure, these activities were there when I was a boy. From what my son tells me, they’re still there today. He’s in a great troop with a lot of amazing leaders, but no matter how great the parents/leaders/boys, when you push a prevalent and powerful aspect of humanity under the carpet, it will find ways to express itself. Adolescence is exactly the time when humans are dealing with powerful feelings of sexuality for the first time, and it is confusing. Scouters are often trusted adults, especially when the boys don’t have a good relationship with their parents.
  • As Scouters we have to model responsible behavior to our Scouts, including appropriate forms of sexuality. Sexuality is far more than physical intimacy; it includes our attitudes on gender, orientation, sexual roles, and more. Our Scouts watch us closely; if we are disrespectful of women and dismissive of non-masculine men (as many Scouters frequently are), they will learn that behavior is appropriate and they will indulge in it too.
  • Alaric attended the recent National Scout Jamboree. The Jamboree selection process, if you’re not familiar, limits the number of Scouts per council; there’s an interview and recommendation process that in theory ensures the Scouts who went to Jamboree are living the Scout Oath and Law. Yet despite all the precautions, they had problems with Scouts treating the female youth attendees (American Venture Scouts and international Scouts) with a marked lack of respect, including peeping tom incidents at their showers. Sexuality (of the heterosexual nature) is alive and well in Scouting. The answer is not to ignore sex; it is to address it in the appropriate context and with the appropriate limits and boundaries for Scouting activities.

Third, expecting gay Scouts to be silent about their orientation, *even when they are following Scouting guidelines about sexual activities*, is explicitly unequal.

After all, how many times have you heard of a heterosexual Boy Scout declaring for all to hear that, “I’m a heterosexual and I’m sexually active and I lust after girls?” Why is it that the GLBT crowd needs to publicly share their sexual preferences? And why on earth would a parent go on national television, or go into a court of law, to show support for their teenage son’s sexual preference for other boys?

Heterosexual Scouts make that declaration (or have it made for them) on a regular basis. When there’s an adolescent joke about boobs, or Scouts ogle another Scout’s sister, heterosexual Scouts are non-verbally (but nevertheless clearly and loudly) making the declaration that they are heterosexual beings who are attracted to girls. That doesn’t mean that they are sexually active. It is not a hallmark of “the LGBT agenda” that parents don’t want their boys to be forced to assume a mantle of silence or be assumed to be sexually prolific just because they aren’t attracted to girls. Again, I’ll use poor Alaric as an example; I know he likes girls and I know what types he likes, as do his friends and members of his troop, but he feels no need (I like to think in part because he can be open with us) to become a sexually active fourteen year-old. This is because of his character, not because he likes girls.

The mistake the blog author makes here, and he makes it consistently, is to conflate “gay” with “sexually active.” There is no reason to assume that homosexual teenagers will be any more sexually active than heterosexual teenagers (and if he wants to dispute that, I’ll be happy to point him to the studies showing increased rates of sexually transmitted diseases among Christian, abstinence-only youth who engage in risky alternatives to vaginal intercourse, because their rigid upbringing gives no thought to failure modes). In fact, homosexual youth who have access to a variety of caring, responsible adult role models are more likely to make informed, intelligent choices about sex. If the author really wants to keep kids from acts he believes are immoral, he can do far worse than encourage them to get into Scouting where they can be around leaders and other boys who will help reinforce the desired standard of behavior.

Finally, the author needs to drop the martyr complex he displays in his last paragraph. Although it was never a prominent or prevalent practice, historical research shows that the Christian church throughout the ages has at times and places supported homosexual members, including through the celebration of marriage for homosexual couples. Many of us who support Scouting’s long-overdue change in policy do so from our own religious principles. Though we differ from the author on this issue, there are many aspects of character that we do agree on, including the points of the Scout Oath and Law, even if we do not see eye-to-eye over every point of interpretation.

Scouting is a worldwide movement. American Scouting and our particular struggles over the interpretation of Christian doctrine are not the acme of the Scouting ideals. If this one issue is really so important to him that he feels he has no choice but to cede his involvement in Scouting in the event of a legal challenge, I for one will miss the richness and depth he brings to the overall tapestry of Scouting. However, that tapestry has to be a living tapestry. Scouting is supposed to be inclusive enough to be an umbrella for multiple religions and views, to adapt and grow as our society changes. I refuse to believe that this one issue is the one that will destroy Scouting.

That’s not the vibrant Scouting program I know.

Finding Differences in Exchange objects (#DoExTip)

Many times when I’m troubleshooting Exchange issues, I need to compare objects (such as user accounts in Active Directory, or mailboxes) to figure out why there is a difference in behavior. Often, the difference is tiny and hard to spot. It may not even be visible through the GUI.

To do this, I first dump the objects to separate text files. How I do this depends on the type of object I need to compare. If I can output the object using Exchange Management Shell, I pipe it through Format-List and dump it to text there:

Get-Mailbox -Identity Devin | fl > Mailbox1.txt

If it’s a raw Active Directory object I need, I use the built-in Windows LDP tool and copy and paste the text dump to separate files in a text editor.

Once the objects are in text file format, I use a text comparison tool, such as the built-in comparison tool in my preferred text editor (UltraEdit) or the standalone tool WinDiff. The key here is to quickly highlight the differences. Many of those differences aren’t important (metadata such as last time updated, etc.), but I can spend my time quickly looking over the properties that are different, rather than brute-force comparing everything about the different objects.
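
For example, here’s what the whole round trip looks like for two mailboxes (a minimal sketch; the identities and paths are placeholders, and any diff tool will do in place of WinDiff):

# Dump each object to its own text file, one property per line
Get-Mailbox -Identity WorkingUser | Format-List > C:\Temp\WorkingUser.txt
Get-Mailbox -Identity BrokenUser | Format-List > C:\Temp\BrokenUser.txt

# Compare the two dumps and scan the highlighted differences
windiff.exe C:\Temp\WorkingUser.txt C:\Temp\BrokenUser.txt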

I can hear many of you suggesting other ways of doing this:

  • Why are you using text outputs even in PowerShell? Why not export to XML or CSV?
    If I dump to text, PowerShell displays the values of multi-value properties and other property types that it doesn’t show if I export the object to XML or CSV. This is very annoying, as the missing values are typically the source of the key difference. Also, text files are easy for my customers to generate, bundle, and email to me without any worries that virus scanners or other security policies might intercept them.
  • Why do you run PowerShell cmdlets through Format-List?
    To make sure I have a single property per line of text file. This helps ensure that the text file runs through WinDiff properly.
  • Why do you run Active Directory dumps through LDP?
    Because LDP will dump practically any LDAP property and value as raw text as I access a given object in Active Directory. I can easily walk a customer through using LDP and pasting the results into Notepad while browsing to the objects graphically, as per ADSIedit. There are command line tools that will export in other formats such as LDIF, but those are typically overkill and harder to use while browsing for what you need (you typically have to specify object DNs).
  • PowerShell has a Compare-Object cmdlet. Why don’t you use that for comparisons instead of WinDiff or text editors?
    First, it only works for PowerShell objects, and I want a consistent technique I can use for anything I can dump to text in a regular format. Second, Compare-Object changes its output depending on the object format you’re comparing, potentially making the comparison useless. Third, while Compare-Object is wildly powerful because it can hook into the full PowerShell toolset (sorting, filters, etc.) this complexity can eat up a lot of time fine-tuning your command when the whole point is to save time. Fourth, WinDiff output is easy to show customers. For all of these reasons, WinDiff is good enough.

Using Out-GridView (#DoExTip)

My second tip in this series is going to violate the ground rules I laid out for it, because they’re my rules and I want to. This tip isn’t a tool or script. It’s a pointer to an insanely awesome feature of Windows PowerShell that just happens to nicely solve many problems an Exchange administrator runs across on a day-to-day basis.

I only found out about Out-GridView two days ago, the day that Tony Redmond’s Windows IT Pro post about the loss of the Message Tracking tool hit the Internet. A Twitter conversation started up, and UK Exchange MCM Brian Reid quickly chimed in with a link to a post from his blog introducing us to using the Out-GridView control with the message tracking cmdlets in Exchange Management Shell.

This is a feature introduced in PowerShell 2.0, so Exchange 2007 admins won’t have it available. What it does is simple: take a collection of objects (such as message tracking results, mailboxes, public folders — the output of any Get-* cmdlet, really) and display it in a GUI gridview control. You can sort, filter, and otherwise manipulate the data in-place without having to export it to CSV and get it to a machine with Excel. Brian’s post walks you through the basics.
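
To give you a taste of it (a minimal sketch; the server name and dates are made up for illustration):

# Browse a day's worth of message tracking logs in a sortable, filterable grid
Get-MessageTrackingLog -Server EX01 -Start "8/30/2013 00:00" -End "8/31/2013 00:00" -ResultSize Unlimited | Out-GridView

# The same trick works with the output of any Get-* cmdlet
Get-Mailbox -ResultSize Unlimited | Out-GridView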

In just two days, I’ve already started changing how I interact with EMS. There are a few things I’ve learned from Get-Help Out-GridView:

  • On PowerShell 2.0 systems, Out-GridView is the endpoint of the pipeline. However, if you’re running it on a system with PowerShell 3.0 installed (Windows Server 2012), Out-GridView can be used to interactively filter down a set of data and then pass it on in the pipeline to other commands. Think about being able to grab a set of mailboxes, fine-tune the selection, and pass them on to make modifications without having to get all the filtering syntax correct in PowerShell (see the sketch after this list).
  • Out-GridView is part of the PowerShell ISE component, so it isn’t present if you don’t have ISE installed or are running on Server Core. Exchange can’t run on Server Core, but if you want to use this make sure the ISE feature is installed.
  • Out-GridView allows you to select and copy data from the gridview control. You can then paste it directly into Excel, a text editor, or some other program.
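
Here’s a rough sketch of that PowerShell 3.0 pipeline behavior from the first bullet; the quota values are made up, and -PassThru simply sends whatever rows you select in the grid on down the pipeline:

# PowerShell 3.0 or later: select the mailboxes you want in the grid, click OK,
# and only those mailboxes flow on to Set-Mailbox
Get-Mailbox -ResultSize Unlimited |
    Out-GridView -PassThru |
    Set-Mailbox -IssueWarningQuota 4.5GB -ProhibitSendQuota 5GB -UseDatabaseQuotaDefaults $false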

This is a seriously cool and useful tip. Thanks, Brian!

Exchange Environment Report script (#DoExTip)

My inaugural DoExTip is a script I have been rocking out to and enthusiastically recommending to customers for over a year: the fantastic Exchange Environment Report script by UK Exchange MVP Steve Goodman. Apparently Microsoft agrees, because they highlight it in the TechNet Gallery.

It’s a simple script: run it and you get a single-page HTML report that gives you a thumbnail overview of your servers and databases, whether standalone or DAG. It’s no substitute for monitoring, but as a regular status update posted to a web page or emailed to a group (easily done from within the script) it’s a great touch point for your organization. Run it as a scheduled task and you’ll always have the 50,000 foot view of your Exchange health.
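
If you go the scheduled-task route, the manual run itself is as simple as this (a rough sketch; the script file name, the -HTMLReport parameter, and the paths here are from the version I’ve been using and may differ in yours, so check Steve’s post for the current syntax):

# Run from the Exchange Management Shell; writes the one-page HTML report to the given path.
# Wrap this in a scheduled task, or use the script's mail parameters to get it emailed out regularly.
.\Get-ExchangeEnvironmentReport.ps1 -HTMLReport C:\Reports\ExchangeEnvironment.html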

I’ve used it for migrations in a variety of organizations, from Exchange 2003 (it must be run on Exchange 2007 or higher) on up. I now consider this script an essential part of my Exchange toolkit.

Introducing DoExTips

At my house, we try to live our life by a well-known saying attributed to French philosopher Voltaire: “The perfect is the enemy of the good.” This is a translation from the second line of his French poem La Bégueule, which itself is quoting a more ancient Italian proverb. It’s a common idea that perfection is a trap. You may be more used to modern restatements such as the 80/20 rule (the last 20% of the work takes 80% of the effort).

I’ve had an idea for several years to fill what I see is a gap in the Exchange community. I’ve been toying with this idea for a while, trying to figure out the perfect way to do it. Today, I had a Voltaire moment: forget perfect.

So, without further ado, welcome to Devin on Exchange Tips (or #DoExTips for short). These are intended to be small posts that occur frequently, highlighting free scripts and tools that members of the global Exchange community have written and made available. There’s a lot of good stuff out there, and it doesn’t all come from Microsoft, and you don’t have to pay for it.

The tools and scripts I’ll highlight in DoExTips are not going to be finished products or polished. In many cases, they’ll take work to adapt to your environment. I’m going to quickly show you something I found and have used as a starting point or springboard, not solve all your problems for you.

So, if you’ve got something you think should be highlighted as a DoExTip, let me know. (Don’t like the name? Blame Tom Clancy. I’ve been re-reading his Jack Ryan techno-thrillers and so military naming is on the brain.)

Let’s Test It!

I’ve been studying karate for nearly five years now, and I don’t think I’ve shared this story before. When we’re sparring, students are required to wear the appropriate protective gear. No head shots, for example, if you’re not wearing head protection. For males, a sports cup is mandatory, for reasons that probably don’t require elaboration.

When I was buying a cup, I had no clue what to get. The only sports I’d done as a kid were one season of track in high school and some Pee-Wee/Little League baseball. I’d never had to deal with a cup before. I’d heard lots of horror stories about them: they were uncomfortable, didn’t fit, and didn’t really keep blows from hurting as much as they reduced the pain to manageable levels.

No, thanks. This geek did some research and came up with the Nutty Buddy. This was a cup whose inventor stood by his product by taking 90 mph fastballs from a pitching machine to his crotch. After reading around, I was sold. It was more expensive, but hey, not feeling soul-crushing pain is worth it, right?

Here’s what happened next, as I sent it to Nutty Buddy:

My order arrived on the day of a sparring class. That night, I prepped for class a little early so I could figure out how to get my Nutty Buddy put in place. Having bought the “Build Your Own Package” option, I had everything I needed, and soon I was all dressed in my gi, ready to go. I walked out from my bedroom to the living room to pick up my gear bag and was met by my son, then 11 years old. “Do you have it on?” he asked eagerly and I nodded. “Great, let’s test it!” he said as he executed a perfect front snap-kick to the boys. It was a great kick, too – one of those kind you can’t be thinking about, you just have to let it rip. He immediately realized what he’d done and started apologizing, but was shocked when I laughed. The only thing I’d felt was the shock. The Nutty Buddy lived up to the hype, and I knew it was worth every penny.

No matter how prepared you are for life, sometimes you only know whether something’s going to work by just doing it.

#MSExchange 2010 and .NET 4.0

Oh, Microsoft. By now, one might think that you’d learn not to push updates to systems without testing them thoroughly. One would be wrong. At least this one classifies as a minor annoyance and not outright breakage…

Windows Update offers up .NET 4.0 to Windows 2008 R2 systems as an Important update (and has been for a while). This is fine and good – various versions of the .NET framework can live in parallel. The problem, however, comes when you accept this update on an Exchange 2010 server with the CAS role.

If you do this, you may notice that the /exchange, /exchweb, and /public virtual directories (legacy directories tied to the /owa virtual directory) suddenly aren’t redirecting to /owa like they’re supposed to. Now, people aren’t normally using these directories in their OWA URLs anymore, but if someone does attempt to hit one of these virtual directories it leaves a gnarly error message to spam your event logs.

This is occurring because when .NET 4.0 is installed and the ASP.NET 4.0 components are tied into IIS, the Default Application Pool is reconfigured to use ASP.NET 4.0 instead of ASP.NET 2.0 (the version used by the .NET 3.5 runtime on Windows 2008 R2). What exactly it is about this that breaks these legacy virtual directories, I have no idea, but break them it does.

The fix for this is relatively simple: uninstall .NET 4.0 and hide the update from the machine so it doesn’t come back. If you don’t want to do that, follow this process outlined in TechNet to reset the Default Application Pool back to .NET 2.0. Be sure to run IISRESET afterwards.
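
For reference, my understanding is that the TechNet process boils down to pointing the Default Application Pool back at the 2.0 runtime and restarting IIS, something like this (run from an elevated prompt, and double-check it against the TechNet article for your environment):

# Point the Default Application Pool back at the .NET 2.0 runtime
& "$env:windir\System32\inetsrv\appcmd.exe" set apppool "DefaultAppPool" /managedRuntimeVersion:v2.0

# Restart IIS so the change takes effect
iisreset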

Blues Brother

Right at the end of December, I decided that January 2013 would be my year of just saying “Do it.” The first thing I said “do it” to was getting my hair dyed blue, like I’ve been wanting to for over a decade. That Saturday, I walked into my hairstylist for my normal haircut, and came out with a little more.

[Photo: My blue-green hair in December]

I loved the cut and the color (a blue-green-silver mix), and after two weeks it had faded to a soft cotton-candy color of blue. However, it just kept on fading. Time for a refresh, so back in to my fantastic hairstylist, Liz!

[Photo: My partner in crime]

This time, we dropped the green and mixed the blue and silver in nearly equal proportions. The result is vivid now, but we think it’s going to be fantastic after some fading!

[Photo: Move over, IBM]

The best part of this experiment is that if I ever get tired of looking like a dry-erase marker, I can simply shave it off. It’s not like that’s a new look for me. The plan, though, is to keep experimenting with fun colors and settle down on a few favorites.

A Few Bullet Points on American Gun Culture

I’m a gun owner. I hold a concealed pistol license in the state of Washington and I own a pistol and a rifle, which I have taken reasonable and prudent steps to keep locked up and safe when they are not in use. Although I have not taken a formal gun safety class, I have had firearms training and have taken steps to ensure that my family is also provided with training. My kids have enjoyed the carefully supervised events when they have been taken shooting by myself and other qualified adults.

I’ve had some thoughts stirring around for a while on the topic of America and the 2nd Amendment, but it wasn’t until today I pulled them together enough to start the process of writing a blog post.

Note 1: I’m going to do my level best to be polite and respectful to all parties, regardless of their political position on this subject, and I request that all commenters do the same. People crossing the line of civility may get a warning or I may just delete their comment, depending on the severity.

The Ground Rules

Today, on Facebook, one of my friends posted this picture:

Figure 1: Constitutional law qualifications (“Who knows more about the Constitution?”)
(I can’t find the original source for this; if you know it, please let me know.)

As you can imagine, this prompted (as do almost all gun control threads on the Internet) a barrage of comments. Sadly, these types of discussions tend to quickly be dominated by one of two vocal extremes:

  • The gun enthusiast (pejoratively known as the “gun nut” or “right-wing whackjob”), who often gives the impression that she won’t be happy until she can personally and privately own any weapon system ever made, up to and including ICBMs, aircraft carriers, Abrams tanks, and F-22 Raptors. She is typically, but not always, aligned with the more extremely conservative side of the political spectrum.
  • The gun worrier (pejoratively known as the “gun grabber” or “bleeding-heart liberal”), who commonly and frequently opines that mankind will know nothing but a wretched existence devoid of any light, joy, or hope until every last physical instance of, drawing of, reference to, or even mental concept of a weapon is wiped from existence. He is typically, but not always, aligned with the more extremely liberal side of the spectrum.

Note 2: if you fit into one of these two extremes, I will give you good advice: stop reading now, and move on. You won’t like what I have to say; I refuse to validate your unreasonably narrow and exclusionary viewpoint. I won’t let other people call you names should you choose to ignore my advice and comment, but I will redact your extremist attempts to redirect a civil conversation into your own flavor of lunacy. Be warned – my blog, my rules. You want to post your own screed? Go burn your own storage and bandwidth to do it.

Almost immediately, a good point was made: while Obama’s credentials are accurately stated, this picture attempts to make a point through blatant use of stereotypes. We know nothing about the gentleman in the red box – he might also be an Ivy League Constitutional scholar, or a distinguished judge, or even a talented and knowledgeable amateur scholar. We don’t know and we’re not told. This is the good old “guilt by association” propaganda ploy – if you like big scary guns, you’re probably ignorant just based on your appearance. Not a great way for liberals to make a point.

At the same time, conservatives are guilty of blatantly false propaganda too:


Figure 2: One of these things is not like the other (found on rashmanly.com)

Really? A democratically elected (twice, now, even!) federal executive, in a country with some of the most extensive checks and balances, who for at least half of his time in office has had to deal with a Congress (you know the branch of the government that actually makes the laws) controlled by his political opponents, is magically a dictator on par with some of the worst tyrants of recorded history? Because his biggest political acts have been to try to keep our country from plunging into a hyper-inflationary depression, to make sure poor people have access to medical care, and to try to maybe do something to reduce the number of innocent people killed by guns in this country every year? Remember, this is the President who pissed off many in his party because he didn’t bother to dismantle many of the incentives put in place by his predecessor.

Note 3: Don’t even think of heading to the “Democrats just want to take away guns and Republicans are protecting gun rights.” Remember the assault rifle ban that expired in 2004? The one that was enacted in 1994, which would have been during the (Democratic) Clinton administration? The one that was lobbied for by Ronald Reagan?

Finding Middle Ground

Okay, now that I’ve unilaterally declared extremes off the table, let’s dig into the meat of the original graphic – which is the fact that Obama has a background in Constitutional law, so unlike many politicians and political wonks, he might actually have a more than passing familiarity with some of the issues involved.

Obama is using executive orders to make changes within the framework of existing law, as well as working to introduce legislation to accomplish additional goals such as reintroducing the expired assault rifle ban. Some of these changes are likely to be polarizing, but outside of the echo chambers and spin factories, there’s actually a large amount of support for many of these proposals – and this according to a poll of 945 gun owners conducted last July by Republican party pollster Frank Luntz, before the events of Newtown. After Newtown, support for stricter laws on the sale of firearms has increased overall, including increased support for passing new laws, although support for renewal of the assault rifle ban is still just shy of a majority. Yet somehow, any discussion of changes provokes an immediate, hostile response.

It’s also inevitable to see someone trot out the argument that since cars kill far more people than guns, we should be regulating cars instead. Um, hello? We do. Car manufacturers have to regularly participate in studies and make changes to cars to reduce deaths caused by cars, and over the decades, it’s worked. We do the same thing for other forms of violence — we study it, and we make intelligent changes to reduce the impact. But the current climate and talking points (such as the historically inaccurate charge that gun control led to the Holocaust) have kept us at a virtual standstill on dealing with gun violence of any type.

Thanks to a careful and prolonged lobbying and political spending campaign by the NRA and the gun manufacturers, we don’t even have credible research that would tell us why American gun deaths are so much higher than in comparable nations. Let me be clear; the NRA does a lot of good, but they are a human institution, and over the past couple of decades they’ve transformed themselves from a simple society to promote scientific rifle shooting into a lobbying organization. At times, I think this dichotomy can drive the NRA leadership out of sync with their members’ concerns and lead them to try to drive policy and dictate their members’ beliefs rather than represent them.

At this point, I think it’s obvious that some sort of changes need to be made. The USA has a gun homicide rate that is 4.5 times higher (or more) than that of other G-8 countries. When confronted with these facts, many people respond with talking points about how countries that have enacted gun control laws see a rise in crimes such as violent assault (Australia is a frequently featured talking point). However true these points may be, I can’t help but think that’s an invalid comparison. If I were the victim of a crime, I think I would rather be injured than outright killed. I would rather have my stuff stolen than lose my wife or one of my kids. But overall, the crime rate in the US is dropping.

Like many Americans, I’m in favor of extending background checks and doing more to ensure that people with a history of violent mental illness or misdemeanor violence have reduced access to guns. Without comprehensive studies, I’m not convinced that renewing the assault rifle ban will actually help anything (are extended magazines actually useful in genuine self-defense scenarios, or would regular magazines do the trick?). But there are a number of potential steps I’ve thought of that I’ve seen no discussion of:

  • I’m disturbed by the fact that when I take a free CPR or First Aid class, I face more stringent requirements than I do for my CPL. When I get CPR training, I have to demonstrate that I am up to date in my training and technique and recertify every year or two at the most; when I applied for my concealed pistol license, all I had to do was not currently be a felon, and I got a five-year license. Different states have different requirements; maybe it’s time to get a more consistent framework in place that requires more frequent check-ins and more frequent training?
  • While we’re talking about training, let’s hit another popular talking point: that armed private citizens are likely to stop mass shootings. While there are incidents of gun owners (typically store clerks) stopping an attempted robbery, the private citizens that have stopped instances of mass shootings all turn out to be private or off-duty security personnel who have substantially higher levels of firearms training than the average citizen (such as the Clackamas Mall shooting in Portland, OR).
  • One of the claimed benefits of having less restrictive firearms statutes is crime reduction. More armed citizens, it is said, equals lower crime. However, in order to have this kind of deterrent effect, don’t the criminals have to either know that people are carrying, or at least have a reasonable suspicion that people are carrying? Concealed carry would seem to be counter-productive; open carry would actually allow criminals to know what they’re about to get into. Is American culture ready for open carry? Again, this is an area we’d need more research on.
  • What about on-site gun safe inspections as part of the permit approval process? If one of the big concerns is people getting inappropriate access to guns, we should be making sure guns are being appropriately stored and locked away.

There’s a horrible patchwork of laws in place, and there are some loopholes that should be closed, as long as we can do so without heading down the path of a gun registry. Come on: yes, there are some screwballs who want to take all guns away, just as there are some screwballs who think that they should be able to own fully operable RPGs and tanks and fighter jets. Most of us are somewhere in the middle, although not in the same part of the middle, but we can’t even have a realistic, reasoned discussion on this because the people who benefit financially from the status quo make sure we can’t.

At this point in time, we can’t have a meaningful conversation on what the “well-regulated” clause in the 2nd Amendment is supposed to mean. All of our other liberties have been slowly and carefully re-interpreted over time – sometimes overly so, usually with corrections in the long run – as the times changed and as the nation changed and (yes) as we saw the fruits of some of the Founders’ mistakes. They were human; of course they made mistakes. They knew they would make mistakes and that we would have to adjust for situations they could never have foreseen. And yet anything but a strict reading of the 2nd Amendment is somehow off the table for even reasonable discussion? Why must we hew strictly to the Founding Fathers’ intentions in this one area when we willingly ignore them in other areas? (Check out what they had to say about professional politicians, lobbyists, and a two-party system.)

So, yes, sometimes it takes a Constitutional scholar to understand not only the original context of our Constitution, but also remember that the Founding Fathers always intended this Constitution to grow and live and adapt as our country did. It’s time for us to open the doors to a reasoned discussion on all areas of the 2nd Amendment, including the precise definition of which weapons it makes sense to allow citizens to have and what sorts of controls might be prudent to put in place to balance the right to self-defense with the reasonable safety of those around us.

Attached To You: Exchange 2010 Storage Essays, part 3

[2100 PST 11/5/2012: Edited to fix some typos and missing words/sentences.]

So, um…I knew it was going to take me a while to write this third part of the Exchange 2010 storage saga…but over two years? Damn, guys. I don’t even know what to say, other than to get to it.

So, we’ve got this lovely little three-dimensional storage model I’ve been talking about in parts 1 (JBOD vs. RAID) and 2 (SATA vs. SAS/FC). Part 3 addresses the third axis: SAN vs. DAS.

Exchange Storage DAS vs. SAN

What’s in a name?

It used to be that everyone agreed on the distinction between DAS, NAS, and SAN:

  • DAS was typically dumb or entry-level storage arrays that connected to a single (or at most two or three) servers via SCSI, SATA/SAS, or some other storage-specific cabling/protocol. DAS arrays typically had very little on-board smarts, other than the ability to run RAID configurations and present the RAID volumes to the connected server as if they were a single volume instead.
  • NAS was file-level storage presented over a network connection to servers. The two common protocols used were NFS (for Unix machines) and SMB/CIFS (for Windows machines). NAS solutions often include more functionality, including features such as direct interfaces with backup solutions, snapshots of the data volumes, replication of data to other units, and dynamic addition of storage.
  • SAN was high-end, expensive block-level storage presented over a separate network infrastructure such as FC or iSCSI over Ethernet. SAN systems offer even more features aimed at enterprise markets, including sophisticated disk partitioning and access mechanisms designed to achieve incredibly high levels of concurrence and performance.

As time passed and most vendors figured out that providing support for both file-level and block-level protocols made their systems more attractive by allowing them to be reconfigured and repurposed by their customers, the distinction between NAS and SAN began to blur. DAS, however, was definitely dumb storage. Heck, if you wanted to share it with multiple systems, you had to have multiple physical connections! (Anyone other than me remember those lovely days of using SCSI DAS arrays for poor man’s clustering by connecting two SCSI hosts – one with a non-default host ID – to the same SCSI chain?)

At any rate, it was all good. For Exchange 2003 and early Exchange 2007 deployments, storage vendors were happy because if you had more than a few hundred users, you almost certainly needed a NAS/SAN solution to consolidate the number of spindles required to meet your IOPS targets.

The heck you say!

In the middle of the Exchange 2007 era, Microsoft upset the applecart. It turns out that with the ongoing trend of larger mailboxes, plus Exchange 2007 SP1, CCR, and SCR, many customers were able to do something pretty cool: decrease the mailbox/database density to the point where (with Exchange 2007’s reduced IOPS) their databases no longer required a sophisticated storage solution to provide the requisite IOPS. In general, disks for SAN/NAS units have to be of a higher quality and speed than disks for DAS arrays, so they typically offer better performance and lower capacity than consumer-grade drives.

This trend only got more noticeable and deliberate in Exchange 2010, when Microsoft unified CCR and SCR into the DAG and moved replication to the application layer (as we discussed in Part 1). Microsoft specifically designed Exchange 2010 to be deployable on a direct-attached RAID-less 2TB SATA 7200 RPM drive to hold a database and log files, so they could scale hosted Exchange deployments up in an affordable fashion. Suddenly, Exchange no longer needed SAN/NAS units for most deployments – as long as you had sufficiently large mailboxes throughout your databases to reduce the IOPS/database ratio below the required amount.
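
To put some very rough numbers on that (purely illustrative figures, not sizing guidance; use the Exchange storage calculator for real designs):

# Back-of-the-envelope mailbox density math with round, illustrative numbers
$diskIops = 75            # roughly what a single 7200 RPM SATA spindle can sustain
$iopsPerMailbox = 0.10    # a middling Exchange 2010 per-mailbox IOPS profile
$mailboxesPerDisk = [math]::Floor($diskIops / $iopsPerMailbox)    # ~750 mailboxes before IOPS runs out

$usableGB = 1600          # leave headroom on a 2 TB drive for logs, content index, and free space
[math]::Round($usableGB / $mailboxesPerDisk, 1)                   # ~2.1 GB per mailbox before capacity runs out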

Needless to say, storage vendors have taken this about as light-heartedly as a coronary.

How many of you have heard in the past couple of years the message that “SAN and DAS are the same thing, just different protocols”?

Taken literally, DAS and SAN are only differences in connectivity.

The previous quote is from EMC, but I’ve heard the same thing from NetApp and other SAN vendors. Ever notice how it’s only the SAN vendors who are saying this?

I call shenanigans.

If they were the same thing, storage vendors wouldn’t be spending so much money on whitepapers and marketing to try to convince Exchange admins (more accurately, their managers) that there was really no difference and that the TCO of a SAN just happens to be a better bet.

What SAN vendors now push are features like replication, thin provisioning, virtualization and DR integration, backup and recovery – not to mention the traditional benefits of storage consolidation and centralized management. Here’s the catch, though. From my own experience, their models only work IF and ONLY IF you continue to deploy Exchange 2010 the same way you deployed Exchange 2003 and Exchange 2007:

  • deploying small mailboxes that concentrate IOPS in the same mailbox database
  • grouping mailboxes based on criteria meant to maximize single instance storage (SIS)
  • planning Exchange deployments around existing SAN features and backup strategies
  • relying on third-party functionality for HA and DR
  • deploying Exchange 2010 DAGs as if they were a shared copy cluster

When it comes right down to it, both SAN and DAS deployments are technically (and financially) feasible solutions for Exchange deployments, as long as you know exactly what your requirements are and let your requirements drive your choice of technology. I’ve had too many customers who started with the technology and insisted that they had to use that specific solution. Inevitably, by designing around technological elements, you either have to compromise requirements or spend unnecessary energy, time, and money solving unexpected complications.

So if both technologies are viable solutions, what factors should you consider to help decide between DAS and SAN?

Storage Complexity

You’ve probably heard a lot of other Exchange architects and pros talk about complexity – especially if they’re also Certified Masters. There’s a reason for this – more complex systems, all else being equal, are more prone to system outages and support calls. So why do so many Exchange “pros” insist on putting complexity into the storage design for their Exchange systems when they don’t even know what that complexity is getting them? Yes, that’s right, Exchange has millennia of man-hours poured into optimizing and testing the storage system so that your critical data is safe under almost all conditions, and then you go and design storage systems that increase the odds the fsck-up fairy[1] will come dance with your data in the pale moonlight.

SANs add complexity. They add more system components and drivers, extra bits of configuration, and additional systems with their own operating system, firmware, and maintenance requirements. I’ll pick on NetApp for a moment because I’m most familiar with their systems, but the rest of the vendors have their own stories that hit most of the same high points:

  • I have to pick either iSCSI or FC and configure the appropriate HBA/NICs plus infrastructure, plus drivers and firmware. If I’m using FC I get expensive FC HBAs and switches to manage. If I go with iSCSI I get additional GB or 10GB Ethernet interfaces in my Exchange servers and the joy of managing yet another isolated set of network adapters and making sure Exchange doesn’t perform DAG replication over them.
  • I have to install the NetApp Storage Tools.
  • I have to install the appropriate MPIO driver.
  • I have to install the SnapDrive service, because if I don’t, the NetApp snapshot capability won’t interface with Windows VSS, and if I’m doing software VSS why the hell am I even using a SAN?
  • I *should* install SnapManager for Exchange (although I don’t have to) so that my hardware VSS backups happen and I can use it as an interface to the rest of the NetApp protection products and offerings.
  • I need to make sure my NetApp guy has the storage controllers installed and configured. Did I want redundancy on the NetApp controller? Upgrades get to be fun and I have to coordinate all of that to make sure they don’t cause system outage. I get to have lovely arguments with the NetApp storage guys about why they can’t just treat my LUNs the same way they treat the rest of them, yes I need my own aggregates and volumes and no please don’t give me the really expensive 15KRPM SAS drives that store a thimble because you’re going to make your storage guys pass out when they find out how many you need for all those LUNs and volumes (x2 because of your redundant DAG copies).[2]

Here’s the simple truth: SANs can be very reliable and stable. SANs can also be a single point of failure, because they are wicked expensive and SAN administrators and managers get put out with Exchange administrators who insist on daft restrictions like “give Exchange dedicated spindles” and “don’t put multiple copies of the same database on the same controller” and other party-pooping ways to make their imagined cost savings dwindle away to nothing. The SAN people have their own deployment best practices, just like Exchange people; those practices are designed to consolidate data for applications that don’t manage redundancy or availability on their own.

Every SAN I’ve ever worked with wants to treat all data the same way, so to make it reliable for Exchange you’re going to need to rock boats. This means more complexity (and money) and the SAN people don’t want complexity in their domain any more than you want it in yours. Unless you know exactly what benefits your solution will give you (and I’m not talking general marketing spew, I’m talking specific, realistic, quantified benefits), why in the world would you want to add complexity to your environment, especially if it’s going to start a rumble between the Exchange team and the SAN team that not even Jackie Chan and a hovercraft can fix?

Centralization and Silos

Over the past several years, IT pros and executives have heard a lot of talk about centralization. The argument for centralization is that instead of having “silos” or autonomous groups spread out, all doing the same types of things and repeating effort, you reorganize your operation so that all the storage stuff is handled by a single group, all the network stuff is handled by another group, and so on and so forth. This is another one of those principles and ideas that sounds great in theory, but can fall down in so many ways once you try to put it into practice.

The big flaw I’ve seen in most centralization efforts is that they end up creating artificial dependencies and decreasing overall service availability. Exchange already has a number of dependencies that you can’t do anything about, such as Active Directory, networking, and other external systems. It is not wise to create even more dependencies when the Exchange staff doesn’t have the authority to deal with the problems those dependencies create but is still on the hook for them, because the new SLAs look just like the old SLAs from the pro-silo regime.

Look, I understand that you need to realign your strategic initiatives to fully realize your operational synergies, but you can’t go do it half-assed, especially when you’re messing with business critical utility systems like corporate email. Deciding that you’re going to arbitrarily rearrange operations patterns without making sure those patterns match your actual business and operational requirements is not a recipe for long-term success.

Again, centralization is not automatically incompatible with Exchange. Doing it correctly, though, requires communication, coordination, and cross-training. It requires careful attention to business requirements, technical limitations, and operational procedures – and making sure all of these elements align. You can’t have a realistic 1-hour SLA for Exchange services when one of the potential causes for failure itself has a 4-hour SLA (and yes, I’ve seen this; holding Exchange metrics hostage to a virtualization group that has incompatible and competing priorities and SLAs makes nobody happy). If Exchange is critical to your organization, pulling the Exchange dependencies out of the central pool and back to where your Exchange team can directly operate on and fix them may be a better answer for your organization’s needs.

The centralization/silo debate is really just capitalism vs. socialism; strict capitalism makes nobody happy except hardcore libertarians, and strict socialism pulls the entire system down to the least common denominator[3]. The real answer is a blend and compromise of both principles, each where they make sense. In your organization, DAS and an Exchange silo just may better fit your business needs.

Management and Monitoring

In most Exchange deployments I’ve seen, this is the one area that is consistently neglected, so it doesn’t surprise me that it doesn’t weigh more heavily in the storage decision. Exchange 2010 does a lot to make sure the system stays up and operational, but it can’t manage everything. You need a good monitoring system in place, and you need automation or well-written, thorough processes for dealing with common warnings and low-level errors.

One of the advantages of a SAN is that (at least on a storage level) much of this will be taken care of for you. Every SAN system I’ve worked with not only has built-in monitoring of the state of the disks and the storage hardware, but also has extensive integration with external monitoring systems. It’s really nice when, at the same time you get notification that you’ve had a disk failure in the SAN, the SAN vendor has also been notified, so you know that within a day a spare will show up via FedEx (or even possibly be brought by a technician who will replace it for you). This kind of service is not normally associated with DAS arrays.

However, even the SAN’s luscious – nay, sybaritic – level of notification luxury only protects you against SAN-level failures. SAN monitoring doesn’t know anything about Exchange 2010 database copy status or DAG cluster issues or Windows networking or RPC latency or CAS arrays or load balancer errors. Whether you deploy Exchange 2010 on a SAN or DAS offering, you need to have a monitoring solution that provides this kind of end-to-end view of your system. Low-end applications that rely on system-agnostic IP pings and protocol endpoint probes are better than nothing, but they aren’t a substitute for an application-aware system such as Microsoft System Center Operations Manager (or some other equivalent) that understands all of the components in an Exchange DAG and queries them all for you.
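If you want a quick, scriptable look at the Exchange-level health that the SAN console can’t see, the native cmdlets get you a surprisingly long way. A minimal sketch, run from the Exchange Management Shell; MBX01 is a placeholder mailbox server name:

    # Hedged sketch: the Exchange-level view a SAN monitor never gives you.
    # MBX01 is a placeholder mailbox server name.
    Get-MailboxDatabaseCopyStatus -Server MBX01 |
        Format-Table Name,Status,CopyQueueLength,ReplayQueueLength,ContentIndexState

    # Run on a DAG member itself; checks cluster services, networks, quorum,
    # and replication health in one pass.
    Test-ReplicationHealth

Wire something like that into your monitoring platform and you at least close the gap between “the spindles are fine” and “the databases are fine.”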

You also need to think about your management software and processes. Many environments don’t like having changes made to centralized, critical dependency systems like a SAN without going through a well-defined (and relatively lengthy) change management process. In these environments, I have found it difficult to get emergency disk allocations pushed through in a timely fashion.

Why would we need emergency disk allocations in an Exchange 2010 system? Let me give you a few real examples:

  • Exchange-integrated applications[4] cause database-level corruption that drives server I/O and RPC latency up to levels that affect other users.
  • Disk-level firmware errors cause disk failures or drops in data transfer rates. Start doing wide-scale disk replacements on a SAN and you’re going to drive system utilization through the roof because of all the RAID group rebuilds going on. Be careful which disks you pull at one time, too – you don’t want to pull two or three disks out of the same RAID group and have the entire thing drop offline.
  • Somebody else’s application starts having disk problems. You have to move the upper management’s mailboxes to new databases on unaffected disks until the problems are identified and resolved.
  • A routine maintenance operation on one SAN controller goes awry, taking out half of the database copies. There’s a SAN controller with some spare capacity, but databases need to be temporarily consolidated so there is enough room for two copies of all the databases during the repair on the original controller.

Needless to say, with DAS arrays, you don’t have to tailor your purchasing, management, and operations of Exchange storage around other applications. Yes, DAS arrays have failures too, but managing them can be simpler when the Exchange team is responsible for operations end-to-end.
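For what it’s worth, once the storage actually shows up, the Exchange side of an emergency move like the third example in the list above is only a couple of commands. A minimal sketch; the database name, server, paths, and OU are all placeholders:

    # Hedged sketch: stand up a database on the newly allocated storage and swing
    # the affected mailboxes onto it. All names and paths are placeholders.
    New-MailboxDatabase -Name "DB-Emergency01" -Server MBX01 `
        -EdbFilePath "E:\EmergencyLUN\DB-Emergency01\DB-Emergency01.edb" `
        -LogFolderPath "F:\EmergencyLogs\DB-Emergency01"
    Mount-Database "DB-Emergency01"

    # Exchange 2010 move requests are online, so the VIPs stay connected while they move.
    Get-Mailbox -OrganizationalUnit "contoso.com/Executives" |
        New-MoveRequest -TargetDatabase "DB-Emergency01"

The commands are the easy part; getting the LUN through change management is the hard part.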

Backup, Replication, and Resilience

The big question for you is this: what protection and resilience strategy do you want to follow? A lot of organizations are just going on auto-pilot and using backups for Exchange 2010 because that’s how they’ve always done it. But do you really, actually need them?

No, seriously, you need to think about this.

Why do you keep backups for Exchange? If you don’t have a compelling technical reason, find the people who are responsible for the business reason and ask them what they really care about – is it having tapes or a specific technology, or is it the ability to recover information within a specific time window? If it’s the latter, then you need to take a hard look at the Exchange 2010 native data protection regime:

  • At least three database copies
  • Increased deleted item/deleted mailbox recovery limits
  • Recoverable items and hold policies
  • Personal archives and message retention
  • Lagged database copies

If this combination of functionality meets your needs, you need to take a serious look at a DAS solution. A SAN solution is going to be a lot more expensive for the storage to begin with, and it’s going to be even more expensive for more than two copies. None of my customers deployed more than two copies on a SAN, because not only did they have to budget for the increased per-disk cost, but they would also have had to deploy additional controllers and shelves to add the appropriate capacity and redundancy. Otherwise, they’d have had multiple copies on the same hardware, which really defeats the purpose. At that point, DAS becomes rather attractive when you start to tally up the true costs of the native data protection solution.
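As a point of comparison, once the DAS shelves are racked, adding that third (or lagged) copy is a one-liner per database. A minimal sketch with placeholder database and server names and a three-day lag window:

    # Hedged sketch: add a third, lagged copy of DB01 on a DAS-backed server.
    # Database name, server name, and lag window are placeholders.
    Add-MailboxDatabaseCopy -Identity "DB01" -MailboxServer MBX03 `
        -ReplayLagTime 3.00:00:00 -ActivationPreference 3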

So what do you do if the native data protection isn’t right for you and you need traditional backups? In my experience, one of the most compelling reasons for deploying Exchange on a SAN is the fantastic backup and recovery experience you get. In particular, NetApp’s snapshot-based architecture and SME backup application are at the top of my list. SME includes a specially licensed version of the Ontrack PowerControls utility to permit single mailbox recovery, all tied back into NetApp’s kick-ass snapshots. Plus, the backups happen more quickly because the VSS provider is the NetApp hardware, not a software driver in the NTFS file system stack, and you can run the ESE verification off of a separate SME server to offload CPU from the mailbox servers. Other SAN vendors offer roughly equivalent integrated backup options.

The only way you’re going to get close to that via DAS is if you deploy Data Protection Manager. And honestly, if you’re still relying on tape (or cloud) backups, I really recommend that you use something like DPM to stage everything to disk first, so that backups from your production servers land on a fast disk system. Get those VSS locks dealt with as quickly as possible and offload the ESE checks to the DPM system. Then, do your tape backups off of the DPM server and your backup windows are no longer coupled to your user-facing Exchange servers. That doesn’t even mention DPM’s 15-minute log synchronization and use of deltas to minimize storage space on its own storage pool. DPM has a lot going for it.

A lot of SANs do offer synchronous and asynchronous replication options, often at the block level. These sound like good options, especially to enhance site resiliency, and for other applications they often can be. Don’t get suckered into using them for Exchange, though, unless they are certified to work against Exchange (and if it’s asynchronous replication, it won’t be). A DAS solution doesn’t offer this functionality, but that’s no loss in this column; whether you’re on SAN or DAS, you should be replicating via Exchange. Replicating using the SAN’s block-level replication means that the replication is happening without Exchange being aware of it, which means that depending on when a failure happens, you could in the worst case end up with a corrupted database replica volume. Best case, your SAN-replicated database will not be in a consistent state, so you will have to run ESEUTIL to perform a consistency check and play log files forward before mounting that copy. If you’re going to do all that, why are you running Exchange 2010?

Now if you need a synchronous replication option, Exchange 2010 includes an API to allow a third-party provider to replace the native continuous replication capability. As far as I know, only one SAN vendor (EMC) has taken advantage of this option, so your options are pretty clear in this scenario.
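If memory serves, that third-party mode is a property you commit to when you create the DAG rather than something you flip on later, so treat the sketch below as an assumption to verify against the current documentation (and your SAN vendor’s deployment guide) before you build anything on it:

    # Assumption: third-party replication mode is selected at DAG creation time
    # and is not reversible afterwards -- verify before deploying.
    # DAG1 and HUB01 are placeholder names.
    New-DatabaseAvailabilityGroup -Name "DAG1" -WitnessServer HUB01 `
        -ThirdPartyReplicationMode Enabled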

Conclusion

We’ve covered a lot of ground in this post, so if you’re looking for a quick take-away, the answer is this:

Determine what your real requirements are, and pick your technology accordingly. Whenever possible, don’t make choices by technology or cost first without having a clear and detailed list of expected benefits in hand. You will typically find some requirement that makes your direction clear.

If anyone tells you that there’s a single right way to do it, they’re probably wrong. Having said that, though, what I’ve seen over the past couple of years is that the more people deviate from the Microsoft sweet spot, the more design compromises they end up making, often when they didn’t have to. Inertia and legacy have their place, but they need to be balanced with innovation and reinvention.

[1] Not a typo, I’m just showing off my Unix roots. The fsck utility (file system check) helps fix inconsistencies in the Unix file systems. Think chkdsk.

[2] Can you tell I’ve been in this rodeo once or twice? But I’m not bitter. And I do love NetApp because of SME, I just realize it’s not the right answer for everyone.

[3] Yes, I did in fact just go there. Blame it on the nearly two years of political crap we’ve suffered in the U.S. for this election season. November 6th can’t come soon enough.

[4] The application in this instance was an older version of Microsoft Dynamics CRM, very behind on its patches. There was a nasty calendar corruption bug that made my customer’s life hell for a while. The solution was to upgrade CRM to the right patch level, then move all of the affected mailboxes (about 40% of the users) to new databases. We didn’t need a lot of new databases, as we could move the mailboxes in a swing fashion, but we still needed to provision enough LUNs, databases, and copies to get the process done in a timely fashion. Each swing cycle took about two weeks because of change management, when we could have gotten it done much sooner.

Alaric’s Fundraising Progress

Just wanted to drop a quick note to you all to keep you updated on Alaric’s progress in raising funds for his 2013 Summer of Awesome. I’ve created a static page that you can go to and will keep it updated until our goal of $5,000 is met. That’s not to say that I won’t be reminding you all about it here and on Twitter and Facebook on a regular basis, but I wanted to condense all the major details down to one place.

Update: We’re around $1,365 or so, give or take some pending funds from current fundraising efforts and some pledges we’ve not yet received but are expecting. Thank you to everyone who has helped us out so far!

Can You Fix This PF Problem?

Today I got to chat with a colleague who was trying to troubleshoot a weird Exchange public folder replication problem. The environment, which is in the middle of an Exchange 2007 to Exchange 2010 migration, uses public folders heavily – many hundreds of top-level public folders with a lot of sub-folders. Many of these public folders are mail-enabled.

After creating public folder replicas on the Exchange 2010 public folder databases and ensuring that the public folders were starting to replicate, my colleague received notice that specific mail-enabled public folders weren’t getting incoming mail content. Lo and behold, the HT queues were full of thousands of public folder replication messages, all queued up.

After looking at the event logs and turning up the logging levels, my colleague noticed that they were seeing a lot of the 4.3.2 STOREDRV.Deliver; recipient thread limit exceeded error message mentioned in the Microsoft Exchange team blog post Store Driver Fault Isolation Improvements in Exchange 2010 SP1. Adding the RecipientThreadLimit key and setting it to a higher level helped temporarily, but soon the queues would begin backing up again.

At that point, my colleague called me for some suggestions. We talked over a bunch of things to check and troubleshooting trees to follow depending on what he found. Earlier tonight, I got an email confirming the root cause was identified. I was not surprised to find out that the cause turned out to be something relatively basic. Instead of just telling you what it was though, I want you to tell me which of the following options YOU think it is. I’ll follow up with the answer on Monday, 10/15.

Which of the following options is the root cause of the public folder replication issues?

Alaric’s Summer of Awesome

Some of you might get some cognitive whiplash from the following post, given my recent vocal stance on Intel’s corporate fundraising for Boy Scouts of America. If your own views on Scouting are such that you are not able to entertain helping out or sponsoring a Scout, we understand — this post isn’t for you.

Many of you know that my son Alaric has been involved in Scouting for many years. Despite my own issues with the Scouting organization’s policies[1], we’ve seen a lot of benefits from Alaric’s involvement. There are some really great boys and adults we’ve met through Scouting and my boy has learned and grown a lot. He’s currently a Star Scout and an Ordeal member of the Order of the Arrow, and has been serving as a patrol leader for a year. Alaric is well on his way to Life Scout by the end of the year and has given himself a goal of becoming an Eagle Scout by the end of summer 2013.

Alaric receiving four merit badges

Next summer, Alaric has the opportunity to have the kind of summer adventure that every Boy Scout can only dream of:

  • It’s time for the National Scout Jamboree. This event typically takes place once every 4 years. This year is particularly cool because it will be the first jamboree held on the new Summit grounds in West Virginia’s Bechtel Reserve wilderness area, Scouting’s new permanent jamboree home and high adventure base.
  • Alaric’s troop will be heading to Philmont Scout Ranch in New Mexico, Scouting’s oldest and most famous high adventure backpacking camp. It can take years for troops to get a slot to come to Philmont for a mountain adventure.

Both of these are once-in-a-lifetime events for most Boy Scouts. The fact that Alaric has the chance to go to both is amazing and requires an immense amount of commitment and dedication from him (and us).

Unfortunately, my getting laid off in July threw a huge wrench into the fund-raising portion of this adventure. The total cost to participate in both events, including airline tickets and required gear upgrades, is going to be around $5,000 for our family. If I had a steady job, this wouldn’t have been a problem — we’d have covered half and Alaric could have motored through fundraisers with his troop to get the rest covered. He’s already raised over $600 just through mowing lawns, odd jobs, and even a garage sale.

Even if you don’t want to donate to Scouting — are you willing to invest in my son? The typical Scouting fundraiser is through the sales of Trail’s End popcorn products. Trail’s End is an amazing outfit that makes online sales very easy, they produce fantastic popcorn, and they offer the choice for making donations to help send popcorn to active-duty military units.

Alaric’s popcorn pitch letter can be seen below:

Dear Dad’s Reader,

Did you know you can help send me to the National Jamboree? Just click here and place an order on my behalf. There are all kinds of products to choose from, and every product has better flavor and is better for you.

Plus, you won’t just be helping me go to Jamboree. 70% of your purchase will benefit Scouting in my area and help more kids experience all the things that make Scouting great. It’s a situation where everyone wins.

Thanks for your support,

Alaric

P.S. If you cannot click on the link above, copy and paste this full URL:
http://www.trails-end.com/shop/scouts/email_referral.jsp?id=3440240

If you would rather donate to Alaric directly, contact me using the form below.

If you’re still with me this far, thanks for reading and for your support.

[1] I have two main issues. The first is that they discriminate against gay boys, girls (several programs for older youth are co-ed), and leaders. The second is that their religious requirements discriminate against boys who are atheists or agnostic yet are willing to investigate a religion in the spirit of understanding and tolerance. Look at Girl Scouts to see how these issues can be dealt with sensibly.

Forced Obsolescence

ZDNet’s David Meyer noted earlier today that Google is about to shut down support for exporting the legacy Microsoft Office file formats (.doc, .xls, and .ppt) from Google Apps as of October 1, 2012. The Google blog notes that Google Apps users will still be able to import data from those formats. However, if they want Office compatibility, they need to export to the Office 2007 formats (.docx, .xlsx, and .pptx).

When Office 2007 was still in beta back in 2006, Microsoft released optional patches to Office 2003 to allow it to open and save the new file formats. Over time, these patches got included in Windows Update, so if you still have Office 2003 but have been updating, you probably have this capability today. Office 2003 can’t open these newer documents with 100% fidelity, but it’s good enough to get the job done. And if you’re on earlier versions of Office for Word, Microsoft hasn’t forgotten you; Office 2000 and Office XP (2002) users can also download the Compatibility Pack.

What boggles me are some of the comments on the ZDNet article. I can’t understand why anyone would think this was a bad idea:

  • The legacy formats are bloated and ill-defined. As a result, files saved in those formats are more prone to corruption over the document lifecycle, not to mention when moving through various import/export filters. Heck, just opening them in different versions of Word can be enough to break the files.
  • The legacy formats are larger — much larger — than the new formats. Between the use of standard ZIP compression (the new-format documents are actually archive files containing a whole folder/file structure inside) and the smart use of XML rather than proprietary binary data, the new formats can pack a lot more data into the same space. Included picture files, for example, can be stored in compressible formats rather than as space-hogging uncompressed bitmaps (see the sketch after this list).
  • The new formats are safer. Macro information is stored safely away from the actual data in the file, and Office (at least) can block the loading and saving of macro content, which is confined to the separate macro-enabled variants of these formats.
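If you’ve never looked inside one of the new-format files, it’s worth doing once just to see the second bullet in action. A minimal PowerShell sketch; report.docx is a placeholder file name, and the ZipFile class assumes .NET 4.5 or later is installed:

    # Hedged sketch: a .docx really is just a ZIP archive full of XML parts.
    # You can also simply copy the file to a .zip extension and open it in Explorer.
    Add-Type -AssemblyName System.IO.Compression.FileSystem
    [System.IO.Compression.ZipFile]::OpenRead("$PWD\report.docx").Entries |
        Select-Object FullName, Length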

For many companies it would simply be cost-prohibitive to convert legacy files into the new formats…but it might not be a bad idea for critical files. Nowadays, I personally try to make sure I’m only writing new format Office files unless the people I am working with specifically ask for one of the legacy formats. I’m glad to see that Google is doing the right thing in helping make these legacy formats nothing more than a historical footnote — and I’d love to see Microsoft remove write support for them in Office 2013.

MEC Day 3

Unfortunately, this is the day that Murphy caught up with me, in the form of a migraine. When these hit, the only thing I can do is try to sleep it off.

I ended up not hitting the conference center until a bit after noon, just in time to brave lunch. What would a Microsoft conference be without the dreaded salmon meal? At that point, my stomach rebelled and my head agreed, so I wandered back to the MVP area and chatted until it was time to head upstairs to my room for my last session at 1pm.

Big thanks to everyone who showed up for the session. I took some of the feedback from Day 2, and combined with my increased mellowness from the migraine, I made some changes to the structure of the session and clarifications to the message I wanted the attendees to walk away with. We had what I thought was a brilliant session. Apparently, I do my best work while in pain.

After that, it was down to the expo floor for a quick round of good-byes, then off to catch my shuttle to Orlando International Airport. I was able to get checked in with more than enough time for a leisurely meal, then on to gate 10 where I met up with various other MEC attendees on their way back home to Seattle.

WHAT AN AMAZING CONFERENCE. I had SO much fun, even with missing essentially all of Day 3 and the wonderful sessions that I’d planned to sit in on. My apologies for the missed Twitter stream that day.

We’ll have to do this again next year. I hope you’ll be there!

And @marypcbuk Nails IT

Amid all the bustle of MEC, I’ve not taken a bunch of time to read my normal email, blogs, etc. However, this article from ZDNet caught my eye:

Windows 8: Why IT admins don’t know best by Mary Branscombe

The gist of it is that IT departments spend a lot of time and effort trying to stop users from doing things with technology when they would often be better served enabling users. Users these days are not shy about embracing new technology, and Mary argues that users find creative ways around IT admins who are impediments:

The reality is that users are pushing technology in the workplace — and out of it. The Olympics has done more to advance flexible and remote working than a decade of IT pilot projects.

What got her going is the tale of an IT admin who found a way to disable, via Group Policy, the short tutorial that users are given on navigating Windows 8 the first time they log on.

I see this behavior all the time from admins and users – admins say “No” and users say “Bet me.” Users usually win this fight, too, because they are finding ways to get their work done. A good admin doesn’t say “No” – they say, “Let me help you find the best way to get that done.”

Mary finishes with this timely reminder:

See something new in Windows 8? If your first impulse is to look for a way to turn it off, be aware that you’re training your users to work around you.

What a refreshing dose of common sense.

MEC Day 2

Today was another fun-filled and informative day at MEC:

  • The day started off with a keynote by Microsoft Distinguished Engineer Perry Clarke, head of Exchange Software Development. Perry does a blog called Ask Perry which regularly includes a video feature, Geek Out With Perry. The keynote was done in this format. The latter half was quite good, but the first half was a little slow and (I thought) lightweight for a deeply technical conference such as MEC. However, that could just have been a gradual wake-up for the people still recovering from last night’s attendee party at Universal’s Islands of Adventure theme park.
  • After a short break, we were off to the interactive sessions! I got caught up in a conversation and made it to my first session a few minutes late – and wasn’t able to enter, as the room was at capacity. So, I missed Jeff Mealiffe’s session on virtualizing Exchange 2013, much to my annoyance. Instead I headed down to the exhibit floor and hung out in the MVP area, talking with a bunch of folks (including one of my homies from MCM R1).
  • At lunch I caught up with some old friends – one of the best reasons for coming.
  • After lunch, I squeezed (and by squeezed I am being literal; we were crammed into the room like sardines) into Bill Thompson’s session on the Exchange 2013 transport architecture. WOW. Some bold changes made, but I think they’re going to be good changes.
  • At 3:00, my time at the front of the room had come, and I gave the first run of my Exchange 2010 virtualization lessons-learned session. Mostly full room, and there were some good questions. I received some interesting feedback later, so I will be wrapping that into tomorrow’s repeat presentation.
  • My last session of the day was Greg Taylor’s session on Exchange 2013 load balancing. Again, lots of good surprises and changes, and as always watching Greg in action was entertaining and informative. This is, after all, the man who talks about Exchange client access using elephant’s asses.
  • Afterwards, I caught up with former co-workers and enjoyed a couple of beers at MAPI Hour in the lovely central atrium of the Gaylord Palms Hotel, then went out to dinner (fantastic burger at the Wrecker’s Sports Bar). Capped the night off with a sundae.

Two down, one more to go. What a fantastic time I’ve been having!

MEC Day 1

After 10 years of absence, the Microsoft Exchange Conference is back. Yes, that’s right, the last time MEC happened was in 2002. How do I know this? I’ve seen a couple of people today who still had their MEC 2002 badges. HOLY CRAP, DUDES. I’m a serious packrat and not even *I* keep my old conference badges.

I decided to live tweet my sessions. I did a good job too – my Twitter statistics are telling me that I’ve sent 258 tweets! If any of my Facebook friends are still bothering to read my automatic Twitter-to-Facebook updates… shit, sorry. Two more days to go, and you know I can’t be nearly as prolific today or Wednesday because I’m presenting a session each day:

  • E14.310 Three Years In: Looking Back At Virtualizing Exchange 2010
    Tuesday, September 25 2012 3:00 PM – 4:15 PM in Tallahassee 1
  • E14.310R1 Three Years In: Looking Back At Virtualizing Exchange 2010
    Wednesday, September 26 2012 1:00 PM – 2:15 PM in Tallahassee 1

Monday was the “all Microsoft, all Exchange 2013” day with typical keynotes and breakouts. Today, we start the “un-conference” – smaller, more interactive sessions, led by members of the community like myself. Today and tomorrow will be a lot more peer-to-peer…which will be fun.

See you out there! Drop me a note or track me down to let me know if you read my blog or have a question you’d like me to answer!

The BSA Funding Hornet’s Nest

Earlier today I posted a Scouting-related tweet that drew a strong reaction from several people. Here’s the tweet:

Intel Corporation: Pull your financial support until the Boy Scouts pull their anti-gay policy http://www.change.org/petitions/intel-corporation-pull-your-financial-support-until-the-boy-scouts-pull-their-anti-gay-policy … via @change

I was asked if I thought that it was better for Scouting to lose funds. I was asked how doing this would help the boys in Scouting. I was told that it was abusive and manipulative to use funding to try to effect change in Scouting’s policies over what is a relatively minor matter.

I am a former Boy Scout, my son is a Boy Scout, I have just been registered as an adult Scouter, and my daughter is looking at joining a Venture crew sometime in the next year. I think that Scouting is a fantastic youth program. So how can I support Scouts while calling for Intel to defund them?

I have two main reasons to support the petition to Intel.

Reason 1: Choices have consequences

The value of Scouting isn’t just the outdoor skills and learning how to handle yourself in the wilderness; it’s in the character formation that goes along with the outdoor program. Scouting teaches principles and duty. Scouting youth often drop out when they hit a certain age because of the peer pressure they’re getting by being different, by standing up for their beliefs and values. The kids who stay in Scouting learn that making a stand comes with consequences. It is precisely this kind of character formation that many former Scouts go on to say is the most valuable lesson they learn from Scouting.

The national Scouting organization has now said multiple times that they see having gay Scouts and Scouters as somehow being incompatible with Scouting ideals. Intel and the other companies identified in this article by Andy Birkey on The American Independent (linked to from the petition, BTW) have made their policies on charitable donations crystal clear. These policies are not new. These companies need to make sure their house is in order by verifying that their giving is in line with their policies (as the ones in orange have done). However, Scouting has a responsibility here too. By continuing to accept money from organizations such as Intel in violation of their stated donation guidelines, I believe that Scouting is sending the message that money is more important than principles. I’ve heard a lot of justification for accepting the money, but when it comes right down to it, taking donations from these companies when you don’t comply with their guidelines is hypocrisy, plain and simple. I think Scouting is better than that.

Whether I agree with the national organization’s stance on gay Scouts/Scouters or not, I think the unwritten message is doing more harm in the long run than the immediate defunding would do. I’m confident that should Scouting actually have the courage to turn down this money, alternate funding sources would quickly emerge in today’s polarized climate. Look at the Chick-fil-A protests and responses if you doubt me. So no, I’m not worried that there would be long-term financial damage to Scouting.

It’s not like this is a theoretical situation for my family. Our local troop enjoys a high level of funding thanks to Microsoft matching contributions to the men and women who volunteer as our Scouters and committee members, many of whom are full-time Microsoft employees. I suspect that Microsoft’s policies are actually the same as Intel’s, based on their publicly stated policies for software donations to charities. If Microsoft were to stop funding Scouting (or Scouting were to stop taking Microsoft dollars because of this policy) our troop would be directly and severely affected.

I personally know at least two gay Scouters, and I suspect I know more. Scouting would somehow find the money to replace the lost donations. I don’t know how they’d replace the people I’m thinking of.

I’ve talked this over with my son on multiple occasions. When we discussed this particular petition and the fact that I was going to publicly support it, we talked about the implications. I asked him if he had any concerns. His response: “Do it, Dad. Scouting needs a kick in the ass.” (Yes, he’s my kid.)

And if you think I’m somehow being abusive or manipulative for supporting the use of defunding as a tool for policy change, go back to that Birkey article:

In a brief filed in the landmark case of Boy Scouts of America v. Dale, a lawyer for the LDS Church warned that the church would leave the scouts if gays were allowed to be scout leaders.

“If the appointment of scout leaders cannot be limited to those who live and affirm the sexual standards of BSA and its religious sponsors, the Scouting Movement as now constituted will cease to exist,” wrote Von G. Keetch on behalf of the LDS Church and several other religious organizations in 2000. “The Church of Jesus Christ of Latter-day Saints — the largest single sponsor of Scouting units in the United States — would withdraw from Scouting if it were compelled to accept openly homosexual Scout leaders.”

According to the Chartered Organizations and the Boy Scouts of America fact sheet, as of December 31, 2011 there are over 100,000 chartered Scouting units, with nearly 7/10 of them chartered by religious organizations. In the tables in that fact sheet, we see data on the top 25 religious charterers, top 20 civic charterers, and the educational charterers – giving us data on 55,100 units (just over half) and 1,031,240 youth. According to this data, the LDS Church sponsors almost 35% of the Scouting units in the BSA. Yet, according to this same data, they have only 16% of the actual youth in Scouting. The youth-to-unit average for the LDS Church is a mere 11.1, which is the lowest of any organization (or group of organizations) listed in the fact sheet data.

Several of the organizations on that list, including the next largest religious sponsor (the United Methodist Church – 11,078 units, 371,491 youth, 33.5 youth per unit, 10% of the total units, and 14% of the total youth) would support and welcome gay Scouts and Scouters. The LDS Church gets to be vocal about it because of that 1/3 number of units – that translates into money for Scouting. This kind of ultimatum is in fact what manipulative behavior (using the threat of defunding) looks like.

Reason 2: People who see a problem need to be part of the solution

I’m continuing to get more involved with Scouting for one simple reason: I believe that if I see something I think is wrong, I need to be part of the solution. I don’t think it’s right that Scouting be in a position where it can have its cake and eat it too. However, I’m not going to throw the baby out with the bathwater; I see the incredible value the Scouting program gives to young men (and the young women who participate in the Venturer program).

My own religious beliefs and principles move me to be more involved precisely because I think Scouting needs more Scouts and Scouters who are open about their support for changing these policies. I know people who gave up on Scouting; I refuse to be one of them.

I want Scouting to change its policies, but I’m willing to keep being a part of it during those changes. I’m not trying to take my bat and ball and go home if the game doesn’t go my way. I want Scouts to continue producing young people of character for future generations.

Want to see the data I’m looking at? I got the fact sheet from the link stated above, brought the data into Excel, and added formulas for unit/youth ratios and percentages. I’ve put this spreadsheet online publicly via SkyDrive.

TMG? Yeah, you knew me!

Microsoft today officially announced a piece of news that came as very little surprise to anyone who has been paying attention for the last year. On May 25th of 2011, Gartner reported an unsubstantiated claim that Microsoft had told them there would be no future release of Forefront Threat Management Gateway (TMG).

Microsoft finally confirmed that information. Although the TMG product will receive mainstream support until April 14, 2015 (a little bit more than 2.5 years from time of writing), it will no longer be available for sale come December 1, 2012.

Why do Exchange people care? Because TMG was the simple, no-brainer solution for environments that needed a reverse proxy in a DMZ network. Many organizations can’t allow incoming connections from the Internet to cross into an interior network. TMG provided protocol-level inspection and NAT out of the box, and could be easily configured for service-aware CAS load balancing and pre-authentication. As I said, no-brainer.

TMG had its limitations, though. No IPv6 support, poor NAT support, and an impressively stupid inability to proxy any non-HTTP protocol in a one-armed configuration. The “clustered” enterprise configuration was sometimes a pain in the ass to troubleshoot and work with when the central configuration database broke (and it seemed more fragile than it should be).

The big surprise for me is that TMG shares the chopping block with the on-server Forefront protection products for Exchange, SharePoint, and Lync/OCS. I personally have had more trouble than I care for with the Exchange product — it (as you might expect) eats up CPU like nobody’s business, which made care and feeding of Exchange servers harder than it needed to be. Still, to only offer online service — that’s a telling move.

Duke of URL

Just a quick note to let you know about a change or two I’ve made around the site.

  • Changed the primary URL of the site from www.thecabal.org to www.devinonearth.com. This is actually something I’ve been wanting to do for a long time, to reflect the site’s really awesome branding. Devin on Earth has long been its own entity that has no real connection to my original web site.
  • Added a secondary URL of www.devinganger.com to the site. This is a nod toward the future as I get fiction projects finished and published – author domains are a good thing to have, and I’m lucky mine is unique. Both www.devinganger.com and www.thecabal.org will keep working, so no links will ever go stale.

As a final aside, this is the 600th post on the site. W00t!

My Five Favorite Features of Exchange Server 2013 Preview

Exchange Server 2013 Preview was released a few weeks ago to give us a first look at what the future holds in store for Exchange. I’ve had a couple of weeks to dig into it in depth, so here are my quick impressions of the five changes I like the most about Exchange 2013.

  1. Client rendering is moved from the Client Access role to the Mailbox role. (TechNet) Yes, this means some interesting architectural changes to SMTP, HTTP, and RPC, but I think it will help spread load out to where it should be – the servers that host active users’ mailboxes.
  2. The Client Access role is now a stateless proxy. (TechNet) This means we no longer need an expensive L7 load balancer with all sorts of fancy complicated session cookies in our HTTP/HTTPS sessions. It means a simple L4 load balancer is enough to scale the load for thousands of users based solely on source IP and port. No SSL offload required!
  3. The routing logic now recognizes DAG boundaries. (TechNet) This is pretty boss – members of a DAG that are spread across multiple sites will still act as if they were local when routing messages to each other. It’s almost like the concept of routing groups has come back in a very limited way.
  4. No more MAPI-RPC over TCP. (TechNet) Seriously. Outlook Anywhere (aka RPC over HTTPS) is where it’s at. As a result, Autodiscover for clients is mandatory, not just a really damn good idea. Firewall discussions just got MUCH easier. Believe it or not, this simplifies namespace and certificate planning…
  5. Public folders are now mailbox content. (TechNet) Instead of having a completely separate back-end mechanism for public folders, they’re now put in special mailboxes. Yes, this means they are no longer multi-master…but honestly, that causes more angst than it solves in most environments. And now SharePoint and other third-party apps can get to public folder content more easily…

There are a few things I’m not as wild about, but this is a preview and there’s no point kvetching about a moving target. We’ll see how things shake down.

I’m looking forward to getting a deeper dive at MEC in a couple of weeks, where I’ll be presenting a session on lessons learned in virtualizing Exchange 2010. Are you planning on attending?

Have you had a chance to play with Exchange 2013 yet, or at least read the preview documentation? What features are your favorite? What changes have you wondering about the implications? Send me an email or comment and I’ll see if I can’t answer you in a future blog post!

Can’t make a bootable USB stick for Windows 8? Join the club!

I was trying to make a bootable USB stick for Windows 8 this morning, using the Windows 7 USB/DVD Download Tool from Microsoft and the process outlined in this Redmond Pie article (the same basic steps can be found in a number of places). Even though the tool originated for Windows 7 and the steps I linked to are for the Windows 8 Consumer Preview, it all still works fine with Windows 8 RTM.

The steps are pretty simple:

  1. Download and install the tool.
  2. Download the ISO image of the version of Windows you want to install (Windows 7 and 8 for sure, I believe it works with Windows Server 2008 R2 and Windows Server 2012 RC as well).
  3. Plug in a USB stick (8GB or larger recommended) that is either blank or has no data on it you want to keep (it will be reformatted as part of the process).
  4. Run the tool and pick the ISO image.
  5. Select the USB drive (note that this tool can also burn the ISO to DVD).
  6. Wait for the tool to reformat the USB stick, copy the ISO contents to the stick, and make it bootable.

Everything was going fine for me until I got to step 6. The tool would format the USB stick, and then it would immediately fail before beginning the file copy:

DownloadToolError

Redmond, we have a problem…

At first I was wondering if it was related to UAC (it wasn’t) or a bad ISO image (it wasn’t). So I plugged the appropriate search terms into Bing and away we went, where I finally found this thread on the TechNet forums, which led me to this comment in the thread (wasn’t even marked as the solution, although it sure should have been):

We ran across this same "Error during backup., Usb; Unable to set active partition. Return code 87" with DataStick Pro 16 GB USB sticks. The Windows 7 DVD/USB Download Tool would format and then fail as soon as the copy started.

We ended up finding that the USB stick has a partition that starts at position 0 according to DiskPart. We used DiskPart to select the disk that was the USB, then ran Clean, then created the partition again. This time it was at position 1024. The USB stick was removed then reinserted and Windows prompted to format the USB stick, answer Yes.

The Windows 7 DVD/USB Download Tool was now able to copy files.

So, here’s the process I followed:

DiskPartUSBFix

Follow my simple step-by-step instructions. I make hacking FUN!

To do it yourself, launch a command window (either legacy CMD or PowerShell, doesn’t matter) with Administrator privileges and type diskpart to fire up the tool:

  1. LIST DISK gives a listing of all the drives attached to the system. At this point, no disk is selected.
  2. I have a lot of disks here, in part because my system includes an always-active 5-in-1 card reader (disks 1 through 5 that say no media). I also have an external USB hard drive (230GB? How cute!) at disk 6. Disk 7, however — that’s the USB stick. Note that the "free" column is *not* showing free space on the drive in terms of file system — it’s showing free space that isn’t allocated to a partition/volume.
  3. Diskpart, like a lot of Microsoft command-line tools, often requires you to select a specific item for focus, at which point subsequent commands run against the currently focused object. Use SELECT DISK to set the focus on your USB stick.
  4. Now that the USB stick has focus, the LIST PART command will run against the selected disk and show us the partitions on that disk.
  5. Uh-oh. This is a problem. With a zero-byte offset on that partition (USB sticks typically have only a single partition), there’s not enough room for the partition to be marked bootable and for the boot loader to be put on the disk. The volume starts at the first available byte. Windows needs a little bit of room — typically only one megabyte — for the initial boot loader code (which then jumps into the boot code in the bootable disk partition).
  6. So, let’s use CLEAN to nuke the partitions and restore this USB stick to a fully blank state.
  7. Using LIST PART again (still focused on the disk object) confirms that we’ve removed the offending partition. You can create a new partition in diskpart, but I happened to have the Disk Management MMC console open already as part of my troubleshooting, so that’s what I used to create the new partition.
  8. Another LIST PART to confirm that everything is the way it should be…
  9. Yup! Notice we have that 1 MB offset in place now. There’s now enough room at the start of the USB stick for the boot loader code to be placed.
  10. Use EXIT to close up diskpart.
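If you’d rather not babysit the interactive prompts, the same fix can be scripted with diskpart’s /s switch. A minimal sketch from an elevated PowerShell window; disk 7 is my stick, so check your own LIST DISK output and change the number first, because CLEAN will cheerfully wipe whatever disk you point it at:

    # Hedged sketch: scripted version of the steps above. Adjust "select disk 7"!
    "list disk", "select disk 7", "list partition", "clean" |
        Set-Content "$env:TEMP\fix-usb.txt"
    diskpart /s "$env:TEMP\fix-usb.txt"
    # Then recreate the partition (Disk Management, or diskpart's CREATE PARTITION
    # PRIMARY) and let Windows format the stick, just as in the steps above.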

This time, when I followed the steps with the Download Tool, the bootable USB stick was created without further ado. Off to install Windows 8!

Back on the market

I sent out a brief tweet about this Friday and have received a number of queries, so I figured I should expand on this publicly.

No, I am no longer with Trace3. No, this was not my decision — I was happy with my position and work and was excited about what was happening there. At the same time, this was not a complete shock. I’m not at liberty to go into it (and even if I were, I don’t think I would anyway) but all living organisms (including vibrant corporations) change, and while many of those changes are good for the organism as a whole, they aren’t always so great for individual cells.

I have no hard feelings. I had a fantastic time at Trace3 and have learned a lot. I wish everyone there all the success in the world and am reasonably confident they’ll grab it. At the same time, there were some aspects of my fit at Trace3 that could have been improved on. Always being remote with no local co-workers, for one — that was a definite downer.

I’m feeling confident in my ability to find my next job. I have some exciting opportunities under way. In the meantime, though, if you have a lead or opportunity, let me know — and yes, that does include the potential for 1099 independent consulting work.

Beating Verisign certificate woes in Exchange

I’ve seen this problem in several customers over the last two years, and now I’m seeing signs of it in other places. I want to document what I found so that you can avoid the pain we had to go through.

The Problem: Verisign certificates cause Exchange publishing problems

So here’s the scenario: you’re deploying Exchange 2010 (or some other version, this is not a version-dependent issue with Exchange) and you’re using a Verisign certificate to publish your client access servers. You may be using a load balancer with SSL offload or pass-through, a reverse proxy like TMG 2010, some combination of the above, or you may even be publishing your CAS roles directly. However you publish Exchange, though, you’re running into a multitude of problems:

  • You can’t completely pass ExRCA’s validation checks. You get an error something like:  The certificate is not trusted on any version of Windows Phone device. Root = CN=VeriSign Class 3 Public Primary Certification Authority – G5, OU=”(c) 2006 VeriSign, Inc. – For authorized use only”, OU=VeriSign Trust Network, O=”VeriSign, Inc.”, C=US
  • You have random certificate validation errors across a multitude of clients, typically mobile clients such as smartphones and tablets. However, some desktop clients and browsers may show issues as well.
  • When you view the validation chain for your site certificate on multiple devices, they are not consistent.

These can be very hard problems to diagnose and fix; the first time I ran across it, I had to get additional high-level Trace3 engineers on the call along with the customer and a Microsoft support representative to help figure out what the problem was and how to fix it.

The Diagnosis: Cross-chained certificates with an invalid root

So what’s causing this difficult problem? It’s your basic case of a cross-chained certificate with an invalid root certificate. “Oh, sure,” I hear you saying now. “That clears it right up then.” The cause sounds esoteric, but it’s actually not hard to understand when you remember how certificates work: through a chain of validation. Your Exchange server certificate is just one link in an entire chain. Each link is represented by an X.509v3 digital certificate that is the footprint of the underlying server it represents.

At the base of this chain (aka the root) is the root certificate authority (CA) server. This digital certificate is unique because it’s self-signed – no other CA server has signed this server’s certificate. Now, you can use a root CA server to issue certificates to customers, but that’s actually a bad idea for a lot of reasons. So instead, you have one or more intermediate CA servers added into the chain, and if you have multiple layers, the outermost layer consists of the CA servers that process customer requests. So a typical commercially generated certificate has a validation chain of 3-4 layers: the root CA, one or two intermediate CAs, and your server certificate.

Remember how I said there were reasons to not use root CAs to generate customer certificates? You can probably read up on the security rationales behind this design, but some of the practical reasons include:

  • The ability to offer different classes of service, signed by separate root servers. Instead of having to maintain separate farms of intermediate servers, you can have one pool of intermediate servers that issue certificates for different tiers of service.
  • The ability to retire root and intermediate CA servers without invalidating all of the certificates issued through that root chain, if the intermediate CA servers cross-chain from multiple roots. That is, the first layer intermediate CA servers’ certificates are signed by multiple root CA servers, and the second layer intermediate CA servers’ certificates are signed by multiple intermediate CA servers from the first layer.

So, cross-chaining is a valid practice that helps provide redundancy for certificate authorities and helps protect your investment in certificates. Imagine what a pain it would be if one of your intermediate CAs got revoked and nuked all of your certificates. I’m not terribly fond of having to redeploy certificates for my whole infrastructure without warning.

However, sometimes cross-chained certificates can cause problems, especially when they interact with another feature of the X.509v3 specification: the Authority Information Access (AIA) certificate extension. Imagine a situation where a client (such as a web browser trying to connect to OWA), presented with an X.509v3 certificate for an Exchange server, cannot validate the certificate chain because it doesn’t have the upstream intermediate CA certificate.

If the Exchange server certificate has the AIA extension, the client has the information it needs to try to retrieve the missing intermediate CA certificate – either retrieving it from the HTTPS server, or by contacting a URI from the CA to download it directly. This only works for intermediate CA certificates; you can’t retrieve the root CA certificate this way. So, if you are missing the entire certificate chain, AIA won’t allow you to validate it, but as long as you have the signing root CA certificate, you can fill in any missing intermediate CA certificates this way.

Here’s the catch: some client devices can only request missing certificates from the HTTPS server. This doesn’t sound so bad…but what happens if the server’s certificate is cross-chained, and the certificate chain on the server goes to a root certificate that the device doesn’t have…even if it does have another valid root to another possible chain? What happens is certificate validation failure, on a certificate that tested as validated when you installed it on the Exchange server.

I want to note here that I’ve only personally seen this problem with Verisign certificates, but it’s a potential problem for any certificate authority.
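If you want to see which chain Windows is actually building before you start disabling things, a few lines of PowerShell against the local certificate store will show you. A minimal sketch; mail.contoso.com is a placeholder for your published name:

    # Hedged sketch: build and dump the chain Windows selects for your Exchange cert.
    # The subject filter is a placeholder -- match on your own published name.
    $cert = Get-ChildItem Cert:\LocalMachine\My |
        Where-Object { $_.Subject -like "*mail.contoso.com*" } |
        Select-Object -First 1
    $chain = New-Object System.Security.Cryptography.X509Certificates.X509Chain
    [void]$chain.Build($cert)
    $chain.ChainElements | ForEach-Object { $_.Certificate.Subject }

If the root at the bottom of that list isn’t the one you expected, you’ve found the chain your clients are choking on.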

The Fix: Disable the invalid root

We know the problem and we know why it happens. Now it’s time to fix it by disabling the invalid root.

Step #1 is find the root. Fire up the Certificates MMC snap-in, find your Exchange server certificate, and view the certificate chain properties. This is what the incorrect chain has looked like on the servers I’ve seen it on:

image

The invalid root CA server circled in red

That’s a not very helpful friendly name on that certificate, so let’s take a look at the detailed properties:

image

Meet “VeriSign Class 3 Public Primary Certification Authority – G5”

Step #2 is also performed in the Certificates MMC snap-in. Navigate to the Third-Party Root Certification Authorities node and find your certificate. Match the attributes above to the certificate below:

image

Root CA certificate hide and seek

Right-click the certificate and select Properties (don’t just open the certificate) to get the following dialog, where you will want to select the option to disable the certificate for all purposes:

image

C’mon…you know you want to

Go back to the server certificate and view the validation chain again. This time, you should see the sweet, sweet sign of victory (if not, close down the MMC and open it up again):

image

Working on the chain gang

It’s a relatively easy process…so where do you need to do it? Great question!

The process I outlined obviously is for Windows servers, so you would think that you can fix this just on the Exchange CAS roles in your Internet-facing sites. However, you may have additional work to do depending on how you’re publishing Exchange:

  • If you’re using a hardware load balancer with the SSL certificate loaded, you may not have the ability to disable the invalid root CA certificate on the load balancer. You may simply need to remove the invalid chain, re-export the correct chain from your Exchange server, and reinstall the valid root and intermediate CA certificates.
  • If you’re publishing through ISA/TMG, perform the same process on the ISA/TMG servers. You may also want to re-export the correct chain from your Exchange server onto your reverse proxy servers to ensure they have all the intermediate CA certificates loaded locally.

The general rule is that the outermost server device needs to have the valid, complete certificate chain loaded locally to ensure AIA does its job for the various client devices.

Let me know if this helps you out.

Autism Is Not The New Cool

Pardon, y’all. It’s been a while since I’ve been here <peers at the dust>. I’ve had the best of intentions, but sadly, my blogging client of choice (Windows Live Writer) doesn’t auto-translate those into actual written blog posts yet. Maybe in the next version. <sigh>

I can hear some of you (both of you still reading, thank you loyal fans) asking what finally brought me back, and I have to say it’s a rant. A rant about autism (and Asperger’s, and the rest of the spectrum), how it is perceived, and how trendy equals insensitive. You have been warned.

Hip To Be Square

After karate class tonight on the drive home, Steph was reading through Facebook (something I do but occasionally these days, having overdosed myself on social media some time ago) and came across the following comment on a mutual friend’s post:

[Image: Yes, that really does say that stupid thing]

For some reason, this really punched my buttons. I don’t know much about the person who posted it. I don’t know if they’re a fellow spectrum traveller or not. I don’t know how many close friends or family members they have who have autism. To a certain extent, it really doesn’t matter, because this comment is a textbook illustration of a fallacy that I’m seeing more and more:

If geeks are cool, and a lot of geeks are autistic, they must be cool because they are autistic.

This is a fallacy because it is the living embodiment of failure to grasp proper logic and set theory. This growing "Autism Is The New Cool" meme (AITNC for those of us who adore our acronyms), for lack of a better word, is reaching stupid proportions.

Venn We Dance

Now listen up, because if you’d paid attention in Algebra the first time, I wouldn’t have to be telling you this shit now.

What we are talking about here are properties that people have: the property of being cool, the property of being a geek, and the property of being on the autism spectrum. These are not variables that we can just slam together in a transitive[1] orgy of equation signs, as much as someone might like to be able to write on a whiteboard that A=B=C.

[Image: You get to stay after class and wipe down the whiteboard]

Instead, we need to head over to set theory, which is where we look at groupings (or "sets") of objects, where said sets are organized by a shared trait. Such as being a geek, or being cool, or being on the autism spectrum. We represent these sets by drawing circles. Then we can make useful and interesting (and sometimes even occasionally related to real life) observations by seeing where these sets overlap and what that tells us. This is a Venn diagram, and it helps us immediately destroy AITNC, because it reminds us that people (the members of the sets) are not single-value variables like A and B and C and the rest of their letter trash, but complex people who are not in any way entirely equal. This is my AITNC mega-buster Venn diagram, whipped up this evening when I had lots of better stuff to do, just for your edification:

[Image: Filling in the missing names is left as an exercise for the reader[2]]

Note that there are plenty of places where there is no overlap. Note that there are four separate regions where there is overlap. I can think of people who are examples of each of those areas, but I’m not enough of a dick to tell you who they are.
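If you’d rather see it as code than as circles, here’s the same point in a few lines of Python; the names are made up:

    # Toy illustration of the Venn diagram above; all the names are made up.
    geeks    = {"alice", "bob", "carol", "dave"}
    cool     = {"bob", "carol", "erin"}
    autistic = {"carol", "dave", "frank"}

    print(geeks & cool & autistic)    # overlap exists: {'carol'}
    print(geeks == cool == autistic)  # but the sets are not equal: False
    print(cool - geeks - autistic)    # plenty of members sit outside every overlap: {'erin'}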

The Big Boy/Girl Panties Are Right Over There

I have, I shit you not, had parents ask me how to get their kid diagnosed with Asperger’s so they can "give him an extra educational advantage" (or some such nonsense). Yeah, I know. Fucked up, right?

I’m no child psychology professional, but I know spoiled, overly sugared kids when I see them. You want your kid to get an extra educational advantage? Don’t let the little bastards play video games and watch TV when they get home from school. Make them do homework and chores. Stop buying them everything they want and make them earn a meager amount of money and prioritize the things they really want over passing whims. Spend time with them and find out what they’re learning. Teach them about things you’re doing, which means you might want to put down the remote and pick up some more books or spend time outdoors or in your shop. Take the time to buy and prepare healthy food instead of boxed-up pre-digested pap. Teach them how to cook and clean, while you’re at it. Get involved with what they’re doing at school and be ruthlessly nosy about their grades and progress. Limit their after-school activities so they have time to study. Make and enforce a reasonable bedtime. In short, be a fucking parent. Stick with that for a year, and I guarantee your kids will have an educational advantage that you can’t believe.

[Image: NoYouCannotHaveAPony…unless you want it in kebabs for dinner]

Once you’ve done that for a few years and your kids have adjusted to having the meanest parents on the block like mine have, then you can worry about whether your precious little shit belongs on the autism spectrum, or has ADHD, or whatever other crutch diagnosis you think you need to compensate for being a mere gamete donor instead of a real parent.

People Are Strange (When You’re A Stranger)

I’m not going to sing a litany of woes about how tough it is being Asperger’s. I have fought most of my adult life to keep this thing from defining who I am. Devin != autism, not by a long shot. It’s one of a large number of properties about me, and it’s a mere footnote at that. I refuse to self-identify as an "Aspie" because I see that many of them (not all, but a significant fraction of them) use it as a Get Out Of Life Free card. "Oh, boohoo, I can’t make friends. Boohoo, I can’t have a relationship. Boohoo, my boss doesn’t understand me." I’ll grant it makes things difficult at times, but you know what? I look at so-called "neurotypical" people and they seem to have rough patches too. Life isn’t perfect for anyone. I don’t know how much harder my life is because of Asperger’s, and you don’t either. Anyone who claims to know is full of shit. At best, they’re making wild-ass guesses.

I choose not to play "what-if" games, because there is always something you think of after the fact. This wiring malfunction in my brain does not define or control me unless I choose to let it. The only reason its effects dominated my life through my early adulthood is that I didn’t know. Once I knew…well, I went all G. I. Joe[3] on its ass.

You know what really sucks? That my wife and kids have to be hyper-vigilant about what food they eat because their immune systems attack their own bodies. I can tell you exactly how much of a crimp that’s put into their enjoyment of life. One thoughtless dweeb in a restaurant kitchen who doesn’t properly wash bread crumbs off a counter, or clean off that dollop of butter on the knife, can make them miserable for a week. That’s a pretty raw deal, friends. Asperger’s has nothing on that. Try traveling or going out to a restaurant with friends. Just one of the 8 major food allergies quickly limits what you can eat. Enjoy two of them (like my family) and you can start counting your dining options on one hand.

So if you’re one of those assholes who thinks autism is cool or glamorous, get a life. Seriously. Be thankful for what you have. And recognize that people are cool not because of their afflictions but because they are cool people.

 

[1] You’ll probably have forgotten in five minutes, but transitive means if one thing is equal to a second thing, and a third thing is also equal to the second thing, then the first and third things are equal too. This only usually works in math and quantum mechanics, because how often are two things actually equal in the real world?

[2] Extra credit if you noticed that I really did match the color coding between the two diagrams. Without thinking.

[3] "Knowing is half the battle."

Exchange 2010 virtualization storage gotchas

There’s a lot of momentum for Exchange virtualization. At Trace3, we do a lot of work with VMware, so the majority of the customers I work with already have VMware deployed strategically into their production operation model. As a result, we see a lot of Exchange 2010 under VMware. With Exchange 2010 SP1 and lots of customer feedback, the Exchange product team has really stepped up to provide better support for virtual environments as well as more detailed guidance on planning for and deploying Exchange 2007 and 2010 in virtualization.

Last week, I was talking with a co-worker about Exchange’s design requirements in a virtual environment. I casually mentioned the “no file-level storage protocols” restriction for the underlying storage and suddenly, the conversation turned a bit more serious. Many people who deploy VMware create large data stores on their SAN and share them to the ESX cluster via the NFS protocol. There are a lot of advantages to doing it this way, and it’s a very flexible and relatively easy way to deploy VMs. However, it’s not supported for Exchange VMs.

The Heck You Say?

“But Devin,” I can hear some of you say, “what do you mean it’s not supported to run Exchange VMs on NFS-mounted data stores? I deploy all of my virtual machines using VMDKs on NFS-mounted data stores. I have my Exchange servers there. It all works.”

It probably does work. Whether or not it works, though, it’s not a supported configuration, and one thing Masters are trained to hate with a passion is letting people deploy Exchange in a way that gives them no safety net. Having Microsoft product support available to walk you through a strange or deep problem is an essential tool in your toolkit.

Let’s take a look at Microsoft’s actual support statements. For Exchange 2010, Microsoft has the following to say in http://technet.microsoft.com/en-us/library/aa996719.aspx under virtualization (emphasis added):

The storage used by the Exchange guest machine for storage of Exchange data (for example, mailbox databases or Hub transport queues) can be virtual storage of a fixed size (for example, fixed virtual hard disks (VHDs) in a Hyper-V environment), SCSI pass-through storage, or Internet SCSI (iSCSI) storage. Pass-through storage is storage that’s configured at the host level and dedicated to one guest machine. All storage used by an Exchange guest machine for storage of Exchange data must be block-level storage because Exchange 2010 doesn’t support the use of network attached storage (NAS) volumes. Also, NAS storage that’s presented to the guest as block-level storage via the hypervisor isn’t supported.

Exchange 2007 has pretty much the same restrictions as shown in the http://technet.microsoft.com/en-us/library/bb738146(EXCHG.80).aspx TechNet topic. What about Exchange 2003? Well, that’s trickier; Exchange 2003 was never officially supported under any virtualization environment other than Microsoft Virtual Server 2005 R2.

The gist of the message is this: it is not supported by Microsoft for Exchange virtual machines to use disk volumes that are on file-level storage such as NFS or CIFS/SMB, if those disk volumes hold Exchange data. I realize this is a huge statement, so let me unpack this a bit. I’m going to assume a VMware environment here, but these statements are equally true for Hyper-V or any other hypervisor supported under the Microsoft SVVP.

While the rest of the discussion will focus on VMware and NFS, all of the points made are equally valid for SMB/CIFS and other virtualization systems. (From a performance standpoint, I would not personally want to use SMB for backing virtual data stores; NFS, in my experience, is much better optimized for the kind of large-scale operations that virtualization clusters require. I know Microsoft is making great strides in improving the performance of SMB, but I don’t know if it’s there yet.)

It’s Just Microsoft, Right?

So is there any way to design around this? Could I, in theory, deploy Exchange this way and still get support from my virtualization vendor? A lot of people I talk to point to a whitepaper that VMware published in 2009 that showed the relative performance of Exchange 2007 over iSCSI, FC, and NFS. They use this paper as “proof” that Exchange over NFS is supported.

Not so much, at least not with VMware. The original restriction may come from the Exchange product group (other Microsoft workloads are supported in this configuration), but the other vendors certainly know the limitation and honor it in their guidance. Look at VMware’s Exchange 2010 best practices at http://www.vmware.com/files/pdf/Exchange_2010_on_VMware_-_Best_Practices_Guide.pdf on page 13:

It is important to note that there are several different shared-storage options available to ESX (iSCSI, Fibre Channel, NAS, etc.); however, Microsoft does not currently support NFS for the Mailbox Server role (clustered or standalone). For Mailbox servers that belong to a Database Availability Group, only Fibre Channel is currently supported; iSCSI can be used for standalone mailbox servers. To see the most recent list of compatibilities please consult the latest VMware Compatibility Guides.

According to this document, VMware is even slightly more restrictive! If you’re going to use RDMs (this section is talking about RDMs, so don’t take the iSCSI/FC statement as a limit on guest-level volume mounts), VMware is saying that you can’t use iSCSI RDMs, only FC RDMs.

Now, I believe – and there is good evidence to support me – that this guidance as written is actually slightly wrong:

  • The HT queue database is also an ESE database and is subject to the same limitations; this is pretty clear on a thorough read-through of the Exchange 2010 requirements in TechNet. Many people leave the HT queue database on the same volume they install Exchange to, which means that volume also cannot be presented via NFS. If you follow best practices, you move this queue database to a separate volume (which should be an RDM or guest-mounted iSCSI/FC LUN).
  • NetApp, one of the big storage vendors that supports the NFS-mounted VMware data store configuration, only supports Exchange databases mounted via FC/iSCSI LUNs using SnapManager for Exchange (SME), as shown in NetApp TR-3845. Additionally, in the joint NetApp-VMware-Cisco performance whitepaper on virtualizing Microsoft workloads, the only configuration tested for Exchange 2010 is FC LUNs (TR-3785).
  • It is my understanding that the product group’s definition of Exchange files doesn’t just extend to ESE files and transaction logs, but to all of the Exchange binaries and associated files. I have not yet been able to find a published source to document this interpretation, but I am working on it.
  • I am not aware of any Microsoft-related restriction about iSCSI + DAG. This VMware Exchange 2010 best practices document (published in 2010) is the only source I’ve seen mention this restriction, and in fact, the latest VMware Microsoft clustering support matrix (published in June 2011) lists no such restriction. Microsoft’s guidelines seem to imply that block storage is block storage is block storage when it comes to “SCSI pass-through storage”. I have queries in to nail this one down because I’ve been asking in various communities for well over a year with no clear resolution other than, “That’s the way VMware is doing it.”

Okay, So Now What?

When I’m designing layouts for customers who are used to deploying Windows VMs via NFS-mounted VMDKs, I have a couple of options. My preferred option, if they’re also using RDMs, is to just have them provision one more RDM for the system drive and avoid NFS entirely for Exchange servers. That way, if my customer does have to call Microsoft support, we don’t have to worry about the issue at all.

However, that’s not always possible. My customer may have strict VM provisioning processes in place, have limited non-NFS storage to provision, or have some other reason why they need to use NFS-based VMDKs. In this case, I have found the following base layout to work well:

Volume | Type | Notes
C: | VMDK or RDM | Can be on any type of supported data store. Should be sized to include a static page file of size PhysicalRAM + 10 MB.
E: | RDM or guest iSCSI/FC | All Exchange binaries installed here. Move the IIS files here (there are scripts out on the Internet to do this for you). Create an E:\Exchdata directory and use NTFS mount points to mount each of the data volumes the guest will mount.
Data volumes | RDM or guest iSCSI/FC | Any volume holding mailbox/PF database EDB or logs, or HT queue EDB or logs. Mount these separately; NTFS mount points recommended. Format these NTFS volumes with a 64K block size, not the default.

Note that we have several implicit best practices in use here:

  • Static page file, properly sized for a 64-bit operating system with a large amount of physical RAM. Doing this ensures that you have enough virtual memory for the Exchange memory profile AND that you can write a kernel memory crash dump to disk in the event of a blue screen. (If the page file is not sized properly, or is not on C:, the full dump cannot be written to disk.)
  • Exchange binaries not installed on the system drive. This makes restores much easier. Since Exchange uses IIS heavily, I recommend moving the IIS data files (the inetpub and children folders) off of the system drive and onto the Exchange volume. This helps reduce the rate of change on the system drive and offers other benefits such as making it easier to properly configure anti-virus exclusions.
  • The use of NTFS mount points (which mount the volume to a directory) instead of separate drive letters. For large DAGs, you can easily have a large number of volumes per MB role, making the use of drive letters a limitation on scalability. NTFS mount points work just like Unix mount points and work terribly well – they’ve been supported since Exchange 2003 and recommended since the late Exchange 2003 era for larger clusters. In Exchange 2007 and 2010 continuous replication environments (CCR, SCR, DAG), all copies must have the same pathnames.
  • Using NTFS 64K block allocations for any volumes that hold ESE databases. While not technically necessary for log partitions, doing so does not hurt performance. (A quick way to check the allocation unit size is sketched just below this list.)
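As a sanity check on that last point, here’s a small sketch that wraps fsutil to confirm the allocation unit size on the database and log volumes; the mount point paths are placeholders for however you’ve laid yours out:

    # Sketch: confirm the NTFS allocation unit size (should be 64K for volumes
    # holding ESE databases or logs). Windows only; wraps fsutil. The mount
    # point paths below are placeholders.
    import re
    import subprocess

    def ntfs_cluster_bytes(volume_path: str) -> int:
        out = subprocess.run(
            ["fsutil", "fsinfo", "ntfsinfo", volume_path],
            capture_output=True, text=True, check=True,
        ).stdout
        match = re.search(r"Bytes Per Cluster\s*:\s*([\d,]+)", out)
        return int(match.group(1).replace(",", ""))

    for vol in (r"E:\Exchdata\DB01", r"E:\Exchdata\DB01-Logs"):
        size = ntfs_cluster_bytes(vol)
        print(vol, size, "OK" if size == 64 * 1024 else "<- expected 65536")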

So Why Is This Even A Problem?

This is the money question, isn’t it? Windows itself is supported under this configuration. Even SQL Server is. Why not Exchange?

At heart, it comes down to this: the Exchange ESE database engine is a very finely-tuned piece of software, honed for over 15 years. During that time, with only one exception (the Windows Storage Server 2003 Feature Pack 1, which allowed storage solutions running WSS 2003 + FP1 to host Exchange database files over NAS protocols), Exchange has never supported putting Exchange database files over file-level storage. I’m not enough of an expert on ESE to whip up a true detailed answer, but here is what I understand about it.

Unlike SQL Server, ESE is not a general purpose database engine. SQL is optimized to run relational databases of all types. The Exchange flavor of ESE is optimized for just one type of data: Exchange. As a result, ESE has far more intimate knowledge about the data than any SQL Server instance can. ESE provides a lot of performance boosts for I/O hungry Exchange databases and it can do so precisely because it can make certain assumptions. One of those assumptions is that it’s talking to block-level storage.

When a host process commits writes to storage, there’s a very real difference in the semantics of the write operation between block-level protocols and file-level protocols. Exchange, in particular, depends dramatically on precise control over block-level writes – which file protocols like NFS and SMB can mask. The cases under which this can cause data corruption for Exchange are admittedly corner cases, but they do exist and they can cause impressive damage.
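As a loose analogy only (this is not what ESE does internally), think of the gap between “the write call returned” and “the data is actually on stable media”:

    # Loose analogy: the gap between "the write call returned" and "the data
    # is on stable media" is the crux of the block- vs file-level argument.
    import os

    def write_like_a_transaction_log(path: str, record: bytes) -> None:
        fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND)
        try:
            os.write(fd, record)  # may only land in a cache somewhere in the stack
            os.fsync(fd)          # insist the data reach stable storage before moving on
        finally:
            os.close(fd)

    # An engine that assumes block-level semantics is counting on that second
    # step meaning what it says; an intervening file protocol can acknowledge
    # the write before the storage on the other end has actually committed it.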

Cleaning Up

What should we do about it if we have an Exchange deployment that is in violation of these support guidelines?

Ideally, we fix it. Microsoft’s support stance is very clear on this point, and in the unlikely event that data loss occurs in this configuration, Microsoft support is going to point at the virtualization/storage vendors and say, “Get them to fix it.” I am not personally aware of any cases of a configuration like this causing data loss or corruption, but I am not the Exchange Product Group – they get access to an amazing amount of data.

At the very least, you need to understand and document that you are in an unsupported configuration so that you can make appropriate plans to get back into support as you roll out new servers or upgrade to future versions of Exchange. This is where getting a good Exchange consultant to do an Exchange health check can help – we will document the gap in black and white and provide the outside validation you may need with your management to get things put right.

One request for the commenters: if all you’re going to do is say, “Well, we run this way and have no problems,” don’t bother. I know and stipulate that there are many environments out there running in violation of this support boundary that have not (yet) run into issues. I’ve never said it won’t work. There are a lot of things we can do, but that doesn’t mean we should do them. At the same time, at the end of the day – if you know the issues and potential risks, you have to make the design decision that’s right for your organization. Just make sure it’s an informed (and documented, and signed-off!) decision.