Devin’s Load Balancer for Exchange 2010

Overview

One of the biggest differences I’m seeing when deploying Exchange 2010 compared to previous versions is that for just about all of my customers, load balancing is becoming a critical part of the process. In Exchange 2003 FE/BE, load balancing was a luxury unheard of for all but the largest organizations with the deepest pockets. Only a handful of outfits offered load balancing products, and they were expensive. For Exchange 2007 and the dedicated CAS role, it started becoming more common.

For Exchange 2003 and 2007, you could get all the same benefits of load balancing (as far as Exchange was concerned) by deploying an ISA server or ISA server cluster using Windows Network Load Balancing (WNLB). ISA included the concept of a “web farm” so it would round-robin incoming HTTP connections to your available FE servers (and Exchange 2007 CAS servers). Generally, your internal clients would directly talk to their mailbox servers, so this worked well. Hardware load balancers were typically used as a replacement for publishing with an ISA reverse proxy (and more rarely to load balance the ISA array instead of WNLB). Load balancers could perform SSL offloading, pre-authentication, and many of the same tasks people were formerly using ISA for. Some small shops deployed WNLB for Exchange 2003 FEs and Exchange 2007 CAS roles.

In Exchange 2010, everything changes. Outlook RPC connections now go to the CAS servers in the site, not to the MB server that hosts the active copy of the database. Mailbox databases now have an affiliation with either a specific CAS server or a site-specific RPC client access array, which you can see in the RpcClientAccessServer property returned by the Get-MailboxDatabase cmdlet. If you have two or more servers, I recommend you set up the RPC client access array as part of the initial deployment and get some sort of load balancer in place.
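
For example, here’s a quick way to check and set that association in EMS (a minimal sketch; the database name and array FQDN are hypothetical):

    Get-MailboxDatabase | Format-Table Name, RpcClientAccessServer
    # Point a specific database at the array once the array exists:
    Set-MailboxDatabase 'DB01' -RpcClientAccessServer 'outlook.contoso.com'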

Load Balancing Options

At Trace3, we’re an F5 reseller, and F5 is one of the few load balancer companies out there that has really made an effort to understand and optimize Exchange 2010 deployments. However, I’m not on the sales side; I have customers using a variety of load balancing solutions for their Exchange deployments. At the end of the day, we want the customer to do what’s right for them. For some customers, that’s an F5. Others require a different solution. In those cases, we have to get creative – sometimes they don’t have budget, sometimes the networking team has their own plans, and on some rare occasions, the plans we made going in turned out not to be a good fit after all and now we have to come up with something on the fly.

If you’re not in a position to use a high-end hardware load balancer like an F5 BIG-IP or a Cisco ACE solution, and you can’t look at some of the lower-cost (and correspondingly lower-feature) solutions that are now on the market, you’re left with only a few alternatives:

  • WNLB. To be honest, I have attempted to use this in several environments now and even when I spent time going over the pros and cons, it failed to meet expectations. If you’re virtualizing Exchange (like many of my customers) and are trying to avoid single points of failure, WNLB is so clearly not the way to go. I no longer recommend this to my customers.
  • DNS round robin. This method at least has the advantage of in theory driving traffic to all of the CAS instances. However, in practice it gets in the way of quickly resolving problems when they come up. It’s better than nothing, but not by much.
  • DAG cluster IP. Some clever people came up with this option for instances where you are deploying multi-role servers with MB+HT+CAS on all servers and configuring them in a DAG. DAG = cluster, these smart people think, and clusters have a cluster IP address. Why can’t we just use that as the IP address of the RPC client access array? Sure enough, this works, but it’s not tested or supported by Microsoft and it isn’t a perfect solution. It’s not load balancing at all; the server holding the cluster IP address gets all the CAS traffic. Server sizing is important!

The fact of the matter is, there are no great alternatives if you’re not going to use hardware load balancing. You’re going to have to compromise something.

Introducing Devin’s Load Balancer

For many of my customers, the situation ends up looking something like this:

  • The CAS/HT roles are co-located on one set of servers, while MB (and the DAG) is on another. This rules out the DAG cluster IP option.
  • They don’t want users to complain excessively when something goes wrong with one of the CAS/HT servers. This rules out DNS round robin.
  • They don’t have the budget for a hardware solution yet, or one is already in the works but not yet ready because of scheduling. They need a temporary, low-impact solution. This effectively rules out WNLB.

I’ve come up with a quick and dirty fix I call Devin’s Load Balancer or, as I commonly call it, the DLB. It looks like this:

  1. Pick one CAS server that can handle all the traffic for the site. This is our target server.
  2. Pick an IP address for the RPC client access array for the site. Create the DNS A record for the RPC client access array FQDN, pointing to the IP address.
  3. Create the RPC client access array in EMS, setting the name, FQDN, and site (a rough sketch of steps 2–5 follows this list).
  4. On the main network interface of the target server, add the IP address. If this IP address is on the same subnet as the main IP address, there is no need to create a secondary interface! Just add it as a secondary IP address/subnet mask.
  5. Make sure the appropriate mailbox databases are associated with the RPC client access array.
  6. Optionally, point the internal HTTP load balance array DNS A record to this IP address as well (or publish this IP address using ISA).
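
Here’s a rough sketch of steps 2 through 5 (a minimal example, not a recipe; the server names, DNS zone, site name, and IP addresses below are all hypothetical, so substitute your own):

    # Step 2: create the DNS A record for the array FQDN (run against your DNS server).
    dnscmd dc01.contoso.com /RecordAdd contoso.com outlook A 10.0.0.50

    # Step 3: create the RPC client access array in EMS.
    New-ClientAccessArray -Name 'outlook' -Fqdn 'outlook.contoso.com' -Site 'HQ-Site'

    # Step 4: add the array IP as a secondary address on the target CAS server's existing NIC.
    netsh interface ipv4 add address "Local Area Connection" 10.0.0.50 255.255.255.0

    # Step 5: associate the site's mailbox databases with the array (scope this to the right databases).
    Get-MailboxDatabase | Set-MailboxDatabase -RpcClientAccessServer 'outlook.contoso.com'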

You may have noticed that this sends all traffic to the target server; it doesn’t really load balance. DLB also stands for Doesn’t Load Balance!

This configuration, despite its flaws, gives me what I believe are several important benefits:

  • It’s extremely easy to switch over or fail over. If something happens to my target server, I simply add the RPC client access array IP address as a secondary IP address on my next CAS instance. There are no DNS cache entries to wait out, no switch configurations to modify, and no DNS records to update. If this is a planned switchover, clients are briefly disrupted but can immediately reconnect. I can make the change as soon as I get word that something happened, and my clients can reconnect without any further action on their part.
  • It isolates what I do with the other CAS instances. Windows and Exchange no longer have any clue they’re in a load balanced pseudo-configuration. With WNLB, if I make any changes to the LB cluster (like add or remove a member), all connections to the cluster IP addresses are dropped!
  • It makes it very easy to upgrade to a true load balancing solution. I set the true solution up in parallel with an alternate, temporary IP address. I use local HOSTS file entries on my test machines while I’m getting everything tested and validated (see the example below). And then I simply take the RPC client access array IP address off the target server and put it on the load balancer. Existing connections are dropped, but new ones immediately connect with no timeouts – and now we’re really load balancing.
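
As an example of that last point, the HOSTS entry on a test workstation might look like this while the new load balancer (with a hypothetical test address) is being validated:

    # C:\Windows\System32\drivers\etc\hosts on the test machine
    10.0.0.60    outlook.contoso.com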

Note that you do not need the CAS SSL certificate to contain the FQDN of the RPC client access array as a SAN entry. RPC doesn’t use SSL for encryption (it’s not based on HTTP).

Even in a deployment where the customer is putting all roles into a single-server configuration, if there’s any thought at all that they might want to expand to an HA configuration in the future, I’m now in the habit of configuring this. The RPC client access array is then already in place and somewhat isolated from the CAS configuration, so future upgrades are easier and less disruptive.

Review: Cooking for Geeks (O’Reilly)

Edit 1/1/2013: (Belatedly) updated the author’s website per his request.

Writing books is a ton of work. Making them appealing is even more so, especially when your audience is geeks. You have to know your stuff, you have to present it well, and it doesn’t hurt if you can make it entertaining. In the technical field, I think O’Reilly is the one publisher that hits this bar more consistently than any other. Getting to co-write my first book for them was a great experience; if they ever came asking me to work on another book for them, I would seriously think about it (more importantly, my wife wouldn’t automatically say no).

Back at the end of August, I had the opportunity, thanks to the @OReillyMedia twitter feed, to get my hands on a review copy of Cooking for Geeks (CfG) in e-book format. As part of the review agreement, I was supposed to:

  • Select a recipe from the book,
  • Prepare it,
  • Photograph it,
  • Write a review and post it,
  • Post the photograph on the O’Reilly Facebook page,
  • and all by September 6th.

Oops. Obviously, I’ve missed the precise timing here, but a bit belated, here’s the review I owe.

Why this cooking book?

There’s a lot of information on cooking out there. Stephanie has a metric ton of cookbooks and collected recipes in our house, and there are large chunks of old-growth forest bound up in the various cookbooks you can find in various stores. Thanks to the celebrity chef craze on TV, cooking (never an unpopular subject) has grown by leaps and bounds beyond the good old Betty Crocker Cookbook that many of us grew up with[1]. Popular TV chefs now write and sell cookbooks on just about any specialty and niche you can imagine. I’ve even indulged in the recipe fetish myself once or twice, most notably to snag and perfect my favorite dish, the Cheesecake Factory’s Spicy Cashew Chicken.

What caught my attention (other than this being an O’Reilly book) about CfG was that my household has been slowly and steadily moving into the exciting world of food allergens. We recently flung ourselves off the cliffs of insanity this summer when blood tests revealed that Steph and Treanna tested positive for gluten antibodies. Add that to the existing dairy-free regime, and it was clear that menu planning at Chez Ganger had just started a new, exciting, but potentially very limited and boring chapter.

We’ve got a lot of friends who are gluten-free, dairy-free, vegetarian, vegan, on some other regime, or even combinations of the above, so Steph’s no stranger to the issues involved. What is doable as an occasional thing, though, can become overwhelming when it’s a sudden lifestyle change that comes hard on the heels of a long, exhausting summer – just in time for the new school year. Understandably, Steph was struggling to cope – and we weren’t exactly the most helpful crew she could hope for.

After a few weeks of the same basic dishes being served over and over again, I was ready for any lifesaver that I could find. That’s when the fateful tweet caught my eye. After a few rounds of back and forth e-mail, I discovered that CfG included a chapter on cooking to accommodate allergens. The rest, as they say, is fate.

Torturing Chickens For Fun and Noms

Although I could go into great detail about the recipe my family ended up selecting – butterflied roasted chicken – my wife has already done so. Like a good writer, I will steal her efforts and link to her blog post instead. She even took pictures! Go, read and salivate!

Back already?

Under the Cover

CfG is written by Jeff Potter, whose geek credentials appear to be genuine. The book has a fantastic companion site, which is essentially a link fest to the related blog and Twitter stream (as well as to the various places you can go on the Internet to purchase a copy of the book).

My lovely wife handled the “cooking” and “presentation” parts well, so I’m going to move on to our thoughts about the book itself:

  • Content. If you want a book that explores the science and the art behind cooking, this is your book. It’s not a college textbook; it’s a great middle school or high school-level overview of the science of cooking that seems more interested in sharing Jeff’s love of cooking with you rather than creating cooking’s equivalent of the CCIE. Jeff writes with a very informal, personable voice and isn’t afraid to show off his mastery of the physics behind good and bad dishes, sharing it in a way that’s part Bill Nye the Science Guy and part Ferris Bueller. I have never before laughed while reading a book on cooking. However, if you’re expecting a cookbook, check your expectations at the door. If this book has a weakness, it’s that talking about all this food will make you want a lot of recipes to try out, and I was surprised by how relatively few recipes there actually are. What is there provides an interesting cross-section across different types of dishes and ingredients, but it’s not a comprehensive reference guide. This is not “Cooking in a Nutshell” or cooking’s Camel Book; it is instead a not-to-scale map of the CfG theme park. If you find something that entrances you, you should be able to walk away with enough exposure to be able to knowledgeably pick out some other more detailed work for a given area. CfG is the culinary equivalent of Jerome K. Jerome’s immortal Three Men in a Boat (To Say Nothing of the Dog); you’re going to get a fantastic lazy summer day punt trip down the river of Jeff’s epicurean experiences.
  • Format. We used the PDF format (like all of O’Reilly’s e-books, unencumbered by DRM). Steph already made a comment about how useful she found the e-book format. With a sturdy tablet, I think an e-book cookbook would be great in the kitchen, especially if there were some great application that could handle browsing and organizing recipes from multiple sources. As I already said, though, this book is not a cookbook and I’d probably just make a quick copy of (or retype) the recipes I was interested in so that I didn’t have to use the physical book in the kitchen. Having said that, though, we’re going to purchase a physical copy of the book to facilitate quick browsing. If you’ve already made the switch to casual e-reading (we have not yet), you probably won’t have this same issue.
  • Organization. Whether you like the book’s organization will depend on what you wanted out of it. If you wanted cooking’s Camel Book, you will find the book to be dismayingly unorganized. The structure of the book (and the recipes within) is based around the physics of cooking. Here, Jeff reveals himself to be a Lego Master of building blocks – you will find yourself introduced to one scientific concept after another, and each chapter will build on that knowledge by concentrating on a particular theme or technique rather than on a specific type of food or course. It really will help you to think of it as a novel (a romance, actually, between Jeff and food) and read the book from cover to cover rather than jump around in typical O’Reilly reference format. This is passion, not profession; calling, not career.
  • Utility. I’m pretty much a dunce when it comes to cooking, so I found this book to be extremely useful. I hate following the typical magical thinking approach to cooking: put ingredient A into slot B and pull on tab C for 30 minutes until you screw it all up because you didn’t know that your masterpiece was afraid of loud noises. I want to know why I’m putting nasty old cream of tartar into my mixing bowl; what purpose does it serve? How can I usefully strike out into the scary wilderness of trying to adapt existing favorite recipes to a gluten-free, dairy-free existence? CfG doesn’t answer all my questions, but it answers a hell of a lot more of them than any other cooking book I’ve picked up. It didn’t talk down to me, but it didn’t assume I was already a lifelong member of the Secret and Worshipful Order of Basters, Bakers, and Broilers. What it didn’t do, though, is give me a large number of variations on a theme to go and try. At times the recipe selection – while eclectic and representative – felt somewhat sparse and even unrelated to what was being talked about in the main text. It seemed like someone on the team had written a badly behaved random recipe widget[2] to insert a recipe every so often. I would love, in the second edition, to see a little bit more connection between the theory and the practice, even though I recognize this isn’t a textbook.

We found our payoff in the chapter on cooking around allergens. Of all the chapters, this is the one that most felt like a reference work — a concise but thorough reference work. Jeff explains why (for example) taking gluten out of a recipe and merely substituting some non-gluten flour is probably not going to produce edible results, and then explains some of the common approaches for dealing with the problem. He’s trusting us, the readers, to be able and willing to do some experimentation and find our own way without having a GPS to lead us by the nose. While it’s initially tempting to have the comfort of specific substitution steps, in the end, CfG will help you know how to make substitutions on your own and quickly dial in to an acceptable solution rather than sit around waiting for someone to write the HOWTO.

In the end, Jeff’s approach is empowerment. We liked it a lot; thank you, Jeff and O’Reilly!

[1] Not only did I grow up with one and spend a lot of time browsing it, Steph has one. I’ll have you know, however, that I’ve only flipped through it once for auld lang syne.

[2] Probably written in Ruby or PHP.

Moving to Exchange Server 2010 Service Pack 1

Microsoft recently announced that Service Pack 1 (SP1) for Exchange Server 2010 had been released to the web, prompting an immediate upgrade rush for all of us Exchange professionals. Most of us maintain at least one home/personal lab environment, the better to pre-break things before setting foot on a customer site. Before you go charging out to do this for production (especially if you’re one of my customers, or don’t want to run the risk of suddenly becoming one of my customers), take a few minutes to learn about some of the current issues with SP1.

Easy Installation and Upgrade Slipstreaming

One thing that I love about Exchange service packs is that from Exchange 2007 on, they’re full installations in their own right. Ready to deploy a brand new Exchange 2010 SP1 server? Just run setup from the SP1 binaries – no more fiddling around with the original binaries, then applying your service packs. Of course, the Update Rollups now take the place of that, but there’s a mechanism to slipstream them into the installer (and here is the Exchange 2007 version of this article).

Note: If you do make use of the slipstream capabilities, remember that Update Rollups are both version-dependent (tied to a particular RTM/SP release level) and cumulative. SP1 UR4 is not the same thing as RTM UR4! However, RTM UR4 will include RTM UR3, RTM UR2, and RTM UR1…just as SP1 UR4 will contain SP1 UR3, SP1 UR2, and SP1 UR1.

The articles I linked to say not to slipstream the Update Rollups with a service pack, and I’ve heard some confusion about what this means. It’s simple: you can use the Updates folder mechanism to slipstream the Update Rollups when you are performing a clean install. You cannot use the slipstream mechanism when you are applying a service pack to an existing Exchange installation. In the latter situation, apply the service pack, then the latest Update Rollup.

No Update Rollups for Exchange 2010 SP1 exist at the time of writing, but if one did (for the sake of illustration, let’s say that SP1 UR X just came out), consider these two scenarios:

  • You have an existing Exchange 2010 RTM UR4 environment and want to upgrade directly to SP1 UR X. You would do this in two steps on each machine: run the SP1 installer, then run the latest SP1 UR X installer.
  • You now want to add a new Exchange 2010 server into your environment and want it to be at the same patch level. You could perform the installation in a single step from the SP1 binaries by making sure the latest SP1 UR X installer was in the Updates folder (see the sketch below).
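
As a rough sketch of that second scenario (the paths and rollup file name here are hypothetical; use your own locations and the real package name):

    # Copy the latest SP1 Update Rollup package into the Updates folder of the SP1 setup files.
    Copy-Item 'C:\Downloads\Exchange2010-SP1-URx-x64.msp' 'D:\Ex2010SP1\Updates\'

    # Run setup from the SP1 binaries; the rollup is slipstreamed as part of the clean install.
    & 'D:\Ex2010SP1\setup.com' /mode:Install /roles:ClientAccess,HubTransport,Mailbox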

If these scenarios seem overly complicated, just remember back to the Exchange 2003 days…and before.

Third Party Applications

This might surprise you, but in all of the current Exchange 2010 projects I’m working on, I’ve not even raised the question of upgrading to SP1 yet. Why would I not do that? Simple – all of these environments have dependencies on third-party software that is not yet certified for Exchange 2010 SP1. In some cases, the software has only just been certified for Exchange 2010 RTM! If the customer brings it up, I always encourage them to start examining SP1 in the lab, but for most production environments, supportability is a key requirement.

Make sure you’re not going to break any applications you care about before you go applying service packs! Exchange service packs always make changes – some easy to see, some harder to spot. You may need to upgrade your third-party applications, or you may simply need to make configuration changes ahead of time – but if you blindly apply service packs, you’ll find these things out the hard way. If you have a critical issue or missing functionality that Exchange 2010 SP1 will address, get it tested in your lab and make sure things will work.

The key applications I encourage my customers to test are the ones that integrate directly with Exchange. Applications that use SMTP submission are typically pretty safe, and there are other applications that you might be okay living without if something does break. Figure out what you can live with, test them (or wait for certifications), and go from there.

Complications and Gotchas

Unfortunately, not every service pack goes smoothly. For Exchange 2010 SP1, one of the big gotchas that early adopters are giving strong feedback about is the number of hotfixes you must download and apply to Windows and the .NET Framework before applying SP1 (a variable number, depending on which base OS your Exchange 2010 server is running).

Having to install hotfixes wouldn’t be that bad if the installer told you, “Hey, click here and here and here to download and install the missing hotfixes.” Exchange has historically not done that (citing boundaries between Microsoft product groups) even though other Microsoft applications don’t seem to be quite as hobbled. However, this instance of (lack of) integration is particularly egregious because of two factors.

Factor #1: hotfix naming conventions. Back in the days of Windows 2000, you knew whether a hotfix was meant for your system, because whether you were running Workstation or Server, it was Windows 2000. Windows XP and Windows 2003 broke that naming link between desktop and server operating systems, often confusingly so once 64-bit versions of each were introduced (32-bit XP and 32-bit 2003 had their own patch versions, but 64-bit XP applied 64-bit 2003 hotfixes).

Then we got a few more twists to deal with. For example, did you know that Windows Vista and Windows Server 2008 are the same codebase under the hood? Or that Windows 7 and Windows Server 2008 R2, likewise, are BFFs? It’s true. On the other hand, the logic behind the naming of Windows Server 2003 R2 and Windows Server 2008 R2 was very different; Windows Server 2003 R2 was basically Windows Server 2003 with an SP and a few additional components, while Windows Server 2008 R2 has some substantially different code under the hood than Windows Server 2008 with SP. (I would guess that Windows Server 2008 R2 got the R2 moniker to capitalize on Windows 2008’s success, while Windows 7 got a new name to differentiate itself from the perceived train wreck that Vista had become, but that’s speculation on my part.)

At any rate, figuring out which hotfixes you need – and which versions of those hotfixes – is less than easy. Just remember that you’re always downloading the 64-bit patch, and that Windows 2008=Vista while Windows 2008 R2=Windows 7 and you should be fine.

Factor #2: hotfix release channels. None of these hotfixes show up under Windows Update. There’s no easy installer or tool to run that gets them for you. In fact, at least two of the hotfixes must be obtained directly from Microsoft Customer Support Services. All of these hotfixes include scary legal boilerplate about not being fully regression tested and thereby not supported unless you were directly told to install them by CSS. This has caused quite a bit of angst out in the Exchange community, enough so that various people are collecting the various hotfixes and making them available off their own websites in one easy package to download[1].

I know that these people mean well and are trying to save others from a frustrating experience, but in this case, the help offered is a bad idea. That same hotfix boilerplate means that everyone who downloads those hotfixes agrees not to redistribute them. There’s no exception for good intentions. If you think this is bogus, let me give you two things to think about:

  • You need to be able to verify that your hotfixes are legitimate and haven’t been tampered with. Do you really want to trust production mission-critical systems to hotfixes you scrounged from some random Exchange pro you only know through blog postings? Even if the pro is trustworthy, is their web site? Quite frankly, I trust Microsoft’s web security team to prevent, detect, and mitigate hotfix-affecting intrusions far more quickly and efficiently than some random Exchange professional’s web host. I’m not disparaging any of my colleagues out there, but let’s face it – we have a lot more things to stay focused on. Few of us (if any) have the time and resources the Microsoft security guys do.
  • Hotfixes in bundles grow stale. When you link to a KB article or Microsoft Download offering to get a hotfix, you’re getting the most recent version of that hotfix. Yes, hotfixes may be updated behind the scenes as issues are uncovered and testing results come in. In the case of the direct-from-CSS hotfixes, you can get them for free through a relatively simple process. As part of that process, Microsoft collects your contact info so they can alert you if any issues later come up with the hotfix that may affect you. Downloading a stale hotfix from a random bundle increases the chances of getting an old hotfix version that may cause issues in your environment, costing you a support incident. How many of these people are going to update their bundles as new hotfix versions become available? How quickly will they do it – and how will you know?

The Exchange product team has gotten an overwhelming amount of feedback on this issue, and they’ve responded on their blog. Not only do they give you a handy table rounding up links to get the hotfixes, they also collect a number of other potential gotchas and pieces of advice to learn from before beginning your SP1 deployment. Go check it out, then start deploying SP1 in your lab.

Good luck, and have fun! SP1 includes some killer new functionality, so take a look and enjoy!

[1] If you’re about to deploy a number of servers in a short period of time, of course you should cache these downloaded hotfixes for your team’s own use. Just make sure that you check back occasionally for updated versions of the hotfixes. The rule of thumb I’d use is about a week – if I’m hitting my own hotfix cache and it’s older than a week, it’s worth a couple of minutes to make sure it’s still current.

Manually creating a DAG FSW for Exchange 2010

I just had a comment from Chris on my Busting the Exchange Trusted Subsystem Myth post that boiled down to asking: what do you do when you have to create the FSW for an Exchange 2010 DAG manually?

In order for this to be necessary, all of the following conditions have to be true:

  1. You have no other Exchange 2010 servers in the AD site. This implies that at least one of your DAG nodes is multi-role — remember that you need to have a CAS role and an HT role in the same site as your MB roles, preferably two or more of each for redundancy and load. If you have another Exchange 2010 server, then it’s already got the correct permissions — let Exchange manage the FSW automatically.
  2. If the site in question is part of a DAG that stretches sites, there are more DAG nodes in this site than in the second site. If you’re trying to place the FSW in the site with fewer members, you’re asking for trouble[1].
  3. You have no other Windows 2003 or 2008 servers in the site that you consider suitable for Exchange’s automatic FSW provisioning[2]. By this, I mean you’re not willing to add the Exchange Trusted Subsystem security group to the server’s local Administrators group so that Exchange can create, manage, and repair the FSW on its own. If your only other server in the site is a DC, I can understand not wanting to add the group to the Domain Admins group.

If that’s the case, and you’re dead set on doing it this way, you will have to manually create the FSW yourself. An FSW consists of two pieces: the directory and the file share. The process for doing this is not documented anywhere on TechNet that I could find with a quick search, but happily, one Rune Bakkens blogs the following process:

To pre-create the FSW share you need the following:
– Create a folder, e.g. D:\FileWitness\DAGNAME
– Make Exchange Trusted Subsystem the owner of the folder
– Give Exchange Trusted Subsystem Full Control (NTFS)
– Share the folder as DAGNAME.FQDN (if you try a different share name, it won’t work; this is somehow required)
– Give the DAGNAME$ computer account Full Control (Share)

When you’ve done this, you can run: Set-DatabaseAvailabilityGroup -WitnessServer CLUSTERSERVER -WitnessDirectory D:\FileWitness\DAGNAME

You’ll get the following warning message:

WARNING: Specified witness server Cluster.fqdn is not an Exchange server, or part of the Exchange Servers security group.
WARNING: Insufficient permission to access file shares on witness server Cluster.fqdn. Until this problem is corrected, the database availability group may be more vulnerable to failures. You can use the set-databaseavailabilitygroup cmdlet to try the operation again. Error: Access is denied

This is expected, since the cmdlet tries to create the folder and share but doesn’t have the permissions to do so.

When this is done, the FSW should be configured correctly. To verify this, the following files should be created:

– VerifyShareWriteAccess
– Witness

Just for the record, I have not tested this process yet. However, I’ve had to do some FSW troubleshooting recently, and this matches what I’ve seen for naming conventions and permissions, so I’m fairly confident this should get you most of the way there. Thank you, Rune!
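
For those who prefer to script it, here’s an untested sketch of the same steps (the domain, DAG name, witness server, and path are hypothetical placeholders):

    # Run on the witness server from an elevated prompt; adjust the names for your environment.
    $path = 'D:\FileWitness\DAG01'
    New-Item -Path $path -ItemType Directory | Out-Null

    # NTFS: make Exchange Trusted Subsystem the owner and give it Full Control.
    icacls $path /setowner "CONTOSO\Exchange Trusted Subsystem"
    icacls $path /grant "CONTOSO\Exchange Trusted Subsystem:(OI)(CI)F"

    # Share: the share name must be the DAG FQDN; give the DAG computer account Full Control.
    net share "DAG01.contoso.com=$path" "/grant:CONTOSO\DAG01$,FULL"

    # Then, from EMS on an Exchange server, point the DAG at the pre-created witness.
    Set-DatabaseAvailabilityGroup -Identity DAG01 -WitnessServer witness01.contoso.com -WitnessDirectory 'D:\FileWitness\DAG01'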

Don’t worry, I haven’t forgotten the next installment of my Exchange 2010 storage series. It’s coming, honest!

[1] Consider the following two-site DAG scenarios:

  • If there’s an odd number of DAG nodes overall, Exchange won’t use the FSW at all; the nodes form a majority on their own.
  • An even number (n) of nodes in each site. The FSW is what makes a majority possible (you have 2n+1 votes, so a simple majority is n+1). Lose the FSW plus enough nodes that fewer than n+1 votes remain, and you lose quorum. If you lose the link between sites, only the site holding the FSW still has a majority; the other site goes down, which is exactly why you want the FSW in your primary site.
  • A number (n) of nodes in site A, with fewer nodes (m) in site B. If n+m is odd, you’re back in the first case and there is no FSW. If n+m is even, putting the FSW in site B is still meaningless – if you lose site A, site B has at most m+1 votes out of n+m+1, which is never a majority when m is less than n.

I am confident in this case that if I’ve stuffed up the math here, someone will come along to correct me. I’m pretty sure I’m right, though, and now I’ll have to write up another post to show why. Yay for you!

[2] You do have at least one other Windows server in that site, though, right — like your DC? Exchange doesn’t like not having a DC in the local site — and that DC should also be a GC.

On Patriotism

Patriotism is being committed to making things better for those around me no matter how good I personally have it. No government, political system, or economic theory is perfect; there will always be people who fall through the cracks. As a patriot, I have a responsibility to identify those cracks and work to mitigate them. Dedication to capitalism or socialism should not deaden me to the suffering of those who are not as fortunate as I am. In helping my fellow Americans, I am strengthening my country.

Patriotism is holding my elected officials, their political appointees, and the news media accountable for the choices and actions they take in my name. As a patriot, I have a responsibility to ensure that my representatives are conducting the business of government according to the values and principles they represented during election time. I need accurate and timely information on their performance and actions. I need to understand the difference between news and entertainment and know when each is appropriate.

Patriotism is acknowledging my country’s flaws with integrity and honesty instead of trying to cover them up or excuse them. When my government and policies fail – and being human institutions, they will fail – I will be tempted to downplay or minimize the impact of these failures. Instead, I must face these failures and their consequences forthrightly, make every reasonable effort to keep them from occurring again, and encourage my fellow Americans to do the same.

Patriotism is respecting the offices and institutions of my government even when expressing my disagreement with its policies and actions. Whether I am Democrat, Independent, Libertarian, Republican, some other party, or a member of none, I choose to discuss government and politics with civility and grace. I do not have to vilify political opponents in order to successfully engage their ideas and point out the failures of their actions. I can condemn bad choices and actions without hatred or unnecessary anger towards those who make them.

Patriotism is placing untainted personal ethics and morality ahead of my politics. I will not spread racism, classism, sexism, or other institutionalized forms of hatred. I have a responsibility to ensure that the voice of every American can be heard and that America provides as level a playing field as possible. I have a personal stake in making America an ideal of compassionate, reasoned behavior to Americans and to the people of the world. I understand that my country will not be truly great if her citizens are not also great.

Patriotism is patient and compassionate. It is not jealous or blind. It does not covet or boast. Patriotism builds up and exhorts. It does not destroy or belittle. It does not promote lies or avoid the truth. Patriotism does not demand perfection, but asks you to always give your best.

May we all strive to be better patriots.

How To Develop Patience

“Lord, give me patience, and give it to me now!” I’m willing to bet most of us have heard that joke (or some variant) at some point in our lives, but it underscores a serious question: how does one go about learning to exercise patience?

I’m no guru or saint, so I can’t answer the question for you, but for me it turns out the answer comes from a combination of two life experiences: my six and a half years at 3Sharp, and the nearly two years I’ve been studying karate. At 3Sharp, I learned how to do a lot of things that were beyond my initial comfort zone: developing deep technical presentations (and delivering them to large audiences), scoping and producing large technical projects such as books and whitepapers, and doing a wide variety of work from hands-on consulting to research projects.

I’ve talked in previous posts about the physical benefits I’ve seen from karate. However, two weeks ago I tested for my 5th kyu belt (the second of my three green belts) and that experience made me aware of some deep changes in my personality and character. The step from 6th kyu to 5th kyu was particularly hard for me, and it took some time to sort out the two reasons why.

The obvious cause was schedule. I took two months off of karate at the beginning of the year, due to a combination of factors. That’s a hard gap to come back from; I had problems after the three week hiatus I took because of the MCM class. After two months, I just didn’t feel that my presence in class was doing any good until I had the privilege of watching two of my friends from the Mukilteo dojo earn their black belts one Saturday morning in February. I walked away from that experience feeling a new level of commitment to karate. After all, I told myself at that point, if I study hard, I’ll get to 5th kyu sooner or later, and that’s half-way to black belt!

The other cause was technical. The test kata for 5th kyu (Pinan Shodan) is the karateka’s first introduction[1] to a well-known and complicated set of katas, and while most of it seems to be straightforward, there’s a lot boiling up from underneath the surface. Carlos Sensei began introducing us to a series of drills based around Pinan Shodan that unpack a lot of useful theory and practice from the first eight moves of the kata. There’s this very difficult pivot/kick/double punch move right in there (I dub it UberHardMove) that is a key element of the kata, and I was having a hard time getting the pivot, kick, and punches all coordinated together and working the right way without falling on my ass. In fact, I had such a difficult time with it that I can remember sometime around the end of December thinking that maybe I’d found the wall beyond which my lack of coordination was not going to let me pass. In addition, there’s some pretty gnarly tuite that goes along with all of this, and I found that I felt horribly weak on my tuite all around, let alone with the techniques I was supposed to be able to demonstrate some proficiency at.

What ended up happening, though, was that the two-month time-out did me unexpected good. I didn’t go to class during that period, but I kept practicing karate around the house. (Just ask Steph and the kids; they’ll tell you that it can be difficult to get me to knock it off and stop interfering with whatever they’re trying to do.) And what I did during that time was to take UberHardMove and break it down into components, the way I had previously been shown as a blue belt[2]. I combined that with specific suggestions given to me by both Carlos Sensei and Liam Sensei and picked UberHardMove down to bare bones.

When I finally came back to class, I came back finally believing that the whole concept of me one day earning my black belt wasn’t the world’s best joke. I came back believing that I’d already invested nearly two years and I was willing to invest even more. I didn’t have to be perfect; I gave myself permission to suck. I knew that I was going to make stupid mistakes that I wouldn’t make (like mixing up techniques in lower level katas) if I’d been in class the whole time. I knew that my endurance was going to be awful. I knew that there was a lot of rust to scrub off and deal with and that it wasn’t going to happen immediately. I knew that I needed to let my instructors know that I desperately needed help with my tuite techniques. I knew that I was going to have to have them explain the same things about UberHardMove multiple times until I finally grokked it. In short, I accepted failure without accepting being a failure.

That was March. I tested near the end of May. Somewhere in there, I became proficient with my tuite. I learned a measure of peace with UberHardMove; I’m still not great at it, but I mastered it enough to move on to the next lessons[3]. Perfection is, in fact, a bad word in our household. We think the concept of perfection is one of the worst lies that the Adversary ever got humans to accept.

When you stop trying to be perfect – when you give yourself permission to have flaws and failings and determine to be honest about them and learn from them rather than try to cover them up – something amazing tends to happen. You accept “doing your best” instead of “doing it better than everyone else.” You accept “that’s enough for now” instead of “that’s not good enough yet.” You develop a sense of faith that over time, your progress will trend upwards. With that faith, you can draw valuable lessons from your mistakes and missteps. You stop fighting the basic physical and neurological limits of how your body and mind acquire new proficiencies and start working within your limits to expand them instead of struggling against them to tear them down with brute force. You acquire patience – new and fledgling, but the seed of something that starts to affect how you deal with all of your life.

I’m no paragon of patience, but I can see clear changes. For example, I’ve been spending far less time playing Call of Duty on the Xbox in the last month or so. I have a better understanding of how that experience has been frustrating instead of fun and relaxing and I’m less willing to give in to that anymore.

I don’t know where this will go ultimately or at what pace. I can honestly say, though, that I’m okay with that. Will I get my black belt? I don’t know; there are many circumstances that could prevent or delay that. However, I certainly want to, and I finally know I’m capable of doing it, so I wouldn’t bet against me. But I also know that’s just another waypoint on the journey. It’s not an end. It’s a marker where I can say, “See what I’ve done so far? That’s pretty cool. Now I’ve learned enough that I can get serious about learning this stuff and helping pass it on to others.”

Two months ago, I’d have said I couldn’t wait for that day. You know what? That’s not true. I can wait. I will wait. And I will do so profitably.

[1] In our style, at least. There are other styles that place another Pinan kata before Pinan Shodan.

[2] In a nice twist of synchronicity, the person who showed me was, at the time, a helpful brown belt from Mukilteo who ended up being one of the two black belts I got to watch test. He has continued to be an amazing source of inspiration for me through what is now a large number of discouraging situations. Hi, Max!

[3] It’s not going away; I still practice it, and I know that it will get better as I learn more. In fact, those final four moves in Pinan Nidan where I’m in a cat stance might be helpful here, hmmm…

The Disk’s The Thing! Exchange 2010 Storage Essays, part 2

Greetings, readers! When I first posted From Whence Redundancy? (part 1 of this series of essays on Exchange 2010 storage) I’d intended to follow up with other posts a bit faster than I have been. So much for intentions; let us carry on.

In part 1, I began the process of talking about how I think the new Exchange 2010 storage options will play out in live Exchange deployments over the next several years. The first essay in this series discussed what I believe is the fundamental question at the heart of an Exchange 2010 storage design: at what level will you ensure the redundancy of your Exchange mailbox databases? The traditional approach has used RAID at the disk level, but Exchange 2010 DAGs allow you to deploy mailbox databases in JBOD configurations. While I firmly believe that’s the central question, answering it requires us to dig under the hood of storage.

With Exchange 2010, Microsoft specifically designed Exchange mailbox servers to be capable of using the lowest common denominator of server storage: a directly attached storage (DAS) array of 7,200 RPM SATA disks in a Just a Bunch of Disks (JBOD) configuration (what I call DJS). Understanding why they’ve made this shift requires us to understand more about disk drive technology. In this essay, part 2 of this series, let’s talk about disk technology and find out how Fibre Channel (FC), Serial Attached SCSI (SAS), and Serial Advanced Technology Attachment (SATA) disk drives are the same – and more importantly, what slight differences they have and what that means for your Exchange systems.

Exchange Storage SATA vs SAS

So here’s the first dirty little secret: for the most part, all disks are the same. Regardless of what type of bus they use, what form factor they are, what capacity they are, and what speed they rotate at, all modern disks use the same construction and principles:

  • They all have one or more thin rotating platters coated with magnetic media; the exact number varies by form factor and capacity. Platters look like mini CD-ROM disks, but unlike CDs, platters are typically double-sided. Platters have a rotational speed measured in revolutions per minute (RPMs).
  • Each side of a platter has an associated read-write head. The heads are mounted on a single actuator arm assembly that moves in toward the hub of the platter or out toward the rim. The heads do not touch the platter, but float very close to the surface. It takes a measurable fraction of a second for a head to relocate from one position to another; this is called its seek time.
  • The circle described by the head’s position on the platter is called a track. In a multi-platter disk, the heads move in synchronization (there’s no independent tracking per platter or side). As a result, each head is on the same track at the same time, describing a cylinder.
  • Each drive unit has embedded electronics that implement the bus protocol, control the rotational speed of the platters, and translate I/O requests into the appropriate commands to the heads. Even though there are different flavors, they all perform the same basic functions.

If you would like a more in-depth primer on how disks work, I recommend starting with this article. I’ll wait for you.

Good? Great! So that’s how all drives are the same. It’s time to dig into the differences. They’re relatively small, but small differences have a way of piling up. Take a look at Table 1 which summarizes the differences between various FC, SATA, and SAS disks, compared with legacy PATA 133 (commonly but mistakenly referred to as IDE) and SCSI Ultra 320 disks:

Table 1: Disk parameter differences by disk bus type

Type              Max wire bandwidth (Mbit/s)    Max data transfer (MB/s)
PATA 133          1,064                          133.5
SCSI Ultra 320    2,560                          320
SATA-I            1,500                          150
SATA-II           3,000                          300
SATA 6 Gb/s       6,000                          600
SAS 150           1,500                          150
SAS 300           3,000                          300
FC (copper)       4,000                          400
FC (optic)        10,520                         2,000

 

As of this writing, the most common drive types you’ll see for servers are SATA-II, SAS 300, and FC over copper. Note that while SCSI Ultra 320 drives in theory have a maximum data transfer higher than either SATA-II or SAS 300, in reality that bandwidth is shared among all the devices connected to the SCSI bus; both SATA and SAS have a one-to-one connection between disk and controller, removing contention. Also remember that SATA is only a half-duplex protocol, while SAS is a full-duplex protocol. SAS and FC disks use the full SCSI command set to allow better performance when multiple I/O requests are queued for the drive, whereas SATA uses the ATA command set. Both SAS and SATA implement tagged queuing, although they use two different standards (each of which has its pros and cons).

The second big difference is the average access time of the drive, which is the sum of multiple factors:

  • The average seek time of the heads. The actuator motors that move the heads from track to track are largely the same from drive to drive, so the time that the head movement itself contributes to average seek time is roughly constant. What varies is the length of the head move: is it moving to a neighboring track, or across the entire surface? We can average out small track changes with large track changes to come up with idealized numbers.
  • The average latency of the platter. How fast the platters are spinning determines how quickly a given sector containing the data to be read (or where new data will be written) moves into position under the head once the head is on the proper track. On average, a given sector will move into position in no more than half a rotation, so dividing 30,000 ms (half a minute) by the drive’s RPM gives the average rotational latency – for example, 30,000 / 7,200 ≈ 4.17 ms (see the sketch after this list).
  • The overhead caused by the various electronics and queuing mechanisms of the drive electronics, including any power saving measures such as reducing the spin rate of the drive platters. Although electricity is pretty fast and on-board electronics are relatively small circuits, there may be other factors (depending on the drive type) that may introduce delays into the process of fulfilling the I/O request received from the host server.
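
Putting the seek and rotational numbers together gives a rough theoretical ceiling for a drive’s random IOPS. Here’s a minimal sketch of the arithmetic (the ~9 ms average seek is an assumed, typical figure for a 7,200 RPM SATA drive, not a quoted spec):

    # Rough theoretical random IOPS for a single drive - a sketch, not a sizing tool.
    $rpm          = 7200
    $avgSeekMs    = 9.0                      # assumed typical average seek for 7,200 RPM SATA
    $rotLatencyMs = 30000 / $rpm             # half a rotation in ms (4.17 ms at 7,200 RPM)
    $iops         = 1000 / ($avgSeekMs + $rotLatencyMs)
    "{0:N0} theoretical random IOPS" -f $iops   # roughly 76

The gap between that naive number and the more conservative per-drive figures below is exactly why the calculator values are the safer planning numbers.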

What has the biggest impact is how fast the platter is spinning, as shown in Table 2:

Table 2: Average latency caused by rotation speed

Platter RPM    Average latency (ms)
7,200          4.17
10,000         3
12,000         2.5
15,000         2

 

(As an exercise, do the same math on the disk speeds for the average laptop drives. This helps explain why laptop drives are so much slower than even low-end 7,200 RPM SATA desktop drives.)

Rather than painfully take you through the result of all of these tables and calculations step by step, I’m simply going to refer you to work that’s already been done. Once we know the various averages and performance metrics, we can figure out how many I/O operations per second (IOPS) a given drive can sustain on average, according to the type, RPMs, and nature of the I/O (sequential or random). Since Microsoft has already done that work for us as part of the Exchange 2010 Mailbox Role Calculator (version 6.3 as of this writing), I’m going to simply use the values there. Let’s take a look at how all this plays out in Table 3 by selecting some representative values.

Table 3: Drive IOPS by type and RPM

Size    Type           RPM       Average Random IOPS
3.5”    SATA           5,400     50
2.5”    SATA           5,400     55
3.5”    SAS            5,400     52.5
3.5”    SAS            5,900     52.5
3.5”    SATA           7,200     55
2.5”    SATA           7,200     60
3.5”    SAS            7,200     57.5
2.5”    SAS            7,200     62.5
3.5”    FC/SCSI/SAS    10,000    130
2.5”    SAS            10,000    165
3.5”    FC/SCSI/SAS    15,000    180
2.5”    SAS            15,000    230

 

There are three things to note about Table 3.

  1. These numbers come from Microsoft’s Exchange 2010 Mailbox Role Calculator and are validated across vendors through extensive testing in an Exchange environment. While there may be minor variances between drive models and manufacturers, and these numbers may seem pessimistic compared to the calculated IOPS figures published for individual drives, they are good figures to use in the real world. Using calculated IOPS figures can lead both to a wide range of values, depending on the specific drive model and manufacturer, and to overestimating the IOPS the drive will actually provide to Exchange.
  2. For the most part, SAS and FC are indistinguishable from the IOPS point of view. Regardless of the difference between the electrical interfaces, the drive mechanisms and I/O behaviors are comparable.
  3. Sequential IOPS are not listed; they will be quite a bit higher than the random IOPS (that same 7,200 RPM SATA drive can provide 300+ IOPS for sequential operations). The reason is simple: although a lot of Exchange 2010 I/O has been converted from random to sequential, there’s still some random I/O going on, and that’s going to be the limiting factor.

The IOPS listed are per-drive IOPS. When you’re measuring your drive system, remember that the various RAID configurations have their own I/O overhead – particularly for writes – that will consume a certain amount of your raw per-drive IOPS, as the rough example below illustrates.
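
As an illustration of that overhead, here’s a back-of-the-envelope sketch using the commonly cited RAID write penalties (RAID 1/10 = 2, RAID 5 = 4, RAID 6 = 6); the drive count and read/write mix are made-up example numbers:

    # Rough usable random IOPS for a RAID group - a sketch; real arrays and controllers will vary.
    $drives       = 8
    $iopsPerDrive = 55        # 7,200 RPM SATA, from Table 3
    $writePenalty = 2         # RAID 1/10
    $readRatio    = 0.6       # assume a 60/40 read/write mix
    $raw    = $drives * $iopsPerDrive
    $usable = ($raw * $readRatio) + (($raw * (1 - $readRatio)) / $writePenalty)
    "{0:N0} usable random IOPS from {1} raw" -f $usable, $raw

A JBOD layout skips the write penalty entirely, which is one reason the DJS approach can live with slower drives.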

There are of course some other factors that we need to consider, such as form factor and storage capacity. We can address these according to some generalizations:

  • Since SAS and FC tend to have the same performance characteristics, the storage enclosure tends to differentiate between which technology is used. SAS enclosures can often be used for SATA drives as well, giving more flexibility to the operator. SAN vendors are increasingly offering SAS/SATA disk shelves for their systems because paying the FC toll can be a deal-breaker for new storage systems.
  • SATA disks tend to have a larger storage capacity than SAS or FC disks. There are reasons for this, but the easiest one to understand is that SATA, being traditionally a consumer technology, has a lower duty cycle and therefore less stringent quality control specifications that must be met.
  • SATA disks tend to be offered with lower RPMs than SAS and FC disks. Again, we can acknowledge that quality control plays a part here – the faster a platter spins, the more stringently the drive components need to meet their specifications for a longer period of time.
  • 2.5” drives tend to have lower capacity than their 3.5” counterparts. This makes sense – they have smaller platters (and may have fewer platters in the drive).
  • 2.5” drives tend to use less power and generate less heat than equivalent 3.5” drives. This too makes sense – the smaller platters have less mass, requiring less energy to sustain rotation.
  • 2.5” drives tend to permit a higher drive density in a given storage chassis while using only fractionally more power. Again, this makes sense based on the previous two points; I can physically fit more drives into a given space, sometimes dramatically so.

Let’s look at an example. A Supermicro SC826 chassis holds 12 3.5” drives with a minimum of 800W power while the equivalent Supermicro SC216 chassis holds 24 2.5” drives with a minimum of 900W of power in the same 2Us of rack space. Doubling the number of drives makes up for the capacity difference between the 2.5” and 3.5” drives, provides twice as many spindles and allows a greater aggregate IOPS for the array, and only requires 12.5% more power.

The careful reader has noted that I’ve had very little to say about capacity in this essay, other than the observation above that SATA drives tend to have larger capacities, and that 3.5” drives tend to be larger than 2.5” drives. From what I’ve seen in the field, the majority of shops are just now looking at 2.5” drive shelves, so it’s safe to assume 3.5” is the norm. As a result, the 3.5” 7,200 RPM SATA drive represents the lowest common denominator for server storage, and that’s why the Exchange product team chose that drive as the performance bar for DJS configurations.

Exchange has been limited by performance (IOPS) requirements for most of its lifetime; by going after DJS, the product team has been able to take advantage of the fact that the capacity of these drives is the first to grow. This is why I think that Microsoft is betting that you’re going to want to simplify your deployment, aim for big, cheap, slow disks, and let Exchange DAGs do the work of replicating your data.

Now that we’ve talked about RAID vs. JBOD and SATA vs. SAS/FC, we’ll need to examine the final topic: SAN vs. DAS. Look for that discussion in Part 3, which will be forthcoming.

A Psalm for Karatekas

Last night I went to my first karate class in several weeks. On the way, my brain reinterpreted Psalm 23 from the viewpoint of a karateka. Enjoy.

1 The LORD is my sensei; I shall not fear.

2 He makes me work out with white belts; he leads me through katas.

3 He perfects my form. He leads me in the path of new techniques for the sake of advancement.

4 Though I walk through the valley of the shadow of death, I fear no evil, for his teachings are with me; my kama and bo staff comfort me.

5 He prepares testing for me in the presence of my fellow karateka; he adorns my waist with new obi, my gi fits better.

6 Surely discipline and health shall follow me all the days of my studies, and I will dwell in the dojo of the LORD forever.

More Exchange blogging with Trace3!

I just wanted to drop a quick note to let you all know that I’ll be cross-posting all of my Exchange related material both here and at the Trace3 blog. The Trace3 blog is a multi-author blog, so you’ll get not only all my Exchange-related content, but you’ll get a variety of other interesting discussions from a number of my co-workers.

To kick it off, I’ve updated my From Whence Redundancy? Exchange 2010 Storage Essays, Part 1 post with some new material on database reseed times and reposted it there in its entirety. Don’t worry, I’ve also updated it here.

What Exchange 2010 on Windows Datacenter Means

Exchange Server has historically come in two flavors across many versions – Standard Edition and Enterprise Edition. The main difference this licensing choice makes for you is the maximum number of supported mailbox databases, as shown in Table 1:

Version | Standard Edition | Enterprise Edition
Exchange 2003 | 1 (75GB max) | 20
Exchange 2007 | 5 | 50
Exchange 2010 | 5 | 100

Table 1: Maximum databases per Exchange editions

However, the Exchange Server edition is not directly tied to the Windows Server edition:

  • For Exchange 2003 failover cluster mailbox servers, Exchange 2007 SCC/CCR environments [1], and  Exchange 2010 DAG environments, you need Windows Server Enterprise Edition in order to get the MSCS cluster component framework.
  • For Exchange 2003 servers running purely as bridgeheads or front-end servers, or Exchange 2007/2010 HT, CAS, ET, and UM servers, you only need Windows Server Standard Edition.

I’ve seen some discussion around the fact that Exchange 2010 will install on Windows Server 2008 Datacenter Edition and Windows Server 2008 R2 Datacenter Edition, even though it’s not supported there and is not listed in the Operating System requirements section of the TechNet documentation.

HOWEVER…if we look at the Prerequisites for Exchange 2010 Server section of the Exchange Server 2010 Licensing site, we now see that Datacenter edition is, in fact, listed, as shown in Figure 1:

Figure 1: Exchange 2010 server license comparison

This is pretty cool, and the appropriate TechNet documentation is in the process of being updated to reflect this. What this means is that you can deploy Exchange 2010 on Windows Server Datacenter Edition; the differences between editions of Windows Server 2008 R2 are found here.[2] If you take a quick scan through the various feature comparison charts in the sidebar, you might wonder why anyone would want to install Exchange 2010 on Windows Server Datacenter Edition; it’s more costly and seems to provide the same benefits. However, take a look at the technical specifications comparison; this is, I believe, the meat of the matter:

  • Both editions give you a maximum of 2 TB of RAM – more than you can realistically throw at Exchange 2010.
  • Enterprise Edition gives you support for a maximum eight (8) x64 CPU sockets, while Datacenter Edition gives you sixty-four (64). With quad-core CPUs, this means a total of 32 cores under Enterprise vs. 256 cores under Datacenter.
  • With the appropriate hardware, you can hot-add memory in Enterprise Edition. However, you can’t perform a hot-replace, nor can you hot-add or hot-replace CPUs under Enterprise. With Datacenter, you can hot-add and hot-remove both memory and CPUs.

These specs seem compelling at first glance, until you look at the recommended maximum configurations for Exchange 2010 server sizing. IIRC, the maximum recommended CPU configuration for most Exchange 2010 server configurations (including multirole servers) is 24 cores – which fits within the 8-socket limit of Enterprise Edition using quad-core CPUs.

With both Intel and AMD now offering hexa-core (6 core) CPUs, you can move up to 48 cores in Enterprise Edition. This is more than enough for any practical deployment of Exchange Server 2010 I can think of at this time, unless future service packs drastically change the CPU performance factors. Both Enterprise and Datacenter give you a ceiling of 2TB of RAM, which is far greater than required by even the most aggressively gigantic mailbox load I’d want to place on a single server. I’m having a difficult time seeing how anyone could realistically build out an Exchange 2010 server that goes beyond the performance and scalability limits of Enterprise Edition in any meaningful way.

In fact, I can think of only three reasons someone would want to run Exchange 2010 on Windows Server Datacenter Edition:

  • You have spare Datacenter Edition licenses, aren’t going to use them, and don’t want to buy more Enterprise Edition licenses. This must be a tough place to be in, but it can happen under certain scenarios.
  • You have very high server availability requirements and require the hot-add/hot-replace capabilities. This will get costly – the server hardware that supports this isn’t cheap – but if you need it, you need it.
  • You’re already running a big beefy box with Datacenter and virtualization[3]. The box has spare capacity, so you want to make use of it.

The first two make sense. The last one, though, I’d be somewhat leery of doing. Seriously, think about this – I’m spending money on monstrous hardware with awesome fault tolerance capabilities, I’ve forked over for an OS license[4] that gives me the right to unlimited virtual machines, and now I’m going to clutter up my disaster recovery operations by mixing Exchange and other applications (including virtualization) in the same host OS instance? That may be great for a lab environment, but I’d have a long conversation with any customer who wanted to do this in production. Seriously, just spin up a new VM, use Windows Server Enterprise Edition, and go to town. The hardware configuration flexibility I give up by going virtual is more than offset by keeping my Exchange server compartmentalized on its own machine, along with the ability to move that virtual machine to any virtualization host I have.

So, there you have it: Exchange 2010 can now be run on Windows Server Datacenter Edition, which means yay! for options. In the end, though, I don’t expect this to make a difference for any of the deployments I’m likely to be working on. This is a great move for the small handful of customers who really need it.

[1] MSCS is not required for Exchange 2007 SCR, although manual target activation can be easier in some scenarios if your target is configured as a single passive node cluster.

[2] From what I can tell, the same specs seem to be valid for Windows Server 2008, with the caveat that Windows Server 2008 R2 doesn’t offer a 32-bit version so the chart doesn’t give that information. However, since Exchange 2010 is x64 only, this is a moot point.

[3] This is often an attractive option, since you can host an unlimited number of Windows Server virtual machines without having to buy further Windows Server licenses for them.

[4] Remember that Datacenter is not licensed at a flat cost per server like Enterprise is; it’s licensed per socket. The beefier the machine you run it on, the more you pay.

Things They Forgot

Pat Robertson’s comments on Haiti basically boil down to “they got what was coming to them.” Mr. Robertson, I think you forgot Matthew 25:34-46 (KJV):

34 Then shall the King say unto them on his right hand, Come, ye blessed of my Father, inherit the kingdom prepared for you from the foundation of the world: 35 For I was an hungred, and ye gave me meat: I was thirsty, and ye gave me drink: I was a stranger, and ye took me in: 36 Naked, and ye clothed me: I was sick, and ye visited me: I was in prison, and ye came unto me. 37 Then shall the righteous answer him, saying, Lord, when saw we thee an hungred, and fed thee? or thirsty, and gave thee drink? 38 When saw we thee a stranger, and took thee in? or naked, and clothed thee? 39 Or when saw we thee sick, or in prison, and came unto thee? 40 And the King shall answer and say unto them, Verily I say unto you, Inasmuch as ye have done it unto one of the least of these my brethren, ye have done it unto me.

41 Then shall he say also unto them on the left hand, Depart from me, ye cursed, into everlasting fire, prepared for the devil and his angels: 42 For I was an hungred, and ye gave me no meat: I was thirsty, and ye gave me no drink: 43 I was a stranger, and ye took me not in: naked, and ye clothed me not: sick, and in prison, and ye visited me not. 44 Then shall they also answer him, saying, Lord, when saw we thee an hungred, or athirst, or a stranger, or naked, or sick, or in prison, and did not minister unto thee? 45 Then shall he answer them, saying, Verily I say unto you, Inasmuch as ye did it not to one of the least of these, ye did it not to me. 46 And these shall go away into everlasting punishment: but the righteous into life eternal.

Rush Limbaugh may have forgotten the above as well. His claims that Obama is using humanitarian aid for political profit definitely seem to have forgotten Matthew 7:15-20:

15 Beware of false prophets, which come to you in sheep’s clothing, but inwardly they are ravening wolves. 16 Ye shall know them by their fruits. Do men gather grapes of thorns, or figs of thistles? 17 Even so every good tree bringeth forth good fruit; but a corrupt tree bringeth forth evil fruit. 18 A good tree cannot bring forth evil fruit, neither can a corrupt tree bring forth good fruit. 19 Every tree that bringeth not forth good fruit is hewn down, and cast into the fire. 20 Wherefore by their fruits ye shall know them.

If that last passage seems a bit murky, here’s a quote from C. S. Lewis’s The Last Battle (the last book of the Chronicles of Narnia) that I have always loved. The speaker is a Calormene soldier, Emeth, who has had a life-changing encounter with Aslan during the last hours of Narnia:

He answered, Child, all the service thou hast done to Tash, I account as service done to me. Then by reasons of my great desire for wisdom and understanding, I overcame my fear and questioned the Glorious One and said, Lord, is it then true, as the Ape said, that thou and Tash are one? The Lion growled so that the earth shook (but his wrath was not against me) and said, It is false. Not because he and I are one, but because we are opposites, I take to me the services which thou hast done to him. For I and he are of such different kinds that no service which is vile can be done to me, and none which is not vile can be done to him. Therefore if any man swear by Tash and keep his oath for the oath’s sake, it is by me that he had truly sworn, though he know it not, and it is I who reward him. And if any man do a cruelty in my name, then, though he says the name Aslan, it is Tash whom he serves and by Tash his deed is accepted. Dost thou understand, Child?

By their fruits ye shall know them…whatever their claims.

Poor Google? Not.

Since yesterday, the Net has been abuzz because of Google’s blog posting about their discovery that they were being hacked by China. Almost every response I’ve seen has focused on the attempted hacking of the mailboxes of Chinese human rights activists.

That’s exactly where Google wants you to focus.

Let’s take a closer look at their blog post.

Paragraph 1:

In mid-December, we detected a highly sophisticated and targeted attack on our corporate infrastructure originating from China that resulted in the theft of intellectual property from Google.

Paragraph 2:

As part of our investigation we have discovered that at least twenty other large companies from a wide range of businesses–including the Internet, finance, technology, media and chemical sectors–have been similarly targeted.

Whoa. That’s some heavy-league stuff right there. Coordinated, targeted commercial espionage across a variety of vertical industries. Google first accuses China of stealing its intellectual property, then says that they weren’t the only ones. Mind you, industry experts – including the United States government – have been saying the same thing for years. Cries of “China hacked us!” happen relatively frequently in the IT security industry, enough so that it blends into the background noise after a while.

My first question is: why, exactly, did Google think this wouldn’t happen to them? They’re a big fat juicy target on many levels. Gmail with thousands upon thousands of juicy mailboxes? Check! Search engine code and data that allows sophisticated monitoring and manipulation of Internet queries? Check! Cloud-based office documents that just might contain some competitive value? Check!

My second question is: why, exactly, is Google trying to shift the focus of the story away from the IP theft (which, by their own report, was successful) and cloak their actions in the “oh, noes, China tried to grab dissidents’ email” moral veil?

Paragraph 3:

Second, we have evidence to suggest that a primary goal of the attackers was accessing the Gmail accounts of Chinese human rights activists. Based on our investigation to date we believe their attack did not achieve that objective. Only two Gmail accounts appear to have been accessed, and that activity was limited to account information (such as the date the account was created) and subject line, rather than the content of emails themselves.

Two accounts, people, and the attempt wasn’t even fully successful. And the moral outrage shimmering from the screen in Paragraph 4, when Google says that “dozens” of accounts were accessed by third parties not through any sort of security flaw in Google, but rather through what is probably malware, is enough to knock you over.

Really, Google? You’re just now tumbling to the fact that people’s GMail accounts are getting hacked through malware?

I don’t buy the moral outrage. I think the meat of the matter is back in paragraph 1. I believe that the rest of the outrage is a smokescreen to repaint Google into the moral high ground for their actions, when from the sidelines here it certainly looks like Google chose knowingly to play with fire and is now suddenly outraged that they, too, got burned.

Google, you have enough people willing to play along with your attempt to be the victim. I’m not one of them. You compromised human rights principles in 2006 and knowingly put your users into harm’s way. “Do no evil,” my ass.

From Whence Redundancy? Exchange 2010 Storage Essays, part 1

Updated 4/13 with improved reseed time data provided by item #4 in the Top 10 Exchange Storage Myths blog post from the Exchange team.

Over the next couple of months, I’d like to slowly sketch out some of the thoughts and impressions that I’ve been gathering about Exchange 2010 storage over the last year or so and combine them with the specific insights that I’m gaining at my new job. In this inaugural post, I want to tackle what I have come to view as the fundamental question that will drive the heart of your Exchange 2010 storage strategy: will you use a RAID configuration or will you use a JBOD configuration?

In the interests of full disclosure, the company I work for now is a strong NetApp reseller, so of course my work environment is conducive to designing Exchange in ways that make it easy to sell the strengths of NetApp kit. However, part of the reason I picked this job is precisely because I agree with how they address Exchange storage and how I think the Exchange storage paradigm is going to shake out in the next 3-5 years as more people start deploying Exchange 2010.

In Exchange 2010, Microsoft re-designed the Exchange storage system to target what we can now consider to be the lowest common denominator of server storage: a directly attached storage (DAS) array of 7200 RPM SATA disks in a Just a Box of Disks (JBOD) configuration. This DAS/JBOD/SATA (what I will now call DJS) configuration has been an unworkable configuration for Exchange for almost its entire lifetime:

  • The DAS piece certainly worked for the initial versions of Exchange; that’s what almost all storage was back then. Big centralized SANs weren’t part of the commodity IT server world, reserved instead for the mainframe world. Server administrators managed server storage. The question was what kind of bus you used to attach the array to the server. However, as Exchange moved to clustering, it required some sort of shared storage. While a shared SCSI bus was possible, it not only felt like a hack, but also didn’t scale well beyond two nodes.
  • SATA, of course, wasn’t around back in 1996; you had either IDE or SCSI. SCSI was the serious server administrator’s choice, providing better I/O performance for server applications, as well as faster bus speeds. SATA, and its big brother SAS, both are derived from the lessons that years of SCSI deployments have provided. Even for Exchange 2007, though, SATA’s poor random I/O performance made it unsuitable for Exchange storage. You had to use either SAS or FC drives.
  • RAID has been a requirement for Exchange deployments, historically, for two reasons: to combine enough drive spindles together for acceptable I/O performance (back when disks were smaller than mailbox databases), and to ensure basic data redundancy. Redundancy was especially important once Exchange began supporting shared storage clustering and required both aggregate I/O performance only achievable with expensive disks and interfaces as well as the reduced chance of a storage failure being a single point of failure.

If you look at the marketing material for Exchange 2010, you would certainly be forgiven for thinking that DJS is the only smart way to deploy Exchange 2010, with SAN, RAID, and non-SATA systems supported only for those companies caught in the mire of legacy deployments. However, this isn’t at all true. There are a growing number of Exchange experts (and not just those of us who either work for storage vendors or resell their products) who think that while DJS is certainly an interesting option, it’s not one that’s a good match for every customer.

In order to understand why DJS is truly possible in Exchange 2010, and more importantly begin to understand where DJS configurations are a good fit and what underlying conditions and assumptions you need to meet in order to get the most value from DJS, we need to separate these three dimensions and discuss them separately.

JBOD vs RAID

While I will go into more detail on all three dimensions at a later date, I want to focus on the JBOD vs. RAID question now. If you need some summaries, then check out fellow Exchange MVP (and NetApp consultant) John Fullbright’s post on the economics of DAS vs. SAN, as well as Microsoft’s Matt Gossage and his TechEd 2009 session on Exchange 2010 storage. Although there are good arguments for diving into drive technology or storage connection debates, I’ve come to believe that the central philosophical question you must answer in your Exchange 2010 design is at what level you will keep your data redundant. Until Exchange 2007, you had only one option: keeping your data redundant at the disk controller level. Using RAID technologies, you had two copies of your data[1]. Because you had a second copy of the data, shared storage clustering solutions could be used to provide availability for the mailbox service.

With Exchange 2007’s continuous replication features, you could add in data redundancy at the application level and avoid the dependency of shared storage; CCR creates two copies, and SCR can be used to create one or more additional copies off-site. However, given the realities of Exchange storage, for all but the smallest deployments, you had to use RAID to provide the required number of disk spindles for performance. With CCR, this really meant you were creating four copies; with SCR, you were creating an additional two copies for each target replica you created.

This is where Exchange 2010 throws a wrench into the works. By virtue of a re-architected storage engine, it’s possible under specific circumstances to design a mailbox database that will fit on a single drive while still providing acceptable performance. The reworked continuous replication options, now simplified into the DAG functionality, create additional copies at the application level. If you hit that sweet spot of the 1:1 database-to-disk ratio, then you only have a single copy of the data per replica and can get an n-1 level of redundancy, where n is the number of replicas you have. This is clearly far more efficient for disk usage…or is it? The full answer is complex; the simple answer is, “In some cases.”

In order to get the 1:1 database to disk ratio, you have to follow several guidelines:

  1. Have at least three replicas of the database in the DAG, regardless of which sites they are in. Doing so allows you to place both the EDB and transaction log files on the same physical drive, rather than separating them as you did in previous versions of Exchange.
  2. Ensure that you have at least two replicas per site. The reason for this is that unlike Exchange 2007, you can reseed a failed replica from another passive copy. This allows you to avoid reseeding over your WAN, which is something you do not want to do.
  3. Size your mailbox databases to include no more users than will fit in the drive’s performance envelope. Although Exchange 2010 converts many of the random I/O patterns to sequential, giving better performance, not all has been converted, so you still have to plan against the random I/O specs.
  4. Ensure that write transactions can get written successfully to disk. Use a battery-backed caching controller for your storage array to ensure the best possible performance from the disks. Use write caching for the physical disks, which means ensuring each server hosting a replica has a UPS.
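
If you want a quick way to see where an existing environment stands against guideline #1, a minimal EMS sketch along these lines will do it (the three-copy threshold is just the guideline above, not a hard limit):

    # List each mailbox database and how many copies it has, warning on any database
    # with fewer than the three copies recommended above. Run this from the Exchange
    # Management Shell on an Exchange 2010 server.
    Get-MailboxDatabase | ForEach-Object {
        $copies = @(Get-MailboxDatabaseCopyStatus "$($_.Name)\*").Count
        if ($copies -lt 3) { Write-Warning "$($_.Name) has only $copies copies" }
        else { "$($_.Name): $copies copies" }
    }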

At this point, you probably have disk capacity to spare, which is why Exchange 2010 allows the creation of archive mailboxes in the same mailbox database. All of the user’s data is kept at the same level of redundancy, and the archived data – which is less frequently accessed than the mainline data – is stored without additional significant disk or I/O penalty. This all seems to indicate that JBOD is the way to go, yes? Two copies in the main site, two off-site DR copies, and I’m using cheaper storage with larger mailboxes and only four copies of my data instead of the minimum of six I’d have with CCR+SCR (or the equivalent DAG setup) on RAID configurations.

Not so fast. Microsoft’s claims around DJS configurations usually talk about the up-front capital expenditures. There’s more to a solid design than just the up-front storage price tag, and even if the DJS solution does provide savings in your situation, that is only the start. You also need to think about the lifetime of your storage and all the operational costs. For instance, what happens when one of those 1:1 drives fails?

Well, if you bought a really cheap DAS array, your first indication will be when Exchange starts throwing errors and the active copy moves to one of the other replicas. (You are monitoring your Exchange servers, right?) More expensive DAS arrays usually directly let you know that a disk failed. Either way, you have to replace the disk. Again, with a cheap white-box array, you’re on your own to buy replacement disks, while a good DAS vendor will provide replacements within the warranty/maintenance period. Once the disk is replaced, you have to re-establish the database replica. This brings us to the wonderful manual process known as database reseeding, which is not only a manual task, but can take quite a significant amount of time – especially if you made use of archival mailboxes and stuffed that DJS configuration full of data. Let’s take a closer look at what this means to you.

[Begin 4/13 update]

There’s a dearth of hard information out there about what types of reseed throughputs we can achieve in the real world, and my initial version of this post, where I assumed 20GB/hour as an “educated guess,” earned me a bit of ribbing in some quarters. In my initial example, I said that if we can reseed 20GB of data per hour (from a local passive copy, to avoid the I/O hit to the active copy), that’s 10 hours for a 200GB database, 30 hours for a 600GB database, or 60 hours – two and a half days! – for a 1.2 TB database[2].

According to the Top 10 Exchange Storage Myths post on the Exchange team blog, 20GB/hour is way too low; in their internal deployments, they’re seeing between 35 and 70GB per hour. How would these speeds affect reseed times in my examples above? Well, let’s look at Table 1:

Table 1: Example Exchange 2010 Mailbox Database reseed times

Database Size | Reseed Throughput | Reseed Time
200GB | 20GB/hr | 10 hours
200GB | 35GB/hr | 6 hours
200GB | 50GB/hr | 4 hours
200GB | 70GB/hr | 3 hours
600GB | 20GB/hr | 30 hours
600GB | 35GB/hr | 18 hours
600GB | 50GB/hr | 12 hours
600GB | 70GB/hr | 9 hours
1.2TB | 20GB/hr | 60 hours
1.2TB | 35GB/hr | 35 hours
1.2TB | 50GB/hr | 24 hours
1.2TB | 70GB/hr | 18 hours
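
If you want to run these numbers against your own database sizes and measured throughput, the arithmetic is trivial; here’s a small PowerShell scratchpad that reproduces Table 1, rounding up to whole hours:

    # Reseed-time scratchpad: hours = database size / sustained reseed throughput.
    # Substitute your own sizes and measured rates; these values match Table 1 above.
    $sizesGB        = 200, 600, 1200
    $ratesGBperHour = 20, 35, 50, 70
    foreach ($size in $sizesGB) {
        foreach ($rate in $ratesGBperHour) {
            $hours = [math]::Ceiling($size / $rate)
            "{0,5} GB at {1,2} GB/hr : about {2} hours" -f $size, $rate, $hours
        }
    }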

As you can see, reseed time can be a key variable in a DJS design. In some cases, depending on your business needs, these times could make or break whether this is a good design. I’ve done some asking around and found that reseed rates in the field are all over the map. I had several people talk to me at the MVP Summit and ask under what conditions I’d seen 20GB/hour, because in their experience that figure was too high. Astrid McClean and Matt Gossage of Microsoft had a great discussion with me and obviously felt that 20GB/hour is way too low.

Since then, I’ve received a lot of feedback and like I said, it’s all over the map. However, I’ve yet to hear anyone outside of Microsoft publicly state a reseed throughput higher than 20GB/hour. What this says to me is that getting the proper network design in place to support a good reseed rate hasn’t been a big point in deployments so far, and that in order to make a DJS design work, this may need to be an additional consideration.

If your replication network is designed to handle the amount of traffic required for normal DAG replication and doesn’t have sufficient throughput to handle reseed operations, you may be hurting yourself in the unlikely event of suffering multiple simultaneous replica failures on the same mailbox database.

This is a bigger concern for shops that have a small tolerance for any given drive failure. One of the unspoken effects of a DJS DAG design is that you are trading number of replicas – and database-level failover – for replica rebuild time. In most environments, dropping from four replicas to three, or from three to two, during the time it takes to detect the disk failure, replace the disk, and complete the reseed is acceptable; as long as you still have sufficient replicas, you can probably live with the rebuild taking longer.

All during the reseed time, you have one fewer replica of that database to protect you. If your business processes and requirements don’t give you that amount of leeway, you either have to design smaller databases (and waste the disk capacity, which brings us right back to the good old bad days of Exchange 2000/2003 storage design) or use RAID.

[End 4/13 update]

Now, with a RAID solution, we don’t have that same problem. We still have a RAID volume rebuild penalty, but that’s happening inside the disk shelf at the controller, not across our network between Exchange servers. And with a well-designed RAID solution such as generic RAID 10 (1+0) or NetApp’s RAID-DP, you can actually survive the loss of more disks at the same time. Plus, a RAID solution gives me the flexibility to populate my databases with smaller or larger mailboxes as I need, and aggregate out the capacity and performance across my disks and databases. Sure, I don’t get that nice 1:1 disk to database ratio, but I have a lot more administrative flexibility and can survive disk loss without automatically having to begin the reseed dance.

Don’t get me wrong – I’m wildly enthusiastic that I as an Exchange architect have the option of designing to JBOD configurations. I like having choices, because that helps me make the right decisions to meet my customers’ needs. And that, in the end, is the point of a well-designed Exchange deployment – to meet your needs. Not the needs of Microsoft, and not the needs of your storage or server vendors. While I’m fairly confident that starting with a default NetApp storage solution is the right choice for many of the environments I’ll be facing, I also know how to ask the questions that lead me to consider DJS instead. There’s still a place for RAID at the Exchange storage table.

In further installments over the next few months, I’ll begin to address the SATA vs. SAS/FC and DAS vs. SAN arguments as well. I’ll then try to wrap it up with a practical and realistic set of design examples that pull all the pieces together.

[1] RAID-1 (mirroring) and RAID-10 (striping and mirroring) both create two physical copies of the data. RAID-5 does not, but it allows you to survive the loss of a single drive — effectively giving you a virtual second copy of the data.

[2] Curious why I picked these database sizes? 200GB is the recommended maximum size for Exchange 2007 (due to backup limitations), and 600GB/1.2TB are the realistic recommended maximums you can get from 1TB and 2TB disks today in a DJS replica-per-disk deployment; you need to leave room for the content index, transaction logs, and free space.

A Virtualization Metaphor

This is a rare kind of blog post for me, because I’m basically copying a discussion that rose from one of my Twitter/Facebook status updates earlier today:

I wish I could change the RAM, CPU configuration on running VMs in #VMWare and have the changes apply on next reboot.

This prompted one of my nieces, a lovely and intelligent young lady in high school, to ask me to say that in English.

I pondered just hand-waving it, but I was loath to do so. Like I said, she’s intelligent. I firmly believe that kids live up to your expectations; if you talk down to them and treat them like they’re dumb because that’s what you expect, they’re happy to be that way. On the other hand, if you expect them to be able to understand concepts when given the proper explanations, even if they may not immediately grasp the fine points, I’ve found that kids are actually quite able to do so – better than many adults, truth be told.

So, this is my answer:

The physical machinery of computers is called hardware. The programs that run on them (Windows, games, etc.) are software.
VMware is software that allows you to create virtual machines. That is, instead of buying (for example) 10 computers to do different tasks and having most of them sit with unused memory and processor power, you buy one or two really beefy computers and run VMware. That allows you to create virtual machines in software, so those two computers become 10. I don’t have to buy quite as much hardware because each virtual machine only uses the resources it needs, leaving the rest for the other virtual machines.

However, one of the problems with VMware currently is that if you find you’ve given a virtual machine too much memory or processor (or not enough), you have to shut it down, make the change, then start it back up. I want the software to be smart enough to take the change *now* and automatically apply it when it can, such as when the virtual machine is rebooting. For a physical computer, it makes sense — I have to power it down, crack the case open, put memory in, etc. — but for a virtual computer, it should be able to be done in software.

Think of it this way: hardware is like a closet. You can build a big closet or a small closet or a medium closet, but each closet holds a finite amount of stuff. Software is the stuff you put in the closet — clothes, shoes, linens, etc. You can dump a bunch of stuff into a big closet, but doing so makes it cluttered and hard to use. So if you use multiple smaller closets, you’re wasting space because you probably won’t fill every one exactly.

In this metaphor, virtualization is like a closet organizer system. You can add a clothing rod here to hang dresses and blouses on, and underneath that add a shelf or two for shoes, while to the side you have more shelves for pants and towels and other stuff. You waste a little bit of your closet space for the organizer, but you keep everything organized and clutter-free, which means you’re better off and take less time to keep everything up.

Of course, this metaphor fails on my original point, because it totally makes sense that you have to take all the stuff off the shelves before moving those shelves around. In the world of software, though, it doesn’t necessarily make sense — it’s just that the right people didn’t think of it at the right time.

Clear?

I came close to busting out Visio and starting to diagram some of this. I decided not to.

Edit: I don’t have to diagram it! Thank you, Ikea, and your lovely KOMPLEMENT wardrobe organizer line!

Ikea KOMPLEMENT organizer as virtualization software

North Pole data leakage woes

Not even old Saint Nick is immune from the need for a good data management and protection regime.

First, we have confirmation that his naughty and nice database has been hacked.

Now, there are credible rumors that the North Pole CIO has been covering up a years-long, systemic problem with Santa losing mobile devices. According to unidentified sources, the list of allegations includes:

  • Lack of priority for safeguarding key data, especially through mobile systems. Recent refits for the sled have focused on tracking transponders for “greater publicity”, with no corresponding upgrades to mobile IT systems. These systems are specifically characterized as “obsolete 286 systems running DOS and home-brew Paradox applications written by some dentist in his spare time.”
  • Habitual problems with smartphones. In order to ensure inexpensive world-wide access, Santa’s system includes the use of multiple handsets from strategically selected regional carriers. “In the last several years, Santa has yet to come back from his Christmas Eve run without having lost at least three of his devices,” one insider claims, “and of course we don’t have remote wipe capabilities. That would require him spending money.”
  • Lax information and network practices, including no formal security policies or processes. Remote accesses aren’t even protected via SSL, according to sources, since “anyone who’s so cheap they haven’t updated stock PR footage of elves making wooden toys isn’t likely to shell out for a respected SSL certificate or PKI infrastructure.”

It will take time to gather confirmation of these claims, but if they are true, it shows a shocking disregard for basic security best practices at the North Pole.

Busting the Exchange Trusted Subsystem Myth

It’s amazing what kind of disruption leaving your job, looking for a new job, and starting to get settled in to a new job can have on your routines. Like blogging. Who knew?

At any rate, I’m back with some cool Exchange blogging. I’ve been getting a chance to dive into a “All-Devin, All-Exchange, All The Time” groove and it’s been a lot of fun, some of the details of which I hope to be able to share with you in upcoming months. In the process, I’ve been building a brand new Exchange 2010 lab environment and ran smack into a myth that seems to be making the rounds among people who are deploying Exchange 2010. This myth gives bum advice for those of you who are deploying an Exchange 2010 DAG and not using an Exchange 2010 Hub Transport as your File Share Witness (FSW). I call it the Exchange Trusted Subsystem Myth, and the first hint of it I see seems to be on this blog post. However, that same advice seems to have gotten around the net, as evidenced by this almost word-for-word copy or this posting that links to the first one. Like many myths, this one is pernicious not because it’s completely wrong, but because it works even though it’s wrong.

If you follow the Exchange product group’s deployment assumptions, you’ll never run into the circumstance this myth addresses; the FSW is placed on an Exchange 2010 HT role in the organization. Although you can specify the FSW location (server and directory) or let Exchange pick a server and directory for you, the FSW share isn’t created during the configuration of the DAG (as documented by fellow Exchange MVP Elan Shudnow and the “Witness Server Requirements” section of the Planning for High Availability and Site Resilience TechNet topic); it’s created when the second member joins the DAG. Since the share is being created on an Exchange server at that point, Exchange has all the permissions it needs on the system to create it. If you elect to put the share on a non-Exchange server, then Exchange doesn’t have permissions to do it. Hence the myth:

  1. Add the FSW server’s machine account to the Exchange Trusted Subsystem group.
  2. Add the Exchange Trusted Subsystem group to the FSW server’s local Administrators group.

The sad part is, only the second action is necessary. True, doing the above will make the FSW work, but it will also open a much wider hole in your security than you need or want. Let me show you from my shiny new lab! In this configuration, I have three Exchange systems: EX10MB01, EX10MB02, and EX10MB03. All three systems have the Mailbox, Client Access, and Hub Transport roles. Because of this, I want to put the FSW on a separate machine. I could have used a generic member server, but I specifically wanted to debunk the myth, so I picked my DC EX10DC01 with malice aforethought.

  • In Figure 1, I show adding the Exchange Trusted Subsystem group to the Builtin/Administrators group on EX10DC01. If this weren’t a domain controller, I could add it to the local Administrators group instead, but DCs require tinkering. [1]

Figure 1: Membership of the Builtin/Administrators group on EX10DC01

  • In Figure 2, I show the membership of the Exchange Trusted Subsystem group. No funny business up my sleeve!

Figure 2: Membership of the Exchange Trusted Subsystem group

  • I now create the DAG object, specifying EX10DC01 as my FSW server and the C:\EX10DAG01 directory so we can see if it ever gets created (and when).
  • In Figure 3, I show the root of the C:\ drive on EX10DC01 after adding the second Exchange 2010 server to the DAG. Now, the directory and share are created, without requiring the server’s machine account to be added to the Exchange Trusted Subsystem group.

Figure 3: The FSW created on EX10DC01
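
For reference, the EMS steps behind that walkthrough look roughly like this. It’s a sketch using my lab names; the DAG name EX10DAG01 is just something I picked to match the witness directory, so adjust everything for your own environment:

    # Create the DAG with the witness on the (non-Exchange) server EX10DC01, then add
    # the mailbox servers. The FSW directory and share only get created when the
    # second member joins the DAG.
    New-DatabaseAvailabilityGroup -Name EX10DAG01 -WitnessServer EX10DC01 -WitnessDirectory C:\EX10DAG01
    Add-DatabaseAvailabilityGroupServer -Identity EX10DAG01 -MailboxServer EX10MB01
    Add-DatabaseAvailabilityGroupServer -Identity EX10DAG01 -MailboxServer EX10MB02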

I suspect that this bad advice came about through a combination of circumstances, including an improper understanding of Exchange’s caching of Active Directory information and of when the FSW is actually created. However it came about, though, it needs to stop, because any administrator who configures their Exchange organization this way is opening a big fat hole in the Exchange security model.

So, why is adding the machine account to the Exchange Trusted Subsystem group a security hole? The answer lies in Exchange 2010’s shift to Role Based Access Control (RBAC). In previous versions of Exchange, you delegated permissions directly to Active Directory and Exchange objects, allowing users to perform actions directly from their security context. If they had the appropriate permissions, their actions succeeded.

In Exchange 2010 RBAC, this model goes away; you now delegate permissions by telling RBAC what actions given groups, policies, or users can perform, then assigning group memberships or policies as needed. When the EMS cmdlets run, they do so as the local machine account; since the local machine is an Exchange 2010 server, this account has been added to the Exchange Trusted Subsystem group. This group has been delegated the appropriate access entries in Active Directory and on Exchange database objects, as described in the Understanding Split Permissions TechNet topic. For a comprehensive overview of RBAC and how all the pieces fit together, read the Understanding Role Based Access Control TechNet topic.

By improperly adding a non-Exchange server to this group, you’re now giving that server account the ability to read and change any Exchange-related object or property in Active Directory or Exchange databases. Obviously, this is a hole, especially given the relative ease with which one local administrator can get a command line prompt running as one of the local system accounts.
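
If you want to check whether someone has already “fixed” things this way in your environment, here’s a quick sketch; it assumes the Active Directory module that ships with Windows Server 2008 R2 (or RSAT):

    # List the members of Exchange Trusted Subsystem. Anything that isn't the computer
    # account of an Exchange server deserves a hard look.
    Import-Module ActiveDirectory
    Get-ADGroupMember -Identity "Exchange Trusted Subsystem" |
        Format-Table Name, objectClass, distinguishedName -AutoSize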

So please, do us all a favor: if you ever hear or see someone passing around this myth, please, link them here.

Busted!

[1] Yes, this also grants much broader permissions than necessary just to make a DC the FSW node. The Exchange Trusted Subsystem group is now a member of the domain’s built-in Administrators group, which gives it administrative rights across the domain. This is probably not what you want to do, so really, don’t do this outside of a demo lab.

Support Our Scout

Edit 11/11/09 to remove the embedded video and replace it with a link. It was messing up the layout and I need to do more research to figure out how to embed videos inline.

I love living in the future. First, though, watch this video that Alaric and I made.

I was a Boy Scout for close to three years. I started as a Boy Scout; I missed Cub Scouts, including Webelos Scout. When I was in Scouting, we had to go door-to-door to do our fundraisers, or spend a lot of time with our relatives over the phone. I hated doing it, for reasons that didn’t become clear until much later in life when I began grappling with autism and Asperger’s. However, I have a lot of good memories of Scouting; it did a lot for me and it was a valuable part of my childhood.

Steph and I wanted Alaric to experience Scouting. Even though the modern BSA has some characteristics that I don’t agree with, I’ve come to the decision that first and foremost, Scouting is about the boys. Scouting needs intelligent, reasonable adults of all persuasions to help drive the program. By being part of Scouting, Alaric will learn and do things Steph and I can’t give him on our own; by having us there with him, Alaric will learn how to deal with people from differing backgrounds in a diplomatic and productive manner.

Over the summer, Alaric has really seen what a good thing Scouting is. He even got me to go to Scout Camp with him for four days in July, and I must admit I even had fun. It was a great experience for both of us, including facing down and conquering some challenges.

Unlike many Scout packs and troops, Alaric’s pack works on the schedule of the school year. As a result, they do their major fundraising push at the beginning of the school year with a number of activities. Alaric’s already helped out pulling Hire-A-Scout wagons at the local auto swap meet and had a great time. However, the major source of operating funds is the traditional Trail’s End popcorn fundraiser. Trail’s End, if you don’t know, has been the go-to source for Scout fundraising for a long time, and they offer some of the best popcorn on the planet.

Over the past few weeks, we’ve been rather hectic and busy and haven’t really had time to coach Alaric on his first door-to-door sales campaign. (Poor guy seems to have the same issues I did when I was his age, so it was pretty painful.) This last week, I came up with what I hope is a brainstorm: harness the power of the Internet to get Alaric’s sales pitch out there. So, you get to enjoy the results: the following video where Alaric and I pitch popcorn to YOU, the faithful reader. And because this is the future, Trail’s End even got with the program: they now allow you to purchase online, supporting a specific Scout, and have the product shipped directly to your door!

Go to Trail’s End to support Alaric’s fundraising for his pack

Thank you for your support!

Leaving 3Sharp

3Sharp has been a fantastic place to work; for the last six and a half years, my co-workers and I have walked the road together. One of the realities of growth, though, is that you often reach a fork in the road where you have to move down different paths. Working with Paul, Tim, Missy, Kevin, and the rest of the folks who have been part of the Platform Services Group here at 3Sharp over the years has been a wild journey, but we were only one of three groups at 3Sharp; the other two groups are also chock-full of smart people doing wonderful things with SharePoint and Office. 3Sharp will be moving forward to focus on those opportunities, and the Platform Services Group (which focused on Exchange, OCS, Windows Server, Windows Mobile, and DPM) is closing its doors. My last day here will be tomorrow, Friday, October 16.

I think Ecclesiastes 3:1 says it best; in the King James Version, the poet says, “To every thing there is a season, and a time to every purpose under the heaven.” It has been my privilege to use this blog to talk about Exchange, data protection, and all the other topics I’ve talked about since my first post here five years ago (holy crap, has it really been five years???). With 3Sharp’s gracious permission and blessing, I’ll be duplicating all of the content I’ve posted here over on my personal blog, Devin on Earth. If you have a link or bookmark for this blog or are following me via RSS, please take a moment to update it now (Devin on Earth RSS feed). I’ve got a few new posts cooking, but this will be my last post here.

Thank you to 3Sharp and the best damn co-workers I could ever hope to work with over the years. Thank you, my readers. You all have helped me grow and solidify my skills, and I hope I returned the favor. I look forward to continuing the journey with many of you, even if I’m not sure yet where it will take me.

OneNote 2010 Keeps Your Brains In Your Head

Some months back, those of you who follow me on Twitter (@devinganger) may have noticed a series of teaser Tweets about a project I was working on that involved zombies.

Yes, that’s right, zombies. The RAHR-BRAINS-RAHR shambling undead kind, not the “mystery objects in Active Directory” kind.

Well, now you can see what I was up to.

I was working with long-time fellow 3Sharpie David Gerhardt on creating a series of 60-second vignettes for the upcoming Office 2010 application suite. Each vignette focuses on a single new area of functionality in one of the Office products. I got to work with OneNote 2010.

Here’s where the story gets good.

I got brought into the project somewhat late, after a bunch of initial planning and prep work had been done. The people who had been working on the project had decided that they didn’t want to do the same boring business-related content in their OneNote 2010 vignettes; oh, no! Instead, they hit upon the wonderful idea of using a Zombie Plan as the base document. Now, I don’t really like zombies, but this seemed like a great way to spice up a project!

The rest, as they say, is history. Check out the results (posted both at GetSharp and somewhere out on YouTube) for yourself:

One of the best parts of this project, other than getting a chance to learn about some of the wildly cool stuff the OneNote team is doing to enhance an already wonderful product, was the music selection. We worked a deal with local artist Dave Pezzner to use some of his short music clips for these videos. Dave is immensely talented and provided a wide selection of material, so I enjoyed being able to pick and choose just the right music for each video. It did occur to me how cool it would be if I could use Jonathan Coulton’s fantastic song Re: Your Brains, but somehow I think his people lost my query email. Such is life – and I think Mr. Pezzner’s music provided just the right accompaniment to the Zombie Plan content.

Enjoy!

The Great Exchange Blog Migration

Over the next few days, I’ll be adding a large number of posts (just over 250!!!) to the archives of this blog. For a number of congruent reasons, 3Sharp is closing down the Platform Services Group (which focused on Exchange, OCS, Windows Server, Windows Mobile, and DPM), and my last day will be this Friday, October 16, after over six and a half years with them. With 3Sharp’s gracious permission and blessing, I’ll be duplicating all of the content I’ve posted on the 3Sharp blog server over to here. If you have a link or bookmark for my work blog or are following it via RSS, please take a moment to update your settings. Yes, that means there’s going to be more geeky technical Exchange stuff going forward, but hey, with a single blog to focus on, maybe I’ll be more prolific overall!

To head off some of the obvious questions:

  • This is not a horrible thing. 3Sharp and I are parting ways peacefully because it’s the right decision for all of us; they need to focus on SharePoint, and I’m so not a SharePoint person. They’ve done fantastic things for my career and I cherish my time with them, but part of being an adult is knowing when to move on. We’re all agreed that time has come.
  • I’m not quite sure where I’m going to end up yet. I’ve got a couple of irons in the fire and I have high hopes for them, but it’s not time to talk about them. I am going to have at least a week or two of time off, which is good; there are several projects at home in dire need of sustained attention (unburying my home office, for one; fixing a balky Exchange account for another).
  • I’m not going to be a complete shut-in. I’ve got a couple of appointments for the following week, including a Microsoft focus group and a presentation on PowerPoint for Treanna’s English class. I’m open to doing some short-term independent consulting or contracting work as well, so contact me if you know someone who needs some Exchange help.

Thank you to 3Sharp and the best damn co-workers I could ever hope to work with over the years – and a huge thank you to all of my readers, regardless of which blog you’ve been following. The last several years have been a wild ride, and I look forward to continuing the journey with many of you, even if I’m not sure yet where it will take me.

Two karate blessings

These past 14 months that I’ve been a karate student have given me a number of deeply satisfying moments, including the joy of sharing an activity with my daughter. Last Tuesday, however, proved to be an especially fruitful class for both of us.

Starting in September, the YMCA agreed to try out dropping class fees for YMCA members, and as you might imagine, we immediately saw a small but steady wave of new sign-ups for class. As a result, for the first time in a while, we have a good number of new students – white belts. That means we spend a large chunk of class time going back over many of the basic techniques in more detail than we’ve gotten used to. Those of us who are higher belts get to work with the white belts one-on-one during many of these exercises. This proves beneficial to everyone – they get a personal workout, and we get a mirror to more clearly see how well we’ve mastered the basics (or not, as it usually happens).

The first blessing was working with a gentleman who has been in class somewhere around a month. He and I were working through one-step exercises: one person performs a basic punch attack while the other defends, then we switch roles. We do this with seven defenses. As you work through the ranks, the defense techniques get more complicated, but for white belt one-steps, it’s pretty simple. Or so it seems now after a year; they were quite challenging when I first started and I got to re-experience that working with this gentleman. During our practice, he had one of those epiphany moments and what had been a struggle suddenly turned into AHA! with a clarity we both felt. It was an honor to be working with him in that moment.

The second blessing came about indirectly because of some misbehavior. You see, our protocol and customs direct us to pay attention and not engage in side conversations or monkey business when sensei is teaching. (Turns out there are no exceptions for “if you think you already know this” or “if you’re bored.” I checked. Who knew?) Well, several of us – including me and Treanna – weren’t quite paying attention to that one, and the senior student got called on it. I later told Treanna that he’d taken one for the team; we all were equally guilty of inattention. As class was drawing to an end, though, Treanna engaged in another breach of protocol that earned her some gentle ribbing. (She might read this, so I won’t tell you what she did. This time.)

Being a vigilant father and role model, I immediately realized we had what the experts call “a teachable moment” here. So we cracked open our karate notebooks and made a date to come back tonight after dinner, both having read the protocols, and discuss what we’d found:

  • There are three basic sets of protocol in our notebook: white belt (people who’ve just joined), blue belt (9th kyu, or your first belt), and orange belt (7th kyu, or your third belt). After reading them, we decided that they all have the common themes of respect, safety, and responsibility.
  • We think that white belt protocol focuses mainly on the habits I need to become a student (discipline). That is, all of the guidance seems to be directed more at helping the newcomer gain the structures he will need to effectively learn karate.
  • We think that blue belt protocol focuses mainly on how I become a member of the community (identity). This comes after the first belt (typically earned after several months) and the guidance is more focused on becoming aware of and fitting into the dojo structure.
  • Finally, we think that orange belt protocol focuses mainly on how I give back to the community (service). This comes after three belts and around a year of study – a good foundation from which to be able to start learning to progress by helping others.
  • As a final note, we saw that there was no specific protocol for further belts. We speculate that’s because the student in green and brown belts is expected to do the same things she is already doing, just to a greater degree. And once she gets to black belt – that’s a watershed mark, and sensei will teach us what is expected of us on that day at the proper time.

If you’re not in a martial art, that’s probably boring and generic. To Treanna and me, though, it seemed pretty profound, and I think we’ll walk back into class tomorrow with a new-found sense of focus and commitment.

Why Aren’t My Exchange Certificates Validating?

Updated 10/13: Updated the link to the blog article on configuring Squid for Exchange per the request of the author Owen Campbell. Thank you, Owen, for letting me know the location had changed!

By now you should be aware that Microsoft strongly recommends that you publish Exchange 2010/2007 client access servers (and Exchange 2003/2000 front-end servers) to the Internet through a reverse proxy like Microsoft’s Internet Security and Acceleration Server 2006 SP1 (ISA) or the still-in-beta Microsoft Forefront Threat Management Gateway (TMG). There are other reverse proxy products out there, such as the open source Squid (with some helpful guides on how to configure it for EAS, OWA, and Outlook Anywhere), but many of them can only be used to proxy the HTTP-based protocols (for example, the reverse proxy module for the Apache web server) and won’t handle the RPC component of Outlook Anywhere.

When you’re following this recommendation, you keep your Exchange CAS/HT/front-end servers in your private network and place the ISA Server (or other reverse proxy solution) in your perimeter (DMZ) network. In addition to ensuring that your reverse proxy is scrubbing incoming traffic for you, you also gain another benefit: SSL bridging. SSL bridging is where there are two SSL connections – one between the client machine and the reverse proxy, and a separate connection (often using a different SSL certificate) between the reverse proxy and the Exchange CAS/front-end server. SSL bridging is awesome because it allows you to radically reduce the number of commercial SSL certificates you need to buy. You can use Windows Certificate Services to generate and issue certificates to all of your internal Exchange servers, creating them with all of the Subject Alternative Names that you need and desire, and still have a commercial certificate deployed on your Internet-facing system (nice to avoid certificate issues when you’re dealing with home systems, public kiosks, and mobile devices, no?) that has just the public common namespaces like autodiscover.yourdomain.tld and mail.yourdomain.tld (or whatever you actually use).

In the rest of this article, I’ll be focusing on ISA because, well, I don’t know Squid that well and haven’t actually seen it in use to publish Exchange in a customer environment. Write what you know, right?

One of the most irritating experiences I’ve consistently had when using ISA to publish Exchange securely is getting the certificate configuration on ISA correct. If you all want, I can cover certificate namespaces in another post, because that’s not what I’m talking about – I actually find that relatively easy to deal with these days. No, what I find annoying about ISA and certificates is getting all of the proper root CA certificates and intermediate CA certificates in place. The process you have to go through varies depending on who you buy your certificates from. There are a couple, like GoDaddy, that offer inexpensive certificates that do exactly what Exchange needs for a decent price – but they require an extra bit of configuration to get everything working.

The problem you’ll see is two-fold:

  1. External clients will not be able to connect to Exchange services. This will be inconsistent; some browsers and some Outlook installations (especially those on new Windows installs or well-updated Windows installs) will work fine, while others won’t. You may have big headaches getting mobile devices to work, and the error messages will be cryptic and unhelpful.
  2. While validating your Exchange publishing rules with the Exchange Remote Connectivity Analyzer (ExRCA), you get a validation error on your certificate as shown in Figure 1.

ExRCA can't find the intermediate certificate on your ISA server
Figure 1: Missing intermediate CA certificate validation error in ExRCA

The problem is that some devices don’t have the proper certificate chain in place. Commercial certificates typically have two or three certificates in their signing chain: the root CA certificate, an intermediate CA certificate, and (optionally) an additional intermediate CA certificate. The secondary intermediate CA certificate is typically the source of the problem; it’s configured as a cross-signing certificate, which is intended to help CAs transition old certificates from one CA to another without invalidating the issued certificates. If your certificate was issued by a CA that has these in place, you have to have both intermediate CA certificates installed on your ISA server, in the correct certificate stores.

By default, CAs will issue the entire certificate chain to you in a single bundle when they issue your cert. You have to import this bundle on the machine you issued the request from, or else you don’t get the private key associated with the certificate. Once you’ve done that, you need to re-export the certificate, with the private key and its entire certificate chain, so that you can import it on ISA. This is important because ISA needs the private key so it can decrypt the SSL session (required for bridging), and ISA needs the entire certificate signing chain so that it can hand out missing intermediate certificates to devices that don’t have them (such as Windows Mobile devices that only have the root CA certificates). If a device doesn’t have the right intermediates, can’t download them itself (like Internet Explorer can), and can’t get them from ISA, you’ll get the certificate validation errors.
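
If you’d rather do the re-export from the command line than from the Certificates MMC wizard, certutil can do it on reasonably current Windows builds. A hedged sketch – the store name, thumbprint, password, and path below are all placeholders, and by default certutil should include the chain in the PFX unless you explicitly exclude it:

certutil -p "TempPassword" -exportPFX My 0123456789abcdef0123456789abcdef01234567 C:\certs\mail-with-chain.pfx

Either way, double-check the exported file by viewing its certification path before you import it on ISA.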

Here’s what you need to do to fix it:

  • Ensure that your server certificate has been exported with the private key and *all* necessary intermediate and root CA certificates.
  • Import this certificate bundle into your ISA servers. Before you do this, check the computer account’s personal certificate store and make sure any root or intermediate certificates that got accidentally imported there are deleted.
  • Using the Certificate MMC snap-in, validate that the certificate now shows as valid when browsing the certificate on your ISA server, as shown in Figure 2.

Even though the Certificates MMC snap-in shows this certificate as valid, ISA won't serve it out until the ISA Firewall Service is restarted!
Figure 2: A validated server certificate signing chain on ISA Server

  • IMPORTANT STEP: restart the ISA Firewall Service on your ISA server (if you’re using an array, you have to do this on each member; you’ll want to drain the connections before restarting, so it can take a while to complete). Even though the Certificate MMC snap-in validates the certificate, the ISA Firewall only picks up the changes to the certificate chain on startup. This is annoying and stupid and has caused me pain in the past – most recently, with 3Sharp’s own Exchange 2010 deployment (thanks to co-worker and all around swell guy Tim Robichaux for telling me how to get ISA to behave).
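
For reference, the service in question is the Microsoft Firewall service; I believe the short service name is fwsrv, but verify it on your own box (sc query is your friend) before scripting anything against it. Once the array member is drained, something like this from a command prompt does the trick:

net stop fwsrv
net start fwsrv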

Also note that many of the commercial CAs specifically provide downloadable packages of their root CA and intermediate CA certificates. Some of them get really confusing – they have different CAs for different tiers or product lines, so you have to match the server certificate you have with the right CA certificates. GoDaddy’s CA certificate page can be found here.

Some Thoughts on FBA (part 2)

As promised, here’s part 2 of my FBA discussion, in which we’ll talk about the interaction of ISA’s forms-based authentication (FBA) feature with Exchange 2010. (See part 1 here.)

Offloading FBA to ISA

As I discussed in part 1, ISA Server includes the option of performing FBA pre-authentication as part of the web listener. You aren’t stuck with FBA – you can use other pre-auth methods too. The thinking behind this is that ISA is the security server sitting in the DMZ, while the Exchange CAS is in the protected network. Why proxy an incoming connection from the Internet into the internal network (even with ISA’s impressive HTTP reverse proxy and screening functionality) if it doesn’t present valid credentials? In this configuration, ISA is configured for FBA while the Exchange 2010/2007 CAS or Exchange 2003 front-end servers are configured for Windows Integrated or Basic, as shown in Figure 1 (a figure so nice I’ll re-use it):

Publishing Exchange using FBA on ISA

Figure 1: Publishing Exchange using FBA on ISA

Moving FBA off of ISA

Having ISA (and Threat Management Gateway, the 64-bit successor to ISA 2006) perform pre-auth in this fashion is nice and works cleanly. However, in our Exchange 2010 deployment, we found a couple of problems with it:

The early beta releases of Entourage for EWS wouldn’t work with this configuration; Entourage could never connect. If our users connected to the 3Sharp VPN, bypassing the ISA publishing rules, Entourage would immediately see the Exchange 2010 servers and do its thing. I don’t know if the problem was solved for the final release.

We couldn’t get federated calendar sharing, a new Exchange 2010 feature, to work. Other Exchange 2010 organizations would get errors when trying to connect to our organization. This new calendar sharing feature uses a Windows Live-based central brokering service to avoid the need to provision and manage credentials.

Through some detailed troubleshooting with Microsoft and other Exchange 2010 organizations, we finally figured out that our ISA FBA configuration was causing the problem. The solution was to disable ISA pre-authentication and re-enable FBA on the appropriate virtual directories (OWA and ECP) on our CAS server. Once we did that, not only did federated calendar sharing start working flawlessly, but our Entourage users found their problems had gone away too. For more details of what we did, read on.

How Calendar Sharing Works in Exchange 2010

If you haven’t seen other descriptions of the federated calendar sharing, here’s a quick primer on how it works. This will help you understand why, if you’re using ISA pre-auth for your Exchange servers, you’ll want to rethink it.

In Exchange 2007, you could share calendar data with other Exchange 2007 organizations. Doing so meant that your CAS servers had to talk to their calendar servers, and the controls around it were not that granular. In order to do it, you either needed to establish a forest trust and grant permissions to the other forest’s CAS servers (to get detailed per-user free/busy information) or set up a separate user in your forest for the foreign forests to use (to get default per-org free/busy data). You also had to fiddle around with the Autodiscover service connection points and ensure that you had pointers for the foreign Autodiscover SCPs in your own AD (and that the foreign systems had yours). You also had to publish Autodiscover and EWS externally (which you have to do for Outlook Anywhere anyway) and coordinate all your certificate CAs. While this doesn’t sound that bad, you had to do these steps for every single foreign organization you were sharing with. That adds up, and it’s a poorly documented process – you’d start at this TechNet topic about the Availability service and have to do a lot of chasing around to figure out how certificates fit in, how to troubleshoot it, and the SCP export and import process.

In Exchange 2010, this gets a lot easier; individual users can send sharing invitations to users in other Exchange 2010 organizations, and you can set up organization relationships with other Exchange 2010 organizations. Microsoft has broken up the process into three pieces:

  1. Establish your organization’s trust relationship with Windows Live. This is a one-time process that must take place before any sharing can take place – and you don’t have to create or manage any service or role accounts. You just have to make sure that you’re using a CA to publish Autodiscover/EWS that Windows Live will trust. (Sorry, there’s no list out there yet, but keep watching the docs on TechNet.) From your Exchange 2010 organization (typically through EMC, although you can do it from EMS) you’ll swap public keys (which are built into your certificates) with Windows Live and identify one or more accepted domains that you will allow to be federated. Needless to say, Autodiscover and EWS must be properly published to the Internet. You also have to add a single DNS record to your public DNS zone, showing that you do have authority over the domain namespace. If you have multiple domains and only specify some of them, beware: users that don’t have provisioned addresses in those specified domains won’t be able to share or receive federated calendar info!
  2. Establish one or more sharing policies. These policies control how much information your users will be able to share with external users through sharing invitations. The setting you pick here defines the maximum level of information that your users can share from their calendars: none, free/busy only, some details, or all details. You can create a single policy for all your users or use multiple policies to provision your users on a more granular basis. You can assign these policies on a per-user basis.
  3. Establish one or more sharing relationships with other organizations. When you want to view availability data of users in other Exchange 2010 organizations, you create an organization relationship with them. Again, you can do this via EMC or EMS. This tells your CAS servers to look up information from the defined namespaces on behalf of your users – contingent, of course, on the foreign organization having established the appropriate permissions in their organization relationships. If the foreign namespace isn’t federated with Windows Live, then you won’t be allowed to establish the relationship.

You can read more about these steps in the TechNet documentation and at this TechNet topic (although since the Exchange 2010 documentation is still in beta, it’s not all in place yet). You should also know that these policies and settings combine with the ACLs on users’ calendar folders, and as is typically the case in Exchange when there are multiple levels of permission, the most restrictive level wins.
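
To make those three pieces a bit more concrete, here’s a rough EMS sketch of what each one looks like. All the names, domains, and values are placeholders, and since the documentation is still settling down, treat the parameter names as approximate and check Get-Help on your build:

# Step 1: the one-time federation trust with Windows Live, plus the namespace you're federating
New-FederationTrust -Name "Microsoft Federation Gateway" -Thumbprint <thumbprint-of-your-certificate>
Set-FederatedOrganizationIdentifier -AccountNamespace yourdomain.tld -DelegationFederationTrust "Microsoft Federation Gateway" -Enabled $true

# Step 2: a sharing policy that caps what users can hand out in sharing invitations
New-SharingPolicy -Name "Free/Busy Only" -Domains "partnerdomain.tld: CalendarSharingFreeBusySimple"

# Step 3: an organization relationship so your CAS servers can look up the partner's availability data
New-OrganizationRelationship -Name "Partner Org" -DomainNames partnerdomain.tld -FreeBusyAccessEnabled $true -FreeBusyAccessLevel LimitedDetails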

What’s magic about all of this is that at no point along the way, other than the initial setup step, do you have to worry consciously about the certificates you’re using. You never have to provide or provision credentials. As you create your policies and sharing relationships with other organizations – and other organizations create them with yours – Windows Live is hovering silently in the background, acting as a trusted broker for the initial connections. When your Exchange 2010 organization interacts with another, your CAS servers receive a SAML token from Windows Live. This token is then passed to the foreign Exchange 2010 organization, which can validate it because of its own trust relationship with Windows Live. All this token does is validate that your servers are really coming from the claimed namespace – Windows Live plays no part in authorization, retrieving the data, or managing the sharing policies.

However, here’s the problem: when my CAS talks to your CAS, they’re using SAML tokens – not user accounts – to authenticate against IIS for EWS calls. ISA Server (and, IIRC, TMG) don’t know how to validate these tokens, so the incoming requests can’t authenticate and pass on to the CAS. The end result is that you can’t get a proper sharing relationship set up and you can’t federate calendar data.

What We Did To Fix It

Once we knew what the problem was, fixing it was easy:

  1. Modify the OWA and ECP virtual directories on all of our Exchange 2010 CAS servers to perform FBA. These are the only virtual directories that permit FBA, so they’re the only two you need to change:

Set-OwaVirtualDirectory -Identity "CAS-SERVER\owa (Default Web Site)" -BasicAuthentication $TRUE -WindowsAuthentication $FALSE -FormsAuthentication $TRUE
Set-EcpVirtualDirectory -Identity "CAS-SERVER\ecp (Default Web Site)" -BasicAuthentication $TRUE -WindowsAuthentication $FALSE -FormsAuthentication $TRUE
  2. Modify the Web listener on our ISA server to disable pre-authentication. In our case, we were using a single Web listener for Exchange (and only for Exchange), so it was a simple matter of changing the authentication setting to a value of No Authentication.
  3. Modify each of the ISA publishing rules (ActiveSync, Outlook Anywhere, and OWA): on the Authentication tab, select the value No delegation, but client may authenticate directly; on the Users tab, remove the value All Authenticated Users and replace it with the value All Users. This is important! If you don’t do this, ISA won’t pass any connections on!

You may also need to take a look at the rest of your Exchange virtual directories and ensure that the authentication settings are valid; many places will allow Basic authentication between ISA and their CAS servers and require NTLM or Windows Integrated from external clients to ISA.
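
A quick way to audit that (a sketch, assuming an Exchange 2010 CAS with the placeholder name CAS-SERVER) is to dump the authentication-related properties from each virtual directory and compare them against what your ISA rules expect:

Get-OwaVirtualDirectory -Server CAS-SERVER | Format-List *authentication*
Get-EcpVirtualDirectory -Server CAS-SERVER | Format-List *authentication*
Get-ActiveSyncVirtualDirectory -Server CAS-SERVER | Format-List *authentication*
Get-WebServicesVirtualDirectory -Server CAS-SERVER | Format-List *authentication*
Get-OutlookAnywhere -Server CAS-SERVER | Format-List *authentication*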

Calendar sharing and ISA FBA pre-authentication are both wonderful features, and I’m a bit sad that they don’t play well together. I hope that future updates to TMG will resolve this issue and allow TMG to successfully pre-authenticate incoming federated calendar requests.

Stolen Thunder: Outlook for the Mac

I was going to write up a quick post about the release of Entourage for EWS (allowing it to work in native Exchange 2007, and, more importantly, Exchange 2010 environments) and the announcement that Office 2010 for the Mac would have Outlook, not Entourage, but Paul beat me to it, including my whole take on the thing. So go read his.

For those keeping track at home, yes, I still owe you a second post on the Exchange 2010 calendar sharing. I’m working on it! Soon!

Windows 7 RC: The Switch

This weekend, I finally finished getting our desktop computers replaced. They’re older systems that have been running Windows XP for a long time. I’d gotten newer hardware and had started building new systems, intending to put Vista Ultimate SP1 on them (so we could take advantage of domain memberships and Windows Media Center goodness with our Xboxes), but one thing led to another and they’d been sitting forlornly on a shelf.

I must confess – I’m not a Vista fan. I grudgingly used it as the main OS on my work MacBook Pro for a while, but I never really warmed up to it. SP1, in my opinion, made it barely useable. There were some features about it I grew to like, but those were offset by a continued annoyance at how many clicks useful features had gotten buried behind.

So when I finally got busy getting these systems ready – thanks to Steph’s system suddenly forgetting how to talk to USB devices – I decided to use Windows 7 RC instead. What I’d seen of Windows 7 already made me believe that we’d have a much happier time with it. So far, I’d have to say that’s correct. Steph’s new machine was slightly tricky to install – the built-in network interface on the motherboard wasn’t recognized so I had to bootstrap with XP drivers – but otherwise, the whole experience has been flawless.

Want to try Windows 7 for yourself? Get it here.

One of my favorite experiences was migrating our files and settings from the old machines. Windows 7, like Vista and Server 2008 before it, includes the Easy Transfer Wizard. This wizard is the offspring of XP’s Files and Settings Transfer Wizard but has a lot more smarts built in. As a result, I was able to quickly and easily get all our files and settings moved over without a hitch. With the exception of a laptop, we’re now XP free in my house.

Today, I ran across this blog post detailing Seven Windows 7 Tips. There were a couple of them I had already figured out (2, 4, and partial 3), but I’ll be trying out the rest this evening!

EAS: King of Sync?

Seven months or so ago, IBM surprised a bunch of people by announcing that they were licensing Microsoft’s Exchange ActiveSync protocol (EAS) for use with a future version of Lotus Notes. I’m sure there were a few folks who saw it coming, but I cheerfully admit that I was not one of them. After about 30 seconds of thought, though, I realized that it made all kinds of sense. EAS is a well-designed protocol, I am told by my developer friends, and I can certainly attest to the relatively lightweight load it puts on Exchange servers as compared to some of the popular alternatives – enough so that BlackBerry add-ons that speak EAS have become a not-unheard-of alternative for many organizations.

So, imagine my surprise when my Linux geek friend Nick told me smugly that he now had a new Palm Pre and was synching it to his Linux-based email system using the Pre’s EAS support. “Oh?” said I, trying to stay casual as I was mentally envisioning the screwed-up mail forwarding schemes he’d put in place to route his email to an Exchange server somewhere. “Did you finally break down and migrate your email to an Exchange system? If not, how’d you do that?”

Nick then proceeded to point me in the direction of Z-Push, which is an elegant little open source PHP-based implementation of EAS. A few minutes of poking around and I became convinced that this was a wicked cool project. I really like how Z-Push is designed:

  • The core PHP module answers incoming requests for the http://server/Microsoft-Server-ActiveSync virtual directory and handles all the protocol-level interactions. I haven’t dug into this deeply, but although it appears it was developed against Apache, folks have managed to get it working on a variety of web servers, including IIS! I’m not clear on whether authentication is handled by the package itself or by the web server. Now that I think about it, I suspect it just proxies your provided credentials on to the appropriate back-end system so that you don’t have to worry about integrating Z-Push with your authentication sources.
  • One or more back-end modules (also written in PHP), which read and write data from various data sources such as your IMAP server, a Maildir file system, or some other source of mail, calendar, or contact information. These back-end modules are run through a differential engine to help cut down on the amount of synching the back-end modules must perform. It looks like the API for these modules is very well thought-out; they obviously want developers to be able to easily write backends to tie in to a wide variety of data sources. You can mix and match multiple backends; for example, get your contact data from one system, your calendar from another, and your email from yet a third system.
  • If you’re running the Zarafa mail server, there’s a separate component that handles all types of data directly from Zarafa, easing your configuration. (Hey – Zarafa and Z-Push…I wonder if Zarafa provides developer resources; if so, way to go, guys!)

You do need to be careful about the back-end modules; because they’re PHP code running on your web server, poor design or bugs can slam your web server. For example, there’s currently a bug in how the IMAP back-end re-scans messages, and the resulting load can create a noticeable impact on an otherwise healthy Apache server with just a handful of users. It’s a good thing that there seems to be a lively and knowledgeable community on the Z-Push forums; they haven’t wasted any time in diagnosing the bug and providing suggested fixes.

Very deeply cool – folks are using Z-Push to provide, for example, an EAS connection point on their Windows Home Server, synching to their Gmail account. I wonder how long it will take for Linux-based “Exchange killers” (other than Zarafa) to wrap this product into their overall packages.

It’s products like this that help reinforce the awareness that EAS – and indirectly, Exchange – is a dominant enough force in the email market to make this kind of project not only potentially useful, but viable as an open source project.

Comparing PowerShell Switch Parameters with Boolean Parameters

If you’ve ever taken a look at the help output (or TechNet documentation) for PowerShell cmdlets, you’ve seen that they list several pieces of information about each of the various parameters the cmdlet can use:

  • The parameter name
  • Whether it is a required or optional parameter
  • The .NET variable type the parameter expects
  • A description of the behavior the parameter controls

Let’s focus on two particular types of parameters, the Switch (System.Management.Automation.SwitchParameter) and the Boolean (System.Boolean). While I never really thought about it much before reading a discussion on an email list earlier, these two parameter types seem to be two ways of doing the same thing. Let me give you a practical example from the Exchange 2007 Management Shell: the New-ExchangeCertificate cmdlet. Table 1 lists an excerpt of its parameter list from the current TechNet article:

Table 1: Selected parameters of the New-ExchangeCertificate cmdlet

GenerateRequest (SwitchParameter)

Use this parameter to specify the type of certificate object to create. By default, this parameter will create a self-signed certificate in the local computer certificate store. To create a certificate request for a PKI certificate (PKCS #10) in the local request store, set this parameter to $True.

PrivateKeyExportable (Boolean)

Use this parameter to specify whether the resulting certificate will have an exportable private key. By default, all certificate requests and certificates created by this cmdlet will not allow the private key to be exported. You must understand that if you cannot export the private key, the certificate itself cannot be exported and imported. Set this parameter to $true to allow private key exporting from the resulting certificate.

On quick examination, both parameters control either/or behavior. So why the two different types? The mailing list discussion I referenced earlier pointed out the difference:

Boolean parameters control properties on the objects manipulated by the cmdlets. Switch parameters control behavior of the cmdlets themselves.

So in our example, a digital certificate has a property, carried as part of the certificate itself, that marks whether the associated private key can be exported in the future. That property goes along with the certificate, independent of the management interface or tool used. For that property, then, PowerShell uses the Boolean type for the -PrivateKeyExportable parameter.

On the other hand, the –GenerateRequest parameter controls the behavior of the cmdlet. With this parameter specified, the cmdlet creates a certificate request with all of the specified properties. If this parameter isn’t present, the cmdlet creates a self-signed certificate with all of the specified properties. The resulting object (CSR or certificate) carries no sign of which option was chosen – you could just as easily submit that CSR to another tool or CA to get it signed.
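
To make the calling convention concrete, here’s an illustrative Exchange 2007 EMS example. The names and path are made up, and this isn’t a recommended certificate configuration – it just shows how the two parameter types are used:

# -GenerateRequest is a switch: including it (with no value) changes the cmdlet's behavior
# -PrivateKeyExportable is a Boolean: it takes an explicit $true or $false, and that value becomes a property of the resulting certificate
New-ExchangeCertificate -GenerateRequest -Path C:\certs\mail.req -SubjectName "cn=mail.contoso.com" -DomainName mail.contoso.com, autodiscover.contoso.com -PrivateKeyExportable $true

(If you need to, you can also write a switch as -GenerateRequest:$true – handy when you’re feeding it a variable – but the normal usage is just to include or omit it.)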

I hope this helps draw the distinction. Granted, it’s one I hadn’t thought much about before today, but now that I have, it’s nice to know that there’s yet another sign of intelligence and forethought in the PowerShell architecture.

Some Thoughts on FBA (part 1)

It’s funny how topics tend to come in clumps. Take the current example: forms-based authentication (FBA) in Exchange.

An FBA Overview

FBA was introduced in Exchange Server 2003 as a new authentication method for Outlook Web Access. It requires OWA to be published using SSL – which was not yet common practice at that point in time – and in turn allows credentials to be submitted a single time through plain-text form fields (protected by the SSL channel). It’s taken a while for people to get used to, but FBA has definitely become an accepted practice for Exchange deployments, and it’s a popular way to publish OWA for Exchange 2003, Exchange 2007, and the forthcoming Exchange 2010.

In fact, FBA is so successful that the ISA Server group got into the mix by including FBA pre-authentication in ISA Server. With this model, instead of configuring Exchange for FBA, you configure your ISA server to present the FBA screen. Once the user logs in, ISA takes the credentials and submits them to the Exchange 2003 front-end server or Exchange 2007 (or 2010) Client Access Server using the appropriately configured authentication method (Windows Integrated or Basic). In Exchange 2007 and 2010, this allows each separate virtual directory (OWA, Exchange ActiveSync, RPC proxy, Exchange Web Services, Autodiscover, Unified Messaging, and the new Exchange 2010 Exchange Control Panel) to have its own authentication settings, while ISA transparently mediates them for remote users. Plus, ISA pre-authenticates those connections – only connections with valid credentials ever get passed on to your squishy Exchange servers – as shown in Figure 1:

Publishing Exchange using FBA on ISA

Figure 1: Publishing Exchange using FBA on ISA
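
For what it’s worth, the CAS-side half of that picture is straightforward: when ISA owns the FBA form, the published virtual directories get configured for Basic and/or Windows Integrated instead of forms-based authentication. A hedged Exchange 2010 sketch with a placeholder server name – adjust the authentication choices to match what your ISA rule delegates:

Set-OwaVirtualDirectory -Identity "CAS-SERVER\owa (Default Web Site)" -FormsAuthentication $false -BasicAuthentication $true -WindowsAuthentication $false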

Now that you know more about how FBA, Exchange, and ISA can interact, let me show you one mondo cool thing today. In a later post, we’ll have an architectural discussion for your future Exchange 2010 deployments.

The Cool Thing: Kay Sellenrode’s FBA Editor

On Exchange servers, it is possible to modify both the OWA themes and the FBA page (although you should check about the supportability of doing so). Likewise, it is also possible to modify the FBA page on ISA Server 2006. This is a nice feature as it helps companies integrate the OWA experience into the overall look and feel of the rest of their Web presence. Making these changes on Exchange servers is a somewhat well-documented process. Doing them on ISA is a bit more arcane.

Fellow Exchange 2007 MCM Kay Sellenrode has produced a free tool to simplify the process of modifying the ISA 2006 FBA – named, aptly enough, the FBA Editor. You can find the tool, as well as a YouTube video demo of how to use it, from his blog. While I’ve not had the opportunity to modify the ISA FBA form myself, I’ve heard plenty of horror stories about doing so – and Kay’s tool is a very cool, useful community contribution.

In the next day or two (edit: or more), we’ll move on to part 2 of our FBA discussion – deciding when and where you might want to use ISA’s FBA instead of Exchange’s.

A Modest Thought on “Don’t Ask/Don’t Tell”

With the recent activity surrounding the hearing for Army Lieutenant Dan Choi, an Iraq War veteran and Arab linguist who is also openly gay, I had a thought occur to me and I wanted to share it with y’all.

In my (limited) experience with the military, there’s still quite a bit of public resistance to the idea of allowing gays to openly serve. There are many reasons that one may take this stance, ranging from deeply principled to deeply homophobic and covering all points in between. If the objection comes from deeply held religious or moral convictions, I choose to respectfully disagree with you, but I understand and value the fact that you do have your beliefs on this issue.

From my anecdotal experience, though, the people who are usually the loudest about this issue (“I ain’t lettin’ no queer next to me with a gun; I’ll shoot his ass first!” is a representative sample I’ve heard recently) tend to be strongly grounded in the “mindlessly homophobic” rationale. This isn’t just confined to the military, though. I have plenty of memories of the charming functional illiterates at my rural high school indignantly asking me if I was gay, harassing me for my presumed homosexuality, and making not-so-subtle meant-to-be-overheard comments about my lack of “real manliness”. These were the people who would always get in your face and confront you on your disgusting life choices — as long (of course) as you weren’t big enough or mean enough to be perceived as capable of handling the violence they always threatened to dish out.

Let’s take a representative example of this kind of person — we’ll call him Bubba. (Don’t assume that it’s only guys who do this; I’ve heard plenty of women who do too.) Down at the bottom of it all, though, these guys and gals have one common flawed assumption, deeply rooted in a raging sense of entitlement:

If that person is gay, they want to have sex with me.

I think the appropriate response here is a quote from Megan Fox’s character of Mikaela:

Oh God, I can’t even tell you how much I’m not your “little bunny.”

In other words, Bubba has committed the logical fallacy of assuming that just because a gay man is sexually attracted to some men, he must like all men — including, necessarily, Bubba. The defining characteristic of a gay man’s sexuality, according to Bubba, is the orientation; once a man is gay, he automatically must like all men, even if those men are otherwise unattractive. Bubba, sad to say, thinks that being gay overrides any sense of taste or choice or other form of preference.

Bubba is a dumbshit. Bubba is, however, all too common — I’ve heard plenty of people independently reproduce this exact line of reasoning.

My theory is that for the Bubbas of the world, the objection to knowingly associating with someone who is gay comes down to projection of their own inner characteristics: Bubba wants to nail pretty much every female or, in the event that he has some self-restraint, is deluded enough to think that every woman wants to have sex with him. Being a paragon of self-control and discernment, Bubba is naturally unable to conceive of someone who could in theory be attracted to him but isn’t.

What Bubba objects to, I believe, is not the gay person’s lack of taste and self control, but his own. It’s the same as the liar who in turn is convinced that everyone lies to him and is unable to see a truthful response without looking for the “real” answer, or the person who continually cheats others in big and small ways and in turn expects everyone to cheat her.

Do I think that everyone who objects to military service for gays and lesbians falls into this trap? Not at all. I just tend to think that the more vocal someone is about it, the more likely they are to have this motivation simmering at the bottom of it all. People who suffer from this attitude tend to have the crudest, most violent responses to homosexuality; they tend to be the loudest slanderers, the meanest and most illogical protesters. They argue from a well-deserved fear, because if everyone was just like them, all the sick, dark scenarios they fantasize would of course happen.

God knows that my gay and lesbian friends and acquaintances are no saints. Some of them are people I don’t willingly spend time around — but then, there are plenty of straight people I don’t want to spend a lot of time around either. Frankly, I’ve found that brushing off determined advances from a guy who likes me is no better and no worse than those from a gal who likes me — orientation having less to do with it than does their fundamental ability to hear and accept, “Thanks, but I’m not interested.”

Mind you, typically the Bubbas of the world are at heart hypocrites, because almost all of them have absolutely no problems with lesbians. Oh, no. They’re in favor of lesbians. Mainly because, along with all their other stinking thinking, they are under the delusion that those lesbians still secretly want them — so they’ll be able to score with the lesbian and her girlfriend at the same time. Because of this, it’s easy to spot a Bubba and identify his objection for what it really is.

And now, after the long break

Okay, okay…so updating my blog server took longer than I’d anticipated. Getting the old material out of Community Server into BlogML format turned out to be a lot easier than I’d thought and finding the time to get it all imported into WordPress wasn’t much harder. What tripped me up was getting all of the redirection for the old, legacy URLs working.

Community Server and WordPress store their content in very different ways, and so they generate the URLs for blog posts using different algorithms. I know there are a fairish number of links out there in blog land to various posts I’ve done, and for vanity’s sake, I’d rather not orphan those links to the dreaded 404 Not Found error. The solution was to find the time to buy the latest edition of O’Reilly’s Apache Cookbook and bone up on the Apache web server directives.

So, I think all the relevant old URLs should now automatically redirect to their proper new places — there’s not much point in keeping all the old posts if you don’t do this. The nice thing, for those of you who are web geeks, is that I’m issuing permanent redirections so Google and other search engines will update their links as they re-trawl my web site, thus pointing to the new URLs. For those of you who are humans, you might want to take a minute to check your bookmarks and make sure they’re updated to the new links.

One note: some commenter data didn’t make the import successfully. I could probably dig into it and find out why, but frankly, at this point, it’s more important to get the site (and Steph’s blog) back up and running. So, sorry — if you were one of those commenters, I apologize. Future comments should be preserved properly, and I really don’t see moving away from WordPress anytime soon.

If you’re reading this, then the necessary DNS updates have finished rolling out and we’re back live to the world. Thanks for your patience!

You, too, can Master Exchange

One of the biggest criticisms I’ve seen of the MCM program, even when it first was announced, was the cost – at a list price of $18,500 for the actual MCM program, not counting travel, lodging, food, and the opportunity cost of lost revenue, a lot of people are firmly convinced that the program is way too expensive for anybody but the bigger shops.

This discussion has of course gone back and forth within the Exchange community. I think part of the pushback comes from the fact that MCM is the next evolution of the Exchange Ranger program, which felt very elitist and exclusive (and by many accounts was originally designed to be, back when it was only a Microsoft-only evolution designed to provide a higher degree of training for Microsoft consultants and engineers to better resolve their own customer issues). Starting off with that kind of background leaves a lot of lingering impressions, and the Exchange community has long memories. Paul has a great discussion of his point of view as a new MCM instructor and shares his take on the “is it worth it?” question.

Another reason for pushback is the economy. The typical argument is, “I can’t afford to take this time right now.” Let’s take a ballpark figure here, aimed at the coming May 4 rotation, just to have some idea of the kinds of numbers folks are thinking about:

  • Imagine a consultant working a 40-hour week. Her bosses would like her to meet a 90% billable target (36 hours). Given two weeks of vacation a year, that’s 50 weeks at 36 hours a week.
  • We’ll also imagine that she’s able to bill out at $100/hour. This puts her target annual revenue at $180,000 and sets her opportunity cost (lost revenue) at $3,600/week.
  • We’ll assume she has the pre-requisites nailed (MCITP Enterprise Messaging, the additional AD exam for either Windows 2003 or Windows 2008, and the field experience). No extra cost there (otherwise it’s $150/test, or $600 total).
  • Let’s say her plane tickets are $700 for round-trip to Redmond and back.
  • And we’ll say that she needs to stay at a hotel, checking in Sunday May 3rd, checking out Sunday May 24th, at a daily rate of $200.
  • Let’s also assume she’ll need $75 a day for meals.

That works out to $18,500 (class fee) + $700 (plane) + 21 x $275 (hotel + meals) + 3 x $3,600 (opportunity cost of work she won’t be doing) — $18,500 + $700 + $5,775 + $10,800 = a whopping total of $35,775. That, many people argue, is far too much for what they get out of the course – it represents almost 10 weeks of her regular revenue, or approximately 1/5th of her year’s revenue.

If those numbers were the final answer, they’d be right.

However, Paul has some great talking points in his post; although he focuses on the non-economic piece, I’d like to tie some of those back in to hard numbers.

  • The level of training. I don’t care how well you know Exchange. You will walk out of this class knowing a lot more and you will be immediately able to take advantage of that knowledge to the betterment of your customers. Plus, you will have ongoing access to some of the best Exchange people in the world. I don’t know a single consultant out there who can work on a problem that is stumping them for hours or days and be able to consistently bill every single hour they spend showing no results. Most of us end up eating time, which shows up in the bottom line. For the sake of argument, let’s say that our consultant ends up spending 30% instead of 10% of her time working on issues that she can’t directly bill for because of things like this. That drops her opportunity cost from $3,600/week to $2,520, or $7,560 for the three weeks (and it means she’s only got an annual revenue of $126,000). If she can reduce that non-billable time, she can increase her efficiency and get more real billable work done in the same calendar period. We’ll say she can gain back 10% of that lost time and get up to only 20% lost time, or 32 hours a week.
  • The demonstration of competence. This is a huge competitive advantage for two reasons. First, it helps you land work you may not have been able to land before. This is great for keeping your pipeline full – always a major challenge in a rough economy. Second, it allows you to raise your billing rates. Okay, true, maybe you can’t raise your billing rates for all the work that you do for all of your customers, but even some work at a higher rate directly translates to your pocket book. Let’s say she can bill 25% of those 32 hours at $150/hour. That turns her week’s take into (8 x $150) + (24 x $100) = $1,200 + $2,400 = $3,600. That modest gain in billing rates right there compensates for the extra 10% loss of billing hours and pays for itself every 3-4 weeks.

Let’s take another look at those overall numbers again. This time, let’s change our ballpark with numbers more closely matching the reality of the students at the classes:

  • There’s a 30% discount on the class, so she pays only $12,950 (not $18,500).
  • We’ll keep the $700 for plane tickets.
  • From above, we know that her real lost opportunity cost is more like $7,560 (3 x $2,520 and not the $10,800 worst case).
  • She can get shared apartment housing with other students right close to campus for more like $67 a night (three bedrooms).
  • Food expenses are more typically averaged out to $40 per day. You can, of course, break the bank on this during the weekends, but during the days you don’t really have time.

This puts the cost of her rotation at $12,950 + $700 + (21 x $107) + $7,560, or $23,457. That’s only 66% – two-thirds – of the worst-case cost we came up with above. With her adjusted annual revenue of $126,000, this is only 19%, or just less than 1/5th of her annual revenue.

And it doesn’t stop there. Armed with the data points I gave above, let’s see how this works out for the future and when the benefits from the rotation pay back.

Over the year, our hypothetical consultant, working only a 40-hour work week (I know, you can stop laughing at me now) brings in 50 x $2,520 = $126,000.  The MCM rotation represents 19% of her revenue for the year before costs.

However, let’s figure out her earning potential in that same year: (47 x $3,600) – ($12,950 + $700 + $2,247) = $153,303. That’s better than a 20% increase.

Will these numbers make sense for everyone? No, and I’m not trying to argue that they do. What I am trying to point out, though, is that the business justification for going to the rotation may actually make sense once you sit down and work out the numbers. Think about your current projects and how changes to hours and billing rates may improve your bottom line. Think about work you haven’t gotten or been unwilling to pursue because you or the customer felt it was out of your league. Take some time to play with the numbers and see if this makes sense for you.
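
If it helps, here’s a trivial PowerShell sketch you can use to run your own version of the numbers; the values below are just the assumptions from this post, so swap in your own:

$classFee      = 12950        # discounted class fee
$plane         = 700
$lodgingMeals  = 21 * 107     # shared apartment plus meals
$weeklyRevenue = 2520         # what you actually bill per week today
$lostWeeks     = 3
$cost = $classFee + $plane + $lodgingMeals + ($weeklyRevenue * $lostWeeks)
"Rotation cost: {0:C0}, which is {1:P0} of annual revenue" -f $cost, ($cost / ($weeklyRevenue * 50))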

If it does, or if you have any further questions, let me know.

Fixing interoperability problems between OCS 2007 R2 Public Internet Connectivity and AOL IM

One of the cool things you can do with OCS is connect your internal organization to various public IM clouds (MSN/Windows Live, Yahoo!, and AOL) using the Public Internet Connectivity, or PIC, feature. As you might imagine, though, PIC involves lots of fiddly bits that all have to work just right in order for there to be a seamless user experience. Recently, lots of people deploying OCS 2007 R2 have been reporting problems with PIC – specifically, in getting connectivity to the AOL IM cloud working properly.

Background

It turns out that the problem has to do with changes made to the default SSL algorithm negotiation in Windows Server 2008. If you deployed the OCS 2007 R2 Edge roles on Windows Server 2003, you’d be fine; if you used Windows 2008, you’d see problems.

When an HTTP client and server connect (and most IM protocols use HTTPS or HTTP + TLS as a firewall-friendly transport[1]), one of the first things they do is negotiate the specific suite of cryptographic algorithms that will be used for that session. The cipher suite includes three components:

  • Key exchange method – this is the algorithm that defines the way that the two endpoints will agree upon a shared symmetric key for the session. This session key will later be used to encrypt the contents of the session, so it’s important for it to be secure. This key should never be passed in cleartext – and since the session isn’t encrypted yet, there has to be some mechanism to do it. Some of the potential methods allow digital signatures, providing an extra level of confidence against a man-in-the-middle attack. There are two main choices: RSA public-private certificates and Diffie-Hellman keyless exchanges (useful when there’s no prior communication or shared set of trusted certificates between the endpoints).
  • Session cipher – this is the cipher that will be used to encrypt all of the session data. A symmetric cipher is faster to process for both ends and reduces CPU overhead, but is more vulnerable in principle to discovery and attack (as both sides have to have the same key and therefore have to exchange it over the wire). The next choice is between a streaming cipher and a cipher block chaining (CBC) cipher. For streaming, you have RC4 (40 and 128-bit variants). For CBC, you can choose RC2 (40-bit), DES (40-bit or 56-bit), 3DES (168-bit), IDEA (128-bit), or Fortezza (96-bit). You can also choose none, but that’s not terribly secure.
  • Message digest algorithm – the message digest is a hash algorithm used to create the Hashed Message Authentication Code (HMAC), which is used to help verify the integrity of the messages in the session. It’s also used to guard against an attacker trying to replay this stream in the future and fool the server into giving up information it shouldn’t. In SSL 3.0, this is just a MAC. There are three choices: null (none), MD5 (128-bit), and SHA-1 (160-bit).

Problem

Windows Server 2003 uses the following suites for TLS 1.0/SSL 3.0 connections by default:

  1. TLS_RSA_WITH_RC4_128_MD5 (RSA certificate key exchange, RC4 streaming session cipher with 128-bit key, and 128-bit MD5 HMAC; a safe, legacy choice of protocols, although definitely aging in today’s environment)
  2. TLS_RSA_WITH_RC4_128_SHA (RSA certificate key exchange, RC4 streaming session cipher with 128-bit key, and 160-bit SHA-1 HMAC; a bit stronger than the above, thanks to SHA-1 being not quite as brittle as MD5 yet)
  3. TLS_RSA_WITH_3DES_EDE_CBC_SHA (you can work out the rest)
  4. TLS_DHE_DSS_WITH_3DES_EDE_CBC_SHA
  5. TLS_RSA_WITH_DES_CBC_SHA
  6. TLS_DHE_DSS_WITH_DES_CBC_SHA
  7. TLS_RSA_EXPORT1024_WITH_RC4_56_SHA
  8. TLS_RSA_EXPORT1024_WITH_DES_CBC_SHA
  9. TLS_DHE_DSS_EXPORT1024_WITH_DES_CBC_SHA
  10. TLS_RSA_EXPORT_WITH_RC4_40_MD5
  11. TLS_RSA_EXPORT_WITH_RC2_CBC_40_MD5
  12. TLS_RSA_WITH_NULL_MD5
  13. TLS_RSA_WITH_NULL_SHA

Let’s contrast that with Windows Server 2008, which cleans out some cruft but adds support for quite a few new algorithms (new suites bolded):

  1. TLS_RSA_WITH_AES_128_CBC_SHA (Using AES 128-bit as a CBC session cipher)
  2. TLS_RSA_WITH_AES_256_CBC_SHA (Using AES 256-bit as a CBC session cipher)
  3. TLS_RSA_WITH_RC4_128_SHA
  4. TLS_RSA_WITH_3DES_EDE_CBC_SHA
  5. TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA_P256 (AES 128-bit, SHA 256-bit)
  6. TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA_P384 (AES 128-bit, SHA 384-bit)
  7. TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA_P521 (AES 128-bit, SHA 521-bit)
  8. TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA_P256 (AES 256-bit, SHA 256-bit)
  9. TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA_P384 (AES 256-bit, SHA 384-bit)
  10. TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA_P521 (AES 256-bit, SHA 521-bit)
  11. TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA_P256 (you can work out the rest)
  12. TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA_P384
  13. TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA_P521
  14. TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA_P256
  15. TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA_P384
  16. TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA_P521
  17. TLS_DHE_DSS_WITH_AES_128_CBC_SHA
  18. TLS_DHE_DSS_WITH_AES_256_CBC_SHA
  19. TLS_DHE_DSS_WITH_3DES_EDE_CBC_SHA
  20. TLS_RSA_WITH_RC4_128_MD5
  21. SSL_CK_RC4_128_WITH_MD5 (not sure)
  22. SSL_CK_DES_192_EDE3_CBC_WITH_MD5 (not sure)
  23. TLS_RSA_WITH_NULL_MD5
  24. TLS_RSA_WITH_NULL_SHA

Okay, so take a look at line 20 in the second list – see how TLS_RSA_WITH_RC4_128_MD5 got moved from first to darned near worst? Yeah, well, that’s because AES and SHA-1 are the strongest protocols of their type likely to be commonly supported, so Windows 2008 moves the suites that use them to the top of the default offer list. Unfortunately, this change causes problems with PIC to AOL.

Solution

Now that we know what the problem is, what can we do about it? For the fix, check out Scott Oseychik’s post here.
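
If you want to poke at this yourself, Windows Server 2008 lets you control the offered order through the “SSL Cipher Suite Order” group policy setting (Computer Configuration > Administrative Templates > Network > SSL Configuration Settings), which lands in the registry as a comma-separated Functions value. A hedged sketch for inspecting it – the path is from memory and the value only exists once the policy has been set, so verify before relying on it:

Get-ItemProperty -Path 'HKLM:\SOFTWARE\Policies\Microsoft\Cryptography\Configuration\SSL\00010002' -Name Functions

That said, don’t start rearranging cipher suites on a hunch; follow Scott’s post for the specifics of the AOL PIC fix.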

[1] HTTPS is really Hop Through Tightened Perimeters Simply – aka the Universal Firewall Traversal Protocol.

Defend THIS

Iowa’s Supreme Court handed down a fairly shocking unanimous decision this morning striking down the definition of marriage as “one man, one woman”, upholding a 2007 Polk County ruling.

If you follow along my blog, you probably already know that I think this is a good thing, so I won’t comment extensively on it here. However, there’s one section in the article I linked to above that just reeks of so much stupidity that I have to respond:

Maggie Gallagher, president of the National Organization for Marriage, a New Jersey group, said “once again, the most undemocratic branch of government is being used to advance an agenda the majority of Americans reject.”

“Marriage means a husband and wife. That’s not discrimination, that’s common sense,” she said in a press release. “Even in states like Vermont, where they are pushing this issue through legislatures, gay marriage advocates are totally unwilling to let the people decide these issues directly.”

Really? Ms. Gallagher, did you really just stoop to the “30 billion flies eat shit” argument to justify your position? You lose.

Okay, to unpack that for anyone who didn’t follow that train of thought:

Ms. Gallagher is relying on the tactic of telling people “the government is ignoring your opinion.” By telling people this, she’s playing on a fundamental ignorance of the design and intent of the American government system, which is the tired old myth that America = democracy = the will of the people = only tolerating Christian values. Let’s see what our founding fathers had to say about that:

It is, that in a democracy, the people meet and exercise the government in person; in a republic, they assemble and administer it by their representatives and agents. A democracy, consequently, will be confined to a small spot. A republic may be extended over a large region.
Federalist No. 14

Democracy is two wolves and a lamb voting on what to have for lunch. Liberty is a well-armed lamb contesting the vote!
Benjamin Franklin

Remember, democracy never lasts long. It soon wastes, exhausts, and murders itself.
John Adams

It cannot be emphasized too strongly or too often that this great nation was founded, not by religionists, but by Christians; not on religions, but on the Gospel of Jesus Christ. For this very reason peoples of other faiths have been afforded asylum, prosperity, and freedom of worship here.
Patrick Henry

I know no safe depository of the ultimate powers of the society but the people themselves, (A)nd if we think them not enlightened enough to exercise their control with a wholesome discretion, the remedy is not to take it from them, but to inform their discretion by education. This is the true corrective of abuses of constitutional power.
Thomas Jefferson

I have always thought that all men should be free; but if any should be slaves, it should first be those who desire it for themselves, and secondly those who desire it for others. Whenever I hear anyone arguing for slavery, I feel a strong impulse to see it tried on them personally.
Abraham Lincoln

I could go on all day and find tons of quotes, but the key threads that I’m weaving here are these:

America is not and was never intended to be a pure democracy. Remember the phrase “the tyranny of the majority”? Basically, it’s great to be in a democracy if you’re part of the 51%. Not so much to be in the 49%. Our democratic functions are not set up to allow citizens to directly decide upon laws and legislation and the handling of day-to-day governance; they are set up to elect responsible leaders who do that for us, and to give us mechanisms to take those leaders out of the picture when they fail to discharge their responsibilities. That’s the “democratic republic.” Remember the Pledge of Allegiance? “I pledge allegiance to the flag of the United States of America and to the Republic for which it stands…”

By electing responsible leaders (including legislators and judges), we are in fact giving those leaders the mandate to act in the fashion they see as best. If we don’t like what they do with that mandate, then we’d better pay attention and give them feedback. You can’t leave the people out of the equation, but you can’t directly hand them the keys to the kingdom, either. That’s why we have checks and balances, including the judicial branch of government. It is their job to say, “No, these laws are causing harm and cannot be used, even though they are popularly supported.”  The exercise of democracy should never come at the expense of depriving others of their liberties. How long did popular opinion support and uphold slavery, and how much damage did that do to our country (and continue to do today)? How long was racism enshrined in our laws? Sexism? If you’re counting upon the will of the people to make the correct choice every time, you’ve got a pretty grim track record of results.

America was designed to be a refuge for all religious belief systems, not just a narrow stripe of fundamentalist Christianity. This includes religious systems that directly challenge basic beliefs of Christianity. It was never designed to be a system that promoted Christianity over all others, even though the majority of founders were Christians, espoused Christian ideals, and wanted to see this country continue to be based on a set of morals not completely incompatible with Christianity. When push came to shove, most of the founders espoused liberty and freedom *over* Christian principles as a guiding principle for the government. They reasoned, correctly, that Christianity could flourish in an environment where liberty was pursued, but the reverse was not true (as had been graphically demonstrated). That is, the proper place for Christian values is on the individual level and in our relationships with others, not hard-wiring our specific interpretations into our functions of government. Religion + bureaucracy + power = corruption of values and lessening of liberty.

Let me leave you with this final challenge if you’re still thinking that it’s your religious duty to enshrine your notion of marriage into the laws of our nation:

Show me a comprehensive case in Scripture for collective Christian political activism. Remember the specific accusations the Pharisees made against Jesus to Pontius Pilate and his answers to Pilate in return. Remember his response to the commercialism in the Temple, how his fiercest criticisms were reserved for those who used religion to gain and maintain power. And then take a look at the agenda and funding of groups like National Organization for Marriage and Focus on the Family who are leading this fight to preserve marriage (whatever that really means) and tell me how they’re not gaining power and money from their collective activism.

ExMon released (no joke!)

If you’re tempted to think this is an April Fool’s Day joke, no worries – this is the real deal. Yesterday, Microsoft published the Exchange 2007-aware version of Exchange Server User Monitor (ExMon) for download.

“ExMon?” you ask. “What’s that?” I’m happy to explain!

ExMon is a tool that gives you a real-time look inside your Exchange servers to help find out what kind of impact your MAPI clients are having on the system. That’s right – it’s a way to monitor MAPI connections. (Sorry; it doesn’t monitor WebDAV, POP3, IMAP, SMTP, OWA, EAS, or EWS.) With this release, you can now monitor the following versions of Exchange:

  • Exchange Server 2007 SP1+
  • Exchange Server 2003 SP1+
  • Exchange 2000 Server SP2+

You can find out more about it from TechNet.

Even though the release date isn’t a celebration of April 1st, there is currently a bit of an unintentional joke, as shown by the current screenshot:

image

Note that while the Date Published is March 31, the Version is only 06.05.7543 – which is the Exchange 2003 version published in 2005, as shown below:

image

So, for now, hold off trying to download and use it. I’ll update this post when the error is fixed.

A Path of Nines

Nine months ago I stepped outside of my comfort zone and started a month of karate at the local YMCA. I didn’t expect to renew for a second month. It turns out that I love it. I’ve gotten to the point that I start dreaming about the things I’m doing, which is scary on one level and very cool on others. At any rate, I’ve had a lot of thoughts that need more time to flesh out and probably will only interest my fellow students, but I do want to share a few correspondences I’ve noticed lately between karate and the number nine.

  • There are nine belts, or kyus, between rank beginner and black belt in my school of karate (which is part of the All-Okinawan Shorin-Ryu Matsumura Karate and Kobudo Federation, or OSMKKF). As of tonight, I have passed three of them. That makes me 7th kyu — what you might call orange belt, except that we don’t actually use the orange belt (or even stripes on the belts); we just have three blue belt kyus, three green, and three brown. I like this because it helps minimize rivalry between students.
  • The blue belt kyus use the same basic kata, with what look to be minor differences for each kyu — mostly in the blocking techniques you demonstrate. The footwork, though, is the same, and it requires you to face the nine cardinal points of the compass (the normal eight plus the center position for the beginning and end of the kata). All too often we learn the specific steps of the kata and don’t stop to think about how the overall pattern looks or rhythm flows. That’s the kind of stuff I’ve started dreaming about, and man, it is cool!
  • I have learned to examine the first kata at a whole new level with each additional kyu, and I have been told that this will continue. So the very first kata they teach us unpacks into at least nine separate layers! No wonder it takes years to really master this stuff! Some students make the mistake of thinking they’ve learned everything they need to know from the earlier levels; I’ve already had at least one case of figuring out how a current technique I was mastering applied to an earlier technique, making both of them stronger as a result.
  • In a typical Tuesday evening workout, I will practice various katas an average of nine times. This typically includes polishing the kata I will next be testing for and learning the basics of the next kata. There are days this does not feel like it is enough — and that would be right. So we practice at home too; in fact, there are certain parts that I find myself practicing at work as I walk back and forth from my office to the kitchen or to co-workers’ offices. (Apparently I look really funny walking through the lobby practicing punches.)
  • For my next kyu, I start to fold in weapons work (which is the kobudo part; karate is technically only bare-hand work). I will first work with the bo staff, which is six feet or 72 inches tall — nine times eight. I’m tremendously excited to be working with the bo; somewhere in my head, the iconic definition or avatar of martial arts got associated with being a bad-ass with the staff, so now I feel like I’m finally stepping into the heart of what it means to be a martial artist. Intellectually, I realize this is silly, but it still feels true.

Don’t worry; I’m not trying to seriously assert that the number nine somehow has some sort of mystic foothold in karate (that would be number ten, which in Japanese is ju, and controls our workouts). I just noticed these and was amused. What’s been more awe-inspiring has been noticing the changes in the last nine months:

  • I’ve continued to lose weight. Granted, I’ve not experienced the same dramatic pace as I did in the first month, but it’s still a slow and steady drop. This is really cool given some of the interruptions and stressors I’ve had during these nine months that have wreaked havoc with my karate attendance.
  • My overall muscle tone has improved. You probably wouldn’t notice the difference, but I certainly do. Certain actions are a lot less effort than they used to be, and there is visible muscle definition amongst the remaining layers of pudge.
  • My endurance has increased. Right now I’m at that point where if I miss a week and a half of karate, I definitely feel it, but if I attend regularly I can make it through the workouts and not feel completely beat up. More importantly, I’m better able to keep up as the speed of some of the workouts increases; if I slow down it’s to perfect technique, not because I can’t do it.
  • My reflexes have improved. This has been the startling one for me, because as long as I can remember my reflexes have sucked. I’m still no Chuck Norris or Bruce Lee, but the other day I knocked a glass tumbler off the counter and caught it without looking directly at it. Whoa!

By some counts, these last nine months have gotten me a third of the way to black belt. I don’t feel that way; I feel that they’ve set my feet on a path that I’ll still be walking for years to come. I’m not worried about belts or kyus; that’s sensei’s job to track, not mine. I just have to get through each workout, each kata, each set of one-steps, each class having given my best and learned everything I can. The rest will take care of itself. I’d never have caught that glass if I’d been trying to learn it as a trick, but by focusing on each step while I’m at it, I’ve gotten my body — as out of shape as it still is — to a point where I can do things I’ve never been able to do before. And that, friends, is magic.

Two CCR White Papers from Missy

This actually happened last week, but I’ve been remiss in getting it posted (sorry, Missy!). Missy recently completed two Exchange 2007 white papers, both centered on the CCR story.

The first one, High Availability Choices for Exchange Server 2007: Continuous Cluster Replication or Single Copy Clustering, provides a thorough overview of the questions and issues to be considered by companies who are looking for Exchange 2007 availability:

  • Large mailbox support. In my experience, this is a major driver for Exchange 2007 migrations and for looking at CCR. Exchange 2007’s I/O performance increases have shifted the Exchange store from always being I/O bound to now sometimes being capacity bound, depending on the configuration, and providing that capacity can be extremely expensive in SCC configurations (which typically rely on SANs). CCR offers some other benefits that Missy outlines.
  • Points of failure. With SCC, you still only have a single copy of the data – making that data (and that SAN frame) a SPOF. There are mitigation steps you can take, but those are all expensive. When it comes to losing your Exchange databases, storage issues are the #1 cause.
  • Database replication. Missy takes a good look at what replication means, how it affects your environment, and why CCR offers a best-of-breed solution for Exchange database replication. She also tackles the religious issue of why SAN-based availability solutions aren’t necessarily the best solution – and why people need to re-examine the question of whether Exchange-based availability features are the right way to go.
  • RTO and RPO. These scary TLAs are popping up all over the place lately, but you really need to understand them in order to have a good handle on what your organization’s exact needs are – and which solution is going to be the best fit for you.
  • Hardware and storage considerations. Years of cluster-based availability solutions have given many Exchange administrators and consultants a blind spot when it comes to how Exchange should be provisioned and designed. These solutions have limited some of the flexibility that you may need to consider in the current economic environment.
  • Cost. Talk about money and you always get people’s attention. Missy details several areas of hidden cost in Exchange availability and shows how CCR helps address many of these issues.
  • Management. It’s not enough to design and deploy your highly available Exchange solution – if you don’t manage and monitor it, and have good operational policies and procedures, your investment will be wasted. Missy talks about several realms of management.

I really recommend this paper for anyone who is interested in Exchange availability. It’s a cogent walkthrough of the major discussion points centering around the availability debate.

Missy’s second paper, Continuous Cluster Replication and Direct Attached Storage: High Availability without Breaking the Bank, directly addresses one of the key assumptions underneath CCR – that DAS can be a sufficient solution. Years of Exchange experience have slowly moved organizations away from DAS to SAN, especially when high availability is a requirement – and many people now write off DAS solutions out of habit, without realizing that Exchange 2007 has in fact enabled a major switch in the art of Exchange storage design.

In order to address this topic, Missy takes a great look at the history of Exchange storage and the technological factors that led to the initial storage design decisions and the slow move to SAN solutions. These legacy decisions continue to box today’s Exchange organizations into a corner with unfortunate consequences – unless something breaks demand for SAN storage.

Missy then moves into how Exchange 2007 and CCR make it possible to use DAS, outlining the multiple benefits of doing so (not just cost – but there’s a good discussion of the money factor, too).

Both papers are outstanding; I highly recommend them.

Haz Firewall, Want Cheezburger

Although Windows Server 2008 offers an impressive built-in firewall, in some cases we Exchange administrators don’t want to have to deal with it. Maybe you are building a demo to show a customer, or a lab environment to reproduce an issue. Maybe you just want to get Exchange installed now and will loop back to deal with fine-tuning firewall issues later. Maybe you have some other firewall product you’d rather use. Maybe, even, you don’t believe in defense in depth – or don’t think a server-level firewall is useful.

Whatever the reason, you’ve decided to disable the Windows 2008 firewall for an Exchange 2007 server. It turns out that there is a right way to do it and a wrong way to do it.

The wrong way

image

This seems pretty intuitive to long-term Exchange administrators who are used to Windows Server 2003. The problem is, the Windows firewall service in Windows 2008 has been re-engineered and works a bit differently. It now includes the concept of profiles, a feature that is built into the networking stack at a low level, enabling Windows to identify the network you’re on and apply the appropriate configuration (such as enabling or disabling firewall rules and services).

Because this functionality is now tied into the network stack, disabling and stopping the Windows Firewall service can actually lead to all sorts of interesting and hard-to-fix errors.
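
If you’re curious which profile Windows thinks the server is on (and therefore which rule set is in effect), you can ask netsh before you change anything. These are just the standard advfirewall query commands, nothing Exchange-specific:

    netsh advfirewall show currentprofile
    netsh advfirewall show allprofiles

The first shows the settings for the active profile; the second lists all three profiles so you can see what’s on and what’s off.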

The right way

Doing it the right way involves taking advantage of those network profiles.

Method 1 (GUI):

  1. Open the Windows Firewall with Advanced Security console (Start, Administrative Tools, Windows Firewall with Advanced Security).
  2. In the Overview pane, click Windows Firewall Properties.
  3. For each network profile (Domain network, Public network, Private network) that the server or image will be operating in, set Firewall state to Off. Typically, setting the Domain network profile is sufficient for an Exchange server, unless it’s an Edge Transport box.
  4. Once you’ve set all the desired profiles, click OK.
  5. Close the Windows Firewall with Advanced Security console.

image

Method 2 (CLI):

  1. Open your favorite CLI interface: CMD.EXE or PowerShell.
  2. Type the following command: netsh advfirewall set profiles state off (see the worked example below)

    Fill in profiles with one of the following values:

    • DomainProfile — the Domain network profile. Typically the profile needed for all Exchange servers except Edge Transport.
    • PrivateProfile — the Private network profile. Typically the profile you’ll need for Edge Transport servers if the perimeter network has been identified as a private network.
    • PublicProfile — the Public network profile. Typically the profile you’ll need for Edge Transport servers if the perimeter network has been identified as a public network (which is what I’d recommend).
    • CurrentProfile — the currently selected network profile
    • AllProfiles — all network profiles
  3. Close the command prompt.

image
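
To make that concrete, here’s what I’d typically run on an internal Exchange 2007 server, where the Domain profile is the one that matters, plus a quick check that the change took. Swap in whichever profile value fits your situation:

    netsh advfirewall set domainprofile state off
    netsh advfirewall show domainprofile state

If you change your mind later, the same set command with state on turns the firewall back on for that profile.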

And there you have it – the right way to disable the Windows 2008 firewall for Exchange Server 2007, complete with FAIL/LOLcats.

Wanted: Your broken Mac mini

Life is full of synchronicity; most of the time, this is through the workings of chance, but every now and then we get to help it along. Two ships may pass in the night, but how often does the helmsman take a hand?

You’re the owner of a no-longer-working original PowerPC Mac mini. This awesome little piece of technology once rocked your world, but slowly, you moved on to bigger and better things. Maybe you upgraded; maybe it stopped working. This Mac mini, though, still hangs around, complete with a working SuperDrive. You may feel a bit of guilt over not passing it on or getting it refurbished.

I’m the owner of a proud original PowerPC Mac mini that is having problems with its SuperDrive. My mini wants to be a member of the OS X 10.4 generation but can’t boot from the internal drive, nor can it seem to find an external USB drive as a boot device.

If you’ve got a spare original Mac mini (or drive that fits) and you’re willing to part with it inexpensively, please drop me a line. No pina coladas or getting caught in the rain required.

A long-overdue status update

So, you haven’t seen a lot of me on the blog lately. The sad part is that I have three or four blog posts in various states of completion; I just seem to have very little time these days to work on them. I think part of it is that ever since my MCM Exchange 2007 class last October, I’ve felt like I had a big burden of unfinished business on my shoulders.

Happily, that’s not the case anymore. Yesterday I retook and passed the lab and received word that I have officially earned the coveted Microsoft Certified Master | Exchange 2007 certification. While I’m taking this moment to express my utmost relief about this, be assured I’ve got plenty more to say about it in an upcoming blog post, but it’ll have to wait.

I’ve also been re-awarded as an Exchange MVP — 3 years, wow! — and continue to be going full-bore with that. I have become very deeply aware that my continued presence in the Microsoft communities is in large part due to the fantastic caliber of people who are involved in them. A friend once mentioned the “open source community” as if it was a singular community and I had to laugh; from my experience, it’s anything but. Consider the following examples:

  • KDE vs. Gnome
  • Linux vs. BSD
  • Linux distro vs. Linux distro
  • Sun Java vs. IBM Java
  • Tomcat vs. other Java frameworks
  • Sendmail vs. Postfix vs. Exim
  • Bernstein vs. everyone else
  • Stallman/FSF vs. everyone else

I made the initial mental leap from “Unix IT pro who knows Windows” to being a “Windows IT pro who knows Unix” because of the management challenges I saw Active Directory and Group Policy addressing, but I stayed for the people. Including people like you, reading my blog.

On that note, since I know many of you started reading me because of seeing me at conferences: I will not be at Spring Connections this year. I know, right? Anyway, it’s all for the best; things are shaping up to be busy and it will be nice to have one year when I’m not flying to Orlando. This is even more awesome because I will be at Tech-Ed, giving both a breakout session and an Interactive Theater session. More details as we get closer. I’ve also got a great project that I’m working on that I hope to be able to announce later.

Oh, hey, have you seen 3Sharp’s new podcasting site, built entirely on the Podcasting Kit for SharePoint that we were the primary developers for? I’ve got a few podcasts in the works…so if you’ve got any questions or ideas of short subjects you’d like me to talk about, let me know!

Alright, folks — it’s late and my Xbox is calling me! (My wife and kids probably want a word with me too.)

Outlook Performance Goodness

Microsoft has recently released a pair of Outlook 2007 updates (okay, technically, they’re updates for Outlook 2007 with SP1 applied) that you might want to look at installing sooner rather than later. These two updates are together being billed as the “February cumulative update” at KB 968009, which has some interesting verbiage about how many of the fixes were originally slated to be in Outlook 2007 SP2:

The fix list for the February CU may not be identical to the fix list for SP2, but for the purposes of this article, the February CU fixes are referred to synonymously with the fixes for SP2. Also, when Office suite SP2 releases, there will not be a specific package that targets only Outlook.

Let’s start with the small one, KB 697688. This one fixes some issues with keyboard shortcuts, custom forms, and embedded Web browser controls.

Okay, with that out of the way, let’s move on to juicy KB 961752, an unlooked-for roll-up containing a delectable selection of fixes. Highlights include:

  1. Stability fixes
  2. SharePoint/Outlook integration
  3. Multiple mailbox handling behavior
  4. Responsiveness

From reports that I’ve seen, users who have applied these two patches are reporting significantly better response times in Outlook 2007 cached mode even when attaching to large mailboxes or mailboxes with folders that contain many items — traditionally, two scenarios that caused a lot of problems for Outlook because of the way the .ost stored local data. They’ve also reported that the “corrupted data file” problem that many people have complained about (close Outlook, it takes forever to shut down so writes to the .ost don’t fully happen) seems to have gone away.

Note that you may have an awkward moment after starting Outlook for the first time after applying these updates: you’re going to get a dialog something like this:

image

“Wait a minute,” you might say. “First use? Where’s my data?” Chillax [1]. It’s there — but in order to do the magic, Outlook is changing the structure of the existing .ost file. This is a one-time operation and it can take a little bit of time, depending on how much data you’ve got stuffed away in there (I’ve currently got on the order of 2GB or so, so you can draw your own rough estimates; I suspect it also depends on the number/depth of folders, items per folder, number of attachments, etc.).

Once the re-order is done, though, you get all the benefits. Faster startup, quicker shut-down, and generally more responsive performance overall. This is seriously crisp stuff, folks — I opened my Deleted Items folder (I hardly ever look in there, I just occasionally nuke it from orbit) and SNAP! everything was there as close to instantly as I can measure. No waiting for 3-5 (or 10, or 20) seconds for the view to build.


[1] A mash-up of “chill” and “relax”. This is my new favorite word.

Live from Facebook: 25 Random Things about Devin

Over on my Facebook profile, I got tagged by about five people with this whole “25 Things About Me” meme. I finally decided to respond. Am I glad I did — I’ve been having a great amount of fun with the ensuing comment thread. In fact, it’s so much fun, I figured I’d repost it here. (If you read this and my Facebook profile, you’ve already seen this; feel free to skip it.)

  1. When I was a child, I once typed out over 3/4 of my favorite book so I could have my own copy to read. I couldn’t afford to buy one at the time.
  2. I learned to read when I was four; we moved to a new house and couldn’t get TV reception, so my parents got rid of our TV. The next year, I figured out people got paid to write books. I’ve wanted to be a published writer ever since.
  3. I enjoy karate, now that I’m taking it. I know that martial arts the world over teach a variety of armed and unarmed techniques, but I’ve always secretly thought of the bo staff when I think of martial arts. Now that I get to work with the staff, I *feel* like a martial artist.
  4. I love peppermint ice cream, caramel, and Girl Scout thin mint cookies. However, my favorite dessert is chocolate chip cookies. My wife makes a killer variant: orange chocolate chip cookies. YUM!
  5. I’m a sucker for all things feline, except for some pure-bred Persians and Siamese that are too stupid to breathe. When I was a kid, I got to play with a white tiger cub; white tigers are my favorite cat. I like some breeds of dogs, but not the small yappy ones.
  6. I think that forgiveness isn’t a “get out of jail free card.” It’s a process designed to help victims divest themselves of the continuing karmic damage they inflict upon themselves and let go of any claims of vengeance or retaliation. True forgiveness does not absolve the offender of consequences, but it does open the door to mercy and breaks the cycle of anger and revenge.
  7. I hated high school. I’d home schooled for five years, then moved to a new town and started public high school. So much wasted time and energy, especially on social hierarchy games! I wonder if I would feel the same if I’d been one of the popular kids…but we’ll never know.
  8. After my son was born and my daughter was a toddler, we found out that my family has a history of autism. If you ever wondered why I was so weird, you can thank Asperger’s Syndrome. However, that only gets 65% of the blame; the rest is all me.
  9. My first trip outside North America was a speaking gig at a roadshow in Lisbon, Portugal. I’ve always wanted to visit Portugal; they were the home of some of history’s greatest navigators and explorers.
  10. I have discovered that I enjoy speaking in public; the bigger audience, the better. However, I typically dread question and answer sessions, even though I’ve been told I do them well.
  11. The first time I saw Steph I knew I would marry her, even before we were introduced. The universe gave an audible and tactile “click” that was impossible for me to miss! This is why I was able to not get all nervous around her.
  12. As I have gotten older, I have become more concerned with uncovering the structures and principles that events work on, and less concerned with arguing the particular details of a given situation. Getting axle-wrapped about details is a great way to keep anything from being resolved. Boring!
  13. My favorite food? The Cheescake Factory’s Spicy Cashew Chicken. Screw dessert — I gorge myself on the chicken. Yum! If we’re talking homemade, then it’s the pizza that my wife makes, based on a modification of my mother’s recipe.
  14. When format allows, I always leave blank lines between paragraphs. I also insist on serial commas in lists unless the style guide says otherwise. (Real writers can do whatever the style guide says, or rewrite to avoid the points they disagree with.) The sentence “I’d like to thank my parents, God and Ayn Rand” gives me all the justification I need.
  15. My daily work involves Microsoft Windows and Exchange, and I’ve just been recognized for my third year as a Microsoft Exchange MVP. If you’d told me ten years ago I’d not still be working with Unix, Sendmail, and Postfix, I’d have laughed at you.
  16. I don’t like kids, mainly because I hated being one. Adults always talked down to me and condescended in other ways. As a result, I try to never talk down to kids myself. I find they are better listeners than most adults and respond well to more advanced instructions than most adults would believe.
  17. Before the Internet got popular, I used to run an electronic BBS. I had no games and the only files I had for download were basic utilities; I specialized in message forums no one else in my area would touch. My BBS was always busy, and over 80% of my callers came from out of state.
  18. To me, the difference between a “friend” and an “acquaintance” is how much work is put into the relationship. You can’t really be a friend if both sides don’t work to make it happen.
  19. I’ve been sporting a shaved head since college, when my best friend’s dad talked me into it. Although I occasionally grow my hair out, I’m resigned to shaving my head for the rest of my life. Nothing else really works well.
  20. I have a simple philosophy about shopping: do your research and buy a well-made item that will last (even if it’s expensive) instead of buying for price and having to replace it multiple times. Your time is worth more than your money.
  21. I can’t stand thrift stores, second-hand shopping, or even most garage sales. There’s a psychic residue to most of the items there that is very unpalatable. I’ve had to learn to let Steph do her bargain-hunting thing, but she knows how to find the good items.
  22. I was never a Cub Scout, but once I got into Boy Scouts, I was a den chief to both a Cub Scout den and a Webelos Scout den. My favorite part of Boy Scouts, though, was being on the ceremonial Native American dance team for our Order of the Arrow lodge.
  23. I’ve really enjoyed the Halo universe, both video games and novels. In fact, I’d like to build a set of Mjolnir armor, and one of my friends and I are planning to build a working Warthog. Geek!
  24. I often have insomnia. Part of it is that I resent the time I lose to sleep. It feels like dying a little bit, especially because it can be a struggle to wake up again in the morning.
  25. I want my wife to be a ninja. I mean, who wouldn’t?

If you’re reading this for the first time, consider yourself tagged. Your turn! Post a link to your blog (or wherever you post your “25 Things” list) in the comments so I can go read it too.

The Facebook Experiment

Warning: the following post may not make much sense. If it does, it may sound bitter and arrogant. I apologize in advance; that’s not my goal here.

I finally got a critical mass of people dragging me into Facebook, so I’ve been doing it over the last couple of months. I entered into it with a simple rule: as long as I knew someone or could figure out what context we shared, I’d accept friend requests. I only send friend requests to people I want to be in contact with, but if someone wants to keep up with me, I’ll happily approve the request. (Remember, Asperger’s Syndrome; I may be able to fake looking like I’m socially adjusted, but underneath, I’m not.)

This resolve has been sorely tested by a number of requests I’ve gotten from people from my high school days. I am not one of those people who thinks that high school was the best time of my life. Far from it, actually. Now that I understand about Asperger’s, I have been able to go back and identify what I was doing to contribute to my misery during those years — and boy was I — but I also know that there were a bunch of people who were happy to help. I was happy to leave that town, happy to never go back, and happy — for the most part — to not try to get back into some mythical BFF state with these people that I never shared in the first place. There are some exceptions; you should know who you are. If you aren’t sure and want to know, send me a private message and ask. Don’t ask, though, unless you’re ready to be told that you’re not.

Does this mean I want people to stop requesting? No. We’re adults. (At least, we should be.) Life moves on. I’m not that same person, and I’m willing to bet you’re not either. Let’s try to get to know one another as we are now, without presuming some deeper level of friendship than really exists. It’ll be a lot easier for everyone that way, and probably a lot more fun.

Three observations, two confessions, and an apology

Observation The First: only paying a touch over $2/gallon for gas feels positively sinful.

Observation The Second: one way to survive Seattle winters is to occasionally say “screw it”, roll the window down in the car, and let the cold wet air in while pretending it’s an 80-degree summer day with blue skies.

Observation The Third: If you’re cutting back on caffeine intake and you’re down to approximately two bottles/cans of Coke a day, one should really not have one’s morning bottle of Coke during the morning commute and then thoughtlessly purchase and drink a 20oz. mocha latte (one of the approximately two a year I have) during the latter portion of that same commute. Hot damn, I can levitate right now.

Confession The First: I really like Katy Perry’s Hot N Cold. Sure, the song is pop glitter, but it’s fun pop glitter, and it makes me squee like a little girl every time I turn it on.

Confession The Second: When I say that I don’t dance, what I really mean is that I don’t dance standing up. I’ll dance at my car or desk, but I’ll do it in a way that’s deliberately bad and frightening, because I like to mess with people. You’d be amazed how well a properly timed desk dance can clear out your office of annoying project managers and co-workers.

Apology*: For the residents and fellow commuters along Avondale road between 8:20 and 8:23am who heard and saw a red Ford Focus (with a Decepticon icon on the hood) blasting Hot N Cold out the driver’s window at high volumes, I plead guilty. That was me in my overcaffeinated, car-dancing bliss. Same to the folks along 124th, especially at the 124th/Woodinville-Redmond Road intersection, who were treated to the same, only with Cyndi Lauper’s Into the Nightlife, from 8:31 to 8:34am.

* I don’t know if this is a real apology, because I can’t guarantee I won’t do it again. At least I’m honest.

GetSharp lives!

If you’ve only interacted with 3Sharp through me, Paul, Missy, and Tim, then you’ve missed a whole key aspect of the talent we’ve got here at 3Sharp. Our group (Infrastructure Solution Group or ISG, formerly known as the Platform Group) is just a small part of what goes on around here.

GetSharp is 3Sharp’s personal implementation of PKS, the Podcasting Kit for SharePoint. PKS was the brainchild of a fairish bit of the rest of the company. Quite simply, it’s podcasting for SharePoint — think something like YouTube, mixed into SharePoint with a whole lot of awesome (like the ability to use Live IDs). When I saw the first demo of what we were doing with GetSharp, I was blown away. I’m happy to have uploaded the videocast series on Exchange 2007 we did for Windows IT Pro, and I’ve got a series on virtualization I’ll be working on when I get back to work next week.

This is what I do for fun???

For the last three weeks, I’ve been on vacation.

Much of that vacation has consisted of quality Xbox 360 time, both by myself (Call of Duty: World at War for Christmas) and with Steph and Chris. (Alaric had a friend over today and we had a nice six-way Halo 3 match; the adults totally dominated the kids in team deathmatch, I might add.) However, I’d also slated doing some much-needed rebuilds on my network infrastructure here at home: migrating off of Exchange to a hosted email solution (still Exchange, just not a server *I* have to maintain), decommissioning old servers, renumbering my network, building a new firewall that can gracefully handle multiple Xbox 360s, building some new servers, and sorting through the tons of computer crap I have. All of this activity was aimed at reducing my footprint in the back room so we can unbury my desk and move Alaric’s turtle into the back room where she should have a quieter and warmer existence.

Yeah, well. Best laid plans. I’ve gotten a surprising amount of stuff done, even if I have taken over the dining room table for the week. (Gotta have room to sort out all that computer gear, y’know. Who knew I had that much cool stuff?) My progress, however, has slowed quite a bit the last couple of days as I ran into some unexpected network issues I had to work my butt off to resolve.

Except that now I think I just figured out the two causes. Combined, they made my “new” network totally unusable and masked each other in all sorts of weird and wonderful ways. It was rather reminiscent, actually, of the MCM hands-on lab. I guess I’ve been practicing for my retest.

Ah, well. I still have one day of freedom left before I head back to work. I might actually be ready to go.

Self-imposed silence

I really hate that there are things I want — nay, need — to write about right this minute and I can’t because it would be revealing too much about sensitive stuff that isn’t my place to talk about.

What good is a personal blog if you can’t use it to rant and kvetch and cry and work out dealing with the bumps the Universe throws your way?