More Exchange blogging with Trace3!

I just wanted to drop a quick note to let you all know that I’ll be cross-posting all of my Exchange related material both here and at the Trace3 blog. The Trace3 blog is a multi-author blog, so you’ll get not only all my Exchange-related content, but you’ll get a variety of other interesting discussions from a number of my co-workers.

To kick it off, I’ve updated my From Whence Redundancy? Exchange 2010 Storage Essays, Part 1 post with some new material on database reseed times and reposted it there in its entirety. Don’t worry, I’ve also updated it here.


What Exchange 2010 on Windows Datacenter Means

For many versions, Exchange Server has come in two flavors – Standard Edition and Enterprise Edition. The main difference between the two editions is the maximum number of supported mailbox databases, as shown in Table 1:

Version         Standard Edition   Enterprise Edition
Exchange 2003   1 (75GB max)       20
Exchange 2007   5                  50
Exchange 2010   5                  100

Table 1: Maximum databases per Exchange editions

However, the Exchange Server edition is not directly tied to the Windows Server edition:

  • For Exchange 2003 failover cluster mailbox servers, Exchange 2007 SCC/CCR environments [1], and Exchange 2010 DAG environments, you need Windows Server Enterprise Edition in order to get the MSCS cluster component framework.
  • For Exchange 2003 servers running purely as bridgeheads or front-end servers, or Exchange 2007/2010 HT, CAS, ET, and UM servers, you only need Windows Server Standard Edition.

I’ve seen some discussion around the fact that Exchange 2010 will install on Windows Server 2008 Datacenter Edition and Windows Server 2008 R2 Datacenter Edition, even though it’s not supported there and is not listed in the Operating System requirements section of the TechNet documentation.

HOWEVER…if we look at the Prerequisites for Exchange 2010 Server section of the Exchange Server 2010 Licensing site, we now see that Datacenter Edition is, in fact, listed, as shown in Figure 1:


Figure 1: Exchange 2010 server license comparison

This is pretty cool, and the appropriate TechNet documentation is in the process of being updated to reflect this. What this means is that you can deploy Exchange 2010 on Windows Server Datacenter Edition; the differences between editions of Windows Server 2008 R2 are found here.[2] If you take a quick scan through the various feature comparison charts in the sidebar, you might wonder why anyone would want to install Exchange 2010 on Windows Server Datacenter Edition; it’s more costly and seems to provide the same benefits. However, take a look at the technical specifications comparison; this is, I believe, the meat of the matter:

  • Both editions give you a maximum of 2 TB of RAM – more than you can realistically throw at Exchange 2010.
  • Enterprise Edition gives you support for a maximum of eight (8) x64 CPU sockets, while Datacenter Edition gives you sixty-four (64). With quad-core CPUs, this means a total of 32 cores under Enterprise vs. 256 cores under Datacenter.
  • With the appropriate hardware, you can hot-add memory in Enterprise Edition. However, you can’t perform a hot-replace, nor can you hot-add or hot-replace CPUs under Enterprise. With Datacenter, you can hot-add and hot-remove both memory and CPUs.
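To make the socket math above concrete, here’s a trivial sketch (Python purely for illustration – the cores-per-socket figure depends entirely on the CPUs you buy):

```python
# Rough physical core ceilings by Windows Server 2008 R2 edition,
# derived from the socket limits discussed above.
SOCKET_LIMITS = {"Enterprise": 8, "Datacenter": 64}

def max_cores(edition: str, cores_per_socket: int) -> int:
    """Maximum physical cores = edition's socket limit * cores per socket."""
    return SOCKET_LIMITS[edition] * cores_per_socket

print(max_cores("Enterprise", 4))   # 32  (quad-core CPUs)
print(max_cores("Datacenter", 4))   # 256
print(max_cores("Enterprise", 6))   # 48  (hexa-core CPUs)
```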

At first glance, these seem compelling in many scenarios – unless you’re familiar with the recommended maximum configurations for Exchange 2010 server sizing. If I recall correctly, the maximum recommended CPU configuration for most Exchange 2010 server roles (including multi-role servers) is 24 cores – which fits within the 8-socket limit of Enterprise Edition using quad-core CPUs.

With both Intel and AMD now offering hexa-core (6 core) CPUs, you can move up to 48 cores in Enterprise Edition. This is more than enough for any practical deployment of Exchange Server 2010 I can think of at this time, unless future service packs drastically change the CPU performance factors. Both Enterprise and Datacenter give you a ceiling of 2TB of RAM, which is far greater than required by even the most aggressively gigantic mailbox load I’d want to place on a single server. I’m having a difficult time seeing how anyone could realistically build out an Exchange 2010 server that goes beyond the performance and scalability limits of Enterprise Edition in any meaningful way.

In fact, I can think of only three reasons someone would want to run Exchange 2010 on Windows Server Datacenter Edition:

  • You have spare Datacenter Edition licenses, aren’t going to use them, and don’t want to buy more Enterprise Edition licenses. This must be a tough place to be in, but it can happen under certain scenarios.
  • You have very high server availability requirements and need the hot-add/hot-replace capabilities. This will get costly – the server hardware that supports this isn’t cheap – but if you need it, you need it.
  • You’re already running a big beefy box with Datacenter and virtualization[3]. The box has spare capacity, so you want to make use of it.

The first two make sense. The last one, though, I’d be somewhat leery of doing. Seriously, think about this – I’m spending money on monstrous hardware with awesome fault tolerance capabilities, I’ve forked over for an OS license[4] that gives me the right to unlimited virtual machines, and now I’m going to clutter up my disaster recovery operations by mixing Exchange and other applications (including virtualization) in the same host OS instance? That may be great for a lab environment, but I’d have a long conversation with any customer who wanted to do this in production. Seriously, just spin up a new VM, use Windows Server Enterprise Edition, and go to town. Whatever hardware configuration flexibility I lose by going virtual, I more than gain back by compartmentalizing my Exchange server on its own machine – along with the ability to move that virtual machine to any virtualization host I have.

So, there you have it: Exchange 2010 can now be run on Windows Server Datacenter Edition, which means yay! for options. But in the end, I don’t expect this to make a difference for any of the deployments I’m likely to be working on. This is a great move for the small handful of customers who really need it.

[1] MSCS is not required for Exchange 2007 SCR, although manual target activation can be easier in some scenarios if your target is configured as a single passive node cluster.

[2] From what I can tell, the same specs seem to be valid for Windows Server 2008, with the caveat that since Windows Server 2008 R2 doesn’t offer a 32-bit version, the chart doesn’t give that information. However, since Exchange 2010 is x64 only, this is a moot point.

[3] This is often an attractive option, since you can host an unlimited number of Windows Server virtual machines without having to buy further Windows Server licenses for them.

[4] Remember that Datacenter is not licensed at a flat cost per server like Enterprise is; it’s licensed per socket. The beefier the machine you run it on, the more you pay.


Things They Forgot

Pat Robertson’s comments on Haiti basically boil down to “they got what was coming to them.” Mr. Robertson, I think you forgot Matthew 25:34-46 (KJV):

34Then shall the King say unto them on his right hand, Come, ye blessed of my Father, inherit the kingdom prepared for you from the foundation of the world: 35For I was an hungred, and ye gave me meat: I was thirsty, and ye gave me drink: I was a stranger, and ye took me in: 36Naked, and ye clothed me: I was sick, and ye visited me: I was in prison, and ye came unto me. 37Then shall the righteous answer him, saying, Lord, when saw we thee an hungred, and fed thee? or thirsty, and gave thee drink? 38When saw we thee a stranger, and took thee in? or naked, and clothed thee? 39Or when saw we thee sick, or in prison, and came unto thee? 40And the King shall answer and say unto them, Verily I say unto you, Inasmuch as ye have done it unto one of the least of these my brethren, ye have done it unto me.

41Then shall he say also unto them on the left hand, Depart from me, ye cursed, into everlasting fire, prepared for the devil and his angels: 42For I was an hungred, and ye gave me no meat: I was thirsty, and ye gave me no drink: 43I was a stranger, and ye took me not in: naked, and ye clothed me not: sick, and in prison, and ye visited me not. 44Then shall they also answer him, saying, Lord, when saw we thee an hungred, or athirst, or a stranger, or naked, or sick, or in prison, and did not minister unto thee? 45Then shall he answer them, saying, Verily I say unto you, Inasmuch as ye did it not to one of the least of these, ye did it not to me. 46And these shall go away into everlasting punishment: but the righteous into life eternal.

Rush Limbaugh may have forgotten the above as well. In claiming that Obama is using humanitarian aid for political profit, he definitely seems to have forgotten Matthew 7:15-20:

15 Beware of false prophets, which come to you in sheep’s clothing, but inwardly they are ravening wolves. 16 Ye shall know them by their fruits. Do men gather grapes of thorns, or figs of thistles? 17 Even so every good tree bringeth forth good fruit; but a corrupt tree bringeth forth evil fruit. 18 A good tree cannot bring forth evil fruit, neither can a corrupt tree bring forth good fruit. 19 Every tree that bringeth not forth good fruit is hewn down, and cast into the fire. 20 Wherefore by their fruits ye shall know them.

If that last passage seems a bit murky, here’s a quote from C. S. Lewis’s The Last Battle (the last book of the Chronicles of Narnia) that I have always loved. The speaker is a Calormene soldier, Emeth, who has had a life-changing encounter with Aslan during the last hours of Narnia:

He answered, Child, all the service thou hast done to Tash, I account as service done to me. Then by reasons of my great desire for wisdom and understanding, I overcame my fear and questioned the Glorious One and said, Lord, is it then true, as the Ape said, that thou and Tash are one? The Lion growled so that the earth shook (but his wrath was not against me) and said, It is false. Not because he and I are one, but because we are opposites, I take to me the services which thou hast done to him. For I and he are of such different kinds that no service which is vile can be done to me, and none which is not vile can be done to him. Therefore if any man swear by Tash and keep his oath for the oath’s sake, it is by me that he had truly sworn, though he know it not, and it is I who reward him. And if any man do a cruelty in my name, then, though he says the name Aslan, it is Tash whom he serves and by Tash his deed is accepted. Dost thou understand, Child?

By their fruits ye shall know them…whatever their claims.


Poor Google? Not.

Since yesterday, the Net has been abuzz because of Google’s blog posting about their discovery they were being hacked by China. Almost every response I’ve seen has focused on the attempted hacking of the mailboxes of Chinese human rights activists.

That’s exactly where Google wants you to focus.

Let’s take a closer look at their blog post.

Paragraph 1:

In mid-December, we detected a highly sophisticated and targeted attack on our corporate infrastructure originating from China that resulted in the theft of intellectual property from Google.

Paragraph 2:

As part of our investigation we have discovered that at least twenty other large companies from a wide range of businesses–including the Internet, finance, technology, media and chemical sectors–have been similarly targeted.

Whoa. That’s some heavy-league stuff right there. Coordinated, targeted commercial espionage across a variety of vertical industries. Google first accuses China of stealing its intellectual property, then says that they weren’t the only ones. Mind you, industry experts – including the United States government – have been saying the same thing for years. Cries of “China hacked us!” happen relatively frequently in the IT security industry, enough so that it blends into the background noise after a while.

My question is: why, exactly, did Google think this wouldn’t happen to them? They’re a big fat juicy target on many levels. Gmail with thousands upon thousands of juicy mailboxes? Check! Search engine code and data that allows sophisticated monitoring and manipulation of Internet queries? Check! Cloud-based office documents that just might contain some competitive value? Check!

My second question is: why, exactly, is Google trying to shift the focus of the story away from the IP theft (which by their own report was successful) and cloak their actions in the “oh, noes, China tried to grab dissidents’ email” moral veil they’re using?

Paragraph 3:

Second, we have evidence to suggest that a primary goal of the attackers was accessing the Gmail accounts of Chinese human rights activists. Based on our investigation to date we believe their attack did not achieve that objective. Only two Gmail accounts appear to have been accessed, and that activity was limited to account information (such as the date the account was created) and subject line, rather than the content of emails themselves.

Two accounts, people, and the attempt wasn’t even fully successful. And the moral outrage shimmering from the screen in Paragraph 4, when Google says that “dozens” of accounts were accessed by third parties not through any sort of security flaw in Google, but rather through what is probably malware, is enough to knock you over.

Really, Google? You’re just now tumbling to the fact that people’s GMail accounts are getting hacked through malware?

I don’t buy the moral outrage. I think the meat of the matter is back in paragraph 1. I believe that the rest of the outrage is a smokescreen to repaint Google into the moral high ground for their actions, when from the sidelines here it certainly looks like Google chose knowingly to play with fire and is now suddenly outraged that they, too, got burned.

Google, you have enough people willing to play along with your attempt to be the victim. I’m not one of them. You compromised human rights principles in 2006 and knowingly put your users into harm’s way. “Do no evil,” my ass.


From Whence Redundancy? Exchange 2010 Storage Essays, part 1

Updated 4/13 with improved reseed time data provided by item #4 in the Top 10 Exchange Storage Myths blog post from the Exchange team.

Over the next couple of months, I’d like to slowly sketch out some of the thoughts and impressions that I’ve been gathering about Exchange 2010 storage over the last year or so and combine them with the specific insights that I’m gaining at my new job. In this inaugural post, I want to tackle what I have come to view as the fundamental question that will drive the heart of your Exchange 2010 storage strategy: will you use a RAID configuration or will you use a JBOD configuration?

In the interests of full disclosure, the company I work for now is a strong NetApp reseller, so of course my work environment is conducive to designing Exchange in ways that make it easy to sell the strengths of NetApp kit. However, part of the reason I picked this job is precisely because I agree with how they address Exchange storage and how I think the Exchange storage paradigm is going to shake out in the next 3-5 years as more people start deploying Exchange 2010.

In Exchange 2010, Microsoft re-designed the Exchange storage system to target what we can now consider to be the lowest common denominator of server storage: a directly attached storage (DAS) array of 7200 RPM SATA disks in a Just a Box of Disks (JBOD) configuration. This DAS/JBOD/SATA (what I will now call DJS) configuration has been an unworkable configuration for Exchange for almost its entire lifetime:

  • The DAS piece certainly worked for the initial versions of Exchange; that’s what almost all storage was back then. Big centralized SANs weren’t part of the commodity IT server world, reserved instead for the mainframe world. Server administrators managed server storage. The question was what kind of bus you used to attach the array to the server. However, as Exchange moved to clustering, it required some sort of shared storage. While a shared SCSI bus was possible, it not only felt like a hack, but also didn’t scale well beyond two nodes.
  • SATA, of course, wasn’t around back in 1996; you had either IDE or SCSI. SCSI was the serious server administrator’s choice, providing better I/O performance for server applications, as well as faster bus speeds. SATA, and its big brother SAS, both are derived from the lessons that years of SCSI deployments have provided. Even for Exchange 2007, though, SATA’s poor random I/O performance made it unsuitable for Exchange storage. You had to use either SAS or FC drives.
  • RAID has historically been a requirement for Exchange deployments for two reasons: to combine enough drive spindles together for acceptable I/O performance (back when disks were smaller than mailbox databases), and to ensure basic data redundancy. Redundancy was especially important once Exchange began supporting shared storage clustering, which required both the aggregate I/O performance only achievable with expensive disks and interfaces, and a reduced chance of a storage failure becoming a single point of failure.

If you look at the marketing material for Exchange 2010, you would certainly be forgiven for thinking that DJS is the only smart way to deploy Exchange 2010, with SAN, RAID, and non-SATA systems supported only for those companies caught in the mire of legacy deployments. However, this isn’t at all true. There are a growing number of Exchange experts (and not just those of us who either work for storage vendors or resell their products) who think that while DJS is certainly an interesting option, it’s not one that’s a good match for every customer.

In order to understand why DJS is truly possible in Exchange 2010 – and more importantly, to begin to understand where DJS configurations are a good fit and what underlying conditions and assumptions you need to meet in order to get the most value from DJS – we need to separate these three dimensions and discuss each on its own.


While I will go into more detail on all three dimensions at a later date, I want to focus on the JBOD vs. RAID question now. If you need some summaries, then check out fellow Exchange MVP (and NetApp consultant) John Fullbright’s post on the economics of DAS vs. SAN, as well as Microsoft’s Matt Gossage and his TechEd 2009 session on Exchange 2010 storage. Although there are good arguments for diving into drive technology or storage connection debates, I’ve come to believe that the central philosophical question you must answer in your Exchange 2010 design is at what level you will keep your data redundant. Until Exchange 2007, you had only one option: keeping your data redundant at the disk controller level. Using RAID technologies, you had two copies of your data[1]. Because you had a second copy of the data, shared storage clustering solutions could be used to provide availability for the mailbox service.

With Exchange 2007’s continuous replication features, you could add in data redundancy at the application level and avoid the dependency of shared storage; CCR creates two copies, and SCR can be used to create one or more additional copies off-site. However, given the realities of Exchange storage, for all but the smallest deployments, you had to use RAID to provide the required number of disk spindles for performance. With CCR, this really meant you were creating four copies; with SCR, you were creating an additional two copies for each target replica you created.

This is where Exchange 2010 throws a wrench into the works. By virtue of a re-architected storage engine, it’s possible under specific circumstances to design a mailbox database that will fit on a single drive while still providing acceptable performance. The reworked continuous replication options, now simplified into the DAG functionality, create additional copies at the application level. If you hit that sweet spot of the 1:1 database-to-disk ratio, then you only have a single copy of the data per replica and get an n-1 level of redundancy, where n is the number of replicas you have. This is clearly far more efficient for disk usage…or is it? The full answer is complex; the simple answer is, “In some cases.”

In order to get the 1:1 database to disk ratio, you have to follow several guidelines:

  1. Have at least three replicas of the database in the DAG, regardless of which sites they are in. Doing so allows you to place both the EDB and transaction log files on the same physical drive, rather than separating them as you did in previous versions of Exchange.
  2. Ensure that you have at least two replicas per site. The reason for this is that unlike Exchange 2007, you can reseed a failed replica from another passive copy. This allows you to avoid reseeding over your WAN, which is something you do not want to do.
  3. Size your mailbox databases to include no more users than will fit in the drive’s performance envelope. Although Exchange 2010 converts many of the random I/O patterns to sequential, giving better performance, not all has been converted, so you still have to plan against the random I/O specs.
  4. Ensure that write transactions can get written successfully to disk. Use a battery-backed caching controller for your storage array to ensure the best possible performance from the disks. Use write caching for the physical disks, which means ensuring each server hosting a replica has a UPS.
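To illustrate guideline #3, here’s a rough back-of-the-envelope sketch. The IOPS figures below are illustrative assumptions only – not Microsoft guidance – so substitute numbers from your own user profiling and your disk vendor’s specifications:

```python
# Sketch: how many mailboxes fit inside one disk's random-I/O envelope?
# All three constants are assumptions for illustration, not guidance.
DISK_RANDOM_IOPS = 55     # assumed: typical 7200 RPM SATA drive
IOPS_PER_MAILBOX = 0.10   # assumed: per-mailbox random I/O profile
HEADROOM = 0.80           # assumed: leave 20% of the envelope unused

def max_mailboxes_per_disk(disk_iops=DISK_RANDOM_IOPS,
                           per_mailbox=IOPS_PER_MAILBOX,
                           headroom=HEADROOM):
    """Mailboxes one disk can serve, rounded to the nearest whole mailbox."""
    return round(disk_iops * headroom / per_mailbox)

print(max_mailboxes_per_disk())  # 440 with the assumptions above
```

With those assumed numbers, a single SATA spindle tops out around 440 mailboxes – which is exactly why database sizing, not raw capacity, becomes the limiting factor in a DJS design.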

At this point, you probably have disk capacity to spare, which is why Exchange 2010 allows the creation of archive mailboxes in the same mailbox database. All of the user’s data is kept at the same level of redundancy, and the archived data – which is less frequently accessed than the mainline data – is stored without additional significant disk or I/O penalty. This all seems to indicate that JBOD is the way to go, yes? Two copies in the main site, two off-site DR copies, and I’m using cheaper storage with larger mailboxes and only four copies of my data instead of the minimum of six I’d have with CCR+SCR (or the equivalent DAG setup) on RAID configurations.

Not so fast. Microsoft’s claims around DJS configurations usually talk about the up-front capital expenditures. There’s more to a solid design than just the up-front storage price tag, and even if the DJS solution does provide savings in your situation, that is only the start. You also need to think about the lifetime of your storage and all the operational costs. For instance, what happens when one of those 1:1 drives fails?

Well, if you bought a really cheap DAS array, your first indication will be when Exchange starts throwing errors and the active copy moves to one of the other replicas. (You are monitoring your Exchange servers, right?) More expensive DAS arrays usually directly let you know that a disk failed. Either way, you have to replace the disk. Again, with a cheap white-box array, you’re on your own to buy replacement disks, while a good DAS vendor will provide replacements within the warranty/maintenance period. Once the disk is replaced, you have to re-establish the database replica. This brings us to the wonderful manual process known as database reseeding, which is not only a manual task, but can take quite a significant amount of time – especially if you made use of archival mailboxes and stuffed that DJS configuration full of data. Let’s take a closer look at what this means to you.

[Begin 4/13 update]

There’s a dearth of hard information out there about what types of reseed throughputs we can achieve in the real world, and my initial version of this post, where I assumed 20GB/hour as an “educated guess,” earned me a bit of ribbing in some quarters. In my initial example, I said that if we can reseed 20GB of data per hour (from a local passive copy, to avoid the I/O hit to the active copy), that’s 10 hours for a 200GB database, 30 hours for a 600GB database, or 60 hours – two and a half days! – for a 1.2 TB database[2].

According to the Top 10 Exchange Storage Myths post on the Exchange team blog, 20GB/hour is way too low; in their internal deployments, they’re seeing between 35 and 70GB per hour. How would these speeds affect reseed times in my examples above? Well, let’s look at Table 1:

Table 1: Example Exchange 2010 Mailbox Database reseed times

Database Size   Reseed Throughput   Reseed Time
200GB           20GB/hr             10 hours
200GB           35GB/hr             6 hours
200GB           50GB/hr             4 hours
200GB           70GB/hr             3 hours
600GB           20GB/hr             30 hours
600GB           35GB/hr             18 hours
600GB           50GB/hr             12 hours
600GB           70GB/hr             9 hours
1.2TB           20GB/hr             60 hours
1.2TB           35GB/hr             35 hours
1.2TB           50GB/hr             24 hours
1.2TB           70GB/hr             18 hours
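The values in Table 1 are nothing fancier than database size divided by throughput, rounded up to the next whole hour; in Python:

```python
import math

def reseed_hours(db_size_gb: float, throughput_gb_per_hr: float) -> int:
    """Hours to reseed a database copy, rounded up to a whole hour."""
    return math.ceil(db_size_gb / throughput_gb_per_hr)

print(reseed_hours(200, 50))    # 4 hours
print(reseed_hours(600, 70))    # 9 hours
print(reseed_hours(1200, 20))   # 60 hours -- two and a half days
```

Plug in whatever throughput you actually measure on your replication network; the point is that the reseed window scales linearly with database size.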

As you can see, reseed time can be a key variable in a DJS design. In some cases, depending on your business needs, these times could make or break whether this is a good design. I’ve done some asking around and found that reseed times in the field are all over the chart. Several people talked to me at the MVP Summit and asked under what conditions I’d seen 20GB/hour, because in their experience that figure was too high. Astrid McClean and Matt Gossage of Microsoft had a great discussion with me and obviously felt that 20GB/hour is way too low.

Since then, I’ve received a lot of feedback and like I said, it’s all over the map. However, I’ve yet to hear anyone outside of Microsoft publicly state a reseed throughput higher than 20GB/hour. What this says to me is that getting the proper network design in place to support a good reseed rate hasn’t been a big point in deployments so far, and that in order to make a DJS design work, this may need to be an additional consideration.

If your replication network is designed to handle the amount of traffic required for normal DAG replication and doesn’t have sufficient throughput to handle reseed operations, you may be hurting yourself in the unlikely event of suffering multiple simultaneous replica failures on the same mailbox database.

This is a bigger concern for shops that have a small tolerance for any given drive failure. In most environments, one of the unspoken effects of a DJS DAG design is that you are trading number of replicas – and database-level failover – for replica rebuild time. If you’re reduced from four replicas down to three, or three down to two during the time it takes to detect the disk failure, replace the disk, and complete the reseed, you’ll probably be okay with that taking a longer period of time as long as you have sufficient replicas.

All during the reseed time, you have one fewer replica of that database to protect you. If your business processes and requirements don’t give you that amount of leeway, you either have to design smaller databases (and waste the disk capacity, which brings us right back to the good old bad days of Exchange 2000/2003 storage design) or use RAID.

[End 4/13 update]

Now, with a RAID solution, we don’t have that same problem. We still have a RAID volume rebuild penalty, but that’s happening inside the disk shelf at the controller, not across our network between Exchange servers. And with a well-designed RAID solution such as generic RAID 10 (1+0) or NetApp’s RAID-DP, you can actually survive the loss of more disks at the same time. Plus, a RAID solution gives me the flexibility to populate my databases with smaller or larger mailboxes as I need, and aggregate out the capacity and performance across my disks and databases. Sure, I don’t get that nice 1:1 disk to database ratio, but I have a lot more administrative flexibility and can survive disk loss without automatically having to begin the reseed dance.

Don’t get me wrong – I’m wildly enthusiastic that I as an Exchange architect have the option of designing to JBOD configurations. I like having choices, because that helps me make the right decisions to meet my customers’ needs. And that, in the end, is the point of a well-designed Exchange deployment – to meet your needs. Not the needs of Microsoft, and not the needs of your storage or server vendors. While I’m fairly confident that starting with a default NetApp storage solution is the right choice for many of the environments I’ll be facing, I also know how to ask the questions that lead me to consider DJS instead. There’s still a place for RAID at the Exchange storage table.

In further installments over the next few months, I’ll begin to address the SATA vs. SAS/FC and DAS vs. SAN arguments as well. I’ll then try to wrap it up with a practical and realistic set of design examples that pull all the pieces together.

[1] RAID-1 (mirroring) and RAID-10 (striping and mirroring) both create two physical copies of the data. RAID-5 does not, but it allows the loss of a single drive – effectively giving you a virtual second copy of the data.

[2] Curious why I picked these database sizes? 200GB is the recommended maximum size for Exchange 2007 (due to backup limitations), and 600GB/1.2TB are the realistic recommended maximums you can get from 1TB and 2TB disks today in a DJS replica-per-disk deployment; you need to leave room for the content index, transaction logs, and free space.


A Virtualization Metaphor

This is a rare kind of blog post for me, because I’m basically copying a discussion that rose from one of my Twitter/Facebook status updates earlier today:

I wish I could change the RAM, CPU configuration on running VMs in #VMWare and have the changes apply on next reboot.

This prompted one of my nieces, a lovely and intelligent young lady in high school, to ask me to say that in English.

I pondered just hand-waving it, but I was loath to do so. Like I said, she’s intelligent. I firmly believe that kids live up to your expectations; if you talk down to them and treat them like they’re dumb because that’s what you expect, they’re happy to be that way. On the other hand, if you expect them to be able to understand concepts given the proper explanations, even if they may not immediately grasp the fine points, I’ve found that kids are actually quite able to do so – better than many adults, truth be told.

So, this is my answer:

The physical machinery of computers is called hardware. The programs that run on them (Windows, games, etc.) are software.
VMware is software that allows you to create virtual machines. That is, instead of buying (for example) 10 computers to do different tasks and having most of them sit with unused memory and processor power, you buy one or two really beefy computers and run VMware. That allows you to create virtual machines in software, so those two computers become 10. I don’t have to buy quite as much hardware because each virtual machine only uses the resources it needs, leaving the rest for the other virtual machines.

However, one of the problems with VMware currently is that if you find you’ve given a virtual machine too much memory or processor (or not enough), you have to shut it down, make the change, then start it back up. I want the software to be smart enough to take the change *now* and automatically apply it when it can, such as when the virtual machine is rebooting. For a physical computer, this makes sense – I have to power it down, crack the case open, put memory in, etc. – but for a virtual computer, it should be able to be done in software.

Think of it this way: hardware is like a closet. You can build a big closet or a small closet or a medium closet, but each closet holds a finite amount of stuff. Software is the stuff you put in the closet — clothes, shoes, linens, etc. You can dump a bunch of stuff into a big closet, but doing so makes it cluttered and hard to use. So if you use multiple smaller closets, you’re wasting space because you probably won’t fill every one exactly.

In this metaphor, virtualization is like a closet organizer system. You can add a clothing rod here to hang dresses and blouses on, and underneath that add a shelf or two for shoes, while to the side you have more shelves for pants and towels and other stuff. You give up a little bit of closet space to the organizer, but you keep everything organized and clutter-free, which means you’re better off and spend less time keeping everything in order.

Of course, this metaphor fails on my original point, because it totally makes sense that you have to take all the stuff off the shelves before moving those shelves around. In the world of software, though, that limitation doesn’t necessarily apply — it’s just that the right people didn’t think of it at the right time.


I came close to busting out Visio and starting to diagram some of this. I decided not to.

Edit: I don’t have to diagram it! Thank you, Ikea, and your lovely KOMPLEMENT wardrobe organizer line!

Ikea KOMPLEMENT organizer as virtualization software


North Pole data leakage woes

Not even old Saint Nick is immune from the need for a good data management and protection regime.

First, we have confirmation that his naughty and nice database has been hacked.

Now, there are credible rumors that the North Pole CIO has been covering up a years-long, systemic problem with Santa losing mobile devices. According to unidentified sources, the list of allegations includes:

  • Lack of priority for safeguarding key data, especially through mobile systems. Recent refits for the sled have focused on tracking transponders for “greater publicity,” with no corresponding upgrades to the mobile IT systems. These systems are specifically characterized as “obsolete 286 systems running DOS and home-brew Paradox applications written by some dentist in his spare time.”
  • Habitual problems with smartphones. In order to ensure inexpensive world-wide access, Santa’s system includes the use of multiple handsets from strategically selected regional carriers. “In the last several years, Santa has yet to come back from his Christmas Eve run without having lost at least three of his devices,” one insider claims, “and of course we don’t have remote wipe capabilities. That would require him spending money.”
  • Lax information and network practices, including no formal security policies or processes. Remote accesses aren’t even protected via SSL, according to sources, since “anyone who’s so cheap they haven’t updated stock PR footage of elves making wooden toys isn’t likely to shell out for a respected SSL certificate or PKI infrastructure.”

It will take time to gather confirmation of these claims, but if they are true, it shows a shocking disregard for basic security best practices at the North Pole.


Busting the Exchange Trusted Subsystem Myth

It’s amazing what kind of disruption leaving your job, looking for a new job, and starting to get settled in to a new job can cause in your routines. Like blogging. Who knew?

At any rate, I’m back with some cool Exchange blogging. I’ve been getting a chance to dive into an “All-Devin, All-Exchange, All The Time” groove and it’s been a lot of fun, some of the details of which I hope to be able to share with you in upcoming months. In the process, I’ve been building a brand new Exchange 2010 lab environment and ran smack into a myth that seems to be making the rounds among people who are deploying Exchange 2010. This myth gives bum advice to those of you who are deploying an Exchange 2010 DAG and not using an Exchange 2010 Hub Transport as your File Share Witness (FSW). I call it the Exchange Trusted Subsystem Myth, and the first hint of it I can find seems to be this blog post. However, that same advice seems to have gotten around the net, as evidenced by this almost word-for-word copy or this posting that links to the first one. Like many myths, this one is pernicious not because it’s completely wrong, but because it works even though it’s wrong.

If you follow the Exchange product group’s deployment assumptions, you’ll never run into the circumstance this myth addresses; the FSW is placed on an Exchange 2010 HT role in the organization. Although you can specify the FSW location (server and directory) or let Exchange pick a server and directory for you, the FSW share isn’t created during the initial configuration of the DAG (as documented by fellow Exchange MVP Elan Shudnow and the “Witness Server Requirements” section of the Planning for High Availability and Site Resilience TechNet topic). Since it’s being created on an Exchange server as the second member of the DAG is joined, Exchange has all the permissions it needs on the system to create the share. If you elect to put the share on a non-Exchange server, then Exchange doesn’t have permissions to do it. Hence the myth:

  1. Add the FSW server’s machine account to the Exchange Trusted Subsystem group.
  2. Add the Exchange Trusted Subsystem group to the FSW server’s local Administrators group.

The sad part is, only the second action is necessary. True, doing the above will make the FSW work, but it will also open a much wider hole in your security than you need or want. Let me show you from my shiny new lab! In this configuration, I have three Exchange systems: EX10MB01, EX10MB02, and EX10MB03. All three systems have the Mailbox, Client Access, and Hub Transport roles. Because of this, I want to put the FSW on a separate machine. I could have used a generic member server, but I specifically wanted to debunk the myth, so I picked my DC EX10DC01 with malice aforethought.
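To be explicit, the only change the witness server actually needs is that second group membership; here’s a minimal sketch, assuming your domain’s NetBIOS name is CONTOSO (swap in your own), run from an elevated prompt on the FSW server:

```powershell
# Add the Exchange Trusted Subsystem group to the local Administrators group.
# On a DC (as in my lab), this targets the domain's Builtin\Administrators group.
net localgroup Administrators "CONTOSO\Exchange Trusted Subsystem" /add
```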

  • In Figure 1, I show adding the Exchange Trusted Subsystem group to the Builtin/Administrators group on EX10DC01. If this weren’t a domain controller, I could add it to the local Administrators group instead, but DCs require tinkering. [1]

Figure 1: Membership of the Builtin/Administrators group on EX10DC01

  • In Figure 2, I show the membership of the Exchange Trusted Subsystem group. No funny business up my sleeve!

Figure 2: Membership of the Exchange Trusted Subsystem group

  • I now create the DAG object, specifying EX10DC01 as my FSW server and the C:\EX10DAG01 directory so we can see if it ever gets created (and when).
  • In Figure 3, I show the root of the C:\ drive on EX10DC01 after adding the second Exchange 2010 server to the DAG. Now, the directory and share are created, without requiring the server’s machine account to be added to the Exchange Trusted Subsystem group.

Figure 3: The FSW created on EX10DC01
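For reference, the EMS equivalent of the steps above looks something like this sketch, using the server and directory names from my lab (adjust for your environment):

```powershell
# Create the DAG, pointing the witness at the non-Exchange server EX10DC01
New-DatabaseAvailabilityGroup -Name EX10DAG01 `
    -WitnessServer EX10DC01 -WitnessDirectory C:\EX10DAG01

# The FSW directory and share only appear once the second member joins
Add-DatabaseAvailabilityGroupServer -Identity EX10DAG01 -MailboxServer EX10MB01
Add-DatabaseAvailabilityGroupServer -Identity EX10DAG01 -MailboxServer EX10MB02
```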

I suspect that this bad advice came about through a combination of circumstances, including an improper understanding of Exchange caching of Active Directory information and of when the FSW is actually created. However it came about, though, it needs to be stopped, because any administrator who configures their Exchange organization this way is opening a big fat hole in the Exchange security model.

So, why is adding the machine account to the Exchange Trusted Subsystem group a security hole? The answer lies in Exchange 2010’s shift to Role Based Access Control (RBAC). In previous versions of Exchange, you delegated permissions directly to Active Directory and Exchange objects, allowing users to perform actions directly from their security context. If they had the appropriate permissions, their actions succeeded.

In Exchange 2010 RBAC, this model goes away; you now delegate permissions by telling RBAC which operations given groups, policies, or users can perform, then assigning group memberships or policies as needed. When the EMS cmdlets run, they do so as the local machine account; since the local machine is an Exchange 2010 server, this account has been added to the Exchange Trusted Subsystem group. This group has been delegated the appropriate access entries on Active Directory and Exchange database objects, as described in the Understanding Split Permissions TechNet topic. For a comprehensive overview of RBAC and how all the pieces fit together, read the Understanding Role Based Access Control TechNet topic.
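To make the contrast concrete, here’s a sketch of how delegation looks under RBAC; the role name is one of the built-in Exchange 2010 roles, while the user is hypothetical:

```powershell
# Delegate recipient management by creating a role assignment,
# not by ACLing Active Directory objects directly
New-ManagementRoleAssignment -Role "Mail Recipients" -User "helpdesk.user"

# Trace which role assignments give that user their effective rights
Get-ManagementRoleAssignment -GetEffectiveUsers |
    Where-Object { $_.EffectiveUserName -eq "helpdesk.user" }
```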

By improperly adding a non-Exchange server to this group, you’re now giving that server account the ability to read and change any Exchange-related object or property in Active Directory or Exchange databases. Obviously, this is a hole, especially given the relative ease with which one local administrator can get a command line prompt running as one of the local system accounts.

So please, do us all a favor: if you ever hear or see someone passing around this myth, please, link them here.


[1] Yes, making a DC the FSW node also grants much broader permissions than necessary: the Exchange Trusted Subsystem group is now a member of the domain’s built-in Administrators group. This is probably not what you want to do, so really, don’t do this outside of a demo lab.


Support Our Scout

Edit 11/11/09 to remove the embedded video and replace it with a link. It was messing up the layout and I need to do more research to figure out how to embed videos inline.

I love living in the future. First, though, watch this video that Alaric and I made.

I was a Boy Scout for close to three years. I started as a Boy Scout; I missed Cub Scouts, including Webelos Scout. When I was in Scouting, we had to go door-to-door to do our fundraisers, or spend a lot of time with our relatives over the phone. I hated doing it, for reasons that didn’t become clear until much later in life when I began grappling with autism and Asperger’s. However, I have a lot of good memories of Scouting; it did a lot for me and it was a valuable part of my childhood.

Steph and I wanted Alaric to experience Scouting. Even though the modern BSA has some characteristics that I don’t agree with, I’ve come to the decision that first and foremost, Scouting is about the boys. Scouting needs intelligent, reasonable adults of all persuasions to help drive the program. By being part of Scouting, Alaric will learn and do things Steph and I can’t give him on our own; by having us there with him, Alaric will learn how to deal with people from differing backgrounds in a diplomatic and productive manner.

Over the summer, Alaric has really seen what a good thing Scouting is. He even got me to go to Scout Camp with him for four days in July, and I must admit I even had fun. It was a great experience for both of us, including facing down and conquering some challenges.

Unlike many Scout packs and troops, Alaric’s pack works on the schedule of the school year. As a result, they do their major fundraising push at the beginning of the school year with a number of activities. Alaric’s already helped out pulling Hire-A-Scout wagons at the local auto swap meet and had a great time. However, the major source of operating funds is the traditional Trail’s End popcorn fundraiser. Trail’s End, if you don’t know, has been the go-to-source for Scout fundraising for a long time, and they offer some of the best popcorn on the planet.

Over the past few weeks, we’ve been rather hectic and busy and haven’t really had time to coach Alaric on his first door-to-door sales campaign. (Poor guy seems to have the same issues I did when I was his age, so it was pretty painful.) This last week, I came up with what I hope is a brainstorm: harness the power of the Internet to get Alaric’s sales pitch out there. So, you get to enjoy the results: the following video where Alaric and I pitch popcorn to YOU, the faithful reader. And because this is the future, Trail’s End even got with the program: they now allow you to purchase online, supporting a specific Scout, and have the product shipped directly to your door!

Go to Trail’s End to support Alaric’s fundraising for his pack

Thank you for your support!


Leaving 3Sharp

3Sharp has been a fantastic place to work; for the last six and a half years, my co-workers and I have walked the road together. One of the realities of growth, though, is that you often reach the fork in the road where you have to move down different paths. Working with Paul, Tim, Missy, Kevin, and the rest of the folks who have been part of the Platform Services Group here at 3Sharp over the years has been a wild journey, but we were only one of three groups at 3Sharp; the other two groups are also chock-full of smart people doing wonderful things with SharePoint and Office. 3Sharp will be moving forward to focus on those opportunities, and the Platform Services Group (which focused on Exchange, OCS, Windows Server, Windows Mobile, and DPM) is closing its doors. My last day here will be tomorrow, Friday, October 16.

I think Ecclesiastes 3:1 says it best; in the King James Version, the poet says, “To every thing there is a season, and a time to every purpose under the heaven.” It has been my privilege to use this blog to talk about Exchange, data protection, and all the other topics I’ve talked about since my first post here five years ago (holy crap, has it really been five years???) With 3Sharp’s gracious permission and blessing, I’ll be duplicating all of the content I’ve posted here over on my personal blog, Devin on Earth. If you have a link or bookmark for this blog or are following me via RSS, please take a moment to update it now (Devin on Earth RSS feed). I’ve got a few new posts cooking, but this will be my last post here.

Thank you to 3Sharp and the best damn co-workers I could ever hope to work with over the years. Thank you, my readers. You all have helped me grow and solidify my skills, and I hope I returned the favor. I look forward to continuing the journey with many of you, even if I’m not sure yet where it will take me.


OneNote 2010 Keeps Your Brains In Your Head

Some months back, those of you who follow me on Twitter (@devinganger) may have noticed a series of teaser Tweets about a project I was working on that involved zombies.

Yes, that’s right, zombies. The RAHR-BRAINS-RAHR shambling undead kind, not the “mystery objects in Active Directory” kind.

Well, now you can see what I was up to.

I was working with long-time fellow 3Sharpie David Gerhardt on creating a series of 60-second vignettes for the upcoming Office 2010 application suite. Each vignette focuses on a single new area of functionality in one of the Office products. I got to work with OneNote 2010.

Here’s where the story gets good.

I got brought into the project somewhat late, after a bunch of initial planning and prep work had been done. The people who had been working on the project had decided that they didn’t want to do the same boring business-related content in their OneNote 2010 vignettes; oh, no! Instead, they hit upon the wonderful idea of using a Zombie Plan as the base document. Now, I don’t really like zombies, but this seemed like a great way to spice up a project!

The rest, as they say, is history. Check out the results (posted both at GetSharp and somewhere out on YouTube) for yourself:

One of the best parts of this project, other than getting a chance to learn about some of the wildly cool stuff the OneNote team is doing to enhance an already wonderful product, was the music selection. We worked a deal with local artist Dave Pezzner to use some of his short music clips for these videos. Dave is immensely talented and provided a wide selection of material, so I enjoyed being able to pick and choose just the right music for each video. It did occur to me how cool it would be if I could use Jonathan Coulton’s fantastic song Re: Your Brains, but somehow I think his people lost my query email. Such is life – and I think Mr. Pezzner’s music provided just the right accompaniment to the Zombie Plan content.



The Great Exchange Blog Migration

Over the next few days, I’ll be adding a large number of posts (just over 250!!!) to the archives of this blog. For a number of congruent reasons, 3Sharp is closing down the Platform Services Group (which focused on Exchange, OCS, Windows Server, Windows Mobile, and DPM) and my last day will be this Friday, October 16 after over six and a half years with them. With 3Sharp’s gracious permission and blessing, I’ll be duplicating all of the content I’ve posted on the 3Sharp blog server over to here. If you have a link or bookmark for my work blog or are following it via RSS, please take a moment to update your settings. Yes, that means there’s going to be more geeky technical Exchange stuff going forward, but hey, with a single blog to focus on, maybe I’ll be more prolific overall!

To head off some of the obvious questions:

  • This is not a horrible thing. 3Sharp and I are parting ways peacefully because it’s the right decision for all of us; they need to focus on SharePoint, and I’m so not a SharePoint person. They’ve done fantastic things for my career and I cherish my time with them, but part of being an adult is knowing when to move on. We’re all agreed that time has come.
  • I’m not quite sure where I’m going to end up yet. I’ve got a couple of irons in the fire and I have high hopes for them, but it’s not time to talk about them. I am going to have at least a week or two of time off, which is good; there are several projects at home in dire need of sustained attention (unburying my home office, for one; fixing a balky Exchange account for another).
  • I’m not going to be a complete shut-in. I’ve got a couple of appointments for the following week, including a Microsoft focus group and a presentation on PowerPoint for Treanna’s English class. I’m open to doing some short-term independent consulting or contracting work as well, so contact me if you know someone who needs some Exchange help.

Thank you to 3Sharp and the best damn co-workers I could ever hope to work with over the years – and a huge thank you to all of my readers, regardless of which blog you’ve been following. The last several years have been a wild ride, and I look forward to continuing the journey with many of you, even if I’m not sure yet where it will take me.


Two karate blessings

These past 14 months that I’ve been a karate student have given me a number of deeply satisfying moments, including the joy of sharing an activity with my daughter. Last Tuesday, however, proved to be an especially fruitful class for both of us.

Starting in September, the YMCA agreed to try out dropping class fees for YMCA members, and as you might imagine, we immediately saw a small but steady wave of new sign-ups for class. As a result, for the first time in a while, we have a good number of new students – white belts. That means we spend a large chunk of class time going back over many of the basic techniques in more detail than we’ve gotten used to. Those of us who are higher belts get to work with the white belts one-on-one during many of these exercises. This proves beneficial to everyone – they get a personal workout, and we get a mirror to more clearly see how well we’ve mastered the basics (or not, as it usually happens).

The first blessing was working with a gentleman who has been in class somewhere around a month. He and I were working through one-step exercises: one person performs a basic punch attack while the other defends, then we switch roles. We do this with seven defenses. As you work through the ranks, the defense techniques get more complicated, but for white belt one-steps, it’s pretty simple. Or so it seems now after a year; they were quite challenging when I first started and I got to re-experience that working with this gentleman. During our practice, he had one of those epiphany moments and what had been a struggle suddenly turned into AHA! with a clarity we both felt. It was an honor to be working with him in that moment.

The second blessing came about indirectly because of some misbehavior. You see, our protocol and customs direct us to pay attention and not engage in side conversations or monkey business when sensei is teaching. (Turns out there are no exceptions for “if you think you already know this” or “if you’re bored.” I checked. Who knew?) Well, several of us – including me and Treanna – weren’t quite paying attention to that one, and the senior student got called on it. I later told Treanna that he’d taken one for the team; we all were equally guilty of inattention. As class was drawing to an end, though, Treanna engaged in another breach of protocol that earned her some gentle ribbing. (She might read this, so I won’t tell you what she did. This time.)

Being a vigilant father and role model, I immediately realized we had what the experts call “a teachable moment” here. So we cracked open our karate notebooks and made a date to come back tonight after dinner, both having read the protocols, and discuss what we’d found:

  • There are three basic sets of protocol in our notebook: white belt (people who’ve just joined), blue belt (9th kyu, or your first belt), and orange belt (7th kyu, or your third belt). After reading them, we decided that they all have the common themes of respect, safety, and responsibility.
  • We think that white belt protocol focuses mainly on what habits I need to become a student (discipline). That is, all of the guidance seems to be directed more at helping the newcomer gain the structures he will need to effectively learn karate.
  • We think that blue belt protocol focuses mainly on how I become a member of the community (identity). This comes after the first belt (typically earned after several months) and the guidance is more focused on becoming aware of and fitting into the dojo structure.
  • Finally, we think that orange belt protocol focuses mainly on how I give back to the community (service). This comes after three belts and around a year of study – a good foundation from which to be able to start learning to progress by helping others.
  • As a final note, we saw that there was no specific protocol for further belts. We speculate that’s because the student in green and brown belts is expected to do the same things she is already doing, just to a greater degree. And once she gets to black belt – that’s a watershed mark, and sensei will teach us what is expected of us on that day at the proper time.

If you’re not in a martial art, that’s probably boring and generic. To Treanna and me, though, it seemed pretty profound, and I think we’ll walk back into class tomorrow with a new-found sense of focus and commitment.


Why Aren’t My Exchange Certificates Validating?

Updated 10/13: Updated the link to the blog article on configuring Squid for Exchange per the request of the author Owen Campbell. Thank you, Owen, for letting me know the location had changed!

By now you should be aware that Microsoft strongly recommends that you publish Exchange 2010/2007 client access servers (and Exchange 2003/2000 front-end servers) to the Internet through a reverse proxy like Microsoft’s Internet Security and Acceleration Server 2006 SP1 (ISA) or the still-in-beta Microsoft Forefront Threat Management Gateway (TMG). There are other reverse proxy products out there, such as the open source Squid (with some helpful guides on how to configure it for EAS, OWA, and Outlook Anywhere), but many of them can only be used to proxy the HTTP-based protocols (for example, the reverse proxy module for the Apache web server) and won’t handle the RPC component of Outlook Anywhere.

When you’re following this recommendation, you keep your Exchange CAS/HT/front-end servers in your private network and place the ISA Server (or other reverse proxy solution) in your perimeter (DMZ) network. In addition to ensuring that your reverse proxy is scrubbing incoming traffic for you, you can also gain another benefit: SSL bridging. SSL bridging is where there are two SSL connections – one between the client machine and the reverse proxy, and a separate connection (often using a different SSL certificate) between the reverse proxy and the Exchange CAS/front-end server. SSL bridging is awesome because it allows you to radically reduce the number of commercial SSL certificates you need to buy. You can use Windows Certificate Services to generate and issue certificates to all of your internal Exchange servers, creating them with all of the Subject Alternate Names that you need and desire, and still have a commercial certificate deployed on your Internet-facing system (nice to avoid certificate issues when you’re dealing with home systems, public kiosks, and mobile devices, no?) that has just the public common namespaces like autodiscover.yourdomain.tld and mail.yourdomain.tld (or whatever you actually use).

In the rest of this article, I’ll be focusing on ISA because, well, I don’t know Squid that well and haven’t actually seen it in use to publish Exchange in a customer environment. Write what you know, right?

One of the most irritating experiences I’ve consistently had when using ISA to publish Exchange securely is getting the certificate configuration on ISA correct. If you all want, I can cover certificate namespaces in another post, because that’s not what I’m talking about here – I actually find that relatively easy to deal with these days. No, what I find annoying about ISA and certificates is getting all of the proper root CA certificates and intermediate CA certificates in place. The process you have to go through varies depending on who you buy your certificates from. There are a couple of CAs, like GoDaddy, that offer inexpensive certificates that do exactly what Exchange needs – but they require an extra bit of configuration to get everything working.

The problem you’ll see is two-fold:

  1. External clients will not be able to connect to Exchange services. This will be inconsistent; some browsers and some Outlook installations (especially those on new Windows installs or well-updated Windows installs) will work fine, while others won’t. You may have big headaches getting mobile devices to work, and the error messages will be cryptic and unhelpful.
  2. While validating your Exchange publishing rules with the Exchange Remote Connectivity Analyzer (ExRCA), you get a validation error on your certificate as shown in Figure 1.

ExRCA can't find the intermediate certificate on your ISA server
Figure 1: Missing intermediate CA certificate validation error in ExRCA

The problem is that some devices don’t have the proper certificate chain in place. Commercial certificates typically have two or three certificates in their signing chain: the root CA certificate, an intermediate CA certificate, and (optionally) an additional intermediate CA certificate. The secondary intermediate CA certificate is typically the source of the problem; it’s configured as a cross-signing certificate, which is intended to help CAs transition old certificates from one CA to another without invalidating the issued certificates. If your certificate was issued by a CA that has these in place, you have to have both intermediate CA certificates in place on your ISA server in the correct certificate stores.

By default, CAs will issue the entire certificate chain to you in a single bundle when they issue your cert. You have to import this bundle on the machine you issued the request from or else you don’t get the private key associated with the certificate. Once you’ve done that, you need to re-export the certificate, with the private key and its entire certificate chain, so that you can import it in ISA. This is important because ISA needs the private key so it can decrypt the SSL session (required for bridging), and ISA needs the full certificate signing chain so that it can hand out missing intermediate certificates to devices that don’t have them (such as Windows Mobile devices that have only the root CA certificates). If the device doesn’t have the right intermediates, can’t download them itself (like Internet Explorer can), and can’t get them from ISA, you’ll get the certificate validation errors.

Here’s what you need to do to fix it:

  • Ensure that your server certificate has been exported with the private key and *all* necessary intermediate and root CA certificates.
  • Import this certificate bundle into your ISA servers. Before you do this, check the computer account’s personal certificate store and make sure any root or intermediate certificates that got accidentally imported there are deleted.
  • Using the Certificate MMC snap-in, validate that the certificate now shows as valid when browsing the certificate on your ISA server, as shown in Figure 2.

Even though the Certificates MMC snap-in shows this certificate as valid, ISA won't serve it out until the ISA Firewall Service is restarted!
Figure 2: A validated server certificate signing chain on ISA Server

  • IMPORTANT STEP: restart the ISA Firewall Service on your ISA server (if you’re using an array, you have to do this on each member; you’ll want to drain the connections before restarting, so it can take a while to complete). Even though the Certificate MMC snap-in validates the certificate, the ISA Firewall only picks up the changes to the certificate chain on startup. This is annoying and stupid and has caused me pain in the past – most recently, with 3Sharp’s own Exchange 2010 deployment (thanks to co-worker and all around swell guy Tim Robichaux for telling me how to get ISA to behave).

Also note that many of the commercial CAs specifically provide downloadable packages of their root CA and intermediate CA certificates. Some of them get really confusing – they have different CAs for different tiers or product lines, so you have to match the server certificate you have with the right CA certificates. GoDaddy’s CA certificate page can be found here.
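One way to sanity-check the chain (and to do the export) from the command line is certutil, which ships with Windows; the file names and password here are placeholders:

```powershell
# Verify the chain on the ISA server, fetching any missing intermediates
# from the URLs embedded in the certificate
certutil -verify -urlfetch server-cert.cer

# On newer Windows builds, export the cert with its private key and full chain
# (otherwise, use the Certificates MMC export wizard as described above)
certutil -p "PfxPassword" -exportPFX My mail.yourdomain.tld exported-cert.pfx
```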


Some Thoughts on FBA (part 2)

As promised, here’s part 2 of my FBA discussion, in which we’ll talk about the interaction of ISA’s forms-based authentication (FBA) feature with Exchange 2010. (See part 1 here.)

Offloading FBA to ISA

As I discussed in part 1, ISA Server includes the option of performing FBA pre-authentication as part of the web listener. You aren’t stuck with FBA – you can use other pre-auth methods too. The thinking behind this is that ISA is the security server sitting in the DMZ, while the Exchange CAS is in the protected network. Why proxy an incoming connection from the Internet into the internal network (even with ISA’s impressive HTTP reverse proxy and screening functionality) if it doesn’t present valid credentials? In this configuration, ISA is configured for FBA while the Exchange 2010/2007 CAS or Exchange 2003 front-end server is configured for Windows Integrated or Basic authentication, as shown in Figure 1 (a figure so nice I’ll re-use it):

Publishing Exchange using FBA on ISA

Figure 1: Publishing Exchange using FBA on ISA

Moving FBA off of ISA

Having ISA (and Threat Management Gateway, the 64-bit successor to ISA 2006) perform pre-auth in this fashion is nice and works cleanly. However, in our Exchange 2010 deployment, we found a couple of problems with it:

The early beta releases of Entourage for EWS wouldn’t work with this configuration; Entourage could never connect. If our users connected to the 3Sharp VPN, bypassing the ISA publishing rules, Entourage would immediately see the Exchange 2010 servers and do its thing. I don’t know if the problem was solved for the final release.

We couldn’t get federated calendar sharing, a new Exchange 2010 feature, to work. Other Exchange 2010 organizations would get errors when trying to connect to our organization. This new calendar sharing feature uses a Windows Live-based central brokering service to avoid the need to provision and manage credentials.

Through some detailed troubleshooting with Microsoft and other Exchange 2010 organizations, we finally figured out that our ISA FBA configuration was causing the problem. The solution was to disable ISA pre-authentication and re-enable FBA on the appropriate virtual directories (OWA and ECP) on our CAS server. Once we did that, not only did federated calendar sharing start working flawlessly, but our Entourage users found their problems had gone away too. For more details of what we did, read on.
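For the record, re-enabling FBA on the virtual directories is a quick EMS change; here’s a sketch, using a lab-style CAS name you’d replace with your own (the change needs an IIS restart to take effect):

```powershell
# Turn FBA back on for the OWA and ECP virtual directories
Set-OwaVirtualDirectory "EX10MB01\owa (Default Web Site)" `
    -FormsAuthentication $true -BasicAuthentication $true
Set-EcpVirtualDirectory "EX10MB01\ecp (Default Web Site)" `
    -FormsAuthentication $true -BasicAuthentication $true

# Restart IIS so the authentication change takes effect
iisreset /noforce
```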

How Calendar Sharing Works in Exchange 2010

If you haven’t seen other descriptions of the federated calendar sharing, here’s a quick primer on how it works. This will help you understand why, if you’re using ISA pre-auth for your Exchange servers, you’ll want to rethink it.

In Exchange 2007, you could share calendar data with other Exchange 2007 organizations. Doing so meant that your CAS servers had to talk to their CAS servers, and the controls around it were not that granular. To do it, you either needed to establish a forest trust and grant permissions to the other forest’s CAS servers (to get detailed per-user free/busy information) or set up a separate user in your forest for the foreign forests to use (to get default per-org free/busy data). You also had to fiddle around with the Autodiscover service connection points and ensure that you had pointers for the foreign Autodiscover SCPs in your own AD (and that the foreign systems had yours). You also had to publish Autodiscover and EWS externally (which you have to do for Outlook Anywhere anyway) and coordinate all your certificate CAs. While this doesn’t sound that bad, you had to do these steps for every single foreign organization you shared with. That adds up, and it’s a poorly documented process – you’ll start at this TechNet topic about the Availability service and have to do a lot of chasing around to figure out how certificates fit in, how to troubleshoot it, and the SCP export and import process.

In Exchange 2010, this gets a lot easier; individual users can send sharing invitations to users in other Exchange 2010 organizations, and you can set up organization relationships with other Exchange 2010 organizations. Microsoft has broken up the process into three pieces:

  1. Establish your organization’s trust relationship with Windows Live. This is a one-time process that must take place before any sharing can take place – and you don’t have to create or manage any service or role accounts. You just have to make sure that you’re using a CA to publish Autodiscover/EWS that Windows Live will trust. (Sorry, there’s no list out there yet, but keep watching the docs on TechNet.) From your Exchange 2010 organization (typically through EMC, although you can do it from EMS) you’ll swap public keys (which are built into your certificates) with Windows Live and identify one or more accepted domains that you will allow to be federated. Needless to say, Autodiscover and EWS must be properly published to the Internet. You also have to add a single DNS record to your public DNS zone, showing that you do have authority over the domain namespace. If you have multiple domains and only specify some of them, beware: users that don’t have provisioned addresses in those specified domains won’t be able to share or receive federated calendar info!
  2. Establish one or more sharing policies. These policies control how much information your users will be able to share with external users through sharing invitations. The setting you pick here defines the maximum level of information that your users can share from their calendars: none, free/busy only, some details, or all details. You can create a single policy for all your users or use multiple policies to provision your users on a more granular basis. You can assign these policies on a per-user basis.
  3. Establish one or more sharing relationships with other organizations. When you want to view availability data of users in other Exchange 2010 organizations, you create an organization relationship with them. Again, you can do this via EMC or EMS. This tells your CAS servers to look up information from the defined namespaces on behalf of your users – contingent, of course, on the foreign organization having established the appropriate permissions in its organization relationships. If the foreign namespace isn’t federated with Windows Live, then you won’t be allowed to establish the relationship.
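
These three steps map onto a handful of EMS cmdlets. Here’s a rough sketch of what that looks like – the thumbprint placeholder and the example.com/partner.com domain names are mine, not from any real deployment, so verify the exact parameters against the cmdlet reference for your build:

```powershell
# Step 1: establish the trust with the Windows Live broker, using a
# certificate issued by a CA that Windows Live trusts
New-FederationTrust -Name "WindowsLiveTrust" -Thumbprint <certificate thumbprint>
Set-FederatedOrganizationIdentifier -AccountNamespace "example.com" `
    -DelegationFederationTrust "WindowsLiveTrust" -Enabled $true

# Step 2: a sharing policy defining the maximum detail users may share
New-SharingPolicy -Name "Calendar Details" `
    -Domains "partner.com: CalendarSharingFreeBusyDetail" -Enabled $true

# Step 3: an organization relationship for org-level free/busy lookups
New-OrganizationRelationship -Name "Partner" -DomainNames "partner.com" `
    -FreeBusyAccessEnabled $true -FreeBusyAccessLevel LimitedDetails
```

These commands only succeed against a live Exchange 2010 organization with Autodiscover and EWS properly published, so treat this as an outline of the flow rather than a copy-and-paste recipe.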

You can read more about these steps in the TechNet documentation and at this TechNet topic (although since TechNet is still in beta, it’s not all in place yet). You should also know that these policies and settings combine with the ACLs on users’ calendar folders, and as is typical in Exchange when there are multiple levels of permission, the most restrictive level wins.

What’s magic about all of this is that at no point along the way, other than the initial setup, do you have to consciously worry about the certificates you’re using. You never have to provide or provision credentials. As you create your policies and sharing relationships with other organizations – and other organizations create them with yours – Windows Live hovers silently in the background, acting as a trusted broker for the initial connections. When your Exchange 2010 organization interacts with another, your CAS servers receive a SAML token from Windows Live. This token is then passed to the foreign Exchange 2010 organization, which can validate it because of its own trust relationship with Windows Live. All this token does is validate that your servers really represent the claimed namespace – Windows Live plays no part in authorization, retrieving the data, or managing the sharing policies.

However, here’s the problem: when my CAS talks to your CAS, they’re using SAML tokens – not user accounts – to authenticate against IIS for EWS calls. ISA Server (and, IIRC, TMG) doesn’t know how to validate these tokens, so the incoming requests can’t authenticate and be passed on to the CAS. The end result is that you can’t get a proper sharing relationship set up and you can’t federate calendar data.

What We Did To Fix It

Once we knew what the problem was, fixing it was easy:

  1. Modify the OWA and ECP virtual directories on all of our Exchange 2010 CAS servers to perform FBA. These are the only virtual directories that permit FBA, so they’re the only two you need to change:

     Set-OwaVirtualDirectory -Identity "CAS-SERVER\owa (Default Web Site)" -BasicAuthentication $true -WindowsAuthentication $false -FormsAuthentication $true

     Set-EcpVirtualDirectory -Identity "CAS-SERVER\ecp (Default Web Site)" -BasicAuthentication $true -WindowsAuthentication $false -FormsAuthentication $true
  2. Modify the Web listener on our ISA server to disable pre-authentication. In our case, we were using a single Web listener for Exchange (and only for Exchange), so it was a simple matter of changing the authentication setting to a value of No Authentication.
  3. Modify each of the ISA publishing rules (ActiveSync, Outlook Anywhere, and OWA):

     On the Authentication tab, select the value No delegation, but client may authenticate directly.

     On the Users tab, remove the value All Authenticated Users and replace it with the value All Users. This is important! If you don’t do this, ISA won’t pass any connections on!

You may also need to take a look at the rest of your Exchange virtual directories and ensure that the authentication settings are valid; many places will allow Basic authentication between ISA and their CAS servers and require NTLM or Windows Integrated from external clients to ISA.
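A quick way to audit those settings is to dump the authentication-related properties of each virtual directory from EMS. These are the standard Exchange 2010 Get-* cmdlets; the wildcard property filter just keeps the output readable:

```powershell
# Review authentication settings across the CAS virtual directories
Get-OwaVirtualDirectory         | Format-List Identity, *Authentication*
Get-EcpVirtualDirectory         | Format-List Identity, *Authentication*
Get-ActiveSyncVirtualDirectory  | Format-List Identity, *Authentication*
Get-WebServicesVirtualDirectory | Format-List Identity, *Authentication*
Get-OutlookAnywhere             | Format-List Identity, *AuthenticationMethod*
```

Run this before and after you change the ISA rules so you can confirm that each virtual directory ends up with the method you expect.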

Calendar sharing and ISA FBA pre-authentication are both wonderful features, and I’m a bit sad that they don’t play well together. I hope that future updates to TMG will resolve this issue and allow TMG to successfully pre-authenticate incoming federated calendar requests.


Stolen Thunder: Outlook for the Mac

I was going to write up a quick post about the release of Entourage for EWS (allowing it to work in native Exchange 2007, and, more importantly, Exchange 2010 environments) and the announcement that Office 2010 for the Mac would have Outlook, not Entourage, but Paul beat me to it, including my whole take on the thing. So go read his.

For those keeping track at home, yes, I still owe you a second post on the Exchange 2010 calendar sharing. I’m working on it! Soon!


Windows 7 RC: The Switch

This weekend, I finally finished getting our desktop computers replaced. They’re older systems that have been running Windows XP for a long time. I’d gotten newer hardware and had started building new systems, intending to put Vista Ultimate SP1 on them (so we could take advantage of domain memberships and Windows Media Center goodness with our Xboxes), but one thing led to another and they’ve been sitting forlornly on a shelf.

I must confess – I’m not a Vista fan. I grudgingly used it as the main OS on my work MacBook Pro for a while, but I never really warmed up to it. SP1, in my opinion, made it barely usable. There were some features I grew to like, but those were offset by a continued annoyance at how many clicks useful features had been buried behind.

So when I finally got busy getting these systems ready – thanks to Steph’s system suddenly forgetting how to talk to USB devices – I decided to use Windows 7 RC instead. What I’d seen of Windows 7 already made me believe that we’d have a much happier time with it. So far, I’d have to say that’s correct. Steph’s new machine was slightly tricky to install – the built-in network interface on the motherboard wasn’t recognized so I had to bootstrap with XP drivers – but otherwise, the whole experience has been flawless.

Want to try Windows 7 for yourself? Get it here.

One of my favorite experiences was migrating our files and settings from the old machines. Windows 7, like Vista and Server 2008 before it, includes the Easy Transfer Wizard. This wizard is the offspring of XP’s Files and Settings Transfer Wizard but has a lot more smarts built in. As a result, I was able to quickly and easily get all our files and settings moved over without a hitch. With the exception of a laptop, we’re now XP free in my house.

Today, I ran across this blog post detailing Seven Windows 7 Tips. There were a couple of them I had already figured out (2, 4, and partial 3), but I’ll be trying out the rest this evening!


EAS: King of Sync?

Seven months or so ago, IBM surprised a bunch of people by announcing that they were licensing Microsoft’s Exchange ActiveSync protocol (EAS) for use with a future version of Lotus Notes. I’m sure there were a few folks who saw it coming, but I cheerfully admit that I was not one of them. After about 30 seconds of thought, though, I realized that it made all kinds of sense. EAS is a well-designed protocol, I am told by my developer friends, and I can certainly attest to the relatively lightweight load it puts on Exchange servers compared to some of the popular alternatives – enough so that BlackBerry add-ons that speak EAS have become a not-unheard-of alternative for many organizations.

So, imagine my surprise when my Linux geek friend Nick told me smugly that he now had a new Palm Pre and was synching it to his Linux-based email system using the Pre’s EAS support. “Oh?” said I, trying to stay casual as I was mentally envisioning the screwed-up mail forwarding schemes he’d put in place to route his email to an Exchange server somewhere. “Did you finally break down and migrate your email to an Exchange system? If not, how’d you do that?”

Nick then proceeded to point me in the direction of Z-Push, which is an elegant little open source PHP-based implementation of EAS. A few minutes of poking around and I became convinced that this was a wicked cool project. I really like how Z-Push is designed:

  • The core PHP module answers incoming requests for the http://server/Microsoft-Server-ActiveSync virtual directory and handles all the protocol-level interactions. I haven’t dug into this deeply, but although it appears it was developed against Apache, folks have managed to get it working on a variety of web servers, including IIS! I’m not clear on whether authentication is handled by the package itself or by the web server. Now that I think about it, I suspect it just proxies your provided credentials on to the appropriate back-end system so that you don’t have to worry about integrating Z-Push with your authentication sources.
  • One or more back-end modules (also written in PHP), which read and write data from various data sources such as your IMAP server, a Maildir file system, or some other source of mail, calendar, or contact information. These back-end modules are run through a differential engine to help cut down on the amount of synching the back-end modules must perform. It looks like the API for these modules is very well thought-out; they obviously want developers to be able to easily write backends to tie in to a wide variety of data sources. You can mix and match multiple backends; for example, get your contact data from one system, your calendar from another, and your email from yet a third system.
  • If you’re running the Zarafa mail server, there’s a separate component that handles all types of data directly from Zarafa, easing your configuration. (Hey – Zarafa and Z-Push…I wonder if Zarafa provides developer resources; if so, way to go, guys!)

You do need to be careful about the back-end modules; because they’re PHP code running on your web server, poor design or bugs can slam your web server. For example, there’s currently a bug in how the IMAP back-end re-scans messages, and the resulting load can create a noticeable impact on an otherwise healthy Apache server with just a handful of users. It’s a good thing that there seems to be a lively and knowledgeable community on the Z-Push forums; they haven’t wasted any time in diagnosing the bug and providing suggested fixes.

Very deeply cool – folks are using Z-Push to provide, for example, an EAS connection point on their Windows Home Server, synching to their Gmail account. I wonder how long it will take for Linux-based “Exchange killers” (other than Zarafa) to wrap this product into their overall packages.

It’s products like this that help reinforce the awareness that EAS – and indirectly, Exchange – is a dominant enough force in the email market to make this kind of project not only useful but viable as an open source effort.


A Deplorable Development in the Tuber Arms Race


No, seriously. _O_ M _G_.

Potato guns/cannons? Pretty wicked cool.

But a potato gatling gun?????

Frakking YES.


Comparing PowerShell Switch Parameters with Boolean Parameters

If you’ve ever taken a look at the help output (or TechNet documentation) for PowerShell cmdlets, you’ve seen that it lists several pieces of information about each of the various parameters the cmdlet can use:

  • The parameter name
  • Whether it is a required or optional parameter
  • The .NET variable type the parameter expects
  • A description of the behavior the parameter controls

Let’s focus on two particular types of parameters, the Switch (System.Management.Automation.SwitchParameter) and the Boolean (System.Boolean). While I never really thought about it much before reading a discussion on an email list earlier, these two parameter types seem to be two ways of doing the same thing. Let me give you a practical example from the Exchange 2007 Management Shell: the New-ExchangeCertificate cmdlet. Table 1 lists an excerpt of its parameter list from the current TechNet article:

Table 1: Selected parameters of the New-ExchangeCertificate cmdlet

Parameter: GenerateRequest (System.Management.Automation.SwitchParameter)

Use this parameter to specify the type of certificate object to create.

By default, this parameter will create a self-signed certificate in the local computer certificate store.

To create a certificate request for a PKI certificate (PKCS #10) in the local request store, set this parameter to $True.

Parameter: PrivateKeyExportable (System.Boolean)

Use this parameter to specify whether the resulting certificate will have an exportable private key.

By default, all certificate requests and certificates created by this cmdlet will not allow the private key to be exported.

You must understand that if you cannot export the private key, the certificate itself cannot be exported and imported.

Set this parameter to $true to allow private key exporting from the resulting certificate.

On quick examination, both parameters control either/or behavior. So why the two different types? The mailing list discussion I referenced earlier pointed out the difference:

Boolean parameters control properties on the objects manipulated by the cmdlets. Switch parameters control behavior of the cmdlets themselves.

So in our example, a digital certificate carries a property, part of the certificate itself, that marks whether the associated private key can be exported in the future. That property goes along with the certificate, independent of the management interface or tool used. For that property, then, PowerShell uses the Boolean type for the -PrivateKeyExportable parameter.

On the other hand, the -GenerateRequest parameter controls the behavior of the cmdlet. With this parameter specified, the cmdlet creates a certificate request with all of the specified properties. If this parameter isn’t present, the cmdlet creates a self-signed certificate with all of the specified properties. The resulting object (CSR or certificate) carries no sign of which option was chosen – you could just as easily have generated that CSR with another tool on the same machine.
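
You can see the calling-convention difference with a toy function. New-Widget here is entirely made up (it’s not a real cmdlet), but the parameter declarations are standard PowerShell:

```powershell
# A toy advanced function contrasting the two parameter types
function New-Widget {
    param(
        [switch] $GenerateRequest,   # cmdlet behavior: meaningful by presence/absence
        [bool]   $Exportable         # object property: must be given an explicit value
    )
    if ($GenerateRequest) { "Request; Exportable=$Exportable" }
    else                  { "Widget; Exportable=$Exportable" }
}

New-Widget -Exportable $true                    # switch omitted, so it's $false
New-Widget -GenerateRequest -Exportable $false  # switch named, so it's $true
```

Note that the Boolean parameter always demands a $true/$false argument, while the switch is simply named (or omitted) – which is exactly the property-versus-behavior split described above.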

I hope this helps draw the distinction. Granted, it’s one I hadn’t thought much about before today, but now that I have, it’s nice to know that there’s yet another sign of intelligence and forethought in the PowerShell architecture.


Some Thoughts on FBA (part 1)

It’s funny how topics tend to come in clumps. Take the current example: forms-based authentication (FBA) in Exchange.

An FBA Overview

FBA was introduced in Exchange Server 2003 as a new authentication method for Outlook Web Access. It requires OWA to be published using SSL – which was not yet common practice at that point in time – and in turn allowed credentials to be sent a single time using plain-text form fields. It’s taken a while for people to get used to, but FBA has definitely become an accepted practice for Exchange deployments, and it’s a popular way to publish OWA for Exchange 2003, Exchange 2007, and the forthcoming Exchange 2010.

In fact, FBA is so successful that the ISA Server group got into the mix by including FBA pre-authentication in ISA Server. With this model, instead of configuring Exchange for FBA you instead configure your ISA server to present the FBA screen. Once the user logs in, ISA takes the credentials and submits them to the Exchange 2003 front-end server or Exchange 2007 (or 2010) Client Access Server using the appropriately configured authentication method (Windows Integrated or Basic). In Exchange 2007 and 2010, this allows each separate virtual directory (OWA, Exchange ActiveSync, RPC proxy, Exchange Web Services, Autodiscover, Unified Messaging, and the new Exchange 2010 Exchange Control Panel) to have its own authentication settings, while ISA transparently mediates them for remote users. Plus, ISA pre-authenticates those connections – only connections with valid credentials ever get passed on to your squishy Exchange servers – as shown in Figure 1:

Publishing Exchange using FBA on ISA

Figure 1: Publishing Exchange using FBA on ISA

Now that you know more about how FBA, Exchange, and ISA can interact, let me show you one mondo cool thing today. In a later post, we’ll have an architectural discussion for your future Exchange 2010 deployments.

The Cool Thing: Kay Sellenrode’s FBA Editor

On Exchange servers, it is possible to modify both the OWA themes and the FBA page (although you should check about the supportability of doing so). Likewise, it is also possible to modify the FBA page on ISA Server 2006. This is a nice feature as it helps companies integrate the OWA experience into the overall look and feel of the rest of their Web presence. Making these changes on Exchange servers is a somewhat well-documented process. Doing them on ISA is a bit more arcane.

Fellow Exchange 2007 MCM Kay Sellenrode has produced a free tool to simplify the process of modifying the ISA 2006 FBA – named, aptly enough, the FBA Editor. You can find the tool, as well as a YouTube video demo of how to use it, from his blog. While I’ve not had the opportunity to modify the ISA FBA form myself, I’ve heard plenty of horror stories about doing so – and Kay’s tool is a very cool, useful community contribution.

In the next day or two (edit: or more), we’ll move on to part 2 of our FBA discussion – deciding when and where you might want to use ISA’s FBA instead of Exchange’s.


A Modest Thought on “Don’t Ask/Don’t Tell”

With the recent activity surrounding the hearing for Army Lieutenant Dan Choi, an Iraq War veteran and Arab linguist who is also openly gay, I had a thought occur to me and I wanted to share it with y’all.

In my (limited) experience with the military, there’s still quite a bit of public resistance to the idea of allowing gays to openly serve. There are many reasons that one may take this stance, ranging from deeply principled to deeply homophobic and covering all points in between. If the objection comes from deeply held religious or moral convictions, I choose to respectfully disagree with you, but I understand and value the fact that you do have your beliefs on this issue.

From my anecdotal experience, though, the people who are usually the loudest about this issue (“I ain’t lettin’ no queer next to me with a gun; I’ll shoot his ass first!” is a representative sample I’ve heard recently) tend to be strongly grounded in the “mindlessly homophobic” rationale. This isn’t just confined to the military, though. I have plenty of memories of the charming functional illiterates at my rural high school indignantly asking me if I was gay, harassing me for my presumed homosexuality, and making not-so-subtle meant-to-be-overheard comments about my lack of “real manliness”. These were the people who would always get in your face and confront you on your disgusting life choices — as long (of course) as you weren’t big enough or mean enough to be perceived as capable of handling the violence they always threatened to dish out.

Let’s take a representative example of this kind of person — we’ll call him Bubba. (Don’t assume that it’s only guys who do this; I’ve heard plenty of women do it too.) Down at the bottom of it all, though, these guys and gals share one common flawed assumption, deeply rooted in a raging sense of entitlement:

If that person is gay, they want to have sex with me.

I think the appropriate response here is a quote from Megan Fox’s character of Mikaela:

Oh God, I can’t even tell you how much I’m not your “little bunny.”

Bubba has committed the logical fallacy of assuming that just because a gay man is sexually attracted to some men, he must like all men — including, necessarily, Bubba. In other words, the defining characteristic of a gay man’s sexuality, according to Bubba, is the orientation; once a man is gay, he automatically must like all men, even men who are otherwise unattractive. Bubba, sad to say, thinks that being gay overrides any sense of taste or choice or other form of preference.

Bubba is a dumbshit. Bubba is, however, all too common — I’ve heard plenty of people independently reproduce this exact line of reasoning.

My theory is that, for the Bubbas of the world, the objection to knowingly associating with someone who is gay comes down to projection of their own inner characteristics: Bubba wants to nail pretty much every female, or, in the event that he has some self-restraint, is deluded enough to think that every woman wants to have sex with him. Being a paragon of self-control and discernment, Bubba is naturally unable to conceive of someone who could in theory be attracted to him but isn’t.

What Bubba objects to, I believe, is not the gay person’s lack of taste and self control, but his own. It’s the same as the liar who in turn is convinced that everyone lies to him and is unable to see a truthful response without looking for the “real” answer, or the person who continually cheats others in big and small ways and in turn expects everyone to cheat her.

Do I think that everyone who objects to military service for gays and lesbians falls into this trap? Not at all. I just tend to think that the more vocal someone is about it, the more likely they are to have this motivation simmering at the bottom of it all. People who suffer from this attitude tend to have the crudest, most violent responses to homosexuality; they tend to be the loudest slanderers, the meanest and most illogical protesters. They argue from a well-deserved fear, because if everyone was just like them, all the sick, dark scenarios they fantasize would of course happen.

God knows that my gay and lesbian friends and acquaintances are no saints. Some of them are people I don’t willingly spend time around — but then, there are plenty of straight people I don’t want to spend a lot of time around either. Frankly, I’ve found that brushing off determined advances from a guy who likes me is no better and no worse than those from a gal who likes me — orientation having less to do with it than does their fundamental ability to hear and accept, “Thanks, but I’m not interested.”

Mind you, typically the Bubbas of the world are at heart hypocrites, because almost all of them have absolutely no problem with lesbians. Oh, no. They’re in favor of lesbians. Mainly because, along with all their other stinking thinking, they are under the delusion that those lesbians still secretly want them — so they’ll be able to score with the lesbian and her girlfriend at the same time. Because of this, it’s easy to spot a Bubba and identify his objection for what it really is.


And now, after the long break

Okay, okay…so updating my blog server took longer than I’d anticipated. Getting the old material out of Community Server into BlogML format turned out to be a lot easier than I’d thought and finding the time to get it all imported into WordPress wasn’t much harder. What tripped me up was getting all of the redirection for the old, legacy URLs working.

Community Server and WordPress store their content in very different ways, and so they generate the URLs for blog posts using different algorithms. I know there are a fairish number of links out there in blog land to various posts I’ve done, and for vanity’s sake, I’d rather not orphan those links to the dreaded 404 Not Found error. The solution was to find the time to buy the latest edition of O’Reilly’s Apache Cookbook and bone up on the Apache web server directives.

So, I think all the relevant old URLs should now automatically redirect to their proper new places — there’s not much point in keeping all the old posts if you don’t do this. The nice thing, for those of you who are web geeks, is that I’m issuing permanent redirections so Google and other search engines will update their links as they re-trawl my web site, thus pointing to the new URLs. For those of you who are humans, you might want to take a minute to check your bookmarks and make sure they’re updated to the new links.

One note: some commenter data didn’t make the import successfully. I could probably dig into it and find out why, but frankly, at this point, it’s more important to get the site (and Steph’s blog) back up and running. So, sorry — if you were one of those commenters, I apologize. Future comments should be preserved properly, and I really don’t see moving away from WordPress anytime soon.

If you’re reading this, then the necessary DNS updates have finished rolling out and we’re back live to the world. Thanks for your patience!


You, too, can Master Exchange

One of the biggest criticisms I’ve seen of the MCM program, even when it first was announced, was the cost – at a list price of $18,500 for the actual MCM program, discounting the travel, lodging, food, and opportunity cost of lost revenue, a lot of people are firmly convinced that the program is way too expensive for anybody but the bigger shops.

This discussion has of course gone back and forth within the Exchange community. I think part of the pushback comes from the fact that MCM is the next evolution of the Exchange Ranger program, which felt very elitist and exclusive (and by many accounts was originally designed to be, back when it was a Microsoft-only program created to give Microsoft consultants and engineers a higher degree of training so they could better resolve their own customers’ issues). Starting off with that kind of background leaves a lot of lingering impressions, and the Exchange community has long memories. Paul has a great discussion of his point of view as a new MCM instructor and shares his take on the “is it worth it?” question.

Another reason for pushback is the economy. The typical argument is, “I can’t afford to take this time right now.” Let’s take a ballpark figure here, aimed at the coming May 4 rotation, just to have some idea of the kinds of numbers folks are thinking about:

  • Imagine a consultant working a 40-hour week. Her bosses would like her to be 90% (36 hours) billable. Given two weeks of vacation a year, that’s 50 weeks at 36 hours a week.
  • We’ll also imagine that she’s able to bill out at $100/hour. This brings her minimum annual revenue to $180,000 and sets her opportunity cost (lost revenue) at $3,600/week.
  • We’ll assume she has the prerequisites nailed (MCITP Enterprise Messaging, the additional AD exam for either Windows 2003 or Windows 2008, and the field experience). No extra cost there (otherwise it’s $150/test, or $600 total).
  • Let’s say her plane tickets are $700 for round-trip to Redmond and back.
  • And we’ll say that she needs to stay at a hotel, checking in Sunday May 3rd, checking out Sunday May 24th, at a daily rate of $200.
  • Let’s also assume she’ll need $75 a day for meals.

That works out to $18,500 (class fee) + $700 (plane) + 21 x $275 (hotel + meals) + 3 x $3,600 (opportunity cost of work she won’t be doing) — $18,500 + $700 + $5,775 + $10,800 = a whopping total of $35,775. That, many people argue, is far too much for what they get out of the course – it represents nearly 10 weeks of her regular revenue, or approximately 1/5th of her annual revenue.

If those numbers were the final answer, they’d be right.

However, Paul has some great talking points in his post; although he focuses on the non-economic piece, I’d like to tie some of those back in to hard numbers.

  • The level of training. I don’t care how well you know Exchange. You will walk out of this class knowing a lot more and you will be immediately able to take advantage of that knowledge to the betterment of your customers. Plus, you will have ongoing access to some of the best Exchange people in the world. I don’t know a single consultant out there who can work on a problem that is stumping them for hours or days and be able to consistently bill every single hour they spend showing no results. Most of us end up eating time, which shows up in the bottom line. For the sake of argument, let’s say that our consultant ends up spending 30% instead of 10% of her time working on issues that she can’t directly bill for because of things like this. That drops her opportunity cost from $3,600/week to $2,520, or $7,560 for the three weeks (and it means she’s only got an annual revenue of $126,000). If she can reduce that non-billable time, she can increase her efficiency and get more real billable work done in the same calendar period. We’ll say she can gain back 10% of that lost time and get up to only 20% lost time, or 32 hours a week.
  • The demonstration of competence. This is a huge competitive advantage for two reasons. First, it helps you land work you may not have been able to land before. This is great for keeping your pipeline full – always a major challenge in a rough economy. Second, it allows you to raise your billing rates. Okay, true, maybe you can’t raise your billing rates for all the work that you do for all of your customers, but even some work at a higher rate directly translates to your pocket book. Let’s say she can bill 25% of those 32 hours at $150/hour. That turns her week’s take into (8 x $150) + (24 x $100) = $1,200 + $2,400 = $3,600. That modest gain in billing rates right there compensates for the extra 10% loss of billing hours and pays for itself every 3-4 weeks.

Let’s take another look at those overall numbers. This time, let’s rerun our ballpark using numbers that more closely match the reality of the students at the classes:

  • There’s a 30% discount on the class, so she pays only $12,950 (not $18,500).
  • We’ll keep the $700 for plane tickets.
  • From above, we know that her real lost opportunity cost is more like $7,560 (3 x $2,520 and not the $10,800 worst case).
  • She can get shared apartment housing with other students right close to campus for more like $67 a night (three bedrooms).
  • Food expenses are more typically averaged out to $40 per day. You can, of course, break the bank on this during the weekends, but during the days you don’t really have time.

This puts the cost of her rotation at $12,950 + $700 + (21 x $107) + $7,560, or $23,457. That’s only 66% – two-thirds – of the worst-case cost we came up with above. With her adjusted annual revenue of $126,000, this is only 19%, or just less than 1/5th of her annual revenue.
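For the curious, the revised ballpark above can be sanity-checked with a few lines of Python (the dollar figures are just the assumptions from this post):

```python
class_fee = 18_500 * 0.70           # 30% discount on the $18,500 list price -> $12,950
plane = 700                         # plane tickets
room_and_board = 21 * (67 + 40)     # 21 nights: $67 shared housing + $40/day food -> $2,247
lost_billing = 3 * 2_520            # three weeks of adjusted opportunity cost -> $7,560

rotation_cost = class_fee + plane + room_and_board + lost_billing
annual_revenue = 50 * 2_520         # 50 working weeks at the adjusted weekly take

print(rotation_cost)                                 # -> 23457.0
print(round(rotation_cost / annual_revenue * 100))   # -> 19 (percent of annual revenue)
```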

And it doesn’t stop there. Armed with the data points I gave above, let’s see how this works out for the future and when the benefits from the rotation pay back.

Over the year, our hypothetical consultant, working only a 40-hour work week (I know, you can stop laughing at me now) brings in 50 x $2,520 = $126,000.  The MCM rotation represents 19% of her revenue for the year before costs.

However, let’s figure out her earning potential in that same year: (47 x $3,600) – ($12,950 + $700 + $2,247) = $153,303. That’s roughly a 22% increase.

Will these numbers make sense for everyone? No, and I’m not trying to argue that they do. What I am trying to point out, though, is that the business justification for going to the rotation may actually make sense once you sit down and work out the numbers. Think about your current projects and how changes to hours and billing rates may improve your bottom line. Think about work you haven’t gotten or been unwilling to pursue because you or the customer felt it was out of your league. Take some time to play with the numbers and see if this makes sense for you.

If it does, or if you have any further questions, let me know.


Fixing interoperability problems between OCS 2007 R2 Public Internet Connectivity and AOL IM

One of the cool things you can do with OCS is connect your internal organization to various public IM clouds (MSN/Windows Live, Yahoo!, and AOL) using the Public Internet Connectivity, or PIC, feature. As you might imagine, though, PIC involves lots of fiddly bits that all have to work just right in order for there to be a seamless user experience. Recently, lots of people deploying OCS 2007 R2 have been reporting problems with PIC – specifically, in getting connectivity to the AOL IM cloud working properly.


It turns out that the problem has to do with changes made to the default SSL cipher suite negotiation in Windows Server 2008. If you deployed your OCS 2007 R2 Edge roles on Windows Server 2003, you’d be fine; if you used Windows Server 2008, you’d see problems.

When an HTTP client and server connect (and most IM protocols use HTTPS or HTTP + TLS as a firewall-friendly transport[1]), one of the first things they do is negotiate the specific suite of cryptographic algorithms that will be used for that session. The cipher suite includes three components:

  • Key exchange method – this is the algorithm that defines the way the two endpoints will agree upon a shared symmetric key for the session. This session key will later be used to encrypt the contents of the session, so it’s important for it to be secure. The key should never be passed in cleartext – and since the session isn’t encrypted yet, there has to be some mechanism for agreeing on it safely. Some of the potential methods allow digital signatures, providing an extra level of confidence against a man-in-the-middle attack. There are two main choices: RSA public/private key certificates, and Diffie-Hellman key agreement (useful when there’s no prior communication or shared set of trusted certificates between the endpoints).
  • Session cipher – this is the cipher that will be used to encrypt all of the session data. A symmetric cipher is faster to process for both ends and reduces CPU overhead, but is in principle more vulnerable to discovery and attack (both sides have to end up with the same key, which must somehow be agreed upon over the wire). The next choice is between a streaming cipher and a cipher block chaining (CBC) cipher. For streaming, you have RC4 (40 and 128-bit variants). For CBC, you can choose RC2 (40-bit), DES (40-bit or 56-bit), 3DES (168-bit), IDEA (128-bit), or Fortezza (96-bit). You can also choose none, but that’s not terribly secure.
  • Message digest algorithm – the message digest is a hash function used to create the Hashed Message Authentication Code (HMAC), which helps verify the integrity of each message in the session. It’s also used to guard against an attacker trying to replay this stream in the future and fool the server into giving up information it shouldn’t. In SSL 3.0, this is just a MAC. There are three choices: null (none), MD5 (128-bit), and SHA-1 (160-bit).
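As an aside, those three moving parts are visible right in the suite names. Here’s a quick Python sketch (illustrating the naming convention only, nothing OCS-specific) that splits a classic suite name into its components; note that it doesn’t handle the _Pxxx curve suffixes you’ll see on the newer SChannel ECDHE suites:

```python
def parse_suite(name: str) -> dict:
    """Split a classic TLS suite name (e.g. TLS_RSA_WITH_RC4_128_MD5)
    into key exchange method, session cipher, and message digest."""
    kex, cipher_and_mac = name.split("_WITH_")
    cipher, mac = cipher_and_mac.rsplit("_", 1)
    return {
        "key_exchange": kex.removeprefix("TLS_"),
        "session_cipher": cipher,
        "digest": mac,
    }

print(parse_suite("TLS_RSA_WITH_RC4_128_MD5"))
# {'key_exchange': 'RSA', 'session_cipher': 'RC4_128', 'digest': 'MD5'}
```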


Windows Server 2003 uses the following suites for TLS 1.0/SSL 3.0 connections by default:

  1. TLS_RSA_WITH_RC4_128_MD5 (RSA certificate key exchange, RC4 streaming session cipher with 128-bit key, and 128-bit MD5 HMAC; a safe, legacy choice of protocols, although definitely aging in today’s environment)
  2. TLS_RSA_WITH_RC4_128_SHA (RSA certificate key exchange, RC4 streaming session cipher with 128-bit key, and 160-bit SHA-1 HMAC; a bit stronger than the above, thanks to SHA-1 being not quite as brittle as MD5 yet)
  3. TLS_RSA_WITH_3DES_EDE_CBC_SHA (you can work out the rest)

Let’s contrast that with Windows Server 2008, which cleans out some cruft but adds support for quite a few new suites:

  1. TLS_RSA_WITH_AES_128_CBC_SHA (Using AES 128-bit as a CBC session cipher)
  2. TLS_RSA_WITH_AES_256_CBC_SHA (Using AES 256-bit as a CBC session cipher)
  5. TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA_P256 (AES 128-bit, ECDHE over the P-256 curve)
  6. TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA_P384 (AES 128-bit, P-384 curve)
  7. TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA_P521 (AES 128-bit, P-521 curve)
  8. TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA_P256 (AES 256-bit, P-256 curve)
  9. TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA_P384 (AES 256-bit, P-384 curve)
  10. TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA_P521 (AES 256-bit, P-521 curve)
  11. TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA_P256 (you can work out the rest)
  20. TLS_RSA_WITH_RC4_128_MD5
  21. SSL_CK_RC4_128_WITH_MD5 (a legacy SSL 2.0 cipher-kind)
  22. SSL_CK_DES_192_EDE3_CBC_WITH_MD5 (a legacy SSL 2.0 cipher-kind)

Okay, so take a look at line 20 in the second list – see how TLS_RSA_WITH_RC4_128_MD5 got moved from first to darned near worst? Yeah, well, that’s because AES and SHA are the strongest algorithms of their type likely to be commonly supported, so Windows 2008 moves the suites that use them to the top of the default offer. Unfortunately, this change causes problems with PIC connectivity to AOL.
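Order matters here: the negotiation settles on the first suite in the server’s preference list that the client also supports. If you want to see a priority-ordered suite list on whatever machine is handy, Python’s ssl module exposes one (this queries the local OpenSSL build, not Windows SChannel, so it’s purely an illustration of what an ordered offer looks like):

```python
import ssl

ctx = ssl.create_default_context()
# get_ciphers() returns the suites in the order they would be offered,
# strongest-preferred first
for suite in ctx.get_ciphers()[:5]:
    print(suite["name"], "-", suite["protocol"])
```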


Now that we know what the problem is, what can we do about it? For the fix, check out Scott Oseychik’s post here.

[1] HTTPS is really Hop Through Tightened Perimeters Simply – aka the Universal Firewall Traversal Protocol.


Defend THIS

Iowa’s Supreme Court handed down a fairly shocking unanimous decision this morning striking down the definition of marriage as “one man, one woman” and upholding a 2007 Polk County ruling.

If you follow along my blog, you probably already know that I think this is a good thing, so I won’t comment extensively on it here. However, there’s one section in the article I linked to above that just reeks of so much stupidity that I have to respond:

Maggie Gallagher, president of the National Organization for Marriage, a New Jersey group, said “once again, the most undemocratic branch of government is being used to advance an agenda the majority of Americans reject.”

“Marriage means a husband and wife. That’s not discrimination, that’s common sense,” she said in a press release. “Even in states like Vermont, where they are pushing this issue through legislatures, gay marriage advocates are totally unwilling to let the people decide these issues directly.”

Really? Ms. Gallagher, did you really just stoop to the “30 billion flies eat shit” argument to justify your position? You lose.

Okay, to unpack that for anyone who didn’t follow that train of thought:

Ms. Gallagher is relying on the tactic of telling people “the government is ignoring your opinion.” By telling people this, she’s playing on a fundamental ignorance of the design and intent of the American system of government: the tired old myth that America = democracy = the will of the people = only tolerating Christian values. Let’s see what our founding fathers had to say about that:

It is, that in a democracy, the people meet and exercise the government in person; in a republic, they assemble and administer it by their representatives and agents. A democracy, consequently, will be confined to a small spot. A republic may be extended over a large region.
Federalist No. 14

Democracy is two wolves and a lamb voting on what to have for lunch. Liberty is a well-armed lamb contesting the vote!
Benjamin Franklin

Remember, democracy never lasts long. It soon wastes, exhausts, and murders itself.
John Adams

It cannot be emphasized too strongly or too often that this great nation was founded, not by religionists, but by Christians; not on religions, but on the Gospel of Jesus Christ. For this very reason peoples of other faiths have been afforded asylum, prosperity, and freedom of worship here.
Patrick Henry

I know no safe depository of the ultimate powers of the society but the people themselves, (A)nd if we think them not enlightened enough to exercise their control with a wholesome discretion, the remedy is not to take it from them, but to inform their discretion by education. This is the true corrective of abuses of constitutional power.
Thomas Jefferson

I have always thought that all men should be free; but if any should be slaves, it should first be those who desire it for themselves, and secondly those who desire it for others. Whenever I hear anyone arguing for slavery, I feel a strong impulse to see it tried on them personally.
Abraham Lincoln

I could go on all day and find tons of quotes, but the key threads that I’m weaving here are these:

America is not and was never intended to be a pure democracy. Remember the phrase “the tyranny of the majority”? Basically, it’s great to be in a democracy if you’re part of the 51%; not so much if you’re part of the 49%. Our democratic functions are not set up to allow citizens to directly decide upon laws, legislation, and the handling of day-to-day governance; they are set up to elect responsible leaders who do that for us, and to give us mechanisms to take those leaders out of the picture when they fail to discharge their responsibilities. That’s the “democratic republic.” Remember the Pledge of Allegiance? “I pledge allegiance to the flag of the United States of America and to the Republic for which it stands…”

By electing responsible leaders (including legislators and judges), we are in fact giving those leaders the mandate to act in the fashion they see as best. If we don’t like what they do with that mandate, then we’d better pay attention and give them feedback. You can’t leave the people out of the equation, but you can’t directly hand them the keys to the kingdom, either. That’s why we have checks and balances, including the judicial branch of government. It is their job to say, “No, these laws are causing harm and cannot be used, even though they are popularly supported.”  The exercise of democracy should never come at the expense of depriving others of their liberties. How long did popular opinion support and uphold slavery, and how much damage did that do to our country (and continue to do today)? How long was racism enshrined in our laws? Sexism? If you’re counting upon the will of the people to make the correct choice every time, you’ve got a pretty grim track record of results.

America was designed to be a refuge for all religious belief systems, not just a narrow stripe of fundamentalist Christianity. This includes religious systems that directly challenge basic beliefs of Christianity. It was never designed to be a system that promoted Christianity over all others, even though the majority of founders were Christians, espoused Christian ideals, and wanted to see this country continue to be based on a set of morals not completely incompatible with Christianity. When push came to shove, most of the founders espoused liberty and freedom *over* Christian principles as a guiding principle for the government. They reasoned, correctly, that Christianity could flourish in an environment where liberty was pursued, but the reverse was not true (as had been graphically demonstrated). That is, the proper place for Christian values is on the individual level and in our relationships with others, not hard-wiring our specific interpretations into our functions of government. Religion + bureaucracy + power = corruption of values and lessening of liberty.

Let me leave you with this final challenge if you’re still thinking that it’s your religious duty to enshrine your notion of marriage into the laws of our nation:

Show me a comprehensive case in Scripture for collective Christian political activism. Remember the specific accusations the Pharisees made against Jesus to Pontius Pilate and his answers to Pilate in return. Remember his response to the commercialism in the Temple, how his fiercest criticisms were reserved for those who used religion to gain and maintain power. And then take a look at the agenda and funding of groups like National Organization for Marriage and Focus on the Family who are leading this fight to preserve marriage (whatever that really means) and tell me how they’re not gaining power and money from their collective activism.


ExMon released (no joke!)

If you’re tempted to think this is an April Fool’s Day joke, no worries – this is the real deal. Yesterday, Microsoft published the Exchange 2007-aware version of Exchange Server User Monitor (ExMon) for download.

“ExMon?” you ask. “What’s that?” I’m happy to explain!

ExMon is a tool that gives you a real-time look inside your Exchange servers to help find out what kind of impact your MAPI clients are having on the system. That’s right – it’s a way to monitor MAPI connections. (Sorry; it doesn’t monitor WebDAV, POP3, IMAP, SMTP, OWA, EAS, or EWS.) With this release, you can now monitor the following versions of Exchange:

  • Exchange Server 2007 SP1+
  • Exchange Server 2003 SP1+
  • Exchange 2000 Server SP2+

You can find out more about it from TechNet.

Even though the release date isn’t a celebration of April 1st, there is currently a bit of an unintentional joke, as shown by the current screenshot:


Note that while the Date Published is March 31, the Version is only 06.05.7543 – which is the Exchange 2003 version published in 2005, as shown below:


So, for now, hold off trying to download and use it. I’ll update this post when the error is fixed.


A Path of Nines

Nine months ago I stepped outside of my comfort zone and started a month of karate at the local YMCA. I didn’t expect to renew for a second month. It turns out that I love it. I’ve gotten to the point that I start dreaming about the things I’m doing, which is scary on one level and very cool on others. At any rate, I’ve had a lot of thoughts that need more time to flesh out and probably will only interest my fellow students, but I do want to share a few correspondences I’ve noticed lately between karate and the number nine.

  • There are nine belts, or kyus, between rank beginner and black belt in my school of karate (which is part of the All-Okinawan Shorin-Ryu Matsumura Karate and Kobudo Federation, or OSMKKF). As of tonight, I have passed three of them. That makes me 7th kyu — what you might call orange belt, except that we don’t actually use the orange belt (or even stripes on the belts); we just have three blue belt kyus, three green, and three brown. I like this because it helps minimize rivalry between students.
  • The blue belt kyus use the same basic kata, with what look to be minor differences for each kyu — mostly in the blocking techniques you demonstrate. The footwork, though, is the same, and it requires you to face the nine cardinal points of the compass (the normal eight plus the center position for the beginning and end of the kata). All too often we learn the specific steps of the kata and don’t stop to think about how the overall pattern looks or rhythm flows. That’s the kind of stuff I’ve started dreaming about, and man, it is cool!
  • I have learned to examine the first kata at a whole new level with each additional kyu, and I have been told that this will continue. So the very first kata they teach us unpacks to at least nine separate layers! No wonder it takes years to really master this stuff! Some students make the mistake of thinking they’ve learned everything they need to know from the earlier levels; I’ve already had at least one case of figuring out how a current technique I was mastering applied to an earlier technique, making both of them stronger as a result.
  • In a typical Tuesday evening workout, I will practice various katas an average of nine times. This typically includes polishing the kata I will next be testing for and learning the basics of the next kata. There are days this does not feel like it is enough — and that would be right. So we practice at home too; in fact, there are certain parts that I find myself practicing at work as I walk back and forth from my office to the kitchen or to co-workers’ offices. (Apparently I look really funny walking through the lobby practicing punches.)
  • For my next kyu, I start to fold in weapons work (which is the kobudo part; karate is technically only bare-hand work). I will first work with the bo staff, which is six feet or 72 inches tall — nine times eight. I’m tremendously excited to be working with the bo; somewhere in my head, the iconic definition or avatar of martial arts got associated with being a bad-ass with the staff, so now I feel like I’m finally stepping into the heart of what it means to be a martial artist. Intellectually, I realize this is silly, but it still feels true.

Don’t worry; I’m not trying to seriously assert that the number nine somehow has some sort of mystic foothold in karate (that would be number ten, which in Japanese is ju, and controls our workouts). I just noticed these and was amused. What’s been more awe-inspiring has been noticing the changes in the last nine months:

  • I’ve continued to lose weight. Granted, I’ve not experienced the same dramatic pace as I did in the first month, but it’s still a slow and steady drop. This is really cool given some of the interruptions and stressors I’ve had during these nine months that have wreaked havoc with my karate attendance.
  • My overall muscle tone has improved. You probably wouldn’t notice the difference, but I certainly do. Certain actions are a lot less effort than they used to be, and there is visible muscle definition amongst the remaining layers of pudge.
  • My endurance has increased. Right now I’m at that point where if I miss a week and a half of karate, I definitely feel it, but if I attend regularly I can make it through the workouts and not feel completely beat up. More importantly, I’m better able to keep up as the speed of some of the workouts increases; if I slow down it’s to perfect technique, not because I can’t do it.
  • My reflexes have improved. This has been the startling one for me, because as long as I can remember my reflexes have sucked. I’m still no Chuck Norris or Bruce Lee, but the other day I knocked a glass tumbler off the counter and caught it without looking directly at it. Whoa!

By some counts, these last nine months have gotten me a third of the way to black belt. I don’t feel that way; I feel that they’ve set my feet on a path that I’ll still be walking for years to come. I’m not worried about belts or kyus; that’s sensei’s job to track, not mine. I just have to get through each workout, each kata, each set of one-steps, each class having given my best and learned everything I can. The rest will take care of itself. I’d never have caught that glass if I’d been trying to learn it as a trick, but by focusing on each step while I’m at it, I’ve gotten my body — as out of shape as it still is — to a point where I can do things I’ve never been able to do before. And that, friends, is magic.


Two CCR White Papers from Missy

This actually happened last week, but I’ve been remiss in getting it posted (sorry, Missy!). Missy recently completed two Exchange 2007 whitepapers, both centered around the CCR story.

The first one, High Availability Choices for Exchange Server 2007: Continuous Cluster Replication or Single Copy Clustering, provides a thorough overview of the questions and issues to be considered by companies who are looking for Exchange 2007 availability:

  • Large mailbox support. In my experience, this is a major driver for Exchange 2007 migrations and for looking at CCR. Exchange 2007’s I/O performance improvements have shifted the balance: the Exchange store, which was always I/O bound, is now sometimes capacity bound instead, depending on the configuration – and providing that capacity can be extremely expensive in SCC configurations (which typically rely on SANs). CCR offers some other benefits that Missy outlines.
  • Points of failure. With SCC, you still only have a single copy of the data – making that data (and that SAN frame) a SPOF. There are mitigation steps you can take, but those are all expensive. When it comes to losing your Exchange databases, storage issues are the #1 cause.
  • Database replication. Missy takes a good look at what replication means, how it affects your environment, and why CCR offers a best-of-breed solution for Exchange database replication. She also tackles the religious issue of why SAN-based availability solutions aren’t necessarily the best solution – and why people need to re-examine the question of whether Exchange-based availability features are the right way to go.
  • RTO and RPO. These scary TLAs are popping up all over the place lately, but you really need to understand them in order to have a good handle on what your organization’s exact needs are – and which solution is going to be the best fit for you.
  • Hardware and storage considerations. Years of cluster-based availability solutions have given many Exchange administrators and consultants a blind spot when it comes to how Exchange should be provisioned and designed. These solutions have limited some of the flexibility that you may need to consider in the current economic environment.
  • Cost. Talk about money and you always get people’s attention. Missy details several areas of hidden cost in Exchange availability and shows how CCR helps address many of these issues.
  • Management. It’s not enough to design and deploy your highly available Exchange solution – if you don’t manage and monitor it, and have good operational policies and procedures, your investment will be wasted. Missy talks about several realms of management.

I really recommend this paper for anyone who is interested in Exchange availability. It’s a cogent walkthrough of the major discussion points centering around the availability debate.

Missy’s second paper, Continuous Cluster Replication and Direct Attached Storage: High Availability without Breaking the Bank, directly addresses one of the key assumptions underneath CCR – that DAS can be a sufficient solution. Years of Exchange experience have slowly moved organizations away from DAS to SAN, especially when high availability is a requirement – and many people now write off DAS solutions out of habit, without realizing that Exchange 2007 has in fact enabled a major switch in the art of Exchange storage design.

In order to address this topic, Missy takes a great look at the history of Exchange storage and the technological factors that led to the initial storage design decisions and the slow move to SAN solutions. These legacy decisions continue to box today’s Exchange organizations into a corner with unfortunate consequences – unless something breaks demand for SAN storage.

Missy then moves into how Exchange 2007 and CCR make it possible to use DAS, outlining the multiple benefits of doing so (not just cost – but there’s a good discussion of the money factor, too).

Both papers are outstanding; I highly recommend them.


Haz Firewall, Want Cheezburger

Although Windows Server 2008 offers an impressive built-in firewall, in some cases we Exchange administrators don’t want to have to deal with it. Maybe you are building a demo to show a customer, or a lab environment to reproduce an issue. Maybe you just want to get Exchange installed now and will loop back to fine-tune firewall issues later. Maybe you have some other firewall product you’d rather use. Maybe, even, you don’t believe in defense in depth – or don’t think a server-level firewall is useful.

Whatever the reason, you’ve decided to disable the Windows 2008 firewall for an Exchange 2007 server. It turns out that there is a right way to do it and a wrong way to do it.

The wrong way


This seems pretty intuitive to long-term Exchange administrators who are used to Windows Server 2003. The problem is, the Windows Firewall service in Windows 2008 has been re-engineered and works a bit differently. It now includes the concept of profiles, a feature built into the networking stack at a low level that enables Windows to identify the network you’re on and apply the appropriate set of configuration (such as enabling or disabling firewall rules and services).

Because this functionality is now tied into the network stack, disabling the Windows Firewall service and shutting it off can actually lead to all sorts of interesting and hard-to-fix errors.

The right way

Doing it the right way involves taking advantage of those network profiles.

Method 1 (GUI):

  1. Open the Windows Firewall with Advanced Security console (Start, Administrative Tools, Windows Firewall with Advanced Security).
  2. In the Overview pane, click Windows Firewall Properties.
  3. For each network profile (Domain network, Public network, Private network) that the server or image will be operating in, set Firewall state to Off. Typically, setting the Domain network profile is sufficient for an Exchange server, unless it’s an Edge Transport box.
  4. Once you’ve set all the desired profiles, click OK.
  5. Close the Windows Firewall with Advanced Security console.


Method 2 (CLI):

  1. Open your favorite CLI interface: CMD.EXE or PowerShell.
  2. Type the following command: netsh advfirewall set profiles state off

    Fill in profiles with one of the following values:

    • DomainProfile — the Domain network profile. Typically the profile needed for all Exchange servers except Edge Transport.
    • PrivateProfile — the Private network profile. Typically the profile you’ll need for Edge Transport servers if the perimeter network has been identified as a private network.
    • PublicProfile — the Public network profile. Typically the profile you’ll need for Edge Transport servers if the perimeter network has been identified as a public network (which is what I’d recommend).
    • CurrentProfile — the currently selected network profile
    • AllProfiles — all network profiles
  3. Close the command prompt.
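If you’re scripting lab or demo builds, that same netsh command is easy to automate. Here’s a small, hypothetical Python helper (the function names are mine, not part of any toolkit) that just assembles and runs the command from step 2 above; it obviously only does something useful on a Windows box:

```python
import subprocess

VALID_PROFILES = {"domainprofile", "privateprofile", "publicprofile",
                  "currentprofile", "allprofiles"}

def firewall_cmd(profile: str, state: str = "off") -> list:
    """Build the netsh advfirewall command line for the given profile."""
    if profile.lower() not in VALID_PROFILES:
        raise ValueError("unknown profile: %s" % profile)
    if state not in ("on", "off"):
        raise ValueError("state must be 'on' or 'off': %s" % state)
    return ["netsh", "advfirewall", "set", profile.lower(), "state", state]

def set_firewall(profile: str, state: str = "off") -> None:
    """Run the command; requires a Windows host with netsh available."""
    subprocess.run(firewall_cmd(profile, state), check=True)

# e.g. set_firewall("DomainProfile")  # typical for non-Edge Exchange roles
```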


And there you have it – the right way to disable the Windows 2008 firewall for Exchange Server 2007, complete with FAIL/LOLcats.


Off-topic: trying to refurbish a Mac mini

Full details on my home blog.


Wanted: Your broken Mac mini

Life is full of synchronicity; most of the time, this is through the workings of chance, but every now and then we get to help it along. Two ships may pass in the night, but how often does the helmsman take a hand?

You’re the owner of a no-longer-working original PowerPC Mac mini. This awesome little piece of technology once rocked your world, but slowly, you moved on to bigger and better things. Maybe you upgraded; maybe it stopped working. This Mac mini, though, still hangs around, complete with a working SuperDrive. You may feel a bit of guilt over not passing it on or getting it refurbished.

I’m the owner of a proud original PowerPC Mac mini that is having problems with its SuperDrive. My mini wants to be a member of the OS X 10.4 generation but can’t boot from the internal drive, nor can it seem to find an external USB drive as a boot device.

If you’ve got a spare original Mac mini (or a drive that fits) and you’re willing to part with it inexpensively, please drop me a line. No piña coladas or getting caught in the rain required.


A long-overdue status update

So, you haven’t seen a lot of me on the blog lately. The sad part is that I have three or four blog posts in various states of completion, I just seem to have very little time these days to work on it. I think part of it is that ever since my MCM Exchange 2007 class last October, I felt like I had a big burden of unfinished business on my shoulders.

Happily, that’s not the case anymore. Yesterday I retook and passed the lab, and received word that I have officially earned the coveted Microsoft Certified Master | Exchange 2007 certification. I’ll take this moment to express my utmost relief; rest assured I’ve got plenty more to say about it in an upcoming blog post, but it’ll have to wait.

I’ve also been re-awarded as an Exchange MVP — 3 years, wow! — and continue going full-bore with that. I have become very deeply aware that my continued presence in the Microsoft communities is in large part due to the fantastic caliber of the people who are involved in them. A friend once mentioned the “open source community” as if it were a singular community, and I had to laugh; in my experience, it’s anything but. Consider the following examples:

  • KDE vs. Gnome
  • Linux vs. BSD
  • Linux distro vs. Linux distro
  • Sun Java vs. IBM Java
  • Tomcat vs. other Java frameworks
  • Sendmail vs. Postfix vs. Exim
  • Bernstein vs. everyone else
  • Stallman/FSF vs. everyone else

I made the initial mental leap from “Unix IT pro who knows Windows” to being a “Windows IT pro who knows Unix” because of the management challenges I saw Active Directory and Group Policy addressing, but I stayed for the people. Including people like you, reading my blog.

On that note, since I know many of you started reading me because of seeing me at conferences: I will not be at Spring Connections this year. I know, right? Anyway, it’s all for the best; things are shaping up to be busy and it will be nice to have one year when I’m not flying to Orlando. This is even more awesome because I will be at Tech-Ed, giving both a breakout session and an Interactive Theater session. More details as we get closer. I’ve also got a great project that I’m working on that I hope to be able to announce later.

Oh, hey, have you seen 3Sharp’s new podcasting site, built entirely on the Podcasting Kit for SharePoint that we were the primary developers for? I’ve got a few podcasts in the works…so if you’ve got any questions or ideas of short subjects you’d like me to talk about, let me know!

Alright, folks — it’s late and my Xbox is calling me! (My wife and kids probably want a word with me too.)


Outlook Performance Goodness

Microsoft has recently released a pair of Outlook 2007 updates (okay, technically, they’re updates for Outlook 2007 with SP1 applied) that you might want to look at installing sooner rather than later. These two updates are together being billed as the “February cumulative update” at KB 968009, which has some interesting verbiage about how many of the fixes were originally slated to be in Outlook 2007 SP2:

The fix list for the February CU may not be identical to the fix list for SP2, but for the purposes of this article, the February CU fixes are referred to synonymously with the fixes for SP2. Also, when Office suite SP2 releases, there will not be a specific package that targets only Outlook.

Let’s start with the small one, KB 697688. This one fixes some issues with keyboard shortcuts, custom forms, and embedded Web browser controls.

Okay, with that out of the way, let’s move on to juicy KB 961752, an unlooked-for roll-up containing a delectable selection of fixes. Highlights include:

  1. Stability fixes
  2. SharePoint/Outlook integration
  3. Multiple mailbox handling behavior
  4. Responsiveness

From reports that I’ve seen, users who have applied these two patches are reporting significantly better response times in Outlook 2007 cached mode even when attaching to large mailboxes or mailboxes with folders that contain many items — traditionally, two scenarios that caused a lot of problems for Outlook because of the way the .ost stored local data. They’ve also reported that the “corrupted data file” problem that many people have complained about (close Outlook, it takes forever to shut down so writes to the .ost don’t fully happen) seems to have gone away.

Note that you may have an awkward moment after starting Outlook for the first time after applying these updates: you’re going to get a dialog something like this:


“Wait a minute,” you might say. “First use? Where’s my data?” Chillax [1]. It’s there — but in order to do the magic, Outlook is changing the structure of the existing .ost file. This is a one-time operation and it can take a little bit of time, depending on how much data you’ve got stuffed away in there. (I’ve currently got on the order of 2GB or so, so you can draw your own rough estimates; I suspect it also depends on the number/depth of folders, items per folder, number of attachments, etc.)

Once the re-order is done, though, you get all the benefits. Faster startup, quicker shut-down, and generally more responsive performance overall. This is seriously crisp stuff, folks — I opened my Deleted Items folder (I hardly ever look in there, I just occasionally nuke it from orbit) and SNAP! everything was there as close to instantly as I can measure. No waiting for 3-5 (or 10, or 20) seconds for the view to build.


[1] A mash-up of “chill” and “relax”. This is my new favorite word.


Live from Facebook: 25 Random Things about Devin

Over on my Facebook profile, I got tagged by about five people with this whole “25 Things About Me” meme. I finally decided to respond. Am I glad I did — I’ve been having a great amount of fun with the ensuing comment thread. In fact, it’s so much fun, I figured I’d repost it here. (If you read this and my Facebook profile, you’ve already seen this; feel free to skip it.)

  1. When I was a child, I once typed out over 3/4 of my favorite book so I could have my own copy to read. I couldn’t afford to buy one at the time.
  2. I learned to read when I was four; we moved to a new house and couldn’t get TV reception, so my parents got rid of our TV. The next year, I figured out people got paid to write books. I’ve wanted to be a published writer ever since.
  3. I enjoy karate, now that I’m taking it. I know that martial arts the world over teach a variety of armed and unarmed techniques, but I’ve always secretly thought of the bo staff when I think of martial arts. Now that I get to work with the staff, I *feel* like a martial artist.
  4. I love peppermint ice cream, caramel, and Girl Scout thin mint cookies. However, my favorite dessert is chocolate chip cookies. My wife makes a killer variant: orange chocolate chip cookies. YUM!
  5. I’m a sucker for all things feline, except for some pure-bred Persians and Siamese that are too stupid to breathe. When I was a kid, I got to play with a white tiger cub; white tigers are my favorite cat. I like some breeds of dogs, but not the small yappy ones.
  6. I think that forgiveness isn’t a “get out of jail free card.” It’s a process designed to help victims divest themselves of the continuing karmic damage they inflict upon themselves and let go of any claims of vengeance or retaliation. True forgiveness does not absolve the offender of consequences, but it does open the door to mercy and breaks the cycle of anger and revenge.
  7. I hated high school. I’d home schooled for five years, then moved to a new town and started public high school. So much wasted time and energy, especially on social hierarchy games! I wonder if I would feel the same if I’d been one of the popular kids…but we’ll never know.
  8. After my son was born and my daughter was a toddler, we found out that my family has a history of autism. If you ever wondered why I was so weird, you can thank Asperger’s Syndrome. However, that only gets 65% of the blame; the rest is all me.
  9. My first trip outside North America was a speaking gig at a roadshow in Lisbon, Portugal. I’ve always wanted to visit Portugal; they were the home of some of history’s greatest navigators and explorers.
  10. I have discovered that I enjoy speaking in public; the bigger audience, the better. However, I typically dread question and answer sessions, even though I’ve been told I do them well.
  11. The first time I saw Steph I knew I would marry her, even before we were introduced. The universe gave an audible and tactile “click” that was impossible for me to miss! This is why I was able to not get all nervous around her.
  12. As I have gotten older, I have become more concerned with uncovering the structures and principles that events work on, and less concerned with arguing the particular details of a given situation. Getting axle-wrapped about details is a great way to keep anything from being resolved. Boring!
  13. My favorite food? The Cheesecake Factory’s Spicy Cashew Chicken. Screw dessert — I gorge myself on the chicken. Yum! If we’re talking homemade, then it’s the pizza that my wife makes, based on a modification of my mother’s recipe.
  14. When format allows, I always leave blank lines between paragraphs. I also insist on serial commas in lists unless the style guide says otherwise. (Real writers can do whatever the style guide says, or rewrite to avoid the points they disagree with.) The sentence “I’d like to thank my parents, God and Ayn Rand” gives me all the justification I need.
  15. My daily work involves Microsoft Windows and Exchange, and I’ve just been recognized for my third year as a Microsoft Exchange MVP. If you’d told me ten years ago I’d not still be working with Unix, Sendmail, and Postfix, I’d have laughed at you.
  16. I don’t like kids, mainly because I hated being one. Adults always talked down to me and condescended in other ways. As a result, I try to never talk down to kids myself. I find they are better listeners than most adults and respond well to more advanced instructions than most adults would believe.
  17. Before the Internet got popular, I used to run an electronic BBS. I had no games and the only files I had for download were basic utilities; I specialized in message forums no one else in my area would touch. My BBS was always busy, and over 80% of my callers came from out of state.
  18. To me, the difference between a “friend” and an “acquaintance” is how much work is put into the relationship. You can’t really be a friend if both sides don’t work to make it happen.
  19. I’ve been sporting a shaved head since college, when my best friend’s dad talked me into it. Although I occasionally grow my hair out, I’m resigned to shaving my head for the rest of my life. Nothing else really works well.
  20. I have a simple philosophy about shopping: do your research and buy a well-made item that will last (even if it’s expensive) instead of buying for price and having to replace it multiple times. Your time is worth more than your money.
  21. I can’t stand thrift stores, second-hand shopping, or even most garage sales. There’s a psychic residue to most of the items there that is very unpalatable. I’ve had to learn to let Steph do her bargain-hunting thing, but she knows how to find the good items.
  22. I was never a Cub Scout, but once I got into Boy Scouts, I was a den chief to both a Cub Scout den and a Webelos Scout den. My favorite part of Boy Scouts, though, was being on the ceremonial Native American dance team for our Order of the Arrow lodge.
  23. I’ve really enjoyed the Halo universe, both video games and novels. In fact, I’d like to build a set of Mjolnir armor, and one of my friends and I are planning to build a working Warthog. Geek!
  24. I often have insomnia. Part of it is that I resent the time I lose to sleep. It feels like dying a little bit, especially because it can be a struggle to wake up again in the morning.
  25. I want my wife to be a ninja. I mean, who wouldn’t?

If you’re reading this for the first time, consider yourself tagged. Your turn! Post a link to your blog (or wherever you post your “25 Things” list) in the comments so I can go read it too.


The Facebook Experiment

Warning: the following post may not make much sense. If it does, it may sound bitter and arrogant. I apologize in advance; that’s not my goal here.

I finally got a critical mass of people dragging me into Facebook, so I’ve been doing it over the last couple of months. I entered into it with a simple rule: as long as I knew someone or could figure out what context we shared, I’d accept friend requests. I only send friend requests to people I want to be in contact with, but if someone wants to keep up with me, I’ll happily approve the request. (Remember, Asperger’s Syndrome; I may be able to fake looking like I’m socially adjusted, but underneath, I’m not.)

This resolve has been sorely tested by a number of requests I’ve gotten from people from my high school days. I am not one of those people who thinks that high school was the best time of my life. Far from it, actually. Now that I understand about Asperger’s, I have been able to go back and identify what I was doing to contribute to my misery during those years — and boy was I — but I also know that there were a bunch of people who were happy to help. I was happy to leave that town, happy to never go back, and happy — for the most part — to not try to get back into some mythical BFF state with these people that I never shared in the first place. There are some exceptions; you should know who you are. If you aren’t sure and want to know, send me a private message and ask. Don’t ask, though, unless you’re ready to be told that you’re not.

Does this mean I want people to stop requesting? No. We’re adults. (At least, we should be.) Life moves on. I’m not that same person, and I’m willing to bet you’re not either. Let’s try to get to know one another as we are now, without presuming some deeper level of friendship than really exists. It’ll be a lot easier for everyone that way, and probably a lot more fun.


Three observations, two confessions, and an apology

Observation The First: only paying a touch over $2/gallon for gas feels positively sinful.

Observation The Second: one way to survive Seattle winters is to occasionally say “screw it”, roll the window down in the car, and let the cold wet air in while pretending it’s an 80-degree summer day with blue skies.

Observation The Third: if you’re cutting back on caffeine intake and you’re down to approximately two bottles/cans of Coke a day, you really should not have your morning bottle of Coke during the morning commute and then thoughtlessly purchase and drink a 20oz. mocha latte (one of the approximately two a year I have) during the latter portion of that same commute. Hot damn, I can levitate right now.

Confession The First: I really like Katy Perry’s Hot N Cold. Sure, the song is pop glitter, but it’s fun pop glitter, and it makes me squee like a little girl every time I turn it on.

Confession The Second: When I say that I don’t dance, what I really mean is that I don’t dance standing up. I’ll dance at my car or desk, but I’ll do it in a way that’s deliberately bad and frightening, because I like to mess with people. You’d be amazed how well a properly timed desk dance can clear out your office of annoying project managers and co-workers.

Apology*: For the residents and fellow commuters along Avondale road between 8:20 and 8:23am who heard and saw a red Ford Focus (with a Decepticon icon on the hood) blasting Hot N Cold out the driver’s window at high volumes, I plead guilty. That was me in my overcaffeinated, car-dancing bliss. Same to the folks along 124th, especially at the 124th/Woodinville-Redmond Road intersection, who were treated to the same, only with Cyndi Lauper’s Into the Nightlife, from 8:31 to 8:34am.

* I don’t know if this is a real apology, because I can’t guarantee I won’t do it again. At least I’m honest.


GetSharp lives!

If you’ve only interacted with 3Sharp through me, Paul, Missy, and Tim, then you’ve missed a whole key aspect of the talent we’ve got here at 3Sharp. Our group (Infrastructure Solution Group or ISG, formerly known as the Platform Group) is just a small part of what goes on around here.

GetSharp is 3Sharp’s personal implementation of PKS, the Podcasting Kit for SharePoint. PKS was the brainchild of a fairish bit of the rest of the company. Quite simply, it’s podcasting for SharePoint — think something like YouTube, mixed into SharePoint with a whole lot of awesome (like the ability to use Live IDs). When I saw the first demo of what we were doing with GetSharp, I was blown away. I’m happy to have uploaded the videocast series on Exchange 2007 we did for Windows IT Pro, and I’ve got a series on virtualization I’ll be working on when I get back to work next week.


This is what I do for fun???

For the last three weeks, I’ve been on vacation.

Much of that vacation has consisted of quality Xbox 360 time, both by myself (Call of Duty: World at War for Christmas) and with Steph and Chris. (Alaric had a friend over today and we had a nice six-way Halo 3 match; the adults totally dominated the kids in team deathmatch, I might add.) However, I’d also slated doing some much-needed rebuilds on my network infrastructure here at home: migrating off of Exchange to a hosted email solution (still Exchange, just not a server *I* have to maintain), decommissioning old servers, renumbering my network, building a new firewall that can gracefully handle multiple Xbox 360s, building some new servers, and sorting through the tons of computer crap I have. All of this activity was aimed at reducing my footprint in the back room so we can unbury my desk and move Alaric’s turtle into the back room where she should have a quieter and warmer existence.

Yeah, well. Best laid plans. I’ve gotten a surprising amount of stuff done, even if I have taken over the dining room table for the week. (Gotta have room to sort out all that computer gear, y’know. Who knew I had that much cool stuff?) My progress, however, has slowed quite a bit the last couple of days as I ran into some unexpected network issues I had to work my butt off to resolve.

Except that now I think I just figured out the two causes. Combined, they made my “new” network totally unusable and masked each other in all sorts of weird and wonderful ways. It was rather reminiscent, actually, of the MCM hands-on lab. I guess I’ve been practicing for my retest.

Ah, well. I still have one day of freedom left before I head back to work. I might actually be ready to go.


Self-imposed silence

I really hate that there are things I want — nay, need — to write about right this minute and I can’t because it would be revealing too much about sensitive stuff that isn’t my place to talk about.

What good is a personal blog if you can’t use it to rant and kvetch and cry and work out dealing with the bumps the Universe throws your way?


Christmas 2008 Newsletter

This page is a placeholder for our soon-to-be-written 2008 Christmas newsletter. Check back soon!


So long, Exchange!

This holiday weekend, I finally accomplished a task I’ve been meaning to do for a while: I got rid of my email.

More precisely, I’m no longer hosting my email domains on my own server here in the house like I have been for the past eight years. I’ve finally made the switch to hosted email. With all of the free email domains out there, this may have been an easy choice, but Steph and I are not your run-of-the-mill email consumers. We’ve gotten used to having the calendar and scheduling features of Exchange and Outlook here at the house, so it was pretty clear I needed a hosted Exchange solution.

Last night, I flipped the switch — I double-checked that all of our domains and email addresses were configured and then changed our MX records to point at the new service. (An MX record (Mail eXchanger) is an entry in DNS that tells the rest of the Internet who to send your email to. Almost all email systems use at least one of these records.) As a result, some time early this morning all mail to us started going to our mailboxes on the new provider. Over the next couple of days, we’ll be transferring our existing messages up to the new mailboxes and shutting down my trusty Exchange 2007 server here at the house.
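For the curious, the change amounts to a couple of lines in a DNS zone file. Here’s a rough sketch of what MX records look like — the domain and mail host names below are illustrative placeholders, not my actual configuration:

```
; Hypothetical zone-file fragment for example.com
; The number is the preference value; lower values are tried first,
; so mail1 is the primary and mail2 is the backup.
example.com.    3600  IN  MX  10 mail1.hostedprovider.example.
example.com.    3600  IN  MX  20 mail2.hostedprovider.example.
```

You can see any domain’s published MX records yourself with `nslookup -type=MX example.com` (or `dig MX example.com` on Unix-like systems).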

Actually, I’ll be recycling the hardware — it’s one of my beefier servers, and I can use it to do some other tasks around here and upgrade some of my low-end machines. This helps me consolidate servers, shut down more boxes, get rid of more clutter, and lower my bills. It also means that we no longer need to have our current DSL line and static IP address; we can explore newer, faster options that will better fit a household with multiple Xbox consoles. It also helps de-clutter my time; running a healthy email server takes time that I wasn’t putting into it here. (I shamefully confess that I went to run backups on my email a couple of weeks ago and discovered, rather to my horror, that it had been over a year since I’d last done so!) Now I don’t have to worry about those tasks. I also don’t have to worry about spam; the hosted service includes a really decent anti-spam service (the same one we use at work).

Still, after a decade of being responsible for managing my own email services (eight years running here at home, plus another couple of years being a sysadmin at an ISP), it feels rather strange to no longer be able to put my hands on the physical box hosting my email.


Oh, Alaric Ganger, no!

For the most part, we have good kids. We get compliments on their public behavior all the time. Usually, this is because we’ve taught the kids to hold it together until they get back home, where we give them a bit more leeway. However, what it also means is that when they mess up in public, they tend to do it large and with style. Witness Alaric’s current example: getting caught taking his new pocketknife to school. In Washington, this is a bad thing, although not as bad as it could have been in another school district that has a “zero tolerance” policy. At least our district gives the principal leeway on how they handle things. Alaric’s a very lucky boy; he avoided suspension and has instead been spending the weekend writing two reports.

I’ve copied the first of the reports below, because I’m really proud of how it came out. He turned this out after three drafts and a day of research and notecard activities. All of the wording is his own.

The Importance of Responsibility
This is a report on the importance of responsibility and making responsible choices.

I found the definition of “responsibility” in the Merriam-Webster Online Dictionary. One definition of responsibility is, “the quality or state of being responsible.” I think the definition that fits responsible is, “able to choose for oneself between right and wrong.”
This is what responsibility means to me: the habit of making the right choice. When I’m responsible, I make the right choices most of the time. I don’t make bad choices so bad stuff doesn’t happen.

Responsibility is important because it will result in affecting the people around you in a good way. Responsible behavior helps keep bad effects away; for example, if I eat too much junk food and not enough healthy food, I’ll get fat. One of the results of being responsible is making people pleased.
When you make choices, consequences come along. The type of consequence usually depends on whether the choice is good or bad. For example: I take my sister’s doll; I get grounded for the rest of the day. Some consequences will affect other people around you.

A responsible choice is a choice that is good. An example of a responsible choice is: I want to bring a toy to school, and it’s not toys from home day, so I don’t. When you make a bad choice, you get a bad consequence; when you make a good choice, you get a good consequence.
Making good choices is not always easy, so we need guidelines from other people. There are many ways to tell if a choice is responsible. Here are some of them: parents, teachers and principals, or the student hand book.

Now you know the importance of responsibility and making responsible choices.

He still is working on the one about the law on weapons at school.


After slight technical difficulties and a simple but complicated operation, the patient has recovered

I don’t remember which day it was two weeks ago that I discovered that my web server was no longer accepting queries, but I do remember the distinct annoyance I felt when I got home from work, made my way through the back room to the computer rack, logged on to the management console, and saw that the server was powered off.

That’s nothing to the irritation I felt ten seconds later when it wouldn’t power back up.

This server is an oddity in my collection; it’s not the standard desktop/server size for motherboards and power supplies. As it turned out through some testing a couple of days later (when I found some free moments), the power supply had given up the ghost. Unfortunately, I didn’t have another power supply that would fit into that particular case, nor could I locate one in the local area.

So, today I got to do a motherboard-ectomy. For the uninitiated, that’s where you take the contents of one computer (in this case, the motherboard and the disk drive) and transplant them into a new case. It’s a relatively straightforward process, just long and (usually) cramped in a couple of places. This process was actually simpler than normal — the web server motherboard is so much smaller than a regular one that the usual cramped space problems didn’t happen — but was complicated in other ways by the need to jury-rig a couple of things in place (very minor tweaks; the delay was more from finding the right pieces to do things as close to The Right Way as possible).

However, it all went well — and as you can now see, the web server is now back up and running. With some of the changes coming in my network in the next month, this is a temporary measure, but at least Steph and I can blog again.


What happens in Vegas gets blogged

Update (11/15/08 1240PST): Fixed the URLs in the links to point to the actual decks. Sorry!

Time this year has flown! Hard to believe that I’ve just finished up my last conference for the year — Exchange Connections Fall at the fabulous Mandalay Bay resort and conference center in Las Vegas. This was my second trip to Vegas this year (the first was in May for the Exchange/DPM session at MMS), and I really prefer the city in November: far fewer people, much more pleasant temperatures.

I gave the following three sessions yesterday:

  • (EXC16) The Collaboration Blender — This session is adapted from the Outlook and SharePoint: Playing Well Together article I wrote for Windows IT Pro magazine (subscription required). Exchange and SharePoint are both touted as collaboration solutions and have some overlapping functionality, so this session explores some of the overlaps and compares and contrasts what each is good for. (In other words, we spend a lot of time talking about Exchange public folders.) And where does Outlook fit into this mess? There’s even a handy summary table!
  • (EXC17) Exchange Virtualization — As I confessed to my attendees, this session was a gamble that paid off. Back when I proposed the topic, there was no official statement of Microsoft support for Exchange virtualization (no, “Don’t!” doesn’t really count). I guessed that by the time November rolled around, Hyper-V would have finally shipped and they’d have shifted that stance — and I was right. Because I focus more on the Hyper-V side of things, I invited VMware to send a representative to the session to present their take on the subject. The resulting session was very good, and I learned a bunch of things too.
  • (EXC18) Exchange Protection using Data Protection Manager — Although a lot of the content here was the same material that I’ve already presented this year (what, 4-5 times now?), I did have to make some changes thanks to the brilliant curve ball that Jason Buffington and his crew in the DPM team threw me. You see, Connections now has all Microsoft speakers speak on one day (imaginatively named “Microsoft Day” for some reason), and that day was Tuesday. While Jason couldn’t be here, Karandeep Anand (who is the DPM bomb!) was — and I’ve been trading decks and VMs and material back and forth with Jason and Karandeep for over a year now. Rather than give a less brilliant copy of the session Karandeep had already done, I added in some new material focusing on the internals of the Exchange store and how that affects Exchange protection, removed the demo, and really attacked the topic from the Exchange side of things. I think it worked. Either that or it was people staying to get free copies of the DPM book that my publisher thoughtfully provided.

A lot of my fellow speakers dread speaking on the last day, but I’ve found that I’ve come to enjoy it. Sure, you have smaller attendance numbers — but the people who are there (especially if you get lucky enough to do the last session on the last day) are the people who really want to be there. I also encourage questions from the audience during the presentation, with the caveat that if they’re too detailed or going to be answered later I’ll defer them; I like the interactivity. I usually learn something from my attendees, which makes it a good time for everyone.

Back to the grind. I know I’ve been way too quiet on the blogfront lately, and I promise, I’ve got some fresh new content in the works. First, though, I have to catch up on the paying work. For some reason, my corporate overlords seem to expect me to do billable work too, not just speak and blog. Ah, well. At least I didn’t get RickRolled on my birthday!


This is just the start

Despite the fact that I’m now counting the hours until the election is officially over — election season has been *so long* and so incredibly divisive from all angles — I’m aware of the fact (and even somewhat excited by the fact) that no matter how it turns out, it’ll be one for the history books. The hope, of course, is that it’s one for the history books for the right reasons.

However, there’s a very disturbing trend I’ve seen here and there, both online and in interactions with various people, and that trend is this: if we can just make it to election day and choose The Right Candidates, we’ll be fine. All the wrong-thinking people will be shown the error of their ways during the next four years, the economy will be fixed, energy problems will be solved, and the world will be saved.

This, my friends, is magical thinking, and it’s precisely the sort of thinking that has led us to this point in history. It is the manifestation of the human wish for easy, single-solution problems and for immediate fixes. It is the failure of courage to realize that we’re in this for the long haul; if we really want to make a difference, we can’t just get riled up for a couple of months, go vote, and then go home and wait for everything to just suddenly get better. It is the willingness to ignore or excuse the problems and deficiencies in Your Guys while fixating on those of the Other Guys. It is a failure of accountability and responsibility, the unwillingness to take meaningful action when confronted by broken promises and campaign lies.

Let me be clear, even though many will say that I’m being a defeatist: no single election will save the world, let alone America. There are too many people out there focused clearly on their goals (good or bad) who are willing to expend the type of energy and effort every day that some people have lately discovered in this election process. If you’re one of those people and you’re ready to step back down to a comfortable life after election day — you’re ready to end the last few months of reading and research and activism and just get back to “normal” — then here is my advice to you:

Don’t vote.

No, seriously.

If you aren’t willing to sustain that level of energy and drive forward with it for at least the next four years — to check up on your elected officials and make sure that they’re doing the things they said they would, that they’re being the responsible leaders they claimed to be, that they’re working towards the ends that you put them in office to work for — then don’t vote to put them in office. In order to do the job you want them to do, they need your support not just to get into office, but to actually do the work. If you’re not going to be there to support them, that’s like pledging to a charity and never writing the check; it makes you feel good, but there’s no real impact to you.

America’s problems will not get fixed overnight. They will not get fixed during a single Presidential term. They will not magically go away. Now that you’re up off the sidelines, if you really want things to get better, you have to stay up and active. Your elected officials cannot and will not make the changes themselves; experience has shown us this time and time again, regardless of party or affiliation.

If you haven’t already, go vote. But when you vote, realize that this is just the start. You’re in this for the long haul. If you’re not prepared to make that commitment, you’ve got some thinking to do.


Masters update: short form

I have gotten a lot of email from people who wished me well and wanted to find out the status of my recent Masters rotation. I’m working on a bigger write-up, but here’s the short form:

  1. It was intense. I had a ton of fun, I learned more than I thought I could, and I met a lot of great people who are scary smart. I was also exhausted after it was all said and done.
  2. It was worth the money. Paul breaks it down for you here, and I agree with every data point. I think it’s fair to ignore the cost of travel, because no matter where you go for training, you’d have to pay it.
  3. I’m not yet a Master. There are four tests you have to pass, and I only nailed three of them. I’m now patiently awaiting word on retests, as are several of my classmates, and then we’ll knock ‘em dead.

Thank you, everyone, for your well-wishes and questions. As I said, I’m working on a longer post or series of posts, but those will be a bit delayed in coming because I want to run them by the folks at the MCM/MCA program to make sure that I’m not talking about stuff I shouldn’t be.


…does this mean I’ll get an apprentice?

For the next three weeks, I’ll be squirreled away in a hidden location, having my brains surgically removed and replaced with a quantum-computing device filled with Exchange knowledge. Good times!

Seriously, though, I’ll be off to the October rotation of the three-week Microsoft Certified Master: Microsoft Exchange Server 2007 program. The Master certification is a new certification that Microsoft is rolling out, placed between the MCITP and MCA certifications. It’s so new, in fact, that it doesn’t yet appear on the Find a Microsoft Certification by Technology page.

So, newness established, what does this Master certification entail? First, it’s not your typical Microsoft certification.

To ensure that people going through this experience are ready for it, they’re actually screening candidates. For the Exchange Master program, the published criteria are:

  • 5+ years Exchange 2003
  • 1+ years Exchange 2007
  • Thorough understanding of Exchange design/architecture, AD, DNS, and core network services
  • Certification as a MCITP: Enterprise Messaging (Exchange 2007 exams 70-236, 70-237, and 70-238)
  • Certification as a MCSE Windows 2003 or MCTS: Windows Server 2008 Active Directory Configuration (exam 70-640)

Scrape all that together, and what do you get?

  • Three weeks of “highly intensive classroom training” — and by all reports, they’re not kidding when they say that. I’ve been through plenty of Microsoft classes, and for this one, my corporate lords have completely cleared the decks for me.
  • Three computerized written tests (I assume one per week). I have no idea what these are going to be like, but after having done three exams in the past month, I really hope they’re a notch above the standard Microsoft certification exam.
  • One lab-based exam (administered at the end). Now, I really like the thought of hands-on tests; one of the best job interviews I ever went through included a hands-on test. However, they’re a lot more stressful precisely because you can’t fake things or puzzle out the right answer through careful elimination. You have to know your stuff.

Assuming I survive and my head doesn’t asplode, in a month I’ll get to call myself an Exchange Master. This, of course, leads to the obvious question: do I get an apprentice? If so, I have a suggestion:

The determined apprentice

I really want an apprentice. I think I deserve one. You listening, 3Sharp?


Some nifty Windows Mobile tools

One of the projects I’ve been working on recently involves managing Windows Mobile devices; Tim and I have gotten to spend a bit of time playing with some very cool software. However, we both noticed that Windows Mobile makes some tasks unnecessarily complicated, such as verifying basic network connectivity. For example, can you tell me how to do any of the following under WM 6.0:

  1. Determine which network interfaces you have running at any given moment
  2. Determine the actual IP address configuration a network interface has
  3. Run basic connectivity tools such as ping and traceroute to validate that your device can talk to other network devices

Thanks to a tip from someone at Microsoft, I was introduced to the lovely free tools provided by Enterprise Mobile, including the spiffy Windows Mobile IP Utility. This handy tool gives you a great view of what’s going on network-wise with your device…including seeing the pseudo-devices that are created when you cradle your device (and the funky networking that goes on there).

They also make the GUI CAB Signing Utility, which is especially useful if you’re pushing software applications out to your Windows Mobile device and want them signed. It’s basically a GUI wrapper around the .NET Framework’s signtool.exe binary, allowing you to easily select one or more .CAB files, pick an appropriate certificate from your Personal certificate store (it must have the Code Signing capability), select the output directory, and let it rip. I’ve got a screenshot of it in action in this separate picture over here. For some reason, my computer keeps giving me a signtool error, but the folks at Enterprise Mobile have contacted me and are going to help me troubleshoot this issue over the next few days. Very cool of them!


No, just the Doctor

After looking at a lot of options, Alaric has decided that he wants to be the Ninth Doctor for Halloween. This is good for us — it’s a simple costume, in theory, especially since I’ve already got a sonic screwdriver prop I can lend him. The Ninth Doctor has very simple clothing, especially compared to some of the earlier versions, and beats trying to put together a “Vader’s Apprentice” or “Master Chief John 117” costume. However, finding a suitable jacket at an affordable price (I’m thinking $15-20 here) is going to be the challenge.

Anyone out there got good ideas of how to get a suitably sized jacket (boy size 10; men’s small is too large) for the boy in time, in an affordable range? It doesn’t have to be an exact match.

Feel free to forward this plea for help on.
