
Cory's Blog

January 27
Strategies for Upgrading TECT

TECT is a big system, and one of its upcoming challenges is that it's running Microsoft Windows Small Business Server 2011 Standard. Small Business Server, or "SBS" for short, is Microsoft's easy-to-use infrastructure-in-a-box solution for businesses of approximately 25-75 employees that want an e-mail server and a SharePoint site. The system requirements are fairly modest by today's standards, although it absolutely runs better if you can give it more hardware. The only meaningful requirement is that your server have 4 gigabytes of memory. It runs better with 8, and TECT has 16; it can use a maximum of 32 gigabytes of physical memory. A retail license will set you back about $1,100, and CALs are a bit over $80 a seat. It's not "cheap," but it's less expensive than building out separate servers (SBS is from before virtualization was a thing), each with its own Windows license, application licenses, and CALs.
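As a rough sanity check on those numbers, here is how the licensing math works out. This is a sketch in Python; the $1,100 server license and roughly $80-per-seat CAL figures are the approximate retail prices mentioned above, not exact quotes, and the function name is my own.

```python
def sbs_license_cost(seats, server_license=1100, cal_price=80):
    """Approximate retail cost of an SBS 2011 Standard deployment:
    one server license plus one CAL per seat (figures are rough)."""
    return server_license + seats * cal_price

# At the small end of SBS's target range (25 seats) vs. the large end (75):
print(sbs_license_cost(25))  # 1100 + 25 * 80 = 3100
print(sbs_license_cost(75))  # 1100 + 75 * 80 = 7100
```

Even at the top of the range, one integrated license comes in well under buying separate Windows, Exchange, and SharePoint servers with their own CALs.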

In all, it's a fairly cost-effective way to run a system, but it has the major disadvantage of being a bit of a house of cards in terms of configuration. My particular installation is stable, but it's also not optimal. The other (probably more pressing) disadvantage is that, just as the Windows XP and Server 2003 products are going off of support in the next few months (2014-04-08, baby!), SBS 2011 Standard will be leaving supported status by the end of 2019. (There's no exact date given, but the base OS is scheduled to stop receiving updates by late 2019 or very early 2020.) I know what you're thinking: That's four years away!

Four years though it may be, I don't want to stick myself with something extremely active and very difficult to migrate at the end of the support cycle. By 2019, Microsoft will have loosed at least one more major version of SharePoint and Exchange upon the world, along with a number of versions of Windows Server. The other challenge of upgrading TECT is that the longer I wait, the more entrenched I'll be in a particular mode of operation, and the greater the chance that there will be other active user accounts on the system with data that needs to be preserved when making a change.

For the uninitiated, upgrading server products isn't exactly what I'd call trivial. There are hardware compatibility lists and specific OS requirements to meet. On top of that, there are things to consider in terms of how to move an application (like SharePoint) from one version to the next. From my perspective, both the integrated nature of SBS and the fact that it's built on the 2010 stack (when the complete 2013 stack is available) are risks. I also currently have SBS running directly on TECT's hardware. This isn't bad per se, but it is a somewhat inefficient use of the machine: TECT is a very big server with a very high upgrade ceiling, and virtualizing everything will help make more efficient use of the hardware I have.

The problem is what strategy to take when upgrading all of this stuff to newer versions that will be under support longer. Should I abandon my whole setup and rebuild in place? Build a new infrastructure on separate hardware, cut over at some point, then beef up TECT a little and move the new virtual machines onto it? Or build out additional servers (on some kind of virtual host) and move services one at a time?

Because it is provable that nobody else is using TECT, the option to simply cut off service today and then set up a new infrastructure is somewhat appealing, but it's also the laziest option. The next most desirable option is to get some additional hardware (RAM for my mini and secondary laptop, or a new inexpensive desktop box I can cram a whole bunch of RAM into) and build an entire duplicate infrastructure, which I can then switch on at a moment's notice and later migrate my data to, before finally turning off TECT and adding it to the new kit. This gives me the opportunity to nuke and pave the test setup multiple times before settling on the configuration that works best, and it gives me the freedom to build things up without worrying about affecting anything (or worrying as much about how I'll migrate data, an advantage I gain from the fact that the only data to migrate is my own). The third option is the most complicated, but it's also probably the most likely to match what happens in "the real world" – environments like those at my workplace can't simply be torn down and rebuilt every year, because a) they are so large and complicated, and b) there is no time when someone isn't using them.

The main disadvantage of either of these "better" options is that they're pretty expensive. An inexpensive box I could build today with 32 gigs of memory and a midrange desktop processor for VM testing would cost nearly $1,000. (It may also be more powerful than TECT is, but that's another issue.) If I just install over TECT as it is today, I can avoid buying new hardware, but I also get only one opportunity to build out the new infrastructure correctly.

Just for clarity, my intent when this particular project is done is to be running Windows Server 2012 or 2012 R2 (or bare-metal Hyper-V) on TECT, with virtual machines for Active Directory/DNS/DHCP/files, Exchange, and SharePoint 2013. The idea is that I can more finely tune the amount of resources each of those workloads gets, and of course it'll be easier later to split some of these roles out into more virtual machines if necessary. In addition, because I can add so much RAM to TECT (96GB with one processor, 192GB with two), it wouldn't be a problem to add tasks like a virtualized Debian Linux or FreeBSD UNIX machine – something I simply can't do today, which is why I'm using an old ThinkPad X31 as a Linux shell server.
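To make the resource-tuning idea concrete, here is a hypothetical memory budget for that VM layout, sketched in Python. The per-VM allocations and the host reserve are illustrative guesses on my part, not measured requirements or vendor minimums.

```python
# Hypothetical RAM allocations (in GB) for the planned Hyper-V guests.
# These figures are illustrative guesses, not sizing recommendations.
vm_ram_gb = {
    "ad-dns-dhcp-files": 4,
    "exchange": 8,
    "sharepoint-2013": 8,
    "linux-shell": 2,
}

host_reserve_gb = 4  # leave some memory for the Hyper-V host itself
total_gb = sum(vm_ram_gb.values()) + host_reserve_gb
print(total_gb)  # 26 -- well inside TECT's 96GB single-processor ceiling
```

A budget like this makes the trade-offs visible: even a generous allocation for every role fits in TECT's current 16GB only with squeezing, but leaves enormous headroom once the machine is upgraded.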

The main question I have is how to source the hardware on which to test this infrastructure, and what to do with it once "testing" is over and the VMs I've made are deployed. One thought was to use my upcoming desktop computer as a test machine. It's going to be a very powerful desktop, and my estimate is that I'll need somewhere between 16 and 24 gigabytes of memory for the new virtual machines, which I can run in the background on a desktop that's doing my regular work. With a second network adapter (which is like a $30 add-on), I can even run those VMs on a separate test network with a new DSL modem, so that lighting up the whole infrastructure is as simple as swapping my DSL modem. (This requires that I buy a new DSL modem, or switch the ones I have around again – not impossible, but worth noting as a potential challenge.)

The other possible source of hardware is to buy or build a cheap box specifically for this purpose. The main advantage there is that I'd have more memory to play with in the testing environment, because the VMs wouldn't have to contend with games, Word, and iTunes; the main disadvantage is that once the migration is complete, I'll have a cheap desktop PC in an unbalanced configuration with not much to do. This is the issue currently "plaguing" topham, which has 8TB of storage in it and is pretty close to its RAM ceiling, with 4GB of installed memory. It may or may not accept more RAM, but even if it took 32 gigs, I don't know that I'd use it for this new testing environment, because its chipset isn't very good at handling load. I also don't know what the virtualization tasks I'd like to do would look like with 32 gigs of RAM and an old, slow, dual-core processor.

One more possible source of hardware is to creatively and temporarily reuse stuff I already have on hand. I can upgrade eisbrecher, the Sony Vaio I dislike using, with a big internal disk and up to 12 gigabytes of memory, and MILVAX, my Mac mini, with a bigger disk (or Thunderbolt storage) and up to sixteen gigabytes of memory. (Once I do that, superslab, my ThinkPad, can have the cast-off DIMMs from those machines, bringing it to eight gigs of memory.) These systems can then sit on a dark test network. The main advantage of this plan is that it's inexpensive, and that I'll have a clear and obvious use for the mini and the Vaio once I've finished setting up a network on them and migrated those VMs to TECT. The main disadvantage of the "existing hardware" plan is that I am 100% certain I need more than 16 gigs of RAM in total. The load would have to be split across two systems, and that makes connecting the test rig to the Internet, say, at work, a more complicated proposition. Not impossible, mind. Just more complicated.

Another scenario I thought about is to max out my current mini's memory, buy a second mini and max its memory too, and use the two of them for the "lab." Mac minis take almost no electricity, and although this is nearly as expensive as a single large box, splitting the load between two of them means that later on, one can continue as a test box or take on other server duties while the other becomes my Macintosh again.

One final possible scenario is that I run a backup of TECT in its current state and restore that backup to a new piece of hardware, which performs TECT's current tasks while I build out a new infrastructure on TECT itself. This has the advantage that TECT's core duties keep running consistently while my big server box is used for its true calling: virtualizing a bunch of tasks. I could even do this with, say, topham – a box I already have. Under this scenario, topham's last hurrah would be to collect my mail and serve my SharePoint site while TECT receives niceties like more RAM and possibly a new disk controller, and gets used to build out the new infrastructure. The advantage of this plan is that once the new infrastructure is done, I don't need to wait for TECT to be cleared out before I can move the new infrastructure onto it. (That part should be trivial anyway; I'll just be copying Hyper-V disks and configuration files from one machine to the next.) The main challenge of this strategy is that I don't know how well a backup will capture things like my custom SharePoint sites, or what the challenges will be in bringing up an intermediate version of TECT – issues such as the network driver on topham and eisbrecher, and running a centralized server without enough storage for the files I currently have on it.

Given that this post is about upgrading TECT, I suppose I could also talk about the other strategy. I have TECT; it is running, and by and large it does what I need and want. I could simply choose to leave it alone until such a time, far into the future, when it does none of that. The common refrain on a forum I visit is that if something works and you're familiar with it, it's obviously not worth putting effort into learning, upgrading, or changing to something with new and different functionality. Of course, the trouble is that if I do it that way, the end of TECT will be a huge emergency, and that's a poor plan because I hate putting out fires. I want to be thinking about the migration of TECT even if I don't actually do it right away.

The unpopular-but-reasonable plan is that I could move my e-mail service into a hosted system, like Microsoft Exchange Online, or onto a Linux/UNIX server (like a VPS) with IMAP and maybe a webmail client. This would make TECT less hyper-critical, and I could migrate to software that's still inexpensive. In addition, backups are easier when you don't need to worry about Exchange and SharePoint or virtual machines. Exchange Online isn't actually that expensive: for a single mailbox, it can be had as part of the $150/year Office 365 Small Business Premium package, which conveniently also gets me all of the Office applications I like for five computers. The main disadvantage is that it's an added cost I'm not already paying (I use perpetually-licensed copies of Office right now), and that it costs even more under that scenario to offer e-mail hosting services to other people.
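For a rough feel of that added cost, here is a break-even sketch in Python. Only the $150/year Office 365 figure comes from the text above; the $400 perpetual-license price used in the example is a placeholder assumption, and the function name is my own.

```python
def years_until_subscription_exceeds(perpetual_cost, annual_sub=150):
    """Smallest whole number of years at which cumulative subscription
    spending meets or exceeds a one-time perpetual license cost."""
    years = 0
    while years * annual_sub < perpetual_cost:
        years += 1
    return years

# Assuming a hypothetical $400 one-time cost for a perpetual Office copy:
print(years_until_subscription_exceeds(400))  # 3 (3 * $150 = $450 >= $400)
```

Of course, the subscription also bundles the hosted mailbox and covers five computers, so the break-even point isn't the whole story – but it shows why the recurring cost nags at me.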

The other question about cloud services is what to do about historic content. Exchange Online provides a 25-gigabyte mailbox, and my ultimate intent had been to move my old Gmail accounts into this service and forward new messages from them to my Exchange account. (I'm currently moving messages manually with a desktop client on one of my work computers.)

One potential nicety of this configuration is I don't have to worry about

I previously mentioned two things that are worth repeating:

  1. I have a secondary Linux shell server set up right now.
  2. I would like to keep that function and have it virtualized under TECT with the rest of the Windows infrastructure.

One option I don't often mention, but which is definitely there, is to run a Linux or UNIX system as my local mail server. With Exchange out of the picture, I'd only need to maintain a SharePoint Foundation server, which is easier than maintaining an Exchange server; it would save a raftload of money on licensing (and system resources), and I could begin immediately by putting Hyper-V on eisbrecher or on the current TECT.

The other other option is that I could do e-mail hosting on something like a Mac mini running Mac OS X with Server.app. I can easily add this functionality to the Mac mini I have on hand today, and it has the same effect as before: freeing TECT to be upgraded or changed for other purposes, such as running Windows Server 2012 R2 Essentials with Hyper-V machines for SharePoint and other tasks. (In fact, I could run a UNIX mail server as a Hyper-V VM alongside SharePoint, a shell server, and the Windows infrastructure services provided by WSE.)

The main challenge of Mac OS X Server is that I'm not already familiar with running it. I'm sure it's simple, and somehow that's off-putting to me, as I'm used to my server infrastructure being complicated, even when it's Windows SBS. The main challenge of another Linux or UNIX is, likewise, that I'm just not familiar with mail transfer on those platforms. Not only would I need to learn how to do that, but I'd also need to choose and configure a platform. Cognitively, it's a little easier to stick with Exchange. (Despite the fact that Exchange is a huge, vast monster of a communications platform, with architecture design, scaling, and planning considerations all its own.)

The actual deployment of the whole thing is a bit up in the air. I like having a graphical console available on TECT, so I may pick the full version of Windows Server 2012 or 2012 R2. On the other hand, I have enough other hardware around that I'm not worried about being able to manage it remotely. Another variable is whether I want to put Windows Server Essentials 2012 R2 directly on the hardware and virtualize the applications (SharePoint and Exchange), or run WSE alongside the other machines as a VM. For modularity's sake, I suspect it'll be the latter, but it's worth considering.

The other-other-other possibility is that I run Windows Server Essentials 2012 R2 on another server, such as a Mac mini or an Intel NUC, for the sake of infrastructure, and use TECT as a Hyper-V host for VMs of a file server, Exchange, SharePoint, and a Linux shell box. The question there is which data (if any) deserves to be on the "main" server and which goes onto TECT – and, if I go this route, what the justification is for the separate server in particular. (The only thing I can think of is that virtualizing your primary domain controller isn't considered the best strategy, even less so if it's your only domain controller.)

Of course, under that strategy, TECT still needs more memory to run both Exchange and SharePoint in independent VMs, and the up-to-$1,000 that would be spent on hardware and software for a new infrastructure-services box could have gone toward hardware and software for TECT. And, marginal though it may be, this route still means paying for more electricity, and either building out a bigger backup infrastructure or having two backup systems – one for the small box and one for the big box. (That may not be a bad plan, though, as it would be even more complicated to build a unified backup system for those two boxes; that would almost certainly require yet another box, with an operating system, an application, and hardware, all of which would need to be refreshed on some kind of regular basis.)

None of this is an answer; it's just speculation and some rambling about what's possible. All of that, and we haven't even talked about the fact that TECT is already three years old and working its way through the typical server lifecycle. Of course, one of the great things about Hyper-V is that, in theory, when I buy a new piece of server hardware, all I have to do is turn it on, install the server OS or Hyper-V itself, turn off my VMs, copy them over, and turn them back on.
