Cory's Blog
October 29
The Mac Pro 5,1 is a Bad Choice

This week, we're hammering away at my favorite topic: buying Macs probably isn't a good idea. The target this time is the Mac Pro 5,1 specifically, better known as the 2010/2012 Mac Pro. It was a relatively minor revision to the 4,1, which was 2009's version of the machine. This would have been a solid machine to buy when new if you bought it close to the start of its life and you needed a lot more horsepower than an iMac could provide, or if you needed some upgrade you couldn't (at the time) install in an iMac. If you have one today because you bought it new, it might be worth popping in some upgrades to get a couple more years out of it, but I can hardly recommend buying one today.

For starters, iMacs have become amazing. The displays Apple ships on both its 21.5-inch and 27-inch iMacs are some of the best in computing; the Surface Studio is perhaps the closest competitor. Whether it's for multimedia production, consumption, gaming, or office tasks, Apple's displays are universally heralded as great and frequently elicit positive comments from Windows users. iMacs themselves have become more powerful as well, now offering some of the fastest single-thread performance available in up to four cores. The mainstream CPUs Apple would be using have since been updated to six and now eight cores, and those chips thoroughly outperform the top processors available in the Mac Pro 5,1. In addition, from 2010 to 2016 (when the chips in the currently shipping iMacs launched), everything else in mainstream systems has more than doubled in speed.

In addition to all that, Thunderbolt, added to iMacs in 2011, brings PCI Express connectivity outside the machine, and "box with a PCIe card slot" has been a common upgrade available for various types of Macs since. The newest models have several of these connectors available. Bulk data storage in home and small business networks has continued shifting onto the network as well, with technical users opting to build fast in-home networks and hide piles of disks out of the way in servers with more disk bays than the Mac Pro ever had. Tools like FreeNAS can often be used to make that data available outside the home network, too.

As I alluded to, Apple has managed to stunt the potential of the iMac a little bit. I don't know if I've written about this yet, but my personal theory is that Apple hasn't upgraded the iMac to 8th generation CPUs because an iMac with a hex-core i5 at $1,299 would make the Mac Pro with a hex-core E5 at $4,999 look like an exceptionally bad value. And they're not wrong: that's why no other reputable workstation vendor is still selling Ivy Bridge machines. An i7-8700K or i7-8086K will give the Mac Pro 6,1 a run for its money without necessarily outperforming it in every situation, but it will outperform the 5,1, even one using the top available Xeon X5690 CPUs. Notably, even the i7 configuration of the 2018 15" MacBook Pro will outperform a fully configured Mac Pro 5,1 at CPU tasks.

As mentioned briefly, Apple did release a successor Mac Pro. The Mac Pro 6,1 was released in late 2013, partly in response to some regulations in Europe and partly as a new concept for Apple's pro machine. The Mac Pro 6,1 was designed expressly for the task of 4K video editing in Final Cut Pro, a task it continues to handle very well, though Final Cut Pro is designed to work well on low-performance hardware at any rate, using tricks such as background rendering and leveraging GPUs extensively. The Mac Pro 6,1 was designed in 2012-2013, when GPUs were not as advanced as they started to become shortly after, and in favor of making it a smaller machine Apple switched to a single Xeon socket and got rid of some of the internal expansion options. It's a good computer, but many argue that it's not the machine they need – to which I'm calling shenanigans.

My thinking here is that regular professionals in general never buy the maximum configuration up front, and only highly motivated and highly technical people are popping open Apple's CPU modules and doing their own processor swaps. This works and is technically within the scope of the machine; however, the kind of upgradeability Mac users enjoyed in the '90s and early 2000s was based entirely on the fact that the bus stayed mostly the same all the way out to the G4 era.

Worse than the performance, especially with The Plateau™ still generally in full effect, is that the Mac Pro 5,1 is so expensive. For the $1,500+ it takes to set one up, you can either build a reasonably good generic mainstream PC that should outperform the old Mac Pro by at least a factor of two in every way, or you can put the money (less of it, even) toward a used PC of the same era. Dual Westmere Xeon workstations from Dell, Lenovo, and HP tend to cost between $75 and $300, depending on the configuration. Part of this is that Windows 10 doesn't need a graphics card beyond what ships with those machines, but a new card of any type can be slotted in with no problems, compared to the firmware issues Mac Pros have with non-"approved" graphics cards.

And that's really the core of why buying any Mac is sort of a bummer right now. Anything modern, equipped well to run a current version of the OS, is disappointingly either expensive or inflexible. A bunch of this comes back to the demand for what Dan Frakes at Macworld called the Mythical Midrange Macintosh Minitower.

In general, the MMMM basically describes a Dell OptiPlex or XPS, but with an Apple logo on it. It would be a machine comparable to something like the Power Macintosh 6000 or 7000 series from the mid to late 1990s: a mainstream processor with a range of options, a "normal" amount of RAM, some disks, and a handful of slots. The way it would pan out technically is an iMac, minus the built-in display, to which you attach a display or two of your own.

But Apple doesn't build that. The Mac Pro 7,1, which should launch in the coming year, won't be that; it's unlikely the allegedly "more pro focused" Mac mini will be either; and iMacs will eventually pick up some more computing power, but they will likely not become easier to upgrade. (Notably: iMacs in the current generation have modular RAM, CPUs, and storage, but getting to all those bits is of course not something you'd just casually do in a few minutes between other projects this coming Thursday.)

The Mac Pro 5,1 was a fine computer in 2010. It was still passable in 2012. With some effort and a few compromises, it will still run the newest Mac OS, but buying one today is a bad idea. They're badly overpriced and not particularly powerful by modern standards. If you could set one up for $200, I'd probably recommend it wholeheartedly.

Alas.

October 22
Cloud in a Box

As part of my ongoing ideation along the lines of "What if Computing, but Different?" one of the things I think would make people feel better about modern computing is if more of it could be in their homes.

In today's installment: what if "The Cloud" (services run and maintained by a different, usually commercial, entity) were provisioned partly through a box or virtual appliance running on your own hardware, located in your home or office? There are versions of this in play today, notably in networking and security applications, where a firewall, router, or similar appliance is managed by an external entity. Even cloud-managed antivirus on local computers is a thing now.

I'm mixed on The Cloud. On one hand, the way Microsoft has been doing it for a few years, you can choose what makes sense for your needs. If that means you buy a copy of Office, well, here it is. If you want to subscribe to it, here's that option, and it comes with a bunch of online storage and mobile integration. I think things become problematic once we're all perpetually on the hook for these sorts of recurring software subscriptions. They're extremely profitable, and I can see why a business might find it easier to pay for something this way, but I also think that (in Adobe's case especially) it makes it more difficult to access tools you used to be able to buy and then keep until you had some reason to update. This is especially true on Windows, where you can usually install old software with little to no trouble.

But, if we're stuck with the cloud, there may as well at least be ways to make it less bad for those with slow or inconsistent Internet access.

But I'm thinking of something a little bit different. In my version, it would be a little more like this: what if you had a Google Apps account or a Microsoft Office 365 account, and copies of the data and code needed to run the service were stored on a machine in your house? Basically, what if one of the machines that runs Office 365 did everything the product does, held your data (and maybe some other information), and Microsoft was on the hook for maintaining the box itself? Run this way, the service would behave better in areas with slower Internet connectivity, and it would work well as a device in, say, offices with multiple people whose account data could live on the machine.

I imagine in most situations, not everybody would get such a box. In fact, in a better world, the Internet connections at more places would be better, and a box in one home on a block or in a neighborhood would increase performance for the people in the local area. Though, for a neighborhood's worth of Office 365 usage, you start to need a bigger box – something it makes sense to deploy in a community center or near the local set of telco infrastructure boxes.

In one potential arrangement, somebody could add such a device to their subscription in exchange for being able to use more storage space with the service. In another, a big box could be provided for free and used as a cache or geo-redundant storage option for other subscribers, with the payoff being reduced subscription prices for the person hosting the machine, in exchange for the overhead incurred on their power bill and Internet usage.

In a large environment, say, a corporate or institutional campus, a large box (or a group of them) could be placed for caching, speed, and to keep services "running" when networking becomes unavailable. Caching in this specific manner would mean both sides of a connectivity loss (let's say, the Internet link to a school campus) retain access to the service. People off campus can be redirected back to the copy of the service at the canonical datacenter, and people on campus can work from the cache box. Changes would be re-synced at best effort when connectivity was restored.
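
To make the cache-box idea a little more concrete, here's a minimal sketch in Python (entirely hypothetical; the upstream object and its push/reachable methods are stand-ins, not any real Office 365 interface) of a box that serves reads locally and queues writes for best-effort replay when connectivity returns:

    import queue
    import time

    class BestEffortCache:
        """Toy model of the cache box: serve reads locally, queue writes
        while the upstream service is unreachable, replay them later."""

        def __init__(self, upstream):
            self.upstream = upstream      # stand-in object with push() and reachable()
            self.local_store = {}         # local copy of documents
            self.pending = queue.Queue()  # changes waiting to be re-synced

        def read(self, key):
            # Reads keep working on campus, even during an Internet outage.
            return self.local_store.get(key)

        def write(self, key, value):
            # Writes land locally first, then sync upstream at best effort.
            self.local_store[key] = value
            self.pending.put((key, value, time.time()))
            self.flush()

        def flush(self):
            # Push queued changes whenever connectivity is available.
            while self.upstream.reachable() and not self.pending.empty():
                key, value, ts = self.pending.get()
                try:
                    self.upstream.push(key, value, ts)
                except ConnectionError:
                    # Connectivity dropped mid-flush; put it back and retry later.
                    self.pending.put((key, value, ts))
                    break

The interesting design questions all live in what happens when the same document changes on both sides during an outage; a real service would need real conflict resolution, which this sketch hand-waves away.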

Of course, this is not so distant from the late '90s "every home has good Internet now" vision of home hosting. And, in the late 2000s, Microsoft elaborated on this vision with Windows Home Server, which primarily integrated with Windows Media Center and a few other services (the backup tools, for example) to provide what they saw at the time as an "important" addition to the home computing infrastructure. These days, we're seeing this stuff get added to cloud service providers (iCloud Drive, OneDrive, Google Drive), which increasingly rely on fast Internet connections and consistently, relatively un-congested data paths from the home to the provider – which we know isn't particularly realistic.

This is something I've been wanting for a while. I've written about it in various forms, most notably about an idea for a network-based TV set-top box that would best-effort download high-quality (or: settable quality) versions of TV shows and movies during off-peak periods, to make transfers go faster and to (potentially) save network charges. The closest we've gotten is that at any given moment some percentage of Netflix's catalog is available for download on a device like an iPad or iPhone, primarily for travel.

In this case, the idea of an on-prem $SERVICE appliance would really be less about control or ownership – unless you paid for it outright, it would almost certainly be owned by the vendor and leased under some kind of agreement, and your account would likely still live on the service itself. It would be more about comfort and performance, and perhaps better control over Internet usage in general, especially in exceedingly dire networking situations, such as satellite or long-range Wi-Fi. I think being able to offer a service like that would enable some different and interesting use cases. For example, I've long wanted Office 365 or OneDrive to be a reasonable spot for more of my own daily usage. I'd especially love an option to present OneDrive as an actual mapped or logical "drive" on my computer, and I would love to be able to use it effectively without having a big secondary internal disk and using the sync agent.

The next logical conclusion here is a service where you lease a box configured a certain way and you get the services provided by the box, with options (for varying prices) on how you want to handle things like redundancy, backups, etc. I have an Internet connection capable of hosting my own email and document management services. If leasing an Office 365 box and using it with your own domain were an option, I might do it.

Of course, there are numerous problems with this kind of suggestion. You either have to pick only one box and have it be your main router (or give it control of your router), or you'd have to (manually) do a bunch of stuff in DNS that isn't reasonably possible on home connections – which makes it less than viable to suggest someone could pick up both an Adobe CC caching appliance and an Office 365 appliance.

Ultimately, it might be better to escape The Cloud entirely. I've long thought a better and less maintenance-prone version of something like Windows SBS would be an appropriate home server. Of course, you'd need to convince Microsoft to include a lot of functionality for a certain number of users in something that could be added to an inexpensive piece of hardware. For all that, the ideal home server I'm imagining would probably run around $2,500, before you account for a UPS or for a backup service or device. That, of course, ties in with my lamentations about how badly RDX is priced (because, as a piece of hardware, it would be perfect for the task) and how there's no modern, affordable, big-capacity removable media.

The main reason "Office 365 in a box on your desk" sounds good is because it provides an option for people who don't have the wherewithal or don't want to be on the hook for building something on their own but provides some (admittedly: not all or maybe even most) of the benefits of hosting something locally. The reason this is important is because we can't trust the open source community to build something like this (a plug-and-play home server that maintains itself save for a simple control panel). They never have, and they never will – I'm convinced they're utterly uninterested in the idea. Commercial vendors are the next best bet, but anything a commercial vendor sells for use in perpetuity is going to be abandoned instantly if it can't make billions in profit and will likely operate on similar software and hardware support cycles to the computer industry today, which is to say, the computing industry expects nobody has a computer older than five years old. The idea is actively derided.

So, adapt it to the existing subscription model we know vendors love so much, and introduce a few well-known scenarios that make it possible to have the vendor be on the hook for the health of the machine, even if the vendor has to send you a new one every few years.

October 15
The Perils of Late Night Troubleshooting

Last night, as I was waiting to fall asleep, I noticed the Internet had stopped working. It had been a big, long day, I was in pain, and it was cold, but I got up because I knew my roommate was likely still doing things.

I had just taken some NyQuil and had done plasma exchange earlier, so I was kind of in a compromised state from those things, as usual. In such a state of undress, I wander to the shelf the server is on; the window is open, so I have to fix that; a disk has fallen off the RAID, so I pull that out and put it back in; and disk activity is crazy, so I misdiagnose the problem as the server falling over at the behest of its storage.

Turns out, that really wasn't what was happening, and my first hint should have been that when I got up, my desktop didn't respond – because the last time that happened, I was able to gracefully restart the server to no effect, but when I finally rebooted the desktop, things returned to normal.

So I finally notice that, but at this point I've already held the power button down to hard reboot the server, which is of course Bad™ for all of the virtual machines on it, and then I have to wait for it to come up. At this point it still hasn't occurred to me to cut the network connection to the machine in my bedroom and turn it off; instead, just in case, I power cycle the modem and the other wireless device, and it all came back.

Of course, it didn't all come back, and the price I'm now paying is that one of my virtual machines appears unrecoverable. In fact, it is the lynchpin virtual machine. I spared no saltiness in telling my roommate that, due to trying to troubleshoot while on NyQuil after having to get back up, I made some wrong decisions. When I left the house earlier today, the backup software was estimating it would take about ten hours to restore the file.

It's fortunate for me that I didn't have to restore cronk (or worse: maron), but it is still annoying. Calvin and I will lose a few days of email, unless Outlook is exceptionally smart about cached messages (I could go save them, but I don't think I got anything important). I'll keep a copy of the "bad" VHD files in case something comes up, but I don't think it will.

The main thing this does is make me think about how important it is to start splitting functions out into different virtual machines, and perhaps switch to a more traditional Windows infrastructure setup. I hadn't told my roommate, but I've also been considering switching to a better router, one I can trust to take DNS and DHCP back, which would mean my client computers would be less impacted by events like this. In fact, being able to do more routine maintenance, like rebooting the individual virtual machines and the hypervisor machine more often (without impacting my roommate's access), would probably be a benefit.

Such a change won't prevent my desktop from causing problems, as I believe it did last night, but it might prevent those problems from impacting everything else on the network. I have more thoughts to share on the "network improvement" front but will hold off for another time.

October 01
Good Macs to buy for Mac OS X 10.14

Now that Mac OS X 10.14 "Mojave" has been released, it's time for me to write once more about how annoying it is that there are no reasonably good systems you can buy to run it on.

As I mentioned before, the majority of Apple's product line is shamefully out of date for the planet's most valuable company. Very few new Macs are worth buying at their prices, and so there's nothing reasonable or even upgradeable to buy if you want to have a Mac but you don't, strictly speaking, want to run it as your main computer.

For someone who needs a Mac, there's a reasonable justification to just buy the computer you need today. Used Macs are an option, probably the best one in this case, but there is the challenge of choosing between buying something reasonably configurable with the intent to keep it up to date, and buying something that already comes the way you think you'll want it in a few years.

It's tough to justify getting a Mac if you just like having them around. For a casual user of Macs, and for someone who has just a few Mac OS X apps, my personal advice is to just stop, unless your needs are truly simple enough you can get away with the old baseline Mac mini. (I think you shouldn't, but you could.)

The used market is probably the best bet to get a machine. A 2012 Mac mini will likely cost almost as much as a 2014 Mac mini does, especially the quad-core models or the version with the discrete GPU, which never felt like a very big value add to me. The 2012 Mac mini is one of the last "consumer" upgradeable Macs, so it can take 16GB of off-the-shelf RAM and up to two SATA disks.

The MD101LL/A, a long personal favorite whipping boy of mine, primarily for being the distasteful favorite of people who want "performance" and "expandability" without actually caring about either, should run 10.14 well enough once a solid-state disk is added. Basically, any machine from 2012 with an SSD in it, or to which one can be added, would be "fine."

There are only one or two machines I can categorically recommend against:

The first is the "2010" or "2012" Mac Pro. This machine, with the model identifier of 5,1, is still going for a lot of money because you can expand it internally to have most of the same functionality of the newer 2013 model. I suspect either of these machines would be given a run for its money by some of the newest mainstream hardware if an 8th generation 6-core chip wouldn't do it, a 9th generation 8-core chip almost certainly would. IPC has doubled since there, HyperThreading is still there on i7s and every other component has at least doubled in speed.

Ultimately, just as I said a few weeks ago, I don't know if I agree that there are good Macs to buy right now. The two machines that have been updated to 8th generation chips cost a lot because they also feature Touch Bars. Everything else in Apple's product line is meaningfully outmoded or simply a bad configuration for anybody who wants to do anything at all on a computer.

Ideally, Apple would be announcing new Macs in the next few weeks. There's a holiday season approaching and even though they've missed the big order deadline for academia, schools (at least higher education) do order machines year-round.

September 24
Legacy Disks

I started writing this last weekend, but the time I spent on fixing the server and building shelves meant I didn't have an awful lot of time left over for writing.

This weekend, my housemate was out of town for a day or so, so I figured I would take advantage of the quiet time by patching some virtual machines and then the host machine. I approved an update to the backup software and then started patching one of the VMs, and then the entire system fell over and didn't come back until I looked at it around an hour and a half later.

My house is still very much in a state of flux. A few months ago, I moved the desk that used to be in the office space at the front of the house into my bedroom, and then I started getting sick, so I had to stop making changes. I had to clear some stuff off a table I put there, which had been filled with things my housemate put on it so she could use the other table I had stuff on, and then I started fiddling with the machine. As I moved more things around, it became clear that I needed to address the arrangement of the area.

I shuffled some stuff for the night and set about researching the problem. It turns out the error I was seeing happens for basically any number of totally random reasons, but my closest hint was a result suggesting it had happened to somebody under high IOPS load. One of the disks on the machine had totally died and the cache battery is in a questionable state, so there was my answer.

I couldn't get that disk to re-appear, so I went to bed and headed out the next day to get a disk. I bought two disks and some other computing sundries I've been meaning to get at the local Staples store. I'm happy they had what I needed, and what they had is one of the few disks that matches my needs almost exactly: a newer manufacturing revision of the same disks I have been putting in my server since 2011 or 2012.

It's interesting to think about, because while these disks aren't the worst value in terms of storing a lot of data, they're not very good. I received a suggestion to just upgrade the RAID card and get newer, bigger disks. Not a bad recommendation entirely, but not viable right then, since I was trying to solve the problem of getting the machine to even run.

I got home with the new disks and started to get ready to unpack them and then noticed it.

I already had a disk labeled as having been bought just around a year ago, in August 2017, probably from the previous time a disk dropped entirely. I tend to buy them in twos. So I installed the existing disk and labeled the next two, then started the rebuild and left it on the PERC screen to do that.

Just thinking about the trouble I've had getting disks over the past year or so (and I know this isn't the first time I've come to this conclusion), "small" internal spinning hard disks are basically a legacy technology at this point. On the other hand, my controller can't go above 2TB disks; otherwise I would be looking at putting a few bigger disks in RAID 1 or 10 and perhaps adding an SSD or two.

And really at this point other than a few weak points such as only having one power supply, the biggest problem with the server is IOPS.

I replaced the disk and was able to get the machine running, but because I spent most of Sunday waiting for the rebuild and my roommate was extremely excited to get back online, I just brought the machine back up and haven't since had time to do the patches I originally wanted to do. Fortunately, my roommate stepped out last night and I was able to get a few things patched back up. I've left more of the virtual machines turned off for the past week in hopes that running fewer of them will keep things more stable, at least until I can get a few of my heaviest VMs onto some solid-state disks or reconfigure things to be a little more performant.

Every now and again I think about splitting my virtualization workload back onto smaller machines, and making things a little more manageable in terms of disk IOPS is another reason to do it. Even things like entirely re-physicalizing servers come up from time to time.

This gets at a deeper issue with the TECT setup. When I started working on it in 2010, it was essentially a miniaturized version of what we now refer to as hyperconverged, for convenience and to save a little bit compared to buying two or three servers and either using one as an iSCSI target or just distributing VMs between them randomly. Having a single big system reveals weaknesses in the way I chose to set it up. The big RAID 6 array is particularly badly suited to a virtualization environment, especially when Windows is involved, and doubly so when any of the virtual machines get desktop-style usage.

The machine was purchased at a time when I knew I'd have increasing data storage needs over the years and 2TB disks had just started to exist. I chose the machine I have because it was the best way to get eight disks into a single machine. To move forward with the configuration, I need a new RAID controller, new disks, and ideally some number of solid-state disks to create a mix of storage areas suitable to different needs. How it will all be set up is a detail for later, but it's important to remember that backups are a concern, as always. The software I'm using today doesn't support different backup sets, so I couldn't, say, run a backup of the VMs on one disk to a first external disk and then a backup of the VMs on another disk to a second external drive.
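
Purely as an illustration of what I mean by separate backup sets (the paths and set names here are made up, and a straight file copy of running VHDs isn't crash-consistent, so this only stands in for what proper backup software would do), a minimal Python sketch might map groups of VMs to different external disks like this:

    import shutil
    import time
    from pathlib import Path

    # Hypothetical layout: these paths are placeholders, not my server's real layout.
    BACKUP_SETS = {
        "infrastructure-vms": (Path("D:/VMs/Infrastructure"), Path("F:/Backups")),
        "lab-vms":            (Path("E:/VMs/Lab"),            Path("G:/Backups")),
    }

    def run_backup_sets(sets):
        """Copy each group of VM folders to its own external disk, one set at a time."""
        stamp = time.strftime("%Y-%m-%d")
        for name, (source, destination) in sets.items():
            if not source.exists():
                print(f"skipping {name}: {source} not found")
                continue
            # copytree refuses to overwrite, so each run lands in a dated folder.
            target = destination / f"{name}-{stamp}"
            shutil.copytree(source, target)
            print(f"backed up {name}: {source} -> {target}")

    if __name__ == "__main__":
        run_backup_sets(BACKUP_SETS)

That per-set, per-destination flexibility is roughly what I'd want the real backup software to offer natively.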

Ideally, I would decide what I wanted to do and then build out the backup system for it first. That almost certainly means choosing and installing a USB 3.0 or better controller and then picking up a big disk, or perhaps a NAS or something along the lines of a Drobo.

It's an issue I've known about for a little while, and I know the solutions essentially involve dropping a lot of money on a new controller and new disks; I just need to, you know, do that and get it over with.

September 10
You Probably Shouldn’t Buy A New Mac Right Now

I often dismiss discussion about Apple's products being "old" as somewhat alarmist. Yeah, that chip isn't the newest one, but what meaningful improvement would a newer chip bring to the people using that product?

I still think that's true: in most cases, people want a product to be updated but can't point to what the benefit of updating it would be. Especially the way Apple updates things, where a machine will get thinner and lose ports, or have, at best, the same battery life. Or Apple's implementation of a set of chips will result in something that benchmarks lower than the equivalent PC machines using that hardware.

However, I'm pretty much here for the idea that we should frame this circumstance with the fact Apple has a trillion dollars now and should have the money to drop a new processor into its computers from time to time. They don't always have to be first, and I know some of the machines have a reason for being where they are, but I still think it's shameful anyway.

With that in mind, it's very difficult to recommend almost anything in Apple's lineup today, for the simple reason that almost all of it is older than what you'd get from a PC vendor for the same money, and because some of Apple's products today are so old, it's tough to tell what will look bad the next time Apple axes support for old machines. Earlier this summer, the new system requirements for Mac OS X 10.14 "Mojave" were announced, and they cut deep: anything with a 2nd generation (Sandy Bridge) Intel Core CPU or earlier is no longer supported, with the exception of a single machine: the 2010 Mac Pro (5,1) with an upgraded graphics card. The cut was made because of certain GPU features (Metal support, notably) that the graphics in 3rd generation and newer CPUs have, which is why the Mac Pro can skip the axe with an upgraded graphics card installed.

The main reason the cut feels deep is that in the Mac scene for the past few years, a common argument has been that an SSD and a RAM upgrade make any older machine feel new again. This isn't wrong: my own Mac is a Mac mini from 2011 into which I put some more RAM and an SSD, and it has been a sprightly performer since. However, it'll be cut off by 10.14, and it was starting to show its age and some of its other limitations anyway.

Unfortunately, because they're all so old and/or so expensive, there are no new Macs I want to buy. I would extend this to saying I wouldn't really recommend most current Macs without reservation.

In their laptop family, they're using CPU age and capability to tier their various machines into good/better/best slots. If you look at 13" machines, the oldest among them is the 13" MacBook Air, which hasn't had a real update since early 2015 and uses a 5th generation CPU. (They speed-bumped it in 2017, but that only barely counts.) Next is the 13" MacBook Pro with no Touch Bar, where Apple has kept 7th generation processors, and then the 13" MacBook Pro with Touch Bar, where Apple upgraded to the new quad-core 15W 8th generation processors.

Almost no other computer OEM is intentionally using old hardware to set price points or justify pricing tiers. Any that are are only using hardware one generation behind, and often that's only because the least expensive CPU models (see: Pentium Gold, Pentium Silver) are themselves a generation behind (see: Surface Go).

Some examples are a little more egregious. Having a laptop with a 7th generation CPU is not great, but keeping a line of desktops at 7th generation for no good reason is a step beyond. There's no stated reason why Apple hasn't advanced the iMac's platform, but if I had to guess, it's that having $1,299 iMacs with six-core CPUs exist before they discontinue or augment the Mac Pro 6,1, which starts with a six-core processor configuration at $2,999, would look bad. With talk about the 9th generation (and 8-core mainstream CPUs) already floating around, that old Mac Pro looks really bad.

The Mac Pro deserves its own special mention: it launched just under five years ago, in December 2013. As of this writing, just a few days ago it officially became the longest-sold Mac, replacing the Mac Plus, which sold from January 1986 to October 1990. The Mac Pro was launched to much fanfare with the line "Can't innovate anymore, my ass," which is a little ironic because it launched as an Ivy Bridge EP system on the evening of the Haswell EP launch, and its somewhat unique dual-GPU design became outmoded by the launch of bigger, better single GPUs that could context-switch between displaying graphics and doing GPGPU work – the difficulty of which was part of the original rationale for building a system with two small GPUs instead of designing for one or two big GPUs, as most PC workstations at the time did. In addition, density on everything doubled from Westmere to Ivy Bridge, but in response, Apple halved the size and compute capability of the platform, meaning that if you tried really hard you could get a pair of Westmere Xeons into the old model that could technically outperform the newer machine. As such, used prices on Mac Pro 5,1 systems have stayed high, even as it becomes obvious those machines are going to be the least well supported by upcoming versions of Mac OS X.

The last system to mention is the Mac mini, which is in the unenviable spot of being slower than its predecessor, less repairable and upgradeable, and untouched in the product lineup since it was announced in 2014.

In 2014, I defended the Mini as probably "needing" to be the way it was for Apple to consider it worth building. Basically, my speculation goes, the Mac mini is electrically like a MacBook Air or 13" MacBook Pro from its day, and that would be why it got saddled with soldered RAM and no room to install a second SATA device, as the 2011 and 2012 Minis had. In retrospect, that probably wasn't strictly true in 2014, and with the knowledge that Apple has a trillion dollars just sitting around, excusing the Mini for being that much less expensive to design seems like a bad look, so I'm not going to do it. I think that's the explanation, but I don't think it's a good excuse, and it probably wasn't in 2014 either.

The MacBook Pros with Touch Bar stand out as the only Macs that have been refreshed this year. Those systems should be as good as they always are if you need that kind of machine. However, they are some of the more expensive machines Apple sells, and they are equipped for technical computing, software development, multimedia production, etc.

The MacBook Pro with no Touch Bar is a good computer; a 7th generation CPU is less offensive at that product tier (although still annoying, as everyone else has updated their midrange machines to 8th gen) than the 3rd, 4th, and 5th generation CPUs some of Apple's other machines are still running around with.

As I said about the MD101LL/A close to the end of its selling life, the MacBook Air would be a good deal at around $600 or $700, and Apple would probably still be making a profit on it at that point. The lower price on a machine like that would be a good attraction, and a good gimme to the people who were still doing fine on (probably faster) high-end machines that got obsoleted out of running 10.14, and it would just look better.

If Apple wants to sell a MacBook Air for $999, it should probably be an updated system.

In a better world, instead of just providing deep discounts on machines it hasn't updated in years, Apple would just update the machines. I can see why it may take a little longer than usual to rev the product line to a new machine. Apple has had a slow time of revamping the Mac Pro, and there have been rumors for a year or two about a new budget model Mac laptop. I remain annoyed that they appear to have used old form factors as a reason not to put any new guts into their machines. I hope I'm wrong about their not updating the iMac for optics reasons, but I'm not overall that optimistic about it.

Back to my buried lede: I can't really recommend almost any machine today. The Touch Bar MacBook Pros are the only machines using current processors, and those are high-end machines with premium hardware features (the Touch Bar) for which Apple is charging. It's a good machine if you need that kind of machine. Apple doesn't make a reasonable choice for almost any other kind of machine it sells. You'll almost always be better off either buying used hardware, waiting, or going ahead and switching away from the Mac, if you can at all get away with it. This could change in the next few days, as Apple has one of its fall hardware events coming, but I believe the immediately upcoming event is primarily for phones, tablets, and watches.

August 27
New Mirrorless Cameras

Nikon recently announced a new mirrorless camera system: the first two cameras and the first three lenses, plus a preview of an upcoming flagship lens. I think this is neat for a few reasons, but first, a little bit of meta:

I posted in a few places to start to speculate about whether this would be a mirrorless camera that addresses the concerns I have with the EOS M, which caused it not to be my main camera. Instead, I immediately got replies (before I could expound upon why I thought it was interesting) to the effect that a camera from another vendor would be better. This is the photography equivalent of being told to use another Linux distribution or a different programming language. It might or might not be true, but in one of the cases, I hadn't even gotten to the point where I was thinking about cameras in terms of accessibility around the muscle weakness caused by my chronic illness. In the other, the entire thing was dismissed because of the existence of a particular lens.

I'm excited for a few reasons, even though this kind of system isn't currently what I'm looking at, because mirrorless has for the past few years looked like where I'll be heading if I ever want to get a newer camera. Because of my chronic illness, a few years ago I bought a Canon EOS M to use in travel situations, because my Nikon D300 is big and heavy. Since then, midrange and high end digital SLR cameras haven't gotten much smaller, so I never really considered replacing the D300 or the EOS M.

Before smartphone cameras were good, and before I was diagnosed with my chronic illness, I pretty much hung an SLR or digital SLR around my neck and ran around with it all day, stopping intermittently to record interesting things. Most of what I shot ended up being what normal people would just use a phone to capture today, and that's where I am now. I retain my two cameras essentially for special events only.

I like the D300 and the EOS M a lot, for different reasons. The M is compact and easier to wear around my neck and take with me, but it runs through batteries like crazy and the ergonomics aren't great. It shoots video, but I got it five years ago, before 4K was practical, so it "only" shoots 720p and 1080p. Accessories for it are a little thin on the ground, because it appears Canon has changed some of the standards on the M line since that model.

The D300 is great: I have a lot of good F-mount lenses (this was specifically interesting to me about the Z cameras, since a Z-to-F adapter is available), and I already know I like the ergonomics of Nikon's cameras.

Basically, the question I wanted to ask is whether, for about what I paid for the D300 when it was new ten years ago, a Z6 would be a good compromise between modern high performance, better video, comfort, weight, and good optics, plus the flexibility to use all my existing lenses when appropriate. (Especially given the cost of the Z-series lenses today, while they're still the hot new stuff.)

I don't think I can justify the cost of a new camera right now, and for better or worse, I end up using my iPhone for most of my day-to-day photography. However, part of my excitement about Nikon revisiting mirrorless as a genre is just that it recaptures a time when I was excited about new cameras in general. When I got the D300 in 2007 or 2008, the thing it enabled was for me to just shoot without thinking about it. It appears that's still Nikon's motive with the Z series, and I think that's what digital imaging really should be about – making it easier to get the images you want, whether that's with a $90 compact camera used to take pictures casually or with a $3,000 professional camera that fires off a few thousand actuations daily.

This goes along with any discussion of tools. Nikon appears to have no interest in designing a Hole Hawg, as it were; they understand how important photographic tool usability is. That's what I discovered in 2007 when I got the D300, and it's one of the reasons I haven't given it up even though I have the EOS M, with its portability, video, and twice as many pixels.

I'll be watching the Z line with interest as a hypothetical future entry level Z3 or Z5 may end up being the ideal camera for me.

August 20
The Compromised Macs List

I was out and about yesterday thinking about what I would write on my blog this weekend, and was both pleased and annoyed when I came across the thing that would capture my attention for the next few hours.

It's well known at this point that I'm not the biggest fan of Low End Mac. LEM was founded over twenty years ago at a time when some of the Macs they're most known for their opinions on (the 6200 in particular) were still selling in retail used computer stores for hundreds of dollars. In 1997, it made perfect sense to give the advice that if you were sitting in front of a 6200 and a 7200 with the same specs and for the same price in a Computer Renaissance, you should pick the 7200.

In that sense, Low End Mac as a spec database and reference resource does a lot to de-contextualize its advice and its editorializing about various machines. The site does little to differentiate between pages that contain "reference" information and pages that contain "editorial" information, and in both cases, with particular machines (I have written about this as it pertains to the Power Macintosh/Performa 5200 and 6200 before), LEM explicitly continues to rely on "technical analysis" of the machine and the platform that has long since been disproven as essentially made up by people with no good reason to write what they did, other than that at some point a Power Macintosh 6200 harmed a pet or family member.

I think the 6200 was the first machine classified as a "Road Apple" (a nickname for what a horse leaves behind on a road) by LEM, but since then, basically anything can qualify a machine for what has since been re-branded the "Compromised Macs" list. The list completely de-contextualizes these machines from what computing was like when they were new and how much they cost. So, you get machines where the compromises made were in the name of either making development faster and easier, or simply making the machine cost less. I think in a lot of ways this has poisoned the word "compromise" – because it's important to realize that any computer is purchased with a set of compromises. The compromise on the Performa 6200 is that its architecture is essentially "Performa 630, but with a PowerPC upgrade preinstalled." The compromise on a contemporary machine like a 7200 or a 9500 is that, in trade for the better platform and higher performance, you pay more and you need to do more work in terms of selecting components and software.

So anyway, the Big Thing That Happened is that Burger Becky (who is extremely nice, and this shouldn't be taken as being about her at all) tweeted a link to LEM's site, with an article about how the Macintosh Quadra 800 is considered a "compromised Mac" (although in terms of the site's structure, the URL is still "/roadapples/").

I did miss a few details as I was looking on my phone, but I think it's still severe. (Worth noting: this article claims to have been written in 1999, a detail I couldn't see on my phone.) I was surprised to see any part of the 800 was on the Road Apples list, and keep in mind the list didn't get rebranded as "compromised" until a lot later – like, 2015 or so. "Fortunately," it appears they are referring only to the case, but the article is phrased in such a way that it does not reveal that until the very end, and nowhere is it acknowledged that the Quadra 800 is very arguably tied with the 840 and 950 for the top 68k Mac, depending essentially on whose benchmarks you're using and what your specific needs are. It's got a faster platform and better memory access than the 950, and the lower CPU speed relative to the 840 is offset by more standard support for things like A/UX, support for more memory, and faster memory access; overclocking and other upgrades can close the CPU speed gap anyway.

Notably, part of the article I missed on the phone is that it applies to the 840, 8100, and 8500/9500 as well – this is fair, since all of those machines share the same overall case design, despite internal variations. The main criticism is essentially that the case is too difficult, even bordering on "dangerous," to open up, which I would argue is "true," but only to the extent that most PCs were at the time. Is it true that Apple had better designs? Yes. Did they use them? No. Do I know why? Also no. Does that matter? Really, no. It's a bad machine for people who are in and out of their computers all the time, but most Macs are, really.

The other worthwhile mention here is that the article hasn't addressed the fact that, in the nineteen years since it was published, a lot of the machines lauded for having very easily accessible cases (looking at you, Power Macintosh 7200/7500) have had their plastics degrade. Those are still better case designs, but it's a good example of how LEM fails to take context into account – any context at all, other than (without stating it) some of the context from the time the article was written. The entire site is like this.

The overall context here is that when LEM started, most '030- and '040-based 68k Macs were just a few years old and were still viable as daily-use computers. Prices were falling, and, as I've been doing with mine, a high-end Macintosh Quadra does a reasonably good job of running most of the "everyday computing" software that was available in 1998 (Office 98 notably excepted, although I suspect it would have worked; Office 97 on the PC side will run on a 486). If you were working in any kind of group setting, Office itself would likely have been your biggest limitation.

This kind of thing is one of the most frustrating things about how popular Low End Mac is as a resource in the vintage Mac scene. You get people who come into other spaces with unrealistic ideas about what machine they "should" get or what's "best," and they've read Low End Mac and are rightly confused, because LEM doesn't reflect modern reality as a resource. Low End Mac doesn't often address issues that have come up in the last ten to fifteen years, for example, so while the site would have been helpful if you were buying a 68k Mac to use as a third computer or for a particular task in 1999 or 2002, the reality and the environment have changed a lot.

On Twitter, I was perhaps a little more alarmist than I should have been, thinking this was perhaps new content. If so, it would be a little disturbing to say the least, in part because it does a huge disservice to talk about the 800 or the 840 without addressing that, yes, they're basically the fastest 68k Macs. (They both outrun the Quadra 950 – the 840 because it's got a faster CPU, and the 800, which has the same CPU speed as the 950, because of memory access – so the main reason to have a 950 is if you're building something that needs a lot of slots.) I still think the article is bad, not because I disagree that the 800/840/8100/8500 case is bad (it is) but because I disagree that that's something that merits putting the machine – one of the fastest and most versatile Macs from its era – on the shit-list.

August 13
Surface Go Impressions

This past weekend I found myself in Phoenix for other reasons, but it was a good opportunity to stop by the Microsoft Store and take a gander at the Surface Go. I've liked the small Surface devices since they launched: I bought a Surface RT the day after launch day, I bought the Surface 3 with similar gusto, and here I am with a Surface Go bought within a week of availability. These computers, as with the iPad, are close to the perfect size to put in a reasonably sized shoulder or air-travel carry-on bag, or to leave alongside a main computer for sideboard organizational tasks or reading.

Last year, I bought a Surface Laptop, officially to replace both the Surface 3 (which is still fast enough and is a great size, but has worn out and isn't so great on the "go" any more) and my old at-home mainstay, the ThinkPad T400. But the Surface Laptop is big enough that it failed to completely replace my older Surfaces, and even the third generation iPad, as a day-to-day carry device.

So I bought the Surface Go, and I've got some impressions of it. I was originally going to write a review and include benchmarks comparing the Go to the 3 and the Laptop, using BrowserBench, Cinebench, and a few other tools. Ultimately, though, you aren't buying a $400 or $549 computer because it wins at benchmarks. There are few entries in this market, fewer still are full-fat Windows computers, and even fewer of those have "Lake" processors and NVMe storage.

If you're looking at benchmarks, you likely want a bigger computer, and you're probably fine with all that entails. I know there are literally dozens of you who want a Lake-Y computer with Thunderbolt 3 so you can use an external GPU. This is the kind of computer you buy to keep on a coffee table to read articles, or something you keep in a bag to do computing on the go. It should be able to do all of that stuff. In the benchmark tests I did, it outperforms the Surface 3 "a bit," and it sits in the middle of a few benchmarks I ran on Sandy Bridge (second gen) laptops. I think the slower one of those was on battery or in an energy-saving mode and the other two were at full bore. The upshot is that the faster Sandy Bridge (dual-core) laptops are close to twice as fast as this computer, and the 7th gen Intel Core in the Surface Laptop is over twice as fast as those systems.

But The Plateau™ essentially means that, with the NVMe storage, this machine is "Good Enough" for almost everything the average daily computer user will need. It will probably even run Slack and Chrome at the same time.

The biggest technical problem with this system ends up being the configuration options. As with almost every review and impression I've seen so far, I wish 8/128, or at least 4/128 with NVMe, were the default configuration. For "light" users, I don't think the eMMC storage will be that big of a limitation, and a 4/64 configuration puts it in line with high-end tablets such as the iPad Pro in terms of memory for active multi-tasking. Mid to long term, I think 4GB of RAM could be a limitation, especially if "light usage" ever evolves to include things like Electron-based chat programs. The difference there is that Windows has swap and iOS doesn't, preferring instead to suspend entire applications to disk rather than swapping bits of memory out; this is done to reduce the CPU and network impact of non-active applications on iOS and Android as well. The other bummer is that 64 gigs of storage can get tight very quickly on Windows. I have Windows 10, Office 365, two chat programs, and my OneNote and Outlook data downloaded, and my system has 33 gigs used. Things like the twice-yearly Windows 10 feature updates need a lot of disk space, and if I were to install any games or other big software packages, they would instantly fill my disk. I think the default configuration should probably be upped to 8/128 (with the $550 model becoming 8/256), or the base model should be 4/128 using SATA or NVMe storage, to make the experience that much better.

The next problem with this machine really is the small keyboard. The original Surface benefitted from having a 10-inch widescreen and the Surface 3 benefitted from extremely wide bezels and a re-orientation of the device with the Windows button on the right-hand side, meaning there was still room for a very large keyboard. The Surface Go's Type Cover is almost a full inch narrower than that of the Surface 3, itself almost as wide as the Surface RT. In a couple of days of intermittent use, I've gotten used to the keyboard and am a lot more accurate with it. I've heard some suggestions that it'll be difficult to go back and forth between keyboards, but I haven't had this myself, in swapping between the Surface Go, Surface Laptop, and my computer at work, which has a ThinkPad UltraNav USB keyboard on it.

The device is worth considering if you have and like your previous "mini" Surface system. It is faster, even if not by much, and the higher configuration options should make multi-tasking easier. The display is a slightly lower resolution but, I think, a better panel. The previous Surface pens and keyboards do work, which can allow you to side-step the issue of the very small keyboard; the cameras are better; and the machine supports Windows Hello if you want to use it. The wireless is better. SurfaceConnect is a much better charging interface, USB Type C is competent for charging, and the regular C-to-A adapter you can buy from Microsoft (or Apple, or Belkin, or Google) is robust enough for things like flash drives and portable hard disk cords. It can charge, although slowly, from things like USB battery packs (functionality I used a lot on my Surface 3, though the Micro-USB connector on that machine was pretty flimsy and mine's misshapen and no longer holds cables well) and old phone and iPad chargers. It will charge overnight from pretty much any USB charger, and it'll run and charge from any 2.4-amp charger such as an iPad or Surface 3 brick. Type C power adapters, such as the ones from the Nintendo Switch and MacBook or newer Type C phone and tablet chargers, should also run the device well.

Since Type C power is of note to me personally, I want to say that there's one big-ish caveat for anyone who will be doing heavy "compute" on this device. (You shouldn't do heavy compute on this device, even though you can.) You should either buy a good Power Delivery charger (Switch, MacBook, an 18W+ phone adapter, whatever) or stick with the SurfaceConnect charger. If you do something like render a big Blender file while powered by (in my case) a Surface 3 charger with a generic A-to-C cord, the CPU will lock to about a quarter of its optimal speed, 400MHz. Your video export will get done eventually, but it'll take a lot longer than if you use a regular SurfaceConnect power charger. This doesn't impact my workloads a lot, and once the load evens out, the CPU can run intermittently at up to its normal 1.6GHz speed.

There's a lot more to talk about, and I recommend anyone interested take the time to check the device out in person and look at other reviews. Microsoft's retail stores are accommodating to people who camp out, try the keyboard and the camera, and test the software loaded on the demo units (the store load-out is a little weird compared to how you'll set one up when you get it, but that's fine; they're trying hard to push the idea of the Windows Store).

Ultimately, my conclusion is that it is "Good Enough," especially given it's one of the only devices in this size band; the other machines are the iPad and the MacBook. What I mean by that is that every other device with a 10-12" display is comically large, because they're designed either for education or for the $180 price range. The closest competitor in terms of computers that will fit in my bag is the Apple iPad, which is a good computing device, but if you need more functionality or flexibility than what iOS has (especially what iOS has today, not what Apple might deign to add in three years), then the Surface Go is probably the device you want. If you can carry more, then more options are available, and almost all other devices will be more powerful and more flexible in some way, and a few are also less expensive, depending on what your priorities are.

August 06
Trillion Dollars

On August 2, 2018, it was announced that Apple’s overall value had reached or exceeded a trillion dollars.

I don’t normally comment on “AAPL” as a business in the modern context, specifically because I feel like it’s addressed well enough everywhere else. I’m sure dozens of other blogs and Twitter feeds will have this take, but I do think it’s interesting the way some of us (and above all else this is a call-out post for myself) treat Apple when it comes to issues like updated products.

In short, knowing that Apple has, or could have, a trillion dollars hanging around, it seems ridiculous that half their product line consists of machines that are between two and three years old, when newer components exist and are in many cases drop-in upgrades, or are on platforms that could be designed as drop-in upgrades to the design in question.

The most egregious example of late is really the Mac mini. It is technically newer than the Mac Pro, but it’s also a platform that should be easier to upgrade. In addition, the Mini itself is using Haswell (4th generation) components, but Broadwell is available as a drop-in upgrade, and the 8th generation has been available for most of the year and is a very meaningful upgrade. In addition, the Mac mini is needlessly (because it doesn’t run on a battery) restricted to the lowest voltage types of components, and is built without basic upgradeability and expandability.

I’ve argued in Apple’s favor before, suggesting the reason they do this is to keep the cost of the Mac mini platform down, but newer versions of the systems you could reasonably build the Mini around have come out and would be much better performers. Even so, despite having less overall “desktop experience” marketshare than all of the other computer makers, Apple has more cash lying around than any of them, and so they can probably afford to just build an updated Mac mini with the right bits to suit that platform’s needs.

Hardware issues are one thing, but using Apple’s products is often full of little cuts. For example, the free allocation of iCloud storage is 5GB, regardless of how many devices you have or what else you’re running. It stings to go buy a $1000 phone or a $2000 laptop (or more) and receive an angry message within just a few days of taking pictures or putting files on the desktop, due to the default configurations of Apple’s devices.

I think iCloud is a good product and I don’t dislike the things iCloud does, such as automatically backing up your documents folder, but Microsoft’s and Google’s services each start you off with more free space (15 gigabytes, in Google’s case), and they each have ways to get more space for free. For example, Google Photos will store an “unlimited” number of photos at reduced quality, Microsoft throws you a few extra gigs for even using the service at all, and, again on the Google side, there are frequently special deals for storing more data in Google’s services if you have Android or ChromeOS devices.

Beyond all that, there’s the thousand tiny cuts of MacOS and iOS over the years. Little bugs or oddities about the experience that didn’t get fixed for a long time or haven’t been fixed yet.

Ultimately it stings to know that Apple has all this money: they could hire people, they could build more Mac models, they could improve the iPad’s utility and credibility as a “real computer,” and they just don’t.

To put a finer point on it, I don’t think this is a “new Apple” or “Tim Cook” thing. Apple was pretty clearly on this trajectory before Steve Jobs died. It just happens to be now that they hit a trillion dollars.

Apple likes to think of itself as small, but it’s clear they’re really not. They’re the dragon of not just the computing industry but the entire world - the highest-valued corporation in existence - sitting on a giant pile of cash for no good reason.

Apple needs to either get rid of some of this cash - paying its taxes would probably be a good starting point - or hire more people, if that’s what it takes, to work on issues such as keeping the Mac hardware line up to date and fixing issues in its other software.

About this blog
Computer and physical infrastructure, platforms, and ecosystems; reactions to news; and observations from life.