August 20
The Compromised Macs List

I was out and about yesterday thinking about what I would write on my blog this weekend, and was both pleased and annoyed when I came across the thing that would capture my attention for the next few hours.

It's well known at this point that I'm not the biggest fan of Low End Mac. LEM was founded over twenty years ago at a time when some of the Macs they're most known for their opinions on (the 6200 in particular) were still selling in retail used computer stores for hundreds of dollars. In 1997, it made perfect sense to give the advice that if you were sitting in front of a 6200 and a 7200 with the same specs and for the same price in a Computer Renaissance, you should pick the 7200.

In that sense, Low End Mac as a spec database and reference resource does a lot to de-contextualize its advice and its editorializing about various machines. The site does little to differentiate between pages that contain "reference" information and pages that contain "editorial" information. For particular machines (I have written about this as it pertains to the Power Macintosh/Performa 5200 and 6200 before), LEM continues to rely on "technical analysis" of the machine and the platform that has long since been disproven, analysis essentially made up by people with no good reason to write what they did, other than that at some point a Power Macintosh 6200 harmed a pet or family member.

I think the 6200 was the first machine LEM classified as a "Road Apple" (a nickname for the droppings a horse leaves in the road), but since then, basically anything can qualify a machine for what has since been re-branded the "Compromised Machines" list. The list completely de-contextualizes these machines from what computing was like when they were new and how much they cost. So, you get machines whose compromises were made in the name of either making development faster and easier, or simply making the machine cost less. I think in a lot of ways this has poisoned the word "compromise" – because it's important to realize that any computer is purchased with a set of compromises. The compromise on the Performa 6200 is that its architecture is essentially "a Performa 630 with a PowerPC upgrade preinstalled." The compromise on a contemporary machine like a 7200 or a 9500 is that in trade for the better platform and higher performance, you need to do more work in terms of selecting components and software.

So anyway, the Big Thing That Happened is that Burger Becky (who is extremely nice; this shouldn't be taken as being about her at all) tweeted a link to LEM's site, to an article about how the Macintosh Quadra 800 is considered a "compromised Mac" (although in terms of the site's structure, the URL is still "/roadapples/").

I did miss a few details as I was looking on my phone, but I think it's still severe. (Worth noting: the article claims to have been written in 1999, a detail I couldn't see on my phone.) I was surprised to see any part of the 800 on the Road Apples list – and keep in mind the list didn't get rebranded as "compromised" until a lot later, like 2015 or so. "Fortunately," it appears they are referring only to the case, but the article is phrased in such a way that it does not reveal that until the very end, and nowhere does it acknowledge that the Quadra 800 is very arguably tied with the 840 and 950 for the top 68k Mac, depending essentially on whose benchmarks you're using and what your specific needs are. It's got a faster platform and better memory access than the 950, and its lower CPU speed relative to the 840 is offset by more standard support for things like A/UX, support for more memory, faster memory access, and the fact that overclocking and other upgrades can close the gap anyway.

Notably, part of the article I missed on the phone is that it applies to the 840, 8100, and 8500/9500 as well – this is fair, since all of those machines share the same overall case design, despite internal variations. The main criticism is essentially that the case is too difficult, even bordering on "dangerous" to open up, which I would argue is "true" but only to the extent that most PCs were at the time. Is it true that Apple had better designs? Yes. Did they use them? No. Do I know why? Also no. Does that matter? Really, no. It's a bad machine for people who are in and out of their computers all the time, but most Macs are, really.

The other worthwhile mention here is that the article hasn't been updated to address the fact that in the nineteen years since it was published, a lot of the machines lauded for having very easily accessible cases (looking at you, Power Macintosh 7200/7500) have had their plastics degrade. These are still better case designs, but it's a good example of how LEM fails to take context into account. Any context, at all, other than (without stating it) some of the context from the time the article was written. The entire site is like this.

The overall context here is that when LEM started, most '030- and '040-based 68k Macs were just a few years old and were still viable as daily use computers. Prices were falling, and as I've been doing with mine, a high-end Macintosh Quadra does a reasonably good job of running most of the "everyday computing" software that was available in 1998. (Office 98 is the notable exception, although I suspect it would have worked; Office 97 on the PC side will run on a 486.) If you were working in any kind of group setting, Office itself would likely have been your biggest limitation.

This kind of thing is one of the most frustrating things about how popular Low End Mac is as a resource in the vintage Mac scene. You get people who come into other spaces with unrealistic ideas about what machine they "should" get or what's "best" because they've read Low End Mac, and they're rightly confused, because LEM as a resource doesn't reflect modern reality. Low End Mac doesn't often address issues that have come up in the last ten to fifteen years, for example, so while the site is helpful if you were buying a 68k Mac to use as a third computer or for a particular task in 1999 or 2000, the reality and environment have changed a lot.

On Twitter, I was perhaps a little more alarmist than I should have been, thinking this was perhaps new content. If it were, that would be a little disturbing, to say the least, in part because it does a huge disservice to talk about the 800 or the 840 without addressing that, yes, they're basically the fastest 68k Macs. (They both outrun the Quadra 950: the 840 because it's got a faster CPU, and the 800 – which has the same CPU speed as the 950 – because of its memory access, so the main reason to have a 950 is if you're building something that needs a lot of slots.) I still think the article is bad, not because I disagree that the 800/840/8100/8500 case is bad (it is) but because I disagree that that's something that merits putting the machine – one of the fastest and most versatile Macs of its era – on the shit-list.

August 13
Surface Go Impressions

This past weekend I found myself in Phoenix for other reasons, but it was a good opportunity to stop by the Microsoft Store to take a gander at the Surface Go. I've liked the small Surface devices since they launched. I bought a Surface RT on the day after launch day. I bought the Surface 3 with similar gusto, and here I am with a Surface Go bought within a week of availability. These computers, as with the iPad, are close to the perfect size to put in a reasonably sized shoulder bag or air-travel carry-on, or to leave alongside a main computer for sideboard organizational tasks or reading.

Last year, I bought a Surface Laptop, officially to replace both the Surface 3 (which is still fast enough and is a great size, but has worn out and isn't so great on the "Go" any more) and my old at-home mainstay, the ThinkPad T400. The Surface Laptop is big enough, though, that it failed to completely replace my older Surfaces, and even the third generation iPad, as day-to-day carry devices.

So I bought the Surface Go, and I've got some impressions of it. I was originally going to write a review and include benchmarks comparing the Go to the 3 and the Laptop, using things like BrowserBench, Cinebench, and a few other tools. Ultimately, though, you aren't buying a $400 or $549 computer because it wins at benchmarks. There are few entries in this market, even fewer of them are full-fat Windows computers, and even fewer of those have "Lake" processors and NVMe storage.

If you're looking at benchmarks, you likely want a bigger computer and you're probably fine with all that entails. I know there are literally dozens of you who want a Lake-Y computer with Thunderbolt 3 so you can use an external GPU. This is the kind of computer you buy to keep on a coffee table to read articles, or something you keep in a bag to do computer on the go. It should be able to do all of that stuff. In the benchmark tests I did, it outperforms the Surface 3 "a bit" and it sits in the middle of a few benchmarks I ran on Sandy Bridge (second gen) laptops. I think the slower of those was on battery or in an energy saving mode and the other two were at full bore. The takeaway is that the faster Sandy Bridge (dual core) laptops are close to twice as fast as this computer, and the 7th gen Intel Core in the Surface Laptop is over twice as fast as those systems.

But, The Plateau™ essentially means that because of the NVMe storage, this machine is "Good Enough" for almost everything your daily computer user will need. It will probably even run Slack and Chrome at the same time.

The biggest technical problem with this system ends up being the configuration options. As with almost every review and impression I've seen so far, I wish 8/128, or at least 4/128 with NVMe, were the default configuration. For "light" users, I don't think the eMMC storage will be that big of a limitation, and a 4/64 configuration puts it in line with where high end tablets such as the iPad Pro are in terms of memory for active multi-tasking. Mid to long-term, I think 4GB of RAM could be a limitation, especially if "light usage" ever evolves to include things like Electron-based chat programs. The difference there is that Windows has swap and iOS doesn't; iOS prefers to suspend entire applications rather than swap bits of memory out, which is done to reduce the CPU and network impact of non-active applications on iOS and Android alike. The other bummer is that 64 gigs of storage can get tight very quickly on Windows. I have Windows 10, Office 365, and two chat programs installed, my OneNote and Outlook data are downloaded, and my system has 33 gigs used. Things like the twice-yearly Windows 10 feature updates need a lot of disk space, and if I were to install any games or other big software packages, they would instantly fill my disk. I think the default configuration should be upped to 8/128 (with the $550 model becoming 8/256), or the base model should at least be 4/128 using SATA or NVMe storage, to make the experience that much better.

The next problem with this machine really is the small keyboard. The original Surface benefitted from having a 10-inch widescreen, and the Surface 3 benefitted from extremely wide bezels and a re-orientation of the device with the Windows button on the right-hand side, meaning there was still room for a very large keyboard. The Surface Go's Type Cover is almost a full inch narrower than that of the Surface 3, which was itself almost as wide as the Surface RT's. In a couple of days of intermittent use, I've gotten used to the keyboard and am a lot more accurate with it. I've heard some suggestions that it'll be difficult to go back and forth between keyboards, but I haven't had that problem myself swapping between the Surface Go, the Surface Laptop, and my computer at work, which has a ThinkPad UltraNav USB keyboard on it.

The device is worth considering if you have and like your previous "Mini" Surface system. It is faster, even if not by much, and the higher configuration options should make multi-tasking easier. The display is a slightly lower resolution but is, I think, a better panel. The previous Surface pens and keyboards work, which can let you side-step the issue of the very small keyboard. The cameras are better, the machine supports Windows Hello if you want to use it, and the wireless is better. SurfaceConnect is a much better charging interface, USB Type C is competent for charging, and the regular C to A adapter you can buy from Microsoft (or Apple, or Belkin, or Google) is robust enough for things like flash drives and portable hard disk cords. It can charge, although slowly, from things like USB battery packs (functionality I used a lot on my Surface 3, though the Micro-USB connector on that machine was pretty flimsy and mine is misshapen and no longer holds cables well) and old phone and iPad chargers. It will charge overnight from pretty much any USB charger, and it'll run and charge from any 2.4-amp charger such as an iPad or Surface 3 brick. Type C power adapters such as the ones from the Nintendo Switch and MacBook, or newer Type C phone and tablet chargers, should also run the device well.

Since Type C power is of note to me personally, I want to note one big-ish caveat for anyone who will be doing heavy "compute" on this device. (You shouldn't do heavy compute on this device, even though you can.) You should either buy a good Power Delivery charger (Switch, MacBook, an 18W+ phone adapter, whatever) or stick with the SurfaceConnect charger. If you do something like render a big Blender file while powered by (in my case) a Surface 3 charger with a generic A to C cord, the CPU will lock to about a quarter of its optimal speed, 400MHz. Your video export will get done eventually, but it'll take a lot longer than if you use a regular SurfaceConnect power supply. This doesn't impact my workloads a lot, and once the load evens out, the CPU can run intermittently at up to its normal 1.6GHz speed.
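
A rough way to think about that caveat is in watts: charger output is just volts times amps, so a legacy 5V brick tops out around 12W, while a Power Delivery charger that negotiates a higher voltage can deliver two or more times that. The figures below are round numbers I'm assuming for illustration, not measured specs of any particular Surface charger:

    # Illustrative wattage math (P = V * I) for a few charger types.
    # Voltages and currents are typical round numbers, not measured Surface specs.
    chargers = {
        "Legacy 5V/2.4A USB-A brick (iPad/Surface 3 class)": (5.0, 2.4),
        "USB-C Power Delivery at 9V/2A (small phone charger)": (9.0, 2.0),
        "USB-C Power Delivery at 15V/2A (Switch/MacBook class)": (15.0, 2.0),
    }

    for name, (volts, amps) in chargers.items():
        print(f"{name}: {volts * amps:.0f} W")

    # With only ~12 W available, the system throttles the CPU hard under sustained
    # load; with ~18-30 W it can hold something much closer to its normal clocks.

With that in mind, the 400MHz lock on a 12W-class brick is less surprising than it first looks.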

There's a lot more to talk about, and I recommend anyone interested take the time to check the device out in person and look at other reviews. Microsoft's retail stores are accommodating to people who camp out to try the keyboard and the camera and test the software loaded on them. (The store load-out is a little weird compared to how you'll set one up when you get it, but that's fine; they're trying hard to push the idea of the Windows Store.)

Ultimately, my conclusion is that it is "Good Enough," especially given it's one of the only devices in this size band; the other machines are the iPad and the MacBook. What I mean by that is that every other device with a 10-12" display is comically large, because they're designed either for education or for the $180 price range. The closest competitor in terms of computers that will fit in my bag is the Apple iPad, which is a good computing device, but if you need more functionality or flexibility than what iOS has (especially what iOS has today, not what Apple might deign to add in three years) then the Surface Go is probably the device you want. If you can carry more, then more options are available, and almost all other devices will be more powerful and more flexible in some way, and a few are also less expensive, depending on what your priorities are.

August 06
Trillion Dollars

On August 2, 2018, it was announced that Apple’s overall value had reached or exceeded a trillion dollars.

I don’t normally comment on “AAPL” as a business in the modern context, specifically, because I feel like it’s addressed well enough everywhere else. I’m sure dozens of other blogs and twitter feeds will have this take, but I do think it’s interesting the way some of us (and above all else this is a call-out post for myself) treat Apple when it comes to issues like updated products.

In short, knowing that Apple has, or could have, a trillion dollars hanging around, it seems ridiculous that half their product line consists of machines that are between two and three years old, when newer components exist that are in many cases drop-in upgrades, or are on platforms that could be designed as drop-in replacements for the designs in question.

The most egregious example of late is really the Mac mini. It is technically newer than the Mac Pro, but it’s also a platform that should be easier to upgrade. The Mini itself is still using Haswell (4th generation) components, even though Broadwell is available as a drop-in upgrade and the 8th generation has been available for most of the year and is a very meaningful upgrade. On top of that, the Mac mini is needlessly (because it doesn’t run on a battery) restricted to the lowest-voltage types of components, and is built without basic upgradeability and expandability.

I’ve argued in Apple’s favor before, suggesting the reason they do this is to keep the cost of the Mac mini platform down, but newer versions of the systems you could reasonably build the Mini around have come out and would be much better performers. Even so, despite having less overall “desktop experience” marketshare than all of the other computer makers, Apple has more cash lying around than any of them, so they can probably afford to just build an updated Mac mini with the bits best suited to that platform’s needs.

Hardware issues are one thing, but using Apple’s products is often full of little cuts. For example, the free allocation of cloud storage is 5GB, regardless of how many devices you have or what else you’re running. It stings to buy a $1000 phone or a $2000 laptop (or more) and receive an angry storage warning within just a few days of taking pictures or putting files on the desktop, due to the default configurations of Apple’s devices.

I think iCloud is a good product and I don’t dislike the things iCloud does, such as automatically backing up your documents folder, but Microsoft’s and Google’s services each start you off with 15 gigabytes free, and they each have ways to get more space for free. For example, Google Photos will store an “unlimited” number of photos at a reduced quality. Microsoft throws you a few extra gigs for even using the service at all, and again on the Google side, there are frequently special deals for storing more data in Google’s services if you have Android or ChromeOS devices.

Beyond all that, there’s the thousand tiny cuts of MacOS and iOS over the years. Little bugs or oddities about the experience that didn’t get fixed for a long time or haven’t been fixed yet.

Ultimately, it stings to know that Apple has all this money; they could hire people, they could build more Mac models, they could improve the iPad’s utility and credibility as a “real computer,” and they just don’t.

To put a finer point on it, I don’t think this is a “new Apple” or “Tim Cook” thing. Apple was pretty clearly on this trajectory before Steve Jobs died. It just happens to be now that they hit a trillion dollars.

Apple likes to think of itself as small, but it’s clear they’re really not. They’re the dragon of not just the computing industry but the entire world - the highest-valued corporation in existence - sitting on a giant pile of cash for no good reason.

Apple needs to either get rid of some of this cash - paying its taxes would probably be a good starting point - or hire more people to work on issues such as keeping the Mac hardware line up to date and fixing issues in its other software.

July 31
Updates: Life & Health

It has been a while since I have blogged! Four months! And that was something I drafted a few months prior to that. (haha)

Let’s just call a spade a spade and admit that it’s been a year since I have posted on the blog with any kind of regularity. I would like to start posting again, but the trouble is always in whether or not I have time and energy for other things.

A lot of it comes down to just choosing what you’d like to do. I’ve been on Twitter, The Fediverse, and doing other things, but I’ve also had health things happening.

For the most part, it’s been due to health. Around a year ago I started having extremely severe health problems which have led me on a pretty bad up-and-down.

The short version is as follows:

  • I had big blood clots in/on both lungs
  • I didn’t know, until I fell and the ambulance had to come get me
  • Hospitalized for a few days
  • Placed on blood thinners
  • After that, the most recent attempt at placing vascular access (which we had written off just a few days before) reopened and we were able to start using it. Thereafter, the graft, which is of a special type, has gone down again every 2-3 months. This means I still have a catheter in my chest, so I can still get the plasma exchange treatments, but it’s annoying.

The most recent big event is that the graft has clotted once more and my normal surgeon has asked me not to come back, forcing me to (really long story) go to Phoenix on a day when it was 115° and meet with a new surgeon and schedule a new operation, this time on what was supposed to be my first scheduled day of vacation.

I’ve made my compromise and I’m going to reschedule my vacation, but I’d be lying if I said I wasn’t really annoyed about all of this.

Ideally, watch out for more dispatches in this space.

February 19
Antsle and home-labbing and self-hosting

Meta: As always, I wrote some of this a few months ago, but it got delayed for motivation reasons. Some of the details here are based on configurations that were available around August of last year, and it appears some new configurations have since been added.

A month or two ago (read: sometime last year), I discovered the Antsle via an ad on the /r/homelab subreddit, which I read sometimes but don't post to. I thought it was an interesting idea, and the ad they aim specifically at the homelab subreddit is particularly interesting, because it basically claims to compete with VPS services from cloud providers.

    In this article, I will not address privacy concerns: I think you either believe a cloud product is "fine" and it sufficiently addresses privacy and data safety concerns, or you do not, and you likely wouldn't use one anyway.

    Antsle is a small virtualization appliance running a Gentoo-based distribution and some customized management tools. The hardware is mediocre: there are 4- and 8-core versions based on a now-old Intel Atom processor, and it can be equipped with up to four 2.5-inch disks and 64 gigabytes of memory. Antsle does you a solid by starting the configurations with SSDs, although there are HDD expansion options, and if the need arose, you could install an HDD later. (Although they say this voids any remaining warranty.)

The machines are extremely costly for what you get. The top end configuration, with an 8-core CPU, 16TB of SSD storage, and 64 gigabytes of RAM, is over $12,000. The base Antsle unit, with a quad-core CPU, 8GB of RAM, and two 128GB SSDs (a nice touch), is $760. It bears mentioning again that the CPUs they've used are high-end Atoms: they should do most things fine, but they are by no means speed demons. (EDIT: the new upper ceiling is $15,600, although that machine has a better processor, a Xeon D with more cores and a higher RAM ceiling, plus room for more storage devices.)

    The site claims the Antsle is for developers and geeks who want to use VPS services and get advantages of cloud computing, but don't want to, you know, actually use cloud computing.

The other issue I have with this marketing is that it seems odd to market an appliance to the developer and geek market. The Antsle is a normal x86 computer onto which you could install another OS, but a huge part of their marketing is about their particular mix of virtualization and containerization. These customers are the ones most likely to want to run their own software stack, or to be willing to install the operating system of their choice on the appliance.

    The Product page talks about easy access to your Antsle from anywhere and use cases such as hosting web sites, but it doesn't talk about how. They don't address the potential costs or inconvenience of using a home or small office Internet connection to run one of these devices.

The way this must happen is that either your Antsle phones home (or to another service) and your data flows through a remote datacenter when you access it from outside your home or office, or you purchase an Internet connection with one or more static IPs whose terms of service allow running servers, plus a domain name (usually needed on cloud services as well). That translates into a business class Internet connection, which I have covered before as being kind of a scam, but just for an idea: I have such a connection with five public static IP addresses, and I pay $180/month for it. That's nearly a $140 upcharge over what this speed would cost on my provider's residential service. The other ISP in my area charges even more for lower speeds.

An Antsle is favorably priced only if you do not count the cost of an Internet connection. The baseline Antsle works out to around $64 monthly if you amortize it over a year. An average EC2 instance that's competitive with a low end Antsle costs around $52/month, in perpetuity, and that includes a public static IP address. If you need to pay $100 over your current Internet costs to get a public static IP address, then you're looking at nearly double what EC2 costs, every month in perpetuity, before you get any hardware.

At the lowest performance level, if you always have to pay for your Internet connection, and that connection costs around $100 more per month for more speed, the removal of a quota, and a static IP address (this is just a guess), then the cumulative cost of an EC2 instance will never cross over the cost of running an Antsle.
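
To make that concrete, here's a minimal sketch of the arithmetic using the round numbers above (a $760 baseline Antsle, roughly $100/month extra for a hosting-friendly connection, and roughly $52/month for a comparable EC2 instance); the figures are the illustrative ones from this post, not quotes:

    # Rough cumulative-cost comparison: baseline Antsle at home vs. a small EC2 instance.
    # All figures are the approximate round numbers used above, not real quotes.
    ANTSLE_UPFRONT = 760       # baseline Antsle hardware, one-time
    INTERNET_UPCHARGE = 100    # extra per month for a connection suitable for hosting
    EC2_MONTHLY = 52           # comparable low-end EC2 instance, public IP included

    for months in (12, 24, 36):
        antsle_total = ANTSLE_UPFRONT + INTERNET_UPCHARGE * months
        ec2_total = EC2_MONTHLY * months
        print(f"{months:>2} months: Antsle ~${antsle_total:,}   EC2 ~${ec2_total:,}")

    # Because the monthly connectivity upcharge alone ($100) exceeds the entire EC2
    # bill ($52), the EC2 line never catches the Antsle line; the gap only widens.

Run it and the gap at three years is already over two thousand dollars, before you count electricity or hardware replacement.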

    That's only one situation, but it's worth noting that Antsle materials rely extremely heavily on advertising it as something on which you can run public-facing services. So, we have to presume that you're buying up to the highest possible Internet connection speed and purchasing one or more static IPs to use for service. The cost of DNS services and any software you might be licensing on the device will be the same for both EC2 and an Antsle.

    It's worth mentioning here that regular tower servers are a lot less expensive than Antsle for configurations that are much faster. Dell has a configuration of the PowerEdge T30 with a much faster quad-core Xeon E3 processor, 8GB of RAM and a 1TB hard disk for $550 (down from the $760 base price on the lowest end Antsle.) The T30 is easier to maintain and upgrade, so putting in a second hard disk and configuring mirroring should be easy and inexpensive. If you step up one level in Dell's product line, you can configure the machine from the factory with solid state disks and multiple drives.

    Part of the advantage of buying such a system from Dell or building your own (compared to the Antsle, specifically) is that you can balance the system to your needs. For example, the PowerEdge T130 lets you choose a dual-core Celeron CPU (which will still be faster than the Atom in the Antsle), 8GB of RAM and a mirrored pair of 1TB hard disks for around $740.

    The trade-offs here are that the machine won't be completely silent and you are trading solid state storage for two slower hard disks, which should be fine for passive server workloads, even with several virtual machines. I personally tend to believe that silence is a little overrated anyway, but if you need silence, the best thing about servers is that you can generally locate them away from your primary work or living area. A small server based on a Xeon E3 processor will run fine in a closet or a big enough cabinet, even.

Buying a regular server has the same trade-offs against Amazon EC2, which is that you are using your own electricity and network connectivity to run the server. The reason I insist on including connectivity in this is that Antsle's web site shows a lot of usage of it to host client web sites; I'm presuming the idea is that a web designer or programmer would use it to host a client site. Web sites, even interactive ones, don't take an awful lot to host these days in terms of hardware, so you can either run one web server that listens on multiple names or run a VM for each site, depending.

    The other thing that needs to be considered is that residential Internet connections usually aren't good enough to host web sites. I have a connection with 20 megabits upload speed. It works fine for my personal needs, but it would be bad for paying customers. Service providers running cloud or even machine colocation services put a lot of effort into making sure that machines under their control stay running. An Antsle is still susceptible to power interruptions in your home or office, as well as network connectivity issues.

    Another option that the hobbyist market (especially, say, /r/homelab, which is where I saw the device) usually pursues is used hardware. Even a machine such as a years-old business desktop that has been cleaned up a little bit and given some more RAM and storage will be much faster and much less expensive than the Antsle. The money could be used on purchasing more capacity or something like backup hardware or tools.

For developers with desktops, I can't help but imagine a better option is to buy a second storage device and some more RAM, and run virtualization software like VMware Workstation or VirtualBox. If you don't need an entirely new computer (or a graphics card), RAM is… less expensive than an entirely new server, and an SSD or hard disk is inexpensive. The Antsle is pictured next to an iMac on their web site. If you are using an iMac that is anywhere near replacement, an Antsle good enough to justify not running your development environment directly on the iMac costs enough that the money might as well go toward an iMac Pro instead.

One of the disadvantages of the Antsle is the funding model for the hardware, which exists for any on-premises product: Antsle is offered as something you buy, straight-up. It would be an easier sell if you could lease an Antsle or buy it on a payment plan outside of whatever financing you can arrange personally, especially if it were coupled with a VPN service to defray the cost of having a public IP address (which some devices, such as the HPE EC200, are).

    It looks like Antsle isn't offering this kind of funding, leasing, or payment plans directly, however. Dell does, but there's no indication that Dell's funding is any different from buying the machine on a credit card you already have, or using funds from something like a short term personal or business loan. Depending on the financing used, there may be different terms and the actual amount paid will differ based on taxes and interest.

Antsle is interesting, and I understand that it's not for me, so it may look better to people with different needs, but I struggle to think of the use case it is for. Perhaps the only situation I can think of is if you are a developer who needs a low-power server on which to do development, you literally only own laptops, and you live in an extremely small studio apartment or use an extremely small office space.

February 12
iMac Pro

Last year, Apple held a small event with a mere handful of big names in the Mac blogosphere. It was at this event that we got the first whiff of the iMac Pro, which was merely mentioned as a "great" new iMac model in the pipeline. The event was to address the aging Mac Pro 6,1, which Apple discovered was designed in such a way that the newest graphics processors can't be cooled by the innovative (but oddly specific) cooling system.

The Mac Pro 6,1 was widely panned as a bad successor to the Mac Pro 5,1 basically from the moment it was announced. The 5,1 was a large tower computer with room for two processors, four hard disk drives, two optical drives, and four PCI Express slots. Years after its introduction, the Mac Pro 5,1 was still lauded as the best system for creative professionals. (Note: the linked article appears to be written by a business specializing in selling customized Mac Pros and aftermarket options specifically for the Mac Pro 5,1.)

I won't belabor the point too much, but it's worth noting that for all intents and purposes the 6,1 is a fine computer. The scalability problem allegedly keeping Apple from updating it is that the dual-GPU design was done under the presumption that multiple midrange GPUs would become the standard for creative professionals, mainly because the AMD GPUs available in 2012 weren't particularly great at running general purpose workloads at the same time they were operating as graphics cards. As such, Apple built a system meant for one CPU but two GPUs. It created a system approximately 5% faster than the old one, at a great increase in cost, and at the expense of internal flexibility. To use a car analogy, it was like giving someone a sports car when what they asked for was a pickup truck or a four-door sedan.

    Apple's solution is two-fold. The first step in the plan is the iMac Pro. The second is an upcoming "modular" computer, about which we know nothing, other than it will allegedly be a good successor to the 5,1.

The iMac Pro, as the first part of Apple's plan to replace the Mac Pro 6,1, builds on the idea of the 6,1 by integrating a Skylake Xeon W processor and a Radeon Vega graphics chip (new parts at the start of their lifecycles!) into the body of a 27-inch iMac and adding a bunch of Thunderbolt 3 controllers. The idea is that part of the problem with the Mac Pro 6,1 was finding good displays for it, due to its Mini DisplayPort/Thunderbolt 2 outputs. The new design should be much more scalable as Apple works to keep the system updated in the future.

    The announcement of the iMac Pro's availability was met with a lot of interesting commentary, almost exclusively about the price. The baseline configuration is $5,000 for a system with an 8-core CPU, 32 gigs of RAM, 1TB of SSD storage, an 8-gig Radeon Vega 56 graphics chip, and what's likely the best 27-inch display you can buy. The top configuration is around $13,999.

Immediately, most of the commentary was (as it always is) about how you can build an equivalent PC workstation for a lot less money. This is technically untrue; the argument almost always relies on a user not needing most of what's in the configuration, and on the fact that Apple often intentionally chooses only the high-end parts out of a range. For example, there are desktop workstations from PC OEMs with Xeon W processors, but there are options to configure those systems with quad-core CPUs and 16 gigabytes of RAM, which Apple does not allow. The other thing to consider as part of the iMac Pro's cost is its 27-inch, P3-capable 5K display, which isn't available inexpensively anywhere else. The nearest configurations from Dell can match the iMac Pro's CPU, memory, and storage configurations, and then use a low-end GPU and a low-end display. This is, of course, the beauty of the generic PC market – not everybody needs Apple's 5K display or a Radeon Vega GPU, and building a different system allows you to put that money into different things.

The interesting comparison I heard from a lot of people – and it was surprising to see this from some Mac power users – was with a "workstation" built using high-end desktop enthusiast parts. This is where we start to get into some interesting discussions about what makes something a "workstation."

Traditionally, something was a workstation if the vendor called it such. Computer vendors have traditionally been careful with the w-word, because it means claiming you believe your product is a step or two above the competition. For most of the 1990s, this meant that while the Mac and PC markets contained professional computers, they didn't contain workstations, because for every Power Macintosh 9600 there was a much better equipped SGI Octane, Sun Ultra, or Compaq AlphaStation with 64-bit processors, a better operating system, faster networking, properly implemented and much faster SCSI subsystems, and so on.

    In the 2000s, workstation-class hardware started becoming less expensive and the money needed to rev up all the different platforms wasn't quite so available as it had previously been. Intel had been building chips suitable for low end technical workstations and Windows NT was up to the task of being a workstation OS. AMD also produced the 64-bit extensions to x86 and licensed them to Intel. Apple had just acquired a workstation vendor, NeXT, and started coupling the better OS with some of its newest hardware.

    In the early 2000s, Apple started advertising its hardware and software to classical RISC UNIX workstation users who were looking for a modern platform, especially as some of the RISC UNIX vendors failed to commit to building new workstations based around their old UNIX operating systems, either on the old processors or on other hardware.

I've had a lot of discussions with people on the finer details of this point. It is argued that because Apple was putting its foot almost exactly up to the "workstation" line, with ads such as "Sends Other UNIX boxes to /dev/null" and efforts such as the Xserve G4, it's safe to say all Macs running OS X at that point were workstations because of Mac OS X. I tend not to agree with this point, because Apple had built UNIX systems before, none of which it tried to classify as UNIX workstations in the sense that, for example, a Macintosh IIfx could compete against a SPARCstation. By 1990, when the IIfx came out, Sun had moved on to the SPARC architecture, and in raw compute numbers a SPARCstation was a couple of times faster than the IIfx.

The change in the mid-to-late 2000s is that "workstation" went from meaning a machine that existed in a different performance class from normal desktop computers to meaning something that is qualified to run specific applications, or is qualified by particular hardware features (for example, error correcting memory), regardless of what performance class the machine is in.

I have been talking about The Plateau for a few years now; I should probably start a page or category for it. The relevance here is that, leading into the 2010s, new workstation products started to use Intel Xeon processors that aligned very closely with mainstream desktop platforms. This ended up representing a new low end in workstations, positioned for "entry level" work and often certified for tasks such as viewing CAD files, 2D CAD, software development, general purpose UNIX chores, and so on.

This is speculation, but my theory is that Apple, upon starting to use the term "workstation," decided its workstations should be, in the traditional sense, a step or two above normal Macs. There was a period of crossover in 2009-2010 when quad-core iMacs were starting to exist, at a time when the baseline Mac Pro was a quad-core configuration for just $200 over the top end iMac. Then the two lines diverged again, with the Mac Pro clearly demanding a premium for its performance and reliability improvements over the iMac.

    Over the course of a few generations of iMacs getting new processors and graphics and with the Mac Pro 5,1 then 6,1 standing still, the gap narrowed again, but Apple has reopened it with the iMac Pro. Keeping that gap open requires that Apple keeps the machine updated, but that should be easier to do with the new thermal configuration.

    The internal upgradeability of the iMac Pro has been discussed a lot as well. The memory can be replaced, but only at a service center. Tear-downs reveal that the solid-state storage is on modules, although the modules are unique, and that the processor can be replaced.

Storage flexibility was a prominent criticism of the Mac Pro 6,1, particularly as it pertained to the machine's video editing credentials (important, given that Apple pretty much designed it explicitly for 4K video editing). I don't know the state of ultra high end video editing today, but up to that point it was common for the highest end video editing systems to feature not multiple internal hard disks but Fibre Channel or SAS cards connecting to disk arrays. Those fit in well with the large video tape recorders, other import/export equipment, sound boards, interconnect boards, and program monitors that often end up in the highest end systems.

    I think criticisms regarding upgradeability are a little misplaced. Primarily, Mac users remember (or hear about) a time in the distant past when the machines could be upgraded with new parts and processor generations well beyond what is considered reasonable or necessary today. These upgrades often didn't deliver anywhere near the potential performance of a whole new computer, but I understand the appeal as a way to get a little more life out of a machine in an era when the bare minimum baseline price for a fast new machine is over $3000.

It would be nice if upgrades for capability and capacity, like RAM or storage, were easier to do, but they appear to be doable, and externally housed storage is in a better position than it has ever been, so it's not particularly worrying to see, say, a machine where the primary way to add storage is via Thunderbolt 3.

January 15
Adobe’s Profitability and Licensing

    Meta: I wrote this a few months ago, but I'm finally posting it now. I've since had an opportunity to poke at some of the things I wrote here, some of which became true, and will have more thoughts on that later.

    Outside of news at the high end of the enthusiast desktop microprocessor market, tech hasn't done anything that I specifically want to write a lot about for a while. I'm getting back into the swing of things after not posting for a while, due to NaNoWriMo, and I figured an easy thing to talk about would be Adobe. This is partly adapted from a tweet (and replies) I posted a few months ago.

    For context, Adobe released some earnings information a few months ago.

    Perhaps the most relevant bit is this:

    Adobe achieved record quarterly revenue of $1.77 billion in its second quarter of fiscal year 2017.

    Adobe says a few more things here, but what it boils down to is essentially… they killed off most of their perpetually licensed software products and replaced them wholesale with services that include desktop software.

Even though this ostensibly happened a few years ago, profits are up-up-up. It makes sense: you could still buy Creative Suite 6 by calling in up until around a year ago, but that has since stopped.

    Adobe Creative Cloud is kind of a near-and-dear subject to me in a weird way. Long ago, I was a photography student, so I came to the university with my Mac and very shortly after it was available, I purchased a copy of Adobe Creative Suite CS3 Design Standard, and a copy of Dreamweaver CS3 on the side. I did this because I wanted to be able to use Photoshop and Bridge, with Illustrator and InDesign, and I wanted to build a web site, but I didn't particularly care for Flash.

    I used that copy until I stopped having a Mac and then I handed it off to another person who needed it and had gotten a Mac. They used it until it stopped working well with the current versions of Mac OS X.

Part of why this was possible was educational pricing. The other part is that, due to the perpetually licensed nature of software at the time, I could keep using these tools for several years without paying for them over again. I had what I needed, I didn't have things I didn't need or couldn't use, and because I wasn't swapping files with other users of these programs, issues surrounding format compatibility weren't important.

    When Adobe announced Creative Cloud, it was initially an alternative licensing scheme. It looked like it would be a great deal for people who frequently needed to buy newer copies of the software to keep up with feature needs or with collaboration, and for people who needed all or most of the different functions.

Adobe didn't (and honestly, still doesn't) do a whole lot to make Creative Cloud really compelling as a cloud service. The pricing is compelling, though, and it's interesting and likely effective as a way to encourage people to stay up to date. But as Creative Suite 6 is no longer available, and as Adobe makes it more and more difficult to purchase licenses for Acrobat and Lightroom separately from the rest of Creative Cloud (or outside of Document Cloud and the Creative Cloud Photography plan), it becomes less and less compelling for people who don't need all of these products to stick with Adobe at all. Lightroom 6 appears to still exist, but you need to dig deep to find it. The same is true of standalone versions of Acrobat Professional.

I would be more amenable to the idea of, say, using an educational discount if Adobe's terms didn't dictate that students can now only use the "special" educational rate for one year. After that, the rate goes up to $360/year, which is still less than the $600 yearly retail cost, but less easy to swallow than the old cost, which might have been $300 once in an educational career, and certainly less appealing than the $240/year special they advertise heavily to students.
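
For a sense of scale, here's a quick back-of-the-envelope comparison over a four-year degree, using the approximate figures above (a roughly $300 one-time educational Creative Suite purchase versus $240 for the first year of Creative Cloud and $360/year afterward); these are the round numbers from this post, not current Adobe pricing:

    # Back-of-the-envelope: old perpetual educational license vs. educational Creative Cloud.
    # Figures are the approximate round numbers from this post, not current Adobe pricing.
    YEARS = 4                  # a typical degree program
    OLD_PERPETUAL = 300        # one-time educational Creative Suite purchase
    CC_FIRST_YEAR = 240        # advertised first-year student rate
    CC_LATER_YEARS = 360       # student rate after the first year

    perpetual_total = OLD_PERPETUAL
    subscription_total = CC_FIRST_YEAR + CC_LATER_YEARS * (YEARS - 1)

    print(f"Perpetual license over {YEARS} years:  ${perpetual_total}")
    print(f"Creative Cloud over {YEARS} years:     ${subscription_total}")
    # -> $300 versus $1320; the subscription then keeps billing $360/year after graduation.

And unlike the old perpetual copy, the subscription stops working the moment you stop paying.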

    If I may, I'd like to take a detour into Office licensing. Microsoft releases new versions of its Office software suite every "few" years. Historically, they change the file format a bit less than once a decade, and they allow any "supported" (basically, today minus ten years) version of Office to connect to hosted services such as OneDrive, SharePoint Online, and OneDrive for Business.

    Buying a full, perpetual copy of Office is still relatively easy to do and has always been an "expensive but not that expensive" proposition. $150 for the home version and $400 for the professional version is the current pricing. Buying Office 365 access ranges from $80 for four years to $150 yearly, depending on the customer and the desired functionality.

    For the subscription service, Microsoft does the right thing by integrating Office with services people are likely already to have (Skype, Hotmail/Live/Outlook) and adding benefits such as a terabyte of storage space. On the "regular" home version of Office 365, the software can be installed on up to five Macs or PCs (compare with "One" for Adobe, and with no good way to license it twice for convenience) and up to five sub-accounts can be created with their own e-mail, Skype, and OneDrive service. In a family situation, these services can be used to reduce the cost of licensing Office for everybody's computer.

    Adobe, on the other hand, provides 20 gigabytes of storage space which appears to include the space needed to host your portfolio web site. I can't imagine in what context 20 gigabytes of space is a particularly useful online storage bucket for tools like Photoshop and Premiere Pro. It's not unimaginable for a single large Photoshop project to exceed 20 gigs. The photo upload folder from my cell phone is just shy of 15 gigs. A single particularly active day shooting with my eight-year-old digital SLR camera can yield 15+ gigs of data. My newer camera has a 128-gig memory card in it, and can shoot video.

    It seems preposterous that 20 gigabytes of space would have any use at all for work in most of Adobe's applications. And then Microsoft goes and gives you a whole terabyte of space to use for your resume and your taxes – documents that use mere kilobytes of space.

    There are rumors (although, nothing solid from Adobe) that a next-generation Lightroom component or perhaps a stand-alone Lightroom service will offer more storage space, to do something Apple and Google (and Microsoft, to a certain extent) have been trying to do for a few years now: put The Cloud at the center of a photo workflow.

    I think this could be the pivot that would make paying monthly for an Adobe cloud service centered around having enough room to store a full photo and video library make sense. In total, I probably have a bit under 400 gigabytes of photos and videos I have shot over the years. A perpetual problem of mine has been managing the library with multiple computers, and quickly accessing images when I am not at my "main" or "photo" computer. (Mostly because good software to manage photo libraries costs a lot to license for several computers.)

The rumor that has been floated is that Adobe is working on a browser-based version of Lightroom that uses a library stored primarily in a new terabyte of online space, and presumably syncs to mobile and desktop versions on client computers. With the correct synchronization and setup (as I mentioned on Twitter, I have wanted to use my iPad to view and organize photos since I've had one, which was literally the day they were available at retail), it should be possible, perhaps even easy, to pull a bunch of new images into an iPad and have them magically go to your main photo computer and your online account, where they are backed up.

    iPads are fast enough to view and work with photos, and their beautiful, accurate, high-resolution displays should make them particularly good at it. Apple has also been selling USB connectivity hardware to make image transfer from cameras and storage hardware for long enough that it feels like a shame this doesn't already exist.

    This one change on its own doesn't justify the fact that Creative Cloud is $600 yearly. It seems problematic that there's no good way for somebody who wants to run a simple pure HTML web site to do so with Dreamweaver, or that Flash should be perpetually locked behind one of these plans.

I don't know if there's a good way to solve this problem without doing things like watermarking the output of the programs (which can dampen even hobbyist or educational use) or returning to a system where subscription tiers are based on which applications are available, with the highest end applications, such as those used for audio and video processing, being what gets you into the most expensive bundles.

I understand why Adobe did it: I understand the product stack simplicity and the fact that this allows users to grow into parts of the suite they may previously have considered unavailable. (For example: using Premiere Pro to build videos to put on your web site, adding a video track to a podcast you edited in Audition, or building a title sequence for your video in After Effects.)

    As somebody who uses other tools for some of these tasks, and doesn't have time to grow out of some of the more basic tools, the best option really does appear to just, as I said in the last tweet of the thread, go elsewhere.

Not every Adobe product has a viable competitor, but realistically, most of them do. If my focus is print design with InDesign, I can go look at Microsoft Publisher or QuarkXPress (whose financials might be interesting to look at these days).

    In the early days as Adobe started to push Creative Cloud and announced that CS6 would not be updated and would not be replaced with a CS6.5 or CS7, many opined that it would be their undoing. It appears clear now that Adobe has no trouble maintaining profitability with every product they sell being a subscription. I would further argue it has always been clear subscription-based licensing is best for software vendors, and that they have a lot of incentive to move in that direction. The primary limitation has always been connectivity, which by the late 2010s had been resolved.

    It's possible that in the overall computing market, the good of this will outweigh the bad, as people who can't bear Adobe's licensing model move to competitors that once looked like they would languish to their death (I'm looking at Quark with this one), and as new competitors (Affinity, Pixelmator, Pinegrow) appear to attempt to undercut high end graphics software.

As such, I don't think Adobe is looking to "fix" the "problem" that their software is largely inaccessible. I don't even believe they believe it's a problem. There's also the question of what it means to be accessible, in this sense. Just because you could go to CompUSA and buy a copy of Flash MX or InDesign CS off a shelf, were those products any more "accessible," given their prices and the complexities of licensing them, especially with issues such as upgrade pricing and cross-platform changes?

This kind of issue is part of what motivates open source software developers, which is good. However, most open source software makes a poor or barely viable replacement for these kinds of tools. Often, while you're learning generic concepts in an education program, you're also learning the mechanics of the specific tools that are common in an industry. Using a variant of Blender meant for video editing may be worthwhile for a home movie, and it's probably even a good tool in general, but it's unlikely to be (or even be like) what is used in professional contexts, and it could teach "bad" workflow habits or techniques.

Likewise, the interface on GIMP is intentionally very different from that of Photoshop. Compare with LibreOffice, which aggressively styles itself after Microsoft Office 2003, for a variety of reasons. Scribus, similarly, avoids making itself look like InDesign or XPress, the industry standard print layout tools.

Whether this is because these developers think they can design better software than the professionals who both do this work and have been working with this software for years, or whether it's done out of malice, I couldn't tell you. That also sidesteps the fact that most of this software simply isn't set up to deal with certain issues. Scribus is, at best, a competitor to Microsoft Publisher, and GIMP is, at best, a competitor to Paint Shop Pro, essentially.

At the end of the day, though, a budget is a budget, and it's up to computer users to decide what theirs is and find solutions that work within it. Adobe has never been about building budget conscious software, and Creative Cloud is nothing if not an affirmation of Adobe's belief that they are at the top of their markets.

January 08
Quick Computer Security Thoughts

    Meta: It's been a while since I've posted! I have some other things in the works, but this was a quick gimme based on some recent events, and it has been good to sit down and write it.

Last week, the early announcement of the Meltdown and Spectre attacks surprised many, although not by an awful lot. Word had started to get around on Twitter the night before, and I managed to get in an early word.

As a quick re-cap: Meltdown and Spectre are two newly discovered and disclosed security vulnerabilities. Meltdown applies primarily to recent Intel chips and allows unprivileged processes to read memory they should not be able to access. Spectre is a bit harder to pin down, but the important thing to note is that it abuses out-of-order (speculative) execution and branch prediction on modern CPUs to leak the contents of memory locations that should be off-limits.

The early buzz was entirely about Meltdown. Before all the information was out, people were reporting that patching it might cause a 30% slowdown. Of course, in real testing, most user-facing workloads see a much less severe penalty, although many server tasks will have trouble.

    The looming specter of the whole situation, though, is Spectre. Spectre is more difficult to exploit, but could have much more severe impacts, and will be much more difficult to protect against in software.

Permanently fixing the vulnerabilities related to Spectre will require entirely new CPU silicon. A few proof-of-concept Spectre attacks are already being patched against, but there are so many possibilities that it's likely server hardware (and anything running a desktop-class OS as well) can't be considered "safe" until it's simply replaced by a new generation of hardware.

The next generation of computer hardware, currently in the design phase, likely doesn't fix this. It's possible that the generation after that also won't fix these problems in hardware. We're looking at systems that are two or more generations away to fix this, and that will take between two and five years, depending on what's needed to mitigate these risks.

    Once a CPU is designed and verified, there will be the matter of producing enough of it to meet demand. Cloud service providers and enterprise datacenters will be doing their best to get at these chips first. In a situation where hypothetically every server system doing work in cloud or service provider setting needs to be replaced, it could be years before silicon is available for consumer and desktop applications.

    I think by the time this is published, the moment of true widespread panic will really be over. A huge rush on server-class systems may or may not appear in a few generations, and even if it does, it probably won't look too abnormal for a processor generation launch. The chipmakers (Intel and AMD) may do a server-first release, but they may not bother, opting as they generally do to build consumer silicon first.

    My prediction is that the hype will pass and that systems departments will, as ever, increase monitoring and attempt to decrease exposure. In theory, this is what a good information systems department is already doing, so it's going to be a matter of doing the same thing, but more, instead of doing a different thing.

    On the desktop side of things, I think that following guides for security such as Decent Security is as important as ever.

    I can't stress this point enough. It has been fun to watch the vintage computing circles on Twitter fall over themselves to come up with the most creative ways to avoid these hardware vulnerabilities, and it has been in good fun, but a necessary side-effect of digging out a twenty-year-old machine to avoid a modern hardware vulnerability is that software vulnerabilities are re-introduced. This is especially true on anything running closed-source software, or for which modern releases of open source software are no longer available.

    I love pulling out my old computers, but everyone should keep their modern patched Internet-faring computers ready to go. Part of regular security operations in the computer industry will involve vendors releasing more patches for Spectre-class vulnerabilities as they're discovered. Users of modern operating systems that get patches from a vendor will benefit from those patches as they become available.

    Meanwhile, most of my desktop systems have already been patched against Meltdown. My Mac downloaded and installed the patch before the new year, which might say something else about the state of information sharing among security professionals, but that's for later, if ever. My Windows systems also have the patch, and as I predicted on Twitter, I haven't noticed any difference. I have yet to go play a game and record my screen and a camera at once, but I'm not particularly worried about that working well; it was fine last time I tried it.

    Personally, I'm not making any big plans to rush out and replace any hardware I know or suspect to be affected by Spectre, mostly because there's nothing better right now. My server and my desktop are each pushing six or seven years at this point and depending on what my budget looks like in a few years, I think I will be able to make a relatively easy case for replacing either of them. My laptop is still less than a year old. Its replacement isn't even on the thought roadmap yet.

    I think Spectre has larger possible implications for the model of centralized services and the cloud, but that will have to wait for another time. New vulnerabilities are always exciting, but the takeaway should still be to run a modern OS, patch it regularly, and keep an eye out for possible trouble and monitor the machine's behavior.

    July 31
    How Software Changes

    One of the issues I think about from time to time is this idea that when computers get faster, the software that runs on them automatically benefits.

    This is true to a certain extent. You can usually do things like overclock a CPU or put in faster storage or more or better memory and get benefits such as applications that run faster. However, it's not always the case.

    It's important to acknowledge that there are different ways computers speed up, and different ways software applications use the speed that's given to them.

    This stuff tends to be more directly important when you consider non-trivial computing tasks. It doesn't really matter how much faster a system gets for things like Word and Outlook, because you can only launch Word so instantly, and word processing has largely been a task where computers have been waiting for the user to do their part for the past twenty to thirty years.

    Video and audio, on the other hand, as well as other higher end applications, present interesting scaling challenges. The way we do these things has shifted a lot, even in just the past ten years, let alone the past twenty or thirty years.

    Video is the example I happen to have most experience with, because it was around ten years ago that I was trying to get into some video production stuff myself. I had been doing some light editing and had been experimenting with digitizing VHS/S-VHS tapes at the end of high school, and I had been given use of a DV camcorder for much of my senior year of high school. I assisted with the on-campus TV station when I got to the university, and did so for a few years.

    At the time, dual-core CPUs were new, but dual processing as a concept really wasn't; it had just gotten more compact. Quad processing was appearing in workstations, but that's not such a huge jump.

    In the late '90s and early 2000s, working with video on a computer tended to be very tape based – either in the sense that you were importing or reading video from a tape directly, or in the sense that your digital file format was itself somewhat tape-like, or based on tape operations. In the early 2000s in particular, some of the early tapeless digital formats for professional video capture came onboard, such as Sony XDCAM and Panasonic P2, and although you could copy the files, the mechanisms were entirely still there to capture this video from the camera or from a deck to the computer.

    This goes deep. Deep enough that at the time, people and tutorial books and even software vendors cared deeply about ensuring that people using the computer to edit video were, in essence, as polite to the machine as possible. What this generally meant was using file formats that were friendly to video editing. Ideally something like Motion JPEG, which was just what it sounds like: video compressed using a collection of frames that are each JPEG compressed.

    Most compressed video is compressed in some way or another using two kinds of frames: key frames, which contain the entire picture, and intermediary frames, which describe the differences between the last key frame and the current frame. The idea here is that you can make video a lot smaller, especially video where lots of things stay the same, if you either place key frames cleverly or space them relatively evenly throughout the stream. Motion JPEG and likely other similar formats work by making every single frame a key frame. In the '90s, when you could get a Silicon Graphics system to capture video, Motion JPEG was a common way to do it, because more aggressive compression was too difficult to do in any kind of real time.
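
    Here's a toy sketch of that difference in C (this is not any real codec; the frame size and group-of-pictures length are made up): every GOP_SIZE-th frame stores the whole picture and the rest store only per-byte differences, so decoding a frame in the middle of a group means walking back to the nearest key frame first, which is exactly the extra work an editor has to do when you scrub or cut on long-GOP footage.

        #include <stdio.h>
        #include <string.h>

        #define FRAME_BYTES 8      /* a toy "frame" */
        #define GOP_SIZE    4      /* key frame every fourth frame */

        typedef struct {
            int is_key;
            unsigned char data[FRAME_BYTES];   /* full picture, or per-byte difference */
        } EncodedFrame;

        void encode(unsigned char frames[][FRAME_BYTES], int n, EncodedFrame *out) {
            for (int i = 0; i < n; i++) {
                out[i].is_key = (i % GOP_SIZE == 0);
                for (int b = 0; b < FRAME_BYTES; b++)
                    out[i].data[b] = out[i].is_key
                        ? frames[i][b]
                        : (unsigned char)(frames[i][b] - frames[i - 1][b]);
            }
        }

        /* To decode frame k, start at the key frame at or before k and apply deltas. */
        void decode(const EncodedFrame *enc, int k, unsigned char *result) {
            int start = k - (k % GOP_SIZE);
            memcpy(result, enc[start].data, FRAME_BYTES);
            for (int i = start + 1; i <= k; i++)
                for (int b = 0; b < FRAME_BYTES; b++)
                    result[b] = (unsigned char)(result[b] + enc[i].data[b]);
        }

        int main(void) {
            unsigned char src[8][FRAME_BYTES] = {{0}};
            for (int i = 0; i < 8; i++)
                src[i][0] = (unsigned char)(i * 10);  /* one byte "moves" frame to frame */
            EncodedFrame enc[8];
            unsigned char frame[FRAME_BYTES];
            encode(src, 8, enc);
            decode(enc, 6, frame);          /* needs key frame 4 plus deltas 5 and 6 */
            printf("frame 6, byte 0: %d (expected 60)\n", frame[0]);
            return 0;
        }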

    Key-framed video works well for distribution, but the thing that most video editing guides were sure to mention at the time was that this type of video was more difficult to edit, for a variety of reasons. The guides weren't wrong, per se, but in retrospect, reasonably robust video editing software handled this problem with relative ease, and software that couldn't usually got around it by simply refusing to import video in the wrong format. The point here is that this suggestion has almost entirely gone out the window. It's just accepted that (in general) modern software can deal with this. Similar to the key-frame issue are the other issues surrounding which particular compression codec gets used for the video you're editing. In the mid 2000s, h.264 video existed, but it was a big no-no to edit on that type of video, again, out of politeness to the machine that would need to do it.

    And again, this changed. We now cut all day long on formats like AVCHD and MP4 files that use h.264 and even h.265 compression, because cameras and phones of all kinds easily compress good looking video into these formats, and because computer horsepower is so cheap that it doesn't matter if your video editing software needs to compensate for where the key frames are. (It never really did, especially in Final Cut Pro, which always edited on references to video files, rather than by placing absolute cuts in source files.)
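
    As a rough sketch of what "editing on references" means (the file names and structure below are invented for illustration, and this is not Final Cut's actual project format), the project is essentially a list of pointers into source files, and the sources themselves are never rewritten. Re-linking from a proxy to the full-resolution original only means swapping the path in each reference; the in and out points stay the same.

        #include <stdio.h>

        typedef struct {
            const char *path;      /* source media file on disk, untouched by editing */
            double in_seconds;     /* where this clip starts within the source */
            double out_seconds;    /* where it ends within the source */
        } ClipReference;

        int main(void) {
            /* a tiny timeline: three cuts, two of them from the same source file */
            ClipReference timeline[] = {
                { "interview_a.mov",  12.00,  48.50 },
                { "broll_street.mov",  3.00,  11.00 },
                { "interview_a.mov",  95.25, 130.00 },
            };
            int clips = (int)(sizeof timeline / sizeof timeline[0]);

            double t = 0.0;   /* position on the program timeline */
            for (int i = 0; i < clips; i++) {
                printf("%7.2fs  %s [%g..%g]\n", t, timeline[i].path,
                       timeline[i].in_seconds, timeline[i].out_seconds);
                t += timeline[i].out_seconds - timeline[i].in_seconds;
            }
            return 0;
        }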

    At some point along the way, we basically got tired of the limitation that video must come into a computer in exactly real time, and the whole process got much better for it. Part of what enabled this is that cameras changed. It was probably going to happen as flash media (such as Compact Flash and SD cards) got better. Along the way, we had interesting ideas such as disc-based cameras, and there were hard disk fanny packs for DV and HDV cameras, but flash media won out in the end.

    The other part of what enabled this is that computers got faster. As you move from one, to two, to four, to now eighteen cores in a "high end but not unreasonable" desktop computer, your ability to get the computer to do more things for you or just deal with adverse conditions improves. Of course, at any speed, video editing in particular is helped by having more memory available and faster, bigger storage. Graphics processors are also a big help. In the early 2000s, a system's GPU generally only impacted how quickly it could render things given to it by the CPU, but that started to change, especially as the 2010s rolled through, with the graphics processor itself taking a more active role in what was displayed. Today, GPUs do all manner of video editing tasks in lieu of or as an assistant or coprocessor to the CPU. It's to the point where in reality, having a good graphics processor is likely more important to most video editors than having a high core count desktop CPU.

    So, computer hardware in that respect has changed a lot in the past ten years. If I go pick up a copy of Final Cut Pro 6 from around 2006 or 2007, it will in fact run on a brand new Mac, but it will get exactly none of the benefit from all of the advances made over the past ten years. Final Cut Pro 6 stumbles all over itself, badly, if you aren't so kind as to give it the correct files, and, as a 32-bit application, it makes very poor use of large amounts of memory.

    You can edit a video with Final Cut Pro 6 on a modern computer using modern files. It lets you do things, but it's a deeply unsatisfying experience as you watch only one or two of four threads get used, even on an older Mac mini, as you see only two or three of eight gigabytes of memory get used, and as you need to do things like render previews of the video with every simple edit.

    Some of these things are just par for the course in terms of how video used to work. Even when editing with the DV file format, which is one that Final Cut likes, you had to take frequent render breaks or spend a lot of time just guessing at what a final product would be like. Final Cut did nothing in the background, because the hardware of the day just couldn't support it. (It's worth noting here that in overall system performance, a pretty mid-range business desktop from around 2011-2012 is probably around four times as fast as a high-end workstation from five or six years earlier, in 2005.)

    The reason I keep picking on Final Cut Pro here is that in 2011, Apple introduced a new version of the software, "Final Cut Pro X" to replace Final Cut Pro 7. I think it would probably be fair to describe Final Cut Pro X as a complete re-imagining of what editing video should be like in the modern era of computing.

    Remember: in 2011, an iMac had a quad-core CPU, a powerful discrete graphics processor, could run 16 gigabytes of RAM, and could accommodate SSDs that were much faster than anything a PowerPC G5-based computer or even the first generation of Intel-based Macintoshes from 2005 and 2006 could use. Such fast storage barely even existed in 2005, let alone on the Mac. Some high-end Macs could use that much RAM, but it was rarely installed, because 64-bit software didn't start existing on the Mac until after the move to Intel CPUs.

    Apple's movement on this issue was faster and more sudden than almost anybody else's in the industry, mainly because that's just how Apple does things. A lot of the movement in the program was based around moving away from a strictly filmstrip-based perspective and letting the software do more guessing on your behalf. A lot of it was based around the idea that as a content creator, you shouldn't really have to care about the technical details of the content. Final Cut Pro X does a lot of things in the background, and the way it works as a program also encourages working more quickly. In general, for example, editors spend less time waiting for renders to happen, because Final Cut renders video at low speed in the background while you're working. In addition, the timeline just supports playback of more types of files, so a render doesn't need to occur to play video recorded by a webcam, iPhone, or anything of that nature.

    On the downside, there was a point at which, if you were a Final Cut Pro editor, you had to buy a new piece of software and then spend time re-learning and re-perfecting techniques you used to create a certain look. On the upside, your workflow could become a lot faster and you weren't anywhere near as constrained to particular file formats or to doing pre-processing before you could start editing. (Another common meme from the old days: rendering proxy files to edit on, mainly to make up for particularly bad storage in laptops and low-end computers, only to re-attach the originals and re-render the output at the end of a long project.)

    Some applications, such as Adobe Premiere Pro, have kept up with these trends. Others, such as Avid Media Composer, appear to have doubled down on what I consider to be some particularly bad habits.

    This leads me to pivot into audio a little bit, because the real context for some of these discussions has been that somebody on Ye Olde Computer Forum has asked for some wisdom (in different words) on buying some hardware to form a Pro Tools HD setup, circa about 2003-2005.

    The details here, which we actually discovered a few pages into the thread, are that this person is using a version of Pro Tools Native on a laptop that is a few years old. Inexplicably, even though the program isn't really using a lot of horsepower or RAM, the program stops working abruptly and gives an otherwise unidentified CPU error. The reader is left to presume that this probably means that the program hit some part in this person's audio file that is so complicated, it takes 100% CPU power to process, tops out, and then can't continue because a frame was dropped. A vexatious problem in any real-time media application, and a huge reason why in the days of yore with video, you might capture low-resolution proxy files, render or convert everything to an easy to use format, edit on that, and then let the computer chug for a day or so re-capturing all your video and meticulously re-assembling your project in high resolution.

    This person wants two things:

    1. To use Pro Tools HD processing cards (in this case, PCI-X cards) to build out a system (which would be a desktop from around 2003, compared to the existing laptop from around 2012), hopefully avoiding the mysterious CPU error
    2. To use an effect in Pro Tools HD 8 (the software version they would get) that didn't become available in the "Native" (software-only) version of Pro Tools until recently

    I think ultimately the person wanted us to say that yes, in fact, it's reasonable to spend $600 (the price for the cards they wanted to get) on hardware that would let them build out an audio system circa 2003. To do this, they would need one of a very slim selection of PCI-X-equipped computers from the time, enough stuff to make that computer go and keep it maintained (to be fair: they probably have that, it is a vintage computer forum), and then they would need to relearn an older version of this software, only to perhaps find that because their laptop is massively more powerful than, say, a Pentium 4 workstation or an early revision of the Power Macintosh G5, their production needs may still be unmet.

    Here's where Avid sort of looks bad, I think. There is almost certainly no good reason for the "native" version of this software to be locked down the way it is. This person has a laptop with a quad-core CPU, a good GPU, a lot of RAM, and potentially a lot of very fast storage. They have a desktop with 8-12 CPU cores and capacity for at least 32 gigabytes of RAM, plus PCI Express expansion slots for faster storage. Other audio applications would almost certainly allow for about as many capture channels as that hardware can handle. Avid is using single-purpose DSP cards to dictate licensing and feature levels on its software products.

    To me, doing it this way pretty much ignores that there are now better ways to do this work. A modern CPU can almost certainly outstrip these DSP cards, whose only real function appears to be compression and enabling certain effects, even though neither of those things should require specialized hardware any more.

    It's reasonable that a single or dual-CPU workstation from 2003 doesn't have the horsepower to do this. But, something from a decade later? The thing people in the discussion said was that "audio hasn't changed" – with the implication being that this was a "solved problem" and that as with word processing, there aren't improvements that can be made in process or efficiency.

    Of course, I don't consider audio to be a "solved problem," especially when, here in 2017, with 8-core CPUs at the mainstream desktop level and 12+ core CPUs at the enthusiast level, before we even get into actual workstation and server CPUs, you still need thousands of dollars' worth of DSPs from the '90s to compress audio, capture it to disk, and play it back.

    For better or worse, I think the solution here needs to be that communities using these tools need to look at Avid and ask why this is the case. An iPhone or an iPad can easily record multi-track audio. An interface for doing so is of course necessary, but outboard processing hardware really shouldn't be.

    If I were an Avid customer today, I think I would either be inciting a mutiny or I would simply stop being an Avid customer.

    The conclusion to this sub-point is of course that this person has either already bought the Pro Tools 8 kit, or they're going to anyway, because a bunch of pointed leading questions about what guides their needs and what might make the best use of hardware they already have is not worth the time and effort.

    I get that people doing creative things with their computer just want to sit down and do it, but this stuff is usually worth discussing, because if a change in tools can lead to better or faster results, then the justification not to make it seems thin. In the case of Pro Tools, I think something needs to be asked about what really causes the CPU error. I know that with my video editing work (when it appears, which I will admit is infrequently), newer software will immediately lead to a speed-up in my work, just because it will be able to better take advantage of the modern computers I have, and it will work more easily with the modern file formats I use.

    I don't have particularly concrete examples, but advances in computer hardware generally need to be matched by advances in computer software to achieve the most meaningful productivity increases for non-trivial tasks. Computers may not feel faster, although a side-effect of much of what has improved in the past five to seven years is that they should in fact feel faster, especially as operating systems get fine-tuned and as application software gets updated to take advantage of new hardware configurations.

    This isn't exactly a continuous climb, though. If an application is, say, 64-bit aware, it doesn't really need to become more 64-bit every time new computers that support more RAM come out. If an application is multi-threaded, it doesn't necessarily need to become more multi-threaded each time a new generation of CPUs comes out. What does need to change is the software's assumptions: when RAM ceilings jump enough that your application struggles to get a performance benefit it should be getting, when an application isn't designed to use more than a certain number of threads, or when CPUs are so heavily threaded that there's room for the application to do more work at a time, the software needs another round of updates.
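
    For the thread case specifically, here's a minimal sketch of what that looks like (the worker function and the hard-coded count are invented; the only real API used beyond pthreads is POSIX's sysconf): a pool sized for the quad-core machines of its day simply leaves most of an eighteen-core machine idle, while a pool sized from what the OS reports scales along with the hardware.

        #include <pthread.h>
        #include <stdio.h>
        #include <unistd.h>

        #define HARDCODED_WORKERS 4    /* frozen back when quad-core was "high end" */

        static void *do_chunk(void *arg) {
            (void)arg;                 /* stand-in for a slice of real work */
            return NULL;
        }

        int main(void) {
            long available = sysconf(_SC_NPROCESSORS_ONLN);
            printf("hard-coded workers: %d, logical CPUs available: %ld\n",
                   HARDCODED_WORKERS, available);

            /* An app built around the fixed pool below leaves the rest of an
               eighteen-core machine idle; sizing the pool from 'available'
               would scale with the hardware instead. */
            pthread_t pool[HARDCODED_WORKERS];
            for (int i = 0; i < HARDCODED_WORKERS; i++)
                pthread_create(&pool[i], NULL, do_chunk, NULL);
            for (int i = 0; i < HARDCODED_WORKERS; i++)
                pthread_join(pool[i], NULL);
            return 0;
        }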

    There will be a point at which something crosses over from being difficult to being easy, perhaps even trivial. I would say that video is there, but it's really not – advances in video capture technology and the fact that people will always want or need things like effects, multi-camera operations, different output formats, and so on will likely mean that performance enhancements in computers will be meaningful to video editors for the foreseeable future.

    However, something like photo management, and even things like print design and web design, which in the 1990s were reserved for the highest end of computers, are now things any random laptop, even a $300 one, can easily do. Illustration and low-end CAD tasks don't need relatively powerful computers any more. Other things like programming, virtualization, and even still-image editing really depend greatly on the technique and a few other factors.

    As always, I think it's an exciting time to be interested in computers. I would be lying if I said I thought there was ever truly an unexciting time to have something to do with tech. One of the things I'd like to do over the next few weeks is get my hands on some free/trial software – Final Cut Pro and Avid Media Composer First at least, and perhaps Adobe Creative Cloud (for Premiere Pro) – and do some video editing testing. I want to see what I can push a few different systems I have to do and what the experience ends up being like.

    I of course have my copy of Final Cut Pro 6 and have worked around some of its quirks. I have Avid Media Composer First installed on the system, and so that is probably what I'll use and test first.

    One other thought that I haven't mentioned about Final Cut Pro 6: it and the other members of the Final Cut Studio 2 bundle that I have suffer severely from changes in Mac OS X. I can't in good conscience really recommend that anybody try to use this as a day-to-day editing tool on a modern computer. This is perhaps one of the most salient points I can make. It's already badly non-performant on something like a Mac mini from 2011. It will run and technically work on something like a much newer iMac, but each time I go to use it to build out a project, I spend more time fiddling with the software itself, dealing with, say, breakages in the way side-utilities such as Compressor work, than I do editing the video. I end up producing sub-standard or incorrectly compressed files and hoping YouTube fixes things on their end, problems I wouldn't have if I used something "more modern" – whether that's iMovie, Final Cut Pro X, or some other tool.

    It's to the point where, even if I had something to say or something to record and I thought I had done a good job with the recording, I would dread doing the post-production, so I never will. That's partly a separate issue: video has always been one of the more complicated formats to work with, with so many different parts that ultimately matter to a good production. Some of that can't be helped; some of it is about making the parts you can control easy enough that the overall process isn't too overwhelming.

    June 26
    Data Storage Dilemma

    With the most interesting major tech announcements out of the way for a while, and with a new laptop easing my mind on the question of "how will I write when the Surface 3 dies?", I've had time to think about other things. Not that I have, of course.

    Instead, I watched YouTube, and in the back of my mind, I was thinking about the thing that deep down, we all know I really want to do: Save every YouTube video to local storage so I can fall asleep to the sweet, dulcet tones of people recording their progress in Cities: Skylines. And then watch those files again later when I want to see what happened.

    Video, both legitimately downloaded into my iTunes library and from things like podcasts, isn't the only thing that eats disk space on my systems. About ten years ago, when data was much smaller, my solution to this problem would be to burn a new disc every month or so with data I wanted to keep but didn't need on my disk any more. It worked out well because, with the slow Internet connection I had and the relatively slow rate at which I created or otherwise acquired data, there wasn't an awful lot of it; one or two DVDs (or, if I was feeling spendy, one dual-layer DVD) was enough.

    Today, writeable DVDs and Blu-ray discs exist, but preparing them is as inconvenient as it has ever been, and these discs, which are costly in their write-once form and even more costly if you try to reuse them for backups, have been massively outpaced by the falling costs and increasing capacities of things like external hard disks and USB flash drives. Optical media is also often unreliable in the long term: most of the CDs and DVDs I burned in the early 2000s have degraded to the point that it's questionable whether I'll get the data off them. Any other form of relatively capacious removable storage is very expensive and very enterprise-focused. The next best thing, DAT320, was more affordable than LTO, although less robust and also now discontinued.

    It strikes me that it would be great to have a modern removable data storage format that's more robust than hard disks, bigger than flash drives and blu-ray discs, and ideally fast.

    The problem, of course, is that there are always compromises. You can't, say, build a storage format that's capacious, fast, and cheap; if you could, we'd all have LTO tapes at home. I think the trade-offs are going to be in capacity and in speed: it won't be as fast as a real external hard disk or as big as an LTO tape. In trade for being pretty cheap and being treated like external media, I'm imagining it would either be a new form of optical or magneto-optical media, or some kind of flexible magnetic storage in the style of Zip drives or perhaps Bernoulli cartridges. Honestly, I wouldn't even mind if it was massively cost-reduced DAT/DDS media with at least double or quadruple the capacity. (Ideally, the native storage capacity would be 500 gigs or so.)

    I think most people don't need an awful lot of that. In fact, most home computer backups aren't any bigger than single disks you can buy today, which top out around 8TB for external disks and 10TB for internals, before you even get to things like Drobos and home-focused NAS devices.

    But the thing I want to do is create multiple media sets for the backup of a big server system, sets I can then take away for safekeeping. The other thing I want to do is store data in a semi-archival state. External hard disks are big enough and easy to duplicate, but they're also easy to kill, and a certain amount of inactivity will cause them to die.

    The other problem with external hard disks for archival is that they cost a lot up front and they're often much larger than the amount of data I want to "archive" at any given moment, and I don't necessarily want to pull my disks out to add data and then later duplicate it.

    I think the common thing to do these days is to put data that's being hoarded in fake unlimited cloud storage locations. I suspect that if it were easier to use something better suited to the task, people wouldn't abuse those tools. The keys are making the device fast enough (it has to be faster than uploading files over an average Internet connection, though I think it can be slower than a proper hard disk) and making it inexpensive enough that you can load up on cartridges. It would also be better if, as part of an "archiving" solution, there was a way to catalog the contents of the media, although if they're not tapes you should also be able to just browse the devices.
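
    The cataloging part doesn't need to be exotic. Assuming the cartridge mounts as an ordinary filesystem (the /mnt/cartridge path and the index file name below are made up for illustration), a minimal sketch is just a directory walk that records each file's size and path into an index kept on the main machine, so the catalog can be searched without loading the cartridge:

        #include <dirent.h>
        #include <stdio.h>
        #include <string.h>
        #include <sys/stat.h>

        /* Walk a directory tree and write one "size<TAB>path" line per file. */
        static void catalog(const char *dir, FILE *index) {
            DIR *d = opendir(dir);
            if (!d) return;
            struct dirent *entry;
            while ((entry = readdir(d)) != NULL) {
                if (!strcmp(entry->d_name, ".") || !strcmp(entry->d_name, ".."))
                    continue;
                char path[4096];
                snprintf(path, sizeof path, "%s/%s", dir, entry->d_name);
                struct stat st;
                if (stat(path, &st) != 0)
                    continue;
                if (S_ISDIR(st.st_mode))
                    catalog(path, index);        /* recurse into subdirectories */
                else
                    fprintf(index, "%lld\t%s\n", (long long)st.st_size, path);
            }
            closedir(d);
        }

        int main(void) {
            FILE *index = fopen("cartridge-0001.catalog", "w");  /* hypothetical name */
            if (!index) return 1;
            catalog("/mnt/cartridge", index);                    /* hypothetical mount */
            fclose(index);
            return 0;
        }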

    I think that at most the mechanism should cost a few hundred dollars, no more than $500 if possible, and the media should be pretty reasonably priced. If, in trading off the convenience of flash disks for the lower cost of these cartridges, you can get the media down to around $20 a pop, it would start to make a lot of sense for low-end archiving and backup applications.

    In an ideal configuration, the mechanisms cost a little less and the media is perhaps a little slower, but it holds a lot more in trade for the speed. I think the "data hoarding" crowd would be fine with something cheap that worked slowly. A configuration where multiple drives could run in tandem, or where there was a cheap loader or stacker, would of course be beneficial, but things like that add complexity and cost.

    The trouble is that there are a lot of ifs here. At the top end, for people who are building large multi-terabyte disk arrays to alleviate storage problems, you can almost certainly just get a tape drive and eat the cost. At the low end, RDX costs a lot relative to hard disks, but it's a good, durable backup option for systems with less than 4TB of storage, and it's a removable system that works with spanning archives. At the very small end, cloud storage systems with a quota of a terabyte or so are often better as a primary or only storage solution, but mistrust of cloud technology often means that some people end up with their data stored locally (not a bad thing) with no or insufficient backups.

    This technology isn't really marketable, and probably can't exist: it conveniently combines the best aspects of LTO and RDX while being cheaper than either of them. I think there is "a market" for this kind of thing, but I don't really believe it's terribly big. In truth, I'm sure it's quite small.

    Part of the problem is the people who are into the data hoarder thing. The people who do data hoarding as a hobby often either have the wherewithal to run regular tapes, or are totally opposed to the idea and might not be interested in such a device.

