July 31
How Software Changes

One of the ideas I think about from time to time is the assumption that when computers get faster, the software that runs on them straightforwardly and automatically benefits.

This is true to a certain extent. You can usually do things like overclock a CPU, put in faster storage, or add more or better memory, and get benefits such as applications that run faster. However, that's not always the case.

It's important to acknowledge that there are different ways computers speed up, and different ways software applications use the speed that's given to them.

This stuff tends to be more directly important when you consider non-trivial computing tasks. It doesn't really matter how much faster a system gets for things like Word and Outlook, because you can only launch Word so instantly, and word processing has largely been a task where computers have been waiting for the user to do their part for the past twenty to thirty years.

Video and audio, on the other hand, as well as other higher end applications, present interesting scaling challenges. The way we do these things has shifted a lot, even in just the past ten years, let alone the past twenty or thirty years.

Video is the example I happen to have most experience with, because it was around ten years ago that I was trying to get into some video production stuff myself. I had been doing some light editing and had been experimenting with digitizing VHS/S-VHS tapes at the end of high school, and I had been given use of a DV camcorder for much of my senior year of high school. I assisted with the on-campus TV station when I got to the university, and did so for a few years.

At the time, dual core CPUs were new, but dual processing as a concept really wasn't; it had just gotten more compact. Quad processing was appearing in workstations, but that's not such a huge jump.

In the late '90s and early 2000s, working with video on a computer tended to be very tape based – either in the sense that you were importing or reading video from a tape directly, or in the sense that your digital file format was itself somewhat tape-like, or based on tape operations. In the early 2000s in particular, some of the early tapeless digital formats for professional video capture came on board, such as Sony XDCAM and Panasonic P2, and although you could copy the files, the workflows were still built entirely around capturing this video from the camera or from a deck to the computer.

This goes deep. Deep enough that at the time, people and tutorial books and even software vendors cared deeply about ensuring that people using the computer to edit video were, in essence, as polite to the machine as possible. What this generally meant was using file formats that were friendly to video editing. Ideally something like Motion JPEG, which was just what it sounds like: video compressed using a collection of frames that are each JPEG compressed.

Most compressed video is compressed in some way or another using two kinds of frames: key frames, which contain the entire picture, and intermediate frames, which describe only what has changed since the previous frame. The idea here is that you can make video a lot smaller, especially video where lots of things stay the same, if you either cleverly place key frames or space them relatively evenly throughout relatively static footage. Motion JPEG and other similar formats work by making every single frame a key frame. In the '90s, when you could get a Silicon Graphics system to capture video, Motion JPEG was a common way to do it, because the compression was otherwise too difficult to do in any kind of real time.
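
To make that trade-off concrete, here's a rough back-of-the-envelope sketch in Python. The frame sizes are invented and this isn't modeled on any real codec; it just illustrates why an all-key-frame format like Motion JPEG is so much bigger, and why landing on an arbitrary frame in sparsely key-framed video means decoding forward from the previous key frame.

```python
# Rough illustration only: why sparse key frames shrink files but make
# mid-stream access more work. Sizes are invented averages, not a real codec.

KEYFRAME_KB = 120   # a full picture (hypothetical size)
DELTA_KB = 12       # a frame that stores only changes (hypothetical size)

def stream_size_kb(frames: int, keyframe_interval: int) -> int:
    """Total size if every Nth frame is a key frame and the rest are deltas."""
    keyframes = -(-frames // keyframe_interval)  # ceiling division
    return keyframes * KEYFRAME_KB + (frames - keyframes) * DELTA_KB

def frames_to_decode(target_frame: int, keyframe_interval: int) -> int:
    """To show (or cut on) an arbitrary frame, decode from the last key frame."""
    return target_frame % keyframe_interval + 1

frames = 30 * 60  # one minute at 30 fps
print("Every frame a key frame (Motion JPEG-style):",
      stream_size_kb(frames, 1) // 1024, "MB")
print("Key frame every 30 frames:                  ",
      stream_size_kb(frames, 30) // 1024, "MB")
print("Frames decoded to land on frame 1000 (interval 30):",
      frames_to_decode(1000, 30))
```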

Video compressed with sparse key frames works well for distribution, but the thing most video editing guides were sure to mention at the time was that this type of video was more difficult to edit, for a variety of reasons. The guides weren't wrong, per se, but in retrospect, reasonably robust video editing software handled the problem with relative ease, and software that couldn't usually got around it by simply refusing to import video in the wrong format. The point here is that this advice has almost entirely gone out the window; it's just accepted that (in general) modern software can deal with it. Similar to the key-frame issue are the issues surrounding which particular compression codec gets used for the video you're editing. In the mid 2000s, h.264 video existed, but it was a big no-no to edit on that type of video – again, out of politeness to the machine that would need to do it.

And again, this changed. We now cut all day long on formats like AVCHD and MP4 files that use h.264 and even h.265 compression, because cameras and phones of all kinds easily compress good looking video into these formats, and because computer horsepower is so cheap that it doesn't matter if your video editing software needs to compensate for where the key frames are. (It never really did, especially in Final Cut Pro, which always edited on references to video files, rather than by placing absolute cuts in source files.)

At some point along the way, we basically got tired of the limitation that video must come into a computer in exactly real time, and the whole process got much better for it. Part of what enabled this is that cameras changed. It was probably going to happen as flash media (such as Compact Flash and SD cards) got better. Along the way, we had interesting ideas such as disc-based cameras, and there were hard disk fanny packs for DV and HDV cameras, but flash media won out in the end.

The other part of what enabled this is that computers got faster. As you move from one, to two, to four, to now eighteen cores in a "high end but not unreasonable" desktop computer, your ability to get the computer to do more things for you or just deal with adverse conditions improves. Of course, at any speed, video editing in particular is helped by having more memory available and faster, bigger storage. Graphics processors are also a big help. In the early 2000s, a system's GPU generally only impacted how quickly it could render things given to it by the CPU, but that started to change, especially as the 2010s rolled through, with the graphics processor itself taking a more active role in what was displayed. Today, GPUs do all manner of video editing tasks in lieu of or as an assistant or coprocessor to the CPU. It's to the point where in reality, having a good graphics processor is likely more important to most video editors than having a high core count desktop CPU.

So, computer hardware in that respect has changed a lot in the past ten years. If I go pick up a copy of Final Cut Pro 6 from around 2006 or 2007, it will in fact run on a brand new Mac, but it will get exactly none of the benefit from all of the advances made over the past ten years. Final Cut Pro 6 stumbles all over itself, badly, if you aren't so kind as to give it the correct files, and, as a 32-bit application, it makes very poor use of large amounts of memory.

You can edit a video with Final Cut Pro 6 on a modern computer using modern files. It lets you do things, but it's a deeply unsatisfying experience as you watch only one or two of four threads get used, even on an older Mac mini, and only two or three of eight gigabytes of memory, and as you need to do things like render previews of the video with every simple edit.

Some of these things are just par for the course in terms of how video used to work. Even when editing with the DV file format, which is one that Final Cut likes, you had to take frequent render breaks or spend a lot of time just guessing at what a final product would be like. Final Cut did nothing in the background, because the hardware of the day just couldn't support it. (It's worth noting here that in overall system performance, a pretty mid-range business desktop from around 2011-2012 is probably around four times as fast as a high end workstation from five or six years earlier in 2005.)

The reason I keep picking on Final Cut Pro here is that in 2011, Apple introduced a new version of the software, "Final Cut Pro X" to replace Final Cut Pro 7. I think it would probably be fair to describe Final Cut Pro X as a complete re-imagining of what editing video should be like in the modern era of computing.

Remember: in 2011, an iMac had a quad-core CPU, a powerful discrete graphics processor, could run 16 gigabytes of RAM, and could accommodate SSDs far faster than anything a PowerPC G5-based computer or even a first-generation Intel-based Macintosh from 2005 or 2006 could use. Such fast storage barely even existed in 2005, let alone on the Mac. Some high end Macs could take that much RAM, but it was rarely installed, because 64-bit software didn't really exist on the Mac until after the move to Intel CPUs.

Apple's movement on this issue was faster and more sudden than almost anybody else's in the industry, mainly because that's just how Apple does things. A lot of the movement in the program was based around moving away from a strictly filmstrip-based perspective and letting the software do more guessing on your behalf. A lot of it was based around the idea that as a content creator, you shouldn't really have to care about the technical details of the content. Final Cut Pro X does a lot of things in the background, and the way it works as a program also encourages working more quickly. In general, for example, editors spend less time waiting for renders to happen, because Final Cut renders video at low speed in the background while you're working. In addition, the timeline just supports playback of more types of files, so a render doesn't need to occur to play video recorded by a webcam, iPhone, or anything of that nature.

On the down-side, there was a point at which, if you were a Final Cut Pro editor, you had to buy a new piece of software and then spend time re-learning and re-perfecting techniques you used to create a certain look. On the up-side, your workflow could become a lot faster and you weren't anywhere near as constrained to particular file formats or to doing pre-processing before you could start editing. (Another common meme from the old days: rendering proxy files to edit on, mainly to make up for particularly bad storage in laptops and low end computers, only to re-attach the originals and re-render the output at the end of a long project.)

Some applications, such as Adobe Premiere Pro, have kept up with these trends. Others, such as Avid Media Composer, appear to have doubled down on what I consider to be some particularly bad habits.

This leads me to pivot into audio a little bit, because the real context for some of these discussions has been that somebody on Ye Olde Computer Forum has asked for some wisdom (in different words) on buying some hardware to form a Pro Tools HD setup, circa about 2003-2005.

The details here, which we actually discovered a few pages into the thread, are that this person is using a version of Pro Tools Native on a laptop that is a few years old. Inexplicably, even though the program isn't really using a lot of horsepower or RAM, it stops working abruptly and gives an otherwise unidentified CPU error. The reader is left to presume that this probably means the program hit some part of this person's project that is so complicated it takes 100% of the CPU to process, tops out, and then can't continue because a frame (or buffer) was dropped. It's a vexatious problem in any real-time media application, and a huge reason why, in the days of yore with video, you might capture low-resolution proxy files, render or convert everything to an easy-to-use format, edit on that, and then let the computer chug for a day or so re-capturing all your video and meticulously re-assembling your project in high resolution.
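
My mental model for that kind of error is the hard deadline every real-time audio engine lives under: each buffer of samples has to be processed before the hardware needs the next one, so an average CPU load that looks modest can still blow through the deadline on one unusually complicated block. Here's a tiny illustrative sketch with invented numbers – it has nothing to do with how Pro Tools actually measures or reports anything.

```python
# Illustrative only: the real-time constraint on audio processing.
# Each block of samples must be finished before the next one arrives.

SAMPLE_RATE = 48_000   # samples per second
BUFFER_SIZE = 256      # samples per processing block

budget_ms = BUFFER_SIZE / SAMPLE_RATE * 1000  # about 5.3 ms per block

# Hypothetical per-block processing times (ms) as a mix gets busier.
processing_times_ms = [2.1, 3.0, 4.8, 5.1, 6.4]  # the last block is too slow

for i, work_ms in enumerate(processing_times_ms):
    status = "ok" if work_ms <= budget_ms else "DROPOUT (missed deadline)"
    print(f"block {i}: {work_ms:.1f} ms of work vs {budget_ms:.1f} ms budget -> {status}")
```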

This person wants two things:

  1. To use the Pro Tools HD processing cards (in this case, PCI-X cards) to build out a system (which would be a desktop from around 2003, compared to the existing laptop from around 2012), hopefully avoiding the mysterious CPU error
  2. To get an effect that Pro Tools HD 8 (the software version they would get) includes, which didn't become available in the "Native" (software-only) version of Pro Tools until recently.

I think ultimately the person wanted us to say that yes, in fact, it's reasonable to spend $600 (the price for the cards they wanted to get) on some hardware that would enable them to build out an audio system circa 2003. To do this, they would need one of a very slim selection of PCI-X-equipped computers from the time, enough stuff to make that computer go and maintain it (to be fair: they probably have that, it is a vintage computer forum), and then they would need to relearn an older version of this software, only to perhaps find that because their laptop is massively more powerful than, say, a Pentium 4 workstation or an early revision of the Power Macintosh G5, their production needs may still be unmet.

Here's where Avid sort of looks bad, I think. There is almost certainly no good reason for the "native" version of this software to be locked down the way it is. This person has a laptop with a quad-core CPU, a good GPU, a lot of RAM and potentially a lot of very fast storage. They have a desktop with 8-12 CPU cores and capacity for at least 32 gigabytes of RAM, plus PCI Express expansion slots for faster storage. Other audio applications would almost certainly allow about as many capture channels as that hardware can handle. Avid is using modern single-purpose DSP cards to dictate licensing and feature levels on its software products.

To me, doing it this way pretty much ignores that there are now better ways to do this work. A modern CPU can almost certainly outstrip these DSP cards, whose only real functions appear to be compression and enabling certain effects, even though neither of those things should require specialized hardware any more.

It's reasonable that a single or dual-CPU workstation from 2003 doesn't have the horsepower to do this. But, something from a decade later? The thing people in the discussion said was that "audio hasn't changed" – with the implication being that this was a "solved problem" and that as with word processing, there aren't improvements that can be made in process or efficiency.

Of course, I don't consider audio to be a "solved problem," especially when, here in 2017 – with 8-core CPUs at the mainstream desktop level and 12+ core CPUs at the enthusiast level, before we even get into actual workstation and server CPUs – you still need thousands of dollars' worth of DSPs from the '90s to do compression on audio to capture it to disk and then play it back.

For better or worse, I think the solution here needs to be that communities using these tools need to look at Avid and ask why this is the case. An iPhone or an iPad can easily record multi-track audio. An interface for doing so is of course necessary, but outboard processing hardware really shouldn't be.

If I were an Avid customer today, I think I would either be inciting a mutiny or I would simply stop being an Avid customer.

The conclusion to this sub-point is, of course, that this person has either already bought the Pro Tools 8 kit or is going to anyway, because to them, a bunch of pointed leading questions about what actually drives their needs and what might make the best use of the hardware they already have is not worth the time and effort.

I get that people doing creative things with their computer just want to sit down and do it, but this stuff is usually worth discussing, because if a change in tools can lead to better or faster results, then the justification not to make the change seems thin. In the case of Pro Tools, I think something needs to be asked about what really causes the CPU error. I know that with my own video editing work (when it comes up, which I'll admit is infrequently), newer software immediately leads to a speed-up, just because it can better take advantage of the modern computers I have and works more easily with the modern file formats I use.

I don't have particularly concrete examples, but advances in computer hardware generally need to be matched by advances in computer software to achieve the most meaningful productivity increases for non-trivial tasks. Computers may not feel faster, although a side effect of much of what has improved over the past five to seven years is that they should in fact feel faster, especially as operating systems get fine tuned and as application software gets updated to take advantage of new hardware configurations.

This isn't exactly a continuous climb, though. If an application is, say, 64-bit aware, it doesn't really need to become more 64-bit every time new computers that support more RAM come out. If an application is multi-threaded, it doesn't necessarily need to become more multi-threaded each time a new generation of CPUs comes out. What does need to change, of course, is the software in cases where RAM ceilings jump enough that an application struggles to get a performance benefit when it should, where an application isn't designed to use more than a certain number of threads, or where CPUs are so heavily threaded that there's room for the application to do more work at a time.
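
A schematic sketch of that last mismatch, with a made-up application limit: no matter how many cores a new CPU offers, a program hard-coded to a small worker pool never asks for them.

```python
# Schematic only: an application written against a fixed worker count
# doesn't automatically take advantage of additional cores.
import os
from concurrent.futures import ThreadPoolExecutor

APP_MAX_WORKERS = 4          # a limit baked into the hypothetical application
available = os.cpu_count()   # whatever the machine actually offers

def render_chunk(n: int) -> int:
    # stand-in for a unit of work (a frame, a tile, an effect pass)
    return sum(i * i for i in range(50_000))

with ThreadPoolExecutor(max_workers=APP_MAX_WORKERS) as pool:
    list(pool.map(render_chunk, range(64)))

print(f"machine offers {available} logical CPUs; "
      f"the app only ever asks for {APP_MAX_WORKERS} workers")
```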

There will be a point at which something will cross over from being difficult to being easy, perhaps even trivial. I would say that video is there, but it's really not – advances in video capture technology, and the fact that people will always want or need things like effects, multi-camera operations, and different output formats, will likely mean that performance enhancements in computers remain meaningful to video editors for the foreseeable future.

However, something like photo management, and even things like print design and web design – tasks that in the 1990s were reserved for the highest end of computers – are now things any random laptop, even a $300 one, can easily do. Illustration and low end CAD tasks don't need relatively powerful computers any more. Other things, like programming, virtualization, and even still image editing, really depend on technique and a few other factors.

As always, I think it's an exciting time to be interested in computers. I would be lying if I said I thought there was ever truly an unexciting time to have something to do with tech. One of the things I'd like to do over the next few weeks is get my hands on some free/trial software – Final Cut Pro and Avid Media Composer First at least, perhaps Adobe Creative Cloud (for Premiere Pro) – and do some video editing testing. I want to see what I can push a few different systems I have to do and what the experience ends up being like.

I of course have my copy of Final Cut Pro 6 and have worked around some of its quirks. I have Avid Media Composer First installed on the system, and so that is probably what I'll use and test first.

One other thought that I haven't mentioned about Final Cut Pro 6: it and the other members of the Final Cut Studio 2 bundle that I have suffer severely from changes in Mac OS X. I can't in good conscience recommend that anybody try to use this as a day-to-day editing tool on a modern computer. This is perhaps one of the most salient points I can make. It's already badly non-performant on something like a Mac mini from 2011. It will run and technically work on something like a much newer iMac, but each time I go to use it to build out a project, I spend more time fiddling with the software itself – dealing with, say, breakages in the way side-utilities such as Compressor work – than I do editing the video. I end up producing sub-standard or incorrectly compressed files and hoping YouTube fixes things on their end, problems I wouldn't have if I used something "more modern" – whether that's iMovie, Final Cut Pro X, or some other tool.

It's to the point where, even if I had something to say or record and I thought I had done a good job recording it, I dread the post-production enough that I never do it. That's partly a separate issue – video has always been one of the more complicated formats to work with, with so many different parts that ultimately matter to a good production – and partly about making the pieces you can control easy enough that the overall process isn't too overwhelming.

June 26
Data Storage Dilemma

With the most interesting major tech announcements out of the way for a while, and with a new laptop easing my mind on the question of "how will I write when the Surface 3 dies?", I've had time to think about other things. Not that I have, of course.

Instead, I watched YouTube, and in the back of my mind, I was thinking about the thing that deep down, we all know I really want to do: Save every YouTube video to local storage so I can fall asleep to the sweet, dulcet tones of people recording their progress in Cities: Skylines. And then watch those files again later when I want to see what happened.

Video, both legitimately downloaded into my iTunes library and from things like podcasts, isn't the only thing that eats disk space on my systems. About ten years ago, when data was much smaller, my solution to this problem was to burn a new disc every month or so with data I wanted to keep but didn't need on my disk any more. It worked out well because, with the slow Internet connection I had and the relatively slow rate at which I created or otherwise acquired data, there wasn't an awful lot of it – one or two DVDs (or, if I was feeling spendy, one dual-layer DVD) was enough.

Today, writeable DVDs and Blu-ray discs exist, but preparing them is as inconvenient as it has ever been, and these discs, which are costly in their write-once form and even more costly if you try to reuse them for backups, have been massively outpaced by the falling costs and increasing capacities of things like external hard disks and USB flash disks. Optical media is often unreliable, long term. Most of the CDs and DVDs I burned in the early 2000s have degraded to the point that it's questionable whether I'll get the data off them. Any other form of relatively capacious external storage device is very expensive and very enterprise focused. The next best thing, DAT320, was more affordable than LTO, although less robust and also now discontinued.

It strikes me that it would be great to have a modern removable data storage format that's more robust than hard disks, bigger than flash drives and blu-ray discs, and ideally fast.

The problem, of course, is that there are always compromises. You can't, say, build a storage format that's capacious, fast, and cheap. If you could, we'd all have LTO tapes at home. I think the trade-offs are going to be in capacity and in speed. It won't be as fast as real external hard disks or as big as LTO tapes. In trade for being pretty cheap and being treated like external media, I imagine it would either be a new form of optical or magneto-optical media, or some kind of flexible magnetic storage in the style of Zip drives or perhaps Bernoulli. Honestly, I wouldn't even mind if it were massively cost-reduced DAT/DDS media with at least double or quadruple the capacity. (Ideally, the native storage capacity would be 500 gigs or so.)

I think that most people don't need an awful lot of that. In fact, most home computer backups aren't any bigger than the single disks you can buy today – 8TB for external disks and 10TB for internals – to say nothing of things like Drobos and home-focused NAS devices.

But the thing I want to do is create multiple media sets for the backup of a big server system, which I can then take away for safe keeping. The other thing I want to do is store data in a semi-archival state. External hard disks are big enough and easy to duplicate, but they're also easy to kill, and a certain amount of inactivity will cause them to die.

The other problem with external hard disks for archival is that they cost a lot up front and they're often much larger than the amount of data I want to "archive" at any given moment, and I don't necessarily want to pull my disks out to add data and then later duplicate it.

I think the common thing to do these days is to put data that's being hoarded in fake unlimited cloud storage locations. I suspect that if it were easier to use something better suited to the task, people wouldn't abuse those tools. The keys to this are making the device fast enough – faster than uploading files over an average Internet connection, though it can be slower than a proper hard disk, I think – and making it inexpensive enough that you can load up on cartridges. It would also be better if, as part of an "archiving" solution, there was a way to catalog the contents of the media, although if they're not tapes you should also be able to just browse the devices.
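
To put rough numbers on that speed window, here's some quick arithmetic with assumed throughputs – the figures are guesses for illustration, not measurements of any real product or connection.

```python
# Back-of-the-envelope transfer times for a 500 GB archive set.
# All throughput figures below are assumptions, not measurements.

ARCHIVE_GB = 500

scenarios_mb_per_s = {
    "home Internet upload (~10 Mbps)": 10 / 8,       # megabits -> megabytes
    "hypothetical cartridge (~80 MB/s)": 80,
    "external USB 3.0 hard disk (~150 MB/s)": 150,
}

for name, mb_per_s in scenarios_mb_per_s.items():
    hours = ARCHIVE_GB * 1000 / mb_per_s / 3600
    print(f"{name}: about {hours:.1f} hours to move {ARCHIVE_GB} GB")
```

Even a fairly slow cartridge beats a typical home upload connection by a factor of about sixty in this scenario, which is the whole point of keeping this kind of archiving local.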

I think that at most the mechanism should cost a few hundred dollars, no more than $500 if possible, and the media should be pretty reasonably priced. If, in trading off the convenience of flash disks for the lower cost of these cartridges, you can get the media down to around $20 a pop, it would start to make a lot of sense for low end archiving and backup applications.

In an ideal configuration, the mechanisms cost a little less and the media is perhaps a little slower, but it holds a lot more in trade for the speed. I think the "data hoarding" crowd would be fine with something cheap that worked slowly. A configuration where multiple drives could run in tandem, or where there was a cheap loader or stacker, would of course be beneficial, but things like that add complexity and cost.

The trouble is that there are a lot of ifs here. At the top end, for people who are building large multi-terabyte disk arrays to alleviate storage problems, you can almost certainly just get a tape drive and eat the cost. At the low end, RDX costs a lot relative to hard disks, but it's a good, durable backup option for systems with less than 4TB of storage, and it's a removable system that works with spanned archives. At the very small end, cloud storage systems with a quota of a terabyte or so are often better as a primary or only storage solution, but mistrust in cloud technology often means that some people end up with their data stored locally (not a bad thing) with no or insufficient backups.

This technology isn't really marketable, and it probably can't exist: it conveniently combines the best aspects of LTO and RDX while being cheaper than either of them. I think there is "a market" for this kind of thing, but I don't really believe it's terribly big. In truth, I'm sure it's quite small.

Part of the problem is the people who do the data hoarder thing. The people who do data hoarding as a hobby often either have the wherewithal to run regular tapes, or are totally opposed to the idea and might not be interested in such a device.

June 19
Surface Laptop Impressions

I finally got the Surface Laptop in, a day after I would have liked to, but I'm not complaining too much, since it seems like the shipping company was ultimately able to accommodate my schedule on the second delivery attempt.

This is totally beautiful hardware. It meets the high standards I've come to put on Surface products, and it's, in general, a great upgrade from the computers I've been using.

As a quick review, I have a Surface 3 in less than stellar condition after I dropped it a few times, a Surface RT that sometimes fills in, and a ThinkPad E520 which I use at home. The idea behind the Surface Laptop was that it should replace the Surface 3 as an on-the-go computer and the E520 as a somewhat powerful computer at home or for longer trips.

I haven't had a chance to let the Surface Laptop really stretch its legs yet, but with what I've done so far, it has been good. The battery life isn't as long as I would want in an ideal world, I'm still a little burned by Panos Panay's comments about USB Type C, and it's not the most powerful computer you can buy, but it doesn't have to be, because it reaches a good compromise of size, build quality, and performance specifications.

I upgraded mine to Windows 10 Professional instantly – not because I don't think I could use 10S, but because I know up front that Pro is better for my needs. I'm not a developer, so I don't need to experience this computer as a flagship development test machine, which is the speculated reason for shipping it with 10S.

The display is great. I can use it reliably at about 125% scaling, which equates to around 1800x1200 of work space (the native resolution is 2256x1504). The wireless networking is much better than on the other systems I've been using.
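
For reference, that workspace figure is just the native resolution divided by the scaling factor; a quick sketch of the arithmetic:

```python
# Effective workspace at 125% display scaling on the Surface Laptop panel.
native_w, native_h = 2256, 1504
scale = 1.25

print(f"{native_w / scale:.0f} x {native_h / scale:.0f} effective workspace")
# roughly 1805 x 1203, i.e. about 1800x1200
```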

I got the i5/8GB/256GB configuration in a gorgeous blue finish for $1299. I knew up front that I wasn't going to be fine with the 4/128 config. It would work, but I wasn't going to spend $999 on a new computer with the same configuration as not only the Surface 3 but also the ThinkPad T400. It's a configuration that works well, but I know that the web and web-based applications are getting heavier.

As of this writing, the software setup is simple. I installed Office 365 and PuTTYTray. I have yet to set up anything else, but those two things cover most of what I do. I initially installed Office 365 via the Store, hoping to add the promotional Office 365 time to my existing subscription, but elected to uninstall that and install it normally from Office 365, mainly because the Store version didn't include the desktop version of OneNote.

This brings me to one of my complaints about Windows 10 in general. In addition to receiving a lot of software I can't or won't use, which counts as (perhaps) paid shovelware, you don't get a choice about certain things that feel like they should be optional installs, such as OneNote. I like OneNote a lot, but on any system for which I've licensed Office 365 or a desktop version of Office, I want it to be the full desktop version, with easy handling of the notebooks stored on my personal server.

I've already been asked once or twice about my thoughts on Surface Laptop repairability. The obvious source has already published their teardown, and the gadget sites have talked about how the Surface Laptop can't be disassembled without destroying it. To add, the RAM (obviously) and now the storage (a change from the previous Surface Pros) are soldered on. The problem with this assessment is that the Surface Pro storage, while technically socketed, was not in any way reasonable to access. Nobody was pulling apart their Surface Pros to replace the built-in storage with bigger or faster m.2 blades, and the practical effect was that it wasn't upgradeable.

The way I'll put it is this: If I find out that 8/256 isn't sufficient, I don't think that an upgrade to 16/512 would solve whatever the problem is. That kind of discovery is almost certainly going to be accompanied by a need for, say, a quad-core CPU or a beefy discrete graphics processor, or significantly improved i/o capabilities. Even the top end Surface Laptop, which retails for $2199, is still a Surface Laptop with all the limitations implied by a 15-watt dual core CPU, 16 gigs of RAM, and a single USB port. I think if my Surface Laptop is going to be insufficient for something, it's going to be so insufficient that no configuration would have done well.

And, the things I'm doing on this machine are simple enough that I'm not worrying about that happening for a few years. This machine is pretty explicitly for writing and communication. I would likely have ordered the i7/16/512 configuration if I thought I'd need to use Slack, though.

Panos Panay says that the Surface Laptop was designed to be a viable student computer for four years. I believe him, and I'm hoping I can perhaps squeeze a fifth year out of it. I don't know if I can reasonably expect an awful lot more than that, but it really depends on physical longevity in this case, I think.

My ThinkPad T400 was so over-bought that to this day, it's fast enough (with its dual-core, 4/128 config) for my research, writing, and communications. The reason I'm not using the T400 is that its display broke, and buying new batteries for it is expensive and impractical. My Surface RT still runs well and gets good battery life, but it was never particularly fast, and Internet Explorer 11 on it is nearly unusably slow for blog research. The Surface 3 technically outperforms the T400, but only nominally, and only for tasks that are well threaded. The Surface 3 also has the problems caused by being dropped – its touchscreen doesn't work, only a part of the display works for pen input, and it gets slow and unstable when running on battery below 30% of its capacity.

The Surface Laptop wasn't over-bought, but it wasn't under-bought either. The state we're in right now – where every Intel computer I have that's less than about ten years old works perfectly fine – is indicative of where the Surface Laptop sits. The computing plateau means that unless something game-changing comes up or web sites grow too much, this machine will be "a useful computer" long after it stops being "a portable computer."

That said, here's to the next several years of portable computing.

June 12
WWDC 2017 Happened!

WWDC 2017 came and went. A lot of interesting stuff. As a heads up, I didn't pay an awful lot of attention to anything about the Apple Watch or the iPhone. I also haven't watched the video yet.

That said, it was an interesting year!

It transpires that most of what Viticci's wish-list video contained was true. iOS has gained drag-and-drop and a file management app, and the highest end iPads can now run three applications at once: two in a side-by-side mode and a third floating on top of the other two. (This is essentially primordial windowing.)

There was new iPad hardware which looks relatively exciting, including the hotly rumored 10.5-inch iPad Pro. There was also a new revision of the 12.9-inch iPad Pro.

On the Mac OS X side of things, High Sierra (10.13) brings APFS to SSD-equipped Macs (nothing for hard disks quite yet) and is being talked about as another quality-of-life release like Snow Leopard. I think what a lot of people don't realize is that the other [Modifier] [Cat] release, Mountain Lion (10.8), was a relatively problematic one. It's pretty widely agreed that 10.8 is the low point in Mac OS X history up to this point. (I personally think that the low point is probably 10.7, with 10.6 and 10.8 each being only slightly better and 10.5 and 10.9 each being that much better again, but that's sort of splitting hairs.)

The Mac hardware was almost all refreshed. Literally the only things left untouched were already the oldest things in the product stack – the Mac Pro and the Mac mini. Even the 2015 MacBook Air had a configuration update (slightly faster CPU, 16GB RAM was removed.)

The real stars of the show, for me, were the acknowledgement of external GPUs via Thunderbolt 3 and the iMac Pro. The new iMacs are a great update and most of the 21.5-inch models at $1299 and above have gained socketed CPUs and memory, but there isn't information on the new $1099 model, which has moved to a slightly better CPU.

The iMac Pro is an interesting machine. Apple has managed to cram an awful lot of CPU and GPU firepower into it, and it'll be really interesting to see if it can perform and stay cool.

The Mac mini and the Mac Pro have yet to happen. I'm thinking we might see the mini silently updated later in the year, and the Mac Pro is guaranteed to happen as soon as "next year," but it could be even longer.

June 05
Thoughts Ahead of WWDC

This year's WWDC is highly anticipated. Based on Things I've Seen™ and some of my own wild guessing, I'm going to make a few predictions for what we'll see this coming week:

  • New iPhone
  • New release of iOS
  • New iPad model(s)
  • Refreshed Mac laptops

In general, everybody will be very disappointed for the following reasons:

  • The new iPhone will be 50% faster than the old one and have better everything, except it will not switch to USB Type C cabling and will still have no headphone port.
  • The new release of iOS will feel uninspired and will introduce bugs that make the iPhone completely unusable for a small-seeming portion of the community that will feel extremely large once they start complaining. It will not sufficiently transform the iPad Pro into a laptop replacement.
  • The new iPad will be 50% faster than its predecessor but the new release of iOS won't make that horsepower meaningfully usable for applications that can use that type of horsepower. A low memory ceiling will make web browsing on it painful compared to 2018's new iPad models, despite it being good enough in every other way.
  • The new Mac laptops will have Kaby Lake processors, but will otherwise be identical to the old models. They might not even update the chipset to a new model. Prices will not meaningfully budge. Apple will discontinue the 2016 lineup but keep the 2015 lineup on sale. There's a non-zero chance the MD101LL/A will be re-introduced.

We'll see how my predictions do. I think that there will be a loud contingent who congratulates Apple on whatever they do. Notably absent will be any mention of the replacement for the Mac Pro, any update for the now-ancient Mac mini, and the supposed pro-focused iMac.

There's a lot of excitement about an iOS concept video that was shown off on one of the Apple blogs. The idea was to make the iPad more productive by introducing functions such as system-wide drag and drop, a finder, and a few other nice things. It would be nice, but I question whether Apple will be able to save the iPad from its sales decline. People who want to do "productivity" are now trending toward Macs again and people who want to do consumption on the go are fine with a big iPhone.

I still see the iPhone and the iPad as different products, but I think most of the people who can get away with using an iPad as their main computer do not strictly need or want the horsepower and cost of something like the iPad Pro. The people who want to use an iPad Pro to prove a point need iOS to be a little more capable, and people who need that functionality today are just buying Macs and Surfaces.

There are lots of rumors about a Siri speaker, and I think it'll be interesting. I don't use Siri an awful lot, so I don't really know what using it will be like. It's likely to be a lot better for privacy than anything from Google or Amazon, and I bet there will be people who buy them as AirPlay speakers. I would consider it, especially if Apple makes it compatible with Windows and Macs and with other audio sources.

One item I personally hope for is an updated iPhone SE. The platform that phone is based on is now a few generations old and we're looking forward to either an iPhone 7S or iPhone 8 coming out this year. I believe an updated SE would do well with what appears to be a pretty dedicated fan base for that form factor.

I think the particular era we're in with Apple is that they've just discovered that maybe they don't always get it right, even in the modern era, and that at least one, probably more, of their products doesn't meet their customers' needs in a big way. The other trouble with the Mac line is that the majority of the models on sale are over two years old. I've seen more than one post in "non-techie" corners of the Internet from people who bought a MacBook Air or MacBook Pro or Mac mini and were surprised to find out that it was from 2013, 2014, or 2015.

It's possible, albeit unlikely, that we will see a new MacBook Pro family that reflects this reality, but I think that what will really happen is that Apple, having recognized this issue in late 2016 and discussed it in 2017, will announce a few products that address the problems in 2018. New models will help a lot, and I think Apple needs to take a long hard look at their product family and address what might cause problems for customers. I don't think Apple is selling bad computers, but I do think it's possible that some of the models are needlessly compromised in ways that make the experience or configuration worse for people who buy them.

May 29
Surface Pro (5) Miscellany

I posted last week about the Surface Laptop and the current calculus for the Microsoft Surface family, mainly because I wanted to be correct for at least one day, should something have come up.

The Surface Pro was refreshed on Tuesday. Perhaps the most important thing to say up front is that I don't think it's life changing from a product perspective. The same configurations are available, and the new processor was never going to be a particularly big jump in performance. The physical size of the device is the same, it fits in the same docking stations and uses the same power adapters and has the same other ports as well.

Perhaps the most noteworthy change is that Microsoft now rates the Surface Pro for around 13.5 hours of battery life, which is in line with the rating on the Surface Laptop. If either of these estimates is as accurate as previous estimates have been (which is to say, I'm estimating real usage will get close to 75% of the stated life), then it's a huge thing.

Battery life is a funny thing. I don't think most people are explicitly interested in, say, a computer that'll run for twenty hours uninterrupted. However, I think most people would view that as a benefit, because it means they can either expect to be able to use it through a work day or on a long flight with no trouble. Another scenario I've seen (and used myself) is that a computer that lasts a long time on battery when you're hypermiling it will also last a fair while when you explicitly do not save energy. In particular, I'm thinking about situations where somebody might do some light gaming on a computer running on battery, or do traditionally "heavy" work such as graphic design or development, perhaps even virtual machines.

In that way, it's sort of interesting to side-step the Surface Pro and go directly to thinking about the possibilities for a new Surface Book with Performance Base, updated to the Kaby Lake architecture. The Surface Book with Performance Base is already rated for about 13.5 hours, so if the improvement from switching to Kaby Lake alone is that much, then it's possible we'll see some huge boosts on something with that much battery.

I like the idea of the Surface Pro a lot, but nothing here is going to make me go get one. For the past several years, I've always had low end Surfaces, so I don't feel like I'm missing out, and I feel like the advantages of the Surface Laptop's form factor will outweigh the flexibility I lose by passing on the Surface Pro form factor.

This is all especially true and relevant for me, as I know now after having used the Surface RT and the Surface 3 (with its broken screen that doesn't accept pen or touch input) that I don't use them as tablets too often, so the laptop form factor of the Surface Laptop isn't a detriment to me.

One of the things apparently mentioned at the event, although I haven't watched the video myself, was a USB Type C adapter. There aren't images of it yet, but we know that it's going to connect to the SurfaceConnect port. What we don't know is why. Panos Panay and Microsoft in general pretty clearly don't think USB Type C is ready. It's to the point where, when talking about the adapter, Panos framed it as being for people who like dongles, a reference to the fact that to work with existing devices that have built-in cables, computers with USB Type C ports (and nothing else) need to use adapters. The adapters you can generally get aren't too offensive, though – Google, Apple, and numerous third parties all sell reasonable ones.

To me, this just looks bad, both because Microsoft is willfully digging in its heels on the issue of using this modern connector and because they're framing it as an issue of needing adapters. However, the Surface family of portable computers is already in a bad place on that front, because Mini DisplayPort is by its very nature a port of adapters. The difference, and the trade-off Microsoft appears to be making, is that mostly everybody already has Mini DisplayPort adapters and cables. They've been on Macs long enough, and Macs are common enough in education and corporate environments, that such an adapter is now a common sight on those machines.

From my perspective, the Surfaces need adapters to do many of the same things anyway, and I think Microsoft over-estimates the importance of handing people a USB flash drive, in education particularly, but also in any other environment I've seen. When people do use USB ports, there's often a need for more than one, which would make a Type C port in addition to what Surfaces already have – or Type C ports instead of Type A and Mini DisplayPort – welcome. More generally, I also think it's bad messaging to disparage your customers who "love dongles" so much. Microsoft is building good hardware, but sometimes there are just weird little things you wish they would do differently.

The thing I fear is that in two, perhaps four years from now when people are using notebooks Microsoft claimed loudly would be good for four years, the USB Type C tides will have shifted a lot more and we will feel like our platform-leading systems should have come with ports that were starting to appear on Dells and HPs of all descriptions. You can already buy Type C chargers for phones and computers commonly at retail, and there are already peripherals like external hard disks using USB Type C.

This complaint isn't new, either. The Surface Studio, when new, drew lots of criticism for (among other things) not including USB Type C or Thunderbolt 3, on a system where it would arguably have been very easy to do. I've said this several times, but my other concern remains the future availability of power adapters for these systems, and the fact that SurfaceConnect power supplies are apparently a confusing thing. Surface Pro 3 power supplies floating out in the wild might not charge systems with higher demand, but on the other end of things, the high-watt Surface Book power adapter will only charge one model of the Surface Book. It (supposedly) won't even charge other Surface Books; you must know and remember which one you have.

I'll probably still buy one. I'll like it a lot and when the issue of Type C really forces itself, I'll be unhappy and then I'll buy the attendant adapters or I'll buy a new machine. The port issue and Microsoft's bad messaging about it is really my only anxiety about Surfaces specifically. I have other anxieties, but those are mainly related to the direction of computing overall. In particular, I worry about what it will be like to use a machine with limited and fixed resources if software continues moving in the direction programs like Slack and Spotify have established. Slack and Spotify run well if you have enough horsepower, but they need that horsepower, even though they perform trivial functions that computers have done well (even at the same time) for over twenty years.

Those apps, and heavy web browsing loads, do better relative to the amount of memory and processor horsepower a system has. Today, I do fine on relatively low end hardware, such as the Surface 3 and some old laptops, but performance today doesn't necessarily mean performance tomorrow, especially in a situation where both my habits and the environment can change.

The fifth generation Surface Pro has meaningful change, but the visible changes feel minor. Even claims related to particular types of performance (I'm thinking about battery life in particular) can be meaningful, but only if the old model didn't meet your needs. The display is supposedly better, the keyboards have more Alcantara, the edges are rounded a little bit, and oh, by the way, the interior of the machine has been completely redesigned.

With the precedent set by the Surface Laptop and the Surface Pro (5), I'm excited for the Surface Book ("2") but I'm not going to wait for it. I think Microsoft is going to introduce it with Kaby Lake, perhaps even if the next Lake is available by the time it's announced, and I think the thing we're waiting for right now is an excuse to have an event and perhaps some new graphics silicon from nVidia, who recently announced a new "low end" discrete graphics chip for laptops, which might or might not be right for a second generation of Surface Book. Surface Studio will be the next thing after that to look for. I expect Kaby Lake, Pascal graphics (in keeping with tradition: Perhaps after Volta is available) and a continued resistance to integrating USB Type C connectors, USB 3.1 ports, or a ThunderBolt 3 controller with the attendant Type C port.

In all, unless the Surface Book or Studio is massively different, I don't think that the calculus has changed all that much.

May 22
Surface Laptop Thoughts

I have some other stuff in the queue but I took a drive to the city and I wanted to publish this quickly, so I can later react to my reaction.

Microsoft recently announced two technologies at an event focused on education. The first was Windows 10 S, which is supposed to be streamlined, simple, and speedy; in one or two very specific ways, it's said to be Microsoft's competitor to Google's Chromebook computers. The drama surrounding Windows 10 S is that it will, by default, only allow UWP software downloaded through the Windows Store, although for my purposes today that doesn't matter a whole lot.

The next thing Microsoft presented, about which Panos Panay was particularly pumped, was the Surface Laptop. The short way to describe the Surface Laptop is that it's Microsoft's MacBook Air. It has a 15W CPU, a big battery, their proprietary power connector, a USB Type A port, and a Mini DisplayPort connector. It's got a wedge shape and the display is around 13 inches, so it's a relevant comparison, I think. To add, it's available on a budget and it's pretty much directly aimed at students.

I had a chance to get down to the big city to look at the new Surface Laptop. It was ultimately a pretty cursory glance framed by some nice drives, but I'm glad I took the time to do it. This gives me a chance to form an opinion and publish it before the upcoming Surface announcements Microsoft will be making this week. I don't really think there'll be anything life-changing happening, but I wanted to talk a little bit about the Surface Laptop in particular and the current Surface Calculus before it changes.

First, to address quick impressions: I think that the Alcantara fabric looks and feels beautiful. It's no different, conceptually, than the textured plastic or the soft touch surface that used to grace certain premium laptops. I think that it'll be much easier to maintain and keep looking good than those devices. The store I went to has been open since the release of the Alcantara cover for the Surface Pro 3 and 4, and they had one of those on display. I can't confirm for certain that the sample I touched hadn't been replaced, but it was in great shape, and I see no reason for Microsoft to have swapped it out. In addition, the display is beautiful and the 3:2 aspect ratio is still particularly great, and the keyboard and trackpad are really good.

I didn't extensively test performance or spend a bunch of time doing any long-form writing on the device, but I don't expect either of those things to be problems. I'll add that the keyboard is a lot better than the Surface 3 keyboard, and better than the Surface Pro keyboards. Perhaps the main worry would be whether or not the keyboard ages as poorly as some of the Surface keyboards I have, but I think it will age fine, especially given the condition of the Alcantara cover I saw. The other benefit is that, because the keyboard is not removable, you won't have to pull it off and reattach it all the time.

If you look at a spec sheet or a direct comparison of the features of the devices, the Surface Laptop is pretty obviously a "budget conscious" device. It makes sacrifices in terms of cameras and configuration options, and among the Surfaces it has "relatively" low end storage options.

The biggest points at which I consider the Surface Laptop a win compared to its stablemates are the keyboard and the processor. The display is a lower resolution and has a lower PPI, although it's still very beautiful. The processor is, at least until tomorrow, when an updated Surface Pro is rumored (leaked) to be released, the best you can get in a mobile Surface device. Kaby Lake isn't a huge performance bump over Skylake, though; the main advantage is power management.

The battery life should be the best on either the Surface Book with Performance Base or the Surface Laptop, but whether that battery life advantage carries over if you use the Surface Laptop with Windows 10 Pro remains to be seen.

Battery life and general longevity are among the biggest reasons I worry about the fact that the Surface computers do not have Type C charging. There are already external batteries available that say they can charge certain Type C computers several times, and there's no reason the next version of something like the PowerHouse won't also have Type C output. Officially, there aren't third party adapters available for the Surfaces, which is a bummer, both because third party adapters could hypothetically address the issue of external batteries and charging in a car, and because I fear that the SurfaceConnect adapters will become instantly unavailable once Microsoft finally does move on from the connector.

The other issue I have with the ports is that having a DisplayPort connector seems wasteful. If it was three USB Type C ports, you could choose peripheral configurations such as a card reader, an external hard disk, and a power connector; or two external hard disks. It's not strictly necessary, but I'm guessing most Surface Pro/Book/Laptop mini-DisplayPort connectors will largely go unused.

Perhaps Microsoft will eventually see the error of that particular decision. A move to USB Type C would also fix the problem of Surface power adapters having nearly unusably short power cords. The Surface 3 and RT, the Pro 3, and the m3-based Pro 4 have cords short enough that you can't plug the cord into an outlet on the wall behind a table and use the Surface on the table. The SurfaceConnect docking station should resolve this problem, as it has a power cord long enough to put the brick on the ground and the dock on the table, and then there's a cable that further connects the Surface to the dock. Although with the dock, it may not really matter where you put the Surface, if you're planning on using an external display with it, which would seem to me like a reason why you'd buy the dock to begin with.

With the release of the Surface Laptop, the low end configurations of the Surface Book become relatively questionable purchases. It's a matter of deciding what you need, though. For $1299, you get the i5/8/256 Surface Laptop, compared with an i5/8/128 Surface Book for $1499. The things the Book gets you are the better display and the pen, plus better touch-screen functionality. However, that configuration of the Book never made a whole lot of sense to me, because for $1328 (with keyboard) you can get a Surface Pro 4 in i5/8/256 configuration, and that device is smaller.

If you can do without the pen, the Surface Laptop is a pretty great deal, especially given that that $1299 configuration is pretty clearly what Microsoft believes is the sweet spot for the system. That configuration is also a relatively good deal compared to competing systems. The Dell Latitude 7370, for example, is $1299 with a Core m3 processor in a 4/128 configuration. The Dell XPS 13 doesn't match the configurations exactly either, but the closest are the i5/8/128 version for $999 and the i7/8/256 version for $1349. The XPS 13 and Latitude 7370 both have lower resolution 16:9 displays, and while both have Type C ports available, the XPS 13 has one plus a DC jack where the Latitude has two and a Type C power adapter.

I think the real question is whether it's worth giving up that future compatibility with different power adapters and external batteries in order to get the Surface Laptop's display. I'm inclined to say that it is. By accident or not, Surfaces have de facto been my portable computers for a few years now.

Without writing extensively (yet, at least) about the different kinds of portable computers OEMs build, what I'll say is that I think the compromises the Surface Laptop makes to reach its price point are fine. I want it to have different ports, but I want that keyboard, trackpad, and display more than I want different ports.

May 15
The Seventh Mac Pro

My apologies for the delay on this. I've had a surprising number of personal things come up, and I used some time I normally dedicate to blogging for novel writing during April.

A few weeks ago, John Gruber of Daring Fireball posted huge news. "The Mac Pro Lives," he called the article. The executive summary is that Apple noticed that they were still building and selling the 2013 Mac Pro, model designation 6,1. It should be straightforward to put a newer Xeon and some newer graphics chips on the platform, but Apple also noticed that the industry in general didn't move where they thought it would, which was toward computers with several midrange graphics processors in them for parallelizing tasks.

The other, perhaps unsaid component to all of this is the vocal opposition the Mac Pro 6,1 has received since it was announced. I have always liked the Mac Pro 6,1, and for a long time I'd been highly skeptical of the things people were saying about it, in terms of the need for a certain type of expandability. That opposition never died down, however.

And, so, the biggest thing people are talking about with Apple's announcement isn't necessarily that they are going to move from an Ivy Bridge processor to a Skylake one, or from AMD GPUs that date back to 2010 or so to something more modern; essentially, the reaction has been: finally, we will have slots, disk bays, and an nVidia GPU.

I'm not so sure about all those things, but ultimately what Apple said is that they intend to make it a flexible system that should meet the needs of different types of people. There are literally no details yet other than that they realize they "backed themselves into a thermal corner" with the existing design and that they won't have anything available this year. With that, there's a lot of discussion about what a 7,1 might be like, and about what people should do as they wait.

Predictably, I'm mixed on the whole thing. I believe Apple should not have left the 6,1 to rot without any changes for so long. On the other hand, most of what people are describing in their wish list is technically in the current Mac Pro, and switching to a new platform brings other benefits, such as at least doubling memory density by moving to DDR4, more cores with newer Broadwell-EP or Skylake-EP processors, and faster storage. The thing I would likely do in an updated version of the 6,1 is build an Apple-branded expansion box and then add a second slot for an SSD.

That said, I understand the flexibility everybody wants. In the '90s, that flexibility was the ability to choose between the Power Macintosh 5200, 6200, 7200, 7500, 8500, or 9500 as your needs and budget dictated, and then, for many of those machines, to install a variety of upgrades. Since that point, Apple had at least one computer with a few disk bays, a few PCI slots, and an at least nominally upgradeable CPU until the end of 2012. In 1995, it was the 7200 and up. In 1998 it was all three Beige Power Macintosh G3s, and in 2012 it was the Mac Pro 5,1, still running on the Westmere-EP CPUs from its introduction in 2010.

After the Power Macintosh G4 was discontinued in 2004, there was almost instant consternation about the lack of a midrange Mac. Basically, something you could buy for somewhere between $1299 and $1699, non-inclusive of a display and application software, and then upgrade for a few years. In 2006, MacWorld posted about the need for what it called the Mythical Midrange Mac Minitower. Today, that system would basically look like a Dell OptiPlex 9020 or 7050, or an XPS 8910.

After letting the 5,1 sit through the introduction of Sandy Bridge, Apple introduced what many saw as a signal of the "iOSification" of the platform. This isn't accurate, because everything in the Mac Pro is modular, even if it's proprietary. The Mac Pro 6,1 is a good computer, but the future of OpenCL based applications that take efficient advantage of multiple midrange GPUs, obviating the need for several beefy Xeon processors, never materialized. At the low end, being stuck on Ivy Bridge meant that the Mac Pro was left behind in single-threaded performance, and at the high end, professionals whose software didn't use OpenCL (which is most of it, instead preferring more CPU threads or nVidia's CUDA APIs) had a system that they saw as unfit or unworthy.

The criticism is fair: outside of Final Cut Pro, the software didn't materialize, and so for most buyers of the Mac Pro, the 6,1 is unsuitable because it's only barely faster than a decked-out 5,1. It can't have an nVidia GPU added to accelerate Adobe and other CUDA-loving software, and even for pros using Final Cut Pro, a commonly cited issue was that the Mac Pro 6,1 didn't have room for internal disks to store footage. On top of all that, in Mac OS X you can't even configure the GPUs to split the displays between them: one GPU always drives the displays and the other always runs OpenCL.

Ultimately, I disagree with the idea that the 6,1 or the new/tube Mac Pro is a "bad computer." The 6,1 is a good computer and it has found success in a few markets, but, much as the SGI O2 wasn't the best box for general UNIX chores, you end up paying a lot for a specialized architecture built to make manipulating analog and digital video easier. I do agree that the 6,1 or something like it can't stand completely alone at the top of the Mac product line. Apple needs something akin to the Indigo2 or Octane to go with its O2, or to replace it.

I don't want to spend a whole lot of time in this posting speculating on exactly what the 7,1 should be. What I'll say here is that, like the Octane, it'll likely end up with more RAM slots than the 6,1, and it may end up with the ability to run dual CPUs and dual GPUs. What those GPUs are should be flexible. It should also have room for more than one storage device. I'm not so sure that it'll end up with a whole lot of bays for 3.5-inch mechanical disks: it's clear Apple considers that type of storage to be dead weight, and the options for external ThunderBolt storage have gotten much better in the time between the 2013 introduction of the 6,1 and today.

I'm excited to see what the 7th generation Mac Pro looks like, along with the promised or implied updates to the iMac (perhaps even a more professionally oriented version of the iMac to spiritually succeed the 6th generation Mac Pro) and the Mac mini. My own Mac mini is still gathering years, so a Mac desktop could be something I'll be interested in using eventually, especially given the desktop virtual machine I've been using with the Surface RT.

May 01
The Surface RT Still Runs

Something I forget from time to time is that over the years, I've moved from one computer to the next quickly enough that I have a perhaps alarmingly large collection of computers that are best described as "still running." I mean something specific by that.

The problem I've had since my ThinkPad T400 started having more problems this year, not only with its display but with other parts of its hardware, was that I wasn't sure what would do my computing from that point forward. Not just what computer I'm going to sort and process photos on, but what computer I'd compose future novels on, what I'd do budgeting on, and where I'd compose thoughts about society and technology, among other things.

The Surface 3 had been standing in for a lot of things I might normally have done on the T400, and the other things I need to remind myself I have are several desktop computers, much better Internet connection availability, and a beefy server with literally 45 gigabytes of free RAM and CPUs that sit idle all day.

One of the things I've long wanted to do, in fact, has been to set up a virtual machine on that server I can remotely access. With the death of the T400, software licensing to do that is now available, and I've been testing some of it, in particular Adobe Lightroom, in a virtual machine with Windows 10 on it.

The virtual machine has six gigabytes of RAM and access to three processor cores, a 200 gigabyte boot disk and a one-terabyte data disk, giving it a little bit of a leg-up over the ThinkPad T400. In fact, in some bulk processing tasks, it's got a huge advantage over the T400. I was able to do a bulk operation I sometimes do to my entire photo library on the virtual machine in about 18 hours, down from two weeks on the T400. There is a lot more room on the server for a VM that uses more RAM and takes more disk space, so I could easily increase the resources when I start to use it for more things.

The trouble remains what to do on the go. I can only do so much with the software that's on a computer like the Surface RT or the Yoga RT, which both still work and are still good for writing, but not necessarily great for research.

I think the compromise I'm making is that if I'm in some situation where I have good enough connectivity for research, I'm going to do it in a browser on a remote computer. When I'm at home, the RDP connection is good enough that I can use full color to browse and make simple edits to the photos, but I should also be able to synchronize the photos with another machine licensed for Lightroom, such as my main desktop.
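For the curious, the "full color" part is just a setting in the saved Remote Desktop connection. As a minimal sketch, a .rdp file for this kind of photo-editing session might look something like the following, where the host name is a made-up placeholder rather than my actual VM:

full address:s:lightroom-vm.home.local
screen mode id:i:2
desktopwidth:i:1920
desktopheight:i:1080
session bpp:i:32
compression:i:1
redirectclipboard:i:1
drivestoredirect:s:*

The session bpp line is the 32-bit "full color" setting, and drivestoredirect makes local drives, such as a card reader or flash drive, show up inside the remote session, which helps when moving the occasional photo back and forth. These are all stock Remote Desktop client options saved to a file; nothing here is specific to the server.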

This will make certain operations a lot slower. In particular, importing images from a camera card or a phone will be slow. I think there's probably some reevaluation of workflow that needs to be done there. When I set up the workflow I have today, I was using a camera that made what were then huge images and stored them on relatively small cards. I was using 2 and 4 gigabyte cards, and it wasn't until a few years later that I bought my first 32-gig memory card. Compare that to today, where I keep a 128 gig memory card in my main camera and can shoot several vacations, or a few weeks at once, on the one card. There's no need for me to be able to run Lightroom and carry my entire photo library with me on a portable computer, because I have enough cards, and cards that are big enough, that I no longer generally need to import photos while I'm out and about, something I spent a lot of effort being prepared to do in previous years.

And so I'm left with the Surface RT, which can use Office, and can browse some web pages. The main thing I really need to do with it, however, is write things, and it handles that task fine.

The Surface RT has the advantage of a pretty good screen, and it has a nice keyboard which I can easily replace with a newer one should the need arise, and it still works well with a lot of tablet applications such as Netflix and Kindle.

I think the trouble is that people, and I know I'm included in this, tend to think of devices not in terms of what they can do, but in terms of what they can't do. This is particularly true with mobile devices: I tend to think about what a particular computer would have trouble doing, without thinking particularly hard about whether being able to do that task is even important any more.

The Surface RT still runs, and it does enough that I don't think it matters if there are things it doesn't do.

April 10
2017-04-10 Blog Entry

I'm taking a moment away from writing for Camp NaNoWriMo 2017 to talk about the experience of writing this year's novel. It's special primarily because I'm doing it in MacWrite Pro on my Macintosh Quadra 840av, and other vintage Macintosh computers.

I decided to write it on the system in part because on the literal eve of the event, I had gotten my Macintosh Quadra 840av set up in my primary office area with an Apple Multiple Scan 20 display. The thing to know about this setup is that it's beautiful and if you haven't used System 7 or Mac OS 8 at those kinds of resolutions, you're missing out. It's by no means absolutely necessary to use them, but a different world of productivity opens up above 640x480 that I don't see very often. I've used Mac OS 9 at higher resolutions, but mostly in the context of my early days online. I did not end up doing an awful lot of things like

So I start noodling on some ideas and I come across an old thing I'd like to rewrite and expand upon, and I decide to give it a go. For the first day or so, it was just a single MacWrite Pro file on the 840av, but it quickly grew to include a SimpleText file for notes about the plot, and a Resolve file for word count accounting purposes.

It has been particularly interesting to try all of this out. I have "used" MacWrite Pro and Resolve before, but never for actual work. It's been super instructive and to be honest a little hilarious to look at the juxtaposition of what "work" means and what was productive in the early 1990s compared with what we think of as being needed today.

For reference, I have published two successive articles complaining that my Microsoft Surface RT, a computer with a quad-core CPU, two gigabytes of RAM, a beautiful little display, wireless networking, a contemporarily useful version of Microsoft Office, and very long battery life, is too slow for me to use as an actual computer and so I need a new laptop. Doing a real writing project on vintage Mac hardware has, as such, been somewhat humbling. MacWrite Pro's default RAM allocation on both 68k and PowerPC processors is one megabyte. It will run, probably happily, in as little as 640 kilobytes if it must. Claris Resolve's minimum is 750 kilobytes, and its preferred allocation is the same 1024 kilobytes (one megabyte). SimpleText's minimum is 192 kilobytes and its preferred is 512.

So here I am working with a document that's about to hit 10,000 words (I'm behind, whoops) on a computer with 24 megabytes of memory. The OS is using a bit under 7, MacWrite Pro and Resolve each have 1 megabyte allocated, and of all things, the SSH client I have on here is using 6 megs.

The actual workflow is interesting. This is probably going to sound familiar, because a lot of people still do this particular thing, but with USB flash drives. I have saved my primary copies of all the documents on a floppy diskette which I've named specifically for the project. Every day or so, I make a copy onto a hard disk of one of the machines as a backup. It's very low-tech, but it easily enables me to do this work on any of the computers I set up, even if they aren't networked. And, I have a few different systems I'm using for the work so far. The first is of course the 840av. Then, I have a PowerBook 1400c/166. In my bedroom, I've put a 6100/66 with a 14-inch Macintosh Color Display, and at work, I have a PowerBook 180.

I also have a Macintosh LC520 I could set up somewhere, which would probably be the truest test of this workflow. That system has a generous five megabytes of memory installed. However, it's also running a much older version of the OS than the other machines, which leaves more of that memory free, so it should still happily run Resolve and MacWrite Pro. The only real limitation there is the fact that I don't have enough keyboard cables and mice to set up that system.

The obvious weakness of this system (all of these systems, har har) is that I do not have good backups in place to guard against failure. I have a large data cartridge drive on the 840av, but not enough media to trust it, or to use it if the 840av itself were to get full. I have networking, but I don't have an appropriate file server set up at the moment.

The other concern is in workflow. I don't think I could write my normal daily blogs on this system, because transferring data is a big challenge. Regular NaNoWriMo is also a much higher pressure event and there is a lot more unpredictable travel involved. The PowerBooks 180 and 1400 are portable after a fashion, but neither of them gets good battery life, and although it's not strictly a problem right now, they are both over twenty years old (21 and 24) and certain bits of plastic have weakened over time.

The other thing I find important to mention is that a lot of people talk about old computers in terms of their ability to create a distraction-free writing environment. Unfortunately, those people then go and load a bunch of old games and network communication software onto the machines. I've long known I would never be any more productive on, say, one of my 68k Macs than I am on one of my normal modern computers. I'm doing this not because I think it will somehow enhance my output or my capacity to write more things; I think that's dictated more by being able to move away from mainstream spaces. The offset created by a 20-inch CRT does actually help, but I can just put a laptop nearby or pull the LCD on my Mac mini forward and suddenly I once again have two computers nearby to deal with.

I'm personally convinced that the key to my own productivity is having only one computer on hand, and having that computer be comfortable to use with tools I like on it, rather than trying to force specific limits based on what is or isn't installed on the machine. When it comes to my blogging workflow, things that could otherwise force "distraction free" are a huge impediment to productivity. For example, the real problem with the Surface RT is that browsing the web on it is slow. I could do it and gather links, but it would be faster to use some other computer.

Once I'm done with the month, I'll probably re-evaluate the Surface RT. It feels slow in relation to other modern computers, but its advantages in comparison to old Macs are numerous and noteworthy. The disadvantages it has can either be worked around or just offset.

About this blog
Computer and physical infrastructure, platforms, and ecosystems; reactions to news; and observations from life.