
107. Click In With Surface
Hardcore Software by Steven Sinofsky (Audio Edition)
Show Notes
Happy Holiday to those in the US. This is a special double issue covering the creation and launch of Microsoft Surface, an integral part of the reimagining of Windows from the chipset to the experience. To celebrate such a radical departure from Microsoft’s historic Windows and software-only strategy this post is unlocked, so please enjoy, and feel free to share. I’ve also included a good many artifacts including the plans for what would happen after Windows 8 released that were put in place. The post following this is the very last in Hardcore Software. More on what comes next after the Epilogue.
As a thank you to email subscribers of all levels, this post is unlocked for all readers. Please share. Please subscribe for updates and news.
In 2010, operating in complete secrecy on the newest part of Microsoft’s campus, the Studios, was a team called WDS. WDS didn’t stand for anything, but that was the point. The security protocols for the Studio B building were stronger than in any other building on the entire engineering campus. Housed in this building was a team working on one of the only projects that, if leaked, would be a material event for Microsoft.
WDS was creating the last part of the story to reimagine Windows from the chipset to the experience.
When we began the project, it was the icing on the cake. After the Consumer Preview, it had become the one thing that might potentially change the trajectory of Windows 8.
As Windows 7 finished and I began to consider where we stood with hardware partners, Intel, the health of the ecosystem, and competing with Apple, I reached the same conclusion the previous leader of Windows had—Windows required great hardware to meet customer needs and to compete, but there were structural constraints on the OEM business model that seemed to preclude great hardware from emerging.
At the same time, the dependence on that channel meant there was no desire at Microsoft to compete with OEMs. In 2010, the Windows business represented 54% of Microsoft’s fiscal year operating income and Office was 49%—yes you read that correctly. BillG used to talk about that amount of revenue in terms of the small percentage of it that could easily fund a competitor or alternative to Windows. The “Year of Linux” was not just a fantasy of techies but a desired alternative for the OEMs as well. So far the OEMs had not chosen to invest materially in Linux, but that could change, especially with an incentive created by Microsoft’s actions.
Like my predecessors, I believed Microsoft needed to build a PC.
Building PCs was something BillG was always happy to leave to other people. In an interview in 1992, Bill said, “There’s a reason I’m the second-biggest computer company in the world…. The reason is, I write software, and that’s where the profit is in this business right now.” On the other hand, the legendary computer scientist and arguably father of the tablet concept, Alan Kay, once said, “People who are really serious about software should make their own hardware.”
Microsoft was founded on the core belief that hardware should best be left to others. In the 1970s hardware was capital intensive, required different engineering skills, had horrible margins, and carried with it all the risks and downsides that pure software businesses, like the one BillG and PaulA had pioneered, did not worry about. With a standardized operating system, the hardware business would quickly consolidate and commoditize around IBM-compatible PCs in what was first a high-margin business that soon became something of a race to the bottom in terms of margins.
Microsoft’s fantastic success was built precisely on the idea of not building hardware.
BillG was always more nuanced. He and PaulA believed strongly in building hardware that created opportunities for new software. Microsoft built a hardware device, the Z-80 SoftCard, to enable its software to run on the Apple ][. Early on, Microsoft created add-in cards to play sound. PaulA personally drove the creation of the PC mouse, the famous green-eyed monster. Modern Microsoft built Xbox, but also Zune and the Kin phone.
Apple built great hardware and together with great software made some insanely great products.
To build hardware in this context meant to build the device that customers interacted with and to build all the software and deliver it in one complete package or, in economist’s parlance, vertical integration.
Mike Angiulo (MikeAng) and the ecosystem had the job of bringing diversity to the PC ecosystem, a diversity that Apple did not have. This diversity was both an enormous strength and the source of a structural weakness of the industry. PCs in any screen size or configuration one might need could always be found or even custom built, covering any required performance and capacity. If you wanted something like a portable server or a ruggedized PC for a squad car or a PC to embed in a 2 Tesla MRI, Windows had something for you. Apple, with its carefully curated line, essentially offered big, bigger, and biggest, each with minimum, typical, or maximum storage, across the Mac. Even a typical PC maker like Dell would offer good, better, best across screen sizes and then vary the offering across home, small business, government, education, and enterprise. Within that 3 x 3 x 5, customization was possible at every step. This is the root of why Apple was able to have the best PC but never able to command the bulk of the market.
The idea of vertical integration sounds fantastic on paper but the loss of the breadth of computing Windows had to offer was also a loss for customers. It is very easy to say “build the perfect hardware” but the world also values choice. One question we struggled with was if the “consumerization” of computing would lead to less choice or not. In general, early in adopting a technology there is less choice for customers. Increased choice comes with maturity in an effort to obtain more margin and differentiate from growing competition.
One bet we were making was that Windows on ARM and a device from Microsoft was the start of a new generation of hardware. It would start, much like the IBM PC 5150, with a single flagship and then over time there would be many more models bringing the diversity that was the hallmark of the PC ecosystem.
That is why we never built anything as central and critical as a mainstream PC, and never had we really considered competing so directly with Microsoft’s primary stream of profits and risking alienating those partners and sending them to Linux. The Windows business was a profit engine for the company (and still is today) and that profit flows through only a half dozen major customers. Losing even one was a massive problem.
Microsoft had also lost a good deal of money on hardware, right up to the $1.15 billion write-off for Xbox issues in 2007. Going as far back to the early 1990s and the original keyboard, SideWinder joystick, cordless phone, home theater remote, wireless router, and even ActiMates Barney our track record in hardware was not great. Microsoft’s hardware accessories were at best categorized as marketing expense or concept cars. It was no surprise my predecessors backed off.
Like the mouse, the sound card, and perhaps Xbox, I was certain that if we were to succeed in a broad platform shift in Windows that we would need to take on the responsibility and risk of building mainstream and profitable PC devices. We tried to create the Tablet PC by creating our own prototypes and shopping them to OEMs as proofs of concept. We repeated this motion with the predecessor to small tablets called Origami, same as we did for Media Center. Each of these failed to develop into meaningful run rates as separate product lines even after the software was integrated into Windows.
OEMs were not equipped to invest the capital and engineering required to compete with Apple. As an example, Apple had repurposed a massive army of thousands of aluminum milling machines to create the unibody case used in the MacBook Pro. Not only did no OEM want to spend the capital to do this, but there was also no motivation to do so. Beyond that, the idea of spending a huge amount of capital up front on the first machines using a new technology, before a process or supply chain could be optimized, was entirely unappealing even if the capital was dedicated.
The OEMs were not aiming for highly differentiated hardware and their business needs were met with plastic cases that afforded flexibility in design and components. In practice, they often felt software was more differentiating than hardware, which was somewhat counterintuitive. The aggregate gross margin achievable in a PC software load was a multiple of the margin on the entire base hardware of a PC. The latest and coolest Android tablet was a fancy one made by Samsung, with a plastic case. The rise of Android, a commodity platform, all but guaranteed more plastic, lower quality screens, swappable parts, and the resulting lower prices.
The OEMs were not in a battle to take share from Apple. They were more than happy to take share from each other. Apple laptop share was vastly smaller than the next bigger OEM making Windows laptops. Each OEM would tell us they could double the size of their entire company by taking share from the other OEMs. That’s just how they viewed the opportunity. The OEMs were smart businesspeople.
Thinking we needed to build hardware, then building it, was one order of magnitude of challenge. Choosing to bring a product to mass market was another. Hardware is complicated, complex, expensive, and risky—risky on the face of it and seen as risky by Microsoft’s best customers.
The Surface team, organized within the Entertainment and Devices division home to keyboards and mice, finished the first release of their namesake computer. It was a Windows Vista-powered table, like the arcade tables that ran Frogger in bars, pizza places, and hotel lobbies in the 1980s. The new platform software provided the cool demos for Windows 7 touch. Surface came about from a research project rooted in long term efforts around optics and display technology. The effort was productized as the original Surface, with Panos Panay (PanosP) brought in from the peripherals group to help accomplish that. Unfortunately, the commercial viability of the table was limited. RobbieB, the executive of the division (that also included Xbox and phones), was looking to offload the effort or, better said, make Windows pay for it.
We would move the team over to take on our hardware challenge if we could figure out an arrangement that would not torpedo the Windows business. It was this step that put a halt to all previous projects.
Managing what could become a first-party hardware effort from within the Windows team posed significant challenges, even obstacles. PC OEMs would rightfully become unglued if they believed the decidedly limited information they shared with MikeAng’s ecosystem team about their plans was shared with other OEMs. If their information were to be shared with a first-party team, that was even worse. Some of the earliest concerns expressed to regulators about Microsoft had to do with the walls, or lack thereof, between different parts of the Microsoft ecosystem and competitive parts in Microsoft, for example, between Windows and Office, where Office might have an unfair advantage. I had no desire to further a conspiracy theory.
JulieLar offered her own set of challenges, but almost the opposite in nature. She was concerned about having a first-party hardware team that acted like an OEM more than a part of Microsoft. She wanted our hardware to fully embrace, not mangle, the Metro design language. I hardly wanted a team that would embrace the economics of OEM, such as crapware and expensive accessories.
Collectively we wanted a decidedly non-OEM to be our OEM.
Both Julie and Mike spent a lot of time with Panos to help us all arrive at a structure and work process that was set up for success. Panos joined Microsoft in 2006, relatively recently, after working at NMB Technologies, an expansive 60-year-old Japanese maker of electronics components including keyboard switches and skateboard ball-bearings (that I used to pay top dollar for in high school). The complexities for Panos in jumping into the thicket of Windows were significant, not to mention the delicate nature of first-party PCs. The concerns for all parties were legitimate.
The idea of a whole hardware team moving over, rather than building one organically, was worrisome, but we did not have the time to start from scratch. By most any measure, the idea that in about 30 months we would have a new ultramodern PC built on all new components the industry had never used before by a team that had only made a table computer was crazy. Panos’s commitment to being part of the massive effort was not just significant but deeply sincere. He saw the opportunity the same way we did—a chance to reimagine Microsoft.
Working for Mike, Panos would expand the team in every dimension. Hiring and growing engineers for hardware, firmware, mechanical, plastic, manufacturing, acoustic, industrial, and safety, with designers from graphical to industrial to packaging, along with all the support functions I neglected to mention in that long list. We even had to embark on some special construction in the buildings to account for the needs of the kind of equipment required to build a PC.
WDS was formed in June 2010 just as we were beginning to code Windows 8. We did not even send out an org announcement mail, keeping with the secrecy of the project. The name “Surface” moved over with the team and would naturally stick with the device when we were ready to bring it to the public. For now, codenames and codewords were in order. Not just one codename, but an entire system of codenames. We had codenames for every part including unique codenames used for each vendor – part combination. There were codenames for every presentation. Over the course of the project, we maintained over 200 codenames. Why? The secrecy was to maintain our commitment to keeping information separate.
It is worth noting, we were piloting ARM work on NVIDIA Tegra hardware and seeding groups across the company with ODM-style tablets housing Tegra components. No one inside Microsoft lacked hardware for testing, evaluating, or betting on Windows on ARM.
The next 18 months were the most remarkable sprint in hardware development I could have ever imagined. Panos built an extremely tight and remarkably talented team to deliver products in relatively short time. Everyone stepped it up to a new level. One of the most critical aspects of assembling this team was his direct manager, MikeAng, who created a cocoon around the team, isolating them from all the forces internally. Mike also mentored Panos on the product development methods used to create Windows while apprising him of the best ways to integrate with the software team to avoid thinking like an OEM—the key problem we set out to solve.
One of the things that Mike had to do which was super valuable for the whole effort was to be the “vault” for information about Surface and information about the PC ecosystem. The two orgs, Ecosystem and Surface, could have absolutely no information leakage between them. In many ways, the integrity of the Windows business model was in Mike’s hands. It was easy to trust Mike since I’d worked with him since he joined Microsoft out of college, and his post-Microsoft career included becoming a lawyer, so I think it is fair to say he was the perfect person for this role. As we’ll see it was just a bonus that he was exactly the right kind of engineer for this role as well.
The core of the WDS team had previously worked together, but the scale of their previous projects was small by comparison—everything was bigger in Windows. The team doubled from the early days of mice and keyboards to a broad range of peripherals and most recently Xbox consoles. Ten years in, Xbox was selling about ten million units a year (about 800,000 in 2021) and the business remained roughly break-even due to console margins. We had much grander aspirations.
The story of the two devices, both tablet form factors, that became the first Surface PCs was one of incredible design and engineering efforts going through a remarkable series of iterations in an incredibly short time and then scaling to manufacture millions. It was also a blur. Creating a PC was new for me as well. It is not only fair, but important, to say that without the collaboration across Julie’s product team, JonDe’s engineering team, GrantG’s test team, and Julie herself, we would not have Surface. Jensen Harris (JensenH) and the entire UEX development provided an attention to detail on everything from the BIOS boot screen to the on-screen keyboard, sound scheme, and out-of-box experience that rivaled anything Apple did. It is one thing to build a reference PC within the walls of Microsoft—something done many times before. It is entirely another thing to deliver the innovation and scale of products that we did, and then bring it to a worldwide market.
I cannot stress enough how much of a whole-organization effort Surface was.
We did not set out to build what the industry was calling a consumer tablet or a tablet for consuming information and lightweight computing needs, as many, or just Apple, characterized the tablet market relative to the PC market. The world did not need more of that. Microsoft’s heritage and business rested with productivity, so the overarching goal was to create a PC that was great for productivity, creation, and mainstream information work. The PC would just happen to fit the current definition of a tablet, which was smaller than an 11.6” laptop and usable both as a slate and in a traditional laptop posture.
To justify building our own hardware we needed a unique point of view.
We believed that if we could build a tablet that worked fantastically well for the web and Office, while being ultra-lightweight and ultra-reliable, we could redefine the device that constituted the bulk of PCs used in school and work. One of our most significant challenges was that the narrative for new devices was dominated by the tablet, meaning a keyboardless slate focused on consumption. This meant that even if we achieved our vision technically, we had an enormous communications challenge.
The first Microsoft device, code-named GT or Georgetown for no real reason, was a premium ARM tablet able to deliver on the promise of PC-style mobile productivity. The whole point of building hardware as a first party was to do what the PC ecosystem would likely struggle or simply decline to do, and ARM was first among the challenges faced. The PC makers had all been struggling to enter the smartphone market, and ARM required a significant amount of product engineering and investment that was inconsistent with the low-priced mindset. Besides, they were fully occupied with Netbooks and later Ultrabooks and seeing little success with smartphones. As described in section 101, those OEMs working with ARM were genuinely enthused and supportive, but somewhat like Detroit’s reaction to electric cars, the internal tension proved too great to create and bring ARM devices to market, at least for the launch of Windows 8.
Whether premium implied premium price along with a premium experience was an ongoing debate. Typically, in hardware designs, manufacturers work backward from a retail price goal, assuming a certain BOM, margins, volume to manufacture (impacting the BOM or bill of materials), marketing and sales costs, returns, and so on. As product makers, this makes intellectual sense, but it frontloads so many constraints that rarely do great products emerge. I chose to start from the user scenario and see how far we could go and what we could get for what price, knowing that as we converged difficult decisions would arise.
This was a point of constant discussion with SteveB, who—and this was not news or even unique to this project—favored low prices and high volumes. Gaining market share was always preferable and what had gotten Microsoft to where it was. I felt this time we needed to take a different approach—reimagine what a Windows device could be even if that meant we start at the high end, perhaps like Tesla was doing at the time with Elon Musk’s famously not-so-secret secret plan to iterate to a lower price with greater scale, funding that iteration with earlier high-priced products.
Early in the process, Mike and Panos set the tone for the role WDS would play in the overall strategy for Windows 8. It was extraordinarily helpful. The mantra for the joint efforts of hardware and software coming together was expressed as, “our job is to build a stage, a stage for the operating system and software.” This fit well with the whole design for Windows 8 software, which itself was viewed as a stage for apps and user creations. The idea that every layer of the system was trying to get out of the way of the other layers above aligned entirely with the Metro design language, and also exactly what no one ever did, especially OEMs. This was in stark contrast to typical PCs and even typical OS iterations of the past, which felt the need to announce themselves whether via branding (those Designed for Windows or Intel Inside stickers, for example, that don’t even appear on Apple laptops because they drove Steve Jobs crazy) or via icons, popups, or notifications.
We were not designing the hardware experience after the software experience (or before); rather, we were designing the hardware and software experience together—a reimagined experience.
I tried to keep my own role in check. I’d seen many past hardware cycles within the company where there was so much going up and down the chain, so much approval, and so much tweaking by executives. Plus, it is crazy easy to have an opinion on a device and act like some enlightened big-shot executive. Panos’s experience with executives at Microsoft rightfully made him wary of me at first. Executives often had an outsized view of their contribution while underestimating the amount of work, backchannel, and eyerolling they caused. Advertising, pricing, user interface design, and feedback on hardware were places that executives easily meddled. And for that, teams suffered.
I used a lot of different laptops. I bought nearly every iPad typing cover available (and there were dozens). I was a road warrior, so I was real-time testing. I used browsing, email, and Office a lot. I could have probably justified my own opinions, and many times I really wanted to, but had to resist the urge. I had asked Mike and Panos to pull off what by any measure would be a Herculean task. My responsibility was not to meddle.
From my vantage point in developing the product, there were five inflection points in the design process—a high-altitude view since there were thousands of choices made in developing the product. At some level these were choices to be made or constraints. The project could have easily become impossible or careened off path with so many potential degrees of freedom. Anyone who looked at the typical roundup of PC laptop reviews for back to school or holiday was left browsing a dizzying array of specs, ports, dimensions, and component choices. It would have been easy to get lost in all these decisions to be made. Each one of those decisions is related to many others in an ever-complex set of linear equations to be optimally solved.
The team picked these key points in the design process: chipset, the productivity posture, typing, materials, and peripherals.
By establishing the initial platform of ARM and a sub-11” form factor, many of the traditional “spec” issues of designing the device would fall out. But this understates just how many constraints, and how quickly, Panos and team needed to put in place—constraints that would maintain a point of view and deliver a product.
The chipset was perhaps the most straight-forward choice for the team to make on technical grounds, though hardly without controversy or, for lack of a better word, politics. There was a good deal of pressure from the Windows Phone team and from some OEM partners who were still trying to build phones of their own (though on Android now) to go with Qualcomm. Qualcomm was viewed as a safe choice and the choice with the most intellectual property and lawyers to support that. On the other hand, the early hardware was far from ready to build a PC, especially the graphics support which was so critical. It was the first taste of developing a product where the choices had implications beyond building the best experience. My own view from dealing with Qualcomm was that they were more concerned with the volume of devices we would commit to up front than with the quality or innovation we would deliver, even early in the process.
The teams working across Windows porting to ARM were favorable to NVIDIA given the choices we had. In particular, NVIDIA was strongest on graphics which were key to the Metro-style experience. The underlying graphics chips mapped closely to our DirectX API and made developing device drivers supporting the animations and effects we needed with the performance we required as straight-forward as could be given all that was new. Aside from that NVIDIA was a joyful partner and great at working with ODMs to supply test hardware.
Ralf Groene (RalfG) led hardware design efforts for WDS. He joined Microsoft with a classic design education and experience at several marquee design houses in Silicon Valley. If we were casting a movie for an industrial designer, Ralf would be perfect. RalfG and the design team were cycling through prototypes of sizes and overall form factors at an incredible pace. Engineering for productivity and typing fell to the design team, with a very strong collaboration with the UEX team to make sure the software could gracefully adapt to the presence or absence of a keyboard and trackpad.
The kickstand was such an amazing choice. Yet I lack a clear recollection of the genesis of it, only that the moment I saw it I was convinced it was an inspired choice. Having a kickstand solved the main problem tablets had, which is how to keep using one when you wanted to put it down or, more importantly, to type. The kickstand could easily have been a gimmick but in fact it became an iconic element and one that served to further the point of view centered on the dual modalities of consuming content hands-free and full-time work on the go. The kickstand would have tradeoffs in real-world usability, especially with the larger devices to come later, but for the first generation I felt it was exactly the tradeoff to make.
The design team iterated with a kickstand, something that had been found on some early media players and phones. Should the kickstand be two small legs, one fatter leg, or something else? Should it support both landscape and portrait? The team settled on a full-width kickstand, almost a fold-out foot that provided a rock-solid level of stability. That stability came from a unique hinge design, created entirely by the mechanical engineers on the team. Opening and closing the hinge had the feel and robustness of a luxury car door, a German one of course. As the hinge closed the air dampening effect would give the motion a soft cushion with a nearly inaudible, but intentionally there, sound of luxury. I once spent time in the lab with the mechanical engineer on the team amazed at just the complexity and number of choices that were considered in the design and fabrication of the hinge. It was a great example of how every small detail is vastly more complex than readily apparent.
The beauty of the hinge and kickstand were that they reduced the thickness of the whole productivity scenario by quite a bit—the plastic cases of the iPad, including Apple’s own cases, were true compromises. The 2015 Folio, which added two layers of material in a complicated origami-folding design, and later the 2018 Magic Keyboard, which was just heavy at 600 grams, or 130 grams more than the iPad itself, were both awkward. The latter keyboard added so much weight and thickness to the iPad that it was more practical to carry a laptop. Whereas Steve Jobs had to sit in a chair holding the iPad, with Surface one could hold it or flip out the kickstand and watch the movie while enjoying popcorn, or have a video conference with the integrated HD webcam angled precisely enough that it would capture your head and not your neck or below when placed on a table.
Important for the productivity posture was the overall size. When working, a screen can never be too big, but when traveling a screen can never be too small. Finding the balance was super important. The math would work out super well to have a screen that was 10.6-inch diagonal but with a widescreen layout of 16:9, the typical HD or Blu-Ray numbers. At 1366x768 the screen was equally optimal for the new Windows 8 snap view to show two apps side-by-side—GT as a stage for Windows 8—and also to display movies full screen without the letterboxing that consumers hated.
There was quite a bit of a debate over 16:10 versus 16:9 aspect ratio and the possible screen resolutions. UEX favored the extra width of 16:9, though 16:10 offered more vertical space for productivity but could not support side-by-side apps. The availability of components and cost of using the UEX preference made it a tough choice. This was also an area where we were going against the iPad trend, which picked a more traditional, even old, 4:3 aspect ratio used on NTSC TV and VGA monitors, at 1024x768, and which was also criticized in reviews for letterboxing widescreen video. The iPad sat between widescreen video and the traditional trade book and Amazon Kindle aspect ratio of 3:2.
From a supply chain and manufacturing perspective, the choice was more complex. The supply chain for screens was following the volume of all manufacturers combined. No single manufacturer could simply order a size not being used by others without committing to volume. In 2011 the dominant size for tablets was a 10.1” screen with 1280x800 resolution, a 16:10 aspect ratio. We saw the 1280x800 resolution as a waypoint, picked for price by the Android tablet ecosystem. It was neither good for productivity in landscape nor good for reading in portrait mode.
We were deeply committed to having a screen that supported the side-by-side view of two apps in landscape. This came with a real tradeoff, however, in the usability as a pure tablet. In other words, even from the choice of screen we were optimizing for productivity. When used as a “reading tablet” the portrait mode orientation of 9:16 was particularly awkward. Reading with one hand holding the device was tiresome, like holding a college chemistry textbook in bed. Additionally, traditional laptops with 13” or 15” screens had moved in large numbers with Windows 7 to the 16:9 and 1366x768 resolution, indicating that resolution was an excellent design point for apps and would be around for years to come.
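The geometry behind these screen tradeoffs is easy to check. As a minimal sketch (the helper function is illustrative; the panel sizes and resolutions are those discussed above), this computes the physical dimensions and pixel density implied by a diagonal, aspect ratio, and resolution:

```python
import math

def panel_dimensions(diagonal_in, aspect_w, aspect_h, px_w, px_h):
    """Return (width_in, height_in, ppi) for a panel with the given
    diagonal size, aspect ratio, and horizontal/vertical resolution."""
    # Scale the aspect-ratio triangle so its hypotenuse equals the diagonal.
    unit = diagonal_in / math.hypot(aspect_w, aspect_h)
    width, height = aspect_w * unit, aspect_h * unit
    return width, height, px_w / width

# The 10.6" 16:9 panel at 1366x768 chosen for GT
w, h, ppi = panel_dimensions(10.6, 16, 9, 1366, 768)
print(f'GT panel: {w:.2f}" x {h:.2f}" at {ppi:.0f} PPI')

# The original iPad's 9.7" 4:3 panel at 1024x768, for comparison
w2, h2, ppi2 = panel_dimensions(9.7, 4, 3, 1024, 768)
print(f'iPad:     {w2:.2f}" x {h2:.2f}" at {ppi2:.0f} PPI')
```

The 16:9 panel works out to roughly 9.24 by 5.20 inches—a long edge wide enough to anchor a near-full-size keyboard, at the cost of a tall, narrow 9:16 portrait mode.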
The screen had two other technical issues to work through with significant impact on the experience. First, the core display technology for LCD screens was in the midst of a transition from MVA (Multi-Domain Vertical Alignment) to IPS (In-Plane Switching) with the latter being the newer technology. Going with the newer technology would be preferred as a basis for applying a touch sensor, but that further constrained the potential supplies and sizes.
Second, the display panel needed an extra manufacturing stage to attach a touch panel sensor. There were two approaches here as well. The standard approach, again used for Android tablets, was known as air-bonding. In this technique the touch panel does not directly touch the underlying display panel, leaving a small gap that eases manufacturing. Unfortunately, the gap also introduces parallax, an effect in which the point you touch and the point where the image appears do not quite line up. This is familiar from check-out counters and ATMs, which prefer the manufacturability and low cost of this approach. While cheaper, it is a disaster for precision work on a PC. The newer approach, direct bonding, used by Apple and more premium tablets, bonded the touch sensor directly to the display. This was more expensive and also had a non-trivial defect rate, further increasing costs.
The way you know from the supply chain that you are asking for something difficult is that the price goes up and delivery slips further out. The suppliers, all singing from the same hymnal, strongly encouraged a 16:10 screen at 1280x800. Their view was that this was the volume tablet resolution. After many trips and many meetings, we were able to secure a high-volume supply of the 10.6”, 1366x768, IPS, direct-bonded screen. This was a tough choice and added early risk, but once you lock in with a supplier as a volume partner, risk finds a way of decreasing. I suspect over time, even at 10.6”, we would have gone with a more comfortable aspect ratio, but by then we would have had more answers on windowing and productivity than we had with Windows 8. I’m getting ahead of myself.
Productivity without a keyboard is not really productivity. We were not confused on this point the way they were in Cupertino. Productivity without a mouse or trackpad, on the desktop, was not even possible. Because Windows RT had the desktop and desktop Office, a trackpad was required. The 10.6-inch diagonal screen would create a case with a 10.1-inch long edge, which by no accident was wide enough for a nearly full-size home row of keys for touch typists. This could be a huge advantage over keyboards for the iPad, which was only 9.5 inches across, making a big difference in key size and spacing. Our screen size also allowed for arrow keys and a row of function keys alongside the trackpad. These small differences bring a world of benefits when designing devices with which people have such intimate reactions. This is decidedly different from a desktop PC or even a large laptop. There’s something about a device that must be held that makes small decisions so much more important.
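As a back-of-the-envelope aside (mine, not from the original team), the screen tradeoff above is simple geometry: for a fixed diagonal, a wider aspect ratio buys horizontal inches at the expense of height. A minimal sketch in Python:

```python
import math

def screen_dimensions(diagonal, ratio_w, ratio_h):
    """Return (width, height) in the diagonal's units for a given aspect ratio."""
    hyp = math.hypot(ratio_w, ratio_h)  # diagonal of the unit aspect rectangle
    return diagonal * ratio_w / hyp, diagonal * ratio_h / hyp

w, h = screen_dimensions(10.6, 16, 9)        # Surface's 16:9 panel
print(f"16:9 at 10.6 in: {w:.1f} x {h:.1f} inches")

iw, ih = screen_dimensions(9.7, 4, 3)        # original iPad's 4:3 panel
print(f"4:3 at 9.7 in:   {iw:.1f} x {ih:.1f} inches")
```

The roughly 9.2-inch-wide panel, plus bezel, is what yielded the case's 10.1-inch long edge with room for a near full-size home row; the iPad's 4:3 panel comes out materially narrower despite a similar diagonal.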
The keyboard presented a unique opportunity to innovate to maximize portability. Panos, himself an expert in mechanical keyboards and part of Microsoft’s own efforts in innovative keyboards, was well-positioned to develop the two keyboards we would introduce with GT.
Fundamental to the keyboard was the idea that it would be faster and more accurate than touch typing on a glass screen and would include a trackpad for precision, or “professional,” pointing, which so many reviews of iPad noted was lacking. Since GT had a kickstand, the keyboard did not need to double as a cover for the back, but when folded over the front it would protect the screen. The underside of the keyboard was covered with a layer of what I always termed northwest fleece, with a soft woolen feel when carrying GT around in your hand.
Our folding keyboard could make an easy transition from productivity and typing to reading or watching while lounging around. The keyboard itself was a touch-based invention from Steven Bathiche (StevieB), the resident scientist/engineer leading all things touch and display, and a genius. We would also add a very thin mechanical keyboard, one of the thinnest ever released to market. Our hearts would be with the touch keyboard at launch owing to its productivity and on-the-go scenario.
StevieB’s ultrathin keyboard invention was essentially a touch panel with ever-so-slightly raised key outlines impressed on the panel. The electronics were created by laminating several layers of materials, including a touch sensor and all the “wiring,” together in a hot press machine—sort of a touch screen sandwich. The keyboard, called Touch Cover, was a mere 3.2mm thick and weighed about 7 ounces, under 200 grams. Given the dimensions of the screen, Touch Cover was able to incorporate a trackpad on par with many small laptop trackpads and a complete set of laptop keys, including dedicated keys for Windows 8 charms along the top row, which also doubled as traditional PC function keys. In US English even the F and J keys had slight outward impressions so traditional typists could find the home row. There would be many skeptics about the keyboard when it came to typing proficiency. While there were many risks across the entire effort, at the time Touch Cover seemed like the riskiest part. It would be so immediately visible and open to snap opinions. The press that typed for a living were used to their preferred “feel” when it came to laptop keyboards, often a sore spot in just about every Windows laptop review.
GT had a beautiful hinge and an incredible touch keyboard, but how to attach the keyboard posed another challenge. Because of the hinge the cover could not be attached to the back of GT, but that would have been dumb anyway—adding weight with no purpose was what we saw on the iPad. Once again, the mechanical engineers had a go at the problem. They put magnets to work. Not just any magnets, but a series of magnets of exactly the right strength to support the device, even swinging it from the keyboard, while remaining easy to remove. The magnets did not just attach the keyboard to the tablet; they also served as the connectors for signal and power. Attaching the keyboard had to align the connectors perfectly or it would not work.
The magnetic keyboard was a more difficult path to perfection and reliability. Upon first seeing it, just as most others did, I was skeptical. Would it fall off? Would it align properly even when connected in a sloppy manner? Would flipping it around to use the device as just a tablet prove “goofy” or would it really work? Would Windows correctly enable the on-screen keyboard at the right time and get out of the way entirely when a keyboard was attached? Would it work just as well for the thin mechanical model?
The process of connecting the keyboard to the device proved to be not just reliable but something of a signature. The first time we showed Surface to the team that would create the television commercials, they immediately connected with the “click” sound the engineering team worked so hard to get right. That click would become the centerpiece of the initial campaign, along with the profile of GT and the keyboard. A subdued version of a magnetic connector, also with a click, would be used for the very small charging adapter. Ironically, Apple’s pioneering MagSafe connector, so popular on MacBooks, was not used on iPad, which took its lead from iPhone for charging. Even today Apple continues to struggle with magnets when it comes to covers and keyboard cases for the iPad. The new 2022 iPad typing portfolio with a kickstand is kind of a mess.
The removable keyboard provided several advantages. To compete in the “pure” tablet market, the keyboard could be priced as an option and reviewed as an accessory rather than a requirement. It also permitted personalization through choice of colors, since the laminating process could substitute any color materials for the back or front. The back of the Touch Cover could be customized in both material and look, for example with corporate logos or art, as we later offered. Finally, StevieB’s innovative touch panel was not restricted to keys and could be used to create any touch surface. Since it could be removed easily, we envisioned the potential for specialized Touch Covers dedicated to specific applications. One example we showed early on was a synthesizer drum app mated to a custom keyboard. At a small, private event held one night in Los Angeles I even had a chance to watch actor/musician Zac Efron of High School Musical have fun with the prototype music-generating cover.
The materials choices for the inner frame and device case were almost always where the OEMs made choices that were best for the bottom line—plastic provided low cost, light weight, rigidity, ample room for cooling, and agility across component and peripheral changes. This was where Microsoft’s ability to make a significant investment could make a real difference in the final product. The engineering challenge with case materials is the triangle of cost, weight, and rigidity. Cost is not only the cost of the material, but the manufacturing cost of making the case, with all the curves, holes, and cutouts that make it a computer. Materials can be inexpensive and lightweight but too flexible to be durable—the screen is glass and needs structure to keep it from flexing and breaking. Materials can be rigid and lightweight but cost a tremendous amount, such as ceramics used in fighter jets. Aluminum is lightweight and relatively rigid, but to bring the cost down Apple invested a huge amount of upfront capital to take blocks of aluminum and mill them into a PC.
We needed the device to be as thin as possible and from the start we considered every fractional millimeter we could save. We knew that bringing over Windows in one product cycle, including the desktop, would bring real challenges to our ability to compete with the nine hours of video playback possible on the iPad. We needed every available millimeter for battery. An innovation in materials could prove a game-changer.
In research, the materials scientists became intrigued by a relatively new process of injection molding a magnesium alloy. Such a material was expensive and the manufacturing process complex, but the resulting parts were lighter than machined aluminum, extremely rigid and strong, and created by more flexible molding. At this point, the injection molding process had only been used for small parts such as watch frames or jet engine components, but the materials partner, Mike, and Panos’s team thought it possible to use for the much larger cases. The material could be readily colored via a permanent and robust vapor deposition process, which afforded other opportunities.
MikeAng, a mechanical engineer by training and certified alloy welder, dove into the process, visiting the factory in Asia. He returned with movies of cases being injection molded that looked like scenes from 20th century industrial America—sparks flying, molten flows of metal, giant machine presses.
Betting GT on this new process was one of the more uncertain aspects of the hardware design, and also, relatively speaking, very expensive. The high upfront costs made us all nervous. With any upfront cost, called NRE, or non-recurring engineering cost, the only way to justify it is to make a lot of the product, spreading the cost across many devices. That was certainly our intent.
The material would get the name VaporMg, pronounced vapor mag. Finished in a smooth, so smooth, black-gray, the material was another manifestation of the collective efforts of design, materials, and manufacturing coming together. While we originally planned to make the entire case out of VaporMg, supply constraints resulted in a more traditional aluminum frame with VaporMg used more sparingly. While that resulted in a bit of increased weight, the resulting scaled manufacturing and cost reduction were a good tradeoff.
There’s no doubt that VaporMg was the most extravagant choice in the whole project and the one I felt most over-extended in considering. In a big company with plenty of money, it is not uncommon to see a branch into new areas take on almost comically bad cost controls relative to industry norms. Without experience or baselines to compare against, and with all the excitement of “those other people doing this must be dumb,” it is easy to fall into that trap. Across the product, Mike, Panos, and team had extremely good controls and paid constant attention to the BOM, or bill of materials, and NRE. Mike was a product of our frugality in Office and brought that to this project and team. The VaporMg choice was one I felt we should go with even though it was so uncharacteristic for me or much of Microsoft.
One thing we considered was reusing the materials process and plant capacity with other OEMs, licensing it to them for whatever devices they wanted to build. Acting as a source for various components was a step we also considered for trackpads and touchscreens, as we saw OEMs unwilling or unable to take on the NRE costs to create competitive laptops. Once again, though, I probably failed to consider that they viewed such costs as unnecessary in their share battle, competing with other OEMs more than with Apple.
VaporMg was so strong that the team turned a standard GT chassis into a skateboard, on a lark. As a former skateboarder this thrilled me to no end. At one of the many press events hosted on campus offering a behind-the-scenes look at developing hardware, I skated around the lobby of Studio A. Not to worry, I was wearing a helmet though it was a borrowed bike helmet we located just for the photos. We snapped some photos that continue to live on. To further demonstrate the strength of the material, we showed off the drop-test machine that simulated dropping a tablet with different forces and angles and even directly on the kickstand—each time GT performed admirably. Protected by the Touch Cover, the combination felt relatively indestructible. The screen was securely protected without any extra casing on the back maintaining the thin profile and light weight.
Finally, productivity for premium laptops and tablets circa 2012 still depended on wires, or more specifically dongles. Apple’s MacBook Air approach of wireless everything was conceptually great but practically a pain, certainly in the early days. Carrying around dongles to connect to projectors or USB devices was annoying and error prone. As a tool for productivity, GT needed to show up at a meeting, get handed the slides on a USB drive, pop the drive in, launch PowerPoint, and then connect to a projector, or handle devices like microphones and speakers. As a stage for Windows 8, we had optimized this type of flow from a performance and user interface perspective—get to the user’s work product and get to work as quickly as possible.
Mike and Panos framed GT as a device that should connect in the way people needed it to connect. GT had a standard USB port, standard audio output, and a mini-HDMI port in addition to the dedicated magnetic-charging connector. The USB port was rather tricky, as it set a minimum thickness that was not an issue on the iPad, which lacked a universal USB connector. The fat USB port was a bit ugly, and some on the industrial design team referred to it as a “missing tooth” in an otherwise sleek profile. At one point we came so close to the edge of the connector standard that we risked losing the trade-group designation and permission to use the logo. All this was in addition to strong support in hardware and software for the current wireless protocols for Wi-Fi and Bluetooth, including support for wireless speakers and displays. This array of connectivity defined the soul of the device and shouted out a point of view: the device was not a peripheral but a full computer.
Power adapters had become the bane of my existence as someone who used a variety of devices at home, at work, and in travel. It was still too early for a standard connector—Apple was still using the wide-pin connector, which was painful and finicky despite the super nice magnetic connector on MacBooks. GT got an ultra-skinny magnetic connector. I was, however, much more interested in the power brick. I could not stand the typical PC setup with a three-foot cord to the PC, then a brick, then another three-foot cord to the wall. It turned out people in Japan disliked those too, and Yodobashi Camera came to the rescue with an aftermarket model, a single 2-meter cable with a brick with folding prongs at one end. I stocked up on those. I really wanted GT to have an adapter with simple folding prongs. I wanted this so much I didn’t even bother to ask about non-US plugs. As a constant traveler, I was also cursed with hotel rooms with too few outlets. Back then I almost always had to unplug a lamp, which usually involved moving furniture around. I often traveled with an aftermarket phone charger that had an extra pass-through outlet to charge a USB device. I needed this because PC USB ports did not charge when the PC went into standby, which I usually discovered at 6 a.m. local time.
Okay, so I meddled once after complaining one too many times.
One day RalfG invited me over to the studio and brought me through dozens of models for power adapters. I knew as soon as I got to the studio that Panos was managing me, but in a way that was entirely okay given how close we’d become. As we walked from 3-D model to model, I opined and envisioned about what it would be like in a hotel room. They explained to me all sor