Intel chief technology officer Justin Rattner reflects on how Intel and Apple worked together during the last six years of Jobs’ life, using engineering innovation to create products we never knew we needed but that were indispensable the moment they came to market.


Author: Justin Rattner, chief technology officer, Intel Corporation USA

Steve Jobs’ death in 2011 caused much of the world to stop and reflect on his extraordinary ability to use engineering innovation to create products we never knew we needed, but that were indispensable the very moment they came to market. It seems appropriate, then, to reflect on how Intel and Apple worked together over the last six years of Steve’s life to bring just a few of his powerful visions to life.

Few people know how many times Apple came close to moving to Intel architecture. It had happened so many times that the Intel-specific changes to the Mac OS were well documented by Intel’s OS engineering teams. Each time Apple thought it might be time to switch from IBM’s PowerPC architecture to Intel, the Intel teams would dust off the source code and prepare to engage. Each time, of course, they were left at the altar – but, never feeling jilted, they re-archived what they had done and waited for Apple to call again.

It was Intel’s Centrino platform architecture that really got Apple thinking it was time to switch. IBM and Motorola simply could not match Intel’s processor performance and energy efficiency. By the middle of the last decade, the gap between Intel and IBM microprocessors was too great for Apple to ignore. A team was quickly assembled to bring the Apple product line to Intel architecture.

I will never forget the day Steve came to Intel to share the secret with Intel’s top 100 leaders. His very presence in a small auditorium in Santa Clara had the crowd buzzing with excitement. When Steve told us that the deed was done, and Apple computers would henceforth be powered by Intel, it was a breath-taking moment in our history.

But it was what Steve said about working with Intel that made us feel very special. He said the Apple engineers had expected their Intel counterparts to be very corporate, very inflexible in their approach to product design. To their surprise and great delight, the Intel engineers shared the same passion, the same technical zeal to make the new Intel-based Macs the best they could possibly be.

It was Steve’s ability to understand what drove engineers to greatness that made him so exceptional. He understood that despite our discipline and training in mathematics and science, we love what we do – and we do it not for the money, but for the sheer pleasure in seeing the fruits of our labour make the world a better place to live.

Those early Intel Macs led to many more collaborations. The MacBook Air is one such example. The folks in Cupertino wanted to build by far the thinnest notebook computer ever seen, but their work was frustrated by the thickness of the processor package. Fortunately, Apple’s requests finally reached our Assembly and Test Technology Development (ATTD) team in Arizona, which had invented a very thin package but could not get Intel’s own product planners to embrace it. Suddenly, ATTD had a customer that saw their ultra-thin package as the basis for a new class of notebook computer. The MacBook Air was suddenly off the back burner and headed to market.


Steve Jobs

The latest bit of Intel magic to make its way into Apple’s product family is called Thunderbolt. If you have bought any new Macintosh recently, you have a Thunderbolt connector somewhere on the case. What makes Thunderbolt unique for Intel is the fact that it is the first technology to go straight from Intel Labs, the research team I lead at Intel, directly to product.

Once Apple learned we had a cheap and versatile 10 gigabit per second I/O technology, they made it their number-one priority. Even my boss, Intel’s CEO Paul Otellini, was stunned when Steve said that Thunderbolt was their top technology request of Intel in 2009. Steve honoured us at the product launch by not only mentioning the Intel contribution to Thunderbolt, but by giving the technology its official name: Thunderbolt by Intel, courtesy of Steve Jobs and Team Apple.

Over much of the period I have just discussed, research at Intel was undergoing some extraordinary changes as well. To understand the challenges we faced, I need to explain a little bit of history that began even before Intel was founded. In the 1960s, one of the hottest technology companies around was Fairchild Semiconductor. It was headed by the legendary Robert Noyce, the inventor of the planar integrated circuit.

Running its research lab was another chip industry legend, Gordon Moore, the father of the famous ‘law’ that governs the advancement of semiconductor technology. Gordon’s lab was famous for inventing all kinds of breakthrough chip technologies, but Fairchild Semiconductor, as a company, struggled to move those great ideas from the research lab and into the factory.

When the time came to found Intel, Noyce and Moore, along with another chip industry legend, Andy Grove, agreed that there would be no research at Intel. New chip technology would be developed right on the factory floor. Intel was thus born with an anti-research sentiment, and it stayed that way for at least its first 15 years. The anti-research attitude was broken in the mid-1980s, when a small research team was formed under John Caruthers to look a bit further out in time.

Components Research, as it was called, was chartered to work on processes and devices two technology generations (or nodes) ahead of current production. It still exists today, employing a little more than 100 people, yet it is responsible for most of the recent semiconductor breakthroughs you may have read about. These include strained silicon, high-k/metal gate transistors and, most recently, the tri-gate or 3D transistor we are putting into production at 22 nanometres.

Intel did not form the equivalent of Components Research for microprocessors for another ten years, but the Microprocessor Research Lab did not get off to a very good start. When I took charge of it in late 2000, not one major microprocessor feature had been invented in MRL. Much of what it had created was being licensed to companies outside of Intel. I put an immediate stop to that and insisted that we plan for success in any new research. If there was not an obvious landing zone inside of Intel for a new technology, we should not even start the research.

That realisation led to a transformation in our thinking about many aspects of non-semiconductor research at Intel. It launched a process that we would later understand as a fundamental rethink of what it meant to do research in the industrial setting.


After much debate, we came to understand that the low hit rate problem was due to timing differences between research and product development. Too often, a research project would reach proof of concept, but the development teams would be tied up getting a new product out the door. After months of waiting, the research team would move on to other work and the motivation to transfer the previous results would fade.

Similarly, when a development team would come looking for new ideas for their next product, the researchers were busy with other work and had little interest in returning to what they viewed as old work. We came to refer to this problem as the ‘valley of death’, given its remarkable ability to kill perfectly good technologies before their time.

The solution to the problem, as it turned out, was right under our noses. Of particular relevance was the way our Components Research and semiconductor Technology Development teams go about creating the next generation of semiconductor technology using a process they call pathfinding. The key to pathfinding is assembling a team made up of both researchers and developers for a sufficient period of time, typically 12-18 months, to effect the technology transfer.

Our challenge was to scale out the pathfinding process to cover literally dozens of new technologies coming out of Intel Labs. Despite our fears, adapting pathfinding to the broad areas of research we pursue in Intel Labs has been a remarkable success. The process is so successful that today the product groups literally fight over the pathfinding slots. It is also not unusual for more developers to be assigned to a pathfinding project than researchers.

At any moment, we have over 50 distinct joint pathfinding projects between Intel Labs and the various product development teams. We also raised the bar by which we define successful pathfinding from simply transferring the technology to actually impacting the product roadmaps. We even included a joint pathfinding objective in our employee bonus programme.


An early Intel microprocessor from 1973, the year Justin Rattner started with the company

While roadmap impact is certainly a critical part of being a 21st century industrial research lab, it is not the whole story. To better understand what works and what does not work in modern industrial research and how it differs from academic research or government research, we initiated a benchmarking effort with various multi-national, industrial research labs around the world.

The study included all your favourite labs: IBM Research, Microsoft Research, Google Research (a misnomer, by the way) and GE Global Research – about a dozen different labs in all. One thing we learned was the importance of balancing research directed at existing product lines against research aimed at exploring technologies with no immediate business relevance. The resulting 50-50 split has been in place for the last four years and has worked remarkably well.

While it would be easy to argue for a much higher spend on the business-directed side, we feel we create much more long-term value for Intel by keeping exploratory research in equal proportion.

Another thing we learned was the role research plays in reducing long-term development costs. Product failures in our business are extremely expensive. A typical mainstream microprocessor may cost €500 million to develop. That cost is on top of the billions of euro it costs to build the factories to manufacture such a microprocessor. Intel can ill afford a mistake of that scale, yet they can and do happen, more often than you might suspect.

What a research laboratory can do is test out those new ideas before they become part of a product, validate the good ones and weed out the bad ones. This is called ‘failing fast’ and it is something we do with great pride. So crucial are these validations prior to product development that we even have an award for it. We call it the ‘First Penguin’ prize. It comes from the observation that it always takes that one penguin to jump off the ice floe for the rest to follow. Our penguins willingly make that sacrifice and we are delighted to recognise their risk taking.

One consequence of having a substantial amount of exploratory research taking place is the need to find other avenues for these technologies to reach the marketplace. One such outlet is open source software. One of our great successes has been our Open Computer Vision library. OpenCV has become the de facto standard toolset for doing computer vision research and building actual applications. We even made a little money by delivering a set of optimised software libraries for the kernel vision functions.
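To give a feel for what those kernel vision functions do, here is a toy illustration: a 3x3 box blur written in plain Python. This is only a sketch of the idea – OpenCV implements operations like this (and far more sophisticated ones) in heavily optimised native code.

```python
# Toy 3x3 box blur -- the kind of per-pixel "kernel" operation that
# computer vision libraries such as OpenCV provide in optimised form.
def box_blur(image):
    """Average each pixel with its 3x3 neighbourhood (edges clamped)."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total, count = 0, 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        total += image[ny][nx]
                        count += 1
            out[y][x] = total // count  # integer average of the neighbourhood
    return out
```

A real library fuses and vectorises such loops; expressing them as simple, well-defined kernels is what makes that optimisation possible.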

Most recently, we released a new tool called Parallel JavaScript into open source. It allows web programmers to harness the power of multi- and many-core processors within the browser. The initial version for the Mozilla Firefox browser had over 1000 downloads in the first few weeks it was available. Versions for other browsers are in development.
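To give a flavour of the programming model, the sketch below uses ordinary Array methods as a stand-in for the library’s data-parallel API: the programmer supplies a side-effect-free per-element function, which a parallel runtime is then free to fan out across cores. The names here are illustrative, not the actual Parallel JavaScript API.

```javascript
// Data-parallel style: the work is a pure per-element function with no
// shared mutable state, so a parallel runtime could safely split the
// array across cores. Array.prototype.map stands in for the real API.
const brighten = (pixels, amount) =>
  pixels.map(p => Math.min(255, p + amount)); // pure elemental function

const brightened = brighten([10, 120, 250], 20);
// brightened is [30, 140, 255] -- values clamped at 255
```

The discipline of writing elemental functions without side effects is what lets the runtime choose sequential, multi-core or vectorised execution without changing the program’s meaning.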

For other technologies, we have recently established a venturing practice within Intel Labs. While few technologies represent compelling venture opportunities, those that do usually require substantial financial investment before they can be brought to market. Just to calibrate you on the size of such investments: a significant research effort for us represents an investment in the range of €1-10 million per year, while taking just one of those technologies to product requires €10-100 million per year.


Given that fact, we look for opportunities to enter markets where the total available market will reach €1 billion within three years. There are not many such ventures, but the ones we are pursuing are truly compelling.

In addition to venturing and validating, there is one more role for the 21st-century industrial research lab and that is the one we call ‘visioning’. Thanks to people like Steve Jobs, it is no longer sufficient to put a bunch of technical ideas together and call it a competitive product. The days of ‘data sheet’ product design are quickly fading in the IT industry.

To either be Apple after Steve or to compete with Apple after Steve, products and product lines need to spring from a vision of what the user needs to be more productive, to have more fun, and to be more social. Technology has reached the point where we as users should not settle for anything less. We call this experience-driven design, and it is really reshaping the way we think about and plan new products at Intel.

Unfortunately, with perhaps one or two exceptions, experience-driven design is not something they teach you in engineering school; indeed, the disciplines critical to it are not taught in engineering school at all. The experts in the field come out of anthropology and the social sciences. Disciplines such as ethnographic research and behavioural economics, along with human interface design, are what it takes to compete here.

To create these visions for future products based on experience-driven design, we have added a new research division to Intel Labs. It is headed by an ethnographer named Genevieve Bell, who is also an Intel Fellow. She leads a team of about 100 researchers charged with creating these visions, prototyping them and formally testing them. They then take each vision into pathfinding with a particular product group to turn that vision into reality. Our healthcare products, our Classmate family of educational PCs and our Smart TV products are all a result of this methodology. We expect most, if not all, of Intel’s products will benefit from it in the future.

These three dimensions – visioning, validating and venturing – form, in our minds, the core of 21st-century industrial research. While Bell Labs may have been the model of 20th-century industrial research, and there are still a number of cases of companies chasing that vision, it is increasingly dated and out of step with today’s fast-moving information and communications technologies. We are trying to set a new course for the 21st century and we hope many other companies will join us in the endeavour.

Justin Rattner is the corporate vice president and the chief technology officer of Intel Corporation. He is also an Intel Senior Fellow and head of Intel Labs, where he directs Intel’s global research efforts in processors, programming, systems, security, communications and user experience and interaction. Rattner joined Intel in 1973. He was named its first principal engineer in 1979 and its fourth Intel Fellow in 1988. He holds BS and MS degrees from Cornell University in electrical engineering and computer science.

This article is based on Justin Rattner’s presentation at the Irish Academy of Engineering – Intel Labs Europe ‘Lecture Series on Engineering Research and Innovation’.