This book will help you accelerate your progress without expensive personnel or technological changes. It starts with raising your standards, aligning your people and culture, sharpening your focus, picking up your pace and transforming your strategy.
Most good companies operate well within silos. Great companies do that and also operate incredibly well across silos
A high-trust environment is crucial. Admit failures, be consistent and deliver on what you promise, and focus on solutions to problems, not on people
No separate customer success teams – the person and team who own a customer need to be responsible for their happiness. Align incentives accordingly
Biz dev is for more unusual customer contracts; sales is for highly repeatable sales processes. Overinvest in lead gen before you invest in a big sales team: lead gen first, then a repeatable sales process, then the sales team. Once you have that, invest aggressively in sales
Growth has been shown to correlate strongly with value created. It should be the key focus for software companies, especially smaller, younger ones
You have to know your growth targets and what your growth levers are. What can you do to grow faster? Why not do that?
Execution is king but will only go so far if the strategy is off. Make sure that your operating executives are also the head strategists for their units. The CEO must also be the chief strategist
Hire drivers, not passengers. Get the wrong people off the bus
Attack weakness, not strength
Create a cost advantage or neutralize someone else’s
It’s much easier to expand a market than create a new one
Early adopters buy differently than late adopters. Aim for the early adopters first, but then use those examples to lure in the late adopters
Stay close to home in the early going – you can give more resources and attention to local customers and get better, more frequent feedback
Build the whole product or solve the whole problem as fast as you can
Architecture is everything
Prepare to transform your strategy sooner than you expect – as a leader, you need to operate well ahead of the current dynamic
Build a strong culture – it matters more than you think
Culture is the persistent actions, beliefs, and behaviors of a group of people; it sets the norms and standards
This book is about social change, moving from information to innovation. “Information is a difference in matter-energy that affects uncertainty in a situation where a choice exists among a set of alternatives. One kind of uncertainty is generated by innovation, defined as an idea, practice, or object that is perceived as new by an individual or another unit of adoption. An innovation presents an individual or an organization with a new alternative or alternatives, as well as new means of solving problems. However, the probability that the new idea is superior to previous practice is not initially known with certainty by individual problem solvers. Thus, individuals are motivated to seek further information about the innovation in order to cope with the uncertainty that it creates. Information about an innovation is often sought from peers, especially information about their subjective evaluations of the innovation. This information exchange about a new idea occurs through a convergence process involving interpersonal networks. The diffusion of innovations is essentially a social process in which subjectively perceived information about a new idea is communicated from person to person. The meaning of an innovation is thus gradually worked through a process of social construction.”
Diffusion is a social matter even more than a technical one – how potential adopters view a change agent affects their willingness to adopt new ideas
A technological innovation embodies information and thus reduces uncertainty about cause-effect relationships in problem solving
Attributes that help speed diffusion
(Perceived) Relative advantage – the degree of improvement an innovation offers over what precedes it; perceived advantage matters more than objective advantage
Many adopters want to participate actively in customizing an innovation to fit their unique situation. An innovation diffuses more rapidly when it can be reinvented, and its adoption is then more likely to be sustained
The importance of taking people’s perceptions of an innovation into account cannot be overstressed
Rationality = using the most effective means to reach a goal
Fastest routes to adoption come when felt needs are met
Mass media has a short, spiky effect on adoption whereas interpersonal communication is more sustainable
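The short-spike-vs-sustained dynamic above is close to the classic Bass diffusion model (my illustration, not from the book): a coefficient p captures mass-media ("external") influence and q captures interpersonal ("internal") word of mouth. A minimal sketch, with p and q set to commonly cited ballpark values:

```python
def bass_adoptions(p=0.03, q=0.38, periods=20):
    """Per-period adoption fractions split into (external, internal) sources.

    p: external (mass-media) influence coefficient
    q: internal (interpersonal/word-of-mouth) influence coefficient
    """
    cumulative = 0.0  # fraction of the population that has adopted so far
    history = []
    for _ in range(periods):
        remaining = 1.0 - cumulative
        external = p * remaining               # reached directly by mass media
        internal = q * cumulative * remaining  # persuaded by prior adopters
        history.append((external, internal))
        cumulative += external + internal
    return history

history = bass_adoptions()
```

At launch the external term dominates (no one has adopted yet, so there is no word of mouth); as adoption accumulates, the interpersonal term takes over and sustains the curve, matching the note's claim.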
Compatibility – degree to which an innovation is perceived as being consistent with the existing values, past experiences, and needs of potential adopters
This dependence on the experience of near peers suggests that the heart of the diffusion process consists of the modeling and imitation by potential adopters of their network partners who have previously adopted. Diffusion is a very special process that involves interpersonal communication relationships
One of the most distinctive problems in the diffusion of innovations is that the participants are usually quite heterophilous. Homophilous situations slow the spread of an innovation, as these groups tend to socialize “horizontally” and don’t break through to other groups or classes
The structure of a social system can facilitate or impede the diffusion of innovations. The impact of the social structure on diffusion is of special interest to sociologists and social psychologists, and the way in which the communication structure of a system affects diffusion is a particularly interesting topic for communication scholars. Katz remarked, “It is as unthinkable to study diffusion without some knowledge of the social structures in which potential adopters are located as it is to study blood circulation without adequate knowledge of the veins and arteries.”
Opinion leaders thus exemplify and express the system’s structure. These are incredibly powerful and valuable members to have on your side
A communication network consists of interconnected individuals who are linked by patterned flows of information
Complexity – degree to which an innovation is perceived as difficult to understand and use
There are 5 main steps in the innovation-decision process – knowledge, persuasion, decision, implementation, and confirmation
Trialability – degree to which an innovation may be experimented with on a limited basis
Observability – degree to which the results of an innovation are visible to others
Salience = degree of importance to an individual, want more information and will tell others about it
Social marketing – segmentation and formative research lead to effective messages, positioning, price, communication channels
Tactics to reach critical mass
Target highly respected individuals in a system’s hierarchy for initial adoption of the interactive innovation
Individuals’ perceptions of the innovation can be shaped, for instance, by implying that adoption of it is inevitable, that it is very desirable, or that the critical mass has already occurred or will occur soon
Introduce to intact groups whose members are likely to be relatively more innovative
Incentives for early adoption of the interactive innovation should be provided, at least until critical mass is reached
Look for change agents and innovation champions who stand behind your product and who throw their support behind you, thus overcoming the indifference or resistance that the new idea may provoke
“One of the greatest pains to human nature is the pain of a new idea. It…makes you think that after all, your favorite notions may be wrong, your firmest beliefs ill-founded…Naturally, therefore, common men hate a new idea, and are disposed more or less to ill-treat the original man who brings it.” – Walter Bagehot, Physics and Politics
Routinization occurs when the innovation has become incorporated into the regular activities of the organization and loses its separate identity. Sustainability, a closely related concept to routinization, is defined by the degree to which an innovation continues to be used after the initial effort to secure adoption is completed. Sustainability is more likely if widespread participation has occurred in the innovation process, if reinvention took place, and if an innovation champion was involved. This fifth stage, routinization, marks the end of the innovation process in an organization
As much as change is about adapting to the new, it is about detaching from the old – Ronald Burt
Seems like the “godfather” to such books as those Geoffrey Moore and others have written. Learning about the attributes that help speed innovation – perceived relative advantage, compatibility, complexity, trialability, and observability – was worth the price of admission
Banning the inevitable usually backfires. Prohibition is at best temporary, and in the long run counterproductive. A vigilant, eyes-wide-open embrace works much better. My intent in this book is to uncover the roots of digital change so that we can embrace them. Once seen, we can work with their nature, rather than struggle against it. The 12 forces are Becoming, Cognifying, Flowing, Screening, Accessing, Sharing, Filtering, Remixing, Interacting, Tracking, Questioning, and then Beginning.
Our greatest invention in the past 200 years was not a particular gadget or tool but the invention of the scientific process itself. Once we invented the scientific method, we could immediately create thousands of other amazing things we could have never discovered any other way. This methodical process of constant change and improvement was a million times better than inventing any particular product, because the process generated a million new products over the centuries since we invented it. Get the ongoing process right and it will keep generating ongoing benefits. In our new era, processes trump products.
That bears repeating. All of us—every one of us—will be endless newbies in the future simply trying to keep up. Here’s why: First, most of the important technologies that will dominate life 30 years from now have not yet been invented, so naturally you’ll be a newbie to them. Second, because the new technology requires endless upgrades, you will remain in the newbie state. Third, because the cycle of obsolescence is accelerating (the average lifespan of a phone app is a mere 30 days!), you won’t have time to master anything before it is displaced, so you will remain in the newbie mode forever.
However, neither dystopia nor utopia is our destination. Rather, technology is taking us to protopia. More accurately, we have already arrived in protopia. Protopia is a state of becoming, rather than a destination. It is a process. In the protopian mode, things are better today than they were yesterday, although only a little better.
Although Nelson was polite, charming, and smooth, I was too slow for his fast talk. But I got an aha! from his marvelous notion of hypertext. He was certain that every document in the world should be a footnote to some other document, and computers could make the links between them visible and permanent. This was a new idea at the time. But that was just the beginning. Scribbling on index cards, he sketched out complicated notions of transferring authorship back to creators and tracking payments as readers hopped along networks of documents, in what he called the “docuverse.”
We don’t know what the full taxonomy of intelligence is right now. Some traits of human thinking will be common (as common as bilateral symmetry, segmentation, and tubular guts are in biology), but the possibility space of viable minds will likely contain traits far outside what we have evolved. It is not necessary that this type of thinking be faster than humans’, greater, or deeper. In some cases it will be simpler.
Our most important mechanical inventions are not machines that do what humans do better, but machines that can do things we can’t do at all. Our most important thinking machines will not be machines that can think what we think faster, better, but those that think what we can’t think.
What are humans for? I believe our first answer will be: Humans are for inventing new kinds of intelligences that biology could not evolve. Our job is to make machines that think different—to create alien intelligences.
“Right now we think of manufacturing as happening in China. But as manufacturing costs sink because of robots, the costs of transportation become a far greater factor than the cost of production. Nearby will be cheap. So we’ll get this network of locally franchised factories, where most things will be made within five miles of where they are needed.”
Now we are transitioning into the third age of computation. Pages and browsers are far less important. Today the prime units are flows and streams. We constantly monitor Twitter streams and the flows of posts on our Facebook wall. We stream photos, movies, and music. News banners stream across the bottom of TVs. We subscribe to YouTube streams, called channels. And RSS feeds from blogs. We are bathed in streams of notifications and updates. Our apps improve in a flow of upgrades. Tags have replaced links. We tag and “like” and “favorite” moments in the streams. The foundational units of this third digital regime, then, are flows, tags, and clouds.
A universal law of economics says the moment something becomes free and ubiquitous, its position in the economic equation suddenly inverts. When nighttime electrical lighting was new and scarce, it was the poor who burned common candles. Later, when electricity became easily accessible and practically free, our preference flipped and candles at dinner became a sign of luxury. In the industrial age, exact copies became more valuable than a handmade original. No one wants the inventor’s clunky “original” prototype refrigerator. Most people want a perfect working clone. The more common the clone, the more desirable it is, since it comes with a network of service and repair outlets. Now the axis of value has flipped again. Rivers of free copies have undermined the established order. In this new supersaturated digital universe of infinite free digital duplication, copies are so ubiquitous, so cheap—free, in fact—that the only things truly valuable are those that cannot be copied. The technology is telling us that copies don’t count anymore. To put it simply: When copies are superabundant, they become worthless. Instead, stuff that can’t be copied becomes scarce and valuable. When copies are free, you need to sell things that cannot be copied. Well, what can’t be copied? Trust, for instance. Trust cannot be reproduced in bulk. You can’t purchase trust wholesale. You can’t download trust and store it in a database or warehouse it. You can’t simply duplicate someone else’s trust. Trust must be earned, over time. It cannot be faked. Or counterfeited (at least for long). There are a number of other qualities similar to trust that are difficult to copy and thus become valuable in this cloud economy. The best way to see them is to start with a simple question: Why would anyone ever pay for something they could get for free? And when they pay for something they could get for free, what are they purchasing?
In a real sense, these uncopyable values are things that are “better than free.” Free is good, but these are better since you’ll pay for them. I call these qualities “generatives.” A generative value is a quality or attribute that must be generated at the time of the transaction. A generative thing cannot be copied, cloned, stored, and warehoused. A generative cannot be faked or replicated. It is generated uniquely, for that particular exchange, in real time. Generative qualities add value to free copies and therefore are something that can be sold. Here are eight generatives that are “better than free.”
Personalization – Personalization requires an ongoing conversation between the creator and consumer, artist and fan, producer and user. It is deeply generative because it is iterative and time-consuming. Marketers call that “stickiness” because it means both sides of the relationship are stuck (invested) in this generative asset and will be reluctant to switch and start over. You can’t cut and paste this kind of depth.
Embodiment – In this accounting, the music is free, the bodily performance expensive. Indeed, many bands today earn their living through concerts, not music sales. This formula is quickly becoming a common one for not only musicians, but even authors. The book is free; the bodily talk is expensive. Live concert tours, live TED talks, live radio shows, pop-up food tours all speak to the power and value of a paid ephemeral embodiment of something you could download for free.
Patronage – Deep down, avid audiences and fans want to pay creators. Fans love to reward artists, musicians, authors, actors, and other creators with the tokens of their appreciation, because it allows them to connect with people they admire. But they will pay only under four conditions that are not often met: 1) It must be extremely easy to do; 2) The amount must be reasonable; 3) There’s clear benefit to them for paying; and 4) It’s clear the money will directly benefit the creators.
These eight qualities require a new skill set for creators. Success no longer derives from mastering distribution. Distribution is nearly automatic; it’s all streams. The Great Copy Machine in the Sky takes care of that. The technical skills of copy protection are no longer useful because you can’t stop copying. Trying to prohibit copying, either by legal threats or technical tricks, just doesn’t work. Nor do the skills of hoarding and scarcity. Rather, these eight new generatives demand nurturing qualities that can’t be replicated with a click of the mouse. Success in this new realm requires mastering the new liquidity.
What counts are not the number of copies but the number of ways a copy can be linked, manipulated, annotated, tagged, highlighted, bookmarked, translated, and enlivened by other media. Value has shifted away from a copy toward the many ways to recall, annotate, personalize, edit, authenticate, display, mark, transfer, and engage a work. What counts is how well the work flows.
Removing friction increases the pie
Fluidity of growth—The book’s material can be corrected or improved incrementally. The never-done-ness of an ebook (at least in the ideal) resembles an animated creature more than a dead stone, and this living fluidity animates us as creators and readers.
We currently see these two sets of traits—fixity versus fluidity—as opposites, driven by the dominant technology of the era. Paper favors fixity; electrons favor fluidity. But there is nothing to prevent us from inventing a third way—electrons embedded into paper or any other material. Imagine a book of 100 pages, each page a thin flexible digital screen, bound into a spine—that is an ebook too. Almost anything that is solid can be made a little bit fluid, and anything fluid can be embedded into solidness. What has happened to music, books, and movies is now happening to games, newspapers, and education. The pattern will spread to transportation, agriculture, health care. Fixities such as vehicles, land, and medicines will become flows.
These are the Four Stages of Flowing:
The third disruption is enabled by the previous two. Streams of powerful services and ready pieces, conveniently grabbed at little cost, enable amateurs with little expertise to create new products and brand-new categories of products. The status of creation is inverted, so that the audience is now the artist. Output, selection, and quality skyrocket.
Each video posted demands a reply with another video based upon it. The natural response to receiving a clip, a song, a text—either from a friend or from a professional—is not just to consume it, but to act upon it. To add, subtract, reply, alter, bend, merge, translate, elevate to another level.
eBooks and networked books
Ebooks today lack the fungibility of the ur-text of screening: Wikipedia. But the text of ebooks will be liberated in the near future, and the true nature of books will blossom. We will find out that books never really wanted to be printed telephone directories, or hardware catalogs on paper, or paperback how-to books. These are jobs that screens and bits are much superior at—all that updating and searching—tasks that neither paper nor narratives are suited for. What those kinds of books have always wanted was to be annotated, marked up, underlined, bookmarked, summarized, cross-referenced, hyperlinked, shared, and talked to. Being digital allows them to do all that and more. We can see the very first glimpses of books’ newfound freedom in the Kindles and Fires. As I read a book I can (with some trouble) highlight a passage I would like to remember. I can extract those highlights (with some effort today) and reread my selection of the most important or memorable parts. More important, with my permission, my highlights can be shared with other readers, and I can read the highlights of a particular friend, scholar, or critic. We can even filter the most popular highlights of all readers, and in this manner begin to read a book in a new way. This gives a larger audience access to the precious marginalia of another author’s close reading of a book (with their permission), a boon that previously only rare-book collectors witnessed. Reading becomes social. With screens we can share not just the titles of books we are reading, but our reactions and notes as we read them. Today, we can highlight a passage. Tomorrow, we will be able to link passages. We can add a link from a phrase in the book we are reading to a contrasting phrase in another book we’ve read, from a word in a passage to an obscure dictionary, from a scene in a book to a similar scene in a movie. (All these tricks will require tools for finding relevant passages.)
We might subscribe to the marginalia feed from someone we respect, so we get not only their reading list but their marginalia—highlights, notes, questions, musings. The kind of intelligent book club discussion that now happens on the book-sharing site Goodreads might follow the book itself and become more deeply embedded into the book via hyperlinks. So when a person cites a particular passage, a two-way link connects the comment to the passage and the passage to the comment. Even a minor good work could accumulate a wiki-like set of critical comments tightly bound to the actual text. Indeed, dense hyperlinking among books would make every book a networked event. The conventional vision of the book’s future assumes that books will remain isolated items, independent from one another, just as they are on the shelves in your public library. There, each book is pretty much unaware of the ones next to it. When an author completes a work, it is fixed and finished. Its only movement comes when a reader picks it up to enliven it.
Turning inked letters into electronic dots that can be read on a screen is simply the first essential step in creating this new library. The real magic will come in the second act, as each word in each book is cross-linked, clustered, cited, extracted, indexed, analyzed, annotated, and woven deeper into the culture than ever before. In the new world of ebooks and etexts, every bit informs another; every page reads all the other pages. Right now the best we can do in terms of interconnection is to link some text to its source’s title in a bibliography or in a footnote. Much better would be a link to a specific passage in another passage in a work, a technical feat not yet possible. But when we can link deeply into documents at the resolution of a sentence, and have those links go two ways, we’ll have networked books. You can get a sense of what this might be like by visiting Wikipedia. Think of Wikipedia as one very large book—a single encyclopedia—which of course it is. Most of its 34 million pages are crammed with words underlined in blue, indicating those words are hyperlinked to concepts elsewhere in the encyclopedia. This tangle of relationships is precisely what gives Wikipedia—and the web—its immense force. Wikipedia is the first networked book. In the goodness of time, each Wikipedia page will become saturated with blue links as every statement is cross-referenced. In the goodness of time, as all books become fully digital, every one of them will accumulate the equivalent of blue underlined passages as each literary reference is networked within that book out to all other books. Each page in a book will discover other pages and other books. Thus books will seep out of their bindings and weave themselves together into one large metabook, the universal library. The resulting collective intelligence of this synaptically connected library allows us to see things we can’t see in a single isolated book. 
Over the next three decades, scholars and fans, aided by computational algorithms, will knit together the books of the world into a single networked literature. A reader will be able to generate a social graph of an idea, or a timeline of a concept, or a networked map of influence for any notion in the library. We’ll come to understand that no work, no idea stands alone, but that all good, true, and beautiful things are ecosystems of intertwined parts and related entities, past and present.
Once snippets, articles, and pages of books become ubiquitous, shuffleable, and transferable, users will earn prestige and perhaps income for curating an excellent collection.
The universal networked library of all books will cultivate a new sense of authority. If you can truly incorporate all texts—past and present in all languages—on a particular subject, then you can have a clearer sense of what we as a civilization, a species, do and don’t know. The empty white spaces of our collective ignorance are highlighted, while the golden peaks of our knowledge are drawn with completeness. This degree of authority is only rarely achieved in scholarship today, but it will become routine.
Books were good at developing a contemplative mind. Screens encourage more utilitarian thinking. A new idea or unfamiliar fact uncovered while screening will provoke our reflex to do something: to research the term, to query your screen “friends” for their opinions, to find alternative views, to create a bookmark, to interact with or tweet the thing rather than simply contemplate it. Book reading strengthened our analytical skills, encouraging us to pursue an observation all the way down to the footnote. Screening encourages rapid pattern making, associating one idea with another, equipping us to deal with the thousands of new thoughts expressed every day. Screening nurtures thinking in real time.
On networked screens everything is linked to everything else. The status of a new creation is determined not by the rating given to it by critics but by the degree to which it is linked to the rest of the world.
Possession is not as important as it once was. Accessing is more important than ever.
Products encourage ownership, but services discourage it, because the exclusivity, control, and responsibility that come with ownership are missing from services
The switch from “ownership that you purchase” to “access that you subscribe to” overturns many conventions. Ownership is casual, fickle. If something better comes along, grab it. A subscription, on the other hand, gushes a never-ending stream of updates, issues, and versions that force a constant interaction between the producer and the consumer. It is not a onetime event; it’s an ongoing relationship. To access a service, a customer is often committing to it in a far stronger way than when he or she purchases an item.
Naturally, the producer cherishes this kind of loyalty, but the customer gets (or should get) many advantages for continuing as well: uninterrupted quality, continuous improvements, attentive personalization—assuming it’s a good service.
As more items are invented and manufactured—while the total number of hours in a day to enjoy them remains fixed—we spend less and less time per item. In other words, the long-term trend in our modern lives is that most goods and services will be short-term use. Therefore most goods and services are candidates for rental and sharing.
For a long time there were two basic ways to organize human work: a firm and a marketplace. A firm, such as a company, had definite boundaries, was permission based, and enabled people to increase their efficiency via collaboration more than if they worked outside the firm. A marketplace had more permeable borders, required no permission to participate, and used the “invisible hand” to allot resources most efficiently. Recently a third way to organize work has emerged: the platform. A platform is a foundation created by a firm that lets other firms build products and services upon it. It is neither market nor firm, but something new. A platform, like a department store, offers stuff it did not create. Levels of highly interdependent products and services form an “ecosystem” that rests upon the platform. “Ecosystem” is a good description because, just as in a forest, the success of one species (product) depends on the success of others. It is the deep ecological interdependence of a platform that discourages ownership and promotes access instead. The platform’s job is to make sure it makes money (and adds value!) whether the parts cooperate or compete. Which Amazon does well. At almost every level of a platform, sharing is the default—even if it is just the rules of competition. Your success hinges on the success of others.
Dematerialization and decentralization and massive communication all lead to more platforms. Platforms are factories for services; services favor access over ownership.
The web is hyperlinked documents; the cloud is hyperlinked data. Ultimately the chief reason to put things onto the cloud is to share their data deeply. Woven together, the bits are made much smarter and more powerful than they could possibly be alone. There are practical limits to how gigantic one company’s cloud can get, so the next step in the rise of clouds over the coming decades will be toward merging the clouds into one intercloud.
As we increase dematerialization, decentralization, simultaneity, platforms, and the cloud—as we increase all those at once, access will continue to displace ownership.
In his 2008 book Here Comes Everybody, media theorist Clay Shirky suggests a useful hierarchy for sorting through these new social arrangements, ranked by the increasing degree of coordination employed. Groups of people start off simply sharing with a minimum of coordination, and then progress to cooperation, then to collaboration, and finally to collectivism. At each step of this socialism, the amount of additional coordination required enlarges.
Instead of money, the peer producers who create these products and services gain credit, status, reputation, enjoyment, satisfaction, and experience.
What happens if we turn the old model inside out and have the audience/customers in charge? They would be Toffler’s prosumers—consumers who were producers. As innovation expert Larry Keeley once observed: “No one is as smart as everyone.”
This is true for other types of editors as well. Editors are the middle people—or what are called “curators” today—the professionals between a creator and the audience. These middle folk work at publishers, music labels, galleries, or film studios. While their roles would have to change drastically, the demand for the middle would not go away. Intermediates of some type are needed to shape the cloud of creativity that boils up from the crowd. This hybrid of user-generated and editor-enhanced is quite common.
The dream of many companies is to graduate from making products to creating a platform. But when they do succeed (like Facebook), they are often not ready for the required transformation in their role; they have to act more like governments than companies in keeping opportunities “flat” and equitable, and hierarchy to a minimum.
Each of these tiny niches is micro-small, but there are tens of millions of niches. And even though each of those myriad niche interests might attract only a couple of hundred fans, a potential new fan merely has to google to find them. In other words, it becomes as easy to find a particular niche interest as to find a bestseller.
The largest, fastest growing, most profitable companies in 2050 will be companies that will have figured out how to harness aspects of sharing that are invisible and unappreciated today. Anything that can be shared—thoughts, emotions, money, health, time—will be shared in the right conditions, with the right benefits. Anything that can be shared can be shared better, faster, easier, longer, and in a million more ways than we currently realize. At this point in our history, sharing something that has not been shared before, or in a new way, is the surest way to increase its value.
Many of these filters are traditional and still serve well: We filter by gatekeepers, intermediates, curators, brands, government, cultural environment, friends, and ourselves.
The danger of being rewarded with only what you already like, however, is that you can spin into an egotistical spiral, becoming blind to anything slightly different, even if you’d love it. This is called a filter bubble. The technical term is “overfitting.” You get stuck at a lower than optimal peak because you behave as if you have arrived at the top, ignoring the adjacent environment. The more effective the “more good stuff like this” filter is, the more important it becomes to alloy it with other types of filters. A filter dedicated to probing one’s dislikes would have to be delicate, but could also build on the powers of large collaborative databases in the spirit of “people who disliked those, learned to like this one.” In somewhat the same vein I also, occasionally, want a bit of stuff I dislike but should learn to like.
Way back in 1971 Herbert Simon, a Nobel Prize–winning social scientist, observed, “In an information-rich world, the wealth of information means a dearth of something else: a scarcity of whatever it is that information consumes. What information consumes is rather obvious: it consumes the attention of its recipients. Hence a wealth of information creates a poverty of attention.” The maximum potential attention is therefore fixed. Its production is inherently limited while everything else is becoming abundant. Since it is the last scarcity, wherever attention flows, money will follow.
Paul Romer, an economist at New York University who specializes in the theory of economic growth, says real sustainable economic growth does not stem from new resources but from existing resources that are rearranged to make them more valuable. Growth comes from remixing. Brian Arthur, an economist at the Santa Fe Institute who specializes in the dynamics of technological growth, says that all new technologies derive from a combination of existing technologies. Modern technologies are combinations of earlier primitive technologies that have been rearranged and remixed. Since one can combine hundreds of simpler technologies with hundreds of thousands of more complex technologies, there is an unlimited number of possible new technologies—but they are all remixes. What is true for economic and technological growth is also true for digital growth. We are in a period of productive remixing.
For instance, quotation symbols make it simple to indicate where one has borrowed text from another writer. We don’t have a parallel notation in film yet, but we need one. Once you have a large text document, you need a table of contents to find your way through it. That requires page numbers. Somebody invented them in the 13th century. What is the equivalent in video? Longer texts require an alphabetic index, devised by the Greeks and later developed for libraries of books. Someday soon with AI we’ll have a way to index the full content of a film. Footnotes, invented in about the 12th century, allow tangential information to be displayed outside the linear argument of the main text. That would be useful in video as well. And bibliographic citations (invented in the 13th century) enable scholars and skeptics to systematically consult sources that influence or clarify the content. Imagine a video with citations. These days, of course, we have hyperlinks, which connect one piece of text to another, and tags, which categorize using a selected word or phrase for later sorting. For example, if I wanted to visually compare recent bank failures with similar historical events by referring you to the bank run in the classic movie It’s a Wonderful Life, there is no easy way to point to that scene with precision. (Which of several sequences did I mean, and which part of them?) I can do what I just did and mention the movie title. I might be able to point to the minute mark for that scene (a new YouTube feature). But I cannot link from this sentence to only those exact “passages” inside an online movie. We don’t have the equivalent of a hyperlink for film yet. With true screen fluency, I’d be able to cite specific frames of a film or specific items in a frame. Academic research has produced a few interesting prototypes of video summaries, but nothing that works for entire movies. 
Some popular websites with huge selections of movies (like porn sites) have devised a way for users to scan through the content of full movies quickly in a few seconds. When a user clicks the title frame of a movie, the window skips from one key frame to the next, making a rapid slide show, like a flip book of the movie. The abbreviated slide show visually summarizes a few-hour film in a few seconds. Expert software can be used to identify the key frames in a film in order to maximize the effectiveness of the summary.
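A crude version of that slide-show summary is easy to sketch: sample evenly spaced timestamps and show the frame at each one. The function below is an illustrative toy, not how any actual site works; real key-frame extraction detects scene changes rather than spacing frames evenly.

```python
def summary_timestamps(duration_seconds, num_frames=12):
    """Pick evenly spaced timestamps to use as a crude slide-show summary.

    Real key-frame extraction detects scene changes; even spacing is
    just the simplest possible stand-in.
    """
    if num_frames < 1 or duration_seconds <= 0:
        return []
    step = duration_seconds / num_frames
    # Sample the middle of each segment to avoid the black frame at t=0.
    return [round(step * (i + 0.5), 2) for i in range(num_frames)]

# A two-hour (7200-second) film condensed to a dozen stills:
print(summary_timestamps(7200, 12))
```

Feeding each timestamp to a frame grabber would yield the flip-book effect described above.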
The holy grail of visuality is findability—the ability to search the library of all movies the same way Google can search the web, and find a particular focus deep within. You want to be able to type key terms, or simply say, “bicycle plus dog,” and then retrieve scenes in any film featuring a dog and a bicycle. In an instant you could locate the moment in The Wizard of Oz when the witchy Miss Gulch rides off with Toto. Even better, you want to be able to ask Google to find all the other scenes in all movies similar to that scene. That ability is almost here.
However, in every system that I have experienced where anonymity becomes common, the system fails. Communities saturated with anonymity will either self-destruct or shift from the purely anonymous to the pseudo-anonymous, as in eBay, where you have a traceable identity behind a persistent invented nickname. For the civilized world, anonymity is like a rare earth metal. In larger doses these heavy metals are some of the most toxic substances known to life. They kill. Yet these elements are also a necessary ingredient in keeping a cell alive. But the amount needed for health is a mere hard-to-measure trace. Anonymity is the same. As a trace element in vanishingly small doses, it’s good, even essential for the system. Anonymity enables the occasional whistle-blower and can protect the persecuted fringe and political outcasts. But if anonymity is present in any significant quantity, it will poison the system.
Large quantities of something can transform the nature of those somethings. More is different. Computer scientist J. Storrs Hall writes: “If there is enough of something, it is possible, indeed not unusual, for it to have properties not exhibited at all in small, isolated examples.”
With the right tools, it turns out the collaborative community can outpace the same number of ambitious individuals competing.
A good question is not concerned with a correct answer. A good question cannot be answered immediately. A good question challenges existing answers. A good question is one you badly want answered once you hear it, but had no inkling you cared before it was asked. A good question creates new territory of thinking.
The Beginning is a century-long process, and its muddling forward is mundane. Its big databases and extensive communications are boring. Aspects of this dawning real-time global mind are either dismissed as nonsense or feared. There is indeed a lot to be legitimately worried about because there is not a single aspect of human culture—or nature—that is left untouched by this syncopated pulse. Yet because we are the parts of something that has begun operating at a level above us, the outline of this emerging very large thing is obscured. All we know is that from its very beginning, it is upsetting the old order. Fierce pushback is to be expected.
What I got out of it
Inspiring and exciting to think about how these trends will come to play out and impact nearly every aspect of our lives
“By focusing on the software platform we hope to offer the reader a perspective on the business dynamics and strategies of industries, old and new, that have been powered by these invisible engines…All of us quickly recognized that software platform businesses have at least two sides. Software platforms consist of services that are often made available to developers through APIs. They are also made available to computer users, but those computer users typically avail themselves of API-based services by buying applications that in turn use APIs. It is only a slight exaggeration to say that all software platform makers all the time invest in getting both developers and users to use their platforms. The developers/users are like the men/women, cards/merchants, advertisers/eyeballs, and buyers/sellers that we mentioned above. In fact, software platforms sometimes appeal to more than two distinct groups—including hardware makers and content providers. The economics of two-sided platforms provides a number of insights into pricing, design, organization, and governance of platform-based businesses. We were interested in understanding how this new economic learning could help shed light on the strategies followed by software platforms. On the flip side, we were interested in understanding how a diverse set of industries based on software platforms could be probed to provide insights for students of this new economics. This book is the result. It blends economics, history, and business analysis. It is intended for anyone who wants to better understand the business strategies that have been followed in industries based on software platforms. We focus on pricing, product design, and integration into downstream or upstream suppliers.”
Most successful software platforms have exploited positive feedbacks (or network effects) between applications and users: more applications attract more users, and more users attract more applications. Nurturing both sides of the market helped Microsoft garner thousands of applications and hundreds of millions of users for its Windows platform.
The modular approach has numerous advantages. If a new program (or other complex system) can be specified as N modules, N teams can work in parallel. Moreover, individual modules can subsequently be improved without touching other parts of the overall program, and they can be used in other programs.
Operating systems provide services to applications through Application Programming Interfaces (APIs). These services range from rudimentary hardware services, such as moving a cursor on a monitor, to sophisticated software services, such as drawing and rotating three-dimensional objects. The APIs serve as interfaces between these services and applications…It is easy to see why application developers find the ability to access system services through APIs appealing. Rather than every application developer writing hundreds of lines of code to allocate memory to an object, to take the example above, the operating system developer writes 116 lines of code and makes the system services this code provides available to all application developers through the API.
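The division of labor described here can be caricatured in a few lines of Python. Everything below is an invented toy, not a real operating-system API: the point is only that the platform implements a service once and every application reaches it through the same stable interface instead of rewriting the code.

```python
class Platform:
    """Toy software platform exposing one service through a stable API."""

    def __init__(self):
        self._pool = {}     # stand-in for the platform's real allocator code
        self._next_id = 0

    def allocate(self, size):
        """API call: every application reuses this instead of writing its own."""
        handle = self._next_id
        self._pool[handle] = bytearray(size)
        self._next_id += 1
        return handle

# Two independent "applications" written against the same API:
platform = Platform()
h1 = platform.allocate(1024)   # app A
h2 = platform.allocate(4096)   # app B
print(h1, h2)                  # distinct handles, one shared implementation
```

The implementation behind `allocate` can change freely; as long as the API holds steady, neither application needs to be touched.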
Software platforms make services available through APIs. Developers benefit from these because they avoid having to write some of their own code. Users benefit from a greater variety of and lower prices for applications. The economics of multisided platforms provides a set of tools for understanding the past, present, and future of software platforms.
Multisided businesses can generate profits for themselves and benefits for their customers if they can figure out ways to increase and then capture indirect network externalities. There are three major ways in which they do this. First, they serve as matchmakers. Second, they build audiences. Advertising-supported media do mainly that: they use content to attract eyeballs and then sell access to those eyeballs to advertisers. Third, they reduce costs by providing shared facilities for the customers on each side. That’s the shopping mall case with which we began.
Businesses in multisided markets often subsidize one side of the market to get the other side on board—sometimes explicitly by charging low or negative prices. A dating club may charge men a higher price just because they have more inelastic demand and because it is easy to identify that group of consumers. But businesses in multisided markets have an additional reason to price discriminate: by charging one group a lower price the business can charge another group a higher price; and unless prices are low enough to attract sufficient numbers of the former group, the business cannot obtain any sales at all.

Economic analyses of multisided platforms, along with the industry case studies discussed in the following chapters, show that successful multisided platform businesses must pay careful attention to all relevant groups, and typically must worry more about balance among them than about building share with one of them. Getting the balance right seems to be more important than building shares. Platform markets do not tip quickly because, as a practical matter, it takes time to get things right. And the first entrant often does not win in the end: many other firms may come in and successfully tweak the pricing structure, product design, or business model.

The businesses that participate in such industries have to figure out ways to get both sides on board. One way to do this is to obtain a critical mass of users on one side of the market by giving them the service for free or even paying them to take it. Especially at the entry phase of firms in multisided markets, it is not uncommon to see precisely this strategy. Another way to solve the problem of getting the two sides on board simultaneously is to invest to lower the costs of consumers on one side of the market.
As we saw earlier, for instance, Microsoft invests in the creation of software tools that make it easier for application developers to write application software for Microsoft operating systems and provides other assistance that makes developers’ jobs easier. In some cases, firms may initially take over one side of the business in order to get the market going.
The copyleft provision means that if people choose to distribute software that is based in part on other software covered by the GPL, they must distribute their new software under the GPL. GPL software thereby propagates itself.
Bundling features into the software platform is often efficient for the platform producer and for end users, as it is for most information goods, because it lowers distribution costs and expands demand.
Multisided platforms must consider marginal costs and price sensitivity in pricing, like single-sided businesses, but they must also consider which side values the other side more. Software platforms generally charge low prices on one side in order to attract customers who can then be made available to the other side. Getting the balance right among all sides is more important than building market share.
Per-copy charges also helped Microsoft capitalize on its investment in programming languages in the face of great uncertainty as to which computer makers would succeed. A flat fee would have earned less from the top sellers and would have discouraged other makers from even trying. Microsoft retained this basic pricing model when it went into the operating system business.
In retrospect, having multiple operating systems run on a hardware platform is a poor strategy. The idea, of course, was to ensure that the hardware, not the operating system, became the standard that defined the platform and determined its evolution. Indeed, IBM followed an important economic principle for traditional industries: all firms would like everyone else in the supply chain to be competitive. IBM didn’t seem to recognize that this was far from a traditional industry. If IBM’s strategy had worked, and if several operating systems had been installed on substantial numbers of IBM PCs, what would have happened? Most likely, having multiple operating systems would have made the hardware platform less popular than having a single operating system. Applications are generally written for software platforms, not the underlying hardware. The more fragmented the installed base of operating systems, the less attractive it is to write an application for any one of them.
Four key strategies helped Microsoft obtain the leading position in personal computers: (1) offering lower prices to users than its competitors; (2) intensely promoting API-based software services to developers; (3) promoting the development of peripherals, sometimes through direct subsidies, in order to increase the value of the Windows platform to developers and users; and (4) continually developing software services that provide value to developers directly and to end users indirectly.
Technically, this is a two-part tariff, consisting of an access fee (the price of the razor) plus a usage fee (the price of the blade). Here the blade can be thought of as having two related roles. It meters the use of the durable good, and it sorts customers into those who are willing to pay more and those who are willing to pay less. These metering devices tend to increase profits and help companies better recover their fixed costs of investment. Because it is particularly attractive to make money on the blades, it is especially attractive to reduce the price of the razor, perhaps to below cost, or perhaps even to zero in extreme cases. For video game console makers this razorblade strategy made a lot of sense. Getting the console into the hands of many people increased the demand for the games it could play. Moreover, it made buying a console less risky for households, who had no good way of knowing how valuable the console would be until they saw the games produced for it. The game-console company, which was in the best position to forecast the quality of those games, took the risk: it lost money if consumers didn’t buy many games, and it made money if they did. The people who ultimately bought a lot of games were those who valued the console the most, so making profits mainly or even entirely on games enabled the console makers to earn the most from those willing to pay the most for their system.
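With made-up numbers, the razor/blade arithmetic reads like this: the console (razor) is sold below cost, each game (blade) carries a positive margin, and the heavy player both pays the most and makes the platform profitable overall. All figures here are hypothetical, chosen only to show the mechanism.

```python
def platform_profit(console_price, console_cost, game_margin, games_per_user):
    """Total profit across users under a razor/blade (console/game) tariff.

    games_per_user: one entry per household, counting games bought.
    The game margin meters usage: heavy users contribute the most.
    """
    per_user = [console_price - console_cost + game_margin * g
                for g in games_per_user]
    return sum(per_user)

# Console sold $100 below cost; $20 of profit per game sold.
light, heavy = 2, 15
profit = platform_profit(console_price=300, console_cost=400,
                         game_margin=20, games_per_user=[light, heavy])
print(profit)  # the light user loses the platform money; the heavy user more than pays it back
```

The light buyer nets the platform a loss (-100 + 40 = -60) while the heavy buyer nets +200, which is exactly the sorting-by-willingness-to-pay the passage describes.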
When consumers value product differentiation and platforms can offer innovative and unique features, multiple platforms can coexist despite indirect network effects that make bigger better.
The console video gaming industry operates a radically different business model from other software platform industries. Console makers tightly integrate hardware and software systems; they offer consoles to consumers at less than manufacturing cost, and they earn profits by developing games and charging third-party game developers for access to their platforms.
Palm, on the other hand, regrouped. It surveyed Zoomer buyers to find out what they liked and didn’t like, what they used and didn’t use: What these people said opened the company’s eyes. More than 90% of Zoomer owners also owned a PC. More than half of them bought Zoomer because of software (offered as an add-on) that transferred data to and from a PC. These were business users, not retail consumers. And they didn’t want to replace their PCs—they wanted to complement them. People weren’t asking for a PDA that was smart enough to compete with a computer. They wanted a PDA that was simple enough to compete with paper.
When you’re playing Bobby Fischer—and you want to win—don’t play chess. Make sure whatever game you’re playing—be it network delivery of media vs. stand-alone PC, whatever you’re in—that you’re not playing a game someone else has mastered when you have an option to play another game. —Rob Glaser, Founder of RealNetworks, May 2001
Interestingly, many are made by Microsoft, which integrated into mouse production in 1983 mainly to be sure that the sort of mouse specified by its nascent Windows system would be available in the marketplace. Microsoft developed and patented a mouse that could connect to a PC through an existing serial port rather than to a special card installed within the computer. This innovation reduced the cost of the mouse and thus of mouse-using computers running Windows. Apple as a vertically integrated hardware and software platform maker has always produced its own mice.
What is the cure? From A’s point of view, one cure is to have many competing producers of good b. Competition will then hold the price of b close to cost (including a reasonable return on capital) regardless of A’s pricing, so that A both effectively determines the system price (via the price of a) and captures all the economic profit. Generally, it is more attractive to rely on others to supply a complement (instead of buying it or making it), all else equal, if there are many producers of that complement who compete intensely. Hence the common strategic advice, “Commoditize the complements.”
In a famous 1951 paper, Nobel Laureate George Stigler argued that this proposition implies that “vertical disintegration is the typical development in growing industries, vertical integration in declining industries.”
Interestingly, we are aware of no examples of software platforms that initially integrated into the applications/games/content that subsequently exited that business entirely. On the other hand, almost all such platforms have adopted a two-sided strategy and made significant investments in attracting third-party suppliers. Partial integration is the norm. The only exceptions are those successful software platform vendors that launched without integration; they have remained out of the applications business. The tendency of computer-based industries to disintegrate over time is even clearer—with interesting exceptions—when we consider integration with the supply of basic hardware and peripherals. The Microsoft strategy of having the hardware complement its operating system produced by a competitive, technologically dynamic industry has served to make its operating systems more valuable and to speed their market penetration. Microsoft is not above using integration on occasion to stimulate important markets for complements, as its entry into mouse production, discussed earlier, illustrates.
In a rephrasing of Mr. Katz’s words, Michael Dell told Microsoft upon refusing the Xbox deal offered to him: When Sony cuts the prices on their PlayStations, their stock price goes up. Every time I cut prices, my stock price goes down. If you don’t understand why that happens, you don’t understand the console business. I understand why this is strategic to Microsoft. I don’t understand why this is strategic to Dell.
“Oh, ‘tanstaafl.’ Means ‘There ain’t no such thing as a free lunch.’ And isn’t,” I added, pointing to a FREE LUNCH sign across room, “or these drinks would cost half as much. Was reminding her that anything free costs twice as much in the long run or turns out worthless.” —Robert Heinlein
In practice, it generally does matter which side pays, because two key assumptions made in the textbook discussion don’t apply. First, there are often significant transactions costs that prevent the customers on the two sides of most markets from just “sorting it out” themselves. Take the payment card example. Although most card systems prohibit merchant surcharging because it degrades the value of their product to cardholders, several countries have barred card systems from imposing such a no-surcharge rule. In those countries, however, most merchants don’t surcharge. One reason is that it is costly to impose small charges on customers. Those merchants that do surcharge often charge more than they are charged by the card system—an indication that they are using the fact that a customer wants to use her card as a basis for groupwise price discrimination.
When balance matters in a mature two-sided business, the pricing problem is much more complex than in a single-sided business. Marginal cost and price responsiveness on both sides matter for both prices, and so does the pattern of indirect network effects. In general, if side A cares more about side B than B cares about A, then, all else equal, A will contribute more total revenue. Thus, newspapers make their money from selling advertising, not from selling papers. The textbook pricing formula for a single-sided market gives the optimal markup over marginal cost as 1 over a measure of price responsiveness (the price elasticity of demand), so low price responsiveness implies high markups. The corresponding formula for a two-sided business involves marginal costs on both sides, price responsiveness on both sides, and measures of the strength of indirect network effects in both directions. In particular, balance may require charging a price below marginal cost to a group with low price responsiveness, something a single-sided business would never do, if it is critical to attract members of that group in order to get members of the other group on board.
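The single-sided textbook rule quoted here is the Lerner condition, (p - c)/p = 1/e. A quick numerical check with invented values shows how low price responsiveness maps to a high markup. This is only the one-sided formula; as the passage says, the two-sided version would also fold in the other side's costs, elasticities, and the cross-side network effects.

```python
def optimal_price(marginal_cost, elasticity):
    """Single-sided monopoly price from the Lerner condition (p - c)/p = 1/e.

    Rearranged: p = c / (1 - 1/e). Only meaningful for elasticity > 1;
    a two-sided platform may rationally price below this, even below cost.
    """
    if elasticity <= 1:
        raise ValueError("Lerner condition needs elasticity > 1")
    return marginal_cost / (1 - 1 / elasticity)

p = optimal_price(marginal_cost=10, elasticity=2)
print(p)             # elasticity of 2 doubles price over marginal cost
print((p - 10) / p)  # markup over price equals 1/e = 0.5
```

With marginal cost 10 and elasticity 2, the formula gives a price of 20 and a markup of exactly 1/2, matching "markup = 1 over price elasticity."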
The idea is initially to subsidize one side (or, more generally, to do whatever it takes) in order to get it on board even though the other side is not yet on board, and to use the presence of the subsidized side to attract the other side. This differs from the single-sided penetration pricing strategy discussed above because the key here is to generate indirect network effects, to use the subsidized side as a magnet to attract the other side. After entry has been successfully effected and both sides are on board, of course, the rationale for the initial subsidy vanishes, and one would expect to see a corresponding shift in pricing policy. One of the regularities we discuss below, however, is that pricing structures—the relative amounts paid by the various sides—appear fairly robust over time; there are not many examples of pricing low to one side at first and then raising prices significantly later.
A fundamental decision facing all multisided platform businesses is choice of a price structure: How much should the platform vendor charge each side relative to the others? Since transactions involving some sides may have significant associated variable costs (the production and distribution costs of video game consoles, for instance), the most illuminating way to analyze observed price structures is to look at the contributions of each side to gross margin or variable profits: revenue minus side-specific variable cost. Should a two-sided platform derive most of its gross margin from one side of the market, and if so, which side, or should it choose a more balanced structure, with both sides making significant contributions to gross margin?
Like those of all multisided platforms, the pricing structures of the software platforms we have encountered in this book reflect the need to get all unintegrated sides on board: end users, application/game/content developers, and manufacturers of hardware and peripheral equipment. The structures we have examined have three remarkable features. First, all of them are extremely skewed: almost all earn a disproportionate share of their variable profits on only one side of the market, either end users or developers. Second, for all but video games, the platform earns the bulk of its net revenues from end users. The third remarkable feature, which we consider in the next section, is that these structures have been stable over time.
• Components selling occurs when the firm offers A and B separately (cars and bicycle racks).
• Pure bundling occurs when the firm only offers A and B together as a single bundled product, AB (men’s laced shoes).
• Mixed bundling occurs when the firm offers the bundle AB and either or both of its components, A and B (such as the Sunday New York Times and the New York Times Book Review).
It is common to bundle together products that are complements, such as automobiles and tires, but firms may find that it pays to bundle products that aren’t complements. We already saw an example of this above. Bundling persuaded two consumers to buy a product even though each wanted only a single component. This saved the manufacturer costs. The idea that bundling of noncomplements can be used to enhance profits goes back to a classic paper by Nobel Prize winning economist George Stigler. Stigler tried to explain why movie distributors at one time required theaters to take bundles of pictures. Bundling can be used in a different way to facilitate price discrimination, which we discussed in the preceding chapter. That is, if different groups of consumers place different values on groups of components, bundles can be designed so that those with stronger demand pay more. The idea is that it is possible to design bundles of components that cause consumers to sort themselves by the bundles they choose into groups with different willingness to pay. (Marketers call this “segmentation.”) In the case of autos, some will want the car with the sports package, while others will want only the basic package. The seller can then charge a premium to groups that have a particularly high demand for a particular package and offer an especially aggressive price to consumers that are very sensitive to price but are also willing to take the no-frills deal. For this to work, there must be a predictable correlation between combinations of components and demand (for example, price-sensitive consumers generally have a low demand for frills). A number of studies have found, for example, that automobile companies have much higher markups on luxury models than on base models. Bundling drives innovation and creates industries.
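Stigler's mechanism can be checked with two hypothetical buyers whose valuations for two films are mirror images of each other: sold separately the distributor collects 16, bundled it collects 22, even though the two films are not complements. Names and numbers are invented for illustration.

```python
# Hypothetical valuations: each buyer values one film high and the other low.
valuations = {"alice": {"A": 8, "B": 3},
              "bob":   {"A": 3, "B": 8}}

def best_separate_revenue(valuations, item):
    """Best uniform price for one item: some observed valuation is optimal."""
    vals = [v[item] for v in valuations.values()]
    return max(p * sum(1 for v in vals if v >= p) for p in vals)

def best_bundle_revenue(valuations):
    """Best uniform price for the A+B bundle, priced against total valuations."""
    totals = [sum(v.values()) for v in valuations.values()]
    return max(p * sum(1 for t in totals if t >= p) for p in totals)

separate = sum(best_separate_revenue(valuations, item) for item in ("A", "B"))
bundle = best_bundle_revenue(valuations)
print(separate, bundle)  # bundling earns more with zero complementarity
```

Separately, each film's best price is 8 and sells to one buyer (2 x 8 = 16); the bundle is worth 11 to both buyers, so pricing it at 11 sells two bundles for 22. The bundle works because it averages out the buyers' opposite tastes.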
The ability to select bundles of features to sell helps firms segment their customers, control costs, and enhance profits. Bundled products offer consumers convenience, lower costs, and products tailored to their needs and wants.
Bundling decisions by multisided platforms, such as software platforms, are more complex since they must take into account the effect on all customer groups. Multisided businesses must consider both the additional customers they get on one side as a result of including a new feature and the additional customers they will get on the other side from having those additional customers. They may also include features that harm one side directly but benefit the platform overall by getting more customers on board on another side.
Bundling makes sense for businesses whenever the cost of adding additional features is lower than the additional sales generated thereby—even if most purchasers do not value or use all the features in a product bundle.
Creative destruction has been a hallmark of economic progress for millennia, but it has proceeded at a glacial pace for most of history. The Industrial Revolution sped this process up. Even so, it took decades for change to filter through the economy following innovations such as the spinning jenny, steam engine, and electric generator. The information technology revolution has quickened the pace of industrial change greatly. The plummeting costs of computer processing and storage make it possible to create products and industries that were not only infeasible but also unimaginable a few years earlier. Software platforms further accelerate the process of creative destruction, mainly because code is digital and malleable. Think how easy it is to add a new feature to a software platform and distribute that change electronically over the Internet to potentially billions of computing devices around the world.
One side is familiar: developers. TiVo is evangelizing its software platform by providing tools and offering prizes for the best applications in several categories, including games, music, and photos.
History teaches us that it takes decades for technological changes to work their way through the economy, destroying, creating, and transforming industries. The third industrial revolution got off to a quick start. We suspect that it will continue through at least the first few decades of the twenty-first century and that our invisible engines will ultimately touch most aspects of our business and personal lives.
What I got out of it
Some of the examples are a bit outdated but the principles are just as valuable as ever – how to think about multisided markets, pricing, positioning, and so much more
For those within the high tech sector, or who manage investments in these companies, this imperative translates into a series of deceptively simple questions: what can we do during a tornado to best capitalize on our opportunity? How can we tell when one is coming, and what can we do to prepare? How can we sense when it is ending, and what should we do then? Finally, going forward, how can we reframe our strategic management concepts to better accommodate tornado market dynamics in general?
The winning strategy does not just change as we move from stage to stage, it actually reverses the prior strategy. This is why it is so difficult and counterintuitive – what made you successful at an earlier stage causes failure at later stages. In the early market you must not segment; in the chasm and bowling alley you must segment; in the tornado you must not segment; on Main Street you must segment
Truly discontinuous innovations – paradigm shifts – are new products or services that require the end user and the marketplace to dramatically change their past behavior, with the promise of gaining equally dramatic new benefits.
The only way to cross the chasm is to put all your eggs in one basket. That is, key to a winning strategy is to identify a single beachhead of pragmatist customers in a mainstream market segment and to accelerate the formation of 100% of their whole product. The goal is to win a niche foothold in the mainstream as quickly as possible – that is what is meant by crossing the chasm. Then, once in the tornado, you need to quickly switch strategies and gain mass market share at any cost, positioning your products horizontally as global infrastructure
Many leaders are not cut out to lead the company through each of these phases. That’s fine and to be expected, but know what stage you’re in, what type of CEO you have, and when they might need to be replaced
Once any infrastructure is substantially deployed, power shifts from the builders – the professional services firms – to the operators, or what we have come to call the transaction services firms. The key to the transaction services model is that the requisite infrastructure has already been assimilated (keeping support costs down) and amortized (minimizing ongoing investment)
For every stage of the technology adoption life cycle, there is an optimal business model
Early market – professional services
Bowling alley – application products
Tornado – infrastructure products – a period of mass-market adoption when the general marketplace switches over to the new infrastructure paradigm
Main Street – transaction services
This sequence of events unleashes a vortex of market demand. Infrastructure, to be useful, must be standard and global, so once the market moves to switch out the old for the new, it wants to complete this transition as rapidly as possible. All the pent-up interest in the product is thus converted into a massive purchasing binge, causing demand to vastly outstrip supply. Companies grow at hypergrowth rates, with billions of dollars of revenue seeming to appear from out of nowhere.
Overview of the tech adoption life cycle
The forces that operate in the bowling alley argue for a niche-based strategy that is highly customer-centric
Those in the tornado push in the opposite direction toward a mass-market strategy for deploying a common standard infrastructure
Then on Main St., market forces push back again toward a customer-centric approach, focusing on specific adaptations of this infrastructure for added value through mass customization
Given these dramatic reversals in strategy, it is imperative that organizations be able to agree on where their markets are in the life cycle
In the meantime, the economic cataclysm of the tornado deconstructs and reconstructs the power structure in the market so rapidly that simply understanding who is friend and who is foe becomes a challenge
Within the newly emerging market structure, companies must compete for advantage based on their status within it
Positioning in this context consists of a company taking its rightful place in the hierarchy of power and defending it against challengers
And finally, moving fluidly from strategy to strategy is the ultimate challenge of any organization, demanding an extraordinarily flexible response from its management team
Safe path is to overinvest when invading any new segment, seeking to accelerate market leadership, and then divert resources as soon as the position is achieved
Post tornado market share by revenue tends to be 50% for the gorilla, 15% for chimp 1, 15% for chimp 2, and 30% for the monkeys
The lessons that Oracle taught – attack the competition ruthlessly, expand your distribution channel as fast as possible, ignore the customer
The lessons that HP taught – just ship, extend distribution channels, drive to the next lower price point
The lessons that Wintel taught – recruit partners to create a powerful whole product, institutionalize this whole product as the market leader, commoditize the whole product by designing out your partners
+1 opportunities – what do we have to offer at little or no incremental cost to ourselves that the market would pay us more money for? Companies that sell a compelling fantasy, like Nike and Mont Blanc, do this better than anyone
Trust, it turns out, is a complicated and challenging relationship, as much so in business as in parenting or marriage. Like everything else we have been discussing in recent chapters, it is ultimately about power. The paradox of trust is that by intelligently relinquishing power, one gains it back many times over. Once you reach your personal limits, this is the only economy of scale that can help. And because hypergrowth markets will push you to your personal limits faster than most other challenges in business, this is a fitting thought on which to close this book
What I got out of it
Fascinating insights into the paradoxical path that it takes to be successful with technologically disruptive companies
“This book is about innovation—about how it happens, why it happens, and who makes it happen. It is likewise about why innovation matters, not just to scientists, engineers, and corporate executives but to all of us. That the story is about Bell Labs, and even more specifically about life at the Labs between the late 1930s and the mid-1970s, isn’t a coincidence.” The people who helped make it happen include Mervin Kelly, Jim Fisk, William Shockley, Claude Shannon, John Pierce, and William Baker.
Where is the knowledge we have lost in information? —T. S. Eliot, The Rock
Yet understanding the circumstances that led up to that unusual winter of 1947 at Bell Labs, and what happened there in the years afterward, promises a number of insights into how societies progress. With this in mind, one might think of a host of reasons to look back at these old inventions, these forgotten engineers, these lost worlds.
Edison’s genius lay in making new inventions work, or in making existing inventions work better than anyone had thought possible. But how they worked was to Edison less important.
Contrary to its gentle image of later years, created largely through one of the great public relations machines in corporate history, Ma Bell in its first few decades was close to a public menace—a ruthless, rapacious, grasping “Bell Octopus,” as its enemies would describe it to the press. “The Bell Company has had a monopoly more profitable and more controlling—and more generally hated—than any ever given by any patent,” one phone company lawyer admitted.
AT&T’s savior was Theodore Vail, who became its president in 1907, just a few years after Millikan’s friend Frank Jewett joined the company.11 In appearance, Vail seemed almost a caricature of a Gilded Age executive: Rotund and jowly, with a white walrus mustache, round spectacles, and a sweep of silver hair, he carried forth a magisterial confidence. But he had in fact begun his career as a lowly telegraph operator. Thoughtfulness was his primary asset; he could see almost any side of an argument. Also, he could both disarm and outfox his detractors. As Vail began overseeing Bell operations, he saw that the costs of competition were making the phone business far less profitable than it had been—so much so, in fact, that Vail issued a frank corporate report in his first year admitting that the company had amassed an “abnormal indebtedness.” If AT&T were to survive, it had to come up with a more effective strategy against its competition while bolstering its public image.
Vail didn’t do any of this out of altruism. He saw that a possible route to monopoly—or at least a near monopoly, which was what AT&T had always been striving for—could be achieved not through a show of muscle but through an acquiescence to political supervision. Yet his primary argument was an idea. He argued that telephone service had become “necessary to existence.” Moreover, he insisted that the public would be best served by a technologically unified and compatible system—and that it made sense for a single company to be in charge of it. Vail understood that government, or at least many politicians, would argue that phone subscribers must have protections against a monopoly; his company’s expenditures, prices, and profits would thus have to be set by federal and local authorities. As a former political official who years before had modernized the U.S. Post Office to great acclaim, Vail was not hostile toward government. Still, he believed that in return for regulation Ma Bell deserved to find the path cleared for reasonable profits and industry dominance. In Vail’s view, another key to AT&T’s revival was defining it as a technological leader with legions of engineers working unceasingly to improve the system.
The Vail strategy, in short, would measure the company’s progress “in decades instead of years.” Vail also saw it as necessary to merge the idea of technological leadership with a broad civic vision. His publicity department had come up with a slogan that was meant to rally its public image, but Vail himself soon adopted it as the company’s core philosophical principle as well. It was simple enough: “One policy, one system, universal service.” That this was a kind of wishful thinking seemed not to matter.
“Of its output,” Arnold would later say of his group, “inventions are a valuable part, but invention is not to be scheduled nor coerced.” The point of this kind of experimentation was to provide a free environment for “the operation of genius.” His point was that genius would undoubtedly improve the company’s operations just as ordinary engineering could. But genius was not predictable. You had to give it room to assert itself.
From the start, Jewett and Arnold seemed to agree that at West Street there could be an indistinctness about goals. Who could know in advance exactly what practical applications Arnold’s men would devise? Moreover, which of these ideas would ultimately move from the research department into the development department and then mass production at Western Electric? At the same time, they were clear about larger goals. The Bell Labs employees would be investigating anything remotely related to human communications, whether it be conducted through wires or radio or recorded sound or visual images.
The industrial lab showed that the group—especially the interdisciplinary group—was better than the lone scientist or small team. Also, the industrial lab was a challenge to the common assumption that its scientists were being paid to look high and low for good ideas. Men like Kelly and Davisson would soon repeat the notion that there were plenty of good ideas out there, almost too many. Mainly, they were looking for good problems.
Quantum mechanics, as it was beginning to be called, was a science of deep surprises, where theory had largely outpaced the proof of experimentation. Some years later the physicist Richard Feynman would elegantly explain that “it was discovered that things on a small scale behave nothing like things on a large scale.” In the quantum world, for instance, you could no longer say that a particle has a certain location or speed. Nor was it possible, Feynman would point out, “to predict exactly what will happen in any circumstance.”
The Great Depression, as it happened, was a boon for scientific knowledge. Bell Labs had been forced to reduce its employees’ hours, but some of the young staffers, now with extra time on their hands, had signed up for academic courses at Columbia University in uptown Manhattan.
“The [Bell] System,” Danielian pointed out, “constitutes the largest aggregation of capital that has ever been controlled by a single private company at any time in the history of business. It is larger than the Pennsylvania Railroad Company and United States Steel Corporation put together. Its gross revenues of more than one billion dollars a year are surpassed by the incomes of few governments of the world. The System comprises over 200 vassal corporations. Through some 140 companies it controls between 80 and 90 percent of local telephone service and 98 percent of the long-distance telephone wires of the United States.”
The 512A was an example of how, if good problems led to good inventions, then good inventions likewise would lead to other related inventions, and that nothing was too small or incidental to be excepted from improvement. Indeed, the system demanded so much improvement, so much in the way of new products, so much insurance of durability, that new methods had to be created to guarantee there was improvement and durability amid all the novelty.
We usually imagine that invention occurs in a flash, with a eureka moment that leads a lone inventor toward a startling epiphany. In truth, large leaps forward in technology rarely have a precise point of origin. At the start, forces that precede an invention merely begin to align, often imperceptibly, as a group of people and ideas converge, until over the course of months or years (or decades) they gain clarity and momentum and the help of additional ideas and actors. Luck seems to matter, and so does timing, for it tends to be the case that the right answers, the right people, the right place—perhaps all three—require a serendipitous encounter with the right problem. And then—sometimes—a leap. Only in retrospect do such leaps look obvious.
There was something in particular about the way he [William Shockley] solved difficult problems, looking them over and coming up with a method—often an irregular method, solving them backward or from the inside out or by finding a trapdoor that was hidden to everyone else—to arrive at an answer in what seemed a few heartbeats.
By intention, everyone would be in one another’s way. Members of the technical staff would often have both laboratories and small offices—but these might be in different corridors, therefore making it necessary to walk between the two, and all but assuring a chance encounter or two with a colleague during the commute. By the same token, the long corridor for the wing that would house many of the physics researchers was intentionally made to be seven hundred feet in length. It was so long that to look down it from one end was to see the other end disappear at a vanishing point. Traveling its length without encountering a number of acquaintances, problems, diversions, and ideas would be almost impossible. Then again, that was the point. Walking down that impossibly long tiled corridor, a scientist on his way to lunch in the Murray Hill cafeteria was like a magnet rolling past iron filings.
Essentially Kelly was creating interdisciplinary groups—combining chemists, physicists, metallurgists, and engineers; combining theoreticians with experimentalists—to work on new electronic technologies.
If the ingredients in the alloy weren’t pure—if they happened to contain minute traces of carbon, oxygen, or nitrogen, for instance—Permendur would be imperfect. “There was a time not so long ago when a thousandth of a percent or a hundredth of a percent of a foreign body in a chemical mixture was looked upon merely as an incidental inclusion which could have no appreciable effect on the characteristics of the substance,” Frank Jewett, the first president of the Labs, explained. “We have learned in recent years that this is an absolutely erroneous idea.”
For Scaff and Theurer—and, in time, the rest of the solid-state team at Bell Labs—one way to think of these effects was that purity in a semiconductor was necessary. But so was a controlled impurity. Indeed, an almost vanishingly small impurity mixed into silicon, having a net effect of perhaps one rogue atom of boron or phosphorus inserted among five or ten million atoms of a pure semiconductor like silicon, was what could determine whether, and how well, the semiconductor could conduct a current. One way to think of it—a term that was sometimes used at the Labs—was as a functional impurity.
The formal purpose of the new solid-state group was not so much to build something as to understand it. Officially, Shockley’s men were after a basic knowledge of their new materials; only in the back of their minds did a few believe they would soon find something useful for the Bell System.
On November 17, Brattain and an electrochemist in the solid-state group, Robert Gibney, explored whether applying an electrolyte—a solution that conducts electricity—in a particular manner would help cut through the surface states barrier. It did. Shockley would later identify this development as a breakthrough and the beginning of what he called “the magic month.” In time, the events of the following weeks would indeed be viewed by some of the men in terms resembling enchantment—the team’s slow, methodical success effecting the appearance of preordained destiny. For men of science, it was an odd conclusion to draw. Yet Walter Brattain would in time admit he had “a mystical feeling” that what he ultimately discovered had been waiting for him.
Any Bell scientist knew about the spooky and coincidental nature of important inventions. The origins of their entire company—Alexander Bell’s race to the patent office to beat Elisha Gray and become the recognized inventor of the telephone—was the textbook case.
If an idea begat a discovery, and if a discovery begat an invention, then an innovation defined the lengthy and wholesale transformation of an idea into a technological product (or process) meant for widespread practical use. Almost by definition, a single person, or even a single group, could not alone create an innovation. The task was too variegated and involved.
“It is the beginning of a new era in telecommunications and no one can have quite the vision to see how big it is,” Mervin Kelly told an audience of telephone company executives in 1951. Speaking of the transistor, he added that “no one can predict the rate of its impact.” Kelly admitted that he wouldn’t see its full effect before he retired from the Labs, but that “in the time I may live, certainly in 20 years,” it would transform the electronics industry and everyday life in a manner much more dramatic than the vacuum tube. The telecommunications systems of the future would be “more like the biological systems of man’s brain and nervous system.” The tiny transistor had reduced dimensions and power consumption “so far that we are going to get into a new economic area, particularly in switching and local transmission, and other places that we can’t even envision now.” It seemed to be some kind of extended human network he had in mind, hazy and fantastical and technologically sophisticated, one where communications whipped about the globe effortlessly and where everyone was potentially in contact with everyone else.
He could remember, too, that as the tubes became increasingly common—in the phone system, radios, televisions, automobiles, and the like—they had come down to price levels that once seemed impossible. He had long understood that innovation was a matter of economic imperatives. As Jack Morton had said, if you hadn’t sold anything you hadn’t innovated, and without an affordable price you could never sell anything. So Kelly looked at the transistor and saw the past, and the past was tubes. He thereby intuited the future.
“A Mathematical Theory of Communication”—“the magna carta of the information age,” as Scientific American later called it—wasn’t about one particular thing, but rather about general rules and unifying ideas. “He was always searching for deep and fundamental relations,” Shannon’s colleague Brock McMillan explains. And here he had found them. One of his paper’s underlying tenets, Shannon would later say, “is that information can be treated very much like a physical quantity, such as mass or energy.”
One shouldn’t necessarily think of information in terms of meaning. Rather, one might think of it in terms of its ability to resolve uncertainty. Information provided a recipient with something that was not previously known, was not predictable, was not redundant. “We take the essence of information as the irreducible, fundamental underlying uncertainty that is removed by its receipt,” a Bell Labs executive named Bob Lucky explained some years later. If you send a message, you are merely choosing from a range of possible messages. The less the recipient knows about what part of the message comes next, the more information you are sending.
(1) All communications could be thought of in terms of information; (2) all information could be measured in bits; (3) all the measurable bits of information could be thought of, and indeed should be thought of, digitally. This could mean dots or dashes, heads or tails, or the on/off pulses that comprised PCM.
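Shannon's idea that information can be measured in bits has a precise formula: the entropy H = -Σ p·log2(p) of the message source. A minimal sketch (the coin probabilities below are illustrative, not from the book):

```python
import math

def entropy_bits(probs):
    """Shannon entropy H = -sum(p * log2(p)), measured in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin flip resolves the maximum uncertainty: exactly 1 bit.
print(entropy_bits([0.5, 0.5]))   # 1.0
# A heavily biased coin is more predictable, so each flip carries
# less information -- about 0.469 bits.
print(entropy_bits([0.9, 0.1]))
```

This matches the framing above: the less predictable the next symbol, the more information its receipt conveys.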
His calculations showed that the information content of a message could not exceed the capacity of the channel through which you were sending it. Much in the same way a pipe could only carry so many gallons of water per second and no more, a transmission channel could only carry so many bits of information at a certain rate and no more. Anything beyond that would reduce the quality of your transmission. The upshot was that by measuring the information capacity of your channel and by measuring the information content of your message you could know how fast, and how well, you could send your message. Engineers could now try to align the two—capacity and information content.
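The pipe analogy corresponds to the Shannon–Hartley formula for a band-limited noisy channel, C = B·log2(1 + S/N) bits per second. A small sketch with illustrative numbers (a voice telephone line has roughly 3 kHz of bandwidth and around 30 dB of signal-to-noise ratio):

```python
import math

def capacity_bps(bandwidth_hz, snr_db):
    """Shannon-Hartley capacity C = B * log2(1 + S/N) in bits per second."""
    snr_linear = 10 ** (snr_db / 10)  # convert dB to a power ratio
    return bandwidth_hz * math.log2(1 + snr_linear)

# A ~3 kHz, ~30 dB phone line tops out near 30 kbps -- which is why
# dial-up modems plateaued in that neighborhood.
print(round(capacity_bps(3000, 30)))
```

Sending faster than this limit necessarily degrades the transmission, exactly as the passage describes.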
Shannon’s paper contained a claim so surprising that it seemed impossible to many at the time, and yet it would soon be proven true. He showed that any digital message could be sent with virtual perfection, even along the noisiest wire, as long as you included error-correcting codes—essentially extra bits of information, formulated as additional 1s and 0s—with the original message. In his earlier paper on cryptography, Shannon had already shown that by reducing redundancy you could compress a message to transmit its content more efficiently. Now he was also demonstrating something like the opposite: that in some situations you could increase the redundancy of a message to transmit it more accurately.
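The error-correction claim can be illustrated with the simplest possible scheme, a 3x repetition code: a toy stand-in for the far more efficient codes Shannon's theorem guarantees, but it shows redundant bits buying accuracy:

```python
def encode(bits):
    """3x repetition code: transmit each bit three times (added redundancy)."""
    return [b for bit in bits for b in (bit, bit, bit)]

def decode(received):
    """Majority vote over each triple corrects any single flipped bit."""
    return [int(sum(received[i:i + 3]) >= 2)
            for i in range(0, len(received), 3)]

message = [1, 0, 1, 1]
sent = encode(message)          # 12 bits on the wire instead of 4
sent[4] ^= 1                    # channel noise flips one bit
assert decode(sent) == message  # the error is corrected on receipt
```

Extra 1s and 0s make the message longer but let the receiver recover it perfectly despite noise, the opposite trade from the compression result mentioned above.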
And yet Kelly would say at one point, “With all the needed emphasis on leadership, organization and teamwork, the individual has remained supreme—of paramount importance. It is in the mind of a single person that creative ideas and concepts are born.” There was an essential truth to this, too—John Bardeen suddenly suggesting to the solid-state group that they should consider working on the hard-to-penetrate surface states on semiconductors, for instance. Or Shockley, mad with envy, sitting in his Chicago hotel room and laying the groundwork for the junction transistor. Or Bill Pfann, who took a nap after lunch and awoke, as if from an edifying dream, with a new method for purifying germanium. Of course, these two philosophies—that individuals as well as groups were necessary for innovation—weren’t mutually exclusive. It was the individual from which all ideas originated, and the group (or the multiple groups) to which the ideas, and eventually the innovation responsibilities, were transferred.
He would acknowledge that building devices like chess-playing machines “might seem a ridiculous waste of time and money. But I think the history of science has shown that valuable consequences often proliferate from simple curiosity.” “He never argued his ideas,” Brock McMillan says of Shannon. “If people didn’t believe in them, he ignored those people.”
In truth, the handoff between the three departments at Bell Labs was often (and intentionally) quite casual. Part of what seemed to make the Labs “a living organism,” Kelly explained, were social and professional exchanges that moved back and forth, in all directions, between the pure researchers on one side and the applied engineers on the other. These were formal talks and informal chats, and they were always encouraged, both as a matter of policy and by the inventive design of the Murray Hill building.
Physical proximity, in Kelly’s view, was everything. People had to be near one another. Phone calls alone wouldn’t do. Kelly had even gone so far as to create “branch laboratories” at Western Electric factories so that Bell Labs scientists could get more closely involved in the transition of their work from development to manufacture.
Bell Labs had the advantage of necessity; its new inventions, as one of Kelly’s deputies, Harald Friis, once said, “always originated because of a definite need.”
To innovate, Kelly would agree, an institute of creative technology required the best people, Shockleys and Shannons, for instance—and it needed a lot of them, so many, as the people at the Labs used to say (borrowing a catchphrase from nuclear physics), that departments could have a “critical mass” to foster explosive ideas.
There was no precise explanation as to why this was such an effective goad, but even for researchers in pursuit of pure scientific understanding rather than new things, it was obvious that their work, if successful, would ultimately be used. Working in an environment of applied science, as one Bell Labs researcher noted years later, “doesn’t destroy a kernel of genius—it focuses the mind.”
An instigator is different from a genius, but just as uncommon. An instigator is different, too, from the most skillful manager, someone able to wrest excellence out of people who might otherwise fall short. Somewhere between Shannon (the genius) and Kelly (the manager), Pierce steered a course for himself at Bell Labs as an instigator. “I tried to get other people to do things, I’m lazy,” Pierce once told an interviewer.
Pierce’s real talent, according to Friis and Pierce himself, was in getting people interested in something that hadn’t really occurred to them before.
Pierce had been correct in some respects about the traveling wave tube’s potential. But as he came to understand, inventions don’t necessarily evolve into the innovations one might at first foresee. Humans all suffered from a terrible habit of shoving new ideas into old paradigms. “Everyone faces the future with their eyes firmly on the past,” Pierce said, “and they don’t see what’s going to happen next.”
A terrestrial signal could be directed toward the orbiting satellite in space; the satellite, much like a mirror, could in turn direct the signal to another part of the globe. Pierce didn’t consider himself the inventor of this idea; it was, he would later say, “in the air.”
“Ideas may come to us out of order in point of time,” the first director of the Rockefeller Institute for Medical Research, Simon Flexner, once remarked. “We may discover a detail of the façade before we know too much about the foundation. But in the end all knowledge has its place.”
Why move in this direction? What kind of future did the men envision? One of the more intriguing attributes of the Bell System was that an apparent simplicity—just pick up the phone and dial—hid its increasingly fiendish interior complexity. What also seemed true, and even then looked to be a governing principle of the new information age, was that the more complex the system became in terms of capabilities, speed, and versatility, the simpler and sleeker it appeared. ESS was a case in point.
“I liked Fisk very much. But the combination of Fisk, who didn’t know a lot about what was going on in the bowels of the place, and Julius, who knew everything about what was going on in the bowels of the place, was a good combination.”
Colleagues often stood amazed that Baker could recall by name someone he had met only once, twenty or thirty years before. His mind wasn’t merely photographic, though; it worked in some ways like a switching apparatus: He tied everyone he ever met, and every conversation he ever had, into a complex and interrelated narrative of science and technology and society that he constantly updated, with apparent ease.
To Pollak, this was a demonstration not of Bill Baker’s cruelty but of his acumen—in this case to push his deep belief that science rests on a foundation of inquiry rather than certainty. Also, it revealed how nimble Baker’s mind really was. “A very small number of times in my life I’ve been in the presence of somebody who didn’t necessarily answer the question I asked. They answered the question I should have asked,” Pollak says. “And Bill Baker was one of those people. And there are other people who just build a mystique and give the impression of a mystique around them. And Bill had that, too.”
New titles might not have increased his influence. By the start of the 1960s Baker was engaged in a willfully obscure second career, much like the one Mervin Kelly had formerly conducted, a career that ran not sequentially like some men’s—a stint in government following a stint in business, or vice versa—but simultaneously, so that Baker’s various jobs in Washington and his job at Bell Labs intersected in quiet and complex and multifarious ways. Baker could bring innovations in communications to the government’s attention almost instantly.
“So often,” says Ian Ross, who worked in Jack Morton’s department at Bell Labs doing transistor development in the 1950s, “the original concept of what an innovation will do”—the replacement of the vacuum tube, in this case—“frequently turns out not to be the major impact.” The transistor’s greatest value was not as a replacement for the old but as an exponent for the new—for computers, switches, and a host of novel electronic technologies.
Innovations are to a great extent a response to need.
In the wake of the 1956 agreement, AT&T appeared to be indestructible. It now had the U.S. government’s blessing. It was easily the largest company in the world by assets and by workforce. And its Bell Laboratories, as Fortune magazine had declared, was indisputably “the world’s greatest industrial laboratory.” And yet even in the 1960s and 1970s, as Bill Baker’s former deputy Ian Ross recalls, the “long, long history of worry about losing our monopoly status persisted.” To a certain extent, Bill Baker and Mervin Kelly believed their involvement in government affairs could lessen these worries. In the view of Ross and others, such efforts probably helped delay a variety of antitrust actions. Ross recalls, “Kelly set up Sandia Labs, which was run by AT&T, managed by us, and whenever I asked, ‘Why do we stay with this damn thing, it’s not our line of business,’ the answer was, ‘It helps us if we get into an antitrust suit.’ And Bell Labs did work on military programs. Why? Not really to make money. It was part of being invaluable.”
The fundamental goal in making transistor materials is purity; the fundamental goal in making fiber materials is clarity. Only then can light pass through unimpeded; or as optical engineers say, only then can “losses” of light in the fiber be kept to an acceptable minimum.
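Optical engineers typically quantify those "losses" in decibels; a minimal sketch of the arithmetic (the power figures are illustrative, not from the book):

```python
import math

def loss_db(power_in_mw: float, power_out_mw: float) -> float:
    """Attenuation in decibels: 10 * log10(input power / output power)."""
    return 10 * math.log10(power_in_mw / power_out_mw)

# If 1 mW of light enters a fiber span and 0.5 mW emerges, the span
# lost about 3 dB; "clarity" means keeping this number tiny per kilometer.
print(round(loss_db(1.0, 0.5), 2))  # 3.01
```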
Indeed, a marketing study commissioned by AT&T in the fall of 1971 informed its team that “there was no market for mobile phones at any price.” Neither man agreed with that assessment. Though Engel didn’t perceive it at the time, he later came to believe that marketing studies could only tell you something about the demand for products that actually exist. Cellular phones were a product that people had to imagine might exist.
Pierce later remarked that one thing about Kelly impressed him above all else: It had to do with how his former boss would advise members of Bell Labs’ technical staff when they were asked to work on something new. Whether it was a radar technology for the military or solid-state research for the phone company, Kelly did not want to begin a project by focusing on what was known. He would want to begin by focusing on what was not known. As Pierce explained, the approach was both difficult and counterintuitive. It was more common practice, at least in the military, to proceed with what technology would allow and fill in the gaps afterward. Kelly’s tack was akin to saying: Locate the missing puzzle piece first. Then do the puzzle.
Shannon had become wealthy, too, through friends in the technology industry. He owned significant shares in Hewlett-Packard, where his friend Barney Oliver ran the research labs, and was deeply invested in Teledyne, a conglomerate started by another friend, Henry Singleton. Shannon sat on Teledyne’s board of directors.
“Ideas and plans are essential to innovation,” he remarked, “but the time has to be right.”
“It is just plain silly,” he wrote, “to identify the new AT&T Bell Laboratories with the old Bell Telephone Laboratories just because the new Laboratories has inherited buildings, equipment and personnel from the old. The mission was absolutely essential to the research done at the old Laboratories, and that mission is gone and has not been replaced.”
At the time of the breakup, in fact, it was widely assumed in the business press that IBM and AT&T would now struggle for supremacy. What undermined such an assumption was the historical record: Everything Bell Labs had ever made for AT&T had been channeled into a monopoly business. “One immediate problem for which no amount of corporate bulk can compensate is the firm’s lack of marketing expertise,” one journalist, Christopher Byron of Time, noted. It was a wise point. Bell Labs and AT&T had “never really had to sell anything.” And when they had tried—as was the case with the Picturephone—they failed. Government regulation, as AT&T had learned, could be immensely difficult to manage and comply with. But markets, they would soon discover, were simply brutal. AT&T’s leaders, such as CEO Charlie Brown, “had never had the experience or the training to compete,” Irwin Dorros, a former Bell Labs and AT&T executive, points out. “They tried to apply the skills that they grew up with, and it didn’t work.” In later years, the downsizing at Bell Labs, in terms of both purpose and people, would mostly be linked to this inability to compete.
The purpose of innovation is sometimes defined as new technology. But the point of innovation isn’t really technology itself. The point of innovation is what new technology can do. “Better, or cheaper, or both”—Kelly’s rule—is one way to think about this goal.
A large group of physicists, certainly, created a healthy flow of ideas. But Kelly believed the most valuable ideas arose when the large group of physicists bumped against other departments and disciplines, too. “It’s the interaction between fundamental science and applied science, and the interface between many disciplines, that creates new ideas,” explains Herwig Kogelnik, the laser scientist. This may indeed have been Kelly’s greatest insight.
Eugene Kleiner, moreover, a founding partner at the premier venture capital firm Kleiner Perkins, was originally hired by Bill Shockley at his ill-fated semiconductor company. But the Silicon Valley process that Kleiner helped develop was a different innovation model from Bell Labs. It was not a factory of ideas; it was a geography of ideas. It was not one concentrated and powerful machine; it was the meshing of many interlocking small parts grouped physically near enough to one another so as to make an equally powerful machine. The Valley model, in fact, was soon so productive that it became a topic of study for sociologists and business professors. They soon bestowed upon the area the title of an “innovation hub.”
“You may find a lot of controversy over how Bell Labs managed people,” John Mayo, the former Bell Labs president, says. “But keep in mind, I don’t think those managers saw it that way. They saw it as: How do you manage ideas? And that’s very different from managing people. So if you hear something negative about how John Pierce managed people, I’d say, well, that’s not surprising. Pierce wasn’t about managing people. Pierce was about managing ideas. And you cannot manage ideas and manage people the same way. It just doesn’t work. So if somebody tells you Pierce wasn’t a great manager . . . you say, of what?”
Pierce, to put it simply, was asking himself: What about Bell Labs’ formula was timeless? In his 1997 list, he thought it boiled down to four things: A technically competent management all the way to the top. Researchers didn’t have to raise funds. Research on a topic or system could be and was supported for years. Research could be terminated without damning the researcher.
What seems more likely, as the science writer Steven Johnson has noted in a broad study of scientific innovations, is that creative environments that foster a rich exchange of ideas are far more important in eliciting important new insights than are the forces of competition.
To think long-term toward the revolutionary, and to simultaneously think near-term toward manufacturing, comprises the most vital of combinations.
What I got out of it
The dominance of AT&T and how it structured the organization to take advantage of the talent at Bell Labs was great to learn more about. The key lessons: build or invent things that will ultimately have to go to market, have a diverse group of people who interact often, and maintain “A technically competent management all the way to the top. Researchers didn’t have to raise funds. Research on a topic or system could be and was supported for years. Research could be terminated without damning the researcher.”
Kidder brings the computer revolution to life through an inside look at Data General
IBM set up two main divisions, each one representing the other’s main competition.
Herb Richman, who had helped to found Data General, said, “We did everything well.” Obviously, they did not manage every side of their business better than everyone else, but these young men (all equipped with large egos, as one who was around them at this time remarked) somehow managed to realize that they had to attend with equal care to all sides of their operation—to the selling of their machine as well as to its design, for instance. That may seem an elementary rule for making money in a business, but it is one that is easier to state than to obey. Some notion of how shrewd they could be is perhaps revealed in the fact that they never tried to hoard a majority of the stock, but used it instead as a tool for growth. Many young entrepreneurs, confusing ownership with control, can’t bring themselves to do this.
When they chose their lawyer, who would deal with the financial community for them, they insisted that he invest some of his own money in their company. “We don’t want you running away if we get in trouble. We want you there protecting your own money,”
Richman also remembered that before they entered into negotiations over their second public offering of stock, after the company had been making money for a while and the stock they’d already issued had done very well indeed, their lawyer insisted that each of the founders sell some of their holdings in the company and each “take down a million bucks.” This so that they could negotiate without the dread of losing everything (“Having to go back to your father’s gas station,” Richman called that nightmare). As for the name of the theory behind selling enough stock to become millionaires, Richman told me, “I don’t know how you put it in the vernacular. We called it the Fuck You Theory.”
“DEC owned 85 percent of the business and there was no strong number two. We had to distinguish ourselves from DEC,” Kluchman remembered. “DEC was known as a bland entity. Data General was gonna be unbland, aggressive, hustling, offering you more for your money…. We spread the idea that Data General’s salesmen were more aggressive than DEC’s, and they were, because ours worked on commissions and theirs worked on salaries. But I exaggerated the aggressiveness.” According to Kluchman, DEC actually gave them some help in setting up “the Hertz-Avis thing.” DEC’s management, he said, ordered their salesmen to warn their customers against Data General. “It was great! Because their customers hadn’t heard about us.”
Where did the risks lie? Where could a company go badly wrong? In many cases, a small and daily growing computer company did not fall on hard times because people suddenly stopped wanting to buy its products. On the contrary, a company was more likely to asphyxiate on its own success. Demand for its products would be soaring, and the owners would be drawing up optimistic five-year plans, when all of a sudden something would go wrong with their system of production.
You did not have to be the first company to produce the new kind of machine; sometimes, in fact, it was better not to be the first. But you had to produce yours before the new market really opened up and customers had made other marriages. For once they are lost, both old and prospective customers are often gone for good.
Some of the engineers closest to West suspected that if he weren’t given a crisis to deal with once in a while, he would create one. To them he seemed so confident and happy in an emergency.
By the mid-1960s, a trend that would become increasingly pronounced was already apparent: while the expense of building a computer’s hardware was steadily declining, the cost of creating both user and system software was rising. In an extremely bold stroke, IBM took advantage of the trend. They announced, in the mid-sixties, all at one time, an entire family of new computers—the famous 360 line. In the commerce of computers, no single event has had wider significance, except for the invention of the transistor. Part of the 360’s importance lay in the fact that all the machines in the family were software compatible.
Software compatibility is a marvelous thing. That was the essential lesson West took away from his long talks with his friend in Marketing. You didn’t want to make a machine that wasn’t compatible, not if you could avoid it. Old customers would feel that since they’d need to buy and create all new software anyway, they might as well look at what other companies had to offer, they’d be likely to undertake the dreaded “market survey.” And an incompatible machine would not make it easy for new customers to buy both 16-bit Eclipses and the new machine.
Kludge is perhaps the most disdainful term in the computer engineer’s vocabulary: it conjures up visions of a machine with wires hanging out of it, of things fastened together with adhesive tape.
West had a saying: “The game around here is getting a machine out the door with your name on it.”
Cray was a legend in computers, and in the movie Cray said that he liked to hire inexperienced engineers right out of school, because they do not usually know what’s supposed to be impossible. West liked that idea. He also realized, of course, that new graduates command smaller salaries than experienced engineers. Moreover, using novices might be another way in which to disguise his team’s real intentions. Who would believe that a bunch of completely inexperienced engineers could produce a major CPU to rival North Carolina’s?
West invented the term, not the practice—was “signing up.” By signing up for the project you agreed to do whatever was necessary for success. You agreed to forsake, if necessary, family, hobbies, and friends—if you had any of these left (and you might not if you had signed up too many times before). From a manager’s point of view, the practical virtues of the ritual were manifold. Labor was no longer coerced. Labor volunteered. When you signed up you in effect declared, “I want to do this job and I’ll give it my heart and soul.”
How do such moments occur? “Hey,” Wallach said, “no one knows how that works.” He remembered that during the time when he was working on the Navy computer for Raytheon—the one that got built and then scrapped—he was at a wedding and the solution to a different sort of problem popped into his mind. He wrote it down quickly on the cover of a matchbook. “I will be constantly chugging away in my mind,” he explained, “making an exhaustive search of my data bank.”
Much of the engineering of computers takes place in silence, while engineers pace in hallways or sit alone and gaze at blank pages. Alsing favored the porch and staring out at trees. When writing code, he said, he often felt that he was playing an intense game of chess with a worthy opponent. He went on: “Writing microcode is like nothing else in my life. For days there’s nothing coming out. The empty yellow pad sits there in front of me, reminding me of my inadequacy. Finally, it starts to come. I feel good. That feeds it, and finally I get into a mental state where I’m a microcode-writing machine. It’s like being in Adventure. Adventure’s a completely bogus world, but when you’re there, you’re there. “You have to understand the problem thoroughly and you have to have thought of all the myriad ways in which you can put your microverbs together. You have a hundred L-shaped blocks to build a building. You take all the pieces, put them together, pull them apart, put them together. After a while, you’re like a kid on a jungle gym. There are all these constructs in your mind and you can swing from one to the other with ease.
“West’s not a technical genius. He’s perfect for making it all work. He’s gotta move forward. He doesn’t put off the tough problem, the way I do. He’s fearless, he’s a great politician, he’s arbitrary, sometimes he’s ruthless.”
“One never explicitly plays by these rules.” And West remarked that there was no telling which rules might be real, because only de Castro made the rules that counted, and de Castro was once quoted as saying, “Well, I guess the only good strategy is one that no one else understands.”
Not Everything Worth Doing Is Worth Doing Well.
there’s no such thing as a perfect design. Most experienced computer engineers I talked to agreed that absorbing this simple lesson constitutes the first step in learning how to get machines out the door. Often, they said, it is the most talented engineers who have the hardest time learning when to stop striving for perfection. West was the voice from the cave, supplying that information: “Okay. It’s right. Ship it.”
In fact, the team designed the computer in something like six months, and may have set a record for speed. The task was quite complex.
That fall West had put a new term in his vocabulary. It was trust. “Trust is risk, and risk avoidance is the name of the game in business,” West said once, in praise of trust. He would bind his team with mutual trust, he had decided. When a person signed up to do a job for him, he would in turn trust that person to accomplish it; he wouldn’t break it down into little pieces and make the task small, easy and dull.
“With Tom, it’s the last two percent that counts. What I now call ‘the ability to ship product’—to get it out the door.”
Rasala liked a contentious atmosphere, a vigorous, virile give-and-take among himself and his crew. “Smart, opinionated and nonsensitive, that’s a Hardy Boy,” he declared. Above all, Rasala wanted around him engineers who took an interest in the entire computer, not just in the parts that they had designed.
Firth had just begun to study programming, but the error was “just obvious” to him. Remembering this incident years later, Firth said that the engineer had probably been “programming by rote. He wanted to make his program look like programs he’d seen before, and that clearly wasn’t gonna work.” Firth always tried to avoid such an approach. “I like to work around ‘why,’ ” he told me. “I prefer not to know the established limits and what other people think, when I start a project.”
He also said: “No one ever pats anybody on the back around here. If de Castro ever patted me on the back, I’d probably quit.”
The clerk had some trouble figuring what the beer we bought ought to cost, and as we left, West said, out of her earshot, “Ummmmh, one of the problems with machines like that. You end up making people so dumb they can’t figure out how many six-packs are in a case of beer.”
West didn’t seem to like many of the fruits of the age of the transistor. Of machines he had helped to build, he said, “If you start getting interested in the last one, then you’re dead.” But there was more to it. “The old things, I can’t bear to look at them. They’re clumsy. I can’t believe we were that dumb.” He spoke about the rapidity with which computers became obsolete. “You spend all this time designing one machine and it’s only a hot box for two years, and it has all the useful life of a washing machine.” He said, “I’ve seen too many machines.” One winter night, at his home, while he was stirring up the logs in his fireplace, he muttered, “Computers are irrelevant.”
“It doesn’t matter how hard you work on something,” says Holberger. “What counts is finishing and having it work.”
“I get quite a lot of work done in the morning while taking a shower,” says Veres. “Showers are kinda boring things, all things considered.” Now in the shower, before leaving for work, he conceives a new approach.
“The way West was with us, it provided a one-level separation—someone far enough away to lay blame on.”
At one point, Jim Guyer said: “We didn’t get our commitment to this project from de Castro or Carman or West. We got it from within ourselves. Nobody told us we had to put extra effort into the project.” Ken Holberger burst out laughing. Guyer raised his voice. “We got it from within ourselves to put extra effort in the project.” Laughing hard, Holberger managed to blurt out, “Their idea was piped into our minds!” “The company didn’t ask for this machine,” cried Guyer. “We gave it to them. We created that design.” Others raised their voices. Quietly, Rasala said, “West created that design.”
Engineers are supposed to stand among the privileged members of industrial enterprises, but several studies suggest that a fairly large percentage of engineers in America are not content with their jobs. Among the reasons cited are the nature of the jobs themselves and the restrictive ways in which they are managed. Among the terms used to describe their malaise are declining technical challenge; misutilization; limited freedom of action; tight control of working patterns.
“He set up the opportunity and he didn’t stand in anyone’s way. He wasn’t out there patting people on the back. But I’ve been in the world too long and known too many bosses who won’t allow you the opportunity. He never put one restriction on me. Tom allowed me to take a role where I could make things happen. What does a secretary do? She types, answers the phone, and doesn’t put herself out too much. He let me go out and see what I could get done. You see, he allowed me to be more than a secretary there.
West never passed up an opportunity to add flavor to the project. He helped to transform a dispute among engineers into a virtual War of the Roses. He created, as Rasala put it, a seemingly endless series of “brushfires,” and got his staff charged up about putting them out. He was always finding romance and excitement in the seemingly ordinary. He welcomed a journalist to observe his team; and how it did delight him when one of the so-called kids remarked to me, “What we’re doing must be important, if there’s a writer covering it.”
West sits in his office and declares, “The only way I can do this machine is in this crazy environment, where I can basically do it any way that I want.”
Steve Wallach gave the speech he had once dreaded, describing Eagle’s architecture to a jury of peers, at a meeting of a society of computer professionals, and when he was done, they got up and applauded—“the ultimate reward,” he said.
What I got out of it
Really insightful read on a company and time I didn’t know much about. West seems to have been an exceptional leader, someone who was able to inspire his team to do amazing things quickly, ship them out the door, and make his idea their idea – the keystone for any leader
Susan Crawford goes into why fiber is superior to copper and cable, and why making it ubiquitous in the US is so important.
What China, Singapore, the Nordic countries, Korea, Japan, and Hong Kong have that other developed countries don’t is last-mile fiber going into the homes of residents. If copper wire is a 2-inch-wide pipe, the fiber being used in these countries is like a 15-mile-wide river – that is how superior data transmission on fiber is compared to copper
The US is falling behind in this instantaneous connectivity which could hurt us as other countries such as China move ahead and are able to iterate and innovate faster with nearly 0 latency connectivity. Just like the installation of electric lightbulbs was a wedge for other electric appliances and innovations, this instant connectivity will open up huge markets
The problem with fiber is not capacity or longevity (electricity, water, and hardly anything else can mess with it) but distribution. Everybody who wants access to the fiber has to be directly coupled into it, or close enough to it that wireless signals can cover the remaining distance
Only about 14% of connections in the US are fiber based, whereas fiber is the norm in Singapore. In addition, US fiber connections are extremely expensive and difficult to come by unless you live in a very rich area. The US was the global leader when it came to copper but is falling far behind in fiber compared to other developed nations. The problem lies in latency and scarcity.
Copper has to be close to the central source and is subject to interference. Cable will never be as frictionless as glass. Glass doesn’t have to be amplified and can pump way more data than copper or cable; it is also more flexible and easier to upgrade
The world is going wireless but fiber is still vital. Wireless needs wires to travel any distance whatsoever. They are complementary, mutually beneficial. Only fiber will be able to handle the flood of data that comes when everyone is connected, mobile and able to access constant and fast connectivity
5G is hoping to use multipass encoding over the 10 MHz band, which will help bypass the Shannon Limit and encode more information on the same frequency. However, this requires a huge investment in towers or base stations. For example, AT&T needed 30,000 base stations to amplify, encode, and send out the signals for its 4G rollout, but it is estimated that 10 million base stations would be needed for full 5G coverage. Only fiber can handle the capacity needed to make this happen. It may sound paradoxical, but the future of wireless depends on fiber
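The Shannon Limit mentioned here caps how much data a channel of a given bandwidth and signal-to-noise ratio can carry; a rough sketch of that ceiling for a 10 MHz channel (the SNR value is an assumption for illustration):

```python
import math

def shannon_capacity_bps(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon-Hartley limit: C = B * log2(1 + SNR), in bits per second."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# A 10 MHz channel at a linear SNR of 1023 (roughly 30 dB) tops out near
# 100 Mbps no matter how clever the encoding -- one reason 5G needs many
# more, closer base stations (shorter links, better SNR) fed by fiber.
print(shannon_capacity_bps(10e6, 1023) / 1e6)  # 100.0
```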
Stockholm is leading the way in creating ubiquitous, cheap, and fast connectivity. The city considers connectivity a basic right; the government paid for laying down the infrastructure and then leased the fiber out to companies to recoup its investment. This has been immensely profitable – throwing off nearly $30 million in free cash flow per year, which is being used to expand the service and to subsidize other city goals
The cities and countries who are able to make fiber utility-like will have a leg up in terms of economic growth and innovation
The great capital investment needed to install fiber is sometimes the choking point, and the return is often not directly measurable. Like electricity did for electrical innovations, constant connectivity from fiber spurs creativity, innovation, and growth, which are the backbone of a healthy and growing economy
Although laying down fiber is capital intensive, 80% of the cost is from labor, which means a lot of jobs would be created. Also, although it typically takes close to 10 years to repay the initial investment, the returns after that are quite healthy; since additional investment is limited, it is mostly straight cash flow from then on
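As a rough sketch of that payback arithmetic (all dollar figures below are hypothetical, not from the book):

```python
def payback_years(capex: float, annual_net_cash_flow: float) -> float:
    """Years of net cash flow needed to recoup the up-front build cost."""
    return capex / annual_net_cash_flow

capex_per_home = 1000.0            # hypothetical all-in build cost per home
labor_cost = 0.8 * capex_per_home  # the passage's 80% labor share
print(labor_cost)                            # 800.0
print(payback_years(capex_per_home, 100.0))  # 10.0
```

After the hypothetical 10-year payback, ongoing costs are small, which is why the later years look like nearly straight cash flow.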
Upfront capital costs are big negatives as are current state laws and regulations
The author ends with a good analogy between how railroads, and later the highways, served to open up the country and spur growth. Fiber is the next highway we need to build
What I got out of it
Thought the author was a bit dramatic and repeated points but the central point is important. Ubiquity of fast connectivity spurs innovation and creativity, the backbone of a healthy economy
Blitzscaling: The Lightning-Fast Path to Building Massively Valuable Companies by Reid Hoffman, Chris Yeh
Blitzscaling is when you put speed over efficiency, even in the face of uncertainty. This constant and fast feedback will help you adapt, evolve, and move forward faster than your competitors. Getting this feedback early and moving quickly on it is the name of the game – especially if you are a platform and have a two-sided model. Blitzscaling is a risky decision but, if your competitor has taken this path, it is less risky than doing nothing. This book walks you through how to do it, when to do it, why to do it, and what it looks like. The cost and inefficiencies are worth it because the downside of not doing it when new technology enables it is far greater – irrelevance.
Blitzscaling will help you make better decisions in environments where speed is the ultimate superpower
Blitzscaling works as both offense and defense – you can catch people off guard, and if you don’t do it, you might not survive. You can leverage your initial competitive advantage into a long-term one before the market and competitors can respond. You can get easier access to capital, as investors prefer to back market leaders, allowing you to further your advantage over competitors. Blitzscaling allows you to set the playing field to your advantage.
McKinsey found that companies with 60% growth when they reached $100 million in revenues were 8x more likely to reach $1 billion than those growing at 20% at a similar size. They have first-scaler advantage. At this point, the ecosystem around this massive company recognizes it as the market leader and shifts its behavior to better suit it, which leads to positive feedback loops
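The compounding behind that McKinsey finding is easy to verify; a quick sketch (the 60% and 20% rates come from the passage, everything else is just arithmetic):

```python
def years_to_target(start: float, target: float, growth_rate: float) -> int:
    """Whole years of compounding at growth_rate for start to reach target."""
    years, revenue = 0, start
    while revenue < target:
        revenue *= 1 + growth_rate
        years += 1
    return years

# From $100M to $1B: ~5 years at 60% growth vs ~13 years at 20% growth,
# a head start that lets the fast grower lock in first-scaler advantage.
print(years_to_target(100e6, 1e9, 0.60))  # 5
print(years_to_target(100e6, 1e9, 0.20))  # 13
```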
Startups, just like certain materials and chemicals, go through phase changes. A dominant global leader is not simply a startup times a thousand; it is a fundamentally different machine. Just as ice skates are useless on water, the same tactics used in the startup may be useless once you have achieved product-market fit.
The five phases of blitzscaling: family, tribe, village, city, nation
The first step is creating a business model that can grow. This sounds elementary, but it’s amazing how many startup founders miss this simple piece. You must have a business model that can scale or else it’ll break before you can reach dominance. Business model innovation is more important than most people think, as technology today is not the differentiator it used to be. Most great startups are like Tesla, which combined existing technologies in a unique and special way, rather than like SpaceX, which had to invent its own technologies
Blitzscaling is a strategic innovation and hurls much common wisdom out the window. Founders should only blitzscale when they determine that the most important factor in their company’s survival is speed into the market. It is a big bet but can pay off handsomely.
The revenue model doesn’t have to be perfect when you do it. Your only goal is to scale into a market that is winner-take-all or winner-take-most. However, a company should not blitzscale if product-market fit isn’t there or if the business model isn’t there
You should blitzscale when there’s a big new opportunity, when the size of the market and gross margins overlap to create potential huge value. You should also blitzscale when there is no dominant market leader or oligopoly that controls the market
Another way to think about blitzscaling is by climbing learning curves faster than others
Blitzscaling is not meant to go on forever. You should stop when growth slows, when average revenue per employee slows, when gross margins begin to decline, and when other similar leading indicators show that your growth is slowing. You should also slow down when you are reaching the upper bounds of a market
In blitzscaling mode, raise more cash (much more cash) than you think you’ll need. Typically you should try to raise enough money for 18 to 24 months of survival. When trying to raise money, nothing is more powerful than not needing it. Only spend money on things which are life or death if not solved
As startups blitzscale, they have to balance responsibility with their power
Try to partner with currently blitzscaling companies or companies which have blitzscaled in the past
The role and skills needed by the CEO and top management are different for every level and size of the startup. It is never static and is always changing
Business model growth factors
Market size – paying customers, great distribution, fixed and expanding margins
Distribution – leveraging existing networks, virality
High gross margins – more revenues lead to more cash on hand which can be put to use, more attractive to investors
Network effects – direct, indirect, two-sided, local, compatibility and standards
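One common way to see why direct network effects compound, sketched with Metcalfe-style pair counting (an illustration, not something from the book):

```python
def possible_connections(users: int) -> int:
    """Pairs of users who could interact: n * (n - 1) / 2."""
    return users * (users - 1) // 2

# Doubling the user base roughly quadruples the possible connections,
# which is why each new user makes the product more valuable to the rest.
print(possible_connections(1000))  # 499500
print(possible_connections(2000))  # 1999000
```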
There are two growth limiters: product-market fit and operational scalability
8 key transitions
From small teams to large teams. This can be a tough psychological change for founders and early employees, as it is now impossible to be part of every decision and have clarity into everything
From generalists to specialists
From managers to executives. Executives organize and lead managers, and managers execute day-to-day operations. Hire people who are known to at least one current team member, start them small and let them prove their value and gain others’ trust, then think about expanding their scope
From dialogue to broadcasting. Establishing formal and consistent communication is extremely important as you grow. Chesky sends out a weekly email on Sunday nights which highlights growth metrics but also gives the team updates and clarity on how the company is doing and other important topics so that everyone continues to feel involved
From improvisation and inspiration to data. At the beginning you have no customers to listen to, but over time you have to track team metrics and analyze the data so that you can improve and adapt. Track the number of users, raw engagement, and churn to begin with, and then customize and go deeper as is necessary for your product or service. No company should have more than 3 to 5 metrics, as more tends to lead to confusion. It doesn’t necessarily matter what data you collect but what data gets presented to decision-makers.
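As an illustration of why churn belongs among those 3 to 5 metrics, a minimal sketch with made-up numbers:

```python
def retained_after(users: int, monthly_churn: float, months: int) -> int:
    """Users still active after losing `monthly_churn` of them each month."""
    return round(users * (1 - monthly_churn) ** months)

# A seemingly small 5% monthly churn quietly erases nearly half a cohort
# within a year -- exactly the kind of signal a small dashboard should
# surface to decision-makers early.
print(retained_after(10_000, 0.05, 12))  # 5404
```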
From single threading to multithreading. The author doesn’t know of one startup that didn’t start out as singularly focused. They can branch out from there, but it is important to have a deep focus when you’re first getting started
From pirate to navy. From continuous offense to a
blend of offense and defense. You must strike a balance between the power
of being small and nimble and the benefits of being large and having
scale. Much like JBS Haldane stipulates, you are fundamentally different
when you scale. You can’t run a city the same way you run a tribe and
you can’t run a nation the same way you run a city
From founder to leader. Your role as the founder will
change as the company scales and grows and you must adapt to it or you
won’t be serving the company as it needs you to. You have to keep your
personal learning curve ahead of the business’s growth curve. There are
three ways to scale yourself: delegation, amplification, and simply
making yourself better
Do things which don’t scale when you’re growing quickly.
It may be better to use a hack that you’ll have to throw away later than
to take your time building an elegant piece of software
Ignore your customers, at least at this stage in your
growth. You have to provide whatever customer service you can that
doesn’t slow you down – most likely this will be no customer service. However,
you cannot ignore culture: a strong culture is really important and is
defined by consistent values and actions across the company. The
executive in charge of the functional area which drives the culture of
the company tends to be the natural successor to the CEO
Awesome analysis of Zara, the clothing retailer, which uses
blitzscaling techniques. Although it is a retailer, Zara uses speed to
its advantage and focuses less on efficiency
Incumbents have some natural advantages, such as scale,
the power and resources to continuously innovate, longevity, and mergers and
acquisitions, but the disadvantages include poorly aligned incentives, managerial
overhang, lack of risk appetite, and public pressure since they’re publicly traded
A good way to gauge risk is by thinking through knowns
versus unknowns and systemic versus non-systemic risk. You must act
immediately if there’s some big known, systemic risk; do something short-term
to solve the problem; note the problem now so that you can solve it
later; or let it burn (if unknown and non-systemic)
Instability and change are the new norm and the only
way to thrive is to know that you have to adapt faster than the change
around you. Be an infinite learner, be a first responder who is
willing and able to act, veer towards industries, people, and companies
that are biased towards blitzscaling as this is where the fastest and
biggest growth lies
Real value is created when innovative technologies
allow for innovative products/services with an innovative business
model to emerge
It’s important to differentiate between first mover
advantage and first scaler advantage. First movers often die but
successful first scalers tend to achieve a very powerful position
Do everything by hand until it’s too painful. Then automate
Common patterns of dominant businesses:
Bits versus atoms (software/digital rather than physical)
Free or freemium
Newsfeeds which drive user engagement and retention
You must focus on adaptation rather than optimization
You should always have a Plan A, a Plan B that
you can fall back on in case your first option doesn’t work out, and
a Plan Z in case worst-case scenarios come up
In the early days prioritize hiring those who can add
value immediately and not the absolute perfect candidate
Tolerate bad management. At the beginning it is more
important to move quickly than to have perfect organization and processes
Launch a product that embarrasses you. You don’t want
to wait so long until it’s perfect; you want to get it out and see what the
market thinks of it
You have to listen to your customers – not only to what
they say; you must also learn when to ignore them
You have to know which fires to fight, which ones to
say no to, and which ones you actually have some control over. Only then
can you know which problems to tackle and in which order. Distribution,
product, customer service, and operations are some of the most important areas
What I got out of it
Blitzscaling is the
pursuit of growth and speed, even in the face of uncertainty. It is a big
gamble but is necessary sometimes if coming to market first, fastest, and
biggest gives you a shot at owning a big market. A great playbook for anybody
thinking about pursuing this strategy
The idea of increasing returns has come up every few decades but Brian Arthur’s precise and fully-modeled papers caused us to clearly understand what kinds of models have what kinds of implications. One outstanding characteristic of Arthur’s viewpoint is that it is emphatically dynamic in nature. Learning by using or doing plays an essential role, as opposed to static examples of returns to scale (those based on volume-area relations). The object of study is a history. Another distinctive feature of most of the work is its stochastic character. This permits emphasis on the importance of random deviations for long-run tendencies. Other tendencies include the multiplicity of possible long-run states, depending on initial conditions and on random fluctuations over time, and the specialization (in terms of process or geographical location) in an outcome achieved. Increasing returns may also serve as a reinforcement for early leading positions and so act in a manner parallel to more standard forms of increasing returns. A similar phenomenon occurs even in individual learning, where again successes reinforce some courses of action and inhibit others, thereby causing the first to be used more intensively, and so forth. There are in all of these models opposing tendencies, some toward achieving an optimum, some toward locking in on inefficient forms of behavior.
The papers here reflect two convictions I have held since I started work in this area. The first is that increasing returns problems tend to show common properties and raise similar difficulties and issues wherever they occur in economics. The second is that the key obstacle to an increasing returns economics has been the “selection problem” – determining how an equilibrium comes to be selected over time when there are multiple equilibria to choose from. Thus the papers here explore these common properties – common themes – of increasing returns in depth. And several of them develop methods, mostly probabilistic, to solve the crucial problem of equilibrium selection.
Arthur studied electrical engineering so he was vaguely familiar with positive feedback already, and became more intrigued when he read about the history of the discovery of the structure of DNA; he read whatever he could about molecular biology and enzyme reactions and followed these threads back to the domain of physics. In this work, outcomes were not predictable, problems might have more than one solution, and chance events might determine the future rather than be averaged away. The key to this work, I realized, lay not in the domain of the science it was dealing with, whether laser theory, or thermodynamics, or enzyme kinetics. It lay in the fact that these were processes driven by some form of self-reinforcement, or positive feedback, or cumulative causation – processes, in economics terms, that were driven by nonconvexities. Here was a framework that could handle increasing returns.
Great discoveries tend to come from outside the field
Polya Process – path-dependent process in probability theory
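The Polya urn is the canonical toy model of such a path-dependent process: each draw reinforces the color drawn, so early chance draws steer the long-run proportions. A minimal simulation sketch (the initial urn contents, step count, and seeds are illustrative assumptions):

```python
import random

def polya_urn(steps, seed):
    """Simulate a Polya urn: draw a ball at random, return it
    plus one more of the same color. Early draws get amplified."""
    rng = random.Random(seed)
    red, black = 1, 1  # start with one ball of each color
    for _ in range(steps):
        if rng.random() < red / (red + black):
            red += 1
        else:
            black += 1
    return red / (red + black)

# Different seeds settle at different long-run shares of red:
for seed in range(5):
    print(round(polya_urn(10_000, seed), 3))
```

For this classic urn the limiting share is itself random (uniformly distributed over [0, 1]), which is exactly the multiplicity of possible outcomes the surrounding text describes: the process always converges, but history decides where.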
In looking back on the difficulties in publishing these papers, I realize that I was naive in expecting that they would be welcomed immediately in the journals. The field of economics is notoriously slow to open itself to ideas that are different. The problem, I believe, is not that journal editors are hostile to new ideas. The lack of openness stems instead from a belief embedded deep within our profession that economics consists of rigorous deductions based on a fixed set of foundational assumptions about human behavior and economic institutions. If the assumptions that mirror reality are indeed etched in marble somewhere, and apply uniformly to all economics problems, and we know what they are, there is of course no need to explore the consequences of others. But this is not the case. The assumptions economists need to use vary with the context of the problem and cannot be reduced to a standard set. Yet, at any time in the profession, a standard set seems to dominate. I am sure this state of affairs is unhealthy. It deters many economists, especially younger ones, from attempting approaches or problems that are different. It encourages use of the standard assumptions in applications where they are not appropriate. And it leaves us open to the charge that economics is rigorous deduction based upon faulty assumptions. At this stage of its development economics does not need orthodoxy and narrowness; it needs openness and courage.
I did not set out with an intended direction, but if I have had a constant purpose it is to show that transformation, change, and messiness are natural in the economy. The increasing-returns world in economics is a world where dynamics, not statics, are natural; a world of evolution rather than equilibrium; a world of probability and chance events. Above all, it is a world of process and pattern change
Positive Feedbacks in the Economy
Diminishing returns, what conventional economic theory is built around, imply a single economic equilibrium point for the economy, but positive feedback – increasing returns – makes for many possible equilibrium points. There is no guarantee that the particular economic outcome selected from among the many alternatives will be the “best” one. Furthermore, once random economic events select a particular path, the choice may become locked-in regardless of the advantages of the alternatives
Increasing returns do not apply across the board – agriculture and mining (resource-based portions) – are subject to diminishing returns caused by limited amounts of fertile land or high quality deposits. However, areas of the economy which are knowledge-based are largely subject to increasing returns. Even the production of aircraft is subject to increasing returns – it takes a large initial investment but each plane after that is only a fraction of the initial cost. In addition, producing more units means gaining more experience in the manufacturing process and achieving greater understanding of how to produce additional units even more cheaply. Moreover, experience gained with one product or technology can make it easier to produce new products incorporating similar or related technologies. Not only do the costs of producing high-technology products fall as a company makes more of them, but the benefits of using them increase. Many items such as computers or telecommunications equipment work in networks that require compatibility; when one brand gains a significant market share, people have a strong incentive to buy more of the same product so as to be able to exchange information with those using it already.
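The falling-cost claim for aircraft and other high-tech products is often formalized as Wright's experience curve: each doubling of cumulative output multiplies unit cost by a fixed factor. A sketch, where the first-unit cost and the 80% learning rate are illustrative assumptions rather than figures from the text:

```python
import math

def unit_cost(n, first_unit_cost=100.0, learning_rate=0.80):
    """Wright's law: each doubling of cumulative output n multiplies
    unit cost by learning_rate (0.80 means 20% cheaper per doubling)."""
    exponent = math.log2(learning_rate)  # negative for rates below 1
    return first_unit_cost * n ** exponent

# Unit 2 costs 80% of unit 1, unit 4 costs 80% of unit 2, and so on:
print(unit_cost(1), unit_cost(2), unit_cost(4))  # 100.0, 80.0, 64.0
```

The large initial investment sits outside this formula; the curve captures only the per-unit cost decline from accumulated production experience that the passage describes.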
Timing is important too in the sense that getting into an industry that is close to being locked in makes little sense. However, early superiority does not correlate with long term fitness
Like punctuated equilibrium, most of the time the perturbations are averaged away but once in a while they become all important in tilting parts of the economy into new structures and patterns that are then preserved and built on in a fresh layer of development
Competing technologies, increasing returns, and lock-in by historical events
There is an indeterminacy of outcome, nonergodicity (path dependence where small events cumulate to cause the systems to gravitate towards that outcome rather than others). There may be potential inefficiency and nonpredictability. Although individual choices are rational, there is no guarantee that the side selected is, from any long term viewpoint, the better of the two. The dynamics thus take on an evolutionary flavor with a “founder effect” mechanism akin to that in genetics
Path dependent processes and the emergence of macrostructure
Many situations dominated by increasing returns are most usefully modeled as dynamic processes with random events and natural positive feedbacks or nonlinearities. We call these nonlinear Polya processes and show that they can model a wide variety of increasing returns and positive feedback problems. In the presence of increasing returns or self-reinforcement, a nonlinear Polya process typically displays a multiplicity of possible asymptotic outcomes. Early random fluctuations cumulate and are magnified or attenuated by the inherent nonlinearities of the process. By studying how these build up as the dynamics of the process unfold over time, we can observe how an asymptotic outcome becomes “selected” over time
Very often individual technologies show increasing returns to adoption – the more they are adopted, the more is learned about them; in turn, the more they are improved and the more attractive they become. Very often, too, there are several technologies that compete for shares of a “market” of potential adopters
Industry location patterns and the importance of history
This study indeed shows that it is possible to put a theoretical basis under the historical-accident-plus-agglomeration argument (mostly arbitrary location for determining where a city is established but then more people flock to it, it receives more investment, more buildings come up, etc. which leads to agglomeration and increasing returns).
When a prospective buyer is making purchasing decisions among several available technically-based products, choosing among different computer workstations, say, they often augment whatever publicly available information they can find by asking previous purchasers about their experiences – which product they chose, and how it is working for them. This is a natural and reasonable procedure; it adds information that is hard to come by otherwise. But it also introduces an “information feedback” into the process whereby products compete for market share. The products new purchasers learn about depend on which products the previous purchasers “polled” or sampled and decided to buy. They are therefore likely to learn more about a commonly purchased product than one with few previous users. Hence, where buyers are risk-averse and tend to favor products they know more about, products that by chance win market share early on gain an information-feedback advantage. Under certain circumstances a product may come to dominate by this advantage alone. This is the information contagion phenomenon
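The information-contagion process described above can be sketched as a simulation (poll size, market seeding, and tie-breaking are illustrative assumptions): each new buyer polls a few random earlier purchasers and, being risk-averse, buys whichever product they heard about most.

```python
import random

def information_contagion(n_buyers, poll_size=5, seed=0):
    """Each buyer samples `poll_size` earlier purchases at random
    and buys the product mentioned most often (ties -> coin flip).
    A product that wins early share gets polled more, so it wins more:
    an information-feedback advantage unrelated to product quality."""
    rng = random.Random(seed)
    purchases = ["A", "B"]  # seed the market with one sale of each
    for _ in range(n_buyers):
        polled = [rng.choice(purchases) for _ in range(poll_size)]
        a_votes = polled.count("A")
        if a_votes > poll_size - a_votes:
            purchases.append("A")
        elif a_votes < poll_size - a_votes:
            purchases.append("B")
        else:
            purchases.append(rng.choice("AB"))
    return purchases.count("A") / len(purchases)

# Final share of A across runs tends to cluster away from 0.5:
print([round(information_contagion(2000, seed=s), 2) for s in range(5)])
```

Note that the two products are identical here; any dominance that emerges comes entirely from the feedback in what buyers happen to hear about, which is the point of the passage.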
Self-Reinforcing Mechanisms in Economics
Dynamical systems of the self-reinforcing or autocatalytic type – systems with local positive feedbacks – in physics, chemical kinetics, and theoretical biology tend to possess a multiplicity of asymptotic states or possible “emergent structures”. The initial starting state combined with early random events or fluctuations acts to push the dynamics into the domain of one of these asymptotic states and thus to “select” the structure that the system eventually “locks into”.
Self-reinforcing mechanisms are variants of or derive from four generic sources:
Large set up or fixed costs (which give the advantage of falling unit costs to increased output)
Learning effects (which act to improve products or lower their cost as their prevalence increases)
Coordination effects (which confer advantages to “going along” with other economic agents taking similar action)
Self-reinforcing expectations (where increased prevalence on the market enhances beliefs of further prevalence)
Besides these 4 properties, we might note other analogies with physical and biological systems. The market starts out even and symmetric, yet it ends up asymmetric: there is “symmetry breaking.” An “order” or pattern in market shares “emerges” through initial market “fluctuations.” The two technologies compete to occupy one “niche” and the one that gets ahead exercises “competitive exclusion” on its rival. And if one technology is inherently superior and appeals to a larger proportion of purchasers, it is more likely to persist: it possesses “selectional advantage.”
Some more characteristics: multiple equilibria (multiple “solutions” are possible but the outcome is indeterminate, not unique and predictable); possible inefficiency, lock-in, path dependence
We can say that the particular equilibrium is locked in to a degree measurable by the minimum cost to effect changeover to an alternative equilibrium. In many economic systems, lock-in happens dynamically, as sequential decisions “groove” out an advantage that the system finds it hard to escape from. Exiting lock-in is difficult and depends on the degree to which the advantages accrued by the inferior “equilibrium” are reversible or transferable to an alternative one. It is difficult when learning effects and specialized fixed costs are the source of reinforcement. Where coordination effects are the source of lock-in, often advantages are transferable. As long as each user has certainty that the others also prefer the alternative, each will decide independently to “switch”. Inertia must be overcome though because few individuals dare change in case others do not follow
Path Dependence, Self-Reinforcement, and Human Learning
There is a strong connection between increasing returns mechanisms and learning problems. Learning can be viewed as competition among beliefs or actions, with some reinforced and others weakened as fresh evidence and data are obtained. But as such, the learning process may then lock-in to actions that are not necessarily optimal nor predictable, by the influence of small events
What makes this iterated-choice problem interesting is the tension between exploitation of knowledge gained and exploration of poorly understood actions. At the beginning many actions will be explored or tried out in an attempt to gain information on their consequences. But in the desire to gain payoff, the agent will begin to emphasize or exploit the “better” ones as they come to the fore. This reinforcement of “good” actions is both natural and economically realistic in this iterated-choice context; and any reasonable algorithm will be forced to take account of it.
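This iterated-choice tension is the classic multi-armed-bandit setting; an epsilon-greedy rule is a minimal sketch of "exploit the better-looking actions, but keep exploring". The payoff values and epsilon below are illustrative assumptions:

```python
import random

def epsilon_greedy(true_payoffs, steps=5000, eps=0.05, seed=0):
    """Iterated choice over actions with unknown noisy payoffs.
    With probability eps explore a random action; otherwise exploit
    the action with the best observed average so far. Reinforced
    'good' actions get tried more, which sharpens their estimates
    and can lock the learner onto an action that isn't optimal."""
    rng = random.Random(seed)
    n = len(true_payoffs)
    counts = [0] * n     # times each action was tried
    means = [0.0] * n    # running average reward per action
    for _ in range(steps):
        if rng.random() < eps:
            a = rng.randrange(n)           # explore
        else:
            a = means.index(max(means))    # exploit current best
        reward = true_payoffs[a] + rng.gauss(0, 1)
        counts[a] += 1
        means[a] += (reward - means[a]) / counts[a]
    return counts, means

counts, means = epsilon_greedy([1.0, 1.2, 0.8])
print(counts)  # most pulls concentrate on one action
```

With a small eps, an early lucky streak on an inferior action can dominate the counts for a long stretch, mirroring the lock-in-by-small-events theme of the chapter.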
Strategic Pricing in Markets and Increasing Returns
Overall, we find that producers’ discount rates are crucial in determining whether the market structure is stable or unstable. High discount rates damp the effect of self-reinforcement and lead to a balanced market, while low discount rates enhance it and destabilize the market. Under high discount rates, firms that achieve a large market share quickly lose it again by pricing high to exploit their position for near-term profit. And so, in this case the market stabilizes. Under low discount rates, firms price aggressively as they struggle to lock in a future dominant position; and when the market is close to balanced shares, each drops its price heavily in the hope of reaping future monopoly rents. The result is a strong effort by each firm to “tilt” the market in its favor, and to hold it in an asymmetric position if successful. And so, in this case strategic pricing destabilizes the market
The simple dynamics and stochastic model of market competition analyzed in this paper reveals striking properties. First, positive feedback or self-reinforcement to market share may result in bistable stationary distributions with higher probabilities assigned to asymmetric market shares. The stronger the positive feedback, the lower the probability of passing from the region of relative prevalence of one product to that of the other. Second, when producers can influence purchase probabilities by prices, in the presence of positive feedback, optimal pricing is highly state-dependent. The producers struggle for market shares by lowering prices, especially near pivot states with balanced shares.
What I got out of it
Influential read discussing self-reinforcement, lock-in, increasing returns in knowledge-based economies/industries, path dependence, and more. Extremely applicable for business, investing, economics, learning, and more. A great mental model to have in your toolbox