The Inhuman Future of Digital Reading
This work is protected by copyright and may be linked to without seeking permission. Permission must be received for subsequent distribution in print or electronically. Please contact [email protected] for more information.
Abstract
Digital books and other publications are being consumed not only by linear visual reading a la paper, but also (increasingly, mostly) in a wide variety of other ways, by software agents as well as software-assisted individuals: aural consumption, on-the-fly translation, automated processing for discovery, search, summarization, topic analysis, remixing, and other uses. While eBook websites and apps are designed for human users, the underlying content they traffic in must be structured appropriately for these functions to be enabled, and for content to flow across systems and platforms. This talk will cover challenges and emerging solutions for representing publication-level content as interoperable machine-processable data that also facilitates delivering rich, interactive experiences.
Transcription
I'm Bill McCoy. I'm the Executive Director of the IDPF, the International Digital Publishing Forum, the organization that develops the EPUB standard, and I'm the president, at least the interim president, of the Readium Foundation, a new open source foundation. However, I'm not really here to talk about either EPUB or Readium. I guess I can't help myself; I probably will somewhere before the end. I'm really here to take a step back and, at some risk, try to make some higher-level observations, or really do some higher-level rabble-raising. Let's rabble away.
So how many people here are familiar with this quote? Raise your hand if you know who said this. Oh, you guys don't. Well, these are the opening words of a book published a couple of years ago by a guy you may know, Jaron Lanier, who actually lives here in San Francisco, called You Are Not a Gadget. Now you may recognize it, because he proceeds for the rest of the book to lament this fact. His book is pretty much about all the bad things that come about as a result of this, like a few things we've heard today: potential loss of privacy, analytics of what you are doing, the potential for spam books, etc., etc. However, when I read this book, pretty recently, I was jazzed. I was like, "Yeah! That's kinda cool!" It made me think that, in a certain way, the most salient thing that's different about digital content and digital books is that they turn the content into data, so it can be reused and things can be done with it in different ways, downstream from the original creation, which historically wasn't true.
So, I actually took kind of a positive message away from this. Because it's not just about making ebooks as artifacts to throw out into the supply chain; it's about remixing. It's about search and discovery. This is where I'd be trotting out Borges' infinite labyrinth of books, but I have a commitment never to reuse any slides, and I've spoken here three years in a row. This is the fourth year, so I'm sure I've used that slide. But there's an infinity of content, so if we as humans are going to be able to access the content we need for our entertainment, for our research, for our education, we need some way to find it. You need some computer intelligence to help you sort through that labyrinth of content, as well as for the remixing and collaboration and, last but not least, for accessibility, for making that content accessible to you. Because not everyone has the eyes to read it in person. So, I actually thought that was kind of good.
Now, we've heard a little bit about this already these last couple of days, so you know who said this: Marshall McLuhan. I think we get a little confused, though, about the combination of content and context. Of course there are times when those things are very combined, or inherently combined, but there are times when they are not. As we also heard, everyone who first read Dickens' novels read them a week or a month at a time, in serialized form in magazines. That doesn't mean they aren't novels. That doesn't mean it's a completely different experience to read them a chapter a week versus from a bound book on your bedside table. I think we can say that a story is a story, and while some aspects of it change, the content of it, you might say the platonic ideal of the story, actually doesn't change. And again, I don't think we would suggest that a blind person can't read a book. They can't physically read a printed book, but they can certainly read stories. And, after all, this was written by him forty years ago, long before there was anything but very gross, coarse-grained ideas of media. He was talking about movies vs. books in this, not print books vs. digital books or paperbacks vs. hardbacks. Since he is no longer alive, he passed away in 1980, I can say, without fear of him walking onto the stage, like that scene in Woody Allen, to say that I know nothing of his work, that he wouldn't have said that the medium of a paperback book and the medium of a hardback book give two completely different messages. And in fact, he clearly understood that content and context were separate. In his book, he talks about how a light bulb is a medium with no content; it is his canonical example of a medium without content. So, he understood that content could be, and often was, separable in some respects from medium.
So, does that mean that bookish content is just big data, then? Little fragments, things to be mixed in and sliced up? I don't think so. Luckily, Bob Glushko talked before me, because obviously I can't tell you anything about structure and order and metadata and all that kind of stuff; you should just go get his book. But a book is a complex arrangement of content that has a linear order and other higher-order aspects as well. And so, no, in fact the salient thing about a book is that it isn't big data. It isn't reducible to little granular rows and columns in a database any more than our DNA is reducible to base pairs all by themselves. What makes us individuals is how they relate to each other and what that order generates, and that's true of books, too.
In fact, what makes books hard, and they're harder the farther they get from plain text like a novel, is that the structure gets more complicated, so you have this chunky information. You have, maybe, big square data. Now, if anyone wants to start a big square data company and offer me some founder stock, you're welcome, because I've never heard that term before. A collection of books, books in the aggregate, is big data, but the chunks are big. That's the challenge we have: to create some way to transport the big part, the chunky part, of that data, and have it be meaningful as it gets communicated. Because the risk we take, if all we communicate is, say, a PDF or a bitmap image, is that we're giving a Turing test to anything we want to help us process that data. We're giving a captcha, or pages and pages of captchas, to the computer assistants that are supposed to help us. Now, computers are getting better and better at that, so there may come a time when, just as we saw demos of automatically captioning video with audio, the processing of this kind of chunky data gets better, but it's going to be a while, if the Google Voice transcripts in my inbox are anything to judge by.
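To make the contrast concrete, here is a minimal sketch (not from the talk; the XHTML snippet and chapter titles are invented for illustration) of why structured markup sidesteps the "captcha" problem: when content arrives as tagged data, a few lines of code can pull out its structure, for example a table of contents, with no vision, OCR, or guesswork at all, which is exactly what a PDF scan or bitmap denies to software.

```python
import xml.etree.ElementTree as ET

# A tiny, hypothetical EPUB-style XHTML chapter file. The structure
# (sections, headings, paragraphs) is explicit in the markup.
chapter = """<html xmlns="http://www.w3.org/1999/xhtml">
  <body>
    <section>
      <h1>Chapter 1</h1>
      <p>It was the best of times...</p>
    </section>
    <section>
      <h1>Chapter 2</h1>
      <p>It was the worst of times...</p>
    </section>
  </body>
</html>"""

NS = "{http://www.w3.org/1999/xhtml}"
root = ET.fromstring(chapter)

# Build a table of contents purely from the markup: a software agent
# (search indexer, screen reader, summarizer) can navigate this directly.
toc = [h1.text for h1 in root.iter(NS + "h1")]
print(toc)  # ['Chapter 1', 'Chapter 2']
```

The same information locked inside a bitmap of the page would require image recognition just to recover what the markup states outright.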
And not all pages are reducible. I promised to try to embarrass Adam Witwer, who is here, so I'm going to do my best. This is an O'Reilly book page. Now, this was not done in DocBook XML, and it will be some time before it can be done in a system like Atlas. I made it kind of small, mainly so you guys wouldn't get to see all the details. Not only is it a complex, crazy infographic of a two-page book spread, this is from Head First Java, but in the middle of it are two other pages, so it's recursively referencing other, also complicated, pages. I haven't been able to check whether those pages in turn reference still further pages; I don't think so, but it could be even more recursive. You can see that taking the structure out of this is no easy task. This is not just a case of supplemental content. It's a case where the organization of the content is inherently very, very complex, so it's just something we need to deal with.
And accessibility requires that we make the content machine processable. There is no other way to make content that can adapt to different people's needs, and not just the needs of people who are blind or deaf or otherwise have disabilities: you and I may have different preferences on a phone like this vs. on a big screen, or when we get off a flight, or when we get a little older, as some of us have. We need our content in different ways. So that is one of the key things that makes thinking about content somewhat separately from context interesting and necessary.
As well, to really make this fly, the content has to be transportable. It needs to be interoperable. You can have an accessible iOS app. I mean, there are accessibility hooks in the platform, and it's possible to make an accessible iOS app, but that iOS app is still a little silo. It can't be an emissary of your content to go somewhere else. So you need two things: accessibility and interoperability. Forget about standards, and forget about open source. If you can find a way to make your content accessible and interoperable, I think your work is done. I think those are the two key things.
But what does this interoperability thing really mean? I'll quote some guys from Harvard, John Palfrey and Urs Gasser, who are experts in the field of interoperability (I didn't even know it was a field): "Higher levels of interoperability tend to increase competition and foster innovation." So that's a good thing. That's from a book they wrote that I would highly recommend to you, which dives into some topics I won't have time to talk about today, like DRM. But it's interesting to think, "Alright, so interop's a good thing."
But there's something that gets in the way of that, and it's what I call "The Platform Game." The software landscape we work in, in IT, is a mosaic of ecosystems, aka platforms, specialized for different purposes. And IT firms of all sizes often seek to establish and control platforms. Platforms have this nice property, nice if you're their provider, of "lock-in." There are switching costs, which means once somebody's in, it's costly for them to get out. There are network effects: the more people that come in, the more valuable it is, and the more people want to come in. And there are often barriers to entry by competitors. Those three corners of this triangular fortress of a platform make it a very valuable thing for anyone who can secure it.
So, just to give you a random example of someone who sells some platforms: this is Mr. Bezos, and his company has at least a handful of different, important platforms. Their first platform, chronologically, was e-commerce, not just selling stuff on the internet, but making their e-commerce infrastructure available for others to sell through. Then cloud computing was generalized out of that; Amazon Web Services was the next thing they did, and they're the leader in both of these fields. Video-on-demand was actually the next thing they launched. They launched that and music-on-demand before they launched the Kindle. And, of course, they launched eBooks in 2007, and they've now generalized what was originally a dedicated device business into a more generic mobile device platform, complete with its own app store.
So, there are five different major platforms that Jeff has at his disposal, and one thing he gets to do, like any good platform vendor with multiple platforms, is exploit the ones that are entrenched to advance the others. That's called "tying." Now, that sounds like it should be illegal, and, well, it sometimes is. Microsoft is the famous example: leveraging Windows to promote Office and Internet Explorer, ultimately in violation of antitrust law. Apple's iTunes and iPod originally leveraged OS X; iTunes was available on OS X for two years before they ported it to Windows. Do you think anybody running a business selling music back then, when Windows had 97% market share of desktops, would have said, "Let's make a music store and not make it work on Windows"? No. The only rational reason to do that was to promote their platform. And more recently, of course, it's tied to iOS and various others there.
Amazon video and eBooks are tied to their Amazon Prime e-commerce. They give away books and music, premium content, to customers of Amazon Prime to incent them to stay customers. Again, it wouldn't be a rational thing to do if you were just in the business of maximizing your revenue on video and ebooks, but it's a perfectly reasonable thing to do if the idea is to get lock-in to a commerce platform, which they are. And they're also using that platform to promote tablets: you can get Amazon Instant Video, if you're a Prime member, on an Amazon tablet but not on a Google Android tablet.
Now, platforms can be open or closed. It happens that all the ones I've just spoken about are, coincidentally, relatively closed. Actually, you could say that AWS (Amazon Web Services) is pseudo-open; we'll get to that in a minute. Open platforms are interoperable: PC hardware, fax, the web. Closed platforms limit interoperability: Mac hardware, iTunes, iPod, iOS. And there's a semi-open state: PC operating systems. Anybody could make a PC, but Microsoft was the only one who could sell the operating system, although there were "clones." There were similarly clones of PostScript and PDF from my former employer, Adobe, but those formats were really developed and controlled solely by that vendor. Amazon AWS is in that category.
By the way, when somebody has a web-based platform, and all five of Amazon's platforms are built on the web in one way or another, a lot of start-ups will tell you, "Oh, we're open. We have an API." Well, that doesn't really make you open. That at least makes you semi-open, but having an API that you control, that you define, that other people have to adhere to, and that you can turn on and off at will, as we've seen with Twitter and Facebook and others, doesn't make you open.
So, open standards are a key thing that facilitates a platform being interoperable, a real open platform. They lower the barriers to entry. They lower the switching costs between providers of platform components. And Clayton Christensen, another Harvard guy, talks in The Innovator's Solution about the fact that integrated proprietary architectures generally lose in the long run to modular architectures, because they cannot generate the diversity of competition that an open modular architecture provides. Now, he was widely criticized for being totally wrong, with iOS as a counter-example, because Apple dominates with iPhone and iPad. But now, with Android having the global market share leadership in both phones and tablets, he's not looking so wrong, because it's a more open and modular architecture, even if it's not 100% open.
And, the network effects in an open platform, an interoperable platform, accrue to the benefit of all adopters. That's the key thing. An open architecture, an open platform, a generative architecture as Jonathan Zittrain would say, is something that creates benefit to all. An open platform is like a rainforest. Many people can carve out a niche. Many species can carve out a niche in that rainforest.
What is an open standard, though? We talked about platforms, but standards, well, it's debatable, like all these terms, but to me it's a specification that is publicly available. It's freely implementable; again, not always, as there can be patent encumbrances. It typically leads to open source implementations (I spelled that wrong on the slide), but not necessarily. And it's often, but not always, collaboratively developed via a transparent process. If it's not collaboratively developed, it might be an open-standard mock, a pseudo open standard that's really a vendor's proprietary endeavor, which, again, is what things like PDF started out as.
Well, how do open standards get established? Partly through specs, partly through available implementations, and partly, I think mainly, through market forces that reward adopters and punish non-adopters and deviators. Now we're delving into my own theory here. I have no evidence to back this up, but as I see it, from my historical examples of being an old guy (I'm going to start with some really long-ago examples): PC hardware didn't have any spec. Some people just decided to clone the IBM PC. And it didn't really have any implementation sharing to start with. A few guys started selling something called a BIOS, the firmware, but it was really reverse engineered and not commonly implemented. But the market forces strongly rewarded compatibility. There used to be a class of device called "90% PC compatible." Now it seems ridiculous. Who would buy that? But the 100% PC compatibles killed them so fast that the category just faded away.
Fax is another example. It's an open standard, but as someone who once upon a time had to implement a combination fax/PostScript printer, not Adobe's greatest endeavor ever, it was incompletely specified. The standard was insufficient to know how you had to implement it to make it work. And there was no implementation sharing; you really had to build on top of very low-level modem stuff to make fax work. But the good news was, so to speak, that if you didn't make your device completely compatible with whatever random fax machines were out there, nobody would buy your product. So everyone building a fax machine, through no standards body, through no open source activity, but through sheer market pressure, had to have a room full of everybody else's fax machines that they tested against, and so there was a pretty high standard of compatibility, strictly from market forces. Because everyone had random fax machines, deviation wasn't rewarded either. It was a little bit, but it was hard to deviate in the fax world. So we had a pretty strong standard here, with a very limited open specification and limited open sharing.
Well, let's take the web as an example, and this is my last example today. The web almost died. For those of you who were doing web stuff back in the dark ages of 2004, Microsoft's IE 5 and IE 6 had, collectively, 95% of the market share. Wow. The standard thing back then was to make an IE-only site. You might bother to make another version, but in many cases you wouldn't. And there are still some vestiges of that lurking around today in the odd nooks and crannies of things like, probably, healthcare.gov and who knows what else. But that was the world. So we really could have all given up and said, "OK, Microsoft did an embrace-and-extend move on the web. They won. It's over. It's their standard now; let's go home." Or, "Let's all use Flash," which was the other alternative starting to come up at that same time.
But if you look at what has happened since, it's a healthy, diverse market, where four different browsers have sizeable market share, and therefore it is much less appealing to code for just one of them, which means it really is much more of an open platform. So what could possibly have happened in the last nine or ten years that changed it from that monoculture to an open ecosystem? Well, nobody really knows for sure, but I have some theories. HTML5, of course, was part of it. We've gotten to this good state of implementation of a new standard. But that raises the question of why people implemented the standard. It wasn't just that we developed a spec, so everyone implemented it, and now it's fine. I think that gets it exactly backwards. The spec doesn't matter, and I hate to say that as somebody who peddles specs. But what matters, and what led to this nice green diagram, which unfortunately stands in contrast to the current diagram of EPUB 3 support that Peter Haasz from OverDrive showed yesterday from the BISG grid, what led to this over the last nine years (admittedly, HTML5 has had a little more time to fill in the green) was a few things.
Legal remedies could be claimed to have done it, but I don't think that's the case. Other vendors were incented to collaborate to fight the market leader. The market leader had other platform battles to fight. And the community, not just the browser vendors, refused to let the web die and engaged to make HTML5 happen. That's really my message to you today. The community didn't sit back and let the browser vendors hash it out. People like Mark Pilgrim, Sam Ruby, and many, many others, perhaps some of you even in this room, stepped up and said, "We need an open web, and we're going to make it happen."
Well, are we going to do that for books or not? I think that, on the web, we need a next generation of portable documents. We need not just books in browsers; we need connected publications for the open web platform. Online-only access to browser apps is not enough. Websites aren't designed for indirect distribution channels. Publications need to be reliably archivable. We need accessibility, and that's hard to do through arbitrary websites at the whole-publication level; you saw what Gerardo showed there. We need to decouple content creation and content delivery workflows. If you want to make a programmer your co-author on every project, I hope to hell you have deep pockets; it just doesn't scale. For some content, it makes sense to have the programmer as a co-author, but in most cases, I think it makes more sense for you to take someone like Aerbook and let their smart programmers be your tool and service developers. And books need to be machine processable, in their full, big square data glory. And last, but not least, the platform for connected books needs to be open and interoperable, not vendor controlled. If we don't step up, and Kindle books become what ebooks are, then I think we're going to regret it down the road. I'm about to finish, so I'm going to skip past these couple of slides that I don't have time to talk about. If we get a monoculture, we're going to end up with a sterile desert. Instead, let's make an ecosystem where we can all thrive.
Thank you.