This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. Please contact email@example.com to use this work in a way not covered by the license.
For more information, read Michigan Publishing's access and usage policy.
Early in my career, a graphic design project began its physical form as a pencil sketch. We drew the large type. We drew lines to represent the small type. We drew all the images. Once the pencil comp (short for “comprehensive drawing”) was approved, we did it again in color using markers instead of pencils. We calculated the best size for the text type and how much space it would take. We sent the typed (i.e., on a typewriter) manuscript to a typesetter with notations indicating typeface, size, leading, etc. The typesetter would send back galleys (by messenger). We’d usually make corrections and send back for new galleys. We would work with photographers or illustrators to produce real versions of the images we’d drawn ... I won’t go on to tell you about color separations, mechanicals, or other technical procedures of the era, as this would require another two or three paragraphs that would divert your attention away from the main thread of this narrative.
There was a lot of physical stuff to do. It’s not surprising that there was a hunger for “how to” information on the part of beginning designers. At least two major magazines—How and Step-by-Step—were founded in 1985 to feed that hunger. Many of us figured the “how” was the easy part; I remember Saul Bass saying that Why would be a more interesting subject for a publication. Some might have pointed out that he was Saul Bass; he already knew the how stuff.
A 1985 issue of Why might still be of use today but the “how to” questions changed quickly even for experienced designers: very few graphic designers in 1985 had ever touched a computer; within a couple of years, a few of us were mainly working on Macs; by the early ’90s, the production of graphic design was pretty much a computer-based activity.
In 1995, the editors of How magazine asked me to write something for their tenth anniversary issue (January 1996) about the next ten years of graphic design. It was titled:
The Crystal Ball
or 2001 A Design Odyssey
by Gunnar Swanson
I started with a nearly-inevitable quote.
“I never make predictions, especially about the future.”
Yogi Berra probably never said that and, if he did, he didn’t say it first. Anything concise, satisfying, and not attributed to Sam Goldwyn is inevitably described as a Berra quote. It set the tone for what I had to say, however. (By the way, one way that writing about design—and writing generally—has changed is that it’s now so much easier to debunk apocryphal quotes that it almost seems lazy not to.)
There’s only one thing we can say for sure about the future: it will be different from now. I’ll try to be a bit more specific, however. If Moore’s Law isn’t repealed and the speed of microprocessors continues to double every 18 months, computer clock speeds will be 32,000 times as fast in ten years. Not only do microprocessors get more powerful all the time, but they also get cheaper. Extend the logic and computers will be incredibly powerful and nearly free. (By now some of you are saying, “Wasn’t this supposed to be about the future of graphic design rather than lame computer predictions?” It’s hard to consider the job of a graphic designer without dealing with technological changes; I’ll get to design and designers in a minute. Other people were thinking “great—a Power Mac for fifteen dollars. Tell me more.” I’ll get to that first.)
A graphic designer ten years from now will be working with equipment that is much less like a Power Mac 9500 than that Power Mac is like a 128K Macintosh. (If you don’t know what a 128K Mac was, ask someone really old.) I don’t mean they will have better displays and more power. I mean they will be fundamentally different. What will they be like specifically? My advice is that you should not believe anyone who claims to know that. Guessing exactly what computers will be in a decade is slightly less likely than guessing the exacta at Hollywood Park. Too many things can affect the outcome.
I was right: I didn’t know. Depending on how you describe things, a Mac G5 was more or less like a Power Mac 9500 and a 9500 was more or less like a 128. A dozen years after my predictions’ “sell by” date, fundamental changes in technology are still debatable.
On the most obvious level a current Mac looks a lot like the original Mac with little pictures we drag around, often accompanied by pull-down menus, and windows. The pictures are much higher resolution, color, and faux 3D, but they are still little pictures we drag around, often accompanied by pull-down menus, and windows. Five years after my prognostications, everything changed under the hood on a Mac, but the screen arrangement is largely the same.
I don’t know precisely what will happen in the next decade for some of the same reasons that I didn’t know in 1996:
Obviously, new technologies are unpredictable, but the perfection and interaction of old technologies are equally hard to know. For instance, right now computer monitors are expensive and bulky not because we lack cheap technology to build good flat screens. It’s the craft of manufacturing them that isn’t up to snuff. Flat screens are expensive because there is so much waste in the process. If somebody improves the craft, we might have cheap displays that can go almost anywhere. And be almost any size. So, computers can be almost everywhere. Like the computers in our cars, coffee pots, and airport toilets, we may not even think of them as computers. Or maybe other technology such as image projection will make today’s flat screens seem like quaint antiques. Ten years ago, fax machines were exotic (despite being fifty-year-old technology). Now they’re ubiquitous. So now every office and many homes have a scanner and a digital printer. What might happen if we begin to tap that potential? We’re just now starting to see a revolution in wireless communications, much of it based on technology that has brought bad music to your dentist’s office for years. How might such technologies combine?
Back to new and/or improved technology. Right now, graphic designers spend a lot of money on computing power. If it becomes cheap to make powerful and specific microprocessors, why wouldn’t Adobe sell you Photoshop and give you the processor that makes it work? Or maybe software won’t be sold. Since typeface designers would make more money if you sent them 2¢ every time anyone sold a job using their typeface than they do now with you paying them $75 and hundreds of others “lending” each other the font, maybe some sort of automatic electronic tracking and payment will mean that the graphic design business will again become what it was ten years ago—one of the cheapest businesses to get into. But would that mean that it will be easier for designers to pay the bills or merely that there will be even more competition as millions more people have access to “our” tools?
Yup. What you’re thinking is right. Like pretty much everyone, I missed the significance of the internet beyond email, chat rooms, and web pages when I yammered about the interaction of technologies. Fans of the sitcom Silicon Valley might point out that I’m now missing the faster, freer, “new internet” that will appear by, I don’t know ... the end of this television season?
Even if the crew at Pied Piper fails again, there will be something else. A lot of people scoffed at Donald Rumsfeld’s “unknown unknowns” but that was the one thing he was right about. The stuff we don’t even think we have to think about will shape our lives as much as the stuff we’re trying so hard to figure out. No wonder Yogi Berra would have said what he’s quoted as saying if only he said half of the things he’s quoted as saying.
So, we’re back to knowing two things: 1) Equipment will be different. 2) Change will accelerate. It’s probably safe to assume the same things about the jobs of equipment users (such as graphic designers): 1) They will be different. 2) Change will accelerate. Just as computers (or whatever we will call them) will be less like our current tools than our current tools are like what we used ten years ago, graphic designers’ jobs will be less like they are now than their current jobs are like a graphic designer’s job ten years ago. (If you don’t know about marker comps, spec’ing type, T-squares and paste-ups, ask someone really old.)
Around the time that How magazine appeared, so did a series of dedicated design computer systems. I still remember seeing an Aesthedes computer demo’d about then. Imagine an archaic Mac with three monitors running an ancient version of Adobe Illustrator with the pull-down menus spread out over a table so you didn’t have to hunt for menus inside of menus. It was amazing. I thought that I might have been able to afford one if I sold my house (but, like an O. Henry story, I wouldn’t have had a place to put it).
I’m glad that the price was out of the question. Within a couple of years, Macs (now-archaic ones) running (now-ancient versions of) Adobe Illustrator and Adobe Photoshop at a tenth of the price relegated it and several other systems to obscurity.
But in the meantime, I was fascinated watching people watch those computer demos. Even when they seemed fairly enthused, almost every graphic designer I knew expressed fear for their professional futures. Every production artist (the people who used the T-squares and did the paste-ups) I knew said “This is so great. I won’t have to ink lines and clean Rapidograph® pens.”
Nobody seemed to react when I noted that everyone had their fears backwards: the designers were going to end up doing essentially the same thing and the production artists were going to be unemployed. (I was mainly right on that.)
If technology is constantly changing, what does that say about the craft of graphic design? Clearly, skills acquisition will have to be more rapid. Given that, there will be many virtuosos but few masters. Since there will be ever more skills to learn, people whose business is based on craft will specialize. Only by concentrating on a narrower range of skills will they be able to keep up. In 1987 I owned every major graphic design related program, had the latest versions, knew how to use them, and still had time left over for actually doing graphic design. Today that would be physically impossible (as well as damned expensive). It’s just another of life’s little contradictions: the technology that has broken down disciplinary boundaries is likely to cause the need for its users to specialize.
I was somewhat right on that. And somewhat wrong. More on that in a bit.
All repetitive action can and will be automated. (Note that certain judgments—most of traditional typography, for instance—are repetitive actions.) As technologies change and skills are automated, those skills will become obsolete. (If you know a bit about design and a lot about html programming, you’re quite employable today.) Don’t count on knowing what <li> means buying you any more in a couple of years than knowing what “[15△ [U& lc bf” means, or where the j goes in a California case, buys you today. Those earning a living in the craft end of graphic design need to become adept at learning new skills. Since each set of skills is destined to become obsolete (and that obsolescence will come faster and faster), those designers will always need to be learning a new set of skills. If they are wise, they will have multiple and unrelated skill sets to hedge against business changes.
One of my skills for many years was the painstaking adjustment of text type. I wasn’t alone in spending hours with Aldus PageMaker and many more hours with QuarkXPress adjusting text line-by-line on a quest to rid the world of lousy word spacing, ugly rags, widows, and the like, and creating half-hanging quotation marks despite what the software wanted.
Every time I talked with anyone working on anything related to artificial intelligence, I tried to convince them that making software “watch” designers could result in a world devoid of lousy word spacing, ugly rags, widows, et al., without requiring so much work on my part. Nobody seemed to react; maybe computer scientists don’t all share my anguish about ugly rags. Over a few years, someone in Mountain View made Adobe InDesign do most of what I wanted without much prodding on my part. I suspect that the vast majority of users don’t even notice that it’s doing it. That’s too bad but really good for those of us who want to free the world of bad word spacing.
Artificial Intelligence as the new change agent?
In 1950, Alan Turing posited a computer engaging in a natural language conversation with a human. If the human couldn’t tell that it wasn’t a person typing at another workstation, we would have reached real artificial intelligence. The Turing Test was the Holy Grail for many years. No wonder: Wouldn’t an interface where your computer was more like Samantha from Her simplify everything?
Malware chatterbots and automated phone calls may be a fulfillment of Turing’s dream but Alan Kay, another pioneer of modern computing, rightly described passing the Turing test as fooling someone, thus “weak tea.” Mistaking a bot for a human typist doesn’t move us very far forward. Mistaking a bot for an actual person running a telephone scam isn’t any better.
The real promise of artificial intelligence for design is having the software watch us without waiting to be directed to word spacing, rags, composition, hierarchy ... Software could embody many of our skills—even those we don’t notice we have. That includes much of what we assume is the taste and judgment unique to the special human beings we call designers. Most taste and judgment are rules—often just too complex for us to note them as rules. Computers are really good with rules. And if the software also watches the users of our design, it may gain some skills that we have failed to develop.
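As a toy illustration of the “taste is rules” point, here is a sketch of one small typographic judgment written down as an explicit rule: flagging a widow, a paragraph whose last line is a single short word. This is entirely my own illustrative code (the function name, the eight-character threshold, and the line-based input are all assumptions for the sketch), not anything from InDesign or any real composition engine:

```python
def is_widow(lines, max_chars=8):
    """Flag a paragraph whose final line is a lone short word.

    `lines` is a paragraph already broken into lines of text;
    `max_chars` is an arbitrary threshold for "short."
    """
    if len(lines) < 2:
        return False  # a one-line paragraph can't have a widow
    last = lines[-1].strip()
    return len(last.split()) == 1 and len(last) <= max_chars

# A paragraph broken into lines, ending in a lone short word:
paragraph = [
    "Most taste and judgment are rules, often just too",
    "complex for us to note that they are",
    "rules.",
]
print(is_widow(paragraph))  # True
```

A real composition engine would weigh dozens of such rules against each other when breaking lines; the point is only that each individual judgment, once noticed, can be stated this plainly.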
Graphic designers, despite sharing a single name, don’t all do the same thing for a living. That will continue and increase. Not only will technical specialization increase the differences, but it will create more need for those who can integrate the work of this specialized Babel. These people may be creators, or they may be visionary buyers and coordinators. (This range exists among those who call themselves art directors today. It can only increase.)
We were commercial artists. Then we were graphic designers. Then we were communication designers or visual communicators. Then some of us became graphic designers again. (And somewhere along the line, the American Institute of Graphic Arts became a single four-letter acronym.) At one time or another, some of us were graphic artists or art directors. (And for a moment, it seemed like some of us were going to be called information architects.) Sometimes that reflected differences in duties. Sometimes it reflected changes in philosophy; sometimes it was just tribal allegiances.
When people fret about the future of graphic design, it’s worth remembering that graphic design is not and never was singular. That’s one of its strengths.
People who can make things make sense will be in demand. (Richard Saul Wurman predicts that, in five years, 20% of graphic designers will call themselves information architects, and that they will do 60% of the work. I won’t vouch for the name or the percentage, but it’s clear that he is right in essence.)
Wurman’s promotion of the title “information architect” wasn’t quite like Gretchen Weiners’ attempts to make “fetch” happen. The name caught on with a slightly different group instead. He was wrong in his guess that graphic design would be vastly more about sense making than about strange making. The ineffable is all the more valuable as attention becomes an even more precious commodity.
Again, there was never a time where everyone with the same title did essentially the same thing. The different centers of gravity of the graphic design universe will continue to pull people in different directions. Whether people will take on names emphasizing aestheticizing, sense making, curating, directing, integrating ... my guess is we’ll still all do a lot of different stuff whether we share a name or not.
Some of these generalists are likely to work coordinating large projects comprising many media. Some may move from role to role as projects come up, relying on technical specialists as needed. They are all going to need a wider education than we normally associate with graphic designers. Those designers will need an understanding of culture and media, engineering, marketing, and writing. They will also need knowledge of the fields they are designing for. They will have to be able to synthesize the best of many fields and apply ideas widely. While they may not be the specialists that do the tasks, they will have to know something about editing both writing and videotape. They’ll have to understand developing a product and selling it. (Supervision of process—the glorified machine tending we spend much of our time at—is not the basis for the work of this kind of designer. As equipment is improved and standardized, press checks and their electronic equivalents will not be a source of many billable hours. I’m talking about understanding more than technical knowledge. I’m especially talking about the ability to communicate.)
(“Videotape”? How quaint.)
In large web projects, some graphic designers have come to be called information architects but so have some programmers and some librarians. At some point, you might choose to declare that an erstwhile graphic designer is doing something different than graphic design, but graphic design sub-specialties don’t all reside neatly in the realm of traditional graphic design activities.
For instance, “infographics” produced by people who know graphics but don’t really understand what is important about the info are common. (That’s one of the things Wurman hoped to correct.) It is clear that most graphic designers don’t understand enough or care enough about the value of statistical material to present it in a usable manner. On the other hand, many people with a better understanding of statistical material don’t understand the basics of visual communication. Especially as the tools of graphic design are embedded with some of the skills of graphic design, a dearth of official designers might not be a disaster for the world.
Big projects (including websites) often reveal tension between the aesthetic and communicative penchants of graphic designers, the technical interest of coders, and the information and/or persuasion desires of content folks (marketers, scientists, medical personnel, political organizers, etc.). At the very least, each is more effective with an understanding of the others’ work. Someone with broad understanding of each is in a better position to end up leading projects. It seems likely that the preferences of the team leaders will prevail at least in small ways. (The prayer to St. Venn: “Please let me be the center of the chart.”)
It’s worth noting that advice for graphic design education often comes down to the advisor telling how new designers can most efficiently become just like them. Coders think everyone should write code because, after all, it’s a skill that’s so useful to coders.
God didn’t decree a current division of roles—not that people fit into the putative divisions. There’s no reason that someone can’t love aesthetics and communication plus antique motorcycles or public health issues surrounding toxic chemicals or some other subject matter. I tell my students that the most important skill in graphic design is the one thing I don’t know how to teach: how to be interested in anything and everything.
My friend Joel Sweatte teaches data analysis to computer science students. He uses a TED talk by Hans Rosling about world health as an example of effective presentation of data but notes that it’s good because Rosling really knows his subject. There are probably better data analysts, but they may not communicate their findings as well because they don’t have the same understanding of what the data mean. Joel’s students come from a few years of coding. They want to write code. They are computer scientists, not social scientists, medical scientists, or such. They can’t be expected to be good at something else and our computer science program probably doesn’t want them to be mediocre computer scientists and good at something else.
If designers do not learn about other parts of the world, someone else will learn some of the things that designers know and will provide the needed synthesis and coordination. That means engineers and marketers will be in charge of what we now think of as graphic design. I don’t know who, if anyone, will be called a graphic designer, but those who call themselves that now would be relegated to being computer operators. (If we call them computers.)
As most of what used to be graphic design becomes accessible to more people, graphic designers (at least the bulk of those currently called graphic designers) will continue to become less and less important. Paradoxically, as more people become aware of the power of type and image, graphic design will become more and more important. Graphic design will become a legitimate subject for criticism in the way that film and music are today. (Well, I hope graphic design critics do better than most film and music critics.)
If I’m right, it means most of the work traditionally done by graphic designers will be low paid and held in low regard. New technical specialties will come and go. Those who are quick and, like Wayne Gretzky claims to do, “skate to where the puck will be” will do well. Many others will not. A few others will resist the lure of design as a technical pursuit and will do whatever needs to be done to extend communication. Those few will be very influential.
The Gretzky quote is proof that I wasn’t exactly “The Great One” of prognostication. If I was asking people to intuit industry changes and respond to a nearly-psychic sense of complex actions, taking my advice might have left many people skating to where the puck might have been. If I was merely offering a reminder to act on our best guesses about the near future rather than our assessments of the present, then, sure.
Design educators often face calls from potential employers of their students about skills they should be fostering. The requests are usually for junior-designer skills, including whatever technique seems vital this week. The employers are unlikely to say “teach them to deal with (or invent) the future” because they want to hire graduates right now and to make money this month. Even though nobody ever got a third job without having gotten a first job, the puck isn’t going to be wherever it was the day they handed out diplomas, let alone where it was four years before.
What is the future of graphic design? Some of it will be great. Some of it won’t. Some of it won’t be called graphic design anymore. Some new stuff will be called graphic design.
If you think I’m telling you that tomorrow will be like today only more so, you’re probably right. That’s the trouble with writing about the future—nobody I know has been there. Of course, if I really knew what was going to happen to graphic design and graphic designers in ten years, I’d figure out how to say it very slowly and I’d charge Pentagram $750 per hour to tell it to them.
I’d tell you for free; you look like you can keep a secret. But the offer to Pentagram will have to be adjusted for inflation. If I have to make firm declarations, I’ll stick with tomorrow being like today only more so. I’m not talking about how early 1980s movies like Blade Runner had already defined the future as looking like Los Angeles and Tokyo piled on top of each other.
Author William Gibson of “cyberpunk” fame (Neuromancer, The Peripheral) is quoted as saying that the future is here; it’s just not evenly distributed. In the future, the future will still not be evenly distributed. That’s not all bad. Monocultures are not sustainable. Having graphic design practices pulled hardest by the different centers of gravity keeps the future of design safe.
Gunnar remembers what “[15 ▵ [U& lc bf ” means but always has to peek to remember where to put the j in a California case.
My original essay didn’t really end like that. Despite my notes explaining how to create an outline triangle, the magazine’s production staff screwed up and rendered it as a claim that I knew what “[15 ▴ [U& lc bf” meant. But I didn’t. I’m sure that many of you know the little triangle we used to scribble to designate column width as “Unicode Character ‘White Up-Pointing Triangle’ (U+25B3).” Some of you may be old school enough to think of it as a Zapf Dingbats s with a stroke and no fill. To some of us, it still means picas.
When How put out its first issue, my little sample of type spec’ing meant that the flush left/rag right upper and lower case bold type should set to a maximum width of about 2.49 inches, but by the time of their 10th anniversary, it would have meant exactly 2.5 inches (or would have if anyone still spec’d type that way). Today, letterpress has revived enough that you’re more likely to find graphic designers who hand set lead type than ones who write arcane notes to a typesetter, so they’re likely to know their way around a type case. In their small corner of the graphic design industry, 15 picas still add up to about 2.49 inches.
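The arithmetic behind those two numbers is simple enough to sketch. The constants below are the commonly cited values — a traditional American pica of about 0.16604 inch versus the desktop-publishing pica that Adobe standardized at exactly 1/6 inch — and the function name is my own:

```python
# Convert a measure in picas to inches under two conventions:
# the traditional American printer's pica (~0.16604 in) and the
# PostScript/desktop-publishing pica (exactly 1/6 in).

TRADITIONAL_PICA_IN = 0.16604  # commonly cited pre-digital value
DTP_PICA_IN = 1 / 6            # the pica desktop publishing settled on

def picas_to_inches(picas, pica_in_inches):
    """Multiply a pica count by the chosen pica length in inches."""
    return picas * pica_in_inches

# The 15-pica column width from the spec notation:
print(round(picas_to_inches(15, TRADITIONAL_PICA_IN), 2))  # 2.49
print(picas_to_inches(15, DTP_PICA_IN))                    # 2.5
```

Which is why the same scribbled 15 meant "about 2.49 inches" to a 1985 typesetter and "exactly 2.5 inches" to anyone measuring in the digital pica.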
As for me, I still remember what “[15 ▵ [U& lc bf ” meant, and would have to peek to remember where to put any sort in a California case.
- Kay, A. “The Computer Revolution Hasn't Happened Yet.” Paper presented at Interfacing Knowledge: New Paradigms for Computing in the Humanities, Arts and Social Sciences at the University of California Santa Barbara, CA, USA, March 8, 2002. Also available in Herczeg M., Prinz W., Oberquelle H. (eds) Mensch & Computer 2002. Berichte des German Chapter of the ACM, 56. Wiesbaden, Germany: Vieweg+Teubner Verlag, pgs. 31–35.
- Rosling, H. “The Best Stats You’ve Ever Seen,” TED: Ideas Worth Spreading, 11 February 2006. Online. Available at: https://www.ted.com/talks/hans_rosling_shows_the_best_stats_you_ve_ever_seen (Accessed February 6, 2019).
- Swanson, G. “The Crystal Ball or 2001 A Design Odyssey.” How (January 1996).
Gunnar Swanson is a graphic designer, educator, and design writer, currently living in Greenville, North Carolina. He is a professor of graphic design in the School of Art & Design at East Carolina University. He has taught graphic design history at Loyola Marymount University, was the director of the multimedia program at California Lutheran University, taught design and design history at the University of California Davis, headed the graphic design program at the University of Minnesota Duluth, and taught design, design history, and computer illustration in the Los Angeles area.
Professor Swanson has over thirty years of professional experience. His work has been honored with over 100 awards and publications. Dozens of his articles about graphic design have appeared in the academic and trade press and his essays have been reproduced in several anthologies of graphic design writing. He was the editor of the Allworth Press book Graphic Design & Reading, the co-editor of Virginia Commonwealth University’s Zed3 and has been invited to speak about graphic design and design education in the US, Australia, Canada, and England.
Before beginning his design career, he worked testing scuba equipment and teaching diving, and as a prop maker, stagehand, and carpenter.
He owns five bicycles and a hybrid car. He is old enough to remember people routinely asking “What's your sign?” and perhaps too old to remember why.