The Great Web Design Crisis of 2017

Beginning in 1993 and several times each decade since, the interactive industry’s reigning crop of web creators has faced new challenges that required concerted design responses to overcome. Some of these challenges have been the result of advances in codebases and web standards, changes to hardware, economic shakeouts and new business trends. And with each challenge the industry responded decisively. But this year web design faces a new kind of challenge, one that we have never confronted, and one we are failing to overcome. Not the result of external forces, this is a monster from within, ironically ushered in by the very designers and developers who are subject to it. On the surface we can see only symptoms: an industry-wide homogenization of web design, accompanied by a sharp decline in the innovation of new interactive conventions. And while those critical failures would be bad enough, the underlying cause is complicated and runs much deeper. The real crisis is that our entire state-of-the-art web design methodology, our roles and teams, and even our qualitative values are the product of a misunderstanding.

 

 

Narrowing The Cause

Despite now providing access to countless, wide-ranging categories of content, products and services, today’s websites are aesthetically and functionally blending, becoming indistinguishable from one another save for a logo in the topmost banner. More and more, the brands that occupy these sites are losing their identities in a sea of sameness.

Further, in a medium whose defining attribute is interactivity, and with the technology never more advanced or capable of handling challenges, designers seem, for the most part, to have all but abandoned the pursuit of new, improved interactive models, settling instead into a non-confrontational, follow-the-leader approach to web design.

I reject the claim that the pursuit of theoretically optimal usability releases us from the strategic need to notably differentiate and innovate. There is not one absolute way things should look, communicate and behave on the web any more than there is one absolute in architecture, interior or industrial design. Great design has always included a quotient of subjectivity.

To which one might then swoop in with the oft-quoted web-design hammer, “Yeah but it’s been scientifically proven that users prefer generic, prototypical designs. Generic interfaces have been shown to convert better.”

Yes. That’s true. At least that is until it is measured against something that converts better than a prototypical design, at which point the opposite will have been scientifically proven.

Which raises the question: did you stop to wonder how that original prototypical design ever established itself in users’ minds in the first place?

It exists because its parts were innovated. They began life as disruptions. Non-sequiturs. In essence, risks. And that’s something web designers, and the companies they serve, don’t appear to have the guts to do much in 2016; instead, they take short-term refuge in the safety of the status quo, confidently back-pedaling into the historic innovations of braver others. Surely you can see that the meme “users prefer generic interfaces” might also be regarded as the convenient mantra of a designer who takes the passive route to short-term profiteering.

Finally, you may be thinking, “Oh come on, any of us could break out of any design trend tomorrow if we so chose. We’ve done it before.”

Actually we haven’t. This is not merely some aesthetic design trend. It’s not some fashionable phase that can change with taste. The root causes of this are intertwined with our very state-of-the-art thinking. To solve this problem we must dismantle many of our current best ideas, a contradiction that results in a defense of the status quo. It would appear that we are facing systemic incentives not to fix this.

 

 

Hot Zone: The Web Design Ecosystem

It bears noting that everything I will describe happens within an ecosystem that is almost perfectly engineered to focus and amplify the problems. For example, near-universal access to computing platforms has enabled more people than ever before in history to lay claim to Designer and Developer roles.
Ironically, this also happens to be a period of “Flat” design, which is characterized by a minimum of affordances and fewer discrete design elements than ever. So these precious few design elements are being endlessly, incrementally adjusted – on a massive, global scale.
The narrow result of this spray of minutiae is then shared massively on sites like Dribbble, Behance, GitHub, CodePen and dozens of other design/developer communities, which allow for favoriting and sorting of a select few common pieces of work. The top minority of these are in turn re-re-re-referenced ad nauseam and co-opted more widely and freely than ever before.

Sorry, gimme a second while I fill my lungs again…

So of course everything looks the same, for Christ’s sake! This is the systemic equivalent of a perfect storm; a made-to-order, global, make-everything-be-exactly-the-same machine. A worldwide design purification filter. If you wanted any category of design to fall into complete homogeneity, you couldn’t do much better than to set up the exact ecosystem above. Indeed, such a complaint has been voiced before.

There is strength in numbers, confidence in the choices of the mob. And the majority of web designers are falling into that trap.

Despite the obviousness of this, it’s far from the worst offender. The problem cuts a lot deeper.

 

 

Patient Zero: UX

And lo, the Internet bubble burst and out of the ashes came the single most valuable invention to hit the medium since its birth: the User Experience Design discipline (UX).

If there had been a general irrational exuberance and a lack of due diligence on the web before the bubble, there was an equally irrational fear of the medium immediately following it. It was the fear-based era of rote “Web 2.0” utilitarianism and functionality. It was before Apple had proven the value of great design to even the CFOs of the world, when aesthetics were still regarded as largely gratuitous endeavors.


All cures contain side effects.

The UX design discipline evolved out of this period of fear and uncertainty. Following years of unfettered, chaotic experimentation, exploration and a willingness (and sometimes a likelihood) to fail, UX stepped in and rationally culled and refined the best practices (of that era) and established a sensible process whereby optimal performance was achieved through an ongoing cycle of testing, analysis and incremental revision. Today it is the scientific process that dependably leads to incrementally optimized, defensible results.

In some ways, those of us who have worked in this medium, who have lived through decades of recurring skepticism and doubt about the value of design, are thrilled to finally have such an empirical, validating hammer. The relevant impact of this development is that, for the first time in the history of the interactive industry, the web design discipline is able to systematically validate its own effort. To prove the ROI of design. That’s a heady maturity that is still new in this industry.
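To make that validating hammer concrete, here is roughly the shape of the arithmetic behind a claim like “the new design converts better.” This is a minimal TypeScript sketch with invented names and numbers, not any particular team’s analytics stack, and a real test would also check sample size and statistical significance before declaring a winner.

```typescript
// A minimal sketch of design "validation": compare conversion rates for a
// control design (A) and a variant (B). All data here is hypothetical.
interface VariantStats {
  name: string;
  visitors: number;
  conversions: number;
}

const conversionRate = (v: VariantStats): number => v.conversions / v.visitors;

function reportLift(control: VariantStats, variant: VariantStats): string {
  const lift =
    (conversionRate(variant) - conversionRate(control)) / conversionRate(control);
  return `${variant.name} vs ${control.name}: ${(lift * 100).toFixed(1)}% lift in conversion`;
}

// Example run with made-up numbers.
console.log(
  reportLift(
    { name: "prototypical control", visitors: 10000, conversions: 320 },
    { name: "experimental variant", visitors: 10000, conversions: 355 },
  ),
);
```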

So far so good.

But the accountability and newfound reassurance that come from the ongoing, incremental effectiveness of the UX process have led most web teams to go so far as to promote UX designers to project leadership roles. You can understand, of course, why UX designers have been tapped to lead, since the whole of this discipline is effectively accountable both for strategic design and for tracking success. Makes sense if you want to ensure projects stay focused on addressing the business and end up proving the ROI of the effort. What’s more, in countless organizations the UX discipline itself has further come to oversee the web design and development process at large.

On the other hand, our promotion of that sensible, responsible oversight has resulted in several unexpected, debilitating side effects.

 

UX Side Effect 1: The Fracturing of Design

One of the principal ways UX has unintentionally undermined innovation is that it has caused a fracture down the middle of the design process; a fracture that starts with the people and their roles.

UX, focusing on the translation from business strategy to web functionality, tends to attract and reward highly analytical people; more the “responsible scientists” than the “non-linear artists” among us, and ultimately accountable for the results of a project. These are people who can articulate data, visualize, organize and document information, and manage a process. I’m not suggesting that such a sensibility is the wrong one to manage those rational responsibilities. However, by placing UX in project leadership roles we are facing an unintended outcome: the “down breeding” of a second, unfortunate sub-species of designer whose sole focus is merely the UX design leftovers. The scraps. Specifically, aesthetics.

What pesky stuff, that.

The area of focus of this aesthetically-sensitive design species is no longer on the overall emotional, expressive, dramatic experience of Interactive Theater, but on the appearance of the graphic layer alone. As such, these designers have largely been relegated to colorers, or house painters, if you will.

In an attempt to draw a clear distinction between them, we call this secondary role a UI (user interface) Designer. In reality, what the majority of today’s UI Designers focus on is not really the whole of the UI, but the “GUI” (graphical user interface). And even the title “GUI Designer” may be too sweeping, since today’s UX Lead has already, presumably, decided exactly what this interface will do, what components it includes, and generally how it will move and behave. UI Designers do not so much design the interface as design how it looks.

Let’s take a moment here – because this is huge.


When we innocently split (sorry, “specialized”) UX and UI design, we unintentionally peeled the whole of great design in two. More importantly, we created a stark imbalance of power and priority between design’s yin and yang, which should always be in equal balance if truly great interactive design is the intent. We removed from the origination of interactive theater the influence of the unexpected, emotional, improvisational performer’s sensibility that comes from an artist’s muse and mindset. Which is too bad, because these are the things that disrupt, that result in innovation, and that delight users. The things that users would otherwise remember.

So is it really any wonder that 90% of the websites you visit today all look the same? That the apparent “ideal” aesthetic approach is the ubiquitously coveted Flat design, which is itself merely the direct extension of UX’s own wireframes, the flattest design of all? That they all share some variation of the same, tired parallax scrolling effect that pre-dates widespread UX leadership? I’ve been in rooms where the question was asked, “Where can I find other effects and transitions?”

Me: (Stares. Blinks) “What, seriously? Well… uh, I don’t mean to be a dick, but that’s what your imagination is for. Invent it! Good lord, this isn’t a #%$@ing IKEA bookshelf!”

Today, most sites lack creative invention and inspiration. Oh sure, we love to point out how this page-load animation is timed slightly differently than that page-load animation, or how the scrolling effect is nuanced in some way, but it’s all the same. And part of the reason is that we have removed the reliably surprising imagination, the randomizing, theatrical showman, the disruptive, artful inspiration from the UX process. We have removed the chaos and unpredictability of art and inspiration.
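For what it’s worth, most of those “nuanced” scrolling effects reduce to something like the sketch below: an element translated at some fraction of the scroll position. Change the ratio, maybe the easing, and you have accounted for most of the differentiation on offer. (A minimal TypeScript sketch assuming a hypothetical element with the id “hero”; it is not code from any particular site.)

```typescript
// The generic parallax scroll effect, reduced to its essentials.
// Assumes a page containing an element with id="hero" (hypothetical markup).
const hero = document.getElementById("hero");

// The lone "design decision": how much slower the hero moves than the page.
const PARALLAX_RATIO = 0.4;

window.addEventListener("scroll", () => {
  if (!hero) return;
  // Shift the hero by a fraction of the scroll distance so it appears
  // to recede behind the foreground content.
  hero.style.transform = `translateY(${window.scrollY * PARALLAX_RATIO}px)`;
});
```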

Look, I realize that every UX-centric digital agency on Earth has some line about how their UX designers are storytellers. Which I think shows that at some level they must understand how important this is. But God love ’em, today’s inordinate breed of UX designer is really a figurative storyteller, and not much of a showman. And I don’t mean that disparagingly; the discipline simply doesn’t naturally attract those people.

Take a summer blockbuster movie; that’s also a user experience. Sure, it’s low on user, high on experience, but such theatrics and production value are merely at one distant end of the UX spectrum. What about theme parks? Who on your user experience team has even the slightest bit of proven experience in storytelling at that level? That’s an extreme, ok, but the reality is that there is huge potential on the web somewhere between that fully immersive, drama-led linear story and a registration form. Who on your UX team is responsible for spanning that? For even thinking about it? For imagining the magic in the middle? For finding new ways to move from a wireframe toward the Theater of Interactive? For making the experience surprise and delight us? How often are your projects led or directed by the performer’s mindset?

Since most of the web looks and behaves the same today, like the answer to a graphic design problem, most of you should have answered, “no one, and rarely”.

At this point there is always that one person who feels compelled to point at a generation of really crappy, Flash-based ad-ware from the early 2000s – the antithesis of “usable”, the epitome of unnecessarily dense interactive gibberish – as though that proves great interactive work can’t exist. We agree, much of that wasn’t any good. But neither is this future of timid, mincing goals.

Our overzealous response to Flash’s demise was to throw the baby out with the bathwater, to largely abandon the pursuit of disruptive new interactive models. I guess it was understandable; UX comes to the table with all these facts and empirical data, whereas UI designers tend to come with playful, colorful, unproven imaginings. We’ve seen what happens when the artist’s mindset has too much control; the stereotypical result is that it’s pretty, but doesn’t work. Looking at such a comparison, one can easily argue that it’s not even a fair fight. You might think, “Of course the UI designers are secondary to UX in the process.”

But you’d be wrong. The UI Design role (perhaps not the graphic-design-only purists, but this archetypal imaginative soul) has just been intentionally positioned that way specifically to keep the unpredictable, chaotic forces of self-expression and imagination, for which there are no best practices, from running roughshod over a methodical, user-centered, prototypical approach.

In fact, fostering imagination should be half the job of a project leader who works with tools that are still being collectively learned and understood. But the data-seeking mindset of UX resists this, and as a result limits imagination. It locks one into what one knows. It causes fear as one contemplates disruption. It magnetically holds one nearer to what’s been done before.

 

UX Side Effect 2: The Failure of Experts

In 2010, MIT professor Laura Schulz, graduate students Elizabeth Bonawitz and Patrick Shafto, and others conducted a study with 4-year-olds that showed how instruction limits curiosity and new discoveries. In the study, a number of children were split into two groups and each group was introduced to a new toy in a different way.

Photo: Patrick Gillooly

In the first group, the experimenter presented the toy in an accidental, inexpert way, “I just found this toy!” and pulled out one of its tubes as if by accident, making it squeak. She acted surprised (“Whoa!”) and pulled the tube a second time to make it squeak again.

With the second group the experimenter presented the toy with authority and expert instruction. “I’m going to show you how this new toy works.” And showed only the function of the one squeaking tube without mentioning the toy’s other hidden actions.

In reality, the toy could do quite a lot more than the children were shown in either group.

In the end, the children who were shown the toy in the more open, accidental, non-expert way were more curious and discovered all the hidden aspects of the toy, whereas the children who were expertly instructed in the use of the toy did not; they discovered only the action they were specifically taught.

Yeah, these were just 4-year-olds, but don’t discount the relevance of this study. This story is being played out in some form on every interactive design team that puts UX in the lead role.

UX-centric project leaders are experts in historic best practices and proven usability patterns; they explain “what works”, which is then backed by data, and this practice is producing measurable, incremental ROI, or, as most business owners would call it, “success”. But as with most young companies that begin to turn a profit, such success tends to shift team focus in subtle but profound ways; attention turns away from innovation, toward optimization.

And this trend shows no signs of stopping. There’s been a splooge of new web design tools, such as Adobe’s brand-new “Experience Design CC”, which are the design-tool equivalent of all this incremental, best-practice, templatized thinking fed back to us as next-generation design platforms; platforms in which rather significant assumptions have already been made for you about what it is you will be creating.


In their attempt to make these tools, and thus the UX job, easier, their creators have tried to dramatically close the gap between raw code (hard work and complete control) and whatever it is you have in your head. Said another way, these tools encourage a limited range of ideas.

An app like Adobe’s Photoshop, on the other hand, is, for an image creator or manipulator, a very powerful tool that gives one complete, atomic control over the 2D image. But it is therefore also quite hard to learn and master.

And I think that may be one of the trade-offs with tools like these. This popular, UX-ified state we are in has reduced the complexity and the possibilities such that “easier-to-use” tools like these can exist at all.

For that matter, web site template providers like Squarespace.com have had such ample opportunity to observe and reproduce all of your favorite, reused, UX-led design trends that they can offer them back to you in fully-designed, pre-fab form. Honestly, if you have no intention of innovating any new user experiences today, their designs are quite good.

All these apps and templates are merely serving as a different kind of expert: limitation, wrapped in the subtext of “what can and should be done”.

There can be no question that starting projects on the “expert’s” platform, while incrementally beneficial in the short-term, staunchly limits site creators’ understanding, imagination and exploration of new models and techniques.

No question.

 

UX Side Effect 3: The Separation of Content and Interface

Form follows function. That’s the fundamental mantra of great design. But what exactly is being designed on the web? UX designers say they design “user experiences”, so what constitutes a user’s experience?

If you bother to read the definition on Wikipedia someday, be sure to have a cup of coffee handy. Man, what a yawner. Supposedly the “first requirement of a great user experience” is to meet the needs of the product “without fuss and bother”.

Wait, seriously? That’s a “great” user experience, is it? Literally, it says that. Hey while we’re at it – maybe it should also be without any old tunafish cans and sadness.

Ok, look, whoever wrote this is both part of, and indicative of, the problem. They have totally nailed the literal “user” part, but they’ve left this otherwise really intriguing notion of an “experience” off the table.

So what is it really? Let’s cut to the chase. An “experience” must largely be defined by what the user does, and in turn what the user does on the web is enabled in large part through an interface.

Shouldn’t the interface itself therefore also be regarded partly as content of the experience? Well, yes, of course it should. Because it is.

If this initially innocent thought ruffles your feathers a bit, if it seems to crack some core belief, hold on tight; I’m not done. Because as a result of the interface being part of the content, the line between the form and the function of an “experience” naturally blurs to an uncommon degree. Which is an inconvenient truth that today’s UX designers, who inordinately prefer fully segregating interface and content, have left generally unacknowledged.

For years most designers have uncritically regarded their specific web interfaces, their chrome, as being more or less separate from whatever content is being served. Almost as a kind of wrapper, packaging design, or container, rather than an extension of the content itself. Indeed, that’s why flat design, as a universal, distinct, separate design trend, independent of any site content, can exist at all. Flat design can only exist as a solution if you draw a hard line between interface and content. If you regard content as something that should be placed in a template. Period.


In fact, the persistence of most of the best practices we share today, the underlying support tools like content management systems and authoring apps, and even design-template sites are all products of the same segregation thinking. They can only exist if content and interface are broken apart.
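If it helps to see that segregation in its barest form, here is a sketch of the mindset: content lives in a CMS as inert data, and the “interface” is a fixed wrapper that any page can be poured into. The types and markup below are hypothetical, illustrating the pattern rather than any real CMS’s API.

```typescript
// A sketch of segregation thinking: content as inert data, interface as a
// one-size-fits-all template. Types and markup are hypothetical.
interface PageContent {
  title: string;
  heroImage: string;
  body: string; // the content carries no interactive behavior of its own
}

// The "interface" is identical for every page; content merely fills the slots.
function renderIntoTemplate(page: PageContent): string {
  return `
    <header class="flat-nav"><button class="hamburger">☰</button></header>
    <main>
      <img class="full-bleed-hero" src="${page.heroImage}" alt="">
      <h1>${page.title}</h1>
      <article>${page.body}</article>
    </main>`;
}
```

Nothing in that sketch lets the content express itself through interactivity; it can only be placed.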

On the other hand, we’ve all had the very rare experience of visiting a website where something different happened. Where the site creators bucked the segregation of content and interface, and clearly married those components from the beginning. They developed some unique expression of the content through interactivity as opposed to relying on content management systems and best-practice thinking. And when it’s done well (of course all things can conversely be done poorly) you feel it. You gasp. You say “Hey you guys, check this out”. It feels magical. It wins awards. It feels true to this medium. It says more than any templated, flat-designed interface, images and body copy ever could.

Why did this happen? Why have so many chosen segregation? Why has separating interface and content become the norm? Well, if you have only been working in the industry as long as UX has been a thing, you might imagine that it’s because this is the right way, that we are headed down the true and righteous path. You might look around at all the smart people who are confidently on this path and imagine that it’s because they are in pursuit of some empirically ideal interface; some perfect range of optimized graphic affordances and interaction principles, and that continually refining toward that ideal is our purpose. Indeed if that were true, that might even validate the idea that interface and content are meant to be separate.

But that would be woefully incorrect.

Ultimately, the reason we chose to separate interface and content (consciously or unconsciously) is that the nature of true experience design is just… well, it’s just really hard. Sorry, that’s it. Too hard, too expensive, too much time. The truthful, authentic approach to user experience design is harder than whatever it is we have all agreed to call “user experience design” today. So we have rather thrown up our hands and admitted a kind of meta-failure. We gave up. Our further avoidance of this truth is a sign of our own willingness to admit defeat right from the get-go.

So everything we design after that, all the busyness and importance, is an openly pale compromise. At least we know that now.

As if that wasn’t enough… are you sitting down? I hope so, because I am about to say something further that will cause the remaining half of you to grit your teeth, particularly because I don’t have comments enabled on this site to serve as an outlet.

User experience design is not merely the strategic design of a site’s architecture, flow and interface; true user experience design is also interactive content creation. As such, form and function become the same thing.

Yeow! Ok, that smarts. I’m sorry, I know this is painful and unclear. Just so you know, it is for me too. Moving on.

When designing a television, there is a clear demarcation between the interface of your television, say, and the content it presents. The TV hardware has one user experience, and the content, another. And in such a design world there is clearly no expectation that the content creators are in any way responsible for designing the TV interface, and vice versa.

The same can be said when designing any other hardware device on which we cannot know precisely what content will be presented. Or an OS, which must serve all manner of possible experiences. We must work at arm’s length from the content. Cognizant and supportive, but ultimately behind the wall. In the development of a browser, the classic rules of form and function still come into play. There is a clear delineation between interface and content that gives a designer focus.

But when you step inside a browser window those tenets of design blur. Here, an idealized site creator, a true UX designer, would have complete control over the content to be presented. Strategically speaking, that’s their job after all. As such, drawing a line between content and interface, or not, suddenly becomes a matter of choice, not requirement. The corollary would be a television designer who is also expected to fill the set with content, and must therefore make movies.

Can we apply the tenets of design to the creation of a feature film? What is the “function” of a movie that comes before the “form”? Is it to tell a story? To entertain? To market products? To part consumers from their money? To make an audience feel something?

To be honest, I think that probably depends on who you ask. But it’s safe to say that a feature film is less a design problem and more an art problem. The same condition exists for user experience design on the web. In fact it’s a bit more complicated, because the medium and the surrounding business have been so strongly influenced, for so long, by people who come from design. They’ve had their say. They’ve pushed web design into their comfort zone and reduced “experience” to a myopic set of design elements and interactive conventions. And they have relegated the unpredictable sensibility of improvisation and showmanship down the food chain to an out-of-the-way place that is largely gratuitous.

A movie produced that way would probably turn out to be a middling documentary, a news report, or a corporate training video. You probably wouldn’t be breaking any box office records.

What about those aspects of an experience design which really truly need to be segregated from the content? Well, you might ask why that’s even part of your site; I mean, you might argue that any interface elements which really are unrelated to the content might rather belong somewhere else.


Virologist studies the hamburger icon

Take the ubiquitous “Hamburger” icon, for example. Since it appears to play a role on the vast majority of new sites, one could safely assert that it’s clearly not specific to any brand or strategy. Ubiquitous and non-specific, the hamburger icon, one could argue, might even bubble up into the browser chrome. I mean, why not? We have a back button up there, and yet your fathers nevertheless used to design back buttons into their websites. Theoretically, if prototypical sites are so great, and if you take the generic user experience trend to heart, every website should have content to fill a browser-based hamburger menu. It would free up space in the window, and give users the prototypical experience we are told they crave. It’s a win-win, right?

Ok, I know it has issues, but let’s pretend you think this isn’t as crazy as it sounds. I hope you can see that as we extend the idea of prototypical function over form, we rather quickly get into a situation where chrome owns your “UX” and you merely fill it with pre-determined content.

And hopefully you see that we are basically causing that today.

 

 

External Stimuli: The Fall of Flash And The Rise of Multitouch & Apps

But why haven’t we recovered in our quest to innovate and differentiate on the web? Where did our aspirations go? Why do we even accept such a simplistic view of what’s possible on the web?

Both the popular rise of UX and the en masse surrender of interactive innovation by web designers took hold around 2007, roughly following the unveiling of multitouch on the iPhone.

Was that timing just a coincidence? I don’t think so; I think they were directly related.

After over a decade of interactive experimentation with the often kludgy, challenged tools available to web designers (Shockwave, Flash, etc.), Multitouch arrived via the iPhone.

Holy crap.

That was true interactivity! Our fingers and hands mucking through the very digital mud? Pinch to zoom – seriously?! Humanity’s continuum toward the Holodeck suddenly snapped into sharper focus. Like the sun overpowering a flashlight, one could see nothing else. It wasn’t even close.

It was at that moment that anyone who was truly passionate about the design of interactive experiences and about innovating new interactive models on the web redirected some key part of their attention to the implications of this new domain. Adobe’s Flash, the in-browser tool which up to then had been the de facto authoring tool for rich interactive innovation, seemed, in conjunction with the PC mouse, almost immediately antiquated. Next to multitouch, the resolution of interactivity on the web was pathetic.

And I believe a sufficiently large swath of interactivists, at that moment, had a (possibly subconscious) epiphany:

“Maybe,” the thinking went, “Maybe the browser isn’t really an optimal platform for innovating immersive, new interactive experiences. Maybe we have been over-shooting all this time, and the browser is already kind of figured out after all. It’s certainly boring by comparison. Maybe interactive innovation is rather the domain of countless new technical platforms yet to come. Maybe we should just re-approach the browser and refocus our work there towards the simple, obvious things that we already know it does well. Just the basics.”

You can sympathize with this line of thinking. I mean, focusing on the known strengths of any technology, including the web, is sensible, and feels like a more mature, nuanced approach to the medium. And yet that simple recalibration, so holistically adopted, sucked the oxygen out of web design, squelching the drive and appetite for explosive innovation in our browser-based experience designs.

Some of you read this and probably still believe that the browser is not a reasonable platform for aggressive interactive innovation today. That we “know” what web sites should be now – better than ever before.

Yes, it’s easy to fall into that trap. I was with you as recently as a year ago. But there is one thought that obliterates that opinion for me.

Let’s play out a hypothetical.

What If Technology Froze?

Let’s imagine that our technical landscape froze. Doesn’t matter when. Say, today. Just stopped advancing. That the development tools you use today just stayed exactly the same for the next 20 years, no new versions, no new web standards, or bug fixes or updates, no faster processors, just these exact tools with these exact attributes, flaws and all. What do you suppose would happen?

Would the way we choose to wield those tools remain static and unchanging as well? Would web site design and the experiences we create for the next 20 years stay exactly the same as they are today?

Of course not! Our knowledge of those frozen tools’ true capabilities would explode. We would continue to discover capabilities and nuances that we have simply not had the time or wherewithal to discover today. We would fully explore everything these tools could do, every aspect. We would see massive advances in our collective understanding and use of that static technical state. We would see a renaissance in our collective skills, interactive vocabulary and creative concepts. A new language. We would get vastly more sophisticated and better at creating, understanding and using content within that static medium. In short, we would master those tools.

Paint does not have to advance for a painter to develop the skills of a master.

The tools wouldn’t change – we would.

What that suggests to me is that such depth of innovation has always been possible, and is openly possible today, but intentionally or not, we don’t pursue it. That it has otherwise always been possible to deepen our understanding of any given technical state and to innovate aggressive, new experiences, but that we just can’t, or don’t. We simply choose not to set the bar high enough.


Surely, one can argue that this is partly because technology changes too fast for creators to develop mastery. It advances out from under us. Indeed, most of us can no more appreciate our true, collective lack of insight and skill than a cave painter could have imagined the insight and skill required to paint the Mona Lisa.

Rather than bother trying to master the tools, many of us now patiently rely on the novelty of new technical tricks and advancements to fill the void. We have off-loaded the responsibility to creatively innovate onto the developers of the platform.

We wait around for the medium – to master us.

There are ways to fight this condition. There always have been. Ways to reach further, and to get closer to the ideal of mastery and innovation despite technical change, than the vast majority of web design teams do. A very small handful of great teams, such as North Kingdom in Sweden, know those tricks and techniques and achieve that today. But it starts with acknowledging that there is more. There are better, bigger ideas than those which the best practices and incrementalism of UX have delivered us so far.

It means you must look at the product of your design effort today and see the potential to have innovated much further.

You have to believe, once more, that your medium, exactly as it is, can do much more than you have bothered to discover.

 

 

The Tragic Death of The Masters

Those of us who created interactive work in the early 90s were long ago forced to come to peace with this. Those of you who created Flash-based projects for many years were probably forced to face it only recently. And as sure as you are reading this, those of you who have only been working in the medium for less than a decade will unfortunately face it soon enough.

That we live in the Digital Dark Ages.

The one thing most of us will agree on is that technology changes. But as our industry incessantly celebrates the new, we less often notice what fades from view as a result. Yet fade away is exactly what our work does. Unlike virtually every other preservable medium, the interactive work we create erodes from the view of users of future technologies. This is not just the result of changing software, versions and standards, but also of changing hardware, input devices and platforms, and of the unwieldy nature of maintaining functionality across the exponentially vast body of work being produced. A basic change in hardware will obliterate a decade’s worth of work or more.

A century from now, your grandchildren who bother to look back at your career will see little to nothing, wonder fleetingly whether you ever created anything of value, and then assume “probably not”, since absolutely nothing you created will exist. Lost forever, as though it were merely a live performance. And then they will design some experience themselves, using future tools, that attempts to produce exactly the same emotional payoff or dramatic moment that you are producing today. Maybe they won’t do it as well as you did. But no one will be the wiser.

Unlike any other recorded, preservable medium such as literature, audio recording, or film & television, interactive work is the first where the great masters and masterworks that came before you disappear. Have disappeared. Vanished for being fully dependent on a temporary technical state. Consider what literature, music, film & TV would be like today if every book, song, movie and show ever recorded vanished forever 8-10 years after its creation. If you’d never seen Charlie Chaplin, Buster Keaton, David Lean or, hell, even Star Wars. If the only content you could experience and reference today was all post-2008? Think about that. Across every medium. Because although we chronically celebrate the “new” in the interactive space, it’s the previous work that first addressed all the fundamental challenges interactive designers face today. And often without the burden of being instructed by experts in what can’t be done.

There are countless versions of the Internet (and earlier platforms: CD-ROMs, floppies, consoles, etc.) full of astounding interactive experiences, conventions and designs: beautiful, delightful work that you have lost the ability to reference and learn from. Even now you probably mistakenly assume whatever work existed then wasn’t all that good, because “we’ve moved so far past it; our tools are so much more advanced now”.

But if you imagine this, you are mistaking platform and interactive resolution for experiences, content, emotion, and behavior.

Unfortunately, many of you are left to stumble blindly into re-conceiving and rediscovering those same, been-done inventions, stories and discoveries all over again, as if for the first time. And sometimes you won’t.

The Digital Dark Age has cut today’s designers and developers off from the vital inventions and experiments of the medium’s previous masters; rich resources of experimentation and invention that might have challenged the gross commonality and safe standardization we see today. Might have allowed today’s designers to go farther. But their absence has instead relegated today’s industry to being perpetually inexperienced.

 

 

Taking Control & Crushing the Crisis

We can fix this. It’s huge and has massive momentum, and it will require us to humbly accept our recent errors, but we can fix this.

Approaching project leadership differently than we do today is going to be the best lever we have for effecting this change. We need to start with the right people and priorities in leadership positions.

Individually, each of us can start by acknowledging less certainty in the things we currently take for granted. By once again accepting the possibility that aspects of our process and beliefs, bleeding-edge though they may seem, are woefully incomplete. I realize that back-pedaling from progress feels counter-intuitive in a medium that is still being understood – where we’ve only just begun to feel like we “get it”.

Where so many are still only now climbing onto the UX wagon.

But this medium has always been destined to evolve past the domain of GUIs, toward “Interactive Theater”. Consider ongoing advances in Multitouch, AI, and VR among others. More and more, Interactive media is going to be defined truly by experiences, real ones, not the scroll and click brochures we see today.

User Experience will be designed to generate improvisational drama and emotion, delight and magic; in short, a show. To lead such a project, you’ll need usability-knowledgeable performers and showmen, not analysts and best practitioners.

Although this shift in user experience is more obvious when you project out to such an advanced technical future, it is still starkly relevant today.

In no way am I saying that we should abandon the best practices we have developed. But I am saying that it’s patently wrong for the same psychographic that inordinately defends those best practices to lead an interactive project.

Some of you are UI Designers who truly love exploring the aesthetics of an interface. I get that. And thanks to the present state of the medium, that will remain a valid role a bit longer. But to build and maintain relevance in your career you must move past the focus on mere graphic design that UX has relegated you to. You must be thinking about motion, behavior and improvisation with your users. And needless to say, you must resist drawing inspiration from so narrow a slice of your peers.

There was a time before Bobby McFerrin had his unique vocal style; the one that made his sound instantly recognizable in the early ’80s. In interviews he described developing that style by cutting himself off from listening to his peers for two years, specifically to avoid sounding like everyone else. He used this time to explore new, original ideas. He then spent four more years practicing and perfecting.

Perhaps for you, developing an original approach won’t take two years of isolation; perhaps you can find your personal, inspirational vein without retreating from the industry altogether. But part of being a strong designer is tapping into a unique creative vein that inspires you. Not taking your inspiration from the work that has been “Most Appreciated” or “Most Viewed”, but from a thread of your own. It takes guts to put yourself out there. To accept the professional risk of stepping away from the work of the popular kids. To avoid clicking on the “Upcoming Design Trends of 2017” prediction articles, and to do something appropriately true to you. If you aren’t on such a path, then you need to get on it; either that, or take satisfaction in being regarded as a craftsman. Good craft is valid, but distinct from design.

So Who Leads?

  • When designers lead projects you get solutions that look pretty but don’t work.
  • When technologists lead projects you get solutions that function but look like spreadsheets.

And we must now add a new, equally unfair generalization to the list:

  • When UX leads projects, you get solutions that convert better but are just like everyone else’s.

The ideal interactive experience takes equal insight and invention from all of these very disparate specializations: creative, performance and design, strategy, best practice, analysis, and numerous technologies.

That’s why we must rather encourage much more systemic debate between specializations. This is a team alchemy issue.

In particular we must try to undo the damage we have done to the design community when we began deprioritizing the “artist’s muse” in the origination of user experience. The fractured sensibilities of strategic UX and aesthetic UI, as we define them today, must be re-merged, either through skill development or team structure.

We then must empower that sensibility. We must reprioritize expressiveness, artistic exploration, play and the chaos of inventiveness as critical aspects of the UX Design process. Equal in importance to the logical, rational aspects that exist today.

Project planning and scheduling further need to accommodate this change in process by building space for experimentation and the inevitable stages of failure and problem solving.

I believe that the best, most advanced interactive teams will address this leadership issue in one of three ways:

  1. They will assign project leadership to a new role, a Director, hierarchically above what stands in for UX today. UX will take on an advisory role. This director will defend the dramatic experience, like a film director who leads many technical specialists across wide ranging fields, to a creative end.
  2. They will significantly change the job requirements of UX leadership to favor people who have a strong showman’s sensibility and an artist’s openness to new ideas: a true innovator, a playful romantic, in addition to someone who can manage strategic, functional, best-practice authorities.
  3. They will remove the singular leader altogether and move to a multi-disciplinary leadership model, raising the status and empowerment of each leader of the multidisciplinary teams. This is hard, and risks committee thinking, but in a team lacking ego, whose focus is laser-set on groundbreaking experiences, it absolutely works.

Many combinations of these approaches would work. Each has risks and challenges. But if done well, each also increases the likelihood that we will see more differentiation and innovation than we do today. Hopefully we will see more magic.

 

Conclusion

I’m sure by now it’s occurred to you that I’m a total hypocrite. That here I’ve produced this stupidly long blog post that might as well have been a book, and it’s everything I have railed against.

Ok, I admit it, you’re dead right; ideally I would have expressed these ideas interactively. At least I would have supplemented my ideas that way. What can I say? I did this alone. I don’t code much myself, so I have to rely on the tools that I do have sufficient command of.

But at least I’ll admit here in front of you that I have totally failed the medium because of that. That I have produced a piece of totally inauthentic work. I only hope you can see past that. And I truly wish the rest of the industry would similarly admit the same failure.

Tomorrow we’re all going to go to work, and we’re going to be surrounded by a wide range of highly talented people who don’t yet think this way. Who don’t yet see what they sorely lack. People who are comfortable with the current definitions and trends, but who collectively have all the necessary skills to conquer this problem.

Many will not see that they are more than members of the design community, but that they are also on a stage, in a theater, in front of an audience. Putting on a nicely typeset, but very dull, show.

The idea that their project is maybe off target to some extent will not be met with ease. The whole system is aligned right now to squelch your innovative inclinations. But I encourage you to evangelize the need to break the stagnation, to find a new definition for UX and a new team structure to support it. At the very least be the person who questions the status quo and is confident that we can and should invent again.

And look, changing your mindset to favor innovation will help you in your career. The medium is going to change as technology and the world change, as it has changed countless times already since the early ’90s. The birth and death of browsers, the birth and death of Shockwave and Flash, the propagation of social interconnection, the fragmenting of screens, the ubiquity of mobile, the Internet bubble and ad-blockers: the only certainty you have is that the next fundamental changes will be so profound that they’ll catch you off guard. And if you have merely specialized in the current state, if your days are filled with the processes and skills that serve this narrow, off-target, flat-graphic, UX-best-practice-based website, circa 2016, then there is more than a good chance that, when such change comes, you’ll have precious few applicable skills to bring to bear.

Focus on the big picture. Study and understand the future ideal of this medium, and work back from there.

This medium is beautiful and powerful. It carries more latent potential than any other medium before it. And yet we barely understand it. We can barely speak its language. We even lack respect for our own inadequacy. So as the young, best and brightest enter our industry to be routinely trained on such misaligned conventions, led to believe in our self-satisfied celebration of “design”, all while the true reason for being of this medium is so weakly embraced, it breaks my heart.

In those very rare instances when I discover someone who actually does get the medium, who bucks all these constitutionally weak, generic trends and produces a delightful, innovative piece of work that is true to the medium, one that effectively marries strategy and functionality, imagination and performance with a fine sense of taste and craft, it gives me goose bumps and makes me want to yell and fist-pump in their direction.

This medium can support magic. We only need to try.

 


 

Special thanks to Tom Knowles, Gisela Fama, and Marcus Ivarsson for some truly thought-provoking, spirited debates.

 

 

Messages from the Future: VR Entertainment

Ok, so in the future, Elon Musk’s math turned out to be wrong. No, we don’t live in a Virtual Reality simulation. Turns out, however dull and tragic it might seem, this world is our actual base reality. Boring, I know. However what his math did prove was that otherwise smart people who are exposed even to old, crappy pseudo-VR, like you have today, almost immediately start to question their base reality for no other apparent reason. Not surprisingly this turned out to be equally true of 15-year-old boys who watched “The Matrix”. Go figure.

That said when we extended Musk’s math even further, it also proved that we will eventually learn to travel back in time, and that time travelers are therefore among us.  Something I didn’t believe until it happened to me.

Anyway, before we get into what made VR entertainment awesome in the future, I feel like I need to explain what VR wasn’t, because in your time a lot of you are still confused about that.

VR Wasn’t On Your Phone

Listening to the press in your time you might imagine that VR is on your phone. That as early as next year you will be able to “put awesome VR in your pocket”.  …Really?  Could someone at Wired please define the word “awesome”? I mean, because I just used that word, and the way you’re using it so wasn’t what I meant.

Today, filmmakers, technologists and, naturally, pornographers are breathlessly diving into this idea (prematurely), pitching and signing VR content deals to produce some of the world’s first so-called “VR films”.

So let’s cut to the chase.

In the future VR was not about turning your head to look a different direction.

Yeah, that didn’t turn out to be it at all. Totally wrong. Yet somehow an entire industry seems to have confused this point. In fact, turning your head during a linear movie appears to be the entirety of what many otherwise smart people mean when they say “VR” in your time. Didn’t the fact that Google made theirs out of cardboard indicate anything to you?


360 wasn’t VR any more than a 4-year-old’s crayon-drawn flip book is a summer blockbuster.

Anyway, despite the efforts of some over-eager filmmakers who really tried making movies where turning your head was, like, a “thing”, you thankfully moved past that phase pretty quickly.

If you are one of those guys considering making one of those linear, head-turny VR movies, you could save yourself a lot of professional embarrassment and personal disappointment and just not do that instead.  Strongly recommended.

I further find it fascinating that the same people who kicked and screamed before admitting that wearing Google Glass made you look like a complete dork are now honking the same ain’t-it-cool clown horns all over again with VR on your phone.

I get it. I know. You can’t wait to be special, little, cyber-cool, robot-hacker adventure guys. That’s still a cool thing in your time, right? A hoodie, a laptop, Christian Slater, and you, with a shoebox strapped to your face, waving your arms like an idiot catching invisible unicorn butterflies.

I get it.  Yeah, you’re right, you’re really cool when you do that in public.

Augmented Reality’s Achilles’ Heel

“Oh, but I am fully aware that so-called VR on your phone is kind of stupid,” you say. “I’m on the cutting edge. That’s why my eye is on Light Field-based Augmented Reality.”

Right. Augmented Reality, or “blended reality”, or “mixed reality”, or good lord, whatever the hell Reality you’re calling it now – it’s all the same thing; why do you keep renaming things that have perfectly good names?

Augmented reality wasn’t an entertainment game-changer either. And this goes against everything you are reading in the press in your time. Most Augmented Reality evangelists are super excited about how AR is a, or maybe even the, medium for entertainment in the future.

So here’s the deal: in the future, AR was to digital entertainment what sushi is to fine cuisine. Some of it is really good, but the vast majority of fine cuisine doesn’t involve uncooked fish.


Due to the medium’s definitive limitations in the face of the massively expansive domain of entertainment, AR occupied a very narrow slice of the experience pie. It was no more the medium for entertainment in the future than mobile phones are today. You know, hindsight being 20/20.

I admit, however, that AR appears to demo very well. About as well as any spanking-new special-effect technique from Hollywood. Bearded tech bloggers with their geek-chic but mostly geek glasses are giddy and excited about the promise of this medium, thrilling and editorially gasping at seeing jellyfish near a ceiling or C-3PO standing behind someone’s desk. And it is kind of cool in a way, because in your time you have never seen that before. As a visual effect (which is all it is), it probably seems like magic. But if you’ve therefore proclaimed AR as “the future of entertainment”, well, you’re missing a really crippling something:

You’re in your room.

Star Wars VR – Episode IX – LOCATION: YOUR ICKY ROOM

There’s your coffee cup, there is yesterday’s half-eaten banana, and, oh, there is the underwear pile you keep meaning to put in the laundry basket before Brittany gets here.

In the expanse of storytelling, I don’t know how else to say this, there are only so many believable stories that could happen in that room, surrounded by your own personal junk. Said another way, amidst the infinity of possible amazing stories that storytellers will wish to tell, across vast worlds and realities, only a minuscule, meaningless number of them has anything to do with wherever the hell you actually happen to be in reality.

Got it? By its very definition, AR suffered the severe limitation of having to co-exist with your world such that suspension of disbelief was maintainable.  Even though the effect might make a compelling 5-minute demo today.

Hey, who doesn’t love the idea that C-3PO might be on a vital mission for the rebellion which coincidentally can only be conducted… three feet from your slightly mildewed, 1960s, pink-tiled bathroom?


Pixar folk have uttered a number of brilliant statements related to the relationship between Technology and Art over the years, and this one feels relevant here:

“You can have some really stunning imagery and technical innovation, but after about 5 minutes the audience is bored and they want something more interesting — story.” – Lee Unkrich

Yes – I know AR entertainment seems cool right now while the visual effect is still novel; and further, having not yet experienced any, let alone five or more, big-budget VRs, it probably seems like there must be countless stories and realities one could create that would coexist nicely with your real world thanks to AR. In fact, the possibilities might seem limitless to you now. That’s what you’re thinking, right? That’s certainly what you’re reading. And ok, fair, there were a small handful of good ones.

The problem was that those couple good ones got made, and in very short order it became clear that the same few, contrived, narrative devices had to be repeatedly enlisted, ad nauseam, in order to explain away the unavoidable fact that this story was happening a few feet from your much too hastily selected Ikea shelving unit.  And trust me, that got old really fast.

There were the scary killer/monsters in your room stories, the impossible, magical/sci-fi whatever in your room thanks to some coincidental, random, accidental dimensional/time portal stories, the Elon Musk was right about the Matrix stories, and the king of all AR stories, the Bourne-ish spy/conspiracy for-some-reason-you’re-the-random-person-we-coincidentally-need stories. And at some point storytellers and audiences just realized that having to co-exist with the real world was a repetitive, and somewhat annoying, contextual handicap, and backed off, allowing AR mode to settle into its rightful use cases.

That said, the one genre where none of this was a problem at all was – porn. Although few acknowledged it openly, porn dominated AR entertainment. Integrating with your real world actually enhanced porn’s value. On the one hand, the illusion was all that mattered; unlike other genres, no one cared about actual story devices in this context. And on the other hand, with AR you could keep a defensive, watchful eye on the real world. There was little more embarrassing than being walked in on, and on full display, unawares, while blindly aroused in some depraved, fetishistic VR extravaganza.

To wit, whole video sharing sites were dedicated to streaming parades of horrifying, thank-the-greek-love-goddess-Aphrodite-that-wasn’t-me, “caught” videos revealing one blissfully-oblivious, self-gratifying, gogglebrick-faced, sex pig after another. Esh. There but for the grace of God…

Non-porn AR entertainment on the other hand, settled into a more casual entertainment role, generally serving arcade and puzzle games that utilized objects or textures in the space around you, or ignored the room altogether.

This is not to say that was all AR was good for. Not at all. AR was massive in so many other, non-entertainment ways. AR was indispensable at work, in communication, education and productivity.

After all that, I guess it would be a good time for me to tell you that actually, the difference between Augmented Reality and Virtual Reality was trivial. They were just modes. A toggle.

Tap, AR.

Tap, VR.

Tap, AR again. Got it?

So turning off your view of the real world and entering an immersive new one was trivial.

But although mode switching was trivial, it was here in VR mode – the real world blocked out entirely – that entertainment non-trivially reigned.

VR Storytelling

What constituted a VR entertainment experience? What made a great VR story?

Today the closest discrete relative you have to VRs, as they manifested in the future, is games. But don’t get all excited. These weren’t sisters. Today’s games are that second cousin who voted for Trump, eats way too many Cheetos, smells her toenail clippings, and buys cheap jewelry on the Home Shopping Network. Comparatively speaking, the best games you have today are fiddly junk. And the platforms that power them are – so – fucking – slow. I mean really, today’s games are cryptic. How did I ever enjoy them? Ugh. All these ridiculous limitations.

“No, you can’t kick that door down because we meant for you to find the key. Oops, you can’t walk over there because, well, you just can’t. You can break the window… oh, but no, you can’t use the broken glass to cut the rope, because, well, honestly none of us thought of that.”

If a specific action was not preconceived by a creator, you can’t do it. No room for your creativity and problem solving, unless the creators thought of it first.  And the whole time you have to twiddle these stupid little game controller buttons and joysticks. God forbid I should wish to say something to a character, with all those coy responses designed to side-step the fact that this so-called “AI” can’t logically respond to anything outside of some arbitrary, branching, preordained multiple-choice quiz.

This isn’t VR either.

I mean, imagine how you would feel if you were back in 1978 and all the millennial, hipster news bloggers were fawning over Coleco’s Electronic Quarterback as “awesome video games in your pocket” and you were the only one in the world who’d spent 5 years playing a PS5. You wouldn’t know where to start.

And yet games are still the closest, discrete thing you have to actual VR.

VR Storytelling: The User Strikes Back

A great story, as you think of it today, depends on structure, timing and sequence. Without a story’s structure and a sequence of events, how can there be a story at all? Fair point. A traditional, linear auteur very carefully structures a story, and times a sequence of specific events that build to a conclusion. The linear storyteller definitively owns these decisions.

But in an interactive domain, glorified by VR, the user throws that structure, timing and sequence into chaos. Because the user makes those decisions. Not the storyteller.

So structurally speaking, “Storytelling” and “Interactive” are polar opposites.

Now at first this sends all the linear storytellers into a tizzy, because it sounds to them like a kind of absolute chaos. But that’s because they’re just used to having absolute control over structure, timing and sequence.

“Yes, but what do I do then? How do I tell a story?! What are the tools of my trade?”

To this I would remind the linear storyteller that story is about more than structure, timing and sequence. Story is about character. In fact, the very best storytellers will instruct, much better than I can, that “character is story”. That embedded in every character are countless stories that might manifest fascinatingly under a near-infinite range of meaningful challenges. Writers of the world’s greatest stories start with who their characters are. The way characters react to conflict drives the story forward. In fact, when a story is not about character, the story is usually bad.

Character backstory was a critically fundamental part of interactive storytelling in the future. The platforms were fast enough and the AI intelligent and improvisational enough to literally perform characters that you could relate to. As a writer, you could create people.

There were other tools: environments, objects, and acts of God (or maybe acts of “storyteller”). Along with character these were all parts of the interactive storyteller’s palette. The story was literally embedded in the assemblage of programmatic objects.  And “acts of God” did allow the storyteller to take some control. To affect timing and events.  To a point.

And in the end, that was the game. Not sequence and timing, but potential and likelihood.

The art of VR storytelling was the masterful design and assembly of character, environment, objects, and acts of God in the construction of a narrative theme park – saturated with latent story probability. All powered by sophisticated, improvisational AI.

VR storytellers were experts in human behavior, and understood how to encourage motivation and read and manipulate emotions in action.

Crying

Despite the skill of great VR storytellers, one of the things they struggled with for years was sad stories. There were so few successful dramas that brought you to an emotional place. And at some point we realized – it was a result of the very medium.

In real life we experience emotional pain due to our lack of control over the universe, and that is what makes us cry. For example, say Bosco, your puppy, gets run over by some (yes, self-driving) Uber. You would do anything to stop that from happening, but it’s real life so you can’t. The same is true for movies and books, where you are limited to a preplanned path. You can’t change it.

But in VR, we were in control. When you saw some impending tragedy, you had the immediate ability to fix it, undo it. We could choose what we wanted to happen. Bosco can be saved! Hooray!

VR, as a medium, existed to provide wish fulfillment.

As a result, you almost never cried in VR. And VRs that wrested away your control to try to make you cry just didn’t do that well.

Actually, Size Matters

The hipster millennial reporters, who thought awesome VR could ever make its way to your phone, took years to accept that despite Moore’s Law, there was a consistent, significant, qualitative improvement afforded by large, dedicated, wired systems. The quest for perfected VR ended up being a never-ending hole of technical advancement because the target was so high (the convincing recreation, abstraction and manipulation of the real world and all nature’s elements and laws) and so far beyond what was technically possible at any given time, that any version of miniaturized, portable VR always seemed grossly inferior. The state-of-the-art physical setup, the sensors, haptic projectors and computing power required to run great VR still had not, by the time I popped back to this time, become small enough to carry with you, nor would you want it to.

The reason you might not fully appreciate this is that you are still defining VR at the resolution you think of today. Oh, that gets miniaturized, sure, but that was lame.

For example, by the time I popped back here, what you’re calling haptic holography was a critical part of both the experience and the interface. This was not just some dull Apple Watch pulse. I mean you could bruise yourself on a virtual rock if you weren’t careful. Wide ranges of textures, heat, cold, fluid dynamics (wind, water, etc.) could all easily be replicated. If you got haptic water on your hands, they really felt wet. Which led to all sorts of applications. You could wave your virtually wet hands and feel the coolness of evaporation; you had to dry them off on something. You could feel VR clothes and the weight of objects.

And get this, they could even create haptic effects inside your body. Again there were safety limitations, but it allowed the system to adjust your sense of orientation, to create the illusion that you were flying, or falling, or accelerating or decelerating, or standing. Even when you were just sitting in a chair you could feel like you were walking. And seriously – don’t get me started on porn.

As you can see, by the time your Apple Watch (the only device most of us carried) had enough thrust to power aural/visual-only experiences, the larger, wired, in-home rigs were producing massively richer, more jaw-dropping experiences that just made the phone version seem, well, kind of stupid.

You’d see businessmen fiddling with some portable version on the Hyperloop, but there just wasn’t much to that.

And so it went for some time. Until our senses could be bypassed entirely, computers became sentient, and all hell broke loose.  But that’s another post.

Generation VRI

Back in the “moving meat days”, and despite VR-proofing rooms (which basically involved padding, like you would do for a baby, but only for a full-grown, 250-pound man), every one of my friends had some awful VR injury story. VRI was a thing you bought insurance for. As you neared walls and objects in the real world, most VRs would alert you in various ways. However, you would be surprised how strong the drive to do what you’d intended could be in the heat of the dramatic moment. You would invariably push, just that little bit further, to accomplish your goal, despite the warning. This tendency was called “elevening” (“11-ing”, as in “push it to 11”). Elevening caused stubbed toes, noses and fingers. People tripped, collided, broke bones, knocked things over, fell off balconies, knocked people out windows, got electrocuted and burned, and in too many cases, died. To counter this, some VRs employed something between VR and AR called Reality Skinning, where your real room and whatever objects were in it, chairs or whatever, were all rendered as themed objects in the virtual one. But I always found that a bit lame.

Getting injured in VR, however, was the least of our issues.

Although VR was awesome, its problem was that it was really awesome.

Pretty much an entire generation weaned on VR grew up, at best, bored stiff with the real world. But usually worse. Leaving the virtual world and reentering the real one of inconvenience, dirt, ailments, limitations and an oppressive lack of control was such a profound letdown. Your ego was once again forced to accept your oh-so-many pathetic imperfections.

Pulling out was universally met with depression. Often severe.

People slept there. It was vilified as being addictive, but how could it not be? An always-on Vegas casino, perpetual early dusk; party-time forever. Like heroin addicts, VR users suffered from a wide range of ailments, severe nutritional deficits and health problems related to hours on end forgoing attention to their real-world meat. Dieting stopped being something anyone tried to do. You think you have a sedentary population today? You have no idea.

Users exhibited all sorts of bizarre behaviors and tics due to reflexively gesturing virtual actions that had no impact in real life.

Intelligent users were often confused by, and distrustful of, base reality.

A surprising number of people drowned when they discovered they couldn’t actually breathe under real water, let alone swim. Others jumped off buildings because they actually did, with complete certainty, believe they could fly. Empathy plummeted. Samurai sword violence shot up dramatically.  And we generally stopped procreating, having been profoundly overstimulated by wildly perfect, surreal fantasy surrogates, and because we’d also become far too insecure in the presence of other equally damaged, relatively ugly, real live biological people to build relationships anyway.

I mean, no duh, seriously? What did you expect?

There is so much more to this story, but suffice it to say that VR changed everything. You could do anything, be anywhere, be anyone…

As such, VR was not just another medium.

VR was an alternate world in which our wishes were granted.

Think about that while you fiddle with your phones.

Oh, and Oculus Rift didn’t end up ushering in anything. They just became a peripheral company.

Messages From The Future: The Decline of Apple

I’m sure you’ve had your own debates with the “Apple is about to die” crowd. I’ve had those too. Except that being from the future, of course I’m the only one who actually knows what I’m talking about. And yet even though the future is not always rosy for Apple, even though some of these people sometimes have a point, they still piss me off just like they did the first time I was here.

Usually the argument centers around the tired meme that Apple has nothing significantly visionary or profitable to jump to that comes close to the potential of the iPhone, which of course supposedly means that Apple is going to die under its size and obsessive and unsustainable inclination to polish and “perfect” in the face of speedier, less precious, competition.

But that is so not how it goes down.

The other day Marco Arment read about Viv, the AI virtual assistant still being developed by the creators of Siri. This in particular, he told me years from now, after we’d met online, which hasn’t happened yet (Hi Marco – you dropped it in the potato salad – remember I said that), coupled with highly cited reports of the AI efforts of Google and Facebook, inspired his first post on the topic of this possible chink in Apple’s armor. It was about then that he, and a handful of others, came to their conclusion; one that was not too far off from what actually happened.

Though it wasn’t quite as simple as “Apple showing worryingly few signs of meaningful improvement or investment in…big-data services and AI…”, nor, as some had suggested, “When the interface becomes invisible and data based, Apple dies”.

Actually interfaces remained visible, tactile and exceptionally alive and well in the future. AI (via natural language interfacing) did not herald the death of the visual or tactile interface.  We used each for different things and in different places. Trust me – there are still a million reasons you’ll want to see and touch your interfaces, and maybe more importantly, a million places in which you still don’t want to sound like a dork talking to your virtual assistant.  Even in the future.

But there was some truth floating within the “big-data services” thread.

Apple mastered the hardware/software marriage. With rare exception, Apple excelled in virtually any device category it ventured into. So you might argue that so long as there were devices to build and software to make for them, even if it was indeed powered by advanced AI, Apple, with resources beyond any other company, stood a chance. But there was more going on here than the advance of AI.

There was also the ongoing fragmentation of your platforms.

20 years ago most of you still had one computer: a desktop. 15 years ago you probably had two, including a laptop. 10 years ago you added a smartphone and a “smart” TV. 5 years ago you added a tablet. Last year you added a watch. Now you have six computing devices plus peripherals, and are only a few years from adding the first real VR platforms. (Incidentally, real VR catches all the currently uncertain Silicon Valley trend-chasers and mobile hypeists off guard. VR is so not this unbelievably temporary, phone-based, turn-your-head-to-look-another-direction ridiculousness. What a joke. That’s the equivalent of the 1998 choose-your-ending interactive CD-ROM, or red and blue 3D glasses. Trust me, speaking from the future – your phone didn’t become much “VR” anything. It took a lot more hardware. If you’re investing, focus on in-home solutions and content creators; that’s where it all went down in the future. Another post.)

And do you think this platform fragmentation will stop? (Spoiler: it doesn’t.) Having to remember to put your device in your pocket is totally kludgy; you can see that even now, right? That you have to manage where all your various form factors are, and independently charge, set up, maintain and connect them all, is grossly unwieldy, right?

You should know this, because it has been commonly theorized by now: computers will continue to get cheaper and more powerful so as to become ubiquitous. In everything. Not just the so-called Internet of Things, but the interconnection of EVERYTHING. And indeed, that happened. Exponentially. And yet few were ready. Businesses were disrupted and died.

Computers became part of almost everything.

Seriously, for example, computers were in fish. And for that matter your toilet became a massively parallel processing platform… I’m not kidding, 1.84 YottaFLOP power flushers. Everything that touched it, and went into it, was measured, DNA-decoded, and analyzed in a million ways. And this, more than any other advance in medicine, led to quantum advances in health care and longevity. Who knew. There was a toilet company on Forbes Top 100 US Companies list. No, really. Though Apple never made a toilet.

Aside from your watch (and a headset which eventually you didn’t need anyway), you didn’t carry anything. Everything was a potential display, even the air, when it was dark enough. My Apple Watch was still the last and only device I carried with me (I posted about this before – it is laughable that people today think Apple Watch has no future. Oh man, just you wait.) But I’m getting ahead of myself.

This platform fragmentation, and not just AI, and not the feared loss of interface, was what ultimately changed things for Apple. Suddenly – there were thousands of device form factors everywhere. A virtual fabric. Rampant commoditization. Experience democratization.

In hindsight, it became clear that what Apple required to be at its best was a market limitation in consumers’ access to devices. Limited form factors, limited production and distribution. These limitations, which were common during the PC and mobile eras, allowed Apple to qualitatively differentiate itself. To polish and make something insanely great – in contrast to the others. To design a noticeably superior experience.

But the more the device ecosystem fragmented, the harder it became for Apple to offer uniquely valuable and consistent experiences on top of all those infinitely unique functions and form factors. It just became unmanageable. I mean Apple had an impact for sure. Companies knew design mattered. And Apple went down in history as the company that made all that so.

Your six devices became upwards of fifteen devices, at which point I think most of us just stopped counting. And the number kept growing so fast. The car was a big one. Yeah, Apple did a car. A few actually. The car was perfect for Apple then because there were still significant market limitations in that category. And Apple could focus on the qualitative experience. Later, as the dust settled, the watch, being the last device we needed to carry with us, also served as a different kind of market limitation – a focal point for Apple’s strengths.

But as device fragmentation continued to explode, as the hardware market massively commoditized, the idea of a “device” of any specific, personalized sort had begun to lose meaning. In the future, the term “device” sounds much the way “mainframe” or “instamatic” might sound to you today. Quaint, old; it’s just not a relevant concept anymore. Everything was a device. Focus instead shifted to the services that moved with you. Which was part of Apple’s problem.

Like Facebook, Apple, the wealthiest company in the world at the time, did a good job buying its way into various service categories (including AI), and innovating on some. Apple had a huge finance division; it had media and content production, and utilities; but then so did other companies by then. Ultimately, none of it was in Apple’s sweet spot.

Without discrete devices, it was services and systems that became consumers’ constant. I hope you can see that when this happens, it fundamentally changes Apple’s proposition: the so-called perfect marriage of hardware and software itself becomes an antiquated paradigm.

No, Apple did NOT die, but it became, to a significant degree, less of a focal point for all of us. And yet… maybe now, armed with this knowledge, they can change how things played out.

In the meantime I’m watching the toilet sector. And I plan to invest heavily.

The Presentation of Design

There was an excellent post on Medium recently called “13 Ways Designers Screw Up Client Presentations,” by Mike Monteiro, which contained thoughtful, if rather strident, recommendations related to the selling of design work. It was a relatively enjoyable read. I agreed with all 13 points. However, in his first paragraph, establishing the primary rationale for the article, Mr. Monteiro made a statement that caused me to choke on my coffee:

“I would rather have a good designer who can present well, than a great designer who can’t.”

Like a punch to the gut — it caught me off guard. I had to reread it a few times to make sure I’d read it correctly. After reading the article I kept coming back to that line. “Really?” I kept asking myself.
He went on to say:

“In fact, I’d argue whether it’s possible to be a good designer if you can’t present your work to a client. Work that can’t be sold is as useless as the designer who can’t sell it.
And, no, this is not an additional skill. Presenting is a core design skill.”

My emphasis added.

Undoubtedly that pitch goes over super well in rooms filled with wannabe designers who can present really well, busy account executives and anyone whose primary tool is Excel. Certainly for people who look on the esoteric machinations of designers as a slightly inconvenient and obscure, if grudgingly necessary, part of doing business.

But surely it can’t be the mantra of someone who cares supremely about the quality of the design work – about achieving the greatest design?

I never do this, but I posted a brief opinion of disagreement on this point in the margin comments of Mr. Monteiro’s article. And I would have moved on and forgotten all about having done that, but my comment was subsequently met with some amount of resistance and confusion by Mr. Monteiro and other readers writing in his defense. It frankly depressed me that there were professionals in our industry who might sincerely feel this way, and more so that the article might convince even a single talented young designer, for whom presentation is a non-trivial challenge, that this particular thought, as worded, has any industry-wide merit. And then I came across this recent talk he gave where he doubled down on the idea.

So I wanted to explain my reasoning more fully (it’s an interesting debate), but the limited word count allotted to side comments didn’t allow for meaningful explanations or exchanges (particularly by people who are as verbose as I am). So rather than pollute Mr. Monteiro’s otherwise fine article further, I decided to explain myself more completely in a post of my own. Maybe more as personal therapy than anything.

Let me first state — presentation proficiency is a useful skill. No matter your field or role, you will never do worse by having strong presentation skills. It will help you align the world in your best interest, without question. Everyone should cultivate these skills to the best of their ability.

But to what degree does this affect a designer? Does lacking it utterly obliterate one’s potential as a great designer, as Mr. Monteiro asserts? And should designers further be “had” principally on presentation skill over other attributes?

Language Logic

Linguistically speaking, his choice of the words “a good designer” and “who can present well” clearly contemplates two separate states of being. This compels one to infer that not all good designers can present well, which is further supported by the fact that Mr. Monteiro evidently turns away “great designers who can’t.”

Which leaves me wondering:
What is demonstrably “great” in a “great designer” who can’t present well, if presenting is a core design skill that dictates the ultimate usefulness of the entire role?
Wouldn’t that mean, then, that there never was any such “great designer” to begin with? That this designer must have been, rather, a “poor designer” for lacking presentation skill?
Perhaps a better way to say what I believe Mr. Monteiro meant is:

“There is no such thing as a great designer who can’t present well, because presenting is a core design skill.”

On the one hand this revised statement at least avoids contradicting itself, but on the other I still absolutely disagree with it because, to me, it inexorably expands into the following thought:

“I would rather have a designer who has relatively weaker creative problem-solving, conceptual, aesthetic and technical skills so long as he can, alone, persuade the client to pay for the work, than I would a designer who has vastly superior creative, conceptual, aesthetic and technical skills  who unfortunately happens to lack presentation skill.”

Based on his specific wording, I think Mr. Monteiro would have to concede, at the very least, that design and presentation are separate, independently measurable skills, unrelated to one another except within the context of what he prioritizes – in this case the selling, as opposed to the designing, of the work.

Part of what troubles me, then, is that no other option appears to exist for Mr. Monteiro except that every designer present and sell his own work – full stop. And that the quality of the design work is naturally the first thing that should be compromised to enable this.

And I think that’s an unnecessary, limited, unrealistic supposition.

When Design Requires Explanation

Great design does not exist in some vacuum, opaque and impenetrable until, thank God, some good presenter comes to our rescue and illuminates it.

Nor is presentation inexorably required in order to perform the act of designing. If it were, that would mean that a tongueless person, who also perhaps lacked the ability to play Charades, could never be a designer. Which is ridiculous, of course. Admittedly, I cannot name any tongueless designers who also cannot play Charades, but I trust that within the expanse of probability such a person could nevertheless exist.

But what about basic language and cultural barriers?

I now work and live in Switzerland with a team of highly international designers: German, Swiss, Swedish, French, Ukrainian, British. And so perhaps I see this more acutely than Mr. Monteiro, who lives and works in America. But the native languages and references of these great designers are all quite different – and this would obviously affect their ability to present to, say, an American audience. If I valued their universal presentation skill above their great design skill, well – there would be no team.

That said, it would be interesting to see Mr. Monteiro present to a roomful of native Chinese executives. I wonder whether he would attempt to learn Mandarin, or choose to have a Mandarin translator interpret his words and meaning, or ask the Mandarin speaker on his team (if he has one) to assist in the presentation. More critically, I wonder if he would be eager to define his presumed lack of fluid, confident Mandarin presentation skill as a weakness in his design, or in his skill as a designer.

I’m admittedly being obtuse here, but only to illustrate the fault in the mindset. Great design is worth defending with presentation support, and I would argue there are even those projects where, counter to Mr. Monteiro’s opinion, design actually does speak for itself.

This is because design is, in part, a language of its own. Indeed great design results in, among other things, the communication of function.

So where design is truly “great”, as opposed to “good”, its value must be nearly, if not sometimes wholly, self-evident. Great design is observable — at the very least, by the designer’s own team, for example. More on that later.

In contrast I find that design which is not “great” rather usually does require a fair amount of explanation. Enter “good design”, or worse, which may in fact require some presentation skill merely to compensate for its relative lower quality, its relatively weakened ability to self-communicate.

Supporting Talent

If you limit what you value in design talent by requiring that it absolutely be accompanied by self-sufficient sales skill, then you are shutting yourself off from some of the most creative and talented people in the world. Indeed many people become designers and artists in part specifically because their brains don’t connect with the world the way good presenters’ brains do! From my point of view it rather requires a kind of tone-deafness to the psychology of creatives not to see this.

My old friend, Sir Ken Robinson, speaks on the topic of creativity all over the world, and he often points out that exceptional intelligence and creativity take many forms. That rather, our reluctance and systematic inability to recognize and accommodate these varied forms of intelligence and creativity – our resistance to individualizing our interaction with and support of them – results in an utterly wasted natural resource. He points to many famous creative people — at the top of their respective fields — who simply didn’t fit in “the box”; they didn’t easily align with the system. And that only through acknowledgment of their unique skills and provision of personalized support could their inordinate brilliance find its way into the world. These are the people who often dominate their profession once the standardized models surrounding them are challenged to support their unique strengths.
And I suppose I feel something similar is certainly true here. From my perspective, great talent must always be nurtured and supported. Even if, no, particularly if, that merely requires the support of a presentation.

My expectation is that the people who buy into Mr. Monteiro’s stance don’t like this idea in part because, for them, it probably perpetuates an old archetype of entitled, high-maintenance designers; insulated royalty who idealistically prefer to ignore business realities and design in a bubble. Of the managers and the operational and sales functions having to serve and adapt to the designer’s whims — of having to support and compensate for someone who isn’t carrying his weight in the business sense.

In reality, the type of extra effort required to support the development of truly great creative work in any field is exhausting, and something that anyone lacking sufficient constitution gets quickly fed up with. So it must feel good, refreshing even, to be able to rally behind this concept, to shed all those feelings of subordination and responsibility, and demand that designers do that work themselves, to say:

“Designer, if you can’t sell the work yourself you’re not good enough! Because guess what, it’s always been your job – alone!”

And although that stance may feel refreshing and proactive, it’s misguided.

The Business of Design

“Work that can’t be sold is as useless as the designer who can’t sell it.”

With this excerpt from the article, here again, I take issue. Sure, in the business of design, work that can’t be sold is (usually) useless. Agreed. But why on Earth is it the only option that the designer alone sell the work? And why does that make one’s world-class, insanely-great design “useless”? This designer obviously works on a team, since Mr. Monteiro “would rather have” one of a different sort. So where is the rest of this team?

Of course in business, presentation must happen — it’s a requirement in the client-based sales of design. But how we go about accommodating that requirement within our agencies, I think, is a fair debate, and a relevant topic.

Since we have already established that great design can be identified in isolation without the accompaniment of a formal sales presentation, that means great design is observable. At the very least, it’s certainly not going to be missed by a seasoned team. Especially, I assume, by someone like Mr. Monteiro, or his fans, who have all undoubtedly worked in design for a very long time. Surely each would acknowledge being able to recognize great design work if it were shown to them without the benefit of a sales presentation?

In my teams we frankly rely on one another. Does that sound odd?

So when this truly great designer who can’t present comes to you, lays an unbelievably brilliant piece of design work on your desk, perhaps the best you’ve ever seen, and mumbles to his feet:

“Yeah, um…well, this is what I did. ….er… I uh…. don’t know what else to say. (inaudible… something about “…my mom… ”)

What does Mr. Monteiro, or any of the people who would argue with me, do?

I’ll tell you what I wouldn’t do, I wouldn’t yell:

“Somebody get a worse designer in here and start all over! Pronto!”

I would sit down patiently with this great designer who can barely put two words together, along with members of our team, and talk it through.

This is where a couple of things happen. First, it’s at this time that a strong director is sometimes called upon to be a mentor, a psychologist, a parent or a friend: to nurture, to listen and understand, to pull words and thoughts from someone whose mind literally doesn’t work that way. Yes, that sometimes takes work, but in my world-view, great design is well worth it. This is also when the team comes together to build our common language. The fact is, the whole team needs to understand the project anyway. We all need to internalize why it works and what makes it so insanely special. Each of us.

If the design is actually great, this exercise takes at most one hour. Usually quite a lot less. Rather, I find we enjoy discussing truly great work; it sets the bar. And we probably spend more time than necessary doing that because we love doing it.

And I have never in my 30+ year career been faced with a situation where someone on the team who was indeed exceptionally skilled at presenting could not assist a great designer who can’t present well.

Oh sure, it’s super-duper convenient to have great designers who are also great presenters — but those are rare creatures. Unicorns. You better believe that your search results get exponentially narrower with each search term you add. To combat this natural rarity, Mr. Monteiro claims he would rather broaden his search results by dropping the term “great design” from a search that includes “can also present”.

Whereas I prefer the reverse.

What the Client Wants

Obviously Mr. Monteiro is a busy person who runs a company that hires designers. This company cannot survive if design work is not paid for by clients. Perhaps because he has very little time, he has therefore decided that he needs his designers to be able to present well, as well as design. In fact, his preference for designers who can present is so strong, he will choose a designer with lesser design talent to accommodate that.

Hierarchically this clearly places the quality of the work below one’s ability to persuade the client to buy it.

If one were to take this to heart (and I am not suggesting that Mr. Monteiro necessarily takes his own advice in running his studio), to me this would be a very cynical, virtually dishonest, platform on which to operate a design firm that promises great design solutions. Indeed it’s a hiring platform that is perfectly engineered to systematically lower, rather than raise, the qualitative bar. One that prioritizes not the best work, but the ease of the financial transactions. And one that takes advantage of unsuspecting clients.

Where I come from that’s called selling out, and as a client, if truly great work is what I’m in the market for, any team that operates that way is the team I wouldn’t knowingly hire.

Good and great are relative of course, but in principle, I simply cannot imagine passing on what I would perceive of as great design in favor of something lesser-than just so that the rest of my team and I don’t have to put effort into assisting with a presentation. Because in the end — that’s all this boils down to — a willingness to apply the required effort to sell the greatest solution.

If you’re not willing to support a great designer with help in presentation — you might as well tell your clients you routinely compromise on quality because you don’t like to work that hard.

Surely your clients would vastly prefer having the best possible talent in the world on their project.

Common Ground

Honestly, when I originally wrote this post I just didn’t think Mr. Monteiro had yet had much opportunity to be as critical of the language of that particular statement as I am being. I guessed he might have accused me of splitting hairs. Of playing semantics. I was sure that if he were really pushed against the wall on this topic he would probably concede that this particular stance seriously needs to be re-worded. But his lengthy magnification of the idea in his recent talk makes me think he sincerely believes it, as I interpreted it.

That said, the two respects in which I think Mr. Monteiro’s language and my beliefs on this topic are in absolute alignment are:

  1. Regarding a fully independent designer, one who wishes to work in total solitude — not part of any team — contracting design skills directly to paying clients. Then I agree, having presentation skills will be critical in the event that you wish to support yourself on the sales of that design work.
  2. And that, as a universal rule, presentation is an excellent skill to nurture in yourself if you are a designer, or occupying any other role on the planet, quite frankly. Presentation is just a good skill to have no matter what field you are in or role you play. It’s a skill that will always serve you well in affecting the world to suit your interests. Everyone should do what they can to improve their presentation skills.

The lone swordsman aside, if you have even one other partner or team member, there are almost always alternatives that will allow great work to be presented and sold.

And if you are indeed a great designer, a lone swordsman, and feel genetically incapable of presenting well, I’d suggest you develop a professional relationship with a strong sales/presentation partner.

FYI — that’s generally called starting a company.

Apology

I’d like to sincerely apologize to Mr. Monteiro for being so hard on him in this piece. I’m sorry. I rather respect his thinking in every other way so I have felt conflicted the whole time writing this. I think if you haven’t, you really should read his article because it otherwise contains some solid advice and can help you be a better presenter.

…Just maybe completely skip the first two paragraphs. Truth is, the third paragraph is actually a much better opening bit.

Lastly — to any truly great designers out there who can’t present well or don’t feel comfortable doing so:

I admire and respect you. I’m hiring great designers all over the world.  Send me a message.

iOS Ad Blockers: Why Advertisers Are Suddenly Going Diarrhea In Their Pants

Apple recently released ad-blocking capabilities in iOS, and the ad and publishing industries began frothing at the mouth. Every emotion from spitting panic to disdain has been hurled into the webosphere over the capability. And as a consumer, and an ex-advertising shill, I love it.
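
If you’re wondering what Apple actually shipped, the mechanism is almost insultingly simple: a tiny Safari content-blocker extension hands the browser a JSON list of rules, and Safari quietly drops whatever matches before it ever loads. A rough sketch follows (not any shipping blocker’s actual code; the file name and the sample rules are purely illustrative):

```swift
// Sketch of an iOS Safari content-blocker extension (illustrative only).
// The extension's sole job is to hand Safari a JSON rule list; Safari
// compiles the rules and blocks matching requests on its own.
//
// A bundled blockerList.json might look like:
// [
//   { "trigger": { "url-filter": "ads\\.example\\.com" },
//     "action":  { "type": "block" } },
//   { "trigger": { "url-filter": ".*", "resource-type": ["script", "image"] },
//     "action":  { "type": "block" } }   // the "blunt instrument" approach
// ]

import Foundation

final class ContentBlockerRequestHandler: NSObject, NSExtensionRequestHandling {

    func beginRequest(with context: NSExtensionContext) {
        // Load the bundled rule list and return it to Safari.
        guard
            let url = Bundle.main.url(forResource: "blockerList", withExtension: "json"),
            let attachment = NSItemProvider(contentsOf: url)
        else {
            context.cancelRequest(withError: NSError(domain: "ContentBlocker", code: 1))
            return
        }

        let item = NSExtensionItem()
        item.attachments = [attachment]
        context.completeRequest(returningItems: [item], completionHandler: nil)
    }
}
```

That’s the whole trick. The blocker never even sees your browsing; it files its rules and gets out of the way, which is part of why publishers have nothing to negotiate with.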

I am particularly fond of the most vicious ad blockers, the so-called ‘blunt instruments’. The ones that leave gaping, blank maws between thin slices of actual content. The ones that so severely disable Forbes’ ‘welcome page’ (an interruptive page of ads feigning value with some irrelevant ‘quote of the day’) that you are required to close the resulting blank window and click the article’s original link again to see the content.

Yes, I even revel in the extra effort it requires to get past all the newly broken, well-blocked bits. It’s harder in some ways. But you know what? It’s payback time. And that extra effort? It’s a pleasure. I know that each tap and empty window is sending a message.

With every whiny press release and industry insider wailing about the “end of content as we know it,” a delightfully warm, glowing feeling washes over my insides.

I admit it, it’s an unhealthy pleasure in general. And in any other context I wouldn’t celebrate it. But here? I’m gonna party like it’s 1999. Because for all the ad industry has learned since then, it might as well still be.

This is what selfish, self-inflicted industry ruin smells like. Banners in ashes, melted trackers. A stockpile of suddenly outmoded scripts and tactics, all in embers. The dumbfounded expressions of dim-witted middlemen watching the gravy dry up.  Ah, there’s that warm glow again.

Unfortunately, ruin is what this will take.

I realize there is a risk that the arms race will result in even more devious forms of advertising, that the penicillin will result in resistant strains. But the relief for now is unquestionably worth it.

Even so, some are feeling guilt. Under peer pressure, I assume, a few creators of ad-blocking technology are trying to give a crap.

Marco Arment pulled his ad blocker from the iOS App Store after 3 days as the top seller, with, I assume, a last-minute guilty conscience.

He said: “Ad blockers come with an important asterisk: while they do benefit a ton of people in major ways, they also hurt some, including many who don’t deserve the hit.”

I believe his observation was mostly correct but his response was wrong. And his kids will probably hate him someday for leaving a sizable portion of their inheritance to someone else’s family. To wit, other excellent ad blockers have already moved in happily. At least he hopefully slept better that week.

Then there is the new “AdBlock Acceptable Ads Program”, where the previously dependable ad blocker now whitelists so-called ‘acceptable ads’ – allowing these ads through by default. They define acceptable ads as adhering to a manifesto they’ve concocted which attempts to qualify approved types of interruptions. I commend the attempt, but it is critically flawed: a fundamentally incomplete manifesto that sits precariously on an arbitrary portion of the slippery slope.

In an article posted to The Verge, Walt Mossberg wrote: “browser ads represent both an unwanted intrusion and a broken promise”. I read that and wanted to virtually high-five him, since I momentarily thought he shared a core belief. But then I kept reading and discovered that the only ‘intrusion’ he referred to was the surreptitious collection of your information, and the ‘broken promise’ was the delivery of ads that weren’t as personalized and useful as he felt should be possible.

Well, ok, he has a point, a reasonable one, but he completely misses THE point. He’s a Kool-Aid drinker debating flavors.

So, What Is the Point?

Those of you who have read this blog in the past know that my world view of interactive media has, since the early 90s, been based on a small handful of very stable principles. Interactive Axioms.

The most sweeping of all, what I call “The First Axiom of Interactive”, is that the user is, by definition, in control. “The User is your King. You, the creator, are merely a subject.”

People don’t often acknowledge that this medium would simply not even exist if delivering control to the user were not the singular, topmost goal. There is nothing inconsistent or squishy about this reason for being. Any functional capability you can point to will distill upwards to the quest for control.

The sheer existence of an affordance, a button say, anywhere on a remote control, or a website, or an app, is a promise. It’s not one that we talk about much. But the obvious, unspoken promise is that it will react predictably and instantaneously.

THAT is the promise.  Said another way, the medium itself is an affordance – and the expected result of that affordance is control.

If you remember DVDs and you happened to be in the USA, you might recall the FBI Duplication Warning at the start of every movie. Upon seeing these warnings, every one of us pressed the “skip” button. And then we subsequently experienced a moment of inner outrage because the button had been temporarily disabled, requiring us to view the FBI warning in its entirety.

The promise of control had been intentionally wrested away from us. And it felt like a violation.

Because it was.

Today interactive media is based on an even wider and more articulate provision of such control. It is a ubiquitous and fundamental condition of the medium. As such, any time anything happens that is not what we wish, we feel something similar to a sense of injustice. A violation of the medium.

So, yes, of course Walt Mossberg is right: spyware and irrelevant ads sit somewhere on the spectrum of broken promises. But what he does not acknowledge is that the mere existence of interruptive ads in the first place, ads that were not explicitly requested, is the spectrum.

That’s further the problem with the AdBlock Acceptable Ads Program manifesto. It attempts to carve out a little plateau on the slippery slope that allows for *some* control to be wrested away from you. But they miss the point, which is that sheer interruption of any kind, not degrees of interruption, is the violation. My rewritten manifesto would be very simple and would contain only one test: “Acceptable ads do not, in any way, interrupt the user’s attention.”

Acceptable ads do not, in any way, interrupt the user’s attention.

That would be acceptable.

But the problem for advertisers, then, is that such an ad will take up no screen real estate.  It will call no attention to itself. It will not seek to draw the user.

In short therefore, it will not exist – until explicitly sought out. That is an acceptable ad, because that is an ad that honors the promise of the medium.

John Gruber occasionally points to his ad partner, The Deck, as a viable ad model, intimating that it is less invasive and more relevant, and therefore an appropriate ad format. Ads, but not “garbage”. He claims not to understand someone who wants to block ads. But I hope you can see that he is still defining The Deck’s format merely by contrasting it with the grosser violations of other advertisers. Yes, it’s a degree less offensive, sure. A comparison to “garbage” ads actually makes sense because they are, after all, genetically closer, interruptive cousins. But we are not comparing it in context to, say, the content the user sought out in the first place. Because if we did that we would see that such an interruptive ad is still quite a lot further away.

If you’re an advertiser, or an interruptive-ad-funded writer or publisher, I’m sorry if your livelihood may yet suffer as a result of ad blockers. That’s no one’s goal. But it’s you who’ve chosen to base your livelihood on such a patently inauthentic payment format, one that defiles the very medium it exists in. Tidy and convenient though it may have seemed for you at the start.

It’s a kind of Faustian bargain. Content creators agree to include interruptive advertising to afford creation of their content or derive wealth. But the ads are, by definition, not the content. I seriously doubt a single one of these content creators would choose to include an interruptive ad on the merit of the ad alone. Which reveals a truth.

That interruption of the user’s quest, of the user’s wishes, is not allowed in this medium. If you break this rule – you must accept the penalties.

You say, “But ads are the necessary cost of receiving content!”  No, actually they are not. It’s the cost of receiving your content. And if you stop, unable to afford creation of your content any longer, don’t worry, someone else will be there to take up the slack.  And I think you know that.

Do you seriously think that without advertising, content creation will go away? Please. It will result in industry upset, perhaps. It will inspire more authentic payment systems, or not. But it won’t go away. Fees from advertising are not a prerequisite for the creation of content.

All these publishers and content creators who complain about the bluntness of the ad blockers, arguing about which interruptive ads should be blocked, are already working way outside the true use of the medium. Ignoring the basic fact that they stand on stolen ground to begin with. They rather seem to be suggesting that there is a way to break the law of the medium in a good way. They remain hopeful that they can remove maybe just a little of your control. And that, apparently, should be totally ok with you.

Well, sorry, I appreciate the work many of you do – but you’re wrong. It’s not ok. You have merely gotten away with the violation until now.

Authentic Advertising

Authentic advertising (if you can even call it advertising) requires an advertiser to be part of the very business it’s selling. To promote the product through authentic interaction with the product itself (I’ve written about this before). And/or to create something that is so inordinately valuable and powerful that it will be sought out. To become the very content, services and products that people want.

To create authentic advertising you must embrace that you must be CHOSEN (or ignored) by the King. If you interfere in any way in your King’s journey to suit your own interests – even daring to appear when the King doesn’t wish it – you are a violator. A criminal.

Since you are not allowed to present yourself until invited, authentic advertising is hard. Much harder than the ad industry is accustomed to.  Traditional interruptive ads need only be good enough that users maybe won’t look away after their control has been wrested away. That kind of traditional, interruptive advertising of course is much easier to produce.

But rather, honest-to-god valuable content that people might be willing to pay for, or invest their time and networks into, takes the same effort, risk and expense that developing a successful product does.

Do not confuse this with ‘native advertising’, as it’s been disingenuously referred to, which is little more than a cheap ad aping the appearance of content.

Authentic advertising in interactive is not easy to produce, and it’s often the subject of inordinate luck. This means advertisers wishing to defensibly game that system have to resort to great expense and extravagance. And precious few are willing to do that.

Conversely, interruptive advertising requires little to no luck, and demands roughly the same work and expense that advertisers are used to applying. The difference is that these advertisers are still, unbeknownst to them, spending wildly. The resource these advertisers have been spending rampantly, without qualm, is your goodwill. Your willingness to continue to tolerate their violations.

Well advertisers, you’re in a deficit now. A really big, fat overwhelming deficit. Hope you enjoyed the ride, because interruptive advertising has drawn down your accounts and built tremendous debt.

And ad blockers are just the latest means of putting holds on your well-worn credit cards.

An Open Letter to the Creators of the New Muppet Show

Dear Disney and ABC,

Holy crap, how could you assholes so monumentally blow it!?

Whoa… I’m… I’m so sorry, that just came out. I totally meant to intelligently build up to that point.  Sorry, let me start over:

Dear Disney and ABC,

There is precious little joy in our world. Such little magic and wonder. Far too little care-free innocence.

When we connect through media today, our lives are more commonly associated with terrorism, disease, economic meltdowns resulting from greed, natural disasters, child shootings, police brutality, suffering, intolerance, hatred and the ongoing horror and assault of the seedy bottom half of real life.

Hold on, before you jump to writing me off as some “the world is going to hell, whatever happened to the values of this great country” kind of person, who worries that video games are perverting our youth or thinks television should be censored or whatever, please know that I enjoy violent video games and I like seeing movies and TV shows that push the limits of acceptability. Things change. Boundaries get crossed. That’s progress, art and evolution. Live and let live. No, I truly don’t give a crap about breaking the rules and pushing the limits of inappropriateness.

But over the last several weeks I’ve discovered that I actually do give a crap – a really big, fat, loving crap, about the Muppets.

A New Muppet Show!

When I heard that you were bringing the Muppet Show back – with a new, more modern take – I remember thinking, “YES! It’s about time!”

The news was reason enough for celebration. I told my wife and some friends and they too shared the very same sentiment. Who wouldn’t? One would have to carry a very cold, dark heart not to feel that way.

But as I have been exposed to your new pre-show clips, teaser, pseudo press releases and marketing, a slow dawning has crept over me. It took a while, but I have begun to feel something unsettling that has taken some effort to define.

In fact my initial elation has now settled into deep disappointment.

Although as of today the show has yet to premiere, I believe (but continue to hope you’ll prove me wrong) that you have mistranslated and misunderstood Henson’s great, iconic legacy. Worse, I believe you may be in the process of undermining it, surely unintentionally. But surely nonetheless.

Until I saw the teaser for the New Muppet Show (UPDATE: the video has been pulled), I confess I took the long-standing values of the Muppets and the reality of their world quite for granted; how they behaved, how they deftly interacted with our real world at arm’s length. They made it seem effortless.

And one of those values, a key attribute, perhaps the most critical of all, is that the Muppets never – ever – fell below THE LINE.

The Line

When I talk about the line, I mean the line above which the Muppets remain arguably pure creatures at heart, connected to the joyful world they came from, and largely driven by the pursuit of friendship and the spreading of happiness. Sounds a bit corny – but in fact isn’t. And the line below which the Muppets would become just another part of real life’s ugly bottom half: inconsistent, undependable, self-centered and cynical.

Naturally, good humor demands breaking boundaries, stepping over some line. And at their strongest, the Muppets were so very good at doing that. The Muppets always broke boundaries. They understood magic – of playing with the medium (whatever medium they were contemplating), of breaking the 4th wall, of being surprisingly self-referential. And in so doing they concocted their own, very recognizable, brand of magic.

And I imagine, aside from Henson’s obvious challenge of inventing, or rather raising, the art form to a new level, it must have taken tremendous hard work and commitment to that vision to maintain that position – above the line.

Henson, Oz and company always stayed above the line. Dependably. They clearly worked very, very hard to find new humor and boundaries to break above the line. Satire and social commentary are entirely possible above the line, of course. Tear-inducing laughter is possible above the line. Pixar, for example, dependably and successfully lives only above the line. Boundaries can be broken above the line. And like it or not, the Muppets made clear that being above the line was a fundamental tenet of the brand.

As I reflect on my feelings upon seeing your new teaser, pseudo PR and marketing for the new show, I believe the tone with which you are approaching the new Muppet Show, the direction in which your underlying compass is aimed, is fundamentally inauthentic and careless.

I further argue that this approach you’ve taken, this direction, required very little effort. You merely chose the easy path.

You’ve just dragged The Muppets way below the line, sacrificing everything that came before, in exchange for a few cheap laughs.

You chose the dark side.

Did you think, for one second, that the temptation to do what you have just done wasn’t there all along for the original teams, just as it was for you?

Do you think that living in 2015 somehow suddenly makes such a thing a good idea? Perhaps that only now would we “get” such a joke? Give me a break.

Yes, yes, the Muppet Show was made “for adults”. And quite often the show would venture briefly into comically dark places. But these ventures always fell short of the true cynicism of cold reality. Never would the Muppets cross the line into the seedy underbelly of real life, into genuine cynicism, grime and fear. The Muppets were never cynical, they were never crass. They always reassured us with a deft wink. They were always tethered to a balloon that kept them floating, kept them from descending. And in so doing they defended and insulated us from the bottom half of life. That was their role and very reason for being, after all! They gave us a world that we could escape into. One that wasn’t reality.

Why then have you concluded that being an “adult” today must equate to being cynical, inwardly conflicted and cold?

I have no doubt that as the new show and its tone were being developed, words like “edgy, fresh, and real” were used. Which always, bar none, sounds like a good idea in any boardroom. Who wants to be the opposite of that? Further, you probably felt the writing of the later Muppet movies and presentations was growing stale, and you must have talked at length about breaking through that staleness with a “modern, fresh take”.

We can debate the quality of much of the writing in later movies and years. Some of it was admittedly a bit tired and occasionally not very good. Not as good as Pixar. Some of the later movies suffered from a lack of meaningful stakes for the characters to respond to (one might also argue this was true of Most Wanted). But, and I suppose this is one of my main points, I do not believe, as you must, that the Muppets’ innate lack of cynicism, and consistent distance from the grotesqueness of the bottom half of real life, was the reason the material was not compelling enough, not fresh. That, I believe, would be a misdiagnosis on your part.

What’s worse, by depicting The Muppets in our often tragic, imperfect real world via the reality-TV, documentary style, and by imbuing the characters with peculiar new behaviors inconsistent with their legacy, you have, perhaps unintentionally, established that anything the Muppets may have been before – any purity or innocence they may have shown us in the past – was actually unreal, an illusion, just show. By rewriting their characters and motivations so that they can exist in our world, you have introduced the idea that whatever we thought they were – their original personalities – were just parts, roles they’d been playing before. That only now are we seeing the “real” Muppets, their real lives, behind the scenes, for the first time. The suggestion is that they were actually like this all along; we were just never exposed to what they do off-camera before now.

As a result, you have instantaneously undone and debunked Henson’s entire great legacy.

What a shame.

Yes, Miss Piggy and others have made occasional appearances in our “real world” for decades. And those appearances were always met with heightened enthusiasm from audiences. It’s pretty transparent that this partly inspired your approach. But it was not so simple. These appearances, these occasional overlaps with our real world, were always a delicate balancing act. Those brief appearances were a magic trick that only worked because their world, the Muppets’ world, still existed somewhere. Piggy was only visiting us, breaking the 4th wall of their world. And we were all in on the joke. Her dips, so precariously close to the line on those occasions, were handled with extreme care and awareness.

And on a superficial level, yes, most of the movies even appear to happen in our world – but they never did. It was always the Muppets’ own world; it only looked a lot like ours. On the Muppet Show and in every movie, human actors always joined the Muppets in their world. Not the other way round.

But by eliminating the existence of the Muppets’ safe, insulating world, as the new show appears to have done, you have scraped them raw. Laid them bare. There is no Muppet world left to poke through or join into. The magic has been surgically removed. Like pulling the skin off a live animal, we now see, with discomfort, the organic muscle, ligaments and bones hidden underneath. And one instantly feels that we were never meant to see any of this.

Fozzie

I think the appearance of Fozzie in your teaser best captured this problem, and as such it caused my heart to sink most of all.

So, Fozzie has a sexy human girlfriend. Um… ok. It feels quite out of character and slightly creepy, but alright, I’m sort of with you. Maybe, MAYBE that could work; Miss Piggy was briefly attracted to William Shatner years ago (although that WAS absolutely in character for her).

Howard the Duck and Hot Girlfriend

But then you bring us to the real home of his girlfriend’s disapproving human parents, where they reveal their “secret” romance, she calls him “Honey”, and Fozzie panics over his pending unemployment. All that stark reality is run through a way-below-the-line, icky exploration of a kind of cartoon bigotry and a clear intimation of sexuality. Such ideas were somewhat funny in the Howard the Duck comics – a comic world specifically designed to explore these topics – but they feel utterly out of place, even grotesque, here, because this is not some random bear. This was, we all thought, our innocent, beloved Fozzie. But it slowly dawns on us that, no, this is not our Fozzie; it’s a strange imposter. Even Fozzie’s changed voice, no longer performed by the brilliant Frank Oz, might have passed by without bothering us much, but packaged within a sweaty real world it just makes us feel queasy.

As a result of this scene, we are not so subtly asked to consider Fozzie’s underlying drive for survival and even his reproductive needs. Mental images of the two of them “sleeping together” arise naturally as a side effect of your scenario. Oh, sorry, you didn’t even think of that, right? Images of the two of them having sex? Never occurred to you? Uh huh, sure, how sick of me to even think that. Yeah, right, convince yourself of that.

Face it, the joke of that scene – the uncomfortable humor – comes from the fact that a real woman is really truly dating a bear puppet. Ha ha ha. The rest of the mental images are just falling dominoes.

Gonzo’s love affair with Camilla the Chicken never had this kind of real-world context and intimated follow-through.

You’re showing us inauthentic things that we, as viewers, never wanted to see. You’ve pushed deep below the line and opened a big ol’ can of slimy worms: if Fozzie can be unemployed, does he get unemployment checks? Since he’s in our world, well, one must assume that he does. If he can’t afford food, does he go hungry or beg? Does he mooch off his friends? Either way, this is all a kind of undeniable, below-the-line thread that just feels icky. But why stop there? One is almost encouraged to wonder all sorts of things – perhaps whether Fozzie gets feces stuck to his fur when he defecates. Does he wear a condom? No, don’t feign surprise at all this. Please see that this is the natural result of pushing below the line as you have. You have broken those boundaries, and opened these thoughts, not us – though undoubtedly you would feign surprise at such an implication.

Good god, Disney! You have whole buildings full of departments in place devoted to ensuring that Mickey is never caught in compromising positions. How dare you turn around and do this to our dear old friends.

Kermit and Piggy

Really? Now Kermit actively dates and is “hopelessly attracted to pigs”… in general? Ugh, too much information, yet again.

Gonzo was insane – and he loved chickens. That worked. It never dipped into sexuality, because he was truly an eccentric. But Kermit is sane; he’s our hero, the reasoned one – and therefore this new intimation that Kermit sleeps around is once again moving towards the too-real grotesque. And this focus on his and Piggy’s TMZ-style breakup – as though they actually ever had a relationship – seems totally misguided.

The garbage can in the foreground says it all.

Yeah, yeah, we get it: if they are “broken up” it gives the characters and narrative something to build to. And it’s tabloidy, which plays into the whole theme and, maybe most important of all, serves as free marketing.

Brilliant.

Hey, you’re the writers, but throughout the Henson years the beauty of their story was that, well, Kermit and Piggy never really had an official relationship to break up over. They flitted around the idea, flirted with it you might say, Piggy always on the offensive, and Kermit never quite connecting. Like so many other things, even their relationship always hovered just above the line. Their so-called relationship was a slippery and elusive concept. Totally non-committal. By design.

But by bumbling into the Muppet universe flailing, mouth-breathing and drooling as you seem to be, you are knocking over these delicate constructions without much care for the original rationale or their great benefits.

Gonzo

This failure, like the others, is so obvious and easy to see.

In your teaser you chose, of all characters, Gonzo to criticize the use of the “office interview” format.

It should have been funny. I wanted to chuckle, because the observation was a good one. But I found myself wincing a bit. Don’t you see? You chose perhaps the only character in the entire Muppets main cast, next in line perhaps to Animal, who lacks enough self-awareness to even hold such an opinion in the first place. So it just feels strangely “off” somehow. Not to mention that Gonzo’s, well, “GONZO” has been completely denied. Now Gonzo is suddenly just some calm, rational guy? Seriously?

Hello?! Gonzo is many things, but calm and self-aware was never – and I mean like ever – one of them.

Gonzo recites the seven-times table… balancing a piano. Naturally.

This is the guy who overenthusiastically agrees to every insane, wrong idea, no matter how absurd – in the name of art. That’s who he is. You see that, right? The guy who shoots himself from cannons, wrestles a brick, tap dances in oatmeal, recites Shakespeare while hanging from his nose, plays bagpipes from the top of a flagpole, recites poetry while defusing a bomb, hypnotizes himself, and wants to go to Bombay, India to become a movie star because it’s not the “easy way”.

Did you, even for a second, consider that just maybe Gonzo was the completely wrong guy to feel vaguely self-conscious and introspective enough to care about such a subtle little narrative device? Do you really think he, of all characters, would really care? Piggy, sure; Fozzie, maybe; but freaking Gonzo?! You’ve lost me. The reason that joke wasn’t funnier (and it should have been) is because you chose the wrong guy. You went fully against his long-standing character. And we all felt it. Maybe younger viewers don’t remember enough to care, and maybe most long-time viewers couldn’t quite put their finger on why – but sure enough, it just felt weird. And it’s another example of your apparent inability to defend and shepherd The Muppets at a most basic level.

What Worked

Lest you think I did not appreciate any of your effort, there were what I would call a few “authentic, above-the-line, classic Muppet moments” in the teaser too.

Miss Piggy’s walk, smack, into the glass, leaving a nose print. A brilliant moment.

That creepy “incredibly obscure character” with glasses who talked with his tongue between his teeth was funny as crap.

I’m conflicted on Rowlf wearing the big surgery collar. I laughed authentically at that. And although that doesn’t sit above the line, maybe ON the line, a very careful, self-aware break like that can clearly work, so long as the Muppet universe is still intact.

Work Harder

Look, truth is, the idea that, say, a puppet is dating a real girl, probably has sex, meets her disapproving real parents, and maybe loses his job and all that – that’s actually really funny.

Ted smokes weed.

Seth MacFarlane’s Ted did that a couple of years ago, and it was a good movie. A teddy bear that has sex, smokes weed, swears – it’s totally juvenile and funny as hell. I loved it.

And then there’s “Meet the Feebles”, Peter Jackson’s obscure, disturbing puppet movie that includes a frog prone to Vietnam War flashbacks, a pornography-directing rat, suicide, adulterous three-ways, alcoholism, drug-running and all sorts of other far-below-the-line topics.

A gun to the head of a character in “Meet the Feebles”

But the Muppets? In one of your bumpers, Rowlf talks about being followed by cameras into his bathroom at home. It’s kind of funny, but so now Rowlf uses the can? This is a very slippery slope you’re on.

In the old days these topics could never find their way into the Muppet consciousness. The Muppet world was intentionally disconnected from all that. But now, with the Muppets stripped of their world, these real-life concepts begin to co-mingle, and indeed they will.

And that’s not who the Muppets are. You should have known better.

“Hey – you’re making all this up! We never said they had sex, and we definitely would never show them doing drugs or taking a dump!!” you say.

No? But that is the world you have directed them to inhabit. All these ideas, and a lot more, exist in our real world, and you have placed them in that exact real world. You have provided no buffers. No signals. No insulation from the edges of that very cold reality. Indeed, your every creative decision has amplified it. You have said, “They live with us here, amidst our real-life challenges, filth, and complexity.”

What a monumentally bad call.

If that’s the show you wanted to make, why, oh why, didn’t you just work harder, take some risk (e.g., by not relying on the automatic, positive associations we all have for the characters), and instead invent a new set of colorful characters of your own – some who could more naturally play out the decidedly un-Muppet-like topics you are shoehorning our old friends into? I would have actually enjoyed seeing that show, to be honest. I would have tuned in, and I’m sure I would have laughed. Ironically, the connection to the Muppets and every other pillar of innocent puppetry would have been obvious. But at least then you would arguably have been protecting and defending something the world still needs.

We needed the Muppets that Jim Henson left us.

But instead you chose to exploit our gentle, rainbow-yearning friends, dragging them into the same old daily gutter that we were all, ironically, trying to escape. All in trade for a couple of easy, if uncomfortable, laughs and the benefit of a built-in audience.

“Hey, the Muppets were all about cheap laughs.” True, but you did it at the utter expense of their very long and hard-earned legacy. You threw that gentle, magical, innocent legacy under the bus of reality. And that is not where The Muppets great and endearing humor belongs.

In doing so you have so far proven yourselves unworthy guardians of these beloved icons.

And from Disney of all places. Hard to imagine.

Well, I’ve made my point ad nauseam. So all I can do now is beg you, please, please be more careful.

These are our dear friends.

And corny as it sounds, the world still needs that rainbow.  Maybe now more than ever.