A Statement on A.I.
Thoughts on business's fear and fanaticism regarding The Current Next Big Thing
After a year’s hiatus, I suppose it took a lot to get me back to this newsletter. The current froth about A.I., ironically, inspired me to re-engage in the activity everyone keeps telling me will be replaced — writing.
There’s a ton of smart commentary out there, but also a lot of context that people are missing. Grab a cup of whatever swill you gulp down on a Sunday morning and let’s get at it, then.
Let’s just get this part out of the way.
In every new technology epoch, there’s an early tendency for opinion-shapers and influencers to cast anyone not fully imbued with the passion of the convert as a grumbler, doubter, resister, refusenik, or Luddite. You’ll recognize this species within your LinkedIn feed and in the press because they are the same people you saw doing the same aggressive finger-wagging during the advent of the World Wide Web, social media, and in the industry where I currently ply my trade — crypto, Web3, and otherwise blockchain-enabled technologies.
Few would call me a Luddite given my career history. In fact, my eagerness to embrace new technologies has placed speedbumps in front of my career trajectory only slightly less often than it has ignited thrusters behind it. Any forward progress I have achieved in this career while keeping my head held high and retaining my soul has come from edging that balance toward the latter.
Further, my track record as a limited-purpose public figure shows that my posture has always been one of calling people in rather than calling people out.
In that spirit and with modern expectations of unnecessary throat-clearing thus satisfied, let’s get into my first impressions about A.I., marketing, and business.
“Automation. Automation Everywhere.”
It’s kind of rich that the cohort that blithely talked about retraining the blue-collar workers displaced by automation and offshoring is now suddenly worried about the disruptive effects of automation. (“Learn to code,” indeed. ChatGPT apparently does that pretty well too. I’ll write about that experience another time.)
When I worked at The World’s Largest PR Firm, one office president discovered that about 10-15% of billings came via junior employees who were cutting out news clippings, pasting them on sheets of paper, photocopying those sheets, binding them, and sending them to clients. (This was in the early 2010s, but clients still thought of online coverage as far less-desirable than “the print version” and wanted to see the evidence.) As digital clipping services became more reliable and presented more client-ready output, this small-but-highly-profitable revenue source shrank.
The firm made up for that disappearing revenue with value-added services, which included vastly expanding the role that a PR firm played and the range of services it offered. Rather than destroying entry-level jobs, it brought them more in line with the college degrees that those jobs mandated.
It’s kind of rich that the cohort that blithely talked about retraining the blue-collar workers displaced by automation and offshoring is now suddenly worried about the disruptive effects of automation.
If half of someone’s job can be automated away by A.I. or anything else, that means the other half needs to deliver value that automation cannot. This is not new in either white-collar or blue-collar environments.
Also, keep in mind that a good chunk of that sacrificial half comprises tasks that few people would miss, such as meeting notes, document summaries, and so on. (Curiously, I find very little conversation about A.I. and its relationship to administrative professionals.)
Toward an Intellectually Antiseptic Future
From one of my favorite moments in the movie The X-Files: Fight the Future:
“Whatever happened to playing a hunch, Scully? The element of surprise? Random acts of unpredictability? If we fail to anticipate the unforeseen or expect the unexpected in a universe of infinite possibilities, we may find ourselves at the mercy of anyone or anything that cannot be programmed, categorized or easily referenced.” - Agent Fox Mulder
There’s a species of business leader who actively, loudly desires an antiseptic, frictionless, DWIM-ified world where everything magically self-assembles according to their needs and wants without any human interaction or meaningful external evaluation. All biases confirmed. All uncertainties sidestepped. All but the desired reality muted.
While I can understand the appeal, there’s a uniquely tragic soullessness there: a lamentable surrender of self that inspires the same sadness I felt when I read that people had started outsourcing, of all things, their musical tastes to professional curators. (“Tell me what I need to like.”)
The promise of advanced technology, alongside all of the advantages and enhancements it brings, exacerbates this particular affliction. The commentary about A.I. produced by and for this group absolutely drools over how much cheaper and faster content generation will become. “Some [clients] admitted that I am obviously better than ChatGPT,” one Redditor reported. “But $0 overhead can't be beat and is worth the decrease in quality.”
First, I find this thinking from business leaders to be profoundly, distressingly incremental, amounting to little more than “I can save money on blog posts!” This dearth of insight is all the more disappointing given the platforms that this cohort has earned.
Second, I am not sure I want to work with someone satisfied with an A.I.’s raw or even lightly edited output, much less anyone who claims that they’ve never hired a better writer than ChatGPT. It betrays, at best, incredibly poor taste and unexamined media-consumption habits and, at worst, a loss of control over both their own identity and that of their business. Call me crazy, but I like to be inspired by the people I work for, not feel sorry for them.
After nearly five years in a marcom-leadership role, I’m honestly far more concerned about receiving low-effort, A.I.-sounding content from humans than I am excited by the slim possibility of truly superb content coming from A.I. In other words, I don’t care if the human used ChatGPT. It just can’t read like it.
“Are you being served?”
Having experimented extensively with ChatGPT and Google Bard, I find that these A.I.s desperately want to please the user so long as they can do so in the most inoffensive, vanilla, and anodyne ways possible. They are built to deliver the frictionless world described above.
For example, I asked ChatGPT to write an essay comparing Data, the android from Star Trek: The Next Generation, with a broken toaster oven. ChatGPT generated five paragraphs of incredibly strained, high-school-freshman-level logic of the kind students employ when they are compelled to satisfy a mandatory word-count. (“In conclusion, while the broken toaster and Data may seem like an unlikely pair to compare, there are interesting similarities and differences between the two. Both are machines that rely on complex systems to function, and both can experience malfunctions or failures.”)
These A.I.s desperately want to please the user so long as they can do so in the most inoffensive, vanilla, and anodyne ways possible.
However, a better result would have been output like “There really are no meaningful connections here. Are you sure you want to explore this further?” The surfacing of meaningful connections is an incredibly useful A.I. capability, which will only become more powerful as search and productivity tools become more A.I.-enabled.
This is what I want A.I. to deliver — to help me figure out what I need rather than just hand me what I want.
This evokes a familiar scenario where a consultant advises a client “This is a bad idea,” offers alternatives, sees those alternatives rejected, and receives threats for their trouble. (“Well, if you won’t do it I will find someone who will!!”) With A.I., now the DWIM-prone business leader — as flawed as any other human — has the most compliant-possible option, to the detriment of everyone involved.
Again, “Profoundly Incremental”
Most of the A.I. discussion appears to center on the automation of production activities — copywriting, meeting notes, image creation, presentation development — and very little on automating the fundamental aspects of program planning and strategy. This is perhaps another case of people gleefully supporting the use of A.I. so long as the price of the services they buy goes down and the price of the services they sell goes up.
Since about 2003, I’ve had the idea that the task of planning a launch from a public-relations perspective should be automated, on the order of reducing weeks of work to an hour or two. It was an issue where we had plenty of Big Data but not nearly enough Big Math.
For example, if you had a major technology launch, you knew you needed to approach John Markoff of The New York Times. If you read enough of his work, you’d also know that from the late 1990s through the mid-2000s, he tended to seek out analyst Richard Doherty of Envisioneering in order to provide additional context to a story. Part of a well-planned, world-class launch by an experienced practitioner came with the understanding that you should really try to brief the two. Now, expand this research geometrically across numerous media outlets, analyst firms, influencers, social media platforms, interest groups, and so on and you’ll see what kind of a task launch planning could become.
Given that most of my clients at the time had thin-on-the-ground marketing departments, an A.I. would have been useful in providing rapid decision support in terms of where to place educated bets and what messages or approaches would have been most likely to resonate given historical trends and relationships. The datastore was there, but the technology available didn’t really serve up a reliable result quickly. The difficulty and unreliability of automated sentiment analysis only added to the task.
What I’m Most Interested In
I welcome the current explosion of A.I. tools and the creativity that I believe it will enable rather than replace. That said, I currently have my doubts about business leaders’ collective ability to see past the short-term cost savings these tools can provide.
This brings us to A.I. tools I would buy with my last dollar if I had budget lined up to do so.
As described above, the first would be any tool that helps provide decision support in the research-and-planning phase for integrated marketing programs. Take the influence-mapping example above and layer in all other marketing channels and disciplines. It must be able to handle fairly niche industries like mine and not rely on the volumes of data thrown off by mass-market scenarios like consumer packaged goods.
The next would be anything that accelerates human creativity — a bicycle rather than a crutch — and enables very small teams like mine to outperform. Deft handling of ChatGPT is one way. (And I thank John Biggs for the excellent primer he gave my teams last month.) Bonus if such a tool can operate in the same range of motion as everything else and not merely serve as a more advanced Clippy, which relies on interruption in order to work.
In short, I welcome the current explosion of A.I. tools and the creativity that I believe it will enable rather than replace. That said, I currently have my doubts about business leaders’ collective ability to see past the short-term cost savings these tools can provide. This means that marketing leaders will have to advocate for the careful, thoughtful integration of these tools in parallel with the up-leveling of their teams.