Early in my journalism career, before I joined a well-known media outlet, I called an investment banker for insight on a business deal in the industrial sector – my new beat, which I was still learning. During our call, he used a phrase I hadn’t heard before, so I asked, “What does that mean?” His response would color my reporting for years afterward. “If you don’t know, I’m not sure why we’re talking.” And then he hung up.
We’ve all heard the well-trod phrase “There are no stupid questions,” but in journalism, as in life, sometimes you’re made to feel there are. His implication – that I should know all the ins and outs of the industry, all the corporate catchphrases and lingo, before having the privilege of speaking to him – changed the way I reported. After that call, I didn’t feel I could ask potentially stupid questions. In fact, I found that in the finance world, when discussing mergers and acquisitions with bankers, analysts, and executives, it’s best to present yourself as having more information than they do. Details such as the terms being discussed, the parties interested in the deal, and who is involved are all powerful pieces of knowledge that can be traded to fill in the pieces you don’t know.
But this means you’re often missing out on truly understanding what is happening in the moment. By having to google terms later, or ask a friendly source or expert to explain a concept and why it matters, you’ve missed an opportunity to dig deeper.
Once I landed at Bloomberg, things changed. I was no longer begging sources to talk to me; sources now came out of the woodwork to speak to me (or more precisely, to speak to Bloomberg). Now I had the confidence and privilege to say to an expert, “I don’t understand what that means,” without them hanging up on me.
Years later, after I had switched sides and was working for a communications agency, I was helping to prepare a biotech CEO for an interview at the JP Morgan Healthcare Conference, the Super Bowl of biopharma industry events. He had landed an interview with Andy Pollack, the New York Times’ long-time healthcare reporter. Andy had joined the newspaper in the early 1980s and had been covering the biotech industry for more than a decade when the interview took place. Even getting the interview was a massive PR win, regardless of the outcome.
Understandably, the executive, his team, and the PR agency were nervous about how it would go – would he have a detailed, specific question about the company’s science that would catch the CEO off guard? We prepped for hours, making sure every potential angle and topic area was covered. It felt like getting ready for the SATs – trying to shove too much information into your brain as quickly as possible and hoping you could regurgitate it in as quotable a manner as possible.
When we got into the small, dim media room at the top of the conference hotel, we sat down across from Andy, pencils at the ready. Where would he go first? Would it be some obscure or difficult question about a competitor or scientific study we hadn’t seen yet? Instead, he simply said: “Pretend I don’t know anything about healthcare or science – explain to me what you do.”
It's a brilliant question. Here’s someone with years of experience who could probably explain any number of complex topics with accuracy, detail, and examples. And yet he was challenging us to explain something complex and detailed simply.
I find the best and smartest people can do this – break down their area of interest into simple and memorable terms that anyone can understand, and do it without an air of haughtiness. The smartest people typically want to help others understand their passion, to share their interests in a way that turns others into believers as well. I appreciated that the Bloomberg writing guidebook exhorted us to write for both the expert and the novice while remaining informative. It’s actually quite hard to do.
As technology develops, it has become exponentially easier to find the basic information you need to understand a new topic. As a child in the 1980s, my parents bought us, at great cost, the entire set of Encyclopedia Britannica – dozens of leather-bound books with gilt edges – no doubt hoping we would devour the thousands of pages and commit them to memory, which would lead to straight As, acceptance to an Ivy League school, and a prestigious and well-paying career. I think I flipped through the cellophane human body pages and read sections on ninjas and samurai. The rest of the books sat dusty and in the dark for as long as we owned them.
There were even TV ads for Time Life Books that, for some reason, I still think about often – books that provided instructions and pictures on how to fix basic things around your house: shelves, sinks, and so on. My view of manhood as a child was that you either grew up knowing all this stuff – plumbing, electrical work, carpentry, car repair – or you had to call a professional, feel deep, immense shame, and probably remain alone for the rest of your life. Thankfully, we now have YouTube, where I can rewatch those Time Life Books ads and laugh before watching a video on how to repair my clogged dishwasher, with no guilt or embarrassment in front of my family.
We no longer live in a world where you need extensive resources to gain deep knowledge of a subject. You don’t need to own piles of books, take classes, or memorize facts and figures across a wide range of topics. You no longer even need to spend hours googling, searching, and reading. Now there is AI.
I’ve been amazed at how fast AI – specifically tools like ChatGPT – has emerged onto the scene and immediately offered a wide range of use cases. Unlike other technologies that have debuted during my lifetime – personal computers, the Internet, social media, mobile phones – AI seemingly arrived fully formed and ready for use.
Those other technologies took years if not decades to be functional enough for everyday use. Not AI. With AI, I can ask a question on any topic and get an instant answer. I can have it respond to me as if I were a child or an educated adult without a background in a subject. I can have it summarize the latest news stories, scientific research, or online conversation and present me with a quick overview.
AI erases the privilege of being able to say, “I don’t understand.” You can ask it and get an answer with no shame or need to pretend you already knew that.
There is much fearmongering about what the rise of AI will mean for humans. Will it become malevolent and destroy us all, as depicted in so many sci-fi movies? Is it all hype, a shiny new toy that will prove useless to most people? And then there’s me: am I deluded in thinking that no longer needing to master a wide swath of information on a panoply of topics is a positive thing?
This question was asked long ago. Plato wrote about the dangers of writing to humanity. In Phaedrus, he recalls (or invents) a conversation between his teacher Socrates and Phaedrus in which the former declared: “If men learn this, it will implant forgetfulness in their souls. They will cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves but by means of external marks.”
Writing has existed (on and off) for millennia and, for most of that time, has been available only to the elite and wealthy. The printing press, the spread of public libraries, and public education have all helped in expanding access to knowledge. The Internet was supposed to expand it in ways we’ve never experienced before. While there are still paywalls and barriers to entry, you can pretty much find anything you want online. Are we smarter as a society?
As someone who ran communications for companies developing gene therapies over the past few years, I find it hard to tell. I used a service that would scrape news sources and social media looking for anyone talking about gene therapy. I wanted to see if any key people were driving the online conversation on this topic and how we might be able to participate. I imagined an online utopia where doctors, scientists, government officials, patients, executives, and experts would be having engaging conversations and debates. Instead, most people were screaming IN ALL CAPS that Bill Gates was trying to poison you with gene therapy COVID-19 vaccines.
I’m not exaggerating – you can set your search results to return news and social posts that are scored for sentiment, either positive or negative. I would filter out all posts tagged negative, and typically 90 percent of the content mentioning “gene therapy” would vanish from the results. Of the remaining 10 percent of the hundreds of posts I reviewed every week, half were negative as well, just phrased with sarcasm the sentiment filter couldn’t pick up on.
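The filtering I describe amounts to a simple pass over tagged posts. Here is a minimal sketch of the idea – the post structure and sentiment labels are my invention for illustration, not the actual monitoring tool’s data format:

```python
# Hypothetical monitoring feed: each post carries a sentiment label
# assigned by the service's classifier. Illustrative data only.
posts = [
    {"text": "Gene therapy trial shows durable responses", "sentiment": "positive"},
    {"text": "BILL GATES IS POISONING YOU", "sentiment": "negative"},
    # Sarcasm reads as positive to a naive classifier:
    {"text": "Oh sure, gene therapy vaccines are totally 'safe'", "sentiment": "positive"},
    {"text": "FDA advisory panel meets on new gene therapy", "sentiment": "neutral"},
]

# Drop everything the classifier tagged negative. What remains still
# includes sarcastic negativity the sentiment model mislabeled.
remaining = [p for p in posts if p["sentiment"] != "negative"]

for p in remaining:
    print(p["text"])
```

The sketch shows why the filter only gets you so far: the sarcastic post survives because the label, not the meaning, is what gets filtered.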
Misinformation is powerful, and the ability to think rationally and evaluate information is a key skill society needs to function effectively in the future. AI can help.
Many people are now content to live in their information bubbles, where they receive positive affirmations and feedback loops for their positions. It’s difficult to break out of this – check out the Reddit page “QAnon Casualties,” where people share depressing stories of family members and friends who became so taken by conspiracy theories that they destroyed all their relationships and became scared, angry people.
How can AI help? Well, perhaps this is a pipe dream, but if AI could be trusted as a neutral arbiter of information (yes, I know, a big “if”) and people used it to inform themselves or question their beliefs, then maybe – just maybe – falling for misinformation wouldn’t happen as frequently.
Beyond big societal changes, AI can be useful for many small daily practices. As a reporter, it’s valuable to have a friendly source or expert who can break down a subject for you. The Internet can help, too, but sorting through dozens of links – many now sponsored and unhelpful – takes time and effort. For me, AI has become that expert. I use it in my roles as both a freelance writer and a communications consultant. When I need to describe the delicate dance of RNA, proteins, and cancer drugs, I check with ChatGPT to see if what I’m saying is accurate. I ask it to find relevant scientific studies on a subject I’m researching to help prove a point, and it provides me with a list of titles, abstracts, and links.
Unlike a Google search, which requires some discernment between advertisements, sponsored results, biased sources, and junk content, AI could provide a trustworthy result. That isn’t necessarily the case today – AI-generated results on Google still serve up misleading or even outright fake answers – but with time, training, and more data sets, AI can become a go-to source for answers on any number of topics.
For me, AI has torn down the privilege once required to say “I don’t understand,” by letting anyone ask the question in private. You don’t need to worry about the embarrassment of not knowing something or being unsure how to grasp a concept. AI will give you an answer with no judgment or shame. If people use it in this manner, then maybe the self-segregating cults and online cul-de-sacs will start to lose their staying power. Perhaps we might even start to agree on a shared truth and reality again.
At the very least, it could keep a snooty expert from hanging up on you. And wouldn’t that be grand?
Speak to me as if I were a small child, or a Labrador – always great for a “hot start” to a conversation.