AI could help ease many writers into retirement
When I was a freshman in high school, my geography teacher had us read The World Is Flat: A Brief History of the Twenty-first Century by Thomas Friedman. That book described the entire process of globalization and how the world was “flattened,” or in other words, how the playing field was leveled for workers to compete against one another across the globe.
Listening to his lectures about the book, I could tell my teacher wanted us to be able to find our place in the globalized economy since, at that time, we were just coming out of the Great Recession. And I think that, at least on a subconscious level, I understood the general message of the class, because Friedman described a group of experts, people who could explain things well and leverage their knowledge in the globalized world, who would be immune from offshoring and outsourcing.
Of course, he was alluding to academics, journalists, columnists, consultants and, well, basically people like himself. And while I’m not anywhere near the same level of prestige as Mr. Friedman, that also includes folks like me now. So, in a way, a class I took when I was 14 helped me find a niche in a highly competitive “flat” world.
At least it seemed that way until AI took off and programs like ChatGPT emerged. ChatGPT is a software application developed by the American tech firm OpenAI and released in November of last year. It’s basically a chatbot that you can talk to, ask questions of and use for other tasks, like writing code from plain-language prompts.
It learns from input from users (and its developers) but does have some limitations, such as not being able to give its own opinions or provide accurate information about current affairs after 2021. Still, the AI is famous for its vast array of knowledge, and it has gone on to make the news nearly every day by passing high-level university exams, including an MBA exam from the Wharton School at the University of Pennsylvania. This has universities scrambling to figure out how their “business model” can survive.
If AI is now coming for academics, might it come for the other supposedly immune jobs Friedman talked about? I put ChatGPT to the test by asking it to write in the style of Friedman himself, and it produced a summary of the book I mentioned at the beginning of this piece that reads almost as if the original author had written it.
“The world is flat, my friends! Yes, you heard it right – flat. In this hyper-connected, digitized, and globalized era, geographical boundaries and physical distance have become irrelevant. Today, a person sitting in Mumbai can work for a company based in New York, collaborate with team members in London, and deliver results seamlessly. That’s the power of the flat world we live in,” the AI wrote in the first paragraph of a TL;DR version of Friedman’s most famous work.
Some commentators have started to have fun with this. Hu Xijin of China’s Global Times tested it and got a pretty hilarious result. If you are familiar with Hu’s work, then you’d know that the AI nearly hit the nail on the head when it took a crack at copying the style of China’s most famous commentator.
After seeing this, I tried it on myself. ChatGPT described me as “a freelance journalist who covers politics, foreign policy, and culture. He is known for his critical analysis and commentary on the United States’ domestic and international affairs. Blankenship’s work often emphasizes the country’s political polarization, its role in global affairs, and its domestic social and economic issues.”
OK, that’s fair. Then I asked the program to write a column about US foreign policy in my style. And so it did. Here is the conclusion of the column:
“Overall, the US foreign policy seems to be lacking a clear and coherent strategy. It is important for the United States to engage in robust diplomacy and work with allies to address global challenges like climate change, terrorism, and economic inequality. A more thoughtful and strategic approach to foreign affairs will be critical in the years ahead if the US wishes to maintain its position as a global leader.”
If you’re interested in comparing this to my work, feel free to click my name here on RT to see my author page or check out my columns on CGTN. While I think this reflects, generally, something I would say, it’s definitely not as sharp as I can be. For example, I don’t really care whether the US maintains its position as a global leader. I actually wish it wouldn’t, and I have said so publicly on numerous occasions. So maybe I’m safe for now and ChatGPT can’t replicate my style.
It could also be that the AI doesn’t have as much to work with compared to Hu or Friedman, who have both been in the game a lot longer than yours truly. It could thus be filling in the gaps with some internal biases, as noted by CGTN’s senior sci-tech editor Gong Zhe, who claimed that the AI has a notable left-wing bias. His experiments with the technology suggested as much, and he even got it to praise Joe Biden. Maybe that explains why the AI made me sound like a hippy-dippy liberal, which, let me state clearly, I am not.
Conservatives have criticized the chatbot for allegedly being “woke” on culture war issues. For instance, one conservative tried to get it to write a story about why drag queen story hour is bad for kids, to which it replied that it couldn’t on the grounds that it would be “harmful.” But if prompted to write about why the controversial teaching technique is good, it gave a long and detailed account of a drag queen named Glitter who taught schoolchildren a valuable lesson about inclusion. It also refused to write a story about Trump beating Biden in the 2020 election, saying sternly that this event did not occur.
Despite the jabs from the right, I think ChatGPT can and will convincingly pass itself off as some high-profile writers when prompted, which raises questions about the future of political commentary. Of course, the AI generally doesn’t produce any novel opinions or analyses, which means that, as of now, it is relegated to producing knock-offs. So, for the time being, I think we political commentators are safe.
ChatGPT itself shares this opinion – or rather, when posed the question from this article’s title, says it can’t have an opinion:
“As an AI language model, I can analyze data and provide insights on a variety of topics, including politics. However, I don’t have personal opinions or biases, and my responses are based purely on the data and information available to me.
Political commentators, on the other hand, are human beings who provide their own analysis and opinions on political events and issues. They can bring their own unique perspectives and experiences to their analysis, which can be valuable for their audiences.
In summary, while I can provide information and insights on political issues, I cannot replace the role of political commentators in offering their own personal analysis and opinions.”
But then again, it can write fiction and play pretend, so how far are we from seeing its bylines in famed outlets?