AI NEWS REPORTING ISN’T SOLVING THE PROBLEM

Dr. Jason Lawrence is a technical and professional documentation writing specialist in Cheshire, Connecticut.

I was surprised to find AI news reported by the following news sources. However, the sensational way they referred to “scared” OpenAI computer scientists made me suspicious of the articles. So I read the articles, as well as the original press release to which they refer.

Researchers, scared by their own work, hold back “deepfakes for text” AI.

Sean Gallagher writing for Ars Technica

Too Scary? Elon Musk’s OpenAI Company won’t Release Tech that can Generate Fake News

Edward C. Baig writing for USA TODAY

Better Language Models and Their Implications

Press release from OpenAI

The news articles describe a press release from Elon Musk’s OpenAI initiative. OpenAI wrote software for an AI that can write news articles based on small amounts of input text. The machine learning model is named GPT-2. OpenAI’s computer scientists provide some amazing samples of GPT-2’s writing, and I’m impressed by the capability of the machine learning software they have created. With only a short text prompt, GPT-2 can write a complete news article.
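To make that concrete, here is a minimal sketch of prompting the small, publicly released gpt2 checkpoint to continue a news-style passage. It assumes the Hugging Face transformers library, which postdates the press release, and the prompt is my own, loosely echoing one of OpenAI’s published samples; treat it as an illustration, not OpenAI’s own code.

```python
# A minimal sketch of prompting GPT-2 to continue a news-style passage.
# Assumes the Hugging Face "transformers" library and the small public
# "gpt2" checkpoint (not OpenAI's withheld full model); the prompt is
# invented for illustration.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "A train carriage containing controlled nuclear materials was stolen in Cincinnati today."
result = generator(prompt, max_length=120, do_sample=True, top_k=50)

print(result[0]["generated_text"])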

Unfortunately, GPT-2 hasn’t yet figured out how to write factually correct articles. It draws from real information, but the machine learning model can’t be expected to put that information together in a way that makes sense. That said, it makes far more sense than a piece of AI shareware I found dating from 2013. OpenAI’s GPT-2 writes with complete sentences, accurate spelling, and correct punctuation. It just doesn’t know what it is talking about. Yet!

Where the Two News Articles Went Wrong

The two articles go wrong because their headlines misrepresent several things, and the stories that follow stay on the path the headlines set.

First, they misrepresent the mission of OpenAI. OpenAI is all about staying ahead of the development curve to establish ethical best practices for the rest of the market. Musk co-founded the organization because he couldn’t trust that the free market would generate artificial intelligence that put people first. I seriously doubt a superintelligence is going to come along and ruin humanity; the energy required to overcome mankind wouldn’t be worth what it would gain as a result. Regardless, Elon Musk just wants the world to have access to conscionable software. OpenAI’s mission is to release that ethical software, and that is exactly what they did.

Second, they misrepresent what OpenAI was actually trying to accomplish. I’m not some blind, faithful OpenAI believer; I just understand what OpenAI set out to do. If some free agent wanted to use AI to generate an endless avalanche of fake news, the code needed to anticipate that abuse and build tools to stop it is now available. OpenAI states in the press release: “the AI community will eventually need to tackle the issue of publication norms in a thoughtful way in certain research areas.” It is that thoughtful way that Musk hopes will counter the machine of enterprise business software. The fact that OpenAI withheld the components that make GPT-2 most capable just means they released the ethical part of the code, exactly as their mission suggests.

Third, they omit the software mechanism in an attempt to heighten the mystique of AI. OpenAI released a technical paper and the code, and wrote a press release to explain how GPT-2 works. They also provided samples of its output. They explained exactly what they withheld: “We are not releasing the dataset, training code, or GPT-2 model weights.” The GPT-2 code is available, but the means to mass-produce fake news were not released. They presented this as a responsible, staged release. An update to the press release two months later laid out the long-term release schedule that followed the original release of code in March.

Finally, the articles insinuate that OpenAI’s motive was fright: OpenAI computer scientists were scared of their own creation; they did not believe humanity was ready for the responsibility; they could not let the code get into the wrong hands. These are the worst kinds of sentiments from science fiction. In reality, the news writers skipped the first sentences of the press release, the part where OpenAI articulated their motive. OpenAI didn’t want to release the model they trained or the data they used to train it; they released the code alone because they didn’t want to promote malicious applications. They were not a bunch of scared Dr. Frankensteins terrified of their monster.

The Truth about GPT-2

The scientist named the population, after their distinctive horn, Ovid’s Unicorn. These four-horned, silver-white unicorns were previously unknown to science.

Now, after almost two centuries, the mystery of what sparked this odd phenomenon is finally solved.

Dr. Jorge Pérez, an evolutionary biologist from the University of La Paz, and several companions, were exploring the Andes Mountains when they found a small valley, with no other animals or humans. Pérez noticed that the valley had what appeared to be a natural fountain, surrounded by two peaks of rock and silver snow.

This passage is an excerpt from GPT-2’s article about a herd of unicorns discovered in the Andes Mountains. GPT-2 writes very well. However, GPT-2 doesn’t make any sense, unless you can make sense of Ovid’s four-horned, silver-white unicorns.

The most amazing thing is the way GPT-2 uses 40 gigabytes of internet text to predict the next word that matches the pattern of the words it has already selected. It doesn’t predict content, but that isn’t really the point; after all, GPT-2 is simply software that models language. The point is that we are at the beginning of something incredible, and while the news media is supposed to find the scoop, it is actually missing the real story.
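To show what “predicting the next word” means in practice, here is a minimal sketch that ranks GPT-2’s most likely next words after a prompt. As above, it assumes the Hugging Face transformers library and the small public gpt2 checkpoint, and the prompt is my own; it illustrates the mechanism rather than reproducing OpenAI’s code.

```python
# A minimal sketch of GPT-2's core mechanism: scoring every candidate
# next word given the words so far. Assumes the Hugging Face
# "transformers" library and the small public "gpt2" checkpoint;
# the prompt is chosen for illustration.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The scientist named the population, after their distinctive horn,"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # one score per vocabulary entry, per position

# Rank the five most likely continuations at the final position.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)
for token_id, p in zip(top.indices, top.values):
    print(f"{tokenizer.decode(token_id)!r}: {p.item():.3f}")
```

Generation is just this step in a loop: pick one of the high-probability words, append it to the prompt, and score again. That is all those 40 gigabytes of internet text taught the model to do.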