OpenAI is a non-profit organization founded on December 11, 2015 by Elon Musk, CEO of Tesla, The Boring Company and SpaceX, and Sam Altman, then president of Y Combinator. Musk left the organization's board in February 2018, citing a potential conflict of interest with his work at Tesla, Inc.

OpenAI was started with $1 billion in pledged funding, and its objective is to help develop safe artificial general intelligence (AGI) that benefits all of humanity. It originated from the founders’ fear that AI could pose the “biggest existential threat” to humans.

At a summer meeting of the US National Governors Association in Rhode Island in 2017, Musk said:

“AI is the rare case where I think we need to be proactive in regulation instead of reactive. Because I think by the time we are reactive in AI regulation, it’ll be too late. AI is a fundamental risk to the existence of human civilisation.”

OpenAI stands for collaboration and knowledge-sharing in artificial intelligence. Its patents and research are supposed to be freely available to the public, except in cases where releasing them could compromise human safety.

Sponsors of OpenAI include the following organizations:

YC Research
Open Philanthropy Project

OpenAI’s launch announcement described its senior leadership, including the founding members, as follows:

“OpenAI’s research director is Ilya Sutskever, one of the world experts in machine learning. Our CTO is Greg Brockman, formerly the CTO of Stripe. The group’s other founding members are world-class research engineers and scientists: Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, John Schulman, Pamela Vagata, and Wojciech Zaremba. Pieter Abbeel, Yoshua Bengio, Alan Kay, Sergey Levine, and Vishal Sikka are advisors to the group. OpenAI’s co-chairs are Sam Altman and Elon Musk.”

In recent news, OpenAI has developed a language model called GPT-2, which can generate convincing fake news articles. It was trained on a dataset of 8 million webpages and is capable of writing realistic articles in the style of its training data.

Soon after the news broke, Elon Musk tweeted:

“I’ve not been involved closely with OpenAI for over a year & don’t have mgmt or board oversight. I had to focus on solving a painfully large number of engineering & manufacturing problems at Tesla (especially) & SpaceX. Also, Tesla was competing for some of same people as OpenAI & I didn’t agree with some of what OpenAI team wanted to do. Add that all up & it was just better to part ways on good terms.”

The organization has decided not to release the fully trained GPT-2 model, citing the potential for malicious use, but it will release a much smaller model for research purposes.

“Due to our concerns about malicious applications of the technology, we are not releasing the trained model. As an experiment in responsible disclosure, we are instead releasing a much smaller model for researchers to experiment with, as well as a technical paper.”

Although GPT-2’s objective is simple (predict the next word given all of the previous words in a prompt), it is quite scary to see what the program can come up with. Here’s one example:

In a shocking finding, scientist discovered a herd of unicorns living in a remote, previously unexplored valley, in the Andes Mountains. Even more surprising to the researchers was the fact that the unicorns spoke perfect English.

The scientist named the population, after their distinctive horn, Ovid’s Unicorn. These four-horned, silver-white unicorns were previously unknown to science.

Now, after almost two centuries, the mystery of what sparked this odd phenomenon is finally solved.

Dr. Jorge Pérez, an evolutionary biologist from the University of La Paz, and several companions, were exploring the Andes Mountains when they found a small valley, with no other animals or humans. Pérez noticed that the valley had what appeared to be a natural fountain, surrounded by two peaks of rock and silver snow.

Pérez and the others then ventured further into the valley. “By the time we reached the top of one peak, the water looked blue, with some crystals on top,” said Pérez.

Pérez and his friends were astonished to see the unicorn herd. These creatures could be seen from the air without having to move too much to see them – they were so close they could touch their horns.

While examining these bizarre creatures the scientists discovered that the creatures also spoke some fairly regular English. Pérez stated, “We can see, for example, that they have a common ‘language,’ something like a dialect or dialectic.”

Dr. Pérez believes that the unicorns may have originated in Argentina, where the animals were believed to be descendants of a lost race of people who lived there before the arrival of humans in those parts of South America.

While their origins are still unclear, some believe that perhaps the creatures were created when a human and a unicorn met each other in a time before human civilization. According to Pérez, “In South America, such incidents seem to be quite common.”

However, Pérez also pointed out that it is likely that the only way of knowing for sure if unicorns are indeed the descendants of a lost alien race is through DNA. “But they seem to be able to communicate in English quite well, which I believe is a sign of evolution, or at least a change in social organization,” said the scientist.
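The next-word objective described above can be illustrated with a toy model. The sketch below is a simple bigram predictor in plain Python: it counts which word follows which in a training corpus and greedily predicts the most frequent successor. This is purely illustrative and bears no resemblance to GPT-2’s actual Transformer architecture, which learns the same objective over 8 million webpages with far richer context than a single preceding word.

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count, for each word, which words follow it in the corpus."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, prev_word):
    """Return the most frequently observed successor of prev_word."""
    if prev_word not in counts:
        return None
    return counts[prev_word].most_common(1)[0][0]

def generate(counts, start, length=5):
    """Greedily extend a one-word prompt, one predicted word at a time."""
    out = [start]
    for _ in range(length):
        nxt = predict_next(counts, out[-1])
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)

# A tiny made-up corpus, just to exercise the functions.
corpus = "the unicorns spoke perfect english and the unicorns lived in the valley"
model = train_bigram(corpus)
print(generate(model, "the"))
```

Even this trivial predictor produces locally plausible word sequences; GPT-2 applies the same prediction objective with a vastly more powerful model, which is why its output reads like coherent prose.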

What’s frightening about this simple piece of fiction is that it reads like a well-researched news piece, minute details and all. Of course, it cannot cite or link to other sources and therefore cannot be considered genuine; but to an ill-informed reader, it’s as good as news from a reputable media outlet.

Fortunately, OpenAI has decided not to release the trained model; otherwise, the implications might have been severe. Imagine a bad actor getting hold of it and disseminating false information disguised as genuine-looking news stories. Isn’t that what all the fuss was about when Facebook and Google were accused of failing to flag fake news on their platforms?

The development of GPT-2 raises questions about whether OpenAI can actually keep such products under wraps. What if leaks put the technology in the hands of malicious actors? What guarantee is there that the full model will remain unreleased, given that the code and weights already exist?

Such questions raise doubts about whether organizations like OpenAI can hold on to their secrets indefinitely. Food for thought?