OpenAI has now released the artificial intelligence it once said was too dangerous to share, suggesting the startup may have exaggerated the danger.

In February, the artificial intelligence research firm OpenAI announced a new algorithm called GPT-2, a system capable of writing remarkably coherent paragraphs of text.

But rather than releasing the AI in its entirety, the team published only a smaller model, out of concern that the more powerful version could be exploited maliciously, for example to generate fake news or spam.


On Tuesday, however, OpenAI announced in a blog post that it would share the algorithm in its entirety, saying the organization "had observed no compelling evidence of abuse thus far."

Still Not Perfect

In its post, OpenAI said it was aware of "conversation" about the possible use of GPT-2 for spam and phishing, but had never seen anyone actually misuse the released versions of the algorithm.

That may be because GPT-2, while one of the best text-generating AIs available (if not the best), still cannot produce material that is indistinguishable from text written by a human.

Those future algorithms, OpenAI cautions, are the ones we will need to watch out for.

As the company wrote on its website, "We believe synthetic text generators have a larger possibility of being abused as their outputs grow more trustworthy and consistent."