OpenAI researchers warned board of AI breakthrough ahead of CEO ouster, sources say
Nov 22 (Reuters) - Ahead of OpenAI CEO Sam Altman's four days in exile, a group of staff researchers wrote a letter to the board of directors warning of a significant discovery in artificial intelligence.
According to two individuals familiar with the matter, the previously undisclosed letter and the AI algorithm it described were key developments that preceded the board's ouster of Altman, a prominent figure in generative AI. Before his triumphant return on Tuesday, more than 700 employees had threatened to quit and join backer Microsoft (MSFT.O) in solidarity with their fired leader.
The sources cited the letter as one factor in a longer list of grievances by the board that led to Altman's firing, among them concerns over commercializing advances before understanding the consequences. Reuters was unable to review a copy of the letter, and the staff members who wrote it did not respond to requests for comment.
After being contacted by Reuters, OpenAI, which declined to comment, acknowledged in an internal message to staff a project called Q* and a letter to the board that preceded the weekend's events, one of the sources said. The message, sent by long-time executive Mira Murati, alerted staff to certain media stories without commenting on their accuracy, according to an OpenAI spokesperson.
Some at OpenAI believe Q* (pronounced Q-Star) could be a breakthrough in the startup’s search for what’s known as artificial general intelligence (AGI), one of the people told Reuters. OpenAI defines AGI as autonomous systems that surpass humans in most economically valuable tasks.
Given vast computing resources, the new model was able to solve certain mathematical problems, the person said on condition of anonymity because they were not authorized to speak on behalf of the company. Though the model performs math only at the level of grade-school students, acing such tests made researchers very optimistic about Q*'s future success, the source said.
‘VEIL OF IGNORANCE’
Sam Altman, CEO of ChatGPT maker OpenAI, arrives for a bipartisan Artificial Intelligence (AI) Insight Forum for all U.S. senators, hosted by Senate Majority Leader Chuck Schumer (D-NY) at the U.S. Capitol, September 13, 2023. REUTERS/Julia Nikhinson
Researchers in the AI community view mathematics as a frontier in the development of generative AI. Today's generative AI is good at tasks such as writing and language translation because it statistically predicts the next word, so answers to the same question can vary widely. Conquering the ability to do math, where there is only one right answer, would imply greater reasoning capabilities resembling human intelligence, and could be applied to novel scientific research, AI researchers believe.
Unlike a calculator, which can solve only a limited set of operations, AGI can generalize, learn and comprehend. In their letter to the board, the researchers flagged AI's prowess and potential danger without specifying the exact safety concerns. Computer scientists have long discussed the danger posed by highly intelligent machines, including the possibility that they might decide to take actions detrimental to humanity.
The letter also referenced the work of an “AI scientist” team, formed by merging the earlier “Code Gen” and “Math Gen” teams, which is exploring how to optimize existing AI models to improve their reasoning and eventually perform scientific work.

Altman led the effort to make ChatGPT one of the fastest-growing software applications in history and secured the investment, including computing resources, from Microsoft needed to bring OpenAI closer to AGI.
In addition to unveiling new tools at a recent demonstration, Altman told a summit of world leaders in San Francisco that he believed major advances in AI were within sight. A day later, the board fired him.