The situation involving OpenAI, its board, and Sam Altman is an intriguing story that raises ethical leadership questions. The responsibilities of OpenAI’s board, Sam Altman, and Microsoft during these events are significant. Nonprofit boards, like OpenAI’s, have a special duty to ensure the organization meets its mission.
If the board believes the CEO, in this case Altman, isn’t fulfilling that mission, it has cause to take action. The central question is whether the parties prioritized OpenAI’s mission, led ethically, and aligned their actions with the organization’s objectives.
According to OpenAI’s mission statement, the organization aims to “ensure that artificial general intelligence benefits all of humanity.” This mission is nuanced and significant, especially given the distinction between artificial general intelligence and artificial intelligence more broadly. If OpenAI were on the verge of achieving its own definition of artificial general intelligence and believed it would not benefit humanity, that belief could have contributed to the recent events.
In a podcast interview before his dismissal, Altman expressed skepticism about the term artificial general intelligence, calling it “ridiculous and meaningless” and preferring to speak of “really smart AI.” The board may have considered the term and its precise definition crucial, making this a potential source of disagreement.
While OpenAI’s mission statement reads more like a vision statement, emphasizing aspiration and foresight, the debate over which label applies is secondary. The ethical crux is that the board must act in ways that align with the organization’s purpose. A cautious approach to AI progress may not entice every investor, but some may value, and seek to invest in, precisely that prudence. Pursuing a goal aligned with OpenAI’s mission, even an unconventional one, carries ethical weight.
The board is obligated to actively oversee OpenAI’s activities and manage its assets responsibly. In the nonprofit sector, boards are entrusted with the institution’s well-being for the benefit of the community it serves—in this case, all of humanity. OpenAI, which describes itself on its website as a research and deployment company, cannot fulfill these roles if a significant portion of its staff departs or if funding is inadequate. The board’s responsibility therefore extends beyond mission alignment to the practical conditions necessary for the organization’s sustainability and success.
Additional reporting on the board’s dysfunction has revealed longstanding tensions, including a specific disagreement over a paper written by a board member. The paper was perceived as critical of OpenAI’s approach to AI safety and complimentary toward a competitor. The board member defended the paper as an exercise of academic freedom, but publishing such work while serving on the board can be viewed as a conflict of interest that breaches the duty of loyalty. In such cases, resigning from the board may be the more appropriate course of action.
As CEO of OpenAI, Altman was obligated to put the organization’s interests first. Reports of his involvement in launching two other companies raise questions about whether OpenAI was his top priority. While it isn’t yet clear whether this contributed to his communication issues with the board, the evidence suggests he was actively engaged in getting these other ventures off the ground, potentially diverting attention from OpenAI.
Altman demonstrated an awareness of his responsibility to lead a sustainable organization and keep employees satisfied. He was actively involved in a tender offer, a strategic move to secure additional investment and allow employees to monetize their shares. His willingness to engage with industry-wide issues, such as regulation and standards, also showcased his leadership approach. Balancing these diverse activities, however, is a crucial aspect of corporate leadership, and the board may have perceived a failure to strike that balance in the months leading up to his dismissal.
Microsoft, for its part, appears to be prioritizing its own interests, as its strategic decisions show. By hiring Altman and Greg Brockman, who resigned from OpenAI in solidarity, offering to employ more OpenAI staff, and planning continued collaboration, Satya Nadella has focused on safeguarding Microsoft’s interests: harnessing the technological potential of AI that OpenAI articulated and securing the talent needed to realize it. Nadella’s decision-making, reflected in positive market responses and in his support for Altman’s return to OpenAI, demonstrates a comprehensive approach that keeps Microsoft’s interests and future at the forefront amid rapidly unfolding circumstances.
The board’s bold statement that allowing the company’s destruction would be consistent with the mission is unlikely to find favor among OpenAI employees, who may be more inclined toward profit. The board, intentionally structured to exclude profit interests, stands in contrast to employees with a financial stake in the company. This tension reflects a fundamental conflict within OpenAI: a board committed to a mission-driven agenda and employees who may prioritize financial considerations.
Ann Skeet, senior director of leadership ethics at the Markkula Center for Applied Ethics and co-author of the ITEC handbook, “Ethics in the Age of Disruptive Technologies: An Operational Roadmap,” offers insights into the ethical challenges posed by disruptive technologies.