OpenAI’s New Safety Team to be Led by CEO Sam Altman and Board Members

Amid high-profile resignations and public scrutiny, OpenAI has established a new Safety and Security Committee to steer its commitment to AI ethics and safety

The past few weeks at OpenAI have seen a flurry of activity, most notably a reshuffling of its safety oversight. Following the disbandment of a key safety team and several high-profile departures, the company is charting a new course with its recently established Safety and Security Committee. The new body is led by board members Bret Taylor and Nicole Seligman along with CEO Sam Altman, and will oversee the firm's commitment to AI safety and security. The move is significant, arriving on the heels of two resignations that set the tech community abuzz.

The restructuring came after Ilya Sutskever and Jan Leike, two influential figures at OpenAI, stepped down. Their departure from the so-called "Superalignment Team," which was tasked with aligning AI products with human values, prompted reflection across the industry. Jan Leike, in particular, made headlines with a candid farewell posted on X (formerly Twitter), in which he expressed concern that the company's commitment to AI safety was losing ground to its rapid pace of growth and product development.

These concerns arise as OpenAI, in partnership with Microsoft, continues to push the envelope on AI capabilities. The reorganization therefore invites a pointed question: is the new committee a strategic pivot to genuinely strengthen AI safety, or a performative measure in response to growing scrutiny?

The timing of the reshuffle also coincides with another incident that caught public attention: the brief introduction and sudden withdrawal of a new voice model by OpenAI. The voice bore a striking resemblance to that of actress Scarlett Johansson, and an uproar followed when Johansson revealed that OpenAI had previously asked for permission to use her voice to train an AI model, a request she declined. The voice's uncanny similarity to her own, she said, left friends and media outlets convinced it was hers, underscoring the ethical dilemmas of AI development.

The company also made headlines with its decision to relax the enforcement of nondisparagement agreements for departing employees. The change followed criticism that such agreements constrained former staff from voicing concerns after leaving, particularly concerns about the company's direction and safety protocols.

In response to these events, the new Safety and Security Committee has set a 90-day period to review and potentially overhaul OpenAI's safety and security practices. According to a recent company blog post, the initiative is part of a broader commitment to lead with transparency and responsibility, especially as OpenAI develops the successor to its GPT-4 model.

OpenAI’s approach to discussing these changes has been cautiously optimistic. The company acknowledges the challenges ahead and expresses a readiness to engage in meaningful dialogue about the balance between innovation and ethical responsibility. As this period of introspection and restructuring unfolds, the tech world watches closely, hopeful yet vigilant about the path OpenAI will tread in its quest to harmonize advanced technology with public trust and safety.
