Former OpenAI employee Jacob Hilton recently shared a revealing account of his departure from the organization, describing the coercive nature of non-disparagement agreements within the company. Hilton’s Twitter post detailed his experience of being compelled to sign such an agreement to retain his vested equity, an arrangement he found both restrictive and disappointing despite leaving the company on amicable terms.
His disclosure has sparked significant reactions and discussions, particularly from figures like Stella Biderman, who echoed and expanded on his concerns.
“I invite OpenAI to reach out directly to former employees to clarify that they will always be provided equal access to liquidity, in a legally enforceable way.”
– Jacob Hilton
Hilton’s Account
Hilton recounted that upon leaving OpenAI over a year ago, he was presented with a non-disparagement agreement that included a non-disclosure clause about the agreement itself. The terms were clear: signing the agreement was necessary to retain his vested equity. Hilton expressed his frustration at having to surrender his right to speak freely about the organization, even though he had no immediate intentions of criticizing OpenAI.
The situation took a turn when, following investigative reporting by Kelsey Piper of Vox, OpenAI contacted Hilton to release him from the non-disparagement agreement. Hilton highlighted the importance of such investigative work in holding powerful entities accountable, particularly in fields with transformative potential like artificial intelligence. He stressed the need for major AI labs to ensure protections for whistleblowers, including binding commitments to non-retaliation.
Hilton also pointed out that OpenAI retains the ability to restrict the sale of equity, effectively rendering it worthless for an unspecified period. Although OpenAI has stated that former employees have historically been allowed to sell their equity at the same price regardless of their current employment status, Hilton argued that the company’s past use of liquidity access as a coercive tool leaves many former employees fearful of speaking out.
Stella Biderman’s Reaction
Stella Biderman, a researcher for Booz Allen Hamilton and EleutherAI, and part of the open-source software community, added her voice to the conversation with a series of tweets. Biderman criticized OpenAI for coercing individuals into signing restrictive agreements, noting that many in the community had been similarly pressured. She emphasized that such agreements suppress the rights of individuals and prevent them from discussing important issues publicly.
Biderman highlighted the broader implications of these agreements, pointing out that in cases like hers, the threat of ruinously expensive litigation, rather than loss of equity, is used to intimidate individuals. She compared this tactic to the suppression former OpenAI employees face through non-disparagement agreements. Biderman’s comments underline a critical issue within the tech industry: the use of legal tools to silence dissent and protect organizational interests at the expense of transparency and accountability.
Broader Implications for AI Development
The revelations by Hilton and the subsequent reactions expose a significant ethical concern within the field of AI development. As companies like OpenAI work on technologies with profound societal impact, transparency and the ability to speak out about potential issues are paramount. Non-disparagement agreements that silence former employees can hinder the public’s understanding of the ethical and safety considerations involved in AI development.
These agreements can also create a culture of fear among current employees, deterring them from raising concerns internally or externally. For AI labs, fostering an environment where employees can freely discuss and critique their work is crucial to ensuring the development of safe and beneficial AI systems.
The Role of Investigative Journalism
The role of investigative journalism, as highlighted by Hilton, is critical in bringing such issues to light. Journalists like Kelsey Piper play a vital role in uncovering practices that may otherwise remain hidden, prompting organizations to reassess and potentially change their policies. Hilton’s release from the non-disparagement agreement following media scrutiny exemplifies the power of public accountability.
Reactions
Jacob Hilton’s recent disclosure about the coercive nature of non-disparagement agreements at OpenAI has sparked significant reactions from various figures in the tech community. Among the notable voices is Neel Nanda, the Mechanistic Interpretability Lead at DeepMind, who expressed his disbelief and criticism of OpenAI’s practices. Nanda highlighted that OpenAI did not offer any payment for signing the non-disparagement agreements, relying instead on the threat of losing vested equity. This practice, according to Nanda, starkly contradicts CEO Sam Altman’s claims of ignorance about the coercive measures.
Nanda’s tweet emphasizes the improbability that Altman was unaware of the situation, suggesting that the coercive tactics were likely a known strategy within OpenAI. His comment underscores the broader issue of transparency and ethical conduct within organizations developing advanced AI technologies. Nanda’s reaction adds a critical perspective from a leader at another major AI research organization, DeepMind, further amplifying the concerns raised by Hilton.
Another notable reaction came from Arnold TwtUser, who described Hilton as another brave whistleblower coming forward to expose questionable practices at OpenAI. TwtUser pointed out that the reputation of OpenAI is suffering due to these revelations, with the “dents” in its image becoming more pronounced. This perspective reflects the growing scrutiny and pressure on OpenAI to address these ethical concerns transparently.
Scarlett Johansson Voice Controversy
In a recent public statement, actress Scarlett Johansson revealed her unsettling experience with OpenAI, specifically involving CEO Sam Altman. According to Johansson, Altman approached her last September with an offer to voice the ChatGPT 4.0 system. Altman believed Johansson’s voice could help bridge the gap between tech companies and consumers, easing the public into the significant changes AI brings to human interactions. Johansson declined the offer for personal reasons.
However, nine months later, Johansson was shocked to hear the “Sky” voice in the released demo of ChatGPT 4.0, which sounded eerily similar to her own. Johansson noted that even her closest friends and media outlets could not tell the difference. Altman himself hinted at the similarity by tweeting “her,” a reference to Johansson’s role in the film “Her,” in which she voices an AI system forming an intimate relationship with a human.
Two days before the ChatGPT 4.0 demo’s release, Altman contacted Johansson’s agent, urging her to reconsider. However, by the time they could connect, the system had already been released. Johansson had to hire legal counsel to address the issue, which led to OpenAI agreeing to take down the “Sky” voice. This incident raises significant ethical and legal concerns regarding consent and the use of personal likeness in AI development.
Paul (@DevPaulC), a developer and Twitter user, echoed these sentiments, questioning whether such behavior should be tolerated from the gatekeepers of AI. He suggested that the inability to accept a simple refusal raises serious concerns about the ethics of those leading the AI industry. Another user, Helen121, highlighted the broader issue of consent in tech, implying that this is a recurring problem with tech executives.
OpenAI Pushes On
OpenAI announced the rollout of interactive tables and charts for ChatGPT, along with the ability to add files directly from Google Drive and Microsoft OneDrive. These new features are set to enhance the user experience by allowing more dynamic data manipulation and integration of external resources.
OpenAI has also announced a partnership with Reddit to bring Reddit’s vast content repository into ChatGPT. This collaboration aims to enrich the platform with a wide array of user-generated content, discussions, and insights available on Reddit.
Another major update came on May 22, when OpenAI announced a multi-year global partnership with News Corp. This collaboration is designed to integrate premium journalism from News Corp into ChatGPT, providing users with access to high-quality, reliable news sources. The partnership is expected to enhance the credibility and depth of the information provided by ChatGPT, making it a more valuable tool for users seeking authoritative news and insights.
Moving Forward
Jacob Hilton’s revelations about his departure from OpenAI have highlighted significant ethical issues related to the use of non-disparagement agreements within the company. Despite leaving on amicable terms, Hilton was required to sign an agreement to retain his vested equity, which he found restrictive.
His disclosure, supported by Kelsey Piper’s investigative reporting, has sparked considerable reactions, notably from figures like Stella Biderman, who emphasized the broader implications of these agreements in suppressing individuals’ rights and preventing public discourse on critical issues.
In addition to these revelations, OpenAI’s recent product updates and strategic partnerships show the company’s ongoing efforts to enhance its capabilities. However, these advancements are overshadowed by ethical controversies, including allegations of unauthorized use of Scarlett Johansson’s voice for ChatGPT’s “Sky” assistant, further complicating OpenAI’s public image.
Hilton’s call for OpenAI to ensure protections for whistleblowers and commit to non-retaliation is a crucial step towards fostering trust and transparency. As AI technology evolves, maintaining ethical standards and promoting accountability will be essential for responsible development and deployment.
Author Profile
- Lucy Walker has covered finance, health and beauty since 2014, writing for various online publications.