OpenAI CEO Sam Altman believes that sustainable improvements in quality of life stem from scientific progress, but stresses that the associated risks must be managed along the way. Responding implicitly to accusations that his support for regulation is hypocritical, Altman asserts that OpenAI advocates for regulation even in private meetings, and objects only to rules that would unduly burden small startups. He calls for global coordination on regulation and suggests certifying AI models above a certain capability threshold. Finally, Altman responds to criticisms from Elon Musk, saying that the two differ in some areas but both care deeply about AI safety.
**He speaks about how the human data that feeds into AI can be biased, and how OpenAI uses techniques such as reinforcement learning from human feedback (RLHF) to reduce bias in its models. Altman also explains why he holds no equity in OpenAI: the organization is governed by a nonprofit, and its board must include disinterested directors who hold no equity in the company. He emphasizes that he has already achieved financial independence through other investments and wants to lead an interesting life in which he can contribute to human technological progress. When asked about the power he holds as CEO of OpenAI, Altman rejects the idea that anyone should be trusted completely with it and emphasizes the need to democratize the company's decision-making. OpenAI's goal is for the benefits of, and access to, AI to belong to everyone, not just one company or person.**
**yikes, last time I checked, humans were the source of biases, and incentives affect everyone equally. Does he think he's special? To me, this company seems like a scam (hopefully) or maybe a precursor to some Bond-villain combination of SkyNet and the Umbrella Corporation.**