2 Comments

Is the alignment team no longer critical because the technology is plateauing, or did they leave because they see no way left to prevent a misalignment scenario while the company productizes and monetizes?

It seems to be the latter, which makes it a question of choice and commitment: whether to ensure and prioritize safety alongside innovation. There's a palpable sense of disappointment surrounding OpenAI's recent decisions. The company claims the superalignment team's work will be absorbed into its broader research, but transparency is lacking. It's a complex issue: distributing safety responsibilities across all teams may prove less effective than a dedicated group of experts, especially given the immense challenges of AI safety.