Principal, Special Projects
Job Description
Team: Various Teams
The Center for AI Safety (CAIS) is a leading research and advocacy organization focused on mitigating societal-scale risks from AI. Some of our past achievements include: releasing the most widely used measure of AI capabilities, adopted by all major AI companies and cited over 16,000 times; running a large compute cluster to facilitate AI safety research; and publishing a global statement on AI risk signed by Geoffrey Hinton, Yoshua Bengio, and top AI CEOs.
The Role
We’re hiring senior operators to own high-stakes projects and initiatives. You will identify high-impact opportunities, define strategy, and drive execution end-to-end. Above all, we need people who can operate autonomously, with the judgment to navigate complex decisions and the track record to be trusted with significant responsibility.
The scope is broad by design. Example projects include: partnering with the team behind #TeamTrees to run a public campaign on AGI risk, supporting researchers in building benchmarks for deception and weaponization risk, finding ways to engage YouTubers and longform creators on AI safety, and standing up an AI safety hub in Washington DC. What unites these is the need for someone with the judgment and ability to take an ambiguous mandate and turn it into a concrete outcome, without needing to be managed closely.
Who We're Looking For
You’ve operated at a high level in fast-moving, high-stakes environments, and you have the track record to prove it. Example profiles include former startup founders and COOs: people with both exceptional ability and judgment. Your specific background may look very different.
Date Posted
02/27/2026