Jobs at Anthropic


Anthropic is a San Francisco‑based AI research company founded in 2021 by former OpenAI researchers. The company builds large language models, most notably Claude, designed to be safer, more interpretable, and better aligned with human values. Anthropic's mission is to advance AI safety while pushing the boundaries of natural‑language understanding.

Anthropic hires across research, software engineering, policy, product, and operations. Researchers work on model architecture and safety protocols; engineers build scalable inference systems; policy teams shape regulatory frameworks; product managers translate technical advances into user‑facing features. Candidates can expect rigorous technical interviews, a culture that prioritizes transparency, and a pay scale that reflects the high skill demand for AI specialists.

Checking Anthropic’s listings on Job Transparency gives you a distinct edge: you can view the exact salary range for each role, read employee sentiment scores, and compare offers to industry benchmarks. This data helps you negotiate confidently and choose positions that match your career goals.

No open positions at Anthropic right now.

Browse All Jobs

Frequently Asked Questions

What is it like to work at Anthropic?
Employees at Anthropic experience a collaborative environment where safety and interpretability are central. Daily work involves cross‑functional teams, rapid prototyping of new model safety techniques, and open discussions about ethical implications. The company offers mentorship from senior researchers, frequent knowledge‑sharing sessions, and a flexible schedule that supports work‑life balance while tackling high‑impact AI challenges.
What types of positions are available at Anthropic?
Anthropic recruits in several domains: research scientists focusing on model architecture and alignment, machine learning engineers building production‑ready inference pipelines, policy analysts shaping AI regulation, product managers translating research into marketable features, and operations staff ensuring smooth deployment. Each role requires strong analytical skills, a passion for AI safety, and a collaborative mindset.
How can I stand out as an applicant?
Showcase concrete experience with safety‑aligned research—papers, open‑source contributions, or projects that address bias mitigation or interpretability. Provide code samples that demonstrate proficiency in PyTorch or TensorFlow, and include documentation of any RLHF or human‑feedback loops you’ve implemented. Highlight any interdisciplinary work, such as combining ethics studies with technical modeling, and emphasize your ability to communicate complex ideas to both technical and non‑technical audiences.
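As an illustration of the kind of small, self‑documenting sample that can accompany an application: the sketch below shows the Bradley–Terry preference model commonly used in RLHF reward modeling, where the probability that a human rater prefers one response over another is derived from the difference of their scalar reward scores. This is a minimal, dependency‑free sketch for explanatory purposes, not code from any particular company's stack; the function name is hypothetical.

```python
import math

def preference_probability(reward_a: float, reward_b: float) -> float:
    """Bradley-Terry model: the probability that a rater prefers
    response A over response B, given their scalar reward scores.
    P(A > B) = sigmoid(reward_a - reward_b)."""
    return 1.0 / (1.0 + math.exp(-(reward_a - reward_b)))

# Equal rewards: the model is indifferent between the two responses.
print(preference_probability(1.0, 1.0))  # -> 0.5

# A higher-scored response is preferred with probability above 0.5.
print(preference_probability(2.0, 0.0) > 0.5)  # -> True
```

A sample like this works well in an application because the docstring states the underlying formula, the comments explain the sanity checks, and the whole thing fits on one screen.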

Other Companies Hiring

© 2026 Job Transparency. All rights reserved.