Jobs at Anthropic

Open positions: 473

Anthropic is a San Francisco‑based AI research company founded in 2021 by former OpenAI leaders. The firm builds large language models—most notably Claude—designed to be interpretable, safe, and aligned with human values. Anthropic's mission is to advance AI safety while pushing the boundaries of natural‑language understanding.

Anthropic hires across research, software engineering, policy, product, and operations. Researchers work on model architecture and safety protocols; engineers build scalable inference systems; policy teams shape regulatory frameworks; product managers translate technical advances into user‑facing features. Candidates can expect rigorous technical interviews, a culture that prioritizes transparency, and a pay scale that reflects the high skill demand for AI specialists.

Checking Anthropic’s listings on Job Transparency gives you a distinct edge: you can view the exact salary range for each role, read employee sentiment scores, and compare offers to industry benchmarks. This data helps you negotiate confidently and choose positions that match your career goals.

Engineering Manager, Detection

Company: Anthropic

Location: San Francisco, CA

Posted Oct 31, 2023

Anthropic, an AI safety and research company, is seeking an experienced engineering leader to build a team focused on detecting abuse, fraud, and harmful content. The role involves leading a team that implements systems for detecting fraudulent accounts, spam campaigns, harmful user-generated content, and other malicious usage. Candidates should have deep experience with techniques for bot detection, account fraud, misinformation, and harmful user-generated content. The role advances Anthropic's long-term goal of steerable, trustworthy AI. Anthropic offers competitive compensation, benefits, and a collaborative work environment.

Privacy Program Lead

Company: Anthropic

Location: San Francisco, CA

Posted Oct 30, 2023

Anthropic, an AI safety and research company, is seeking an experienced Privacy Technical Program Manager. The role involves developing, implementing, and managing privacy-related policies and procedures for AI products and services. The ideal candidate will have experience in privacy program management, knowledge of privacy-enhancing technologies, and strong communication skills. The position offers a competitive salary, equity, and comprehensive benefits. Anthropic values collaboration, impact, and trustworthy AI systems.

Product Manager, Generalist

Company: Anthropic

Location: San Francisco, CA

Posted Oct 29, 2023

Anthropic, an AI safety and research company, is seeking a Product Manager to lead the vision, strategy, and execution of innovative applications leveraging their latest models. The ideal candidate will have 5+ years of product management experience, a technical background, and proficiency in SQL. They should be able to navigate ambiguity, gather user feedback, and collaborate across organizations. The role involves defining product requirements, analyzing metrics, and ensuring exceptional user experience. Anthropic offers competitive compensation, benefits, and a collaborative work environment.

Frequently Asked Questions

What is it like to work at Anthropic?
Employees at Anthropic experience a collaborative environment where safety and interpretability are central. Daily work involves cross‑functional teams, rapid prototyping of new model safety techniques, and open discussions about ethical implications. The company offers mentorship from senior researchers, frequent knowledge‑sharing sessions, and a flexible schedule that supports work‑life balance while tackling high‑impact AI challenges.
What types of positions are available at Anthropic?
Anthropic recruits in several domains: research scientists focusing on model architecture and alignment, machine learning engineers building production‑ready inference pipelines, policy analysts shaping AI regulation, product managers translating research into marketable features, and operations staff ensuring smooth deployment. Each role requires strong analytical skills, a passion for AI safety, and a collaborative mindset.
How can I stand out as an applicant?
Showcase concrete experience with safety‑aligned research—papers, open‑source contributions, or projects that address bias mitigation or interpretability. Provide code samples that demonstrate proficiency in PyTorch or TensorFlow, and include documentation of any RLHF or human‑feedback loops you’ve implemented. Highlight any interdisciplinary work, such as combining ethics studies with technical modeling, and emphasize your ability to communicate complex ideas to both technical and non‑technical audiences.


© 2026 Job Transparency. All rights reserved.