LLM Red Team
Company
DeepMind
Location
Peninsula
Type
Full Time
Job Description
At Google DeepMind, we value diversity of experience, knowledge, backgrounds and perspectives and harness these qualities to create extraordinary impact. We are committed to equal employment opportunity regardless of sex, race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, pregnancy, or related condition (including breastfeeding) or any other basis as protected by applicable law. If you have a disability or additional need that requires accommodation, please do not hesitate to let us know.
Snapshot

Our team is responsible for enabling AI systems to reliably work as intended, including identifying potential risks from current and future AI systems, and conducting technical research to mitigate them. On this team, you will discover and evaluate vulnerabilities in our frontier AI systems, enabling other teams to implement approaches that mitigate the risks.
About us

Artificial Intelligence could be one of humanity's most useful inventions. At Google DeepMind, we're a team of scientists, engineers, machine learning experts and more, working together to advance the state of the art in artificial intelligence. We use our technologies for widespread public benefit and scientific discovery, and collaborate with others on critical challenges, ensuring safety and ethics are the highest priority. Conducting research into any transformative technology comes with a responsibility to build mechanisms for safe and reliable development and deployment at every step. Technical safety research at Google DeepMind investigates questions related to evaluations, reward learning, fairness, interpretability, robustness, and generalization in machine learning systems. Proactive research in these areas is essential to the fulfillment of the long-term goal of Google DeepMind: to build safe and socially beneficial AI systems.
The role

This team works at the forefront of technical approaches to designing state-of-the-art LLM attack simulations that identify unknown vulnerabilities and threats capable of bypassing implemented guardrails. We are building a team of creative problem solvers to provide the most complete AI risk position. Combined with Google DeepMind's existing AI systems, these experts will analyze, prompt, and optimize internal models to accelerate the future of AGI safely and responsibly.

Key Responsibilities:
About you

We seek out individuals who thrive in ambiguity and who are willing to help with whatever moves prototypes forward. We regularly need to invent novel solutions to problems, and we often change course when our ideas don't work, so flexibility and adaptability to work on any project are a must. To set you up for success in this role at Google DeepMind, we are looking for the following skills and experience:

In addition, the following would be an advantage:
What we offer

At Google DeepMind, we want employees and their families to live happier and healthier lives, both in and out of work, and our benefits reflect that. Some select benefits we offer: enhanced maternity, paternity, adoption, and shared parental leave; private medical and dental insurance for yourself and any dependents; and flexible working options. We strive to continually improve our working environment and provide excellent facilities such as healthy food, an on-site gym, faith rooms, and terraces.

We are also open to relocating candidates to a core GDM location and offer a bespoke relocation service and immigration support to make the move as easy as possible (depending on eligibility).

The US base salary range for this full-time position is between $136,000 and $245,000 + bonus + equity + benefits. Your recruiter can share more about the specific salary range for your targeted location during the hiring process.

Application deadline: 12pm GMT Thursday 7th September 2024

Note: In the event your application is successful and an offer of employment is made to you, any offer of employment will be conditional on the results of a background check, performed by a third party acting on our behalf. For more information on how we handle your data, please see our Applicant and Candidate Privacy Policy.
Date Posted
08/23/2024