Research Scientist/Engineer, Alignment Finetuning
Company: Menlo Ventures
Location: San Francisco
Posted on: March 28, 2025
Job Description:
About AnthropicAnthropic's mission is to create reliable,
interpretable, and steerable AI systems. We want AI to be safe and
beneficial for our users and for society as a whole. Our team is a
quickly growing group of committed researchers, engineers, policy
experts, and business leaders working together to build beneficial
AI systems.About the role:As a Research Scientist/Engineer on the
Alignment Finetuning team at Anthropic, you'll lead the development
and implementation of techniques aimed at training language models
that are more aligned with human values: that demonstrate better
moral reasoning, improved honesty, and good character. You'll work
to develop novel finetuning techniques and to use these to
demonstrably improve model behavior.Responsibilities:
- Develop and implement novel finetuning techniques using
synthetic data generation and advanced training pipelines
- Use these to train models to have better alignment properties
including honesty, character, and harmlessness
- Create and maintain evaluation frameworks to measure alignment
properties in models
- Collaborate across teams to integrate alignment improvements
into production models
- Develop processes to help automate and scale the work of the
teamYou may be a good fit if you:
- Have an MS/PhD in Computer Science, ML, or related field, or
equivalent experience
- Possess strong programming skills, especially in Python
- Have experience with ML model training and experimentation
- Have a track record of implementing ML research
- Demonstrate strong analytical skills for interpreting
experimental results
- Have experience with ML metrics and evaluation frameworks
- Excel at turning research ideas into working code
- Can identify and resolve practical implementation
challengesStrong candidates may also have:
- Experience with language model finetuning
- Background in AI alignment research
- Published work in ML or alignment
- Experience with synthetic data generation
- Familiarity with techniques like RLHF, constitutional AI, and
reward modeling
- Track record of designing and implementing novel training
approaches
- Experience with model behavior evaluation and
improvementLogisticsEducation requirements: We require at least a
Bachelor's degree in a related field or equivalent
experience.Location-based hybrid policy: Currently, we expect all
staff to be in one of our offices at least 25% of the time.
However, some roles may require more time in our offices.Visa
sponsorship: We do sponsor visas! However, we aren't able to
successfully sponsor visas for every role and every candidate. But
if we make you an offer, we will make every reasonable effort to
get you a visa, and we retain an immigration lawyer to help with
this.We encourage you to apply even if you do not believe you meet
every single qualification. Not all strong candidates will meet
every single qualification as listed. Research shows that people
who identify as being from underrepresented groups are more prone
to experiencing imposter syndrome and doubting the strength of
their candidacy, so we urge you not to exclude yourself prematurely
and to submit an application if you're interested in this work.How
we're differentWe believe that the highest-impact AI research will
be big science. At Anthropic we work as a single cohesive team on
just a few large-scale research efforts. And we value impact -
advancing our long-term goals of steerable, trustworthy AI - rather
than work on smaller and more specific puzzles. We view AI research
as an empirical science, which has as much in common with physics
and biology as with traditional efforts in computer science. We're
an extremely collaborative group, and we host frequent research
discussions to ensure that we are pursuing the highest-impact work
at any given time. As such, we greatly value communication
skills.The easiest way to understand our research directions is to
read our recent research. This research continues many of the
directions our team worked on prior to Anthropic, including: GPT-3,
Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws,
AI & Compute, Concrete Problems in AI Safety, and Learning from
Human Preferences.Come work with us!Anthropic is a public benefit
corporation headquartered in San Francisco. We offer competitive
compensation and benefits, optional equity donation matching,
generous vacation and parental leave, flexible working hours, and a
lovely office space in which to collaborate with colleagues.
#J-18808-Ljbffr
Keywords: Menlo Ventures, San Francisco , Research Scientist/Engineer, Alignment Finetuning, Engineering , San Francisco, California
Didn't find what you're looking for? Search again!
Loading more jobs...