About Us
FAR.AI is a non-profit AI research institute dedicated to ensuring advanced AI is safe and beneficial for everyone. Our mission is to facilitate breakthrough AI safety research, advance global understanding of AI risks and solutions, and foster a coordinated global response.
Since our founding in July 2022, we've grown rapidly to a team of 30+ staff, produced over 40 influential academic papers, and established leading AI safety events. Our work is recognized globally, with publications at premier venues such as NeurIPS, ICML, and ICLR, and coverage in the Financial Times, Nature News, and MIT Technology Review.
We drive practical change through red-teaming with frontier model developers and government institutes. Most recently, we discovered major issues with Anthropic's latest model on the day of its release, and worked with OpenAI to safeguard their latest model. Additionally, we help steer and grow the AI safety field by developing research roadmaps with renowned researchers such as Yoshua Bengio; running FAR.Labs, an AI safety-focused co-working space in Berkeley housing 40 members; and supporting the community through targeted grants to technical researchers.
About FAR.Research
Our research team likes to move fast. We explore promising research directions in AI safety and scale up only those showing high potential for impact. Unlike AI safety labs that bet on a single research direction, FAR.AI pursues a diverse portfolio of projects. Our model is to conduct initial investigations into a range of high-potential areas, then incubate the most promising directions through a combination of in-house research, field-building events, and targeted grants. Once the core research problems are solved, we scale the solutions into a minimum viable prototype, demonstrating their validity to AI companies and governments to drive adoption.
Our current focus areas include:
FAR.AI is one of the largest independent AI safety research institutes and is growing rapidly, with the goal of diversifying and deepening our research portfolio. For that reason, we're seeking senior research engineers who can increase the technical depth of our work, allowing us to answer research questions more definitively and at a larger scale.
About The Role
This role would be a good fit for an experienced machine learning engineer, or an experienced software engineer looking to transition to AI safety research. All candidates are expected to:
Additionally, candidates are expected to bring expertise in one of the following areas corresponding to the core competencies our different research teams most need:
Option 1 – Machine Learning:
Substantial experience training transformers with common ML frameworks such as PyTorch or JAX.
Good knowledge of basic linear algebra, vector calculus, probability, and statistics.
Option 2 – High-Performance Computing:
Power user of cluster orchestrators such as Kubernetes (preferred) or SLURM.
Experience building high-performance distributed systems (e.g., multi-node training, large-scale numerical computation).
Experience optimizing and profiling code (ideally including on GPU, e.g. CUDA kernels).
Option 3 – Technical Leadership:
Experience designing large-scale software systems, whether as an architect in greenfield development or as the lead of a major refactor.
Comfortable project-managing small teams, such as chairing stand-ups and developing detailed roadmaps to execute on a 3–6 month research vision.
About The Projects
As a Member of Technical Staff (Senior Research Engineer), you would join one of our existing workstreams and lead projects there:
As we continue to grow our research portfolio, additional workstreams may open up for contribution, for example in mechanistic interpretability.
Logistics
If based in the USA, you will be an employee of FAR.AI, a 501(c)(3) research non-profit. Outside the USA, you will be employed by an employer-of-record (EoR) organization on behalf of FAR.AI.
If you have any questions about the role, please do get in touch at talent@far.ai.
Compensation Range: $150K–$250K