As part of the Acceleration Economy AI Industry Accelerators series that showcases individuals in vendor, partner, and customer organizations driving AI innovation and adoption, we’re highlighting the work of Daniela Amodei, president and co-founder of Anthropic.
Current Role and Responsibilities
In 2021, Amodei joined forces with her brother, Dario, and other colleagues from OpenAI to found Anthropic. Anthropic’s first year was focused on building the company, which involved fundraising and training its LLM. The company expanded its safety research with the goal of making “the safest model available in the market.”
With a demonstrated history of managing teams as well as trust and safety work, Amodei serves as the company’s president. In an interview with Fast Company, she described overseeing “the majority of day-to-day management of the company,” with senior leadership teams reporting to her. She works closely with trust and safety research groups, particularly on the company’s model, Claude. The teams she oversees ensure Claude outputs accurate and relevant information while preventing hallucinations.
Educational and Professional Background
Amodei earned a bachelor’s degree in English Literature, Politics, and Music from the University of California.
Starting in 2013, she was a leading member of the recruiting team at Stripe, a fintech company. In this role, she collaborated with the CTO, VP of engineering, and team leads to develop and execute the company’s technical recruiting strategy. Also at Stripe, she shifted to Risk Program Manager in 2015 and then to Risk Manager in 2016, adding user policy and underwriting responsibilities.
Daniela Amodei of Anthropic
| Profile | Details |
| --- | --- |
| Title | President and co-founder |
| Previous Employers | Stripe, OpenAI |
| Key AI Initiatives at Anthropic | Claude |
| AI Passions | Building safe models with humans at the center |
| In Her Own Words | “We want to make sure that humans are at the center of our process whether it’s reinforcement learning from human feedback or just thinking about how AI is going to impact the world more broadly.” |
After five years with Stripe, she joined OpenAI in 2018. As an engineering manager, Amodei led two technical teams, NLP and music generation, and also had oversight of a technical safety team. As VP of People, she supervised recruiting and various programs for DEI, learning, and development. In 2020, she became VP of Safety and Policy, leading the company’s safety and policy functions as well as managing its business operations team.
AI Projects & Passions
Anthropic was built with helpfulness, honesty, and harmlessness in mind. Considering these values, it’s no surprise that Amodei, as president and co-founder, is passionate about building safe models with humans at the center. She believes thorough research on the technology is crucial to ensuring safe and responsible AI, a responsibility she sees extending to developers, policymakers, and others.
Further, keeping humans at the center of AI has been a priority for Amodei and the company. In an interview with Stripe, she highlighted that “Anthropic” means “relating to humans.” “What has been important for us, as we’re working on these evermore powerful generative AI tools that are interacting with the world, is wanting to make sure that humans are still at the center of that story,” she continued. “We also want to make sure that humans are at the center of our process whether it’s reinforcement learning from human feedback or just thinking about how AI is going to impact the world more broadly.”
Latest Anthropic News
Anthropic recently announced a plan “to source new evaluations for measuring advanced model capabilities and outline our motivations and the specific types of evaluations we’re prioritizing.” The company emphasizes the importance of third-party evaluations, particularly for informing AI policies and assessing AI capabilities and risks. Recognizing the lack of evaluation tools, Anthropic is looking to fund “evaluations developed by third-party organizations that can effectively measure advanced capabilities in AI models.”
It identified three priority areas: AI safety level assessments; advanced capability and safety metrics; and infrastructure, tools, and methods for developing evaluations. Anthropic is accepting proposals for this initiative to expand the evaluations landscape and promote AI safety.
Big Quote
“We want the transition to more powerful AI systems to be positive to society and the broader economy. This is why much of our research is focused on exploring ways to better understand the systems we are developing, mitigate risks, and develop AI systems that are steerable, interpretable, and safe.”