As part of the Acceleration Economy AI Industry Accelerators series, showcasing individuals in vendor, partner, and customer organizations who are driving AI innovation and adoption, we’re highlighting the work of Clara Shih, CEO of Salesforce AI.
Current Role
Shih’s time with Salesforce began in 2006 when she worked on its AppExchange product. After leaving Salesforce in 2009, she returned in 2021 to lead the Salesforce Service Cloud CRM software business.
Since May 2023, she has held the title of CEO of Salesforce AI, leading teams in “building the frontier of AI products.” In an interview with the New York Times, Shih described working closely with general managers across Salesforce and its partners to develop its AI products, along with the sales, marketing, and customer service teams that help customers successfully deploy GenAI technology. Additionally, she is responsible for expanding the company’s Einstein AI platform.
Educational and Professional Background
With an education in computer science from Stanford and internet studies from Oxford, Shih has been well-equipped to advance the field of AI. Following graduation, she worked in corporate strategy at Google before her initial tenure at Salesforce.
| Clara Shih of Salesforce | |
| --- | --- |
| Title | CEO of AI |
| Previous employers | Google, Hearsay Social |
| Key AI Initiatives at Salesforce | Einstein Copilot |
| AI Passions | Creating AI that is trusted |
| In Her Own Words | “AI is a moving target.” |
As a side project, she developed “Faceforce,” a business application on Facebook, which evolved into the startup Hearsay Social. Shih co-founded Hearsay Social and served as its CEO from 2009 until she rejoined Salesforce; the company gave its customers a way to carry on authentic interactions and personalized messaging with their clients.
In a Women in STEM interview, Shih noted that she has been driven by three things: learning, building things that have impact, and working with people who inspire her. “These are the common threads across everything I’ve done and probably will do, and why I’ve been drawn to major tech disruptions from the Internet to social media and now AI,” she said. “Technology disruptions create the best opportunities for learning and building, and to do this, you have to bring together diverse teams of people who each bring a unique perspective and superpower.”
AI Projects and Passions
On the subject of AI safety and security, Shih told the New York Times that she feels “a tremendous sense of personal responsibility toward creating AI that is trusted.” For instance, she has been involved in developing the Einstein Trust Layer, which protects user data and promotes responsible AI across the Salesforce ecosystem. “I do have a healthy level of concern about it, but I’m an action-oriented person, so being in this role gives me a great platform to use to educate others, including members of Congress, CEOs, and our customers,” she shared.
Shih also commented on her experiences and perspective as a woman leading the charge in advancing AI, noting that she “got used to being underestimated.” She aims to reframe tough situations in a positive light: rather than dwelling on being the only woman or person of color in the room, she takes pride in the traits that distinguish her. “I stand on the shoulders of the women of color that came before me,” she acknowledged.
Salesforce AI Advancements
Because the risks of AI keep some organizations from adopting the technology, Salesforce is “taking steps to ensure AI is trusted and reliable by empowering humans at the helm through product design.” The Salesforce Responsible AI & Technology team has been building standardized human-at-the-helm patterns to serve as guardrails across its AI products. The company identified five categories for these patterns to improve safety, accuracy, and trust (a brief, hypothetical sketch of what such patterns can look like in code follows the list):
- Mindful Friction: Ensures intentional human engagement at critical junctures
- Awareness of AI: Functionality that makes GenAI content transparent and recognizable as AI-generated
- Bias & Toxicity Safeguards: Guardrails to prevent the production of harmful or malicious content
- Explainability & Accuracy: Designed experiences to increase AI reliability and understandability, explaining the AI’s action and delivering correct information
- Hallucination Reduction: Policies and prompt instructions to limit the scope of what an AI can generate
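Purely as an illustration of how several of these pattern categories can show up in application code, here is a minimal Python sketch of a guardrail wrapper. Everything in it is hypothetical: the function names, the keyword-based toxicity screen, and the confirmation flow are invented for this example and do not represent Salesforce’s Einstein Trust Layer or its actual product implementation.

```python
# Hypothetical sketch of "human at the helm" guardrail patterns:
# mindful friction, awareness of AI, bias & toxicity safeguards,
# and hallucination reduction. Not Salesforce product code.

BLOCKED_TERMS = {"slur_example", "threat_example"}  # placeholder toxicity list


def violates_toxicity_policy(text: str) -> bool:
    """Bias & toxicity safeguard: crude keyword screen standing in for a real classifier."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKED_TERMS)


def build_grounded_prompt(question: str, retrieved_context: list[str]) -> str:
    """Hallucination reduction: instruct the model to answer only from supplied context."""
    context_block = "\n".join(f"- {snippet}" for snippet in retrieved_context)
    return (
        "Answer using ONLY the context below. If the answer is not in the "
        "context, reply 'I don't know.'\n"
        f"Context:\n{context_block}\n\nQuestion: {question}"
    )


def require_human_approval(action_description: str) -> bool:
    """Mindful friction: pause for explicit human confirmation at a critical juncture."""
    answer = input(f"Approve this AI-proposed action? {action_description} [y/N] ")
    return answer.strip().lower() == "y"


def guarded_send(draft_reply: str, send_fn) -> None:
    """Run checks and label the draft before an AI-generated reply is sent."""
    if violates_toxicity_policy(draft_reply):
        print("Draft blocked by toxicity safeguard.")
        return
    labeled = f"[AI-generated draft] {draft_reply}"  # awareness of AI
    if require_human_approval(labeled):
        send_fn(labeled)
    else:
        print("Human reviewer declined to send the draft.")


if __name__ == "__main__":
    # Example: a human reviewer approves or rejects an AI-drafted customer reply.
    guarded_send("Here is a summary of your case status.", send_fn=print)
```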
Big Quote
“AI is a moving target … Every few weeks a new paper comes out that changes everything and yet you still need to ensure you execute on the initial plan you have … Often, you have to carry on as well as explore new ideas.”