Highlighting bias in artificial intelligence (AI) is essential, but equally important is educating people on ways to prevent it.
In 2019, Salesforce established its Trusted AI Principles to address the ethical implications of predictive AI and machine learning. By 2023, with the rapid evolution of generative AI, Salesforce deepened its commitment to responsible AI by participating in the AFROTECH™ Conference, where it discussed strategies for prioritizing ethical and inclusive practices in generative AI.
During the session, “Closing The AI Trust Gap: Battling Bias | Presented by Salesforce,” Jackie Chambers de Freitas, Vice President of Agile Delivery and Coaching at Salesforce, shared how Salesforce’s focus on AI began in 2014, when CEO Marc Benioff declared that the company would become an AI-first company.
Watch the session here via AFROTECH™ Labs.
“The goal was to transform Salesforce into an intelligent CRM, making it easy for every company and employee to harness the power of AI by using technologies like machine learning and natural language processing to deliver AI-powered predictions and insights,” said Chambers de Freitas during the Learning Lab.
While Salesforce aimed to be ahead of the curve when it came to AI, Chambers de Freitas also shed light on a group of individuals who were apprehensive about the technology. Even as companies increasingly adopt AI to meet growing customer demands and improve efficiency, 75% of their customers don’t trust that the technology will be used ethically, said Chambers de Freitas.
During the Learning Lab, Chambers de Freitas went on to share that the risks of generative AI include accuracy, bias, toxicity, privacy, and security. To combat these risks, she detailed generative AI principles that companies should strongly consider enforcing.
- Accuracy: Deliver verifiable results that balance accuracy, precision, and recall.
- Safety: Mitigate bias, toxicity, and harmful output. Create guardrails to prevent additional harm.
- Honesty: Ensure consent to use data. Be transparent that an AI has created content when delivered autonomously.
- Empowerment: Identify the appropriate balance to “supercharge” human capabilities and make these solutions accessible to all.
- Sustainability: Develop right-sized models where possible to reduce carbon footprint.
“When it comes to being accurate, you’ve got to make sure that things are being done by citing the sources where the model is pulling the information from,” Chambers de Freitas said. “This is really important. We want to make sure it’s safe. Further security assessments can help organizations identify vulnerabilities that may be exploited by bad actors. It’s important that humans are involved. If you only rely on generative AI, it could have catastrophic effects on people’s lives and finances. And so think about a field service agent that’s in electricity or gas or a nuclear power station. If we’re not pairing the AI with a human interaction, it could mean devastation for humankind. And I’m not trying to be dramatic, but it could.”
During the Learning Lab, Chambers de Freitas also stressed the importance of minimizing one’s AI carbon footprint.
“These large language models (LLMs), they use a lot of energy and water to train them,” she said. “At Salesforce, we make sure we’re training our models on high-quality CRM data, which helps maximize accuracy and minimize the size of our models to reduce our carbon footprint.”
For additional guidance on how to close the AI trust gap and combat bias, Chambers de Freitas shared the following:
- Make sure to use zero-party or first-party data. Do not use third-party data.
- Delete old and inaccurate data, and label data properly.
- Keep a human in the loop when it comes to AI.
- Make sure you’re testing and checking your outputs for bias and accuracy.
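The last checklist item, testing outputs for bias and accuracy, can be sketched in a few lines of Python. The records, group names, and 0.05 gap threshold below are illustrative assumptions for this sketch, not Salesforce’s actual method: the idea is simply to compare per-group accuracy and flag large disparities.

```python
# Minimal sketch: check model outputs for accuracy and bias across groups.
# All data, group names, and the 0.05 threshold are hypothetical examples.

from collections import defaultdict

def accuracy_by_group(records):
    """Compute per-group accuracy from (group, prediction, label) records."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, prediction, label in records:
        total[group] += 1
        if prediction == label:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

def bias_gap(scores):
    """Largest accuracy difference between any two groups."""
    return max(scores.values()) - min(scores.values())

# Hypothetical test set: (group, model prediction, ground-truth label)
records = [
    ("group_a", "approve", "approve"),
    ("group_a", "deny",    "deny"),
    ("group_a", "approve", "approve"),
    ("group_a", "approve", "deny"),
    ("group_b", "deny",    "deny"),
    ("group_b", "deny",    "approve"),
    ("group_b", "approve", "deny"),
    ("group_b", "approve", "approve"),
]

scores = accuracy_by_group(records)
gap = bias_gap(scores)
print(scores)       # per-group accuracy
print(gap <= 0.05)  # False here: the gap exceeds the chosen threshold
```

A real audit would use held-out production data and a fairness metric chosen for the use case, but the pattern is the same: measure outcomes per group, then alert a human reviewer when the gap crosses a threshold.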