Today, data security is a top priority. As organizations embrace the power of AI to deliver smarter, faster, and more personalized customer experiences, their customers are increasingly concerned about how their personal data is used, stored, and protected. Privacy, security, and compliance aren't optional add-ons; they set the foundation for meaningful customer relationships. This is where the Salesforce Trust Layer comes in. As a core component of Salesforce's Agentforce, an AI-powered customer service solution that combines automation with human support, the Trust Layer ensures that customer data remains private and protected.
By embedding privacy and compliance safeguards directly into the AI infrastructure, Salesforce enables businesses to innovate confidently while honoring the trust their customers demand. The Trust Layer has defined what it means to build AI that is not only powerful, but ethical and principled.
Agentforce supports both generative and predictive AI in a way that keeps your data private and secure. Generative AI creates new content, such as drafting an email, summarizing an account or case record, or even writing a summary of a conversation with a client. Predictive AI uses past data to make data-driven guesses about the future, like predicting which leads will be most likely to convert or when a customer will need support. Both types rely on large language models (LLMs), advanced tools trained to understand and generate human language. While AI is a powerful tool, there can be risks if it is not handled carefully. That's why the Trust Layer exists: to make sure that AI's power is used responsibly, without exposing sensitive information or violating privacy rules.
The Salesforce Trust Layer is a secure AI architecture designed to ensure safe, private, and responsible use of generative AI within Salesforce products. It protects sensitive enterprise data while enabling powerful AI-driven experiences through a set of built-in safeguards.
Together, these features ensure that Agentforce is trustworthy, secure, and aligned with enterprise needs. Their main purpose is to keep sensitive information private and to help businesses stay compliant with data privacy laws. At the core of these features are data masking, dynamic grounding, and the zero data retention policy.
Data masking hides or removes personal details before they are seen by the AI, adding an extra layer of privacy so sensitive information isn't exposed during processing by the LLM. Salesforce masks certain fields by default, such as name, email address, and phone number, and gives admins the ability to configure additional masking for:

- fields encrypted using Salesforce Shield
- fields matching a recognized pattern, such as social security or passport numbers
- fields with a compliance attribute such as PII, HIPAA, COPPA, or PCI
- fields categorized with a sensitivity level of public, internal, confidential, restricted, or mission critical
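To make the masking idea concrete, here is a minimal, hypothetical sketch in Python. It is not Salesforce's implementation; it only illustrates the general pattern of replacing sensitive values with placeholder tokens before a prompt reaches the LLM, then restoring them in the response. The regex patterns and token names are assumptions for illustration.

```python
import re

# Illustrative patterns only; a real system would use far more robust
# detection (and, in Salesforce's case, field metadata, not just regex).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def mask(text):
    """Replace each sensitive match with a token like [EMAIL_1]."""
    mapping = {}
    for label, pattern in PATTERNS.items():
        for i, match in enumerate(pattern.findall(text), start=1):
            token = f"[{label}_{i}]"
            mapping[token] = match
            text = text.replace(match, token)
    return text, mapping

def unmask(text, mapping):
    """Restore the original values in the LLM's response."""
    for token, value in mapping.items():
        text = text.replace(token, value)
    return text

masked, mapping = mask("Contact Jane at jane@example.com or 555-123-4567.")
print(masked)  # Contact Jane at [EMAIL_1] or [PHONE_1].
```

The key design point mirrored here is that the LLM only ever sees the tokens, while the mapping back to real values stays inside the trusted environment.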
An AI playground org can be created through the Salesforce Trailhead module "Large Language Model Data Masking in the Einstein Trust Layer". The following images from Setup in one such playground org show some of the settings available for managing data masking.
Dynamic grounding makes sure AI responses are based only on approved company data. This helps prevent hallucinations (when an LLM returns false or misleading information that is not grounded in factual data) and off-topic answers, both of which can lead to compliance risks.
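The grounding pattern can be sketched in a few lines. This is a simplified, assumed illustration, not Salesforce's retrieval pipeline: relevant passages are pulled from an approved knowledge base and placed into the prompt, with an instruction that the model answer only from that context. The document names and the keyword-overlap retrieval are stand-ins.

```python
# A tiny approved knowledge base (hypothetical content).
APPROVED_DOCS = {
    "refund-policy": "Refunds are issued within 14 days of purchase.",
    "support-hours": "Support is available Monday through Friday, 9am-5pm.",
}

def retrieve(question, docs):
    """Naive keyword-overlap retrieval; real systems use semantic search."""
    words = set(question.lower().split())
    return [text for text in docs.values()
            if words & set(text.lower().split())]

def build_grounded_prompt(question):
    """Assemble a prompt that constrains the model to approved context."""
    context = "\n".join(retrieve(question, APPROVED_DOCS))
    return (
        "Answer using ONLY the context below. If the answer is not in the "
        "context, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

print(build_grounded_prompt("When are refunds issued?"))
```

Because the model is told to refuse anything outside the supplied context, answers stay tied to approved company data instead of whatever the model learned during training.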
That’s a big deal for companies operating under strict regulations like GDPR or HIPAA, where data control and transparency are non-negotiable. Together, these features let businesses tap into AI while still meeting the highest standards for privacy and compliance.
The zero data retention policy is just what the name implies: zero data is retained. When data is passed to the LLM, it is deleted once the response is returned. This allows the use of an external LLM with confidence that your data stays secure within your Salesforce org.
Imagine a health insurance agent helping a customer understand their coverage. Thanks to data masking, Agentforce can assist the agent without ever seeing private health details, allowing the conversation to be both helpful and compliant. In another case, a banking support agent might get real-time AI suggestions during a customer call, with grounding ensuring that the AI's guidance is based only on the bank's approved policies and documents. Nothing is made up, and no personal financial data is stored. Meanwhile, a supervisor can use AI-generated summaries to review how agents are handling calls, with audit trails and policy controls in place so every insight is transparent, accurate, and accountable. These examples show how the Trust Layer enables smarter customer service without sacrificing privacy or trust.
Ready to put the power of AI to work safely and responsibly within your Salesforce org? The Salesforce Trust Layer makes it possible to use advanced tools like Agentforce while keeping your customer data secure and compliant. If you're exploring how to set up Agentforce or need help with any Salesforce project, we're here to support you every step of the way. Reach out to start a conversation. We'd be happy to help you build smarter, more trusted customer experiences with the power of Agentforce.