Article Summary
Salesforce’s Agentforce Trust Layer is powering AI-driven customer service that’s private, compliant, and prioritizes security from the start.

Today, data security is a top priority. As organizations embrace the power of AI to deliver smarter, faster, and more personalized customer experiences, their customers are increasingly concerned about how their personal data is used, stored, and protected. Privacy, security, and compliance aren’t optional add-ons; they set the foundation for meaningful customer relationships. This is where the Salesforce Trust Layer comes in. As a core component of Salesforce's Agentforce, an AI-powered customer service solution that combines automation with human support, the Trust Layer ensures that customer data remains private and protected.

AI Innovation That Demands Built-In Trust

By embedding privacy and compliance safeguards directly into the AI infrastructure, Salesforce enables businesses to innovate confidently while honoring the trust their customers demand. The Trust Layer has defined what it means to build AI that is not only powerful, but ethical and principled.

Agentforce supports both generative and predictive AI in a way that keeps your data private and secure. Generative AI creates something new, such as drafting an email, summarizing an account or case record, or recapping a conversation with a client. Predictive AI uses past data to make data-driven forecasts, like predicting which leads are most likely to convert or when a customer will need support. Both types rely on large language models (LLMs), advanced tools trained to understand and generate human language. While AI is a powerful tool, there are risks if it's not handled carefully. That’s why the Trust Layer exists: to make sure that AI power is used responsibly, without exposing sensitive information or violating privacy rules.

What Is The Salesforce Trust Layer?

The Salesforce Trust Layer is a secure AI architecture designed to ensure safe, private, and responsible use of generative AI within Salesforce products. It protects sensitive enterprise data while enabling powerful AI-driven experiences. The Trust Layer consists of the following:

  • Prompt Templates: Standardized, structured prompts ensure consistency.
  • Secure Data Retrieval: Only authorized and relevant enterprise data is retrieved to ground AI responses, maintaining data privacy.
  • Dynamic Grounding: Real-time, context-aware information is used to enhance accuracy and relevance in AI-generated outputs.
  • Data Masking: Sensitive information is masked before being passed to the LLM.
  • Prompt Defense: Security measures are applied to prompts to prevent malicious inputs.
  • Prompt Sent to the LLM: After applying protections, the prompt is sent to the LLM for generation.
  • Zero Data Retention: LLM providers do not store any customer data, ensuring privacy and compliance.
  • Generation: The LLM produces a response based on grounded data and prompt inputs.
  • Toxic Language Detection: Content is scanned for harmful or inappropriate language before it reaches the end user.
  • Data Demasking: If necessary, masked data is safely reinserted into the output in a secure post-processing step.
  • Feedback Framework: Users can provide feedback on AI responses to improve quality, accuracy, and safety over time.
  • Audit Trail: Detailed logs track AI interactions, providing transparency and accountability for compliance and review.
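The flow above can be sketched as a minimal, hypothetical pipeline. This is purely illustrative: the function names, masking patterns, toxicity check, and stub LLM call below are assumptions for the sketch, not Salesforce APIs.

```python
import re

# Illustrative masking patterns; the real Trust Layer covers far more field types.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}-\d{3}-\d{4}\b"),
}

def mask(text):
    """Replace sensitive values with placeholder tokens; remember the originals."""
    mapping = {}
    for label, pattern in PATTERNS.items():
        # dict.fromkeys dedupes repeated matches so each value gets one token
        for i, value in enumerate(dict.fromkeys(pattern.findall(text))):
            token = f"<{label}_{i}>"
            mapping[token] = value
            text = text.replace(value, token)
    return text, mapping

def demask(text, mapping):
    """Safely reinsert the original values into the LLM's output."""
    for token, value in mapping.items():
        text = text.replace(token, value)
    return text

def is_toxic(text):
    """Stand-in for a real toxicity classifier."""
    return any(word in text.lower() for word in ("idiot", "stupid"))

def generate(prompt):
    """Stub LLM call; a real system would send the prompt to an external provider."""
    return f"Draft reply for: {prompt}"

def run_pipeline(template, customer_text):
    masked, mapping = mask(customer_text)     # Data Masking
    prompt = template.format(input=masked)    # Prompt Template
    output = generate(prompt)                 # Generation (zero retention assumed)
    if is_toxic(output):                      # Toxic Language Detection
        raise ValueError("Response blocked by toxicity check")
    return demask(output, mapping)            # Data Demasking

reply = run_pipeline("Summarize this case: {input}",
                     "Customer jane@example.com called from 555-123-4567.")
print(reply)
```

Note how the LLM only ever sees placeholder tokens; the real email and phone number are restored in a post-processing step after generation and the toxicity scan.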

Here’s a good visual representation from “The Einstein Trust Layer” module on Salesforce’s Trailhead.

Salesforces Agentforce Trust Layer - AI You Can Trust Image1

Together, these features ensure that Agentforce is trustworthy, secure, and aligned with enterprise needs. The main purpose is to keep sensitive information private and help businesses stay compliant with data privacy laws. At the core of these features are data masking, dynamic grounding, and the zero data retention policy.

Privacy by Design, Not Default 

Data masking hides or removes personal details before a prompt reaches the LLM, so sensitive information is never exposed during processing. Salesforce masks certain fields by default, such as name, email address, and phone number, and gives admins the ability to configure additional masking for:

  • Fields encrypted using Salesforce Shield
  • Fields matching a recognized pattern, including social security or passport numbers
  • Fields with a compliance attribute such as PII, HIPAA, COPPA, or PCI
  • Fields categorized with a sensitivity level of public, internal, confidential, restricted, or mission critical
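One way to picture those masking criteria is as a simple decision function over field metadata. The sketch below is a hypothetical simplification for illustration; the names, categories, and rule ordering are assumptions, not Salesforce's actual implementation.

```python
# Hypothetical field-level masking rules, loosely mirroring the criteria above.
SENSITIVE_COMPLIANCE = {"PII", "HIPAA", "COPPA", "PCI"}
SENSITIVE_LEVELS = {"Confidential", "Restricted", "MissionCritical"}
DEFAULT_MASKED = {"Name", "Email", "Phone"}  # masked out of the box

def should_mask(field_name, *, shield_encrypted=False,
                compliance=None, sensitivity=None):
    """Decide whether a field's value is masked before reaching the LLM."""
    if field_name in DEFAULT_MASKED:
        return True
    if shield_encrypted:                 # Shield-encrypted fields
        return True
    if compliance and SENSITIVE_COMPLIANCE & set(compliance):
        return True                      # compliance attribute match
    return sensitivity in SENSITIVE_LEVELS

print(should_mask("Email"))                               # default-masked field
print(should_mask("Diagnosis", compliance=["HIPAA"]))     # compliance attribute
print(should_mask("FavoriteColor", sensitivity="Public")) # not sensitive
```

In the real product these decisions are made from field metadata you configure in Setup rather than hard-coded sets, but the layered any-rule-matches logic is the same idea.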

An AI playground org can be created by going to the Salesforce Trailhead module "Large Language Model Data Masking in the Einstein Trust Layer". The following images, taken from Setup in one such playground org, show some of the settings available for managing data masking.

Salesforces Agentforce Trust Layer - AI You Can Trust Image2 Salesforces Agentforce Trust Layer - AI You Can Trust Image3

Dynamic Grounding: Combating Hallucinations with Real Data

Dynamic grounding makes sure AI responses are based only on approved company data. This helps prevent hallucinations (when an LLM returns false or misleading information that isn't grounded in factual data) and off-topic answers that can create compliance risks.
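Conceptually, grounding means the model is only allowed to answer from passages retrieved out of an approved store. The toy sketch below makes that idea concrete; the keyword-overlap scoring and document store are stand-ins for the semantic retrieval a real system would use.

```python
# Toy grounding sketch: answers are built only from an approved document store.
APPROVED_DOCS = {
    "refunds": "Refunds are issued within 10 business days of approval.",
    "coverage": "Plan A covers preventive care and emergency visits.",
}

def retrieve(question):
    """Return approved passages sharing words with the question (toy scoring)."""
    words = set(question.lower().split())
    return [text for text in APPROVED_DOCS.values()
            if words & set(text.lower().split())]

def grounded_prompt(question):
    passages = retrieve(question)
    if not passages:
        return None  # nothing approved to ground on; don't let the model guess
    context = "\n".join(passages)
    return f"Answer ONLY from this context:\n{context}\n\nQuestion: {question}"

print(grounded_prompt("When are refunds issued?"))
print(grounded_prompt("qwerty"))  # no approved source, so no prompt is built
```

The key design choice is the early return: when retrieval finds nothing approved, no prompt is sent at all, which is how grounding suppresses made-up answers rather than merely discouraging them.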

That’s a big deal for companies operating under strict regulations like GDPR or HIPAA, where data control and transparency are non-negotiable. Together, these features let businesses tap into AI while still meeting the highest standards for privacy and compliance.

The zero data retention policy is just what the name implies: when data passes through the LLM, it is deleted as soon as the response is returned. This allows the use of an external LLM with confidence that your data remains secure within your Salesforce org.

Secure AI in Action

Imagine a health insurance agent helping a customer understand their coverage. Thanks to data masking, Agentforce can assist the agent without ever seeing private health details, keeping the conversation both helpful and compliant. In another case, a banking support agent might get real-time AI suggestions during a customer call, with grounding ensuring that the AI’s guidance is based only on the bank’s approved policies and documents. Nothing is made up, and no personal financial data is stored. Meanwhile, a supervisor can use AI-generated summaries to review how agents are handling calls, with audit trails and policy controls in place so every insight is transparent, accurate, and accountable. These examples show how the Trust Layer enables smarter customer service without sacrificing privacy or trust.

Let’s Build a Trusted AI Experience

Ready to put the power of AI to work safely and responsibly within your Salesforce org? The Salesforce Trust Layer makes it possible to use advanced tools like Agentforce while keeping your customer data secure and compliant. If you’re exploring how to set up Agentforce or need help with any Salesforce project, we’re here to support you every step of the way. Reach out to start a conversation. We’d be happy to help you build smarter, more trusted customer experiences with the power of Agentforce.
