AI Principles
Our approach to AI and machine learning
At Slack, artificial intelligence (AI) and machine learning (ML) are a core element of our product experience. Because our commitment to data privacy and security is fundamental to everything we do, we ensure robust controls are in place to protect your information. Slack’s native AI features have been built to uphold these commitments. Your customer data (like messages and files) is never used to train any LLMs.
The AI and ML features that we build are designed to make working in Slack simpler, smarter and more efficient. While AI is newer to Slack, ML has long been a key part of the Slack experience. This guide digs into how these technologies are built into Slack and the clear distinction between them:
- AI specifically refers to generative AI features powered by large language models (LLMs), like channel summaries.
- ML encompasses features that use predictive models, like emoji recommendations and display name suggestions when other users are mentioned.
AI in Slack
AI features are included in Slack paid subscriptions. Admins can enable or disable AI features, giving customers full control over how and whether AI is used in their workspaces.
Our AI architecture was designed with privacy in mind.
All of our AI features are protected by Slack AI guardrails, a set of built-in foundational protections. These safeguards include content thresholds that reduce hallucinations, explicit safety instructions that limit prompt engineering, context engineering to reduce prompt injection risks, URL filtering to prevent phishing attacks and output format validation, as well as provider mitigations.
The Slack AI guardrails also automatically apply content safety filters as another protective layer for AI features that rely on user-generated inputs. These filters analyse queries in real time to identify and mitigate harmful content, prompt injection attempts and security risks before they reach AI systems such as Slackbot and search answers.
Our approach to AI is grounded in these three principles:
- You control what AI can access:
Your customer data in Slack (like messages and files) is not used to train LLMs. - Your data is never used to train large language models.
Workspace and organisation admins can turn AI features on or off at any time, giving customers full control over how AI is used. Slack AI only works with content that you already have permission to view. For example, AI search answers will only include results that you could also find in a standard search. - Your data stays within Slack’s trusted infrastructure.
The LLMs that we use are deployed inside Slack’s cloud environment, so model providers do not have access to your data.
Note: Admins can decide whether members of their workspace or Enterprise organisation have access to our AI features. Visit Manage access to AI features in Slack for more details.
Machine learning in Slack
We also use predictive machine learning models to help to make the Slack experience even better. When you see an emoji that you and your teammates have used recently in the emoji picker or an autocomplete suggestion to help to find the right person at your company with a common name, our ML models are responsible for the relevancy and accuracy of those suggestions.
- Machine learning models allow us to deliver high-quality, personalised experiences.
- We build and train models so that they cannot reproduce customer data, and the outputs of the models cannot be linked to data from a specific customer.
Tip: Visit our Privacy Principles for an overview of the principles we follow to inform product development at Slack, along with customer controls.
More information on models and data
For more details on the generative and predictive models that we use and how they power our AI and ML features, visit Slack artificial Intelligence, machine learning and data usage. This resource provides detailed information for anyone looking for a more thorough understanding of our AI and ML data practices.



