Data retention policy
Slack content is stored as embeddings using vector databases like Turbopuffer. These embeddings allow for efficient searches and comparisons across different pieces of text to provide accurate and relevant responses to your questions.
Notion’s vector databases have been reviewed by our Security team and independently assessed by an external auditor as part of our SOC 2 Type II attestations. See https://www.notion.com/help/notion-ai-security-practices for more information.
Data archiving and removal policy
Notion treats your Slack content with the privacy principle of data minimization in mind. If you disconnect your Slack environment from Notion AI, your Slack content will become unsearchable, and we will begin deleting any stored data.
Data storage policy
Slack content is stored as embeddings using vector databases like Turbopuffer. These embeddings allow for efficient searches and comparisons across different pieces of text to provide accurate and relevant responses to your questions.
Notion’s vector databases have been vetted by our Security team and by an external auditor as part of Notion’s SOC 2 Type II attestations.
For specific locations, please refer to this link: https://notion.notion.site/Notion-s-List-of-Subprocessors-268fa5bcfa0f46b6bc29436b21676734?pvs=74
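For readers unfamiliar with embedding-based storage, the sketch below illustrates the general pattern described above: text is converted into numeric vectors, and retrieval works by comparing vector similarity rather than matching keywords. This is a toy illustration only; the embed(), store(), and search() functions and the in-memory index are placeholders for this sketch and do not represent Notion's actual pipeline or Turbopuffer's API.

```python
# Illustrative sketch of embedding storage and similarity search.
# The embed() function is a toy stand-in for a real embedding model.
import hashlib
import numpy as np

DIM = 256  # embedding dimensionality, arbitrary for this sketch

def embed(text: str) -> np.ndarray:
    """Toy deterministic embedding: hash word trigrams into a fixed-size vector."""
    vec = np.zeros(DIM)
    words = text.lower().split()
    for i in range(len(words)):
        gram = " ".join(words[i:i + 3])
        bucket = int(hashlib.md5(gram.encode()).hexdigest(), 16) % DIM
        vec[bucket] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

# Stand-in "vector database": message IDs mapped to embedding vectors.
index: dict[str, np.ndarray] = {}

def store(message_id: str, text: str) -> None:
    index[message_id] = embed(text)

def search(query: str, top_k: int = 3) -> list[tuple[str, float]]:
    """Return the stored messages most similar to the query (cosine similarity)."""
    q = embed(query)
    scores = [(mid, float(np.dot(q, v))) for mid, v in index.items()]
    return sorted(scores, key=lambda s: s[1], reverse=True)[:top_k]

store("msg-1", "The launch is scheduled for Tuesday at 10am")
store("msg-2", "Lunch menu for the offsite next week")
print(search("when is the launch scheduled"))
```

In general, production vector databases perform this kind of similarity comparison at scale using approximate nearest-neighbor indexes rather than the brute-force scan shown here.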
App/service has sub-processors
yes
Guidelines for sub-processors
App/service uses large language models (LLM)
yes
LLM model(s) used
OpenAI models, Anthropic models
LLM retention settings
When using Notion AI, our LLM providers operate with zero data retention, so no data is stored by the LLM providers.
LLM data residency policy