Supported editions for this feature: Frontline Standard; Enterprise Standard and Enterprise Plus; Education Fundamentals, Education Standard, Teaching and Learning Upgrade, and Education Plus. Compare your edition
DLP for Chat is also available to Cloud Identity Premium users who are also licensed for Google Workspace editions that include Google Chat and audit and investigation.
Using DLP for Chat, you can create data protection rules to prevent data leaks from Chat messages and attachments (uploaded files and images).
DLP for Chat features
DLP for Chat gives you control over the sharing of sensitive data in chat conversations. Using DLP for Chat, you can:
- Create data protection rules specifically for Chat, or for Chat plus other apps (such as Drive or Chrome).
- Create data protection rules that block Chat messages and attachments, warn users before sending them, or log them for future audit.
- Define data sensitivity conditions using text strings, predefined and custom detectors (which include word lists and regular expressions).
- Enforce data protection rules for a specific organizational unit or group, or for your entire organization.
- Investigate Chat DLP violations using the Security investigation tool (including viewing end user messages that violate such rules).
Known limitations
- In general, links are scanned, but the linked content is not scanned.
- Files shared through Drive are subject to Drive DLP rules. Go to Use Workspace DLP to prevent data loss for details.
Chat is a latency-sensitive application, and we have designed Chat DLP to not degrade the end user experience.
- For messages, DLP is given a fixed amount of time to perform scans. Depending on the complexity and number of detectors you have, some detectors may not complete in time, and won’t be enforced. DLP scan status is included in the Google Chat audit log for messages sent and attachments uploaded.
- The following predefined detectors might require more time to scan—using them in Chat DLP rules increases the risk of scan timeouts:
- Date of birth
- Person name
- Attachments are given more time for scans.
Comma-separated values (.csv) files are treated as plain text. As a result, DLP might not find violations in columns that are apparent when you review the data in the file.
Ensure that your users' Gmail and Google Chat applications are up to date so they receive complete messaging for blocked Chat conversations. On older versions of Gmail and Chat, content that should only trigger a warning will be blocked instead.
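The fixed scan budget described in the limitations above can be sketched as a time-boxed detector run. This is an illustrative model only, not Google's implementation; the detector functions and the budget value are hypothetical.

```python
import concurrent.futures
import re
import time

def run_detectors_with_budget(message, detectors, budget_seconds=0.1):
    """Run each detector against the message, skipping any that don't
    finish within the time budget. Skipped detectors are reported so the
    outcome can be logged, mirroring the DLP scan status recorded in the
    Chat audit log."""
    completed, skipped = {}, []
    with concurrent.futures.ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(fn, message) for name, fn in detectors.items()}
        for name, future in futures.items():
            try:
                completed[name] = future.result(timeout=budget_seconds)
            except concurrent.futures.TimeoutError:
                skipped.append(name)  # did not finish in time: not enforced
    return completed, skipped

# Hypothetical detectors: a fast regex check and a deliberately slow one.
detectors = {
    "credit_card": lambda m: bool(re.search(r"\b\d{4}(?:[ -]\d{4}){3}\b", m)),
    "slow_check": lambda m: (time.sleep(0.5), True)[1],
}
found, timed_out = run_detectors_with_budget("card: 4111 1111 1111 1111", detectors)
```

A complex or numerous detector set behaves like `slow_check` here: it simply never contributes a finding, which is why the audit log records scan status.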
How does DLP for Chat work?
When a user sends a Chat message, DLP scans the message for sensitive content. If the message or an attachment violates a block or warn rule, the action is applied when the message is sent.
What is scanned?
DLP rules are applied to sent messages, not the messages a user or space can receive.
- Messages and attachments are scanned. Attachments include files and images. The attachment filename itself is also scanned (for supported file types). Attachments that violate security policies are blocked from being sent.
- Messages in 1:1 chats, group chats, and spaces are scanned, even if Chat history is turned off. Refer to Turn history on or off in Google Chat for details.
- Chat DLP incidents are logged in the Rule audit log; in some cases, message content may be available in the log. How long the message content is visible in the log depends on your Chat history settings and your configured message retention period for Chat.
- When Chat history is turned on, admins can view the message for up to your configured message retention period.
- When Chat history is turned off, the message is accessible for 24 hours.
Scanned file types
File types scanned for content include:
- Document file types: .txt, .doc, .docx, .rtf, .html, .xhtml, .xml, .pdf, .ppt, .pptx, .odp, .ods, .odt, .xls, .xlsx, .ps, .css, .csv, .json, .sh
- Image file types: .eps
Note: If OCR is enabled, the following image types are also scanned: .bmp, .gif, .jpeg, .png, and images within PDF files.
- Compressed file types: .zip
- Custom file types: .hwp, .kml, .kmz, .sdc, .sdd, .sdw, .sxc, .sxi, .sxw, .wml, .xps
Attachment size limitation for scans
Attachments over 50 MB in size are uploaded and sent without being scanned by DLP.
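The scanned-extension list and the 50 MB cap above can be combined into a simple pre-check. This helper is hypothetical and for illustration only; actual scan eligibility is decided by the service.

```python
# Hypothetical helper: decide whether Chat DLP will scan an attachment's
# content, based on the supported extensions and the 50 MB cap documented
# above. Illustrative only.
SCANNED_EXTENSIONS = {
    ".txt", ".doc", ".docx", ".rtf", ".html", ".xhtml", ".xml", ".pdf",
    ".ppt", ".pptx", ".odp", ".ods", ".odt", ".xls", ".xlsx", ".ps",
    ".css", ".csv", ".json", ".sh", ".eps", ".zip",
    ".hwp", ".kml", ".kmz", ".sdc", ".sdd", ".sdw", ".sxc", ".sxi",
    ".sxw", ".wml", ".xps",
}
MAX_SCAN_BYTES = 50 * 1024 * 1024  # attachments over 50 MB are sent unscanned

def will_be_scanned(filename: str, size_bytes: int) -> bool:
    ext = "." + filename.rsplit(".", 1)[-1].lower() if "." in filename else ""
    return size_bytes <= MAX_SCAN_BYTES and ext in SCANNED_EXTENSIONS
```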
When is the message scanned?
When a user sends a Chat message, the message is scanned whether or not it includes attachments.
Summary of DLP for Chat flow:
- You define DLP rules. These rules define which content is sensitive and should be protected. You can apply DLP rules to both messages and attachments.
- A user sends a Chat message. DLP scans the contents for DLP rule violations. Attachments are scanned upon upload, and those that violate rules are blocked.
- DLP enforces the rules you defined, and violations trigger any actions you've configured for the rules.
- You're alerted of DLP rule violations in the Rule audit log.
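The flow above can be sketched as a minimal rule engine. The rule structure, names, and action precedence are assumptions for illustration, not the product's internals.

```python
import re
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Rule:
    name: str
    matches: Callable[[str], bool]  # condition: does the content trigger?
    action: str                     # "block", "warn", or "audit"

def evaluate(content: str, rules: List[Rule]) -> str:
    """Return the most restrictive action triggered by the rules.
    Precedence assumed for illustration: block > warn > audit > allow."""
    precedence = {"allow": 0, "audit": 1, "warn": 2, "block": 3}
    outcome = "allow"
    for rule in rules:
        if rule.matches(content) and precedence[rule.action] > precedence[outcome]:
            outcome = rule.action
    return outcome

# Hypothetical rules: block SSN-shaped strings, warn on a project codename.
rules = [
    Rule("Block SSNs", lambda m: bool(re.search(r"\b\d{3}-\d{2}-\d{4}\b", m)), "block"),
    Rule("Warn on codename", lambda m: "spiderweb" in m.lower(), "warn"),
]
```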
What happens when a user's message is blocked or triggers a warning?
Before you implement DLP for Chat rules, tell your end users what to expect. Explain that there are policies in place regarding what information can be shared, and that messages that violate these policies are blocked or result in a warning message. Tell them what information is restricted, so they won’t be surprised when they receive messages about blocked content, or are warned of sensitive content.
User experience for blocked messages
Here are some messages users can receive when a Chat message or attachment is blocked:
- Your message couldn’t be sent
- Your message couldn't be updated
Your message may contain sensitive content (like credit card numbers) that shouldn't be shared based on your organization's policies. Edit as needed, or check with your admin if this doesn’t seem right.
When a message is blocked, the user can dismiss the dialog or click Edit message and edit the message text or remove the violating attachment.
When a Chat message or attachment triggers a warning, users receive the following message. Note that the message is initially blocked, and is only sent if the user chooses to send the message anyway:
- Check your message
Your message may contain sensitive content (like credit card numbers) that shouldn't be shared based on your organization's policies. Edit as needed, or check with your admin if this doesn’t seem right.
After getting a warning, the user can click Edit message and edit the message text, click Send anyway to send the text as is, or dismiss the dialog.
How do I control what messages are blocked? What if I want to block messages to spaces or groups?
After choosing a DLP rule action (such as Block message), select the conversation type you want to cover: internal or external (for example, an externally owned space, or a conversation with guest access enabled). You can also choose whether to apply the rule to spaces, group chats, and 1:1 chats.
DLP for Chat - rule examples
Here are some examples of how to create DLP rules that block Chat messages or attachments, warn about sensitive content, or log details about Chat messages in the Rule audit log.
For general steps on creating DLP rules, go to Create new DLP for Drive rules and custom content detectors.
- Sign in to your Google Admin console using an account with super administrator privileges (does not end in @gmail.com).
- In the Admin console, go to Menu > Rules.
- Under Protect your sensitive content, click Create rule.
- Add the name and description for the rule, such as Block when sharing SSN in chat.
- In the Scope section, choose Apply to all <domain.name> or choose to search for and include or exclude organizational units or groups the rule applies to.
- Click Continue.
- For Google Chat select Message sent and File uploaded (for attachments).
- Click Continue.
- In the Conditions section, click Add Condition and select the following values:
- Content type to scan—All content (note that All content is the only content type available if you select Google Chat, no matter what other apps are selected)
- What to scan for—Matches predefined data type (recommended)
- Select data type—United States - Social Security Number.
- Likelihood Threshold—High. The confidence threshold for the condition. This is an extra measure used to determine whether messages trigger the rule action.
- Minimum unique matches—1. The minimum number of times a unique match must occur in a message or attachment to trigger the action.
- Minimum match count—1. The number of times the content must appear in a message or attachment to trigger the action. For example, if you select 2, content must appear at least twice in a message to trigger the action.
- Click Continue. In the Actions section, under Chat, select Block message. Also select when the action should apply. For this example, select External conversations and Internal conversations. Leave Spaces, Group chats, and 1:1 chats selected.
- (Optional) In the Alerting section:
- Choose a severity level (Low, Medium, or High) for how an event triggered by this rule is reported in the security dashboard.
- Choose whether an event triggered by this rule should also send an alert to the alert center. Also choose whether to email alert notifications to all super administrators or to other recipients.
- Click Continue to review the rule details. The action for Chat is to block message for external and internal conversations.
- Choose a status for the rule:
- Active—Your rule runs immediately
- Inactive—Your rule exists, but does not run immediately. This gives you time to review the rule and share it with team members before implementing. Activate the rule later by going to Security > Access and data control > Data protection > Manage Rules. Click the Inactive status for the rule and select Active. The rule runs after you activate it, and DLP scans for sensitive content.
- Click Create.
Changes can take up to 24 hours but typically happen more quickly. Learn more
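The Minimum unique matches and Minimum match count thresholds in the example above differ in what they count: total occurrences versus distinct values. This sketch illustrates the distinction with a simplified SSN-shaped regex (not Google's actual detector).

```python
import re

# Simplified stand-in for the predefined SSN detector, for illustration.
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def condition_triggers(text, min_unique_matches=1, min_match_count=1):
    """Minimum match count looks at total occurrences of the data type;
    minimum unique matches looks at distinct values among them."""
    matches = SSN.findall(text)
    return (len(matches) >= min_match_count
            and len(set(matches)) >= min_unique_matches)

# The same SSN twice: match count is 2, but there is only 1 unique match.
text = "primary: 123-45-6789, on file again as 123-45-6789"
```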
- Sign in to your Google Admin console using an account with super administrator privileges (does not end in @gmail.com).
- In the Admin console, go to Menu > Rules.
- Under Protect your sensitive content, click Create rule.
- Add the name and description for the rule, such as Block when sharing a passport number in Chat and Drive.
- In the Scope section, choose Apply to all <domain.name> or choose to search for and include or exclude organizational units or groups the rule applies to.
- Click Continue.
- For Google Drive select File created, modified, uploaded, or shared. For Google Chat select File uploaded only.
- Click Continue.
- In the Conditions section, click Add Condition and select the following values:
- Content type to scan—All content (note that All content is the only content type available if you select Google Chat, no matter what other apps are selected).
- What to scan for—Matches predefined data type (recommended)
- Select data type—United States Passport
- Likelihood Threshold—High. The confidence threshold for the condition. This is an extra measure used to determine whether messages trigger the action.
- Minimum unique matches—1. The minimum number of times a unique match must occur in a document to trigger the action.
- Minimum match count—1. The number of times the content must appear in a message to trigger the action. For example, if you select 2, content must appear at least twice in a message to trigger the action.
- Click Continue. In the Actions section:
- Under Google Chat, select Block message. Also, select when the action should apply. For this example, deselect Internal conversations, and leave External conversations selected. You can also select which types of chats to apply the rule to.
- Under Google Drive, select Block external sharing.
- (Optional) In the Alerting section:
- Choose a severity level (Low, Medium, or High) for how an event triggered by this rule is reported in the security dashboard.
- Choose whether an event triggered by this rule should also send an alert to the alert center. Also choose whether to email alert notifications to all super administrators or to other recipients.
- Click Continue to review the rule details. The action for Chat is to block content for external conversations only. The action for Drive is to block external sharing.
- Choose a status for the rule:
- Active—Your rule runs immediately
- Inactive—Your rule exists, but does not run immediately. This gives you time to review the rule and share it with team members before implementing. Activate the rule later by going to Security > Access and data control > Data protection > Manage Rules. Click the Inactive status for the rule and select Active. The rule runs after you activate it, and DLP scans for sensitive content.
- Click Create.
Changes can take up to 24 hours but typically happen more quickly. Learn more
- Sign in to your Google Admin console using an account with super administrator privileges (does not end in @gmail.com).
- In the Admin console, go to Menu > Rules.
- Under Protect your sensitive content, click Create rule.
- Add the name and description for the rule, such as Log when sharing names in chat or Chrome.
- In the Scope section, choose Apply to all <domain.name> or choose to search for and include or exclude organizational units or groups the rule applies to.
- Click Continue.
- For Chrome, select File uploaded only. For Google Chat select File uploaded only.
- Click Continue.
- In the Conditions section, click Add Condition and select the following values:
- Content type to scan—All content (note that All content is the only content type available if you select Google Chat, no matter what other apps are selected).
- What to scan for—Contains text string
- Enter contents to match—SpiderWeb
- Click Add condition to add an OR condition, and select the following values:
- Content type to scan—All content
- What to scan for—Contains text string
- Enter contents to match—SpdW
- Click Continue. In the Actions section, under Chrome and Chat, select Audit only. Also, for Chat, select when the action should apply. For this example, select both External conversations and Internal conversations.
- (Optional) In the Alerting section:
- Choose a severity level (Low, Medium, or High) for how an event triggered by this rule is reported in the security dashboard.
- Choose whether an event triggered by this rule should also send an alert to the alert center. Also choose whether to email alert notifications to all super administrators or to other recipients.
- Click Continue to review the rule details. Under Action, note that the action for both Chrome and Chat is audit only, and that for Chat the action applies to external and internal conversations.
- Choose a status for the rule:
- Active—Your rule runs immediately
- Inactive—Your rule exists, but does not run immediately. This gives you time to review the rule and share it with team members before implementing. Activate the rule later by going to Security > Access and data control > Data protection > Manage Rules. Click the Inactive status for the rule and select Active. The rule runs after you activate it, and DLP scans for sensitive content.
- Click Create.
Changes can take up to 24 hours but typically happen more quickly. Learn more
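The two "Contains text string" conditions in the example above combine with OR: either term triggers the audit action. A sketch of that matching, assuming plain substring semantics (the product's exact matching behavior may differ):

```python
TERMS = ["SpiderWeb", "SpdW"]  # the two "Contains text string" conditions

def audit_rule_matches(message: str) -> bool:
    # OR across conditions: any term appearing triggers the audit action.
    # Plain substring semantics assumed for illustration.
    return any(term in message for term in TERMS)
```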
In this example, you create a custom detector that lists project-sensitive terms. Then, you’ll use this custom detector as a condition in a DLP rule.
Create the detector
- Sign in to your Google Admin console using an account with super administrator privileges (does not end in @gmail.com).
- In the Admin console, go to Menu > Security > Access and data control > Data protection.
- Click Manage detectors.
- Click Add detector, then Word list.
- In the Add word list window:
- Add the name (such as Sensitive terms) and a description.
- Add a comma-separated list of your sensitive terms. Note that capitalization and symbols are ignored, and only complete words are matched. Words in word list detectors must contain at least 2 characters that are letters or digits.
- Click Create. Now, you can use the custom detector in a rule condition.
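The word-list matching semantics described in step 5 (capitalization and symbols ignored, whole words only, entries need at least 2 letters or digits) can be sketched like this. The helpers are hypothetical and assume single-word entries for simplicity.

```python
import re

def build_word_list(terms):
    """Normalize word-list entries. As documented, each entry must contain
    at least 2 letters or digits, and matching ignores capitalization."""
    normalized = set()
    for term in terms:
        if len(re.findall(r"[A-Za-z0-9]", term)) < 2:
            raise ValueError(f"word list entry too short: {term!r}")
        normalized.add(term.lower())
    return normalized

def matches_word_list(message, word_list):
    # Whole-word, case-insensitive matching: surrounding symbols are
    # ignored, and partial words do not match.
    words = {w.lower() for w in re.findall(r"[A-Za-z0-9]+", message)}
    return bool(words & word_list)
```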
Use the custom detector in a rule
- Sign in to your Google Admin console using an account with super administrator privileges (does not end in @gmail.com).
- In the Admin console, go to Menu > Rules.
- Under Protect your sensitive content, click Create rule.
- Add the name (such as Sensitive terms to warn users about) and a description for the rule.
- In the Scope section, choose Apply to all <domain.name> or choose to search for and include or exclude organizational units or groups the rule applies to.
- Click Continue.
- For Google Chat select Message sent and File uploaded.
- Click Continue.
- In the Conditions section, click Add Condition and select the following values:
- Content type to scan—All content (note that All content is the only content type available if you select Google Chat, no matter what other apps are selected).
- What to scan for—Matches words from a word list
- Word list name—Sensitive terms
- Match mode—Match any word
- Minimum total times any word detected—1
- Click Continue. In the Actions section, under Chat, select Warn users. Also, for Chat, select when the action should apply. For this example, select External conversations and Internal conversations.
- (Optional) In the Alerting section:
- Choose a severity level (Low, Medium, or High) for how an event triggered by this rule is reported in the security dashboard.
- Choose whether an event triggered by this rule should also send an alert to the alert center. Also choose whether to email alert notifications to all super administrators or to other recipients.
- Click Continue to review the rule details. Under Action, note that the action for Chat is Warn users and applies to External and Internal conversations.
- Choose a status for the rule:
- Active—Your rule runs immediately
- Inactive—Your rule exists, but does not run immediately. This gives you time to review the rule and share it with team members before implementing. Activate the rule later by going to Security > Access and data control > Data protection > Manage Rules. Click the Inactive status for the rule and select Active. The rule runs after you activate it, and DLP scans for sensitive content.
- Click Create.
Changes can take up to 24 hours but typically happen more quickly. Learn more
A rule template provides a set of conditions that cover many typical data protection scenarios. Use a rule template to set up policies for common data protection situations.
This example uses a rule template to block sending a chat message, uploading a file to a chat, or sharing a Drive file, if the message or file contains US personally identifiable information (PII).
Before you begin, sign in to your super administrator account or a delegated admin account with these privileges:
- Organizational unit administrator privileges.
- Groups administrator privileges.
- View DLP rule and Manage DLP rule privileges. Note that you must enable both View and Manage permissions to have complete access for creating and editing rules. We recommend you create a custom role that has both privileges.
- View Metadata and Attributes privileges (required for use of the investigation tool only): Security Center > Investigation Tool > Rule > View Metadata and Attributes.
Learn more about administrator privileges and creating custom administrator roles.
- Sign in to your Google Admin console using your administrator account (does not end in @gmail.com).
- In the Admin console, go to Menu > Rules.
- Click Templates.
- On the Templates page, click Prevent PII information sharing (US).
- In the Name section, accept the default name and description of the rule or enter new values.
- In the Scope section, search for and select the organizational units or groups the rule applies to.
- Click Continue. Under Apps, the following options are preselected:
- For Google Chat, the Message sent and File uploaded boxes are checked.
- For Google Drive, File created, modified, uploaded, or shared is selected.
- Click Continue.
- Review the default preselected conditions for the PII rule template:
- Content type to scan—All content (note that All content is the only content type available if you select Google Chat, no matter what other apps are selected).
- What to scan for—Matches predefined data type (recommended)
- Select data type—Several data types, including Social Security Number, Driver's License Number, and United States Passport number.
- Likelihood Threshold—Very high. The confidence threshold for the condition. This is an extra measure used to determine whether messages trigger the action.
- Minimum unique matches—1. The minimum number of times a unique match must occur in a document to trigger the action.
- Minimum match count—1. The number of times the content must appear in a message to trigger the action. For example, if you select 2, content must appear at least twice in a message to trigger the action.
- Click Continue to review the default Actions selected for the PII rule template (for Chat, Block message; for Drive, Block external sharing).
- Click Continue to review the rule details.
- Choose a status for the rule:
- Active—Your rule runs immediately
- Inactive—Your rule exists, but does not run immediately. This gives you time to review the rule and share it with team members before implementing. Activate the rule later by going to Security > Access and data control > Data protection > Manage Rules. Click the Inactive status for the rule and select Active. The rule runs after you activate it, and DLP scans for sensitive content.
- Click Create.
Changes can take up to 24 hours but typically happen more quickly. Learn more
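The likelihood threshold used by the template (Very high, versus High in the earlier examples) acts as a confidence filter on findings. A conceptual sketch follows; the levels mirror the likelihood scale used by Google Cloud DLP, and the findings themselves are hypothetical.

```python
# Conceptual model of the likelihood threshold: each finding carries a
# confidence level, and only findings at or above the rule's threshold
# count toward triggering the action.
LIKELIHOOD = ["VERY_UNLIKELY", "UNLIKELY", "POSSIBLE", "LIKELY", "VERY_LIKELY"]

def enforceable_findings(findings, threshold="VERY_LIKELY"):
    """Keep only findings at or above the rule's likelihood threshold."""
    cutoff = LIKELIHOOD.index(threshold)
    return [f for f in findings if LIKELIHOOD.index(f["likelihood"]) >= cutoff]

# Hypothetical scan results for one message.
findings = [
    {"info_type": "US_SOCIAL_SECURITY_NUMBER", "likelihood": "VERY_LIKELY"},
    {"info_type": "US_PASSPORT", "likelihood": "POSSIBLE"},
]
```

A higher threshold reduces false positives at the cost of missing lower-confidence matches.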
As your organization's administrator, you can add a custom message that's displayed to users when a rule is triggered. Customize the message to help users understand and fix the violating content when their message is blocked or they receive a warning.
- Sign in to your Google Admin console using an account with super administrator privileges (does not end in @gmail.com).
- In the Admin console, go to Menu > Rules.
- For Classify and protect your sensitive content, click Create rule.
- Click Name and enter a name for the rule.
(Optional) To enter a description for the rule, click Description and enter it.
- For Scope, choose an option:
- To apply the rule to your whole organization, select All in domain.name.
- To apply the rule to specific organizational units or groups, select Organizational units and/or groups and include or exclude the organizational units and groups.
- Click Continue.
- For Apps and Google Chat, choose one or more of the following options:
- To apply the rule to messages, check the Message sent box.
- To apply the rule to attachments, check the File uploaded box.
- Click Continue.
- (Optional) To add a condition:
- Click Add Condition. For Google Chat, All content is the only option.
- Click What to scan for and complete the needed attributes for the type of scan.
If you create a DLP rule with no condition, the rule applies the specified action to every Chat message and all uploaded files (depending on whether you select messages, file attachments, or both when creating the rule).
- Click Continue.
- Click Action and choose an option:
- To warn users, select Warn users.
- To block the message, select Block message.
- Select when to apply the rule and check the Customize message box.
- Enter your customized message. You can create messages up to 300 characters long and insert links. An inserted URL counts toward the character limit of the message.
- (Optional) To choose a severity level for how to report events triggered by this rule in the Admin console, for Alerting, select Low, Medium, or High.
- (Optional) To send an alert to the alert center when the rule triggers an event, check the Send to alert center box. To notify all super admins, check the All super administrators box. You can also enter other email recipients for notifications.
- Click Continue and review the rule details.
- Choose a status for the rule:
- Active—Your rule runs immediately.
- Inactive—Your rule exists but does not run immediately. This option gives you time to review the rule and share it with team members before implementing. To activate the rule later, in the Admin console, go to Security > Access and data control > Data protection > Manage Rules, change the status to Active, and click Confirm.
- Click Create.
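The 300-character limit on customized messages (including inserted URLs) from step 12 can be enforced with a simple pre-check. This helper is hypothetical; the Admin console performs its own validation.

```python
MAX_CUSTOM_MESSAGE_CHARS = 300  # documented limit; inserted URLs count too

def validate_custom_message(text: str) -> str:
    """Hypothetical pre-check for the Customize message field: the full
    text, including any inserted link URLs, must fit in 300 characters."""
    if len(text) > MAX_CUSTOM_MESSAGE_CHARS:
        raise ValueError(
            f"message is {len(text)} characters; the limit is {MAX_CUSTOM_MESSAGE_CHARS}")
    return text
```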
Scan images for sensitive content
Using optical character recognition (OCR), DLP for Chat scans text in image attachments uploaded to Chat messages for sensitive content.
Note that OCR is only available for image attachments uploaded to Google Chat messages, and it can delay messages that contain images.
See Scanned file types above for a complete list of supported image formats.
- Sign in to your Google Admin console using an account with super administrator privileges (does not end in @gmail.com).
- On the Admin console Home page, go to Security > Access and data control > Data protection.
- Under Data protection settings, click Optical character recognition (OCR). The default state is ON. If the setting is OFF, slide it to ON.
- Click Save. This turns on OCR for data protection rules that apply to Google Chat.
Note: Once turned on, the OCR setting will apply to all DLP for Chat rules. It can’t be applied selectively to specific rules.
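Conceptually, the OCR setting controls whether image attachments contribute any text to a scan at all. The routing sketch below is illustrative; the `ocr` callable is a stand-in for real character recognition, and the extension set mirrors the Scanned file types section above.

```python
OCR_IMAGE_EXTENSIONS = {".bmp", ".gif", ".jpeg", ".png"}

def extract_text(filename, data, ocr_enabled, ocr=lambda image_bytes: ""):
    """Route an attachment to text extraction. Image text is only visible
    to DLP when the OCR setting is on; other supported file types are
    read as text directly."""
    ext = "." + filename.rsplit(".", 1)[-1].lower()
    if ext in OCR_IMAGE_EXTENSIONS:
        return ocr(data) if ocr_enabled else ""
    return data.decode("utf-8", errors="ignore")
```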
You can verify that OCR is turned on when creating a data protection rule.
- Sign in to your Google Admin console using an account with super administrator privileges (does not end in @gmail.com).
- On the Admin console Home page, go to Rules.
- Under Protect your sensitive content, click Create a rule.
- Enter a name and description for the rule.
- In the Scope section, choose Apply to all <domain.name> or choose to search for and include or exclude organizational units or groups the rule applies to.
- Click Continue.
- Under Apps, for Google Chat, check File uploaded.
- Verify that OCR is turned on. If the setting is OFF, slide it to ON.
- Click Continue to finish creating the rule. For help on creating rules, see DLP for Chat rule examples, above.
Investigate Chat DLP violations using the Security investigation tool
After you’ve set up Chat DLP rules, rule violations are logged in the Rule log. You can use the Security investigation tool to search the Rule log and get specific information on the violating chat message or attachment, including:
- Name of the DLP rule that was triggered
- Message sender
- Date the message was sent
- Type of conversation—for example, 1:1 chat, or space.
- Message content (depending on your message retention settings).
For complete steps, see Investigate Chat messages to protect your organization's data.
Investigation tool limitations
You can’t view the original violating message or attachment if:
- It was not sent (was blocked). Only content that is sent and violates an audit-only rule can be viewed.
- It was sent in a conversation owned by another organization.
- The message is past the retention period.
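Chat audit events, including DLP scan status, can also be pulled programmatically with the Admin SDK Reports API. A sketch follows; the client call is commented out because it needs admin credentials, and only the parameter-building helper runs here.

```python
def chat_audit_query(start_iso, end_iso, max_results=100):
    """Parameters for an Admin SDK Reports API activities.list call
    against the Chat audit log, which records DLP scan status for sent
    messages and uploaded attachments."""
    return {
        "userKey": "all",
        "applicationName": "chat",
        "startTime": start_iso,
        "endTime": end_iso,
        "maxResults": max_results,
    }

# Requires admin credentials, so the client call is shown commented out:
# from googleapiclient.discovery import build
# service = build("admin", "reports_v1", credentials=creds)
# events = service.activities().list(**chat_audit_query(
#     "2024-05-01T00:00:00Z", "2024-05-02T00:00:00Z")).execute()
```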