
Collaboration Tool Security: Hidden Risks in Slack, Teams, and Chat Platforms


It is 11:47 PM. A backend engineer is debugging a production outage. The database is returning timeout errors and the on-call Slack channel is filling up with pings from customer support. Her colleague asks for the production database credentials so he can check connection pool settings. She pastes the username and password directly into the channel. Eleven people are in the channel. Three of them are contractors whose access was supposed to expire last quarter. The message is indexed, searchable, and will exist in Slack’s retention archive for as long as the workspace does.

The outage gets resolved by midnight. The credentials stay in that channel forever. Six months later, when a contractor’s Slack account is compromised through a reused password, those credentials are the first thing the attacker finds.

This scenario plays out constantly in organizations of every size. The risks hiding in workplace chat platforms go far beyond the occasional careless message.

What makes collaboration tools a security risk?


Collaboration tool security refers to the policies, controls, and employee behaviors that protect corporate data flowing through workplace chat platforms, video conferencing tools, and shared workspaces like Slack, Microsoft Teams, Zoom, and Google Chat. These platforms process an enormous volume of sensitive information daily. Slack reports that its average enterprise customer sends over 200,000 messages per month. Microsoft Teams surpassed 320 million monthly active users in 2024. Each message, file upload, screen share, and integration represents a potential exposure point that most security programs overlook.

The core problem is a perception gap. Employees treat chat messages like hallway conversations. Informal, ephemeral, low-stakes. But unlike a hallway conversation, a chat message is stored on third-party servers, backed up, indexed for search, accessible to workspace admins, potentially subject to legal discovery, and readable by every integration connected to that channel. The informality that makes chat productive also makes it dangerous.

Most organizations have invested heavily in email security: phishing filters, DLP scanning, encryption gateways. Chat platforms receive a fraction of that scrutiny, despite carrying an increasing share of sensitive communication. Gartner estimated that by 2025, 70% of team communication in large enterprises would happen outside of email. The security tooling has not kept pace with that shift.

The attack surface is also wider than most security teams realize. Beyond messages, collaboration platforms handle file storage, video recordings, voice transcripts, screen shares, calendar integrations, and third-party app connections. A single Slack workspace is not just a messaging tool. It is a data warehouse of conversations, decisions, credentials, and documents that accumulates continuously and is rarely pruned.

Why do credentials keep ending up in chat messages?


This is the single biggest risk in collaboration tools, and it happens with depressing regularity. A 2023 1Password survey found that 34% of IT and security workers have pasted credentials into a chat message or shared document. Among all employees, the number is likely higher because non-technical staff are less aware of the risk.

The scenario is almost always the same. Someone needs access to a system. The “proper” way to grant it (updating permissions, using a secrets manager, submitting an access request) takes time. Pasting the password into Slack takes three seconds. Under deadline pressure, three seconds wins every time.

It is not just production credentials. AWS access keys, Stripe API tokens, database connection strings, SSH keys, VPN credentials, admin panel passwords. GitGuardian’s 2024 State of Secrets Sprawl report found that 12.8 million new hardcoded secrets appeared in public GitHub commits in 2023 alone. The same behavior that puts secrets in code puts them in chat. The difference is that GitHub has automated scanning for leaked secrets. Slack and Teams do not, unless an organization specifically configures DLP rules to catch them.

Once credentials land in a chat channel, they are searchable. Anyone with access to that channel can find them by searching for keywords like “password,” “credentials,” or “login.” Attackers who compromise a single Slack account often run exactly this search as their first move. The Collaboration Tool Hygiene exercise walks through this scenario in detail, showing employees how credentials posted in chat create persistent, searchable vulnerabilities that outlast the original need.
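
The search-for-secrets move is mechanical enough to script. Below is a minimal Python sketch of the kind of pattern sweep an attacker runs against exported channel history, and that a defender's DLP rule can run first; the patterns and sample messages are illustrative, not a complete secret-detection ruleset:

```python
import re

# Illustrative patterns only; real secret scanners use far larger rulesets.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "password_assignment": re.compile(r"(?i)\b(password|passwd|pwd)\s*[:=]\s*\S+"),
    "connection_string": re.compile(r"(?i)\b\w+://[^\s:]+:[^\s@]+@[\w.-]+"),
}

def scan_messages(messages):
    """Return (message_index, pattern_name) for every suspicious hit."""
    hits = []
    for i, text in enumerate(messages):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(text):
                hits.append((i, name))
    return hits

# A tiny simulated channel history:
history = [
    "deploy finished, all green",
    "password: hunter2, rotate it after the incident",
    "db is at postgres://app:s3cret@db.internal:5432/prod",
]

print(scan_messages(history))
# [(1, 'password_assignment'), (2, 'connection_string')]
```

The same sweep, run as a DLP rule at post time rather than against history after the fact, is what turns this from an attacker's first move into a defensive control.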

Deleting the message does not solve the problem either. Slack retains deleted messages in its backend for compliance and legal hold purposes. Even if the message vanishes from the channel, it persists in exports and backups. And anyone who saw it before deletion could have copied it. The credentials are burned the instant they hit the channel, regardless of what happens to the message afterward.

The fix is not telling people to stop sharing credentials in chat. It is giving them a tool that makes the secure path faster than the insecure one. A password manager with secure sharing lets you grant time-limited access to a credential without ever exposing the plaintext password in a message. The recipient gets access. The password never touches the chat log. For organizations dealing with credential security more broadly, the password manager also eliminates the reuse problem that turns a single chat exposure into a multi-system compromise.

What happens with integrations nobody audits?


The average Slack workspace has dozens of integrations: bots, webhooks, custom apps, third-party connectors. Each one has an API token with specific permissions. Some can read messages. Some can post on behalf of users. Some have access to file uploads across the entire workspace.

A 2024 Productiv analysis found that enterprises average 87 SaaS integrations connected to their primary collaboration platforms. Many were installed for a specific project, by a specific person, who may no longer be at the company. The integration stays active. Its token stays valid. Nobody reviews whether it still needs the permissions it was granted.

Webhooks are particularly risky. An outgoing webhook that posts build notifications to a channel sounds harmless. But if that webhook URL leaks or the receiving endpoint is compromised, an attacker can inject messages into internal channels. Incoming webhooks are worse. They provide a URL that anyone with the link can use to post messages to a specific channel. These URLs are often stored in CI/CD configs, scripts, and documentation wikis with minimal access control.

In 2023, security researchers demonstrated how a compromised incoming webhook URL could be used to post convincing phishing messages to internal Slack channels, impersonating automated systems that employees trust. A message from “Jira Bot” asking employees to re-authenticate looks credible when it appears in an engineering channel alongside real Jira notifications.
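
To make this concrete, here is a hedged sketch of how little a leaked incoming webhook URL demands of an attacker. The URL is a placeholder, and the `username` override shown was honored by Slack's legacy incoming webhooks; newer app-based webhooks pin the sender identity, but injected message text still lands in the channel either way:

```python
import json
from urllib import request

# Placeholder, not a real hook: anyone holding a URL of this shape can post.
WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/EXAMPLE"

def build_payload(text, username="Jira Bot"):
    """Slack-style webhook payload; the display-name override is what
    made legacy-webhook impersonation convincing."""
    return json.dumps({"text": text, "username": username}).encode()

def post(text):
    # Real network call, shown for completeness; not invoked in this sketch.
    req = request.Request(
        WEBHOOK_URL,
        data=build_payload(text),
        headers={"Content-Type": "application/json"},
    )
    return request.urlopen(req)

print(build_payload("Session expired. Re-authenticate: https://example.test/login"))
```

Treat webhook URLs with the same secrecy as API keys: anyone who reads one out of a CI config or wiki page can speak inside your workspace.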

This is a form of shadow IT that hides in plain sight. The integrations are technically “approved” because someone with workspace admin rights installed them. But nobody maintains an inventory, reviews permissions quarterly, or deactivates integrations when the project that needed them ends.

The 2021 Electronic Arts (EA) Slack breach illustrates the risk. Attackers purchased a stolen Slack session cookie for $10 on a dark web marketplace, logged into EA’s internal Slack workspace, and used it to social-engineer an IT support agent into granting them access to the internal network. From there, they stole 780 GB of source code. The initial entry was through a collaboration tool. The path from cookie to source code took less than a day.

Our Collaboration Tool Hygiene exercise includes a module on identifying and auditing stale integrations before they become entry points. It also covers session token hygiene, which most employees do not think about when they check “keep me signed in” on their work laptop at home.

Who still has access after they leave?

When an employee departs, IT typically deactivates their Active Directory account, revokes VPN access, and collects their laptop. Collaboration tool access is often an afterthought. The problem is particularly acute because Slack and Teams accounts may not be tied to the same identity provider as other corporate systems, especially for external guests and contractors who were never in Active Directory to begin with.

Slack guest accounts for contractors and agency partners are especially problematic. They are created for a specific engagement, rarely documented in the same system as employee accounts, and almost never included in offboarding checklists. A 2023 Cerby report found that 60% of organizations had active accounts for former employees or contractors in at least one SaaS application. The average large enterprise works with 200+ external vendors and agencies at any given time. Each vendor relationship generates guest accounts that someone needs to track and eventually revoke.

Microsoft Teams shared channels compound the risk. When two organizations connect via shared channels, users from the external organization gain access to messages, files, and sometimes SharePoint sites. When the partnership ends, disconnecting the shared channel is a manual step that someone has to remember to do.

The problem is worse at organizations that use multiple collaboration platforms simultaneously. A company might use Slack for engineering, Teams for the rest of the business, and a separate tool for external client communication. Each platform has its own identity system, its own guest access model, and its own deprovisioning process. Nobody owns the complete picture of who has access where.

Our Guest Access Management exercise trains employees to audit external access grants and flag accounts that have outlived their purpose.

The access problem connects to broader insider threat risks. An ex-contractor with lingering Slack access can read strategic discussions, monitor hiring plans, or exfiltrate shared files without tripping any security control, because as far as the system is concerned, they are still authorized.

Slack’s own 2024 transparency report showed that enterprise workspace admins deactivate guest accounts an average of 23 days after the engagement ends. That is 23 days of continued access to potentially sensitive channels, files, and message history. For organizations handling regulated data, those 23 days can constitute a compliance violation.

Can someone eavesdrop on your collaboration tool calls?


Remote and hybrid work moved sensitive conversations from conference rooms to video calls carried over WiFi networks and Bluetooth headsets. This introduced a category of risk that most collaboration tool security programs ignore entirely. A 2024 Buffer State of Remote Work report found that 98% of remote workers want to continue working remotely at least some of the time. The distributed workforce is permanent, and so are the audio security risks it brings.

Bluetooth, the protocol connecting your headset to your laptop, has known vulnerabilities. The KNOB (Key Negotiation of Bluetooth) attack, disclosed in 2019, allows an attacker within radio range to force a Bluetooth connection to use a weaker encryption key, potentially enabling real-time audio interception. The BLUFFS attack, published by researchers at EURECOM in late 2023, demonstrated that an attacker can force Bluetooth devices into a legacy pairing mode that allows session key brute-forcing across multiple sessions.

The practical risk is highest in shared spaces. Coffee shops, coworking spaces, airport lounges, hotel lobbies. An employee taking a board call from a hotel lobby over a Bluetooth headset is broadcasting audio data within a 30-foot radius. The Safe Bluetooth Practices exercise covers the specific scenarios where wireless eavesdropping becomes a realistic threat and teaches employees when to switch to wired audio or defer sensitive calls.

This is not theoretical paranoia. State-sponsored and corporate espionage operations have documented Bluetooth interception capabilities. For most organizations, the bigger risk is opportunistic. An attacker sitting in the same coworking space, scanning for Bluetooth devices, and intercepting fragments of a call that happens to contain something valuable.

Screen sharing creates a parallel risk. During a Zoom or Teams call, an employee shares their screen to walk through a document. A notification pops up from their personal email, or a browser tab with sensitive data is briefly visible, or a Slack message with a customer name scrolls past. Screen sharing broadcasts everything on the display, not just the intended window. A 2023 Tessian survey found that 28% of employees have accidentally shared sensitive data during a screen sharing session. The Shadow IT Awareness exercise covers how personal apps running alongside work tools create these accidental exposure moments.

How often do files end up in the wrong channel?


Accidental sharing is quieter than credential exposure but potentially just as damaging. An HR manager uploads a salary spreadsheet to a public channel instead of a private one. A sales rep shares a contract with a customer’s competitor because she picked the wrong channel from a dropdown. A developer posts a production config file containing API keys into a general engineering channel instead of the restricted infrastructure channel. These are not edge cases. They happen in every organization with more than a handful of channels.

The Verizon 2024 DBIR found that misdelivery (sending information to the wrong recipient) accounted for 43% of errors leading to data breaches. Collaboration tools make misdelivery frictionless. When every channel is one click away, the wrong click has the same weight as the right one.

Unlike email, where you at least see the recipient’s name before sending, chat platforms let you post to channels with similar names in rapid succession. The muscle memory of typing in the message box and hitting Enter does not leave room for the “did I pick the right channel?” check. Our Insider Threat (Accidental) exercise simulates exactly this type of scenario, where a well-intentioned employee sends the wrong file to the wrong place and has to deal with the consequences.

The problem is compounded by how collaboration tools handle file permissions. A document shared in a Slack channel inherits the channel’s access permissions. If the channel has 200 members, all 200 now have access to that file. Some platforms retain file access even after the message is deleted.

Most employees have never been trained to think about file permission inheritance in chat. They understand email attachments go to specific recipients. They do not realize that uploading a file to a Teams channel can make it accessible to everyone with SharePoint access to that team’s underlying site. Understanding cloud sharing controls and how file permissions propagate through chat platforms is a practical skill that prevents these quiet data exposures.
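
The inheritance behavior can be shown with a toy model. The semantics here are assumed and simplified (no platform API is being called): a file is readable by whoever is in its channel, now or later:

```python
# Toy model of channel-inherited file permissions (assumed, simplified).
channels = {
    "#engineering": {"alice", "bob", "carol"},
    "#restricted-infra": {"alice"},
}

# Config file uploaded to the broad channel instead of the restricted one:
files = [{"name": "prod-config.yaml", "channel": "#engineering"}]

def readers(file):
    """Everyone in the file's channel can read it; there is no per-file ACL."""
    return channels[file["channel"]]

print(sorted(readers(files[0])))
# ['alice', 'bob', 'carol']

# A later joiner inherits access to every file already in the channel:
channels["#engineering"].add("dave")
print("dave" in readers(files[0]))
# True
```

The second print is the part employees miss: access is evaluated against the channel's current membership, so a file's audience keeps growing as the channel does.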

Are “private” channels actually private?


Employees say things in private channels they would never put in an email. Strategic plans, opinions about clients, complaints about management, salary discussions. They assume “private” means what it sounds like. It does not.

Slack workspace administrators on Enterprise Grid plans can access private channel messages through compliance exports. Microsoft Teams admins with eDiscovery permissions can search and export private channel content. Corporate legal teams can obtain private channel records through litigation holds. And any integration with the right OAuth scope can read private channel messages silently. A 2022 survey by Aware found that 68% of employees believed their direct messages in workplace chat were visible only to the participants. They are not.

The Slack breach disclosed in early 2023 demonstrated the stakes. Slack reported that attackers used stolen employee tokens to access its externally hosted code repositories. The breach did not result from a vulnerability in any channel feature, but from the broader access that authentication tokens grant. If an attacker has your session token, every channel you belong to is accessible to them, private or not.

This matters for data leakage prevention because employees treat private channels as safe spaces for sharing sensitive information. They post API keys “just between us,” share customer complaints with identifying details, and discuss acquisition targets. All of this content is discoverable, exportable, and accessible to anyone who compromises an admin account or a sufficiently privileged integration.

For organizations in regulated industries, this creates compliance exposure when protected data appears in channels that are not subject to proper retention and access controls. A private Slack channel containing HIPAA-covered patient information or GDPR-protected personal data is subject to the same regulatory requirements as a database or email thread. The channel’s “private” label provides no legal protection. See the compliance mapping guide for how training requirements map to specific regulatory frameworks.

What does a practical collaboration tool security program look like?


Telling employees to “be careful in chat” accomplishes nothing measurable. Vague guidance produces vague compliance. A working program addresses the specific behaviors that create risk, with controls and training mapped to each one.

Credential sharing. Deploy a password manager with secure sharing features and make it the path of least resistance. Block messages containing patterns that look like passwords or API keys using Slack Enterprise DLP or Microsoft Purview. Train employees on why chat-based credential sharing is dangerous through exercises like our Collaboration Tool Hygiene simulation.

Integration hygiene. Audit connected apps quarterly. Require admin approval for new integrations. Set expiration dates on webhook URLs. Remove integrations installed by employees who have left the organization. Rotate webhook URLs on a regular schedule, the same way you rotate API keys. Secure messaging practices training should include integration awareness, not just message content.
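
A quarterly audit does not need dedicated tooling to start; a short script over an exported integration inventory covers the first pass. The data shape and thresholds below are hypothetical:

```python
from datetime import date

# Hypothetical inventory, e.g. assembled from an admin console export.
integrations = [
    {"name": "ci-notifier", "installed_by": "alice", "last_used": date(2024, 11, 2)},
    {"name": "old-survey-bot", "installed_by": "departed_pm", "last_used": date(2023, 1, 15)},
]
active_employees = {"alice", "bob"}

def audit(apps, today=date(2024, 12, 1), stale_days=90):
    """Flag integrations whose installer has left or that look abandoned."""
    flagged = []
    for app in apps:
        reasons = []
        if app["installed_by"] not in active_employees:
            reasons.append("installer no longer at company")
        if (today - app["last_used"]).days > stale_days:
            reasons.append("unused for more than %d days" % stale_days)
        if reasons:
            flagged.append((app["name"], reasons))
    return flagged

print(audit(integrations))
# [('old-survey-bot', ['installer no longer at company', 'unused for more than 90 days'])]
```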

Access lifecycle. Add collaboration tool deprovisioning to your offboarding checklist. Audit guest accounts monthly. Set expiration dates on external access grants. Review shared channel connections when partnerships end. For contractors and agency partners, set calendar reminders tied to contract end dates rather than relying on someone remembering to revoke access manually.
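
The calendar-reminder step can be replaced by a check that runs on a schedule. This sketch assumes a register mapping guest accounts to contract end dates, which most organizations would need to build first:

```python
from datetime import date

# Hypothetical register of external guest accounts and contract end dates.
guests = [
    {"user": "agency-designer", "contract_ends": date(2024, 6, 30)},
    {"user": "audit-contractor", "contract_ends": date(2025, 3, 31)},
]

def overdue_guests(register, today):
    """Guest accounts still active past their contract end date."""
    return [g["user"] for g in register if g["contract_ends"] < today]

print(overdue_guests(guests, today=date(2024, 12, 1)))
# ['agency-designer']
```

Run daily, this turns the 23-day deactivation lag cited earlier into a same-day revocation ticket.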

Channel discipline. Establish naming conventions that signal sensitivity levels. A prefix like #proj- for project channels, #ext- for channels with external guests, and #restricted- for sensitive topics gives employees a visual cue before they post. Train employees to verify the channel before posting. Implement data classification labels in channels that handle sensitive content. Use DLP policies to flag and quarantine messages containing PII, credentials, or classification-marked content.
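
The prefix convention is also mechanically checkable. The rule below, that every channel must carry one of the three prefixes, is an illustrative simplification of whatever policy an organization actually adopts:

```python
import re

# Illustrative policy: names start with #proj-, #ext-, or #restricted-.
ALLOWED = re.compile(r"^#(proj|ext|restricted)-[a-z0-9-]+$")

def nonconforming(channel_names):
    """Channels that do not match the naming convention."""
    return [name for name in channel_names if not ALLOWED.match(name)]

print(nonconforming(["#proj-apollo", "#ext-acme-agency", "#salaries"]))
# ['#salaries']
```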

Wireless security awareness. Include Bluetooth and WiFi hygiene in your remote work security policy. Teach employees when wired connections are necessary for sensitive calls. Cover this gap with exercises like Safe Bluetooth Practices and reinforce awareness of how social media oversharing can reveal details that make targeted eavesdropping easier.

Screen sharing hygiene. Train employees to use window-level sharing instead of full-screen sharing during video calls. Close unnecessary apps and disable notification pop-ups before presenting. These small habits prevent the accidental exposures that happen when a Slack DM or personal email notification flashes across a shared screen during a client presentation.


The fastest way to build these habits is through practice, not policy documents. Browse our full security awareness training catalogue for exercises covering collaboration tools, credential management, data leakage, and access control. Every exercise puts employees inside a realistic scenario where these risks play out, because reading about credential exposure in a slide deck is not the same as watching a simulated attacker search your Slack history for the word “password.”