Slack has been using your messages to train its ML and AI models – Exposed!

A recent discovery by the DuckBill Group revealed that Slack has been using the text, documents, and other content that users send through the service to train its AI and ML algorithms.

Slack’s recent controversies surrounding its machine-learning practices have raised serious questions about user privacy and data protection.

The revelation that the company trains its models on user messages, files, and other content without explicit consent has sparked widespread concern among users and privacy advocates alike.

The issue came to light when Corey Quinn, an executive at DuckBill Group, uncovered the policy buried within Slack’s Privacy Principles and shared his discovery on social media.

This revelation shed light on a practice where Slack’s systems analyse various forms of user data, including messages and content sent through the platform, as well as additional information outlined in the company’s Privacy Policy and customer agreements.

What’s particularly troubling about this practice is that it operates on an opt-out basis, meaning users’ private data is automatically included in the training process unless they specifically request to be excluded from the data set.

To make matters worse, users cannot opt out by themselves; instead, they must rely on their organisation’s Slack administrator to initiate the process on their behalf, creating an additional layer of complexity and inconvenience.
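For reference, Slack’s Privacy Principles at the time reportedly directed workspace owners to email the company’s Customer Experience team at feedback@slack.com with the workspace or org URL and the subject line “Slack global model opt-out request” in order to opt an entire workspace out – individual users had no self-service option.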

In response to the mounting concerns, Slack attempted to address the issue with a blog post aimed at clarifying how a customer’s data is used. The company claimed that user data is not used to train its generative AI products but is instead fed to machine learning models for tasks such as channel and emoji recommendations and search results.

However, this explanation failed to assuage the privacy concerns that the discovery raised, as users remained sceptical about the extent of Slack’s access to their data and the adequacy of its privacy safeguards.

The convoluted opt-out process further exacerbates the situation, placing the burden on users to navigate administrative channels and actively request exclusion from data training activities. This approach shifts the responsibility onto users to safeguard their data, rather than placing the onus on the company to obtain explicit consent before using personal information for training purposes.

Moreover, inconsistencies in Slack’s privacy policies have added to the confusion and scepticism surrounding the company’s data practices. While one section claims that Slack cannot access the underlying content when developing AI/ML models, other policies appear to contradict this assertion, leading to uncertainty among users about the true extent of data access and usage.

Additionally, Slack’s marketing of its premium generative AI tools as not using user data for training purposes has further muddied the waters. While these tools may indeed adhere to strict privacy standards, the implication that all user data is immune from AI training processes is misleading, given the company’s practices with other machine-learning models.

Ultimately, Slack’s handling of user data and its communication regarding privacy practices have raised significant concerns about transparency, consent, and data protection. As users increasingly prioritise privacy and data security, it is imperative for companies like Slack to address these issues promptly and transparently to maintain trust and accountability within their user communities.
