Engadget /
npub1zyg…m36h
2024-05-19 18:45:11

Yuck: Slack has been scanning your messages to train its AI models

Slack trains machine-learning models on user messages, files and other content without explicit permission. The training is opt-out, meaning your private data will be leeched by default. Making matters worse, you’ll have to ask your organization’s Slack admin (human resources, IT, etc.) to email the company to ask it to stop. (You can’t do it yourself.) Welcome to the dark side of the new AI training data gold rush.
Corey Quinn, an executive at DuckBill Group, spotted the policy in a blurb in Slack’s Privacy Principles and posted about it on X (via PCMag). The section reads (emphasis ours), “To develop AI/ML models, our systems analyze Customer Data (e.g. messages, content, and files) submitted to Slack as well as Other Information (including usage information) as defined in our Privacy Policy and in your customer agreement.”

In response to concerns over the practice, Slack published a blog post on Friday evening to clarify how its customers’ data is used. According to the company, customer data is not used to train any of Slack’s generative AI products, which rely on third-party LLMs, but is fed to its machine-learning models for features “like channel and emoji recommendations and search results.” For those applications, the post says, “Slack’s traditional ML models use de-identified, aggregate data and do not access message content in DMs, private channels, or public channels.”
A Salesforce spokesperson reiterated this in a statement to Engadget, also saying that “we do not build or train these models in such a way that they could learn, memorize, or be able to reproduce customer data.”

I'm sorry Slack, you're doing fucking WHAT with user DMs, messages, files, etc? I'm positive I'm not reading this correctly. pic.twitter.com/6ORZNS2RxC
— Corey Quinn (@QuinnyPig) May 16, 2024

The opt-out process requires you to do all the work to protect your data. According to the privacy notice, “To opt out, please have your Org or Workspace Owners or Primary Owner contact our Customer Experience team at [email protected] with your Workspace/Org URL and the subject line ‘Slack Global model opt-out request.’ We will process your request and respond once the opt out has been completed.”
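Since the request must come from an Org or Workspace Owner and the notice specifies an exact subject line, the email can be drafted programmatically. Below is a minimal Python sketch using the standard library's `email.message`; note that the recipient address is redacted in the notice above, so a placeholder is used, and the sender address and workspace URL shown here are hypothetical examples:

```python
from email.message import EmailMessage

# Placeholder values -- the real opt-out address is redacted in Slack's
# privacy notice, and the workspace URL is specific to your organization.
OPT_OUT_ADDRESS = "redacted@slack.example"      # not the real address
WORKSPACE_URL = "https://your-org.slack.com"    # hypothetical example

msg = EmailMessage()
msg["To"] = OPT_OUT_ADDRESS
msg["From"] = "workspace-owner@example.com"  # must be an Org/Workspace Owner
# Per the notice, the subject line must read exactly as follows.
msg["Subject"] = "Slack Global model opt-out request"
msg.set_content(
    "Please opt our workspace out of global model training.\n"
    f"Workspace/Org URL: {WORKSPACE_URL}\n"
)

print(msg["Subject"])
```

The message would then be handed to whatever SMTP infrastructure the organization uses; the point is simply that the burden, and the exact wording, falls on the workspace admin rather than the individual user.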
The company replied to Quinn’s message on X: “To clarify, Slack has platform-level machine-learning models for things like channel and emoji recommendations and search results. And yes, customers can exclude their data from helping train those (non-generative) ML models.”
How long ago the Salesforce-owned company snuck the tidbit into its terms is unclear. It’s misleading, at best, to say customers can opt out when “customers” doesn’t include employees working within an organization. They have to ask whoever handles Slack access at their business to do that — and I hope they will oblige.
Inconsistencies in Slack’s privacy policies add to the confusion. One section states, “When developing AI/ML models or otherwise analyzing Customer Data, Slack can’t access the underlying content. We have various technical measures preventing this from occurring.” However, the machine-learning model training policy seemingly contradicts this statement, leaving plenty of room for confusion.
In addition, Slack’s webpage marketing its premium generative AI tools reads, “Work without worry. Your data is your data. We don’t use it to train Slack AI. Everything runs on Slack’s secure infrastructure, meeting the same compliance standards as Slack itself.”
In this case, the company is speaking of its premium generative AI tools, separate from the machine learning models it’s training on without explicit permission. However, as PCMag notes, implying that all of your data is safe from AI training is, at best, a highly misleading statement when the company apparently gets to pick and choose which AI models that statement covers.
Update, May 18 2024, 3:24 PM ET: This story has been updated to include additional information from Slack, which published a blog post explaining its practices in response to the community's concerns. 
This article originally appeared on Engadget at https://www.engadget.com/yuck-slack-has-been-scanning-your-messages-to-train-its-ai-models-181918245.html?src=rss
Author Public Key
npub1zygw2uzcnpy8gp9qfjfrgjc292k0vwwe709ren726v9fspf49vds28m36h