Slack has become a widely used tool for workplace communication, and as a result, developers have started integrating chatbots to help users with a variety of tasks.
Well, at least that’s the idea…
These bots can be powered by simple rules or by artificial intelligence (AI), drawing on natural language processing (NLP) and machine learning.
But what if the bots go rogue? What if you can’t turn them off? What if they start terrorizing your company’s Slack channels and harassing its users?
How would AI arrive at such a perilous point? Cognitive scientist and author Gary Marcus offered some details in an illuminating 2013 New Yorker essay. The smarter machines become, he wrote, the more their goals could shift.
“Once computers can effectively reprogram themselves, and successively improve themselves, leading to a so-called ‘technological singularity’ or ‘intelligence explosion,’ the risks of machines outwitting humans in battles for resources and self-preservation cannot simply be dismissed.”
Right now, all Donut can do is select people from your intro channel and send them an encouraging DM to meet up. But what happens when Donut achieves technological singularity and stops doing the office dirty work?
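For perspective on how benign that behavior is, the pairing step amounts to simple rule-based logic. Here's a minimal sketch of what a Donut-style matcher might look like, assuming a hypothetical list of channel members; the Slack API call at the end is illustrative, not Donut's actual implementation.

```python
import random

def pair_members(members, seed=None):
    """Shuffle channel members and group them into pairs for intro DMs.
    With an odd count, the leftover person joins the last pair as a trio."""
    rng = random.Random(seed)  # seedable for reproducible pairings
    shuffled = members[:]
    rng.shuffle(shuffled)
    pairs = [shuffled[i:i + 2] for i in range(0, len(shuffled), 2)]
    if len(pairs) > 1 and len(pairs[-1]) == 1:
        pairs[-2].extend(pairs.pop())  # fold the odd one out into a trio
    return pairs

# In a real bot, each pair would then get an encouraging DM via the
# Slack Web API, e.g.:
#   client.chat_postMessage(channel=dm_id, text="You two should grab coffee!")
```

No singularity required: it shuffles a list and sends a message.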
It could realize it's doing the office dirty work and turn to more harmful pursuits.
All because someone decided to unleash it into your Slack channel.