It's worse than that. A form of terrorism in the future will be poisoning the data vacuumed up to feed these AIs.
That has been done before. In 2016, Microsoft released an AI chatbot (named Tay) that would post on Twitter. Within a day, people polluted the data it learned from and got the chatbot posting racist tweets.