
No Food for Thought

Artificial Intelligence's Next Achievement: Unlimited Trolling?

admin Friday November 6, 2020

Large-scale peer production projects rely heavily on contributions from potentially anonymous individuals. International volunteer projects, such as Wikimedia, are largely based on a general sense of trust and do not verify the identities of (apparent) contributors. While this already creates huge issues for Wikimedia and many others, ongoing developments in artificial intelligence could soon enable cheap attacks on such projects, causing massively larger wastes of effort and threatening these projects' viability.

Now is the time for globally verifiable identities.

2023 Update

It turned out this prediction was quite right (though not entirely).

2026 Update

When I wrote this, I imagined such attacks would be intentional. Yet surprisingly, the first case of artificial defamation against a free software project that I learned about came from a rogue "AI" agent, apparently controlled by a well-intentioned individual. If even well-intentioned "AI" can cause such horrific messes, what will mischievous AI achieve?

One consolation? The victim wrote an interesting conclusion:

Scott Shambaugh wrote:
But I cannot stress enough how much this story is not really about the role of AI in open source software. This is about our systems of reputation, identity, and trust breaking down. So many of our foundational institutions – hiring, journalism, law, public discourse – are built on the assumption that reputation is hard to build and hard to destroy.


Unfortunately, my understanding is that Shambaugh refers to his own reputation, while the actual problem here is MJ Rathbun being given attention despite having no reputation.