Parler Returns to Apple App Store With App That Cracks Down on Hate Speech Only on iPhones

The social network Parler is back on the Apple App Store after retooling its app to crack down on hate speech following its post-Capitol riot ouster -- but the crackdown applies only on Apple devices, The Washington Post reports.

Parler was booted from Apple's App Store and Google's Play Store after users shared plans for, and videos of, their crimes during the January 6 Capitol riot on the network. Parler ultimately agreed to ramp up moderation of hate speech in response to the ban and will now return to the App Store.

Posts that Parler's new artificial intelligence moderation system labels "hate" will be hidden from users on iPhones and iPads. People who use the network in a web browser, however, will still see the hate-labeled posts.
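In practical terms, that is a platform-gated content filter: the same feed, served with hate-labeled posts stripped out only for Apple clients. A minimal sketch of how such a filter could work is below; the names (Post, ClientPlatform, filterFeed) and label values are hypothetical, not Parler's actual implementation.

```typescript
// Illustrative sketch only: a feed filter that hides posts an AI moderation
// pass has labeled "hate", but only for Apple (iOS) clients, matching the
// behavior the article describes. All identifiers here are assumptions.

type ClientPlatform = "ios" | "android" | "web";

interface Post {
  id: string;
  body: string;
  labels: string[]; // labels assigned by the moderation model, e.g. "hate"
}

function filterFeed(posts: Post[], platform: ClientPlatform): Post[] {
  // Per the reported behavior: iPhone/iPad users never see hate-labeled
  // posts, while browser users still do.
  if (platform !== "ios") return posts;
  return posts.filter((post) => !post.labels.includes("hate"));
}

// Example: the same feed served to two different clients.
const feed: Post[] = [
  { id: "1", body: "Hello, world", labels: [] },
  { id: "2", body: "(flagged content)", labels: ["hate"] },
];

console.log(filterFeed(feed, "ios").length); // 1 -- hidden on iPhone/iPad
console.log(filterFeed(feed, "web").length); // 2 -- still visible in a browser
```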

Amy Peikoff, Parler’s chief policy officer, told the Post that the company has resisted limiting content but recognized the importance of its relationship with Apple.

“At Parler we embrace the entire First Amendment meaning freedom of expression and conscience are protected,” Peikoff said. “We permit a maximum amount of legally protected speech.”

Parler pushing to allow some hate speech:

Parler is still pushing Apple to permit a feature that would let users click through to see the hidden content, but "the banning of hate speech was a condition for reinstatement on the App Store," the Post reported.

“Where Parler is different [from Apple], is where content is legal, we prefer to put the tools in the hands of users to decide what ends up in their feeds,” Peikoff told the outlet, describing the iOS app as “Parler Lite or Parler PG.”

Is the new system enough?

Other networks, like Facebook and YouTube, employ hundreds of content moderators, but Parler is much smaller and does not have the resources for in-house human moderation.

AI moderation is “decently good at identifying the most obviously harmful material. It’s not so great at capturing nuance, and that’s where human moderation becomes necessary,” Sarah Myers West, a researcher at NYU’s AI Now Institute, told the Post. “It’s really critical for highly skilled labor to be able to make these kinds of decisions.”

The company has instead hired Hive, a content moderation firm, which has contracted more than 1,000 moderators to review posts flagged by its algorithm.

“Even the best AI moderation has some error rate,” Hive CEO Kevin Guo told the Post.
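One plausible way to combine these pieces -- purely an illustrative sketch, not Parler's or Hive's documented pipeline -- is threshold-based routing: classifications the model is very confident about are acted on automatically, while the nuanced middle band that West describes is queued for a human moderator. The thresholds and names (ModerationScore, route) below are assumptions.

```typescript
// Illustrative sketch only: routing an AI classifier's output so that
// borderline cases go to human review, as the article's sources suggest.
// Thresholds are made up for demonstration.

interface ModerationScore {
  postId: string;
  hateScore: number; // model confidence in [0, 1] that the post is hate speech
}

type Decision = "label-hate" | "allow" | "human-review";

function route(score: ModerationScore): Decision {
  if (score.hateScore >= 0.95) return "label-hate"; // obviously harmful: act automatically
  if (score.hateScore <= 0.05) return "allow";      // obviously fine: pass through
  return "human-review"; // nuanced middle band: queue for a contracted moderator
}

console.log(route({ postId: "42", hateScore: 0.5 })); // "human-review"
```

A setup like this accepts Guo's point directly: because even the best model has an error rate, the design question becomes where to draw the confidence cutoffs, which trades automatic action against human review volume.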
