Facebook’s “AI” (Artificial Intelligence) Censoring Free Speech Endangers More Than the First Amendment

Dear Editor:

As you know, Facebook uses automated algorithms programmed to detect keywords, phrases, and trending vernacular, flagging a member’s post when it is judged to violate FB’s “Community Standards,” all without the aid of a human. The dark side of AI is dangerous to our democracy, and censorship is just one of its ill effects.
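For illustration only, here is a minimal sketch, in Python, of the kind of simple keyword matching described above; the flagged terms and the sample post are hypothetical, and Facebook’s actual systems are proprietary and far more elaborate.

```python
# Toy sketch of keyword-based flagging. The term list and post text
# are hypothetical; real moderation systems are far more complex.

FLAGGED_TERMS = {"example banned phrase", "another trigger word"}  # hypothetical list

def flag_post(post_text: str) -> bool:
    """Return True if the post contains any flagged term (case-insensitive)."""
    text = post_text.lower()
    return any(term in text for term in FLAGGED_TERMS)

if __name__ == "__main__":
    post = "This is a hypothetical post containing another trigger word."
    if flag_post(post):
        print("Post flagged -- no human involved.")
    else:
        print("Post allowed.")
```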
While Artificial Intelligence is a new technological toolkit that opens up advances in science and society, it also brings new infringements on our very freedoms.
But FB’s practice is more than censorship: it combines hardware and software that mimic human thought; however, FB’s interpretation of “human thought” is politically and socially filtered through Facebook’s own view of the world, a specific and narrow looking glass.
There are inherent dangers to AI that Facebook may or may not be aware of, or may be ignoring. Historically, humans used new discoveries like fire for quite a while before we understood their inherent dangers and had to invent fire extinguishers. The invention of the combustion engine gave us automobiles, but then required us to guard against its dangers with brakes, speed limits, seat belts, auto insurance, and a host of other safeguards. When we learned to fly in airplanes, we had to invent seat belts, parachutes, and other devices to keep us from falling to the ground.
Facebook needs to balance the advancement and advantages of AI with genuine human interaction and an impartial treatment of ideologies across the political spectrum.

John Amato