Just so I fully understand this op-ed, let me review what I think it says and wait for others to set me straight. It suggests that LLMs, systems without a soul, a conscience, or a moral compass, should be protected when they provide hallucinated false information, be it politically left or right. Do I have it about right? In an age where malinformation has already put democratic republics on the brink of failure because they have tons of ill-informed voters, Mr. Mchangama wants to let electronic bots say what they wish because they deserve "free speech" just like everyone else. May I point out that LLMs aren't everyone; they are a collection of electronic ones and zeros. Lest we forget, unrestricted free speech brought Germany to ruin, but not until 60 million souls had been extinguished. James Madison could never have envisioned the media landscape of 2026, especially LLMs, when he said, "A democratic republic requires well-informed voters." Uh-oh.
I do not think you're on the mark there. Ignore the title for a moment and look at the rest of the piece: it's all arguing that restricting these chatbots is akin to restricting search engines or information availability. I think the title is just overly clever (and I was concerned about the piece when I read the title).
I hate that this piece was titled "Chatbots Deserve Free Speech Rights, Too" since that's not at all its argument. I suppose it got me to read the piece, so well done? But leaving readers feeling manipulated is not a good way to keep them.
I agree that we should create a list that publicly discloses all AI censorship / jawboning.
There are simple ways that an "AI" company could provide extra information about the amount of data behind the statements the bot makes. For example, the boldness and/or darkness of the text could be increased for parts with ample data behind them, and made thinner/greyer for parts with less support. Or a red background could be used for parts with only thin data support.
They do not do this because such a presentation would make the results more machine-like and break the magic spell they are hoping to create. They are bad actors.
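The presentation described above is straightforward to implement. Here is a minimal sketch, assuming the model exposes a per-span support score between 0.0 and 1.0 (that score, the thresholds, and the styling choices are all illustrative assumptions, not any vendor's actual API):

```python
# Hypothetical sketch: render each span of a bot's answer with font
# weight and grey level tied to a per-span support score (0.0-1.0),
# and flag thinly supported spans with a red background.

def render_span(text, support):
    """Map a support score to inline CSS styling for one text span."""
    weight = 400 + round(300 * support)   # 400 (normal) .. 700 (bold)
    grey = round(128 * (1.0 - support))   # well-supported -> darker text
    style = f"font-weight:{weight};color:rgb({grey},{grey},{grey})"
    if support < 0.3:                     # arbitrary "thin data" threshold
        style += ";background-color:#fdd"
    return f'<span style="{style}">{text}</span>'

def render_answer(spans):
    """spans: list of (text, support) pairs produced by the model."""
    return "".join(render_span(text, support) for text, support in spans)

html = render_answer([
    ("Paris is the capital of France. ", 0.95),  # ample support: bold, dark
    ("It was founded in 250 BC.", 0.15),         # thin support: faint, flagged
])
```

The point is that none of this requires new research: if a system can attach any support estimate to its output, surfacing it visually is a few lines of presentation code.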
As regards free-speech protection for chatbots, I think the question is not clear-cut, since we normally reserve this protection for humans. To take an extreme example: if I were an old-school email spammer and someone developed software that blocked my spam, would I be able to claim free-speech rights for my spamming operation? It seems unlikely.
Of course all government overreach should be logged, but placing chatbots on the same level as humans?
AIs, bots, whatever we want to call them, are not humans; they are corporate creations. They should have no more rights than humans; they should have no more rights than other media; and they should be held accountable when they broadcast malicious lies. Individuals and groups that use them to broadcast malicious lies should also be held accountable. Criticism, yes. Malicious lies, no.
Agree - the editor has since changed the title. Thanks!
Freedom of expression includes the right to receive information across borders, regardless of medium. Therefore, governments should not be free to demand censorship of models, even if it is done in the name of fighting "misinformation." China is a very good example of why AI censorship should be avoided: https://www.lawfaremedia.org/article/china-s-ai-governance-ambitions-and-their-implications-for-free-expression