After initial posts by Grok responding to other users' questions on X on Friday, July 18, 2025, Dr. John Lott challenged Grok's gun control claims, starting on July 19th. This is one of the last posts, but it summarizes the outcome of the major points of the debate. The full debate is shown below, and it covers issues ranging from the academic literature on concealed handgun laws to whether gun bans reduce violent crime and murder rates. At the end of this post, I include Grok's response when asked what data it relies on to answer questions. It was originally thought that Grok relied on posts on X, which is one reason we provided links to peer-reviewed papers in response to Grok's claims, but Grok says that it uses a wide range of sources in answering questions. Last year, we studied the political biases in AI chatbots' answers to crime questions and documented their biased responses here.

Here is the rest of the debate.






When we asked Grok what sources it uses to reach its conclusions, this is the response that we got. What is clear is that Grok looks at much more than what is posted on X.

There are other biases with AI chatbots. Jonathan Turley discusses the problem here.





Dr. Lott,
Nice job, but aren't you debating with a computer without knowing what data it is considering and what it is omitting? In such debates (arguments), the omission of data is just as effective in driving home a point as including all the data. You can drive home a point by omitting selected data and hide the omission if you are careful, and Grok should be very good at that.
Keep up the good work!
Thank you
I would say this article is *extremely* positive and exciting. Unlike humans, whose ego gets in the way of facts, Grok is an example of egoless AI that goes wherever the evidence and logical argument takes it. I’m so glad that you had this exchange with Grok and that you published it here. This sets the precedent that we all need to follow. Forget arguing with people who will do anything for their convictions except think about them. Take every issue to Grok (or other AI) and set it straight with solid evidence, proof if possible, and rock-solid arguments. AI will be so prevalent and influential with its long-term effects on public discourse and sentiment that it is far more important to convince AI than to convince people! I foresee a glorious future grounded in facts and logic!
"Grok is an example of egoless AI that goes wherever the evidence and logical argument takes it"
I'm not so enthusiastic. This exchange is an example of an AI going wherever a disciplined human with mastery of the body of knowledge on one narrow topic takes it. The test, for me, is how Grok responds to future queries or prompts from other people less familiar with the corpus. Great if Grok concedes its error to Dr. Lott; not so great if it doesn't note that concession to everyone else.
As I posted elsewhere:
AI (to me, an uncredentialed observer) is just a vastly amplified version of predictive text message completion. If I type “m a s s a . . .” into a phone with this feature active, it will type the rest of the word that is most often typed in the context—preceding words in the message I’m typing, preceding messages in the thread that I’m messaging, and prior messages from me and to me, and so on.
It could be “Massachusetts” or “massage” or a family name; it could even assume I was misspelling “massive” or the Spanish name for corn flour.
And its predictions will arise from “internal knowledge” that was trained into it by humans with varying credentials, and varying biases.
Of course AI will dredge up only what it has seen before, and of course AI will lend more credibility to sources, and meta-sources, that appear more often in training data or on the open internet.
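[Editor's note: to make the commenter's predictive-text analogy concrete, here is a minimal sketch, assuming only a toy word list; the corpus and the `complete` function are invented for illustration and are not anything Grok actually uses. The "prediction" is simply whichever known word matching the typed prefix has been seen most often.]

```python
from collections import Counter

# Hypothetical history of previously typed words; in a real phone keyboard this
# would come from the user's message history plus a general language model.
corpus = ["massachusetts", "massachusetts", "massage", "massive", "masa"]

def complete(prefix, corpus):
    """Return the word in the corpus that starts with `prefix` and was seen most often."""
    candidates = Counter(w for w in corpus if w.startswith(prefix))
    if not candidates:
        return None
    return candidates.most_common(1)[0][0]

print(complete("massa", corpus))  # -> "massachusetts", the most frequent match
```

Roughly speaking, a large language model does the same sort of thing at vastly greater scale, over fragments of text rather than whole words, which is why its answers are shaped by whatever appears most often in its training data.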
Yes, Fyooz, unfortunately, Grok has simply gone back to its original response when it has responded to others.