Evan DeFilippis and Devin Hughes have been repeating past criticisms of John Lott’s work without ever letting readers know that Lott has already responded to those points. John Lott has written a detailed response, available below, to the first two-thirds of their points in the order that they were made. Responses to other similar attacks that repeat some of their charges are available here and here.
Point by Point Response to Evan DeFilippis and Devin Hughes.
Recently Evan DeFilippis and Devin Hughes claimed that defensive gun use was a myth. Gary Kleck wrote a response where he noted that these authors were merely repeating earlier criticisms and ignored the responses that he and others have made to those critiques.
— Tim Lambert as a source. Professor Jim Purtilo at the University of Maryland put up a post in 2004, updated over the years, showing that Lambert has been caught falsifying evidence on multiple occasions and has otherwise been dishonest. See:
- Tim Lambert types are the reason nobody can trust WP presentations
- History of namecalling in WP talk sections
- Detailed history of edits on WP’s Lott page
- Other points
— Cherry picking surveys on gun ownership.
In an audacious display of cherry-picking, Lott argues that there were “more guns” between 1977 to 1992 by choosing to examine two seemingly arbitrary surveys on gun ownership, and then sloppily applying a formula he devised to correct for survey limitations. Since 1959, however, there have been at least 86 surveys examining gun ownership, and none of them show any clear trend establishing a rise in gun ownership. Differences between surveys appear to be dependent almost entirely on sampling errors, question wordings, and people’s willingness to answer questions honestly.
My paper with Mustard, as well as my book, looked at all the crime data available when those pieces were written, and I updated that data with each successive edition of my book.
— Paper with David Mustard, finished in 1996 and published in 1997: crime data for all the counties and states in the US from 1977 to 1992. The data started in 1977 because that was the first year that the FBI UCR crime data was available at the county level. 1992 was the latest year for which complete data was available when the data set was put together. In all my work, I have used all the data that was available.
— First edition of MGLC 1998: crime data for all the counties and states in the US from 1977 to 1992, as well as up to 1994 for a comparison. Literally hundreds of different factors that could impact crime rates were accounted for.
— Second edition of MGLC 2000: crime data for all the counties, cities, and states in the US from 1977 to 1996.
— Third edition of MGLC: crime data for all the counties and states in the US from 1977 to 2005.
The regressions in those publications account for all the data available (all counties, all cities, all states for all the years the data is available), with no cherry picking, and, following earlier work by William Alan Bartley and Mark Cohen, report all possible combinations of these hundreds of control variables to show that the results are not sensitive to a particular specification.
The only survey discussion in my first two editions of MGLC concerned the 1988 and 1996 voter exit polls, which included a question on gun ownership. The third edition of MGLC updates the data to include the 2004 exit poll. The reason for using those large exit polls is that they can contain up to 32,000 respondents (though in other years it might be only about 3,600), which allows one to break down the data on a state-by-state basis to see how gun ownership is changing across different states. The GSS survey only has 600 to 800 observations at a time every two years. Some other surveys may occasionally have up to 1,200 people, but those samples are just too small to make cross-state comparisons. So I wasn’t looking at these exit polls to check general gun ownership rates for the whole US, but to look at the data for specific states.
However, we know this assertion is factually untenable, based on surveys showing that 5-11% of US adults already carried guns for self-protection before the implementation of concealed carry laws.
The surveys that DeFilippis and Hughes are referring to involve people carrying guns for any reason, including going hunting or simply moving guns between places (see the discussion in MGLC).
It’s extremely unlikely, therefore, for the 1% of the population identified by Lott who obtained concealed carry permits after the passage of “shall-issue” laws to be responsible for all the crime decrease.
Again, I refer to the same discussion from MGLC as it shows that this 1% number is misleading and it also shows a simple numerical example regarding what would be required to get the expected reduction in crime. This is part of a consistent pattern where DeFilippis and Hughes make no attempt to discuss the responses that I have already made on these issues.
On Hood and Neeley — “zip codes with the highest violent crime before Texas passed its concealed carry law had the smallest number of new permits issued per capita.”
I have a long discussion about why purely cross-sectional analysis is unreliable. Regarding: “zip codes with the highest violent crime before Texas passed its concealed carry law had the smallest number of new permits issued per capita.” Well, given that it costs $140 and requires 10 hours of training to get a permit, it isn’t very surprising that poor areas have both high crime rates and low permit rates. As to cherry-picking, even if cross-sectional analysis were useful, the authors still have to explain why they picked one city in the entire US to look at. In any case, I note this paper and respond to it in MGLC.
Note on the Dade county data.
Dade county police records, which cataloged arrest and non-arrests incidents for permit holders in a five-year period, also disproves Lott’s point. This data showed unequivocally that defensive gun use by permit holders is extremely rare. In Dade county, for example, there were only 12 incidents of a concealed carry permit owner encountering a criminal, compared with 100,000 violent crimes occurring in that period. . . .
Anyone who has been following the debate on justifiable police homicides knows that the data is not very reliable. The justifiable homicide data for civilians is even worse.
As Albert Alschuler explains in “Two Guns, Four Guns, Six Guns, More Guns: Does Arming the Public Reduce Crime,” Lott’s work is filled with bizarre results that are inconsistent with established facts in criminology. . . .
Dennis Hennigan writes, “the absence of an effect on robbery does much to destroy the theory that more law-abiding citizens carrying concealed guns in public deter crime.”
My response to this type of point is available here in MGLC.
The criticisms by Frank Zimring and Gordon Hawkins, as well as those by Dan Black and Daniel Nagin, are intertwined here.
Black and Nagin noticed that there were large variations in state-specific estimates for the effect of “shall-issue” laws on crime. For example, Lott’s findings indicated that right-to-carry laws caused “murders to decline in Florida, but increase in West Virginia. Assaults fall in Maine but increase in Pennsylvania.” In addition, “the magnitudes of the estimates are often implausibly large. The parameter estimates that RTC laws increased murders by 105 percent in West Virginia but reduced aggravated assaults by 67 percent in Maine. . . .
Again, DeFilippis and Hughes ignore that I have extensive discussions on this in both MGLC and a 1998 paper published in the Journal of Legal Studies.
1) Note that even after throwing out all counties with populations below 100,000 as well as Florida, Black and Nagin still found statistically significant drops in some violent crime categories. They thus removed about 89 percent of the data in the study. There are so many combinations of county sizes and states that could have been dropped from the sample — for example, why not Georgia or Pennsylvania or Virginia or West Virginia or any of the other six states? Why not drop counties with populations under 50,000? Black and Nagin never really explain the combination that they picked.
2) More importantly, even when they drop counties with fewer than 100,000 people as well as Florida, Black and Nagin still find statistically significant drops in aggravated assaults (significant at the 5% level) and robberies (significant at the 8% level) and no evidence that any type of violent crime increases. Note that they also didn’t report overall violent crime, and the reason is that even with their choices the drop in overall violent crime would have been statistically significant.
3) As to the increase in West Virginia, there was only one county in WV (Kanawha County) with more than 100,000 people in it. What they showed is not that crime increased in WV (it fell overall), but that there was an increase in one type of violent crime in one county in WV.
4) DeFilippis and Hughes continually write about “Florida” being removed from the sample, but it is Florida as well as counties with fewer than 100,000 people.
5) If one is interested in my other responses, I suggest that people read both MGLC and the paper published in the Journal of Legal Studies.
Regarding Ted Goertzel’s comments, DeFilippis and Hughes plagiarized his comments in their discussion of Dan Black and Daniel Nagin. In general their approach is to copy and slightly rewrite other critiques, and then ignore what I have written in response.
DeFilippis and Hughes write:
Within a year, two econometricians, Dan Black and Daniel Nagin validated this concern. By altering Lott’s statistical models with a couple of superficial modeling changes, or by re-running Lott’s own methods on a different grouping of the data, they were able to produce entirely different results.
Compare Goertzel’s original:
Within a year, two determined econometricians, Dan Black and Daniel Nagin (1998) published a study showing that if they changed the statistical model a little bit, or applied it to different segments of the data, Lott and Mustard’s findings disappeared. Black and Nagin found that when Florida was removed from the sample there was “no detectable impact of the right-to-carry laws on the rate of murder and rape.” They concluded that “inference based on the Lott and Mustard model is inappropriate, and their results cannot be used responsibly to formulate public policy.”
This is one time where DeFilippis and Hughes pretend that they are actually linking to what I wrote in response to Goertzel, but instead they misstate what I wrote and link back again to Goertzel. My responses to Goertzel were similar to what I just noted above in response to Black and Nagin.
DeFilippis and Hughes claim “Lott’s response to Goetzl was to shrug him off, insisting that he had enough controls to account for the problem.” But that is not accurate. I pointed out that I was also concerned about the sensitivity of the results to particular specifications. That is why I pointed to papers such as the one by Bartley and Cohen that provided tests of whether the results were indeed sensitive.
As to Ayres and Donohue’s 2003 law review paper, DeFilippis and Hughes are simply wrong about the facts. They write:
“Fortunately, Lott’s data set ended in 1992, permitting researchers to test Lott’s own model with new data. Researchers Ian Ayres, from Yale Law School, and John Donohue, from Stanford Law School, did just this, and examined 14 additional jurisdictions between 1992 and 1996 that adopted concealed carry laws.”
The 2nd edition of MGLC came out in 2000 and, as noted above, it had data through 1996. I provided Ayres and Donohue with my data set, and they added one year to the study, 1997. That single year did not change the results. While Ayres and Donohue also claimed that my research had ended with 1992, anyone who checks the 2nd edition of the book or reads chapter 9 in the third edition will see that I had looked at data from 1977 to 1996.
The reply to Ayres and Donohue in the law review was by Florenz Plassmann and John Whitley. I had helped them, and Whitley notes, “We thank John Lott for his support, comments and discussion.” There were minor data errors in the additional years that they added, 1997 to 2000, but those errors didn’t alter their main results, which dealt with count data. They had accidentally left 180 cells blank out of some 7 million cells. Donohue has himself made much more serious data errors in his own work on this issue. For example, he repeats the data for one county in Alaska 73 times, says that Kansas’ right-to-carry law was passed in 1996 rather than 2006, and made other errors. I did co-author a corrected version of the Plassmann and Whitley paper that fixed the data errors, which is available here. But DeFilippis and Hughes can’t even get straight which paper I co-authored.
In any case, for those who want my response, you can read what I wrote in MGLC (the link only provides part of my discussion).
Again, talk about cherry-picking by DeFilippis and Hughes: there are several ways of responding to the quotes from Kleck and Hemenway.
1) Note that Kleck has also said many positive things about my research. For example, see this quote: “John Lott has done the most extensive, thorough, and sophisticated study we have on the effects of loosening gun control laws. Regardless of whether one agrees with his conclusions, his work is mandatory reading for anyone who is open-minded and serious about the gun control issue. Especially fascinating is his account of the often unscrupulous reactions to his research by gun control advocates, academic critics, and the news media.”
2) I have discussed Kleck’s quote in MGLC (see attached file).
3) There are a lot of prominent academics and people involved in law enforcement who have said positive things about my research. I can list a few here, but I don’t really see the point.
“John Lott documents how far ‘politically correct’ vested interests are willing to go to denigrate anyone who dares disagree with them. Lott has done us all a service by his thorough, thoughtful, scholarly approach to a highly controversial issue.”
— Milton Friedman, Nobel prize winning economist
“John Lott is a scholar’s scholar and a writer’s writer — and this book shows why. That gun ownership might bring social benefits as well as costs is a story we do not often see in the press, and Lott here explains why. With a blend of new data, evidence, and examples, he unpacks the bias against such stories in the media.”
— Mark Ramseyer, Harvard University
“For anyone with an open mind on either side of this subject this book will provide a thorough grounding. It is also likely to be the standard reference on the subject for years to come.”
—Stan Liebowitz, University of Texas at Dallas
“John Lott’s work to uncover the truth about the costs and benefits of guns in America is as valuable as it is provocative. Too much of today’s public debate over gun ownership and laws ignores the empirical evidence. Based on carefully proven facts, Professor Lott shatters the orthodox thinking about guns and debunks the most prominent myths about gun use that dominate the policy debate. For those who are convinced that the truth matters in formulating public policy and for anyone interested in the role of guns in our society, More Guns, Less Crime is must reading.”
—Edwin Meese III, U.S. Attorney General, 1985–88
“More Guns, Less Crime is one of the most important books of our time. It provides thoroughly researched facts on a life-and-death subject that is too often discussed on the basis of unsubstantiated beliefs and hysterical emotions.”
—Thomas Sowell, Stanford University
“Armed with reams of statistics, John Lott has documented many surprising linkages between guns and crime. More Guns, Less Crime demonstrates that what is at stake is not just the right to carry arms but rather our performance in controlling a diverse array of criminal behaviors. Perhaps most disturbing is Lott’s documentation of the role of the media and academic commentators in distorting research findings that they regard as politically incorrect.”
—W. Kip Viscusi, Cogan Professor of Law and Director of the Program on Empirical Legal Studies, Harvard Law School
“Until John Lott came along, the standard research paper on firearms and violence consisted of a longitudinal or cross-sectional study on a small and artfully selected data set with few meaningful statistical controls. Lott’s work, embracing all of the data that are relevant to his analysis, has created a new standard, which future scholarship in this area, in order to be credible, will have to live up to.”
—Dan Polsby, Kirkland & Ellis Professor of Law, Northwestern University
“His empirical analysis sets a standard that will be difficult to match. . . . This has got to be the most extensive empirical study of crime deterrence that has been done to date.”
Up to this point in their list, I have tried to go through each of DeFilippis and Hughes’ claims. What should be clear is that I haven’t skipped points and that I have already answered these claims elsewhere, and the same is true for their other assertions. I would suggest that people get a copy of MGLC for issues up to 2010 and look at my later academic papers at the Social Science Research Network or the Crime Prevention Research Center website. Regarding their attack on “The Vanishing Survey,” they again completely ignore what I have already written on the issue. Without any attempt to address the responses that I have already made to these points, DeFilippis and Hughes’ piece is just a big waste of people’s time.
I will make one final point. DeFilippis and Hughes incorrectly describe the National Research Council report. That report examined seemingly every possible gun law that has been studied by academics, but the panel could not identify a single law that made a statistically significant difference. The panel said the same thing about right-to-carry laws, but unlike all the other laws studied, the discussion of right-to-carry laws was the only one to draw a dissent, by James Q. Wilson, who pointed out that all of the panel’s own regressions found that right-to-carry laws reduced murder rates. In the 15 years prior to that, there had been only one other dissent. Academics who don’t sign on to an NRC report are not invited back to serve on future panels. That creates pressure not to dissent, but it also means that virtually all NRC reports conclude that nothing can be shown to matter.