The Rand Corporation reaches its conclusions on gun control by pointing to reasons that its own survey cannot account for. It completely ignores studies indicating that gun control is harmful.
Time after time, its justifications for excluding studies are simply false. When a study finds no statistically significant evidence that a particular law had an impact, Rand calls the results “uncertain” or “inconclusive.”
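The statistical distinction matters: a coefficient that is estimated precisely and is close to zero is evidence of no meaningful effect, while only a wide, imprecise estimate deserves the label “inconclusive.” A minimal sketch of that difference, using purely hypothetical numbers (none of these figures come from the studies discussed here):

```python
def conf_interval(coef, se, z=1.96):
    """Return an approximate 95% confidence interval for a coefficient."""
    return (coef - z * se, coef + z * se)

# Hypothetical case 1: a precise estimate near zero.
# The interval rules out any large effect in either direction.
precise = conf_interval(coef=0.002, se=0.005)    # roughly (-0.008, 0.012)

# Hypothetical case 2: an imprecise estimate.
# The interval is wide, so the data are genuinely uninformative.
imprecise = conf_interval(coef=0.002, se=0.150)  # roughly (-0.29, 0.30)

# Both are "statistically insignificant," but only the second is
# fairly described as uncertain; the first is evidence of no effect.
print(precise)
print(imprecise)
```

Collapsing both cases into one label, as the critique argues Rand does, discards exactly the information a literature review is supposed to extract.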
The Rand Corporation’s claims are shown in bold. As an initial example, take Rand’s discussion of the third edition of More Guns, Less Crime (University of Chicago Press, 2010).
Rand’s discussion is simply incorrect on many levels. The entire book gives extensive results for murder, rape, robbery, and aggravated assault. Apparently, the authors of the Rand report don’t understand the difference between murder and homicide. Here is an example from Table 10.4 on page 265, which provides detailed results for those different categories, along with their levels of statistical significance.
“These included information on the statistical significance of each coefficient in the model but not for a test comparing post-implementation time intervals with pre-implementation time intervals.”
If you look at Table 10.4, you can see that comparisons of before-and-after trends and their levels of statistical significance are indeed provided.
Strangely, Rand’s survey of existing research ignored virtually all of the dozens of studies that find that right-to-carry reduces violent crime.
The Rand report gives the impression that MGLC only looks at county-level data. But MGLC also does city-level analyses for all cities with populations over 10,000 (pp. 191-194) and state-level analyses (in many places throughout the book).
It ignores the work in MGLC on the effect of concealed handgun laws on suicides.
Again, this is not correct. The coefficients and the descriptive statistics for the endogenous variables are all provided, so it is possible to figure out how a change in the laws affected rates of murders or suicides.
They also ignore that MGLC provides information on child-access prevention laws (e.g., pp. 198-201). Why does Rand completely ignore this research?
This “unfavorable ratio of estimated parameters to observations (approximately one to eight)” is very strange. 1) The important issue is degrees of freedom, not this ratio. 2) Whether the model “may have been overfit” is an empirical question, and the results are fairly similar whether one uses just fixed effects or fixed effects in conjunction with all of the control variables. 3) With up to 1,010 observations in some specifications, this ratio, whatever its actual significance, is between roughly nine and more than ten observations per parameter.
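The arithmetic behind that ratio, and the degrees-of-freedom point, can be made explicit. A minimal sketch with hypothetical parameter counts (the exact number of parameters in the regressions is not given here; only the rough observation count of 1,010 comes from the text above):

```python
def obs_per_parameter(n_obs, n_params):
    """Observations available per estimated parameter."""
    return n_obs / n_params

def degrees_of_freedom(n_obs, n_params):
    """Residual degrees of freedom in an OLS regression."""
    return n_obs - n_params

# With roughly 1,010 observations, a hypothetical parameter count
# near 100 implies about ten observations per parameter, and the
# regression still retains over 900 degrees of freedom.
print(obs_per_parameter(1010, 101))     # 10.0
print(degrees_of_freedom(1010, 101))    # 909
```

The point is that a large residual degrees-of-freedom count, not the raw parameters-to-observations ratio, is what determines how reliably the coefficients can be estimated.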
But there is absolutely no mention of Thomas Marvell’s work (See: Thomas B. Marvell, “The Impact of Banning Juvenile Gun Possession,” Journal of Law and Economics, October 2001). The Journal of Law and Economics is the top law and economics journal, so it is very surprising that Rand ignored this research. Marvell found that the 1994 age requirement was associated with a 5.1 percent increase in the homicide rate, and a 6 percent increase in firearm homicides. Beyond that, there was no real effect on crime rates. Marvell notes that if “juveniles are more vulnerable targets, the result is likely to be more crime, especially violent crimes involving juveniles.”
The Rand report completely ignores the debate over the Ludwig and Cook paper. JAMA (Dec 6, 2000, p. 2718): “Furthermore, disaggregating the authors’ age categories yields results that contradict this hypothesis. The data reveal that the reduced incidence of firearm suicides for persons older than 54 years is primarily affected by the change for the group aged 55 to 64 years; however, this subcategory has the lowest suicide rate for those older than 54 years. The different age groups experienced apparently random increases and decreases in firearm suicides after enactment of the law: the groups aged 35 to 44 years, 45 to 54 years, and older than age 85 years, for instance, all show increases in firearm suicide rates.”
They overlook multiple papers by Lott that account for background checks, including all three refereed editions of More Guns, Less Crime (University of Chicago Press, 1998, 2000, and 2010) and his original paper in the Journal of Legal Studies with David Mustard.
While the Rand report mentions Lott’s research here (“More Guns, Less Crime,” University of Chicago Press, 2010, pp. 326-8), it selectively reports only the result that is biased towards not finding an impact from the ban. It ignores the results that compare before-and-after trends, which show an increase in murder rates after the ban. No explanation is provided for why Rand picked one of Lott’s results over the other, particularly since Lott spends much of his book explaining why looking at simple before-and-after averages is not the best way to examine these estimates.
The Rand report also misstates what Lott studied. He examined both federal and state assault weapon bans from before and after the federal ban. Lott’s work is the only study that Rand cites in its table which addresses the federal assault weapons ban.
It isn’t obvious how the Rand report could find that Lott’s work satisfied its criteria for proper regressions here, yet not include anything from Lott’s work on concealed handgun laws, waiting periods, or stand-your-ground laws. The same regressions produced all of those estimates, and all of those variables were included in the regressions at the same time.
The Rand Corporation overlooks Lott’s peer-reviewed research on this (e.g., Lott’s More Guns, Less Crime).
The Rand report relies on only one study, by Roberts, to test the impact of waiting periods. But Lott (2010), Lott and Mustard (1996), Lott and Whitley (2000), and many other studies examined the impacts of waiting periods.
This topic is not addressed by the Rand report.