I don’t know the answer to that question, but it’s certainly the case that the long-awaited study of Houston’s red-light cameras didn’t give any clear answers about what effect they might have had on the accident rate at intersections where they were installed.
One specialist from a renowned traffic research organization who reviewed the study for the Houston Chronicle said the methodology was “flawed” and had serious “limitations.”
The main problem is a statistical one, said Anne McCartt, senior vice president for research at the Insurance Institute for Highway Safety. The institute has conducted several studies that were published in peer-reviewed traffic research journals.
Because red-light cameras are known to have a spillover effect — meaning that they have been shown to impact the number of accidents at intersections where there are no cameras — robust examinations of camera programs always compare crash data with that in other cities.
It’s what statisticians call a control group. Unless the study authors compare crashes at the 50 intersections where red-light cameras have been installed with crashes at intersections where they have not been installed (preferably in other cities), no conclusions can be drawn from the study.
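To make the control-group idea concrete, here is a minimal sketch of the kind of before-and-after comparison McCartt is describing. The crash counts and names below are entirely made-up illustrations, not figures from the Houston study or any other research:

# Minimal sketch of a control-group (difference-in-differences) comparison.
# All numbers are hypothetical and purely illustrative.

# Annual crashes before and after camera installation at camera intersections,
# and over the same periods at comparison intersections with no cameras.
camera_before, camera_after = 120, 96      # hypothetical camera sites
control_before, control_after = 110, 104   # hypothetical control sites

# Raw before/after change at camera sites
# (this is all a study without a control group can see)
naive_change = (camera_after - camera_before) / camera_before

# Change at control sites captures citywide trends
# (weather, traffic volume, the economy, etc.)
background_change = (control_after - control_before) / control_before

# Netting out the background trend isolates what the cameras may have done
camera_effect = naive_change - background_change

print(f"Naive change at camera sites: {naive_change:+.1%}")
print(f"Background change (controls): {background_change:+.1%}")
print(f"Estimated camera effect:      {camera_effect:+.1%}")

The point of the subtraction is that whatever happened citywide during the study period shows up at the control intersections too, so removing it is what lets you attribute the remainder to the cameras rather than to some broader trend.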
“The design of the study doesn’t allow you to draw a conclusion about the effect of the cameras,” McCartt said. “We believe very strongly based on lots of other good studies that red-light cameras reduce violations and crashes. … But I don’t think this study allows a person to draw a conclusion about the effects of the program.”
Bryan Porter, an associate professor of psychology at Old Dominion University who has conducted red-light camera research, said he believes the study’s authors did the best they could with the data.
“The methods are different, as they admit, which has some weakness, as well as some interesting twists on how cameras can be evaluated,” Porter wrote in an e-mail.
You could say that. All I know at this point is that more data is needed. Maybe the next one will tell us something more clearly.
He added that most research has shown red-light cameras are not revenue generators. Over time, as drivers learn and remember where the cameras are, the programs either break even or cost money.
Somehow, I don’t think that’s going to settle anything, either. I suspect that whatever level of revenue these things ultimately generate, for some people the cameras will always be about the money. Having said that, there is a danger that, as with cigarette and lottery revenues, the city will project a certain amount from the cameras and budget accordingly, then find itself in a squeeze when violations come in lower than expected. That happened in Dallas, and it could certainly happen here. Grits adds his criticism of the study.
My criticisms were actually more of the Chronicle coverage. I called their expert’s statistical criticism of the study “gobbledygook.”
Even if this study is flawed, and frankly I’m not sure that it is, it still contains a lot more probative information (because they gathered data before and after enforcement began at the same intersections) than the one from TXDOT in December that reached the opposite conclusion. Olson’s also wrong that the study is an outlier among other non-industry-generated research on the topic.
Happy New Year, Kuff!