Last week, when I wrote about the anti-red light camera folks turning in their petition signatures, I noted that the Chron story referenced an update to the January 2009 study about the effect of cameras on the collision rate at the monitored intersections. That study reported an overall increase in collisions at all intersections, whether monitored by a red light camera or not, with the monitored intersections showing a smaller increase than the unmonitored ones. This result was both puzzling – how is it that collisions increased in Houston when the data for the state as a whole showed a drop in the collision rate? – and controversial – ZOMG! Red light cameras meant more crashes! – but at least there was to be a followup study, which would hopefully shed some light on the matter.
That study was completed in November of 2009. I was sent a copy of it, which you can see here. The results this time were very different.
In January of 2009, we released a report analyzing the effect of red light cameras at the 50 DARLEP [Digital Automated Red Light Enforcement Program] intersections. The report concluded that red light cameras were mitigating a general increase in collisions at the monitored intersections. We based this conclusion on the fact that collisions occurring on intersection approaches with red light cameras were rising at a significantly slower rate than collisions occurring on approaches without camera monitoring. This conclusion was based on data drawn from a collection of individual incident reports provided by the Houston Police Department (HPD).
In the spring of 2009, the Texas Department of Transportation (TxDOT) released an updated statewide database of collisions digitizing all paper incident reports available. The database is known as the Crash Record Information System (CRIS). In theory, the CRIS data for the 50 DARLEP intersections and the original HPD data should be identical, as they are both based on the same incident reports. However, in a comparison of the two datasets, we found CRIS reported over 250% more collisions during the before-camera period and over 175% more collisions during the after-camera period. From the comparison of CRIS to the HPD data, and after consultation with HPD, we determined the original data in the first report was inaccurate as a result of a substantial undercounting of collisions in both the before- and after-camera periods. We then conducted an analysis similar to the original report, but with the new CRIS data. We compared the rate of collisions before the red light cameras were installed to the rate of collisions after the cameras were installed. Because the cameras were installed on only one approach at each intersection, we separated the data into those approaches that were not monitored by red light cameras and those approaches that were monitored by red light cameras.
The comparison of collisions at monitored and unmonitored approaches leads us to conclude that the Houston red light camera program is reducing collisions at the 50 DARLEP intersections (see Exhibit 1). After the implementation of red light cameras, collisions per month at monitored approaches decreased by 11%. This decline was statistically significant – that is, not due to random variations in the data, with over 90% confidence. The number of collisions per month at unmonitored approaches increased by approximately 5%. This difference from the before-camera period was not, however, statistically significant; the probability that the observed change did not occur due to chance was less than 90%.
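To make the report’s rate comparison concrete, here is a minimal sketch, in Python, of one standard way to test a before/after change in collisions per month: a conditional test for the equality of two Poisson rates. This is not necessarily the method the Rice researchers actually used, and every count and month span below is an invented placeholder, not a figure from the study.

```python
# A minimal sketch of a before/after Poisson rate comparison. NOT the
# study's actual method or data; all numbers are invented placeholders.
from scipy.stats import binomtest

def rate_change_test(before_count, before_months, after_count, after_months):
    """Conditional test for equality of two Poisson rates.

    Under H0 (equal monthly collision rates), the before-period count,
    given the total count, is Binomial(total, before_months / total_months).
    """
    total = before_count + after_count
    p0 = before_months / (before_months + after_months)
    result = binomtest(before_count, total, p0, alternative="two-sided")

    before_rate = before_count / before_months   # collisions per month
    after_rate = after_count / after_months
    pct_change = 100 * (after_rate - before_rate) / before_rate
    return pct_change, result.pvalue

# Hypothetical totals across all monitored approaches, 36 months each side:
change, p = rate_change_test(before_count=450, before_months=36,
                             after_count=400, after_months=36)
print(f"rate change: {change:+.1f}%  (p = {p:.3f})")
# For these made-up counts the decline is about 11% with p just under 0.10,
# i.e. significant at the report's 90% standard but not at 95%.
```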
The main point to understand here is that the original study was done with incomplete data. I had the chance to speak to Drs. Bob Stein and Tim Lomax about this, and they told me that they used HPD’s accident reports for the initial study. These reports were all on paper and came from various HPD locations. It turned out that a sizable number of the reports were not provided at the time because they were in offsite storage facilities, and nobody they were working with knew about that. Stein and Lomax stressed to me that they had no problems with HPD, which cooperated fully and provided all the data it thought it had; there was simply quite a bit more of it than anyone realized.
Anyway, once they had their hands on the fully digitized CRIS data from TxDOT, it became apparent that there had been no increase in accidents; there had simply been a disparity in the number of paper reports available from before and after the camera installations, which made it look like an increase. Doing the study on this complete data set yielded the results above, which are much more in line with the original expectation that there would be fewer collisions at monitored intersections.
Unfortunately, that’s not the end of the story. TxDOT has since announced that there were some issues with the CRIS data, in particular with GPS information. This matters because without confidence in the exact location of a crash, you might classify a collision away from an intersection as being in the intersection, or vice versa. TxDOT will be issuing an updated data set in the next few weeks that will supersede the one on which this study is based. Because of all that, Drs. Stein and Lomax told me that they no longer have any confidence in the reliability of the November 2009 study, and that no conclusions should be drawn from it. Here is the memo expressing their concerns, which was sent to HPD Assistant Chief Tim Oettmeier last week:
We have identified several issues with our revised report dated November 2009. These issues and their potential effects on our analysis are outlined below:
Issues
1. TXDOT advised us that they would be reprocessing existing crash data to correct data errors, append current roadway data, and update crash location information.
2. As we have refined our data processing, we discovered potentially incorrect data that will require further analysis (e.g. JFK/Greens Rd.).
3. The November 2009 report uses a 500 ft. inclusion standard. Upon further review of the literature, we have decided that a 150-200 ft. inclusion standard is appropriate.

Effects
1. Collisions are relatively rare events. Even a small change in the number of collisions can have a significant effect on the results of our analysis. For this reason, we must be sure we are using the “cleanest” data possible. The reprocessing of the Crash Records Information Systems (CRIS) data has the potential to significantly alter the results of the November 2009 report and we believe it is best to withhold judgment until the new TXDOT data is available. We cannot be sure of the reliability of the underlying data in the report.
2. When we collected/processed the CRIS data, there was an error in our geolocation of crashes at the JFK/Greens intersection. This error needs to be corrected and we are planning to do so with the new August data (which will include data through 2009). The error adversely affects the reliability of the report itself.
3. Upon further discussion with transportation experts and additional review of the extant literature, we have discovered that the 500 ft. inclusion standard in the November 2009 report was potentially an overly broad standard for collisions included in the dataset. We erred and are correcting this error in a report to be released soon after the revised CRIS data is available.

When taken individually, a given issue may not be insurmountable. However, the compound nature of the effects prevents us from affirming the reliability of the November 2009 report. Erring on the side of caution, we believe it is best to issue a corrected report once we have an opportunity to utilize updated CRIS data (availability of which is anticipated later this month).
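As an aside, the inclusion-standard issue in item 3 is easy to picture: whether a geocoded crash counts as an intersection collision depends directly on the radius you draw around the intersection. Here is a hypothetical sketch of that kind of distance filter; the coordinates and crash records are invented for illustration and are not drawn from CRIS.

```python
# A hypothetical inclusion-radius filter of the kind the memo describes.
# Coordinates and crash records below are invented, not real CRIS data.
from math import radians, sin, cos, asin, sqrt

FEET_PER_METER = 3.28084

def distance_feet(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance between two points, in feet."""
    earth_radius_m = 6_371_000
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * earth_radius_m * asin(sqrt(a)) * FEET_PER_METER

def crashes_within(crashes, intersection, radius_ft):
    """Keep only crashes within radius_ft of the intersection center."""
    ilat, ilon = intersection
    return [c for c in crashes
            if distance_feet(c["lat"], c["lon"], ilat, ilon) <= radius_ft]

intersection = (29.9500, -95.3300)                      # placeholder location
crashes = [{"id": 1, "lat": 29.9504, "lon": -95.3300},  # roughly 145 ft away
           {"id": 2, "lat": 29.9512, "lon": -95.3300}]  # roughly 440 ft away
print(len(crashes_within(crashes, intersection, 500)))  # 2 crashes counted
print(len(crashes_within(crashes, intersection, 150)))  # 1 crash counted
```

The same two records produce different collision counts under the 500 ft. and 150 ft. standards, which is why the choice of standard can change a study’s results.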
I will report back after I’ve received a copy of the revised study. The main point to take away here is that the original January 2009 study, which is regularly cited by camera opponents as evidence of the cameras’ ineffectiveness, was based on incomplete and inaccurate data, neither of which was known at the time. We should finally have an idea of what the data really tells us after this third study is done.
Two other points of interest. One is that according to Stein and Lomax, theirs is the first study of red light cameras in Texas that utilizes the CRIS data. I hope someone will perform similar studies in other red light camera-enabled cities with this data – once we’re sure it’s as clean as it’s going to get, of course – so we can have a true apples to apples comparison across cities. There’s no indication who did the study cited in the Grits link above or what data they used, so I can’t offer a critique of it. Clearly, it’s a tough issue to wrap your arms around.
Second, I asked Stein and Lomax why it was that I hadn’t seen any references to that November 2009 study before now. They said that was a question for the city – it was their job to produce the study, not to publicize it. I’ll just leave it at that.
This rather reminds me of drowning in a lake that averages 3 feet deep.
In my mind, they need to select a number of intersections and do a “t-bone” versus rear end study.
Seems to me that the question is whether the number of “t-bones” caused by red light runners is reduced more than the number of rear end collisions caused by people slamming on their brakes to make absolutely certain they don’t get a ticket.
Eh, but what do I know?
@Ron in Houston That would be an interesting analysis, but I would take a reduction in T-bone accidents even if there was an increase in rear end collisions, since the rear end collisions are less dangerous.
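For what it’s worth, the trade-off the two comments above describe can be made concrete with a toy severity-weighted comparison. The weights and counts below are pure assumptions for illustration; in particular, the 3-to-1 harm ratio is invented, not taken from any study.

```python
# A toy severity-weighted before/after comparison by crash type.
# Weights and counts are invented placeholders, not real data.
SEVERITY_WEIGHT = {"t-bone": 3.0, "rear-end": 1.0}  # assumed relative harm

def weighted_crashes(counts):
    return sum(SEVERITY_WEIGHT[kind] * n for kind, n in counts.items())

before = {"t-bone": 40, "rear-end": 50}
after = {"t-bone": 30, "rear-end": 60}  # fewer t-bones, more rear-ends

print("weighted before:", weighted_crashes(before))  # 40*3 + 50 = 170
print("weighted after: ", weighted_crashes(after))   # 30*3 + 60 = 150
# Total crashes are flat (90 vs. 90), but the weighted harm falls,
# which is exactly the trade the commenters are weighing.
```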
I need to go read the entire report but my first concern (as a statistician) is why they used a 90% confidence level for their test. Usually the default is 95%. Hopefully they weren’t just trying to tweak the assumptions in order to find any statistically significant result.
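To illustrate the threshold point, here is a quick sketch of the critical values involved; a test statistic between the two cutoffs flips from “significant” to “not significant” depending on which standard you pick.

```python
# How the significance cutoff moves between 90% and 95% confidence.
from scipy.stats import norm

for conf in (0.90, 0.95):
    alpha = 1 - conf
    print(f"{conf:.0%} confidence: |z| must exceed {norm.ppf(1 - alpha / 2):.2f}")
# 90% confidence: |z| must exceed 1.64
# 95% confidence: |z| must exceed 1.96
# A result with, say, z = 1.80 would be "significant" under the report's
# 90% standard but not under the conventional 95% standard.
```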
In general, I’m still unsure what I think about red light cameras. Driving is one of the most dangerous things we all do every day, and better road control is essential. BUT even if a reduction in accidents is provable and substantial, do we still want automated law enforcement? I don’t know.
This shows how complicated any study on this issue really can be. What none of the studies I have seen take into account is one of the biggest factors: traffic volume. We know that across Texas over the same period, whether there are cameras or not, accidents dropped, something like 6%. Nationally, accidents are at 30-year lows. This is primarily due to the poor economy; fewer people are driving to go shopping or out to eat, and they aren’t making as many unnecessary trips. Without accounting for the volume, you cannot make a good comparison of two data sets over time. Even comparing monitored vs. non-monitored intersections over the same time period can be faulty. Why? Because people start avoiding those intersections; they take a different route or cut through parking lots. This is a known phenomenon. When the data is analyzed, it should be as accidents per vehicle mile traveled (see the sketch after the quoted excerpt below). The largest peer-reviewed studies that have reviewed years of data from multiple sources conclude that cameras are associated with higher accidents. University of South Florida Health reviewed several studies:
USF examined five red-light camera studies. It concluded that two were flawed and found that the other three drew the same basic conclusion about cameras at intersections.
“Overall, they have been found to increase crashes and injuries,” Langland-Orban said.
She pointed to a seven-year study by the Virginia Transportation Research Council that showed crashes at intersections with the cameras increased 29 percent.
Another study, by the Urban Transit Institute at North Carolina Agricultural & Technical State University, looked at almost five years’ worth of data. The study concluded that accident rates increased 40 percent at intersections with cameras; injury crashes rose between 40 percent and 50 percent.
http://www2.tbo.com/content/2008/mar/12/na-red-light-cameras-increase-accidents-usf-study-/
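Returning to the vehicle-miles point above: the exposure adjustment being argued for is simple arithmetic. The sketch below uses invented numbers, not real Houston data, just to show how raw crash counts can fall while the per-mile rate rises.

```python
# Crashes per million vehicle-miles traveled (VMT); all numbers invented.
def crashes_per_million_vmt(crashes, daily_volume, approach_miles, days):
    vmt = daily_volume * approach_miles * days
    return crashes / (vmt / 1_000_000)

# Hypothetical approach, before vs. after (a weak economy cuts traffic):
before = crashes_per_million_vmt(crashes=60, daily_volume=30_000,
                                 approach_miles=0.5, days=365)
after = crashes_per_million_vmt(crashes=55, daily_volume=26_000,
                                approach_miles=0.5, days=365)
print(f"before: {before:.2f}  after: {after:.2f}  (per million VMT)")
# Raw counts fell about 8%, but volume fell about 13%, so the per-VMT
# rate actually rose: the confound the commenter describes.
```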
So let me get this straight. These professors are saying “all of our previous studies were wrong, but you should believe our new study.”
Yeah, I’m sure this new study has nothing to do with the fact that Red Light Cameras are about to be on the ballot in Houston…
I’ll bet you 4.9 million dollars, the same amount ATS made from Houston red light cameras last year, that this “new and awesome-r” study will amazingly report a decrease in accidents.
If there’s any doubt as to our safety, we should err on the side of caution. These cameras need to be ripped out.
I am kind of curious about the reduction of the data set to 150-200 feet; I would like to see proof of their justification for that. If I remember correctly, other studies have consistently used 500 feet as the standard. If this is just a way to juke the data to get a desired result, it should be seriously questioned. Otherwise, I think they should do two data sets, one for their new standard and one for the old standard of 500 feet, and disclose both.
First things first: it was the Proponents of the Red Light Cameras that cited the 2009 study as the gold standard. Turns out it was tin.
Now, months before an election that would ban the use of the Red Light Cameras, we are told there’s a new and improved study, and THIS one, we’re pretty sure, will be accurate. Oh, and we’re changing the standards just to make sure the odds are better that we get the desired results.
Study 1 didn’t give the city the results it wanted; Study 2 was fundamentally flawed. So try, try, try again.
Maybe third time will be the charm.
Philip, first of all, the camera opponents are the ones who have been citing that January 2009 study. If you’ve read any news coverage of the camera debate, you know that. Second, if you actually read what I wrote, the November 2009 study did show a statistically significant drop in collisions at camera-monitored intersections, which is exactly what the city would want to see. But because the data they used is being modified by TxDOT, they are throwing it out. For all we know, the next run will be less favorable to the pro-camera position.
Same response to you, Craig. It was TxDOT that said the data was invalid, not the study authors. If the city of Houston were pulling strings here, they’d leave it be after the November 2009 study, because that’s the result they’d want.
So, what were the traffic counts before and after the cameras were installed? One town lost 60,000 drivers after it installed cameras. There are multiple studies that show an increase in accidents with the cameras. Here are 20 reasons to oppose photo radar, including accident data:
http://www.meetup.com/camerafraud/messages/boards/thread/7496696
Charles, I think he was referring to the camera proponents looking at the November 2009 study as the gold standard, not the January study. Their spokesman McGrath has often referred to this study as the “gold standard” on TV and radio. In fact, you can see right on the camera company-created website homepage how much they put into it.
“An independent study from Rice University overwhelmingly confirms the obvious. Intersection Safety Cameras work to make our roads safer by encouraging better driving behavior.
The newly released study of collision data at the 50 City of Houston intersections that are monitored by intersection safety cameras found that the camera program has helped to reduce total collisions by 11 percent.
The same study found that the cameras helped reduce the deadliest side-impact or “t-bone” collisions by 16 percent.
The Rice study also found that overall red-light running was reduced by 30%.
The Rice study found that rear-end collisions were reduced by 35%”
The common theme, Charles, is that camera proponents who now say they find a reduction in accidents earlier were spinning an INCREASE in accidents as “mitigating a general increase in collisions at the monitored intersections.” In other words, whatever the results – accidents go up or down – these researchers claim success for red light cameras. So you’ll excuse me if I don’t take their word as the end all be all when different studies produce contradictory results.
FWIW, the Austin data cited in the post you linked to appears to have come directly from the City of Austin as reported by a local TV station. I’m sure it’s from the local PD, not CRIS; probably the same dataset cited here by the Statesman:
http://gritsforbreakfast.blogspot.com/2010/05/austin-red-light-camera-results-minimal.html
However camera opponents can point to many more studies than the one in Houston. Crashes increased in Virginia:
http://gritsforbreakfast.blogspot.com/2005/01/all-virginia-red-light-camera-studies.html
And in Lubbock:
http://gritsforbreakfast.blogspot.com/2008/02/lubbock-discontinues-red-light-cameras.html
Plus there’s a public backlash brewing. College Station shut them down after a plebiscite. The Dallas News recently reported that “Lawmakers in Maine, Mississippi and Montana banned red-light cameras last year.” Can it really be that all the critics relied on bad data and only proponents have access to the good stuff, or might some of these jurisdictions that shut cameras off after accidents increased actually be making rational choices based on real-world experience?
BTW, lengthening yellow light times by a fraction of a second is FAR more effective than cameras at reducing red-light running and accidents, and it’s completely free.
There’s a new video about DARLEP just in time for the election – what a coincidence.
Here’s the link – http://www.youtube.com/watch?v=Lhe0zEr95Ew