
Bob Stein

On voter confidence

There was one more interesting aspect to that poll of Harris County from last week, and it had to do with how confident voters were that the vote they cast would be counted. This KUHF story goes into that result.

A new KUHF/KHOU poll shows that black voters aren’t as confident as other voters that their vote will be counted accurately.

[…]

Rice University Political Science Professor Bob Stein, who conducted the poll, says confusion and possible anger over voter ID could be fueling the lower level of assurance.

“African-Americans here are actually considerably less confident that their vote will be counted accurately than other African-Americans throughout the country, with the exception of states who’ve had this controversy over photo IDs.”

Stein says the difference between the KUHF/KHOU poll and national polls is the level of confidence African-American voters expressed. While nationally 40-45 percent of black voters are very confident that their vote will be counted accurately, the numbers are different for those polled in Harris County.

“Among African-Americans only about 36% are very confident, compared to 50% white and 44% Hispanic.”

The relevant tables are in the topline data.

Dr. Stein asked me for my feedback on this, and I replied as follows:

Interesting stuff. From a Dem perspective, I would add two things that likely add to the perception of one’s vote not being counted:

1. In my experience, Dems have a much higher level of distrust of electronic voting machines. Some of that is lingering paranoia and conspiracy-mongering from Ohio 2004, and some of it is the very legitimate concern that these machines aren’t terribly secure and could well be compromised without anyone knowing it. The fact that every cycle there seems to be a story about some well-connected Republican having an ownership stake in a company that produces these machines, as is the case this year with Tagg Romney, adds to this level of distrust.

2. Every time something happens that causes a problem with voting, or that results in misinformation about voting, it seems to affect people of color in vastly disproportionate numbers. See the recent debacle with the “dead voter” purge here, and the recent story in Arizona about the wrong date for Election Day being provided in Spanish-language materials. Add in the various official and unofficial efforts to suppress minority voting – voter ID, the King Street Patriots’ “poll watchers”, efforts to curb early voting in Ohio, etc etc etc – and it’s easy to see why some folks feel like their vote is discounted.

Almost as if on cue, we had this story in Friday’s Chron:

State election officials repeatedly and mistakenly matched active longtime Texas voters to deceased strangers across the country – some of whom perished more than a decade ago – in an error-ridden effort to purge dead voters just weeks before the presidential election, according to a Houston Chronicle review of records.

Voters in legislative districts across Texas with heavy concentrations of Hispanics or African-Americans were more often targeted in that flawed purge effort, according to the Chronicle’s analysis of more than 68,000 voters identified as possibly dead.

It’s unclear why so many more matches were generated in some minority legislative districts. One factor may be the popularity of certain surnames in Hispanic and historically black neighborhoods.

That’s as may be, and as noted before there were Anglo voters and known Republicans affected by this as well. But still, there are only so many times that this sort of thing can happen before people stop believing it to be a coincidence or an innocent mistake. Texans for Public Justice argued last week that this was anything but an innocent mistake, as they accused Secretary of State Hope Andrade of deliberately trying to suppress the vote. You can read the report and come to your own conclusions, but again I’m not surprised by the poll numbers. I’m sure there are other reasons I didn’t come up with. Maybe this is an anomaly, maybe it’s a small sample size problem, but it’s worth keeping an eye on, because people who don’t think their vote counts are less likely to vote. What do you think about this?

Obama leads in poll of Harris County

More polling goodness for you.

The poll conducted for KHOU 11 News and KUHF Houston Public Radio indicates Obama leads Romney in Harris County, but not by much. That gives some indication of how election night might go for politicians running for offices down the ballot.

The poll shows the president leading in Harris County with the support of 46 percent of surveyed voters, compared to Romney’s 42 percent. Libertarian Gary Johnson cracked the survey with 2 percent.

In the U.S. Senate race, Democrat Paul Sadler’s 44 percent leads Republican Ted Cruz with 42 percent in Harris County. With a 3.5 percent margin of error, that’s a statistical dead heat in the largest county in Texas.

[…]

Republican crossover voters are helping push Democratic Sheriff Adrian Garcia to 51 percent in this survey, compared to Republican challenger Louis Guthrie’s 32 percent. Another 13 percent were undecided.

On the other hand, many Democrats told pollsters they’re voting for Republican district attorney candidate Mike Anderson, who’s polling at 41 percent. Nonetheless, Democrat Lloyd Oliver is close behind with 35 percent. Another 19 percent are undecided. That number is especially striking because Democratic Party leaders were so embarrassed by Oliver’s candidacy they tried to remove him from the ballot.

“What we’re seeing is a much more significant ticket-splitting among Republicans than Democrats,” said Bob Stein, the Rice University political scientist and KHOU analyst who supervised the poll. “I don’t know if that’s because they’re more bipartisan, or they simply are more capable and more likely to make that choice, which is not easy to do on an e-slate ballot.”

Or maybe Sheriff Garcia has done a better job of making the case for himself than Mike Anderson has. Prof. Stein was kind enough to share the topline data and the poll questions with responses, and I’ll note that there were considerably more “don’t know” answers in the DA race than in the Sheriff’s. Perhaps that’s the difference.

You can also find basic poll data here, though for some odd reason there’s no breakdown of the Senate race on that page. There are also results for the five City of Houston bond proposals and the HCC and HISD bond proposals, all of which have majority support, in some cases by large majorities. There’s no result for the Metro referendum, but I infer from the teaser at the end of this KUHF story on the poll that that result may be released separately. Released by KHOU and KUHF, anyway – if you go back and look at the docs I linked above, you’ll see the Metro referendum result from this poll. It has plurality support, but that makes it the only measure without a majority. Make of that what you will.

For what it’s worth, there was a Zogby poll of the Presidential race in Harris County in 2008, which showed a 7-point lead for Obama over McCain. Oddly, as I look back at it, the story never mentioned the actual numbers, just the margin; the links for the poll data and crosstabs are now broken, so I can’t check them. (The story did say that Rick Noriega had a 47-40 lead over John Cornyn for Senate in Harris County.) A separate poll of county and judicial races showed similar results, though it did correctly call Ed Emmett the leader in the County Judge race. Democrats did win most of those races, and both Obama and Noriega carried Harris County, though by smaller margins than the poll predicted. As I noted at the time, Zogby (the pollster) showed Dems with an eight-point advantage in party ID, which largely explained the poll numbers. This poll shows roughly the same partisan ID numbers, which could mean some Democratic slippage from 2008, or could just be random. As Greg says, what we very likely have here is a swing county where GOTV will make the difference. We’ll know soon enough.

KHOU polls the Mayor’s race

We have our first published poll of the season.

Mayor Annise Parker, leading the city during an era of budget cutbacks and high unemployment, has the lowest approval ratings of any Houston mayor in decades.

That’s the striking headline popping out of an exclusive poll conducted less than a month before the city’s Election Day. The mayor faces only token opposition, but the survey conducted by KHOU 11 News and KUHF Houston Public Radio indicates that a well-financed candidate could have seriously challenged Parker’s bid for re-election to her second term.

“She is down in almost every demographic and geographic area of the city,” said Bob Stein, the Rice University professor who supervised the poll.

The poll indicates fully half of likely Houston voters — 50 percent — rate Parker’s job performance “fair” or “poor,” while 47 percent rate her “good” or “excellent.” That’s an unusually low approval rating for a first-term Houston mayor.

The mayor blames her low ratings mainly on general discontent with the economy. She also points out that fate has dealt her a difficult hand, forcing her to make politically unpopular decisions, like cutting services during an unprecedented budget crisis and imposing water conservation rules during an unprecedented drought.

Putting aside any specific disagreements one may have with the Mayor, I think there’s a lot to that. Every officeholder is less popular in bad times than they would be in good times. I’m not saying Mayor Parker would have Bill White/Bob Lanier levels of popularity, but she wouldn’t be upside down in a stronger economy. I’m sure she’d love to have the chance to be Mayor in some flush years.

As for election numbers, my advice is to reach for the salt shaker:

Perhaps as a result of that discontent, a whopping 50 percent of likely voters say they still haven’t made up their minds how they’ll cast their ballots in the mayor’s race. If the election were held tomorrow, the poll indicates the winner would be “Don’t Know.” Parker wins the support of 37 percent of voters. Her five opponents — little known and little funded — split 11 percent of the vote, but they’re all mired in single digits.

You can see some questions and their occasionally mismatched answers here. The critical numbers:

GENERIC BALLOT                 Frequency   Percent
Vote to reelect Parker         143         19%
Vote for another candidate     118         16%
Don't know                     101         14%
Refused                          3         0.3%
Total                          364         49%

With Names                     Frequency   Percent
Kevin Simms                      6         0.8%
Amanda Ulman                     4         0.6%
Dave Wilson                      6         0.8%
Fernando Herrera                17         2.3%
Annise Parker                  132         17.6%
John 'Jack' O'Connor             7         0.9%
Do not know                    178         23.8%
Refused                          4         0.6%
Total                          353         47.3%

I presume what this means is that they asked about half of the sample the generic question, and for the other half they named names. Parker led the generic sample 39-32, and had 37% on the named ballot; Fernando Herrera was next with a shade under 5%.

The question is how to interpret these numbers. As Greg notes, one way to look at it is to take the generic Parker/not Parker result, which translates to a 55/45 split for the Mayor. This basically assumes that a lot of the “Don’t Know” respondents will stay home, which strikes me as a very reasonable assumption. Another thing to consider is that while “not Parker” got a fairly hefty total, her named opponents together garnered just 11%, barely a third of the generic opposition. If you assume that most of the people who expressed some preference are likely to vote, then the question is what do the people who don’t care for Parker but don’t know any of the opponents do? My guess is some will randomly choose an opponent, some may have heard enough about one of them to make a choice – having someone handing out literature at all of the early voting locations might pay a dividend – and some will simply skip the race. That is to the Mayor’s benefit, and it suggests her actual level of support is higher than the generic re-elect total. If you combine all of the Parker/not Parker totals, you get a 275-158 split for her, which is 63.5% in her favor. Let’s call that the opening over/under line from Vegas.

(Yes, it’s possible that some people who say they support the Mayor are among the unlikely voters, and that this could shift the real percentages the other direction. But then the same might be true for some of the non-supporters of the Mayor, and who’s to say which group is greater? My assumption, as I said above, is that most of those who expressed a preference are likely to vote. I’ve got to assume something.)
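For anyone who wants to check my math, here’s the whole calculation from the table frequencies – a quick sketch in Python, with the numbers taken straight from the topline above:

    # Frequencies from the two tables above.
    generic_parker, generic_other = 143, 118
    named_parker = 132
    named_opponents = 6 + 4 + 6 + 17 + 7   # Simms, Ulman, Wilson, Herrera, O'Connor

    # Generic half-sample, among those who picked a side: the 55/45 split.
    print(round(100 * generic_parker / (generic_parker + generic_other)))   # 55

    # Combining both half-samples: all Parker vs. all not-Parker responses.
    parker = generic_parker + named_parker        # 275
    not_parker = generic_other + named_opponents  # 158
    print(round(100 * parker / (parker + not_parker), 1))                   # 63.5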

Of course, all of this follows from another critical assumption, which I am not prepared to make, that this sample is made up of actual likely voters. Here’s the key question from the poll:

How likely are you to vote in the November City of Houston election? Would you say you are very likely to vote, somewhat likely to vote, or not likely to vote in this November’s election?

                      Frequency   Percent
Very likely to vote   620         83%
Somewhat likely       127         17%
Total                 748         100%

Funny how nobody answered “not likely”, isn’t it? Pollster Bob Stein says in the KHOU story that he “expects about 17 percent of voters to cast ballots, putting the turnout at about 125,000”. I’d say that puts him on the optimistic side of the equation, but regardless of that, how many of these people are really in that 125,000? I don’t have the crosstabs, though I have asked for them, and I don’t have any information about whether a pre-screen was done to narrow the sample down to those who really do tend to vote in odd numbered years. A poll of plain old registered voters is next to meaningless in a low turnout context, as we saw in 2009. For my money, even being a 2009 voter isn’t enough this year. If you were eligible to vote in Houston in 2007 and failed to do so, I don’t consider you a “likely” voter for polling purposes. Turnout in 2007 was 125,856, right in line with Dr. Stein’s projection. Polling any larger sample for anything other than a read on general attitudes is a waste of time, in my opinion. KUHF has more.
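If I were building that screen myself, it would look something like this – a minimal sketch, with the voter file name and column layout entirely made up for illustration:

    import pandas as pd

    # Hypothetical voter file; the file name and columns are invented.
    voters = pd.read_csv("voter_file.csv")

    # My screen: to count as "likely" for a 2011 city race, you must have been
    # eligible to vote in the 2007 city election and actually voted in it.
    likely = voters[(voters["in_houston_2007"] == 1) & (voters["voted_2007"] == 1)]
    print(len(likely))   # should land near 2007's actual turnout of ~125,000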

Accidents down overall in Houston

I know I said I was done litigating the red light camera question, and I am, I swear it, but I can’t let this pass without comment.

Automobile accidents on Houston streets have declined 13 percent in the last five months, mirroring a trend noted in other major Texas cities, according to police and state highway statistics.

The citywide decline was not as large as the 16 percent reduction in accidents at 50 intersections where the city’s red light cameras quit recording violators Nov. 14, shortly after Houstonians passed a referendum to end the program last fall.

“In Houston, we’re seeing the same declines in overall collisions and crashes on roadways and that confirms what we think has happened at red light intersections — that fewer miles driven contributes to fewer collisions,” said Rice University professor Robert Stein, who has conducted studies of Houston’s red light camera system.

However, fatal traffic accidents in Houston have remained steady during the last three years, according to state crash statistics.

Houston police say 23,432 accidents were recorded throughout the city from mid-November until April 15, a decline of 13 percent from the 26,662 crashes that were reported between mid-June and Nov. 14.

After the city’s camera system was shuttered, there were 362 accidents during the next five months at the 50 monitored intersections – a 16 percent reduction from the previous five months.

The difference between thirteen percent and sixteen percent is noise. To put numbers on it: a 16 percent reduction to 362 accidents implies about 431 accidents at the former camera intersections in the previous five-month period. A thirteen percent decline from 431 would have left about 375 accidents, so the gap between a 13 percent drop and a 16 percent drop is a grand total of 13 accidents, spread over five months at 50 intersections. A sixteen percent decline in accidents at a subset of intersections is not remarkable when the overall rate of decline is thirteen percent. It’s noise. There were not two separate stories here. There was one story about how the accident rate in Houston had declined noticeably in the past five months, and that decline was about the same at former red light camera intersections as everywhere else. Bad on the Chronicle for not presenting it that way.
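In code form, for anyone who wants to fiddle with the assumptions:

    after = 362                      # post-shutdown accidents at the 50 intersections
    before = after / (1 - 0.16)      # a 16% drop implies ~431 beforehand
    expected = before * (1 - 0.13)   # ~375 if those sites had merely matched the city
    print(round(before), round(expected), round(expected - after))
    # -> 431 375 13: thirteen accidents, over five months, at 50 intersections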

The revised red light camera study

Last week, when I wrote about the anti-red light camera folks turning in their petition signatures, I noted that the Chron story referenced an update to the January 2009 study about the effect of cameras on the collision rate at the monitored intersections. That study reported an overall increase in collisions at all intersections, whether monitored by a red light camera or not, with the monitored intersections showing a smaller increase than the unmonitored ones. This result was both puzzling – How is it that there was an increase in collisions in Houston when the data for the state as a whole showed a drop in the collision rate? – and controversial – ZOMG! Red light cameras meant more crashes! – but at least there was to be a followup study, which hopefully might shed some light on that.

That study was completed in November of 2009. I was sent a copy of it, which you can see here. The results this time were very different.

In January of 2009, we released a report analyzing the effect of red light cameras at the 50 DARLEP [Digital Automated Red Light Enforcement Program] intersections. The report concluded that red light cameras were mitigating a general increase in collisions at the monitored intersections. We based this conclusion on the fact that collisions occurring on intersection approaches with red light cameras were rising at a significantly slower rate than collisions occurring on approaches without camera monitoring. This conclusion was based on data drawn from a collection of individual incident reports provided by the Houston Police Department (HPD).

In the spring of 2009, the Texas Department of Transportation (TxDOT) released an updated statewide database of collisions digitizing all paper incident reports available. The database is known as the Crash Record Information System (CRIS). In theory, the CRIS data for the 50 DARLEP intersections and the original HPD data should be identical as they are both based on the same incident reports. However, in a comparison of the two datasets, we found CRIS reported over 250% more collisions during the before-camera period and over 175% more collisions during the after-camera period. From the comparison of CRIS to the HPD data and after consultation with HPD, we determined the original data in the first report was inaccurate as a result of a substantial undercounting of collisions in both the before- and after-camera periods. We then conducted an analysis similar to the original report, but with the new CRIS data. We compared the rate of collisions before the red light cameras were installed to the rate of collisions after the cameras were installed. Because the cameras were installed on only one approach at each intersection, we separated the data into those approaches that were not monitored by red light cameras and those approaches that were monitored by red light cameras.

The comparison of collisions at monitored and unmonitored approaches leads us to conclude that the Houston red light camera program is reducing collisions at the 50 DARLEP intersections (see Exhibit 1). After the implementation of red light cameras, collisions per month at monitored approaches decreased by 11%. This decline was statistically significant – that is, not due to random variations in the data, with over 90% confidence. The number of collisions per month at unmonitored approaches increased by approximately 5%. This difference from the before-camera period was not, however, statistically significant; the probability that the observed change did not occur due to chance was less than 90%.

The main point to understand here is that the original study was done with incomplete data. I had the chance to speak to Drs. Bob Stein and Tim Lomax about this, and what they told me was that they used HPD’s accident reports for the initial study. These reports were all on paper, and came from various HPD locations. It turned out that a sizable number of the reports were not provided at the time because they were in offsite storage facilities, and nobody they were working with knew about that. Stein and Lomax stressed to me that they had no problems with HPD: the department cooperated fully and provided all the data it thought it had; there was simply quite a bit more data than anyone realized.

Anyway, once they had their hands on TxDOT’s fully digitized CRIS data, it became apparent that there had been no increase in accidents; there had just been a disparity in the number of paper reports available from before and after the camera installations, which made it look like there had been an increase. Redoing the study on the complete data set yielded the results above, which are much more in line with the original expectation that there would be fewer collisions at monitored intersections.
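As an aside, for readers wondering what “statistically significant with over 90% confidence” cashes out to: one common way to compare before-and-after monthly collision counts is a two-sample t-test, something like the sketch below. The counts here are invented for illustration – this is not the study’s actual data, and not necessarily its exact method:

    from scipy import stats

    # Invented monthly collision counts at monitored approaches (before/after).
    before = [21, 19, 24, 22, 20, 23, 18, 22, 21, 20, 23, 19]
    after  = [19, 17, 20, 18, 19, 16, 20, 18, 17, 19, 18, 17]

    t, p = stats.ttest_ind(before, after)
    print(f"t = {t:.2f}, p = {p:.3f}")
    # "Over 90% confidence" corresponds to p < 0.10 for the observed decline.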

Unfortunately, that’s not the end of the story. TxDOT has since announced that there were some issues with the CRIS data, in particular with GPS information. This matters because without confidence in the exact location of a crash, you might classify a collision away from an intersection as being in the intersection, or vice versa. TxDOT will be issuing an updated data set in the next few weeks that will supersede the one on which this study is based. Because of all that, Drs. Stein and Lomax told me that they no longer have any confidence in the reliability of the November 2009 study, and that no conclusions should be drawn from it. Here is the memo expressing their concerns, which was sent to HPD Assistant Chief Tim Oettmeier last week:

We have identified several issues with our revised report dated November 2009. These issues and their potential effects on our analysis are outlined below:

Issues

1. TXDOT advised us that they would be reprocessing existing crash data to correct data errors, append current roadway data, and update crash location information.
2. As we have refined our data processing, we discovered potentially incorrect data that will require further analysis (e.g. JFK/Greens Rd.).
3. The November 2009 report uses a 500 ft. inclusion standard. Upon further review of the literature, we have decided that a 150-200 ft. inclusion standard is appropriate.

Effects

1. Collisions are relatively rare events. Even a small change in the number of collisions can have a significant effect on the results of our analysis. For this reason, we must be sure we are using the “cleanest” data possible. The reprocessing of the Crash Records Information Systems (CRIS) data has the potential to significantly alter the results of the November 2009 report and we believe it is best to withhold judgment until the new TXDOT data is available. We cannot be sure of the reliability of the underlying data in the report.
2. When we collected/processed the CRIS data, there was an error in our geolocation of crashes at the JFK/Greens intersection. This error needs to be corrected and we are planning to do so with the new August data (which will include data through 2009). The error adversely affects the reliability of the report itself.
3. Upon further discussion with transportation experts and additional review of the extant literature, we have discovered that the 500 ft. inclusion standard in the November 2009 report was potentially an overly broad standard for collisions included in the dataset. We erred and are correcting this error in a report to be released soon after the revised CRIS data is available.

When taken individually, a given issue may not be insurmountable. However, the compound nature of the effects prevents us from affirming the reliability of the November 2009 report. Erring on the side of caution, we believe it is best to issue a corrected report once we have an opportunity to utilize updated CRIS data (availability of which is anticipated later this month).
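To make the “inclusion standard” the memo mentions concrete: every crash record has coordinates, and it counts toward an intersection only if it falls within the chosen radius. Here’s a rough sketch of that filter – the helper names and data layout are mine, not anything from the study:

    from math import radians, sin, cos, asin, sqrt

    EARTH_RADIUS_FT = 20_902_231   # mean Earth radius, in feet
    RADIUS_FT = 200                # the tighter 150-200 ft. standard (was 500 ft.)

    def distance_ft(lat1, lon1, lat2, lon2):
        """Great-circle (haversine) distance between two points, in feet."""
        lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
        a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
        return 2 * EARTH_RADIUS_FT * asin(sqrt(a))

    def at_intersection(crash, intersection):
        """Count a crash toward an intersection only if it's inside the radius."""
        return distance_ft(crash["lat"], crash["lon"],
                           intersection["lat"], intersection["lon"]) <= RADIUS_FT

Tightening that radius from 500 feet to 200 feet can only shrink the set of crashes attributed to each intersection, which is why the memo treats it as a change big enough to warrant a corrected report.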

I will report back after I’ve received a copy of the revised study. The main point to take away here is that the original January 2009 study, which is regularly cited by camera opponents as evidence of their ineffectiveness, was based on incomplete and inaccurate data, neither of which was known at the time. We should finally have an idea of what the data really tells us after this third study is done.

Two other points of interest. One is that according to Stein and Lomax, theirs is the first study of red light cameras in Texas that utilizes the CRIS data. I hope someone will perform similar studies in other red light camera-enabled cities with this data – once we’re sure it’s as clean as it’s going to get, of course – so we can have a true apples to apples comparison across cities. There’s no indication who did the study cited in the Grits link above or what data they used, so I can’t offer a critique of it. Clearly, it’s a tough issue to wrap your arms around.

Second, I asked Stein and Lomax why it was that I hadn’t seen any references to that November 2009 study before now. They said that was a question for the city – it was their job to produce the study, not to publicize it. I’ll just leave it at that.

More on city term limits

Here’s the Chron story about the 2010 term limits survey and recommendations by the Term Limits Commission. Of interest are the immediate prospects for action by City Council.

[Dr. Robert] Stein, a commission member who has polled voters on the question of term limits, said at Tuesday’s meeting that he did not believe [Mayor Annise] Parker or council members want to put any changes to term limits on the ballot this year.

Other potential ballot propositions, as well as an unpredictable anti-incumbent electoral environment in November, make passage of any changes unlikely, he said.

“It’s simply a practical political problem,” Stein said.

Parker said she has not closely monitored the commission’s activities, although she is awaiting its recommendation before making up her mind.

“This commission was created under the previous administration,” the mayor said. “I am not a supporter of term limits, I have never been a supporter of term limits and would like to see term limits at least modified or overturned. But I wait to hear what the commission has to say as to whether council should submit it to the voters. I’m not driving this one way or the other.”

That’s about what I expect. There’s no requirement that any action be taken this November. Frankly, having it as a May election, when there would be nothing else on the local ballot, might be the most sensible course, as it would allow for the least distraction. We’ll see what happens when the Commission finishes up its work and presents its report.

You want more information about term limits?

Of course you do. And I’ve got you covered. Via email from Robert Stein, I give you the following:

– A research paper from 2002, co-authored by Dr. Stein, called “Public Support for Term Limits: Another Look at Conventional Thinking”. It’s a fairly technical overview of research on people’s attitudes towards term limits. The main finding:

We qualify the conventional wisdom that term limits are mostly a Republican issue: Support for term limits is more a function of the incongruence between an individual’s expressed partisanship and the party of their representative than of the individual’s party affiliation. Further, the effect of unsatisfactory representation is strongly related to a voter’s engagement with politics and willingness to monitor political affairs actively.

In other words, if you’re a Democrat with a spotty voting record living in Montgomery County, you probably support the idea of term limits.

– A comparative look at cities with term limits and how their adoption has affected turnover and diversity in their governments.

– Graphical representations of the data in that previous document for minority and female representation on City Council.

– A bunch more links on the city’s Term Limits Review Commission page, including the results of the 2004 term limits survey and the proposed wording for an updated term limits survey.

My thanks to Dr. Stein for sending all this to me. Happy reading!

City to release documents related to red light camera study

They’re being ordered to do so by a judge, but it doesn’t look like they’re particularly bothered by it.

Paul Kubosh and Randall Kallinen filed a lawsuit challenging the city’s refusal to release 208 documents they requested under the Texas Public Information Act, many of them internal city communications and e-mails to and from the camera vendor, relating to last year’s city-sponsored study of the effectiveness of the camera program.

State District Judge Tracy Christopher has ordered the city to release 160 of 208 contested documents, ruling the city legal department presented no evidence they should be withheld under the law’s exceptions for attorney-client privilege or the deliberative process.

[…]

City attorney Arturo Michel said the city likely would not appeal the order, noting that many of the documents had been added as exhibits to motions filed in court.

Okay then. I’m not sure why it took this long to release these documents, given that the study came out nearly a year ago, followed almost immediately by Kubosh and Kallinen’s suit demanding the release of the draft report of the study. The city hasn’t exactly gone to the mat to defend the need to keep these docs secret, so perhaps a certain amount of fuss could have been avoided.

Be that as it may, the idea behind this is apparently to fuel an effort to get a referendum to remove the cameras onto the ballot in 2010. Kubosh and Kallinen think that these docs will have something in them that will expose the program as a fake, or something. And who knows, maybe they’re right, though again it seems to me that if the city were concerned about it, they’d be putting up more of a fight to keep the docs secret. (They may yet appeal the initial ruling, so it’s not a given that they’ll hand them over.) All I know is that Kubosh was sure that the city’s camera program was unconstitutional, and he’s since given up that fight after losing in court.

Let’s assume for a minute that the docs do all get released, and there’s no smoking gun in them, though there are a few bits and pieces that Kubosh and Kallinen seize on to press their case. What are the odds their desired referendum passes? Offhand, I would say they’d have a chance, but I’d make it no better than a coin flip. They’ll have passion on their side, which certainly counts for something, but I can’t quite shake the feeling that their base is mostly Republicans who mostly don’t live in the city of Houston. Paul Kubosh, for example, doesn’t live in the city of Houston, at least according to his voter registration card. If this is a city of Houston referendum, as I presume it will be, he himself would not be eligible to vote on it. I could be wrong, and I’d love to see some polling data on this, just as I’d love to see an update to that original badly-flawed study, but I’m not nearly as sure as they are that there’s gold at the end of this particular rainbow.

Safe Clear reduces wrecks

So says a study commissioned by the city.

Houston’s mandatory towing program has continued to reduce crashes on the city’s freeways, according to a city-commissioned study released Monday.

The study examined the effect of the Safe Clear program from 2005 through 2008. It found there were 120 fewer accidents per month, on average, compared to the baseline year of 2004. The program began in January 2005.

[…]

The new study could not discern if crashes declined because wreckers were no longer racing each other to a scene or because rubbernecking was reduced.

But the study did take into account other influences on the crash rate, such as rainy days, gas prices and the amount of traffic.

“It makes the program look exceptionally effective,” said Bob Stein, a Rice University professor who co-authored the study with Tim Lomax of the Texas Transportation Institute at Texas A&M. (Stein’s wife works for the White administration as a City Council agenda director.)

The study showed a correlation between Safe Clear response times and the number of monthly accidents. The faster towing trucks responded to a call, the fewer accidents on the freeway. For every minute decrease in response time, monthly collisions dropped by 80 on all Houston highways, Stein said.
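That “80 fewer collisions per minute” figure is the kind of slope you’d get from regressing monthly collision totals on average response time. A toy version with invented numbers, just to show the shape of the claim:

    import numpy as np

    # Invented monthly data: average tow response time (minutes) vs. collisions.
    response_min = np.array([9.5, 9.0, 8.6, 8.2, 7.9, 7.5, 7.1, 6.8])
    collisions   = np.array([2180, 2150, 2110, 2090, 2050, 2020, 1990, 1960])

    slope, intercept = np.polyfit(response_min, collisions, 1)
    print(round(slope))   # roughly 80 fewer collisions per minute shaved off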

I don’t have any trouble believing that Safe Clear has been effective. Hell, just not seeing thirty-seven wreckers at every fender bender on the Loop makes it a win in my book. I have to ask, though, was there no one else available to do this study? Stein’s a fine political scientist, but his last traffic-related effort wasn’t so hot. I hope this one at least is a bit less controversial.

UPDATE: Here’s the Rice press release on the study, with charts. Thanks to Joe White for finding this.

And the answer is…more cameras (maybe)

Well, there is some logic to it.

The Houston Police Department is considering changes — possibly even expansion — to its red-light camera program after a city-commissioned study showed that crashes went up at intersections where the devices have been installed.

“What we’re concerned about is safety, safety, safety at these intersections,” said Executive Assistant Chief Timothy Oettmeier, whose command includes the camera system. “We want fewer injuries, we certainly don’t want any death, and we want a reduction in accidents.”

To meet those aims, the department will evaluate over the next few months whether existing cameras might be redeployed to intersections that continue to see a high volume of crashes and red-light running. They also could add to the 70 cameras now placed at 50 intersections around the city. The evaluation of the program and any options for updating it would be presented to City Council by June 30, Oettmeier said.

Critics said such options are not the best response to the controversial study, which the city released last month.

“If you’re putting more cameras at some intersections, what you’re going to do is make the intersections more dangerous,” said Paul Kubosh, an attorney who represents ticketed drivers in court and unsuccessfully sued to end the program. “That’s what’s going to be proven out by this.”

The report, authored by researchers at Rice University and Texas A&M University’s Texas Transportation Institute, showed crashes increased slightly at intersection approaches where cameras had been installed. The number of crashes, however, rose dramatically at unmonitored lanes of those same intersections, leading the study authors to conclude that the cameras had kept collisions lower than they would have been without the devices.

The results led police to look at the data and to determine whether monitoring more than one approach to an intersection was more effective.

One way or the other, we do need to understand what happened at those unmonitored approaches. Maybe the rise in accidents was a fluke, maybe we’re just counting them more accurately now, maybe there was some kind of effect from the monitored approaches, however odd that seems to me. We can’t make a good decision regarding what (if anything) to do about it unless we have a handle on what happened. I hope that’s the top priority, because otherwise we’re just guessing.

Not mentioned in this story is the next phase of the camera study, which I hope is being done with accepted methodology. Given the flaws in the initial study, I don’t think we know anything more about the effects of the cameras in Houston than we did when we started out. Surely the cameras’ critics would be hammering on this if the study had found a decrease in the number of accidents.

Critics claim camera study shenanigans

So what else is new?

The Houston Police Department tried to influence the outcome of a controversial city-commissioned study by changing how crashes at intersections with red-light cameras were counted, according to documents included in a lawsuit.

HPD’s request was refused by the study’s authors, however, who concluded the number of accidents at 50 intersections with the cameras had increased, not decreased as city officials expected, documents say.

Attorneys fighting to end Houston’s 2-year-old red-light camera program seized on the documents — released after an open records lawsuit they filed against the city — as evidence the study was tainted by a purposefully skewed methodology.

“As in other cities, the red-light camera system in Houston is increasing accidents,” said Randall Kallinen, a lawyer who represents ticketed drivers in court. “This is very dangerous for the public, and we must end the red-light camera experiment.”

I just want to point out here that by Kallinen’s logic, if the next batch of data shows a decrease in accidents at these intersections, it must also be the cameras that caused that decrease. You can’t have it both ways.

City officials and Rice University political science Professor Robert Stein, one of the study’s main authors, contend the Houston Police Department’s requests were part of an ordinary back-and-forth about how best to examine the efficacy of red-light cameras and were not a conspiracy to deliver false data.

[…]

Researchers have studied the impact of such cameras for decades, but the results are mixed and inconclusive, according to an analysis of numerous studies conducted by The Cochrane Collaboration, an international organization that evaluates medical and public health research.

The Cochrane analysis found only five studies that used statistically sound methodology to examine data from the U.S., Singapore and Australia.

The result was that red-light cameras usually reduce the number of fatal crashes but don’t necessarily reduce total collisions.

The Houston study’s authors and city officials expected that to be the result here. Instead, the review showed crashes doubled at intersections where at least one camera was installed, although the uptick in collisions happened in the approaching lanes without cameras. At the lanes with cameras, the increase was too slight to be statistically significant, the study’s authors found.

According to an e-mail included in the lawsuit, an HPD official asked Stein in April to rule out accidents if they occurred more than 100 feet from the intersection. Kallinen also said that documents he obtained indicated the department attempted to rule out crashes that did not involve a red-light violation. Either of those steps would be more likely to lead to results showing the cameras reduced crashes, Kallinen said.

Stein, whose involvement has been criticized because his wife works for White, said the study’s other authors rejected HPD’s suggested change because they were using what they believed was the best methodology.

Mayoral spokesman Patrick Trahan said the police had legitimate reasons to consider limiting the crashes that way, as they did not want the study to include collisions that had nothing to do with running red lights or the cameras.

Doesn’t seem like too unreasonable a request to me, but then I haven’t been peddling conspiracy theories about the cameras. Your mileage may vary.

Can I make a simple request here? I know there’s another study going on to gather more data about the cameras and the accident rate in Harris County as a whole. How about we make sure this study uses the statistically sound methodology that the Cochrane folks refer to? Maybe we could all even agree beforehand that if such a methodology were to be used, we’d all accept the results, whatever they are. And finally, maybe we could try to get other locations that have the cameras to do the exact same kind of study, so we can see whether Harris County is getting results similar to theirs. I mean, it could be the case that we’ve just done a lousy job of implementation, and if we’d followed the example set by others we’d get better results. Or perhaps we’ll learn that there are no better results, or that what we got in the first year was a fluke. All I’m saying is, it can’t hurt to have more and better data.

Stein acknowledged that the cameras are not working in Houston as well as he believes they have been shown to work in other cities. The city and critics should be more concerned about why, he said.

“Why are these crashes going up at these intersections?” Stein asked. “Nobody really cares to get at the truth here. Cars are being damaged, people are being injured and a handful of people are dying. … What I want to know is, why they aren’t working in Houston, and what we can do to improve them?”

You know what my suggestion is. And while we’re at it, let’s please release the draft version of this first study. There’s no reason not to, and holding onto it just fans the flames.