Should There Be a 'National Transportation Safety Board' to Investigate Financial Disasters?


The NTSB's report on a terrible bus accident shows why it is no model for financial industry investigations.


Even with this distortion, curbside carriers appear safer. If riding a curbside carrier gives you 70% as much chance of being in an injury accident, even an accident that is 1.4 times as likely to kill you, you are still better off than on a conventional carrier: your accident and injury probabilities are lower, and your chance of being killed is about the same (0.7 × 1.4 ≈ 0.98). By the way, when a statistician sees numbers like these, the first instinct is to look for explanations other than the type of carrier. Generally you either run a safe operation or you don't; if you don't, you get both more accidents and more serious ones. Differences like those in the NTSB data are more likely to be explained by things like urban versus rural routes or different vehicle types.

The data presented a big problem for the NTSB. Congress had called for a denunciation of curbside carriers. Members were already on record decrying the terrible danger of these less-regulated, inexpensive carriers that were taking increasing shares of business away from established companies and government-owned terminals. It would be embarrassing to admit error. Moreover, the NTSB is part of the federal transportation-regulatory complex. It could not suggest that tighter regulation of established carriers not only drove prices up to many times the curbside fares but also made those carriers less safe. On top of all that, established carriers were getting into the curbside business. If they could convince the public and legislators that the competition was dangerous, it would give them major commercial advantages.

How do you twist data that clearly show curbside carriers were safer into a conclusion that they were seven times more dangerous? The NTSB computed a fatalities-per-bus rate for each company, then averaged those rates separately for curbside and conventional carriers. Curbside carriers averaged 1.4 fatal accidents per 100 buses operated over the six-year period; conventional carriers averaged 0.2. There is your seven-times-as-dangerous figure.

How is this consistent with curbside carriers actually being safer? Simple. Remember World Wide Travel, with 15 fatalities. At the time of the accident, it was operating 13 buses. If it had operated the same number over the entire sample period, that would be a rate of 15 / 0.13 ≈ 115 fatalities per 100 buses. Even if none of the 70 other curbside carriers had any fatalities at all, the unweighted average rate would be 115 / 71 ≈ 1.6 fatalities per 100 buses. If the other curbside carriers had the same average rate as conventional carriers, this one accident explains almost the entire difference in fatalities-per-bus rates.
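
To see how far one small carrier can drag an unweighted average, here is a minimal sketch. The 15-fatality, 13-bus line and the 71-carrier count come from the discussion above; every other fleet size and fatality count is invented for illustration, since the NTSB has not released its raw data.

```python
# Minimal sketch of "mean of per-carrier rates" versus a pooled rate.
# The (15 fatalities, 13 buses) line and the 71-carrier count come from the article;
# all other fleet sizes and fatality counts are invented for illustration only.

def pooled_rate(carriers):
    """Total fatalities divided by total buses, expressed per 100 buses."""
    total_fatalities = sum(f for f, b in carriers)
    total_buses = sum(b for f, b in carriers)
    return 100 * total_fatalities / total_buses

def mean_of_rates(carriers):
    """Unweighted average of each carrier's own fatalities-per-100-buses rate."""
    rates = [100 * f / b for f, b in carriers]
    return sum(rates) / len(rates)

# (fatalities, buses operated)
curbside = [(15, 13)] + [(0, 40)] * 70   # one 13-bus carrier with a terrible crash, 70 clean carriers
conventional = [(1, 500)] * 10           # hypothetical large fleets with occasional fatalities

print(round(mean_of_rates(curbside), 1), round(mean_of_rates(conventional), 1))
# -> 1.6 vs 0.2: the "seven times as dangerous" gap, driven almost entirely by one carrier
print(round(pooled_rate(curbside), 2), round(pooled_rate(conventional), 2))
# -> about 0.53 vs 0.2: pooling total fatalities over total buses shrinks the gap dramatically
```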

If this point is not clear, consider a simpler example from baseball. So far this year, Minnesota has the worst pitching in the American League, with a team earned run average of 4.8 (meaning its pitchers have given up an average of 4.8 earned runs per nine innings, excluding "unearned runs," which are blamed on fielding errors), while Tampa Bay has the second best with a team ERA of 3.4.

Suppose Minnesota were regulated by federal baseball authorities with revolving-door employment for regulators, while Tampa Bay was an upstart team outside the regulatory umbrella. How could you show that Minnesota pitching was actually better? You could take the earned run average for each Tampa Bay pitcher. The two least-used pitchers on Tampa Bay are Josh Lueke, who has given up seven earned runs in three and a third innings of work, and Dane De La Rosa, who has given up five earned runs in one inning of work. That makes their ERAs 18.9 and 45, respectively, and pushes the unweighted average of Tampa Bay pitchers' ERAs up to 6.6, compared with a team ERA of 3.4. Although Minnesota pitchers have been worse than their Tampa Bay counterparts at nearly every position, they have no pitchers with ERAs above 10, and the average of their pitchers' ERAs is the same as their team ERA, 4.8.
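
The same trap is easy to reproduce. In the sketch below, Lueke's and De La Rosa's lines are the ones quoted above; the "rest of the staff" is invented so that the pooled team ERA comes out near 3.4, so treat it as an illustration of the arithmetic, not as real statistics.

```python
# ERA = 9 * earned runs / innings pitched. Two lines from the article plus a
# hypothetical "rest of staff" chosen so the pooled team ERA lands near 3.4.

def era(earned_runs, innings):
    return 9 * earned_runs / innings

staff = [
    (7, 10 / 3),       # Josh Lueke: 7 ER in 3 1/3 IP  -> ERA 18.9
    (5, 1.0),          # Dane De La Rosa: 5 ER in 1 IP -> ERA 45.0
] + [(20, 56.0)] * 10  # ten invented pitchers at ERA ~3.2 each

team_era = era(sum(er for er, ip in staff), sum(ip for er, ip in staff))
mean_of_eras = sum(era(er, ip) for er, ip in staff) / len(staff)

print(round(team_era, 1))      # ~3.4: innings-weighted, the figure fans actually use
print(round(mean_of_eras, 1))  # ~8.0: unweighted mean, blown up by two tiny-sample ERAs
```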

It is statistical malpractice to average rates this way instead of taking total fatalities for each type of carrier and dividing by the total number of buses (or total earned runs divided by total innings pitched). However, you can mitigate the damage somewhat by presenting standard errors. As mentioned above, these are provided only graphically in the report (hopefully showing some conscience on the part of some NTSB analyst); the numbers themselves are not given, and they are certainly not mentioned anywhere in the text or press release (suggesting less conscience on the part of editors and managers). The standard error suggests that the fatality rate for curbside carriers is between 0.2 and 200 times the fatality rate for conventional carriers. Reporting that range as "seven times" without qualification is wrong. Not just sloppy or incomplete, but wrong. This is especially true because you know exactly why there is such a large range; it is not random noise or an inherently difficult-to-estimate figure. In non-quantitative language, when the data are analyzed this way, they tell you nothing. The confidence intervals are much bigger than the reported number, meaning there is more uncertainty than information in the analysis.
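
For a rough sense of why an interval computed this way dwarfs the point estimate, here is an illustrative bootstrap over the same invented per-carrier rates used earlier. It is not the NTSB's method or data, just a demonstration that one dominant outlier makes the estimate almost pure noise.

```python
# Bootstrap the "mean of per-carrier rates" for a hypothetical curbside fleet:
# one carrier at ~115 fatalities per 100 buses (World Wide Travel's implied rate)
# and 70 carriers at zero. Purely illustrative; not the NTSB's data or method.

import random

rates = [115.0] + [0.0] * 70   # per-carrier fatality rates per 100 buses

def bootstrap_means(rates, n_samples=10_000, seed=1):
    rng = random.Random(seed)
    means = []
    for _ in range(n_samples):
        resample = [rng.choice(rates) for _ in rates]
        means.append(sum(resample) / len(resample))
    return sorted(means)

means = bootstrap_means(rates)
low = means[int(0.025 * len(means))]
high = means[int(0.975 * len(means))]
print(low, high)   # roughly 0 up to several times the 1.6 point estimate:
                   # the interval carries far more uncertainty than information
```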

The NTSB has refused to make the raw data available. This is malpractice for any statistical analysis, but it is staggering in the case of a government entity set up to disseminate information. It's pretty good evidence that the NTSB knows how bad its analysis is. We're not talking about private or proprietary information; fatal bus crashes are a matter of public record. The only thing the Board is concealing is its estimates of the number of buses for each carrier. Curbside carriers are on double super secret probation, which seems to be precisely the opposite of the NTSB's charter. How can it possibly improve transportation safety to conceal information?

In some ways, even more shocking than the number abuse is the terrible quality of the data. The organization charged with investigating transportation safety had no information on curbside bus safety, even though these companies had grown rapidly to account for the majority of scheduled intercity bus service. The NTSB had to get data from other sources, and it could not obtain or estimate crucial figures like the average number of people on each bus and the average mileage per bus. A company that carries twice as many passengers per bus and runs each bus twice as many miles per month could have four times the fatalities per bus and be just as safe as another company. Less important, but still significant, is that you need data to adjust for urban versus rural routes, long versus short trips, day versus night runs, and other variables that influence accident rates.
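
The exposure arithmetic is easy to make concrete. All of the figures in this sketch are invented; it simply shows why fatalities per bus and fatalities per passenger-mile can tell opposite stories.

```python
# Fatalities per bus is meaningless without exposure. All figures below are invented
# purely to illustrate the arithmetic in the paragraph above.

def per_bus(fatalities, buses):
    return fatalities / buses

def per_passenger_mile(fatalities, buses, passengers_per_bus, miles_per_bus):
    return fatalities / (buses * passengers_per_bus * miles_per_bus)

# Carrier B carries twice the passengers per bus, runs each bus twice the miles,
# and records four times the fatalities per bus.
a = dict(fatalities=1, buses=100, passengers_per_bus=20, miles_per_bus=5_000)
b = dict(fatalities=4, buses=100, passengers_per_bus=40, miles_per_bus=10_000)

print(per_bus(a["fatalities"], a["buses"]), per_bus(b["fatalities"], b["buses"]))
# -> 0.01 vs 0.04: B looks four times as dangerous per bus
print(per_passenger_mile(**a), per_passenger_mile(**b))
# -> 1e-07 vs 1e-07: identical risk per passenger-mile
```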

Another key omission is that the data do not distinguish between bus occupants killed and people killed in other vehicles or as bystanders, and they do not assign fault. If there is a major pile-up involving 20 cars and a bus slightly dents a fender at the tail end, all the fatalities in all vehicles are charged to the bus, the same as if a drunken bus driver had killed the same number of people by driving off a cliff, or a negligently maintained bus had exploded. Since most of the fatalities listed in the study were not bus occupants, this is a major point.

Not only do these basic data problems mean it was irresponsible for the NTSB to issue any report at all on the subject, they also call into question the NTSB's ability to do meaningful investigations. Suppose a bus crash is caused by a driver falling asleep after driving twelve hours with only short breaks. To issue a finding that may be turned into regulations by other agencies and used to cripple an innovative industry, you need information about overall accident rates as a function of driver fatigue, about how new rules would affect the price and convenience of service, and about whether those changes might push some passengers into more dangerous modes of transportation. If you don't have those data, and if the data you do have come from the agencies and companies you were created to be independent of, it's hard to see how you're adding to the discussion.

Getting back to my real subject, imagine how this will play out in a financial disaster. Suppose that curbside venture capitalists begin raising capital for small, local businesses from middle-income residents by making presentations in school auditoriums and church basements. There is a spectacular collapse in which hundreds of investors lose their savings. Congress calls for an investigation into curbside venture capital firms.

The National Financial Safety Board will not have any data on these curbside companies, because it has been focusing all its attention on the large, regulated firms doing mainstream business. So the NFSB does a study and finds that curbside venture capitalists have a 30% better track record than big firms: better average returns to investors, less risk, and more successful businesses, not to mention lower fees. Is anyone naïve enough to believe these facts will get published? It seems more likely to me that the NFSB will find some way to twist the numbers to suit Congress and the established industry, and to make them less embarrassing to federal regulators. The press will report the distortions as fact; no one will read the report and think about the data. The NFSB will act to suppress innovation and competition.

What is a good model for disaster investigations? All the useful data in the NTSB report come from the University of Michigan Transportation Research Institute. UMTRI operates on $13 million per year, versus $98 million for the NTSB. It puts out a wealth of authoritative and useful reports, and maintains valuable data for other researchers (including the NTSB). It also provides extensive educational services. Although it is funded by car manufacturers and the government, and accepts paid consulting work, that does not seem to compromise its independence.

So if you want to argue for more independent investigation of financial problems, consider funding a university research institute. At least you can point to an example that seems to work.
