Atlas 76E spins out of control shortly after liftoff in 1981, a failure linked to a simple error with one of its engines. (credit: USAF)

Launch failures: information flow


In a recent article (see “Launch failures: what’s changed?”, The Space Review, March 11, 2013), there were questions in the comments as to why US Air Force launch failure data was treated in such a restrictive manner. This piece addresses that and related topics.

The Air Force approach

The US Air Force AFR-127 series of regulations that applied over most of the last 50 years dealt with safety, and included very specific instructions as to the investigation of aerospace accidents. However, the Air Force uses the term “mishap” rather than “accident.” The term “accident” implies an unexpected error, and after all, damage to Air Force equipment and injury to personnel may well be due to deliberate action rather than some kind of mistake. As the old joke goes, getting bitten by a rattlesnake is not really an accident, since the snake clearly did it on purpose.

The safety regulation specified very strongly that information gathered as part of a Mishap Investigation had to be held in the strictest confidence. The primary reason was to ensure that people could provide information without fear of punishment or retribution. The idea was that people had to be open, honest, and not worry that their admitted actions could be used against them. The priority was on ensuring safety by identifying causes and corrective actions.

That priority on safety did not mean that individuals could not be subject to disciplinary action as a result of the mishap, but any information used for such adverse action could not be gathered as part of the formal mishap investigation. A separate legal investigation would be required.

Thus, the mishap investigation was conducted behind closed doors, using data and testimony that was impounded and controlled by a specifically designated Mishap Investigation Board. At least some of the data would be permanently impounded, locked up forever. Mishap Board members were cautioned that such data had to be kept close hold, since once it got outside the board’s control it could not be recovered. The chief of the Mishap Board was always someone not assigned to the organization responsible for the mishap, and the rest of the personnel were at the very least not directly involved; it was to be an independent investigation.

The Mishap Investigation itself culminated in a formal report that was physically divided into two sections. One section, contained on the left side of a special standardized folder, showed what happened: a detailed description of the events. The right side of the folder showed why it happened: the mechanisms and conditions that led to the mishap, along with formal findings and recommendations to avoid future mishaps of that type.

The “what happened” information, on the left side of the folder, could be released to the public in a limited fashion; for example, pictures of the crash site might appear in the news media. The “why it happened” section, with its recommendations for corrective action, could never be released.

Distribution of the formal mishap report was strictly limited by the regulation. In addition to the Air Force Inspection and Safety Center, a few others, including the organization responsible for the mishap and Air Force Logistics Command Headquarters, received copies; the total number of copies issued typically was about six or seven. The reports were kept as close hold information by all of the organizations, and if they involved classified information (for example, for a classified space launch) they were subject to further controls as well.

Air Force organizations implemented the corrective actions recommended in the report not through explanations of everything that occurred and the associated analyses, but rather through direction to take specific actions. Even if a private firm knew quite well that the Air Force Mishap Board had concluded that a defect in the equipment the company produced had caused a mishap, it would not receive a letter to that effect, but rather direction to make any required modifications, usually under the terms of a contract. The Aerospace Corporation, the Air Force’s corporate memory for space launch, had the job of determining whether such corrective action had been applied and reported as appropriate.

In 1982, following the completion of a formal Mishap Investigation on the loss of an Atlas mission from Vandenberg AFB, the Launch Vehicles System Program Office (SPO) director instructed me, the only SPO member of the Mishap Board, to brief the maker of the failed rocket engine, Rocketdyne, on the results of the investigation. Rocketdyne had provided extensive technical support to the board and knew full well the cause of the failure. But it only made sense to brief the contractor on the formal conclusions of the Mishap Board and its recommendations.

I called the local safety office and told them what I had been ordered to do. They responded with instructions to meet them at the local Judge Advocate General (JAG) office.

The JAG explained that the authority to keep the Mishap Board information restricted was an extension of executive privilege. Releasing that data required permission from a suitably high level. If we wanted to tell Rocketdyne, we were going to have to call President Ronald Reagan.

The JAG further explained that the Air Force would potentially be financially liable if we briefed Rocketdyne and the company then fired the employee who made the mistake, since that employee had been assured of confidentiality. Of course, Rocketdyne not only knew the truth, but its employee’s actions at the time had been reviewed and approved by both its own quality control and Air Force Quality Assurance personnel. And everyone agreed that the Rocketdyne technician had followed the approved manual, and that the manual did not address the possibility that such a problem could occur. Yet, despite all that, Rocketdyne did not get briefed.

Some months after one Atlas launch failure, I happened to be present at a meeting at General Dynamics that involved what was supposed to be a very minor engineering change to the booster. The available stock of synthetic engine lube oil soon would be used up, and the substitute oil lacked the synthetic’s superior low-temperature properties. So, a new temperature sensor was being installed in an available port in the vehicle’s lube oil tanks to ensure that the oil did not get too cool prior to launch.

It seemed simple enough to GD, and totally uncontroversial, but purely by chance I was present and pointed out that the change violated the Mishap Investigation Board’s recommendation that all such potential leak points be secured with safety wire. Of course, GD was unaware of that requirement, having never been briefed on it, per the existing USAF policy. Safety wiring of the other leak points was done at Vandenberg AFB, so no one at GD’s main plant knew about it.

Aside from the desire for honesty, the possibility of liability associated with a mishap was a big concern, and it soon became an even larger one. With aircraft mishaps, the possibility of injury, death, and damage to private property is substantial. With space launches, such losses are essentially unknown: there are reports of a cow being killed in Cuba in the earliest days, and a V-2 launched from White Sands hit a cemetery in Mexico in the 1940s (and word is they are still finding bodies associated with that disaster). Nonetheless, the same Air Force regulations applied to both space launches and aircraft mishaps.

The ever-increasing litigiousness of American society had long been a factor in the Air Force’s traditional approach to the release of mishap data, but in 1982 it became an even bigger driver. A 60 Minutes piece broadcast in that time frame illustrated the problem. A young Air Force widow attempted to obtain detailed mishap investigation information in order to support a lawsuit against General Dynamics over defects in her late husband’s F-16. In the view of the Air Force, such legal efforts imperiled the Mishap Investigation process, and the more copies of the final mishap reports that were issued, the greater the chance that some legal jurisdiction would succeed in ordering a copy released to support a lawsuit. Once again, the chances of such a lawsuit happening in the case of a space launch failure were essentially nil, but “one size fits all” still ruled the day.

So, orders went out for all copies of formal mishap reports held anywhere outside of the Air Force Inspection and Safety Center to be destroyed. Needless to say, this did not aid in the dissemination of valuable information and made an already inadequate information flow even worse.

After a few years, things did loosen up a bit. In 1985, 1986, and 1987, the US suffered a catastrophic series of launch failures, including two Air Force Titan 34Ds, a NASA Delta 3914, a NASA Atlas Centaur carrying a DoD payload, and the Space Shuttle Challenger. Everyone became highly sensitized to the possibilities of failure, and information flowed far more freely than in the past. In fact, the Air Force Launch Vehicles SPO chief even personally briefed General Dynamics on the two Titan failures; previously the Air Force could not have briefed GD even on its Atlas failures. And the Air Force, NASA, and the industry as a whole became highly sensitized to the problems of badly routed wiring following the Delta failure of May 1986. Of course, the Challenger investigation was conducted in the open (televised, in fact) and the final report was made available to everyone. But as it turned out, all this did not indicate a major change in policy; USAF space launch mishaps would still be investigated behind closed doors.

The big change in Air Force policy would not occur for another decade. On January 17, 1997, a Delta II 7925 carrying an Air Force GPS spacecraft exploded soon after lifting off from SLC-17 at Cape Canaveral Air Force Station. The Air Force proceeded to set up its usual formal Mishap Investigation Board, and a howl of protest went up from private industry.

The majority of Delta payloads scheduled for the following 12 months were commercial. The owners of those payloads were highly distressed over the impact the failure would have on their schedules, and they found the idea that they would never learn exactly what had occurred, or what corrective actions were recommended, totally unacceptable. The Air Force initially agreed to keep the commercial users informed, and almost two years later formally announced that its policy on space launch mishap investigations had been revised to allow the broader release of data.

And among the lessons learned from the 1997 Delta mishap were a number identical to those from the 1986 Titan failure at Vandenberg AFB regarding ways to reduce the potential for damage at the launch pad. They had never been implemented at Cape Canaveral because no one there knew about them. The Air Force relied on the Aerospace Corporation, but Aerospace did not become involved in such issues, which were policy related rather than technical in nature.

NASA

NASA did not have to follow the Air Force’s AFR-127 series of regulations, but the agency had its own problems when it came to implementing fixes.

Unlike the Air Force, where all launch vehicle procurement was handled by one organization, NASA used multiple field centers for launch vehicle procurement. Originally, NASA Goddard procured the Delta booster, NASA Lewis procured the Atlas Centaur, NASA Langley procured the Scout, and NASA Marshall was responsible for the Space Shuttle. There was no central authority or expertise relative to implementing mission assurance corrective actions; NASA had no equivalent of Aerospace Corp. As a result, each NASA center was free to consider or ignore lessons learned.

And ignore them is often what they did. Each center had its own workload and tended to view the other centers as competitors, at best. Problems revealed on one launch system tended to be dismissed as irrelevant to the others. Each NASA center could make its own decision, with no real review of the reasoning behind it.

The Air Force was willing to share data with NASA on its launch failures, to a degree. In fact, it was common for NASA to be invited to have a member on the Air Force Mishap Investigation Board, at least in the case of Atlas vehicles. But that did not mean that NASA felt obligated to implement the Air Force fixes on its own hardware, or, for that matter, on hardware it procured for Air Force use. Just prior to the final Atlas H mission, the Air Force Test Director happened to discover that the lubrication system safety wiring had not been accomplished on any of the five Atlas H vehicles that NASA had procured for Air Force use. The Air Force had been in violation of its own corrective action requirements for years, and no one had even realized it. NASA had chosen not to implement the Air Force fix on its own vehicles, and that same decision had unwittingly applied to the Air Force as well.

In addition, the Air Force was shocked to discover that NASA had countdown management problems not only with the STS-51L mission that resulted in the loss of the Shuttle Challenger, but also with the Atlas Centaur that was “shot down” by a thunderstorm the following year, followed by a nearly chaotic Scout countdown at Vandenberg AFB. Following the Atlas Centaur failure the Air Force had reviewed its own countdown procedures, but it appeared that NASA had done no such thing.

NASA considered the establishment of a truly independent mission assurance review organization following the loss of the Challenger but rejected the idea. The concept was accepted by the agency only after the loss of the shuttle Columbia.

Today: both better and worse

Today, when it comes to launch failure information flow, we have both the best of times and the worst of times.

The Air Force no longer acts as if launch failure information were classified data that has to be closely held. NASA now uses one procurement agency, at Kennedy Space Center, to procure all of its launches, which offers at least the potential for better crossfeed of information between programs. NASA also now has an independent safety agency, and theoretically that should help information flow as well.

But on the other hand, the USAF has announced a policy of not sharing data with anyone—even NASA—unless it is of direct benefit to the Air Force to do so. And, a few years back, when the Aerospace Corporation offered to present a lessons learned summary to NASA, free of charge, it had no takers, even though the briefings were held at a NASA facility.

Commercially, US private firms are usually less than forthcoming about their experiences; in this age of both intense competition and frequent litigation, companies often release less failure information than even the USAF used to.

Internationally, Russia and Ukraine have realized that they need to be far more open about vehicle problems and corrective actions if they wish to compete commercially. As a result, the Proton User’s Guide even has a summary of launches and failure causes. But China remains as inscrutable as ever, Iran’s and North Korea’s experiences are impenetrable except by rumor, and even nations such as Israel, India, and South Korea are hardly open fonts of information.

US Navy submarine officers study the Space Shuttle Challenger failure as an example of what not to do. But neither US aerospace companies nor our institutions of higher learning study failures to teach engineers what can happen.

Time and time again, companies, organizations, and individuals have demonstrated that only experiencing failure first hand, and repeatedly, can cause them to develop the kinds of attitudes required to ensure success. Perhaps the greatest failure of humanity’s space launch efforts is the failure to learn and pass on the information that was gained at such heartbreaking cost.

Information flow for launch failures was, and still is, a significant problem.

