I’d originally planned to write a different email today, but while doing some quick research to make sure I wasn’t being loopy, I found something I wanted to share with you. The excerpt below is one of many recollections by Col. Francis Harold Potter, USAF, that I ran across:
===
In 1957 I was flying B-52’s for the 92 BWH at Fairchild AFB, WA. On Dec 12th of that year, around 1600 hrs, we were waiting our take-off time when a B-52 took off toward the west. A very experienced instructor pilot was on board, also the wing commander. After breaking ground, the aircraft assumed an extremely nose high condition. It climbed steeply to about 500 ft, never recovered, stalled and crashed right wing down. Only the tail gunner, who exited while in the air, escaped. The cause of this unnecessary “accident” was a real “Rube Goldberg” type malfunction. During manufacture the aircraft was assembled with a stabilizer trim motor that was produced with a malfunction in that it would run in the opposite direction of the desired action. To correct for this, the factory people crossed two control wires at a nearby junction box. This made the malfunctioning motor work correctly. On the day of the accident the scheduled plane had a problem with the stabilizer trim motor, so one was cannibalized and installed. Yep, you guessed it, they took the cross wired one and placed it into a correctly wired aircraft. This made it work in reverse. It was not caught on the maintenance checks or the preflight check. (which was later revised to be able to catch such an occurrence) A wing commander, an experienced instructor pilot and crew were needlessly lost. The gunner recovered and continued flying. Why?
===
Up until the early ’90s, B-52s still had a crew of 6, so this was a tragedy for 5 people. And Col. Potter’s point in sharing this story was to recall all the non-combat accidents he’d witnessed over the years and to wonder why the days of the airmen in question ended the way they did, and whether there was anything they could’ve done to get different outcomes.
Now, one would hope that security getting things wrong wouldn’t have quite such fatal consequences, but there are a couple of illustrations of human behavior here that I’d like to highlight. To understand the point of this email and why it’s relevant, though, you first need to understand a little more about the history of the B-52 itself.
If you look at the timeline, the crash happened in 1957. That was only 5 years after the first YB-52 prototype flew, and the plane had “been on the books” since 1945, when Boeing won the original contract for a post-WW2 bomber.
The original specs were quickly overtaken by a shift in technology and requirements: jet engines and the ability to carry nuclear weapons became urgent priorities for the Air Force, so the project was put on hold in 1947. While trying to adapt to the new requirements, Boeing almost lost the contract to Northrop’s flying wing alternative – the one you’ve probably most recently seen in Captain America: The First Avenger (at least the Hollywood version).
A year later, Boeing’s president salvaged the contract, and 4 years later, the first plane flew. In June 1955, the plane officially entered service as the primary US nuclear delivery vehicle 10 years into the Cold War.
Less than a year later, there still weren’t any combat-ready B-52 crews, and the airplane was subject to all kinds of teething and operational problems, so it’s highly likely that the aircraft in the crash above was one of the first 50 produced for operational service.
Around the world, things were getting pretty hot and heavy, and 1957 was only 3 years before Nikita Khrushchev’s infamous shoe-banging incident at the UN.
So, the aircraft’s early days played out against the backdrop of a pressure-cooker environment of delays, cost overruns and escalating fear that the end of the world was just around the corner. Kinda the pinnacle of high-stress crucibles.
I have no idea what a single trim motor for a B-52 cost when it was being built, but the design of the airplane called for a motor that worked a specific way. The one in front of the assembly technicians worked the opposite way. Being the clever people they were, they saw a way they could make the part work and get the airplane off the line on schedule, so that’s what they did.
And in their defense, it’s quite hard to say what an operational delay for that particular aircraft would’ve cost – or was perceived to cost – at the time. But that perception was the thing driving their decision to “make it work,” because the objective was a finished airplane they could fly.
So, all good for some time. Then the faulty motor was moved to another environment, and because the “good enough” solution in one context was the exact opposite of the expected behavior of the overall system, 5 people ended up losing their lives.
Now, what does this tell us that’s relevant to our world?
To me, there are a couple of things.
First, there’s the classic “seems like the right decision on the ground” mindset that’s so trendy in security today. In fact, it’s in one of the quotes I saw somewhere touting the benefits of DevSecOps: “Pushing the decisions down to where you have the most context,” implying that the most “context” exists in operations.
It doesn’t.
Yes, operations needs the ability to make judgement calls and “keep the lights on” kinds of decisions, but those decisions need to be visible to the rest of the team. They need to be reviewed, evaluated and potentially changed based on a greater understanding of the incident they were made to address, so it can be avoided in the future.
And the second is closely related to this, but it highlights the fact that “what works in your environment (system) may not work in mine.” My airplane may have been modified to make the defective part work, so if you give me one that works correctly, I have no idea how to predict the results…
…unless I really understand the nature of both the part and my system as a whole.
Do we?
As the typical cybersecurity team in a global organization, do we have the certainty that we can objectively and intelligently evaluate vendor patches, tools and recommendations that are GENERIC against our specific environment – one the vendor and the majority of the outside world HAVE NEVER SEEN?
As far as I’ve seen, we don’t have a very good track record with this.
So just because you’re under pressure and you think you have a solution for the problem in front of you, it doesn’t mean it’s the right solution.
Or if it is the right decision…right now…
It doesn’t mean it’s the right decision in the long term.
Had the assembly techs communicated what they’d done, and had that message been relayed to the right people, a plan could’ve been put in place that both delivered an immediately operational aircraft and avoided the loss of 5 lives.
And it wasn’t, because nobody knew.
Nobody really knew the true state of the system and the compromises that were made to deliver an objective under intense pressure, high-stress and short timelines.
Again, I’m not saying most security teams face the same potential for loss of life. What I’m trying to do is illustrate the issues of human behavior we need to proactively work against in order to do our jobs of keeping our organizations safe.
So we need better ways than we’re currently using to enable better results across the whole team.
We need better ways to avoid burying problems today that will end up potentially causing bigger problems tomorrow.
And we need a system that makes sure everything is visible, connected and second nature.
Architecture is a key enabler of solving the problem, but architecture alone won’t do it. It needs to be actively used and maintained. And it needs to evolve as the needs of those who use it evolve – organically and naturally.
In the August issue of the paid, subscription-only Security Sanity newsletter, I present exactly such a system. Its 40+ pages include a set of 7 principles, 14 core habits and 3 reusable views of the world that can literally change the way your security team – or you – operate in less than a week.
And, naturally, it’s built on SABSA. But SABSA alone isn’t going to get you there nearly as fast.
If you want it, you have less than 4 days left to subscribe to the newsletter before it goes to the printer. Yes, the subscription costs money. It’s less than $4/day, and every month, you’ll get more practical insights, tips and techniques that you can put to work immediately to improve security in your organization.
But here’s the thing. After Wed, July 31st, when I send it to the printer, the information in the August newsletter that you can get today for $97 is going to cost you quite a lot more. Because I’m now busily in the process of realigning everything I’ve ever done – and everything Archistry does – with the system in the newsletter too.
To me, it’s that important. And it’s that different.
So if you want in, subscribe here with credit card in hand: https://securitysanity.com.
If you don’t, that’s cool too. It probably won’t work for you anyway.
Stay safe,
ast
—
Andrew S. Townley
Archistry Chief Executive
P.S. Here’s something you can do if you liked today’s post: you can sign up for those daily emails that annoying pop-up keeps asking you about. Or, if you want to know more about what you’re going to get if you do and how it works, then just go knock on the front door: https://archistry.com and you’ll get the whole deal.
Or…you can just keep reading the blog, or ignore me and Archistry altogether. I’m good either way.