Published on: 27th Aug 2025

Pentagon 9/11 Recovery: Lessons in Crisis Response & Readiness

On September 11, 2001, the Pentagon was left crippled by fire, water damage, and destroyed infrastructure after the terrorist attack. Days later, armed guards escorted my team and me into the still-smoldering building to restore critical communications for the U.S. Department of Defense.

In this episode, I share that story firsthand — what it was like to step into the epicenter of national crisis and lead a forensic recovery team tasked with bringing the Pentagon back online.

You’ll learn:

  • 🔎 What really happened inside the Pentagon’s network after 9/11
  • The triage process for restoring thousands of servers and critical links under extreme pressure
  • 🧠 Why readiness beats reaction — and why lessons learned matter more than blame
  • 🏛️ Leadership under fire, featuring insights from Col. David Wills (U.S. CENTCOM & Joint Chiefs of Staff)
  • 📑 Best practices for disaster response, from documenting your infrastructure to building resilient “tiger teams”
  • 💡 How today’s organizations can apply these lessons to avoid catastrophic downtime — from data centers to Wall Street trading floors

This is more than a war story — it’s a framework for crisis management, cybersecurity resilience, and business continuity. Whether you’re a technologist, executive, or leader of any kind, the lessons from 9/11 remain vital today.

👉 Key takeaway: It’s always more fun to be ready. Preparation, documentation, and collaboration can turn disaster into an opportunity for growth.

Transcript
Bill Alderson: Hi, I'm Bill Alderson, coming to you from Austin, Texas, right here in the heart of the country. I just have one message for you: it's more fun to be ready. When disaster strikes, it's really good to know that you are ready to respond. If disaster strikes and you're not quite ready, it's not the time to be judgmental. Just move on, and recover well and recover fast. Lessons learned: make sure that you record all the lessons learned while you're going through, so you can make a profit on the disaster that's occurring.

Alright, today we're going to talk about the 9/11 disaster. Yes, I was at home on a Sunday afternoon, just after 9/11, and I got a call from a Pentagon general on my cell phone. Amazing! Here we are, glued to our television sets, watching what's happening around the world after the 9/11 events and the disaster that occurred to our country. And I get a phone call. I'm ready. It's awesome. The Pentagon general asked if our team could come in and help them recover communications at the Pentagon. And it's good to be ready. We responded.

We jumped on planes, made our way to the Pentagon, and got escorted in, armed guards surrounding the Pentagon. The building was still smoldering at the time, there was water damage everywhere, they had moved hundreds of servers, and a lot of stuff just was not working. Key links and key capabilities were missing. The Pentagon was having some trouble. We did get a chance to go in, and it was our honor to respond at this time. But I will make one thing clear: we were ready. We are forensic analysts, network forensics people, and we know best practices. And we have learned from all of the things that we've had to troubleshoot over the years. So here we are, in the Pentagon, recovering communications. Here we go. Let's take a look at this.

First of all, I've got a number of really cool exhibits that you might not have seen before, about where the plane hit and how it hit. I'm going to try and rush through some of these things, because the key points are what I want to focus on. But before that, I want to make sure you know that we are here not just to tell our story; our primary, long-term goal is to tell your story. We want to find out what you, as a planner, a responder to emergencies, or part of a team, learned from these incidents or other disaster recovery incidents, so we can pull out those case studies of what you've learned, apply them, and turn them into best practices that you can implement. Because guess what? It's more fun to be ready.

We're going to talk a little bit about what happened at the Pentagon, but before I do, I want you to understand that in this broadcast I introduce you to some of my resources, my friends, my peeps in the industry, who help us know more and understand more, and who are resources to us when we're stuck or disaster strikes. We have a bunch of resources, so we can reach out and get advice from very capable people and organizations around the world.

Now, the first person I want to talk about just keynoted our conference, the Austin Cyber Show. He talked about how hope is not a plan. Yeah, when the balloon goes up, it's too late. You have to act quickly, and you have to have been prepared. Colonel David Wills was the chief networker at U.S. CENTCOM. If you understand the military environment: in his 35-to-40-minute address during our keynote, Dave talks about how the military is divided up into different parts of the world and different types of combatant commands. So you do understand U.S. CENTCOM: they take care of the Central Asia area, and they took care of both the Iraq and the Afghanistan wars. This guy was the chief engineer over all the networks that went in to support both of those war efforts. After he did that, he went directly to the Joint Chiefs of Staff at the Pentagon, and he took care of that network for the 4,000-plus people who worked for the Joint Chiefs of Staff.

Now, you have the Army, Air Force, Navy, and Marines; those services have specific jobs, and then you have the Joint Commands. The Joint Commands draw on and use resources from all the different parts of the military. That's why you have an Army officer like Colonel David Wills taking care of networks at CENTCOM, at the Joint Chiefs, and at U.S. Strategic Command; those are all Joint Commands. Those joint commands are over the entire military, or really coordinate for the entire military. When you are in a joint command, there's Army, Air Force, Navy, Marines, every part of the military. And I wouldn't be doing justice not to mention the Coast Guard, who takes care of our homeland.

I'm going to introduce you to David Wills in a little bit, and he's going to address you for a minute or so with a video that we produced during the keynote. Then, in the show notes, I'll tell you where you can go and listen to Dave for 30 minutes, talking about a number of different things: the wars in Iraq, the Joint Chiefs of Staff, and US STRATCOM. Now, if you don't know what US STRATCOM is, it's very important: they take care of all US government nuclear systems development and deployment. So those are the guys making sure that we are ready in some very important ways.

I hope you have a cup of coffee or a beverage to enjoy while you're listening to this. Maybe you're driving on your way to work or on your way home, or you've chosen to show this to your staff, to your team, as a team-building exercise. Whatever it is, we're going to bring some very cogent information to you, and we really hope that you follow us, work with us, and participate by bringing us your stories as we go through.

Now, the second person I'm going to introduce may not need much of an introduction. His name is Gary Hayslip. He and his co-authors wrote a book called The Executive Primer; it's the executive's guide to security. They've written multiple books, but in this particular one they write about how to work with your chief information security officer: how to interact as a board, as a leader, as a company officer, and also as a subordinate; how to work with, get along with, and get the most out of your relationship with your chief information security officer. For both of the people I'm introducing today, I will provide a link to a much longer version of their story and their information afterward. For now, I'll just pop in a little tidbit to help you understand who they are.

Now, when I went to the Pentagon with five of my team members, we stopped everything, obviously, and went. A year later, on the anniversary of the Pentagon disaster and the 9/11 disaster, all the news networks did pieces on who responded, and we were chosen in the ABC News Sacramento market to be interviewed. I'm a pilot, and these guys wanted to see what I did in live action, so we flew from Sacramento to some of my customer environments. Yes, I couldn't believe it, but the whole ABC News team jumped in my Bonanza, and we flew down to some of our customers and did some recordings. A lot of fun. Anyway, I'm going to play that video for you. It's pretty short, just a couple of minutes, and they did a really good job of telling the story. I hope you enjoy it.

News anchor: Well, for one high-tech company here in the Valley, the events of 9/11 brought the greatest change ever in its history: a call to service that led them right into the ruins of the Pentagon. And the job that they did there helped speed the recovery at the nerve center of the U.S. military, and helped get the war on terrorism up and running. Dave Marquis reports.

Bill Alderson: Goosebumps, the hair standing on end, uh, you know, at what my country's about to ask me to do.

Dave Marquis: For Bill Alderson, challenges usually come without warning. But the Sunday afternoon call from a Pentagon general still came as a shock.

Bill Alderson: "We need the best company in the world at doing critical problem resolution." And he says, "Everyone's told us that you're the company."

Dave Marquis: When Flight 77 hit the Pentagon, much of the damage came at the heart of the U.S. Army's computer network, and the toll on human lives was far worse.

Bill Alderson: One of the most tragic things that happened: the gentleman who was in charge of the Army's part of this network, the airplane apparently flew through his window. So they lost many critical personnel.

Voice at the scene: Clear!

Dave Marquis: The next morning, Alderson and five top engineers were on their way to Washington. They will never forget it. Two-fifths of the Pentagon was gone. Computers, servers, an entire network had been shattered, its remains reassembled in another part of the building. But after 11 days, it was barely working. The Pentagon could hardly talk to itself.

Bill Alderson: You know, those are the sort of moments that you prepare for all of your life.

Dave Marquis: Alderson and his engineers went to work, searching for bottlenecks and broken connections in a maze of systems whose online documentation was mostly missing.

Bill Alderson: You have the Internet, firewalls, routers, VPNs, VLANs, switches.

Dave Marquis: The company is to computer networks what a forensics expert is to a murder case: trying to decipher clues that will solve a mystery others have given up on.

Bill Alderson: I basically try to get a three-dimensional view of the technology. I get into these systems and try and figure out how they're working.

Dave Marquis: Like others at the Pentagon, he and his engineers were working under extreme pressure. They had to get up every day and decide to move themselves into harm's way, to go back to that building, which could still be a target. His team quickly began finding the bottlenecks.

Bill Alderson: We did an optimization here, increased it, and then we found another problem and increased it.

Dave Marquis: One important data link soon improved by six times, and within days the system was back up and running near capacity. To Alderson and others at the Pentagon, getting things running normally was the best way to answer back.

Bill Alderson: We should be moving on with life as usual, or even more so, uh, in the face of danger. That's what Americans are about.

Dave Marquis: He and his company are ready for the next call. Until then, Alderson believes answering the threat of terror means living as we have always lived.

Bill Alderson: Our retaliation is going out and doing what we always do. And that's the best retaliation, and that's how we're gonna overcome.

Dave Marquis: In Folsom, Dave Marquis, News 10.

News anchor: And they do it very well. And by the way, that next call did come. Bill Alderson and his team recently returned from another troubleshooting trip to the Pentagon, and they're ready to go back again when they're needed. Great work from Bill Alderson and his team.

Bill Alderson: Okay, real quick, because it's a new podcast, I want to introduce myself a little bit. I do publications; I write reports, like the SolarWinds report. You've probably heard about the SolarWinds breach. In that report I have color diagrams of how the breach occurred, each step, what I call the 11 evading steps: how we as victims got caught in it, and how we can gain lessons learned from that type of event so that we don't have that occurrence again. I've also written for publications and done trade shows; I actually ran the Forensics Day events at NetWorld+Interop for a number of years. So we got involved, we know a lot of folks, and we started training thousands of people in computer network diagnostics and computer network forensics. We ended up with the de facto leadership in that field: we trained thousands of people and created a certification program called Certified NetAnalyst, through which we certified over 3,500 of the top security and forensics people in the world. Deep packet inspection: absolutely understanding the technology from the client to the server, the application, all points between, all the security components between, and how the protocols and systems work to deliver that information. I wrote a bunch of material called On the Wire, because that's where my focus has been. I'm also very involved with the Security Institute and the ISSA organization. Okay, that's a little bit about me.

Let's move on to some understanding of data crisis. That's been my focus. It doesn't matter what kind of disaster you have; typically it involves some data, or some sort of problem getting access to data, like the 9/11 problem. My last episode was about the denial of service attack on the U.S. stock markets, and how we addressed that particular problem very successfully: we brought all the U.S. stock markets back up after they were almost completely down because of a distributed denial of service. This is really where the rubber meets the road. You'll learn a lot, at an executive level, at a board level, and also at a technology level. It's a lot of fun.

In the future we'll be doing additional stories about events we have responded to. One of the things I like to tease people with is: how important is this? Let's take a look at Facebook. On October 4th, 2021, Facebook's network, Mark Zuckerberg's network, went down for about four to six hours. During that time, they lost about 5 percent of their stock value, which was about 25 to 50 billion dollars.
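As a quick back-of-the-envelope check on that range (a minimal sketch; the market-cap figures below are illustrative assumptions, only the 5 percent figure comes from the episode):

```python
# 5% of market cap, for a few assumed market-cap levels (in billions of
# dollars). Facebook traded in roughly this neighborhood in late 2021.
for market_cap_b in (500, 900, 1000):
    print(f"5% of a ${market_cap_b}B market cap = ${0.05 * market_cap_b:.0f}B")
```

A 5 percent move on a company of that size lands squarely in the 25-to-50-billion-dollar window quoted above.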

Now, if you want to talk about ROI: I have the exact best practice, from lessons learned long ago at AOL, America Online, a company you might not remember, that basically brought us the large-scale commercial Internet that the average person could get onto. I learned some things troubleshooting that environment, and if Facebook had applied those same best practices, they wouldn't have had that downtime. That's very powerful. So we're not talking about yesteryear; we're talking about right now, and the potential for saving billions of dollars (yes, that's billions with a B) and lowering the time it takes to recover from a communications disaster.

So that's what I've been doing all my career. I feel a little bit like Forrest Gump, who just ends up in places he never thought he would be. Here I am, sharing these lessons learned with you, helping you prepare for the potential of disaster by gathering all of those lessons and helping you impute them into your organization. I'm a good friend in time of need, I really love that relationship, and I always tell everybody: it's more fun to be ready.

Now, I do talk a little bit about the disaster recovery timeline, and the fact that you can make a disaster an opportunity for growth. If one befalls you, whether it's a security incident or something else, the attitude to have is: how can I make this an opportunity for growth? It's not the time to look back. It's not the time to flog all of your people. It's a time to learn and to gain an opportunity for growth. So you start out with this timeline. I'll talk about it in future sessions, but generally you can see: you have known risks; those known risks can have a prodromal build, which can then end up as an acute and then chronic crisis or disaster, which you then need to triage. You need to minimize and operate, you need to diagnose the problems, you need to mitigate the problems, and then you need to recover, and to recover rapidly, and to recover well.

So my tip for you is to make sure you capture the lessons learned. Where did I learn that? I learned it over on the right: you see the discovery-to-recovery teams, the critical problem resolution team. I call that a tiger team, or a CPR team: you go in and pull the best people from all the various organizations and all the various disciplines (physical security, data security, all the different aspects of your business), and you form a team from all the best people, and that team addresses the problem. If you're prepared and you have that team ready to go, it's much better. A lot of times you don't know what kind of disaster you're going to have, obviously, but you need to have a few people lined up so that if disaster strikes, you can take care of it. Also, you can go back and look at lessons learned and make certain that you have the systems and the communications in place. Training your team for this sort of thing by running some scenarios is like having a preview of what's going to happen, and those scenarios are key to helping you learn what to do, learn where you're not prepared, and then prepare better.

Now, I've been known for what we call peeling the onion. Every time you peel back a problem, it exposes yet another problem; then you have to troubleshoot that problem, and then there's another problem. Finding root cause is basically about assuming that there are multiple problems in every situation, and that you're not going to have one magic bullet. I've learned that by following the wrong things and saying, "Oh, eureka!" No. I'm very careful about coming to a conclusion too quickly. We have to be in a mindset of iterate, analyze, diagnose, fix the problem, and move on to the next problem. So you need a system to make certain that you have a philosophy and an understanding that recovery is an iterative task.

There are a lot of different things happening, so you want to make sure you record them, identify the lessons learned, and build them into best practices. And guess what? The ultimate in your credibility as a professional disaster recovery and data professional is crisis avoidance. If you don't have a crisis, because you prepared and your best practices prevented it, that is the very ultimate in credibility: not having the problem at all.

The fingerprint of mission critical: every company, every organization is different. I don't care what you say. It used to be that we called in IBM and they took care of everything computing-wise. One vendor, one phone call, one belly button to talk to. That changed with the advent of the PC, computer networks, distributed computing, the promise of distributed computing, and here we are. But how did we get here? That's your fingerprint, the DNA of your enterprise. You cannot take what company A, B, or C did and simply apply their formula. It just doesn't work. And if you think that's what you need to do, that's probably why you go through a new CIO, and then something happens, then you get another CIO, then another CISO, and you keep flipping. The problem really is that your mission-critical enterprise is unique to your organization. You need to study yourself, like Sun Tzu in The Art of War: know your enemy, absolutely, but know yourself better. You have to document your systems so that you can train all of your people to be ready for a disaster when it happens.

So that's my preamble, to get to this slide that says "Slup." If you take your lessons learned from other people: oh, it's much better to learn the lessons that other people experienced, and then apply them to your situation, so that you don't have to experience them yourself. That's what this slide is about.

This slide talks about best practice amplification. Your organization takes those high-fidelity, low-noise inputs and amplifies them through your leadership and executive functions, to impute and apply those best practices so that you get tangible results. Now, you may be spending money like a drunken sailor on products and that sort of thing, continually overrunning your budgets. Not a good thing. Yes, you do need a significant budget to run these kinds of programs, but the best things to do are the essentials, the fundamentals; lessons learned and best practices are the best fundamentals, and they are free. Yeah. What a concept. Good system management and fundamentals are key to being prepared. So make sure you find those key lessons, amplify them into your organization, and receive the tangible results.

Now, if you're a large organization, you might need some help: McKinsey, Boston Consulting Group, Bain, Deloitte, Booz, you name it, Accenture, GDIT. Somebody may need to help you impute those best practices. But taking those free best practices and making sure they are well integrated and imputed into your people and your processes is going to help you recover much faster. And you're going to save a buck, because a lot of times the fundamentals are what weren't done. Yeah, you've got all this esoteric software, esoteric systems, artificial intelligence out the wazoo, but what happens? You still have the problem of making certain the fundamentals are taken care of. And that's one of the key things I'm here to help you learn and understand, and then build out those best practices.

Okay. Disaster.stream: that's the site I use to talk about these disaster recovery responder stories. You can go to disaster.stream to see additional information, including those videos I told you about from my friends and associates in the industry that you can learn from.

So here I've got a collection, and I've got blowups of these in subsequent slides that I'm going to go over, but I just want to tell you what's coming. First of all, I'm going to go over the layout of the organizations inside the Pentagon that got hit by the aircraft as it came in. Down here, you'll see where the aircraft came in and hit the building, right there, plus some of the other things, like the heliport, just to help you understand the big picture of what happened.

These are some stills from actual video captured by cameras at the Pentagon. You can see the aircraft coming in, zoomed here, and then, boom, you see when it hit. So for all those folks who might be non-believers that the event actually occurred: yeah, it occurred, and here's a little bit of proof. Here's a bigger picture of the approach as the aircraft came in and hit.

To understand a little of the background: you may have heard that the Pentagon had been undergoing renovations, and it had just finished this part. They spent a lot of money on construction, on new fortifications and other such things. So the fact that the aircraft hit an area that had just been renovated was actually serendipitous. Yes, there were a lot of lives caught up in this particular part of the 9/11 disaster, and people were killed. However, it could have been much worse, because they had just finished the renovations, and people were only starting to move back into these new office areas. There weren't as many people there that day, because they were just starting to move back in after the renovation. Okay, I hope that helps you understand a little bit more about that.

Now, here is the track of the aircraft in through the organization. You can see that it hit square into the Army's part of the Pentagon, in this particular area. It knocked out a number of key people and systems. You have to keep in mind that sometimes, when you're doing disaster recovery, you're not going to have your entire team. Your team needs to be trained to lose a few people here and there, and to figure out how to backfill those positions if a particular disaster occurs.

Also, like I was mentioning, if it had hit somewhere else, it might have taken out several key single-point-of-failure communication points. The ingress and egress locations for data and telecommunications were affected, but not nearly as much as they could have been. The Pentagon has multiple points of entry, multiple points of ingress and egress, but in communications there were some single points of failure. We looked at this and said: wow, we would have been down for many months recovering communications if it had hit here, or here. So after the event, they took our report and other reports, and HP took on the renovation to put in additional redundant systems.

One of the key things they did at the Pentagon after this recovery: before, if you were in any of these areas and you hit File Save on a document, or you got a phone call, or anything else data-oriented happened, that information was stored in the Pentagon. And if that storage got hit, boom, you had a single point of failure. I'm going to tell you in just a moment about a single point of failure that we experienced, one that really impacted our ability to recover. But I want to call out the fact that they went back in, rebuilt, and spent, I think, 700 million plus, creating a second AT&T 5ESS switch, even though voice over IP was coming in and is now predominant. They put in a second switch. Verizon put in multiple places of ingress and egress for all of their data, a very costly exercise, and they distributed those former single points of failure to different places around the Pentagon, so that if something like this happened again, they would still have their data.

As I was getting to: prior to 9/11, if you hit File Save, the file was saved in the Pentagon. After 9/11 and the renovations that occurred in the years beyond, if you hit File Save on a document, or sent an email, or something of that nature, it was saved in the Pentagon, but it was also saved a hundred-plus miles away, at an alternative site with recovery capabilities, so that people from that part of the Pentagon could go a hundred miles away, reassemble, and find all of their data there, and their operations could continue. Even though an event had happened, File Save saves at the Pentagon, but then automatically replicates to a site over a hundred miles away, where a recovery site could be stood up very rapidly to bring things back together. That one thing made the Pentagon much more survivable afterward.

Of course, it was a disaster of mammoth proportions. We'd never seen anything like it, never even thought of it, but that just speaks to the evil in the minds of men. A lot of people want to destroy, and it's a very sad situation. Anyway, it's been 20-plus years now; we've recovered from this particular thing, prosecuted a couple of wars, and spent trillions of dollars trying to stop it from happening again. We'll see if we're successful. Hopefully that works.

Here's a nice picture of all the brave responders going up to the roof of the Pentagon and fighting that fire. Of course, during all of this, nobody knew if perhaps there was going to be another event. Maybe a second shoe was going to drop. We didn't know. They had all those airplanes and those 19 different attackers; maybe there was going to be a second salvo. That's why we stopped all aircraft movement and that sort of thing for a period of several days, so that we could improve our security around the nation and make sure there wasn't something else they could exploit.

All right. Now, as you might imagine, in our computer networks we have systems that send us alarms, all these automatic systems. A UPS: my battery's out, boom, send an alarm. A server room that's too hot: boom, send an alarm. All of those sorts of things. When the event went up, we started getting thousands of these notifications and alarms. They had about 83,000 alarms a day, and sadly, they didn't have enough people at the time. Remember, they had just lost some folks, we weren't really sure what was going on, and here we have literally thousands of events alarming to the few people who were left to recover the situation.

And that drove one of the best practices: we helped them put the alarms into different buckets of sensitivity, of criticality, and then respond more rapidly to the critical alarms first. Of course, this is an ongoing battle with any kind of servers and systems, especially now that we are mainly in the cloud and we need alarms to come in and tell us what's happening so that we can respond well. A lot of that work is starting to be done with a little machine learning and artificial intelligence, but this is where we really had to work rapidly to prioritize what we would go take care of first, second, and third. So those were good lessons learned.
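Here is a minimal sketch of that bucketing idea in Python. The alarm fields, severity scale, and bucket names are illustrative assumptions, not from any particular monitoring product:

```python
from dataclasses import dataclass

@dataclass
class Alarm:
    source: str
    message: str
    severity: int  # 1 = critical ... 4 = informational (assumed scale)

BUCKETS = {1: "critical", 2: "major", 3: "minor", 4: "informational"}

def triage(alarms):
    """Group alarms into criticality buckets, most urgent bucket first."""
    buckets = {name: [] for name in BUCKETS.values()}
    for alarm in alarms:
        buckets[BUCKETS.get(alarm.severity, "informational")].append(alarm)
    return buckets

alarms = [
    Alarm("ups-3", "battery exhausted", 1),
    Alarm("room-b", "temperature high", 2),
    Alarm("sw-17", "port flap", 3),
]
for bucket, group in triage(alarms).items():
    for alarm in group:
        print(f"[{bucket}] {alarm.source}: {alarm.message}")
```

At 83,000 alarms a day, the payoff is simply that the handful of critical alarms surface first instead of drowning in the informational noise.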

Now, the second thing that happened, the one I teased you about: the information destroyed by the aircraft included the network and system documentation. It was gone. Why? Because the aircraft had hit servers in the Army's part of the network, and those servers, which contained all of the network diagrams, were destroyed. So I asked: don't you have printouts of these? And sadly, no, there were no printouts. So one of my key points for disaster recovery: make certain that you have accurate documents and accurate diagrams, and that they are stored off site. And key to this is being able to print those things out. It doesn't matter whether it's a super-large network diagram or an application diagram; you need to be able to visualize and see where all your dependencies run. Then you can troubleshoot along those dependencies much more effectively, because you have good system documentation.

All right. Now I want to stop the flow here for a minute and rethink: what does a good manager do? What are we supposed to be doing as technology managers? This is the CISO Executive Primer, which Gary Hayslip, Bill Bonney, and Matt Stamper wrote together. It's a fabulous group of books, and this particular one is about how to interface with, work with, and best employ a chief information security officer. This is really great stuff, right from the horse's mouth. I will come back in just a minute or so, after you've heard from Gary. Also, remember, I will provide a link to Gary's entire session so that you can get to know him a little bit more. That's part of the process of this broadcast: to bring you some great resources and help you understand things a little bit better. We have a whole bunch of these sorts of things to bring to you in the next year. Take a listen to Gary Hayslip.

Gary Hayslip: I was asked to speak about The Executive Primer, the recent book that I and my co-authors wrote, and we're going to discuss that in some of the topics. To begin, the book was written with my co-authors, Bill Bonney and Matt Stamper. It's very different from the other books that we've written, the CISO Desk Reference Guide series and some of the domain-specific books that we've written for CISOs. This one is actually written for the CISO's colleagues. It's written for people that actually work with CISOs, that actually work with security professionals. The book is really one of expectations. What I mean by that is, we're looking at what expectations the CEO has when working with the CISO, or how a chief financial officer should support a CISO and the security team. We were trying to write about how people should work with a chief information security officer, a security team, and a security program. Even though the book has multiple chapters, I picked three domains, three sections, that I thought might be interesting for our talk today. Those are, basically: the expanding role of the CISO in the business; the components of the cybersecurity program that I find to be really important; and executing the security program, actually being effective and making sure we get things done to protect the business.

Bill Alderson: Okay, we're back. This is an example of actual reverse engineering of key systems inside the Pentagon, done in order to solve the problems we had. Most of you probably won't know some of these buzzwords, but they're on the screen, and I'll use them a little bit. Switches, which we know, switches and routers. Switches have these features that are absolutely key to configuring them so that they can be redundant and have automatic failover, and so that they block certain paths. Another friend of mine is named Radia Perlman. Radia is one of these brilliant engineers. She worked for Digital Equipment Corporation, DEC, years ago, then worked for Novell, and now works, I think, for Oracle; I'm not really sure who she's with today, but she's a brilliant technologist, and she invented the Spanning Tree Protocol. I have been in her sessions and learned from her over the years how to manage Spanning Tree so that it does not create loops.

Loops in a Spanning Tree network will bring an entire network down. That's what was happening a lot of the time in these environments: the network would go down because there were loops in the topology, and one looping packet can bring down an entire internet, bring down an entire data center, if things are not managed well. You have to document which switch is the root bridge and where the different pieces are; you have to reverse engineer the environment and diagram out who the root is. There are all these algorithms that we use to maintain a loop-free topology automatically, but those systems don't always work automatically, so we had to reverse engineer all the switches and systems to figure out what was going on.
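As a minimal sketch of what "finding the loop" means, here is a union-find pass over a list of switch-to-switch links. The link list is an illustrative assumption; in practice you would pull it from neighbor data such as CDP/LLDP. The first link that joins two switches which are already connected is the one closing a loop:

```python
def find(parent, x):
    """Follow parent pointers to the set representative (with path compression)."""
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def find_loops(links):
    parent, loops = {}, []
    for a, b in links:
        parent.setdefault(a, a)
        parent.setdefault(b, b)
        ra, rb = find(parent, a), find(parent, b)
        if ra == rb:
            loops.append((a, b))  # closes a loop unless STP blocks it
        else:
            parent[ra] = rb       # merge the two connected groups
    return loops

links = [("sw1", "sw2"), ("sw2", "sw3"), ("sw3", "sw1")]  # a triangle
print(find_loops(links))  # [('sw3', 'sw1')]
```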

Here's another diagram of gateways and different systems. We put in test points so that we could test between two points, to confirm that we did get good throughput between those two points after we fixed things. One of the interesting things: in no other part of life do we skip this check. I say: hey, if you just bought a brand-new Corvette, the first thing you do is go out, put the pedal to the metal, and see how fast it'll go, or how fast it goes from zero to 60, that sort of thing. Now it's no longer the Corvette, it's probably a Tesla; those things are really fast. So the first thing we do is test whether the circuit, like the car, is hitting the theoretical numbers that are stated. Between two points, we put instrumentation in so that we can test between those two points, to ensure that we are getting the throughput that we have purchased from the data system providers.
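Here is a minimal sketch of a two-point throughput test in Python. The port, transfer size, and invocation are illustrative assumptions; real circuit acceptance testing would normally use a purpose-built tool such as iperf:

```python
import socket, sys, time

PORT, CHUNK, TOTAL = 5201, 64 * 1024, 64 * 1024 * 1024  # one 64 MiB transfer

def server():
    # Sink: accept one connection and drain it.
    with socket.create_server(("", PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            while conn.recv(CHUNK):
                pass

def client(host):
    # Source: push TOTAL bytes toward the sink and time it.
    buf, sent = b"\x00" * CHUNK, 0
    with socket.create_connection((host, PORT)) as sock:
        start = time.monotonic()
        while sent < TOTAL:
            sock.sendall(buf)
            sent += CHUNK
    secs = time.monotonic() - start
    print(f"{sent * 8 / secs / 1e6:.1f} Mbit/s over {secs:.1f}s")

if __name__ == "__main__":
    # usage: python tput.py server   |   python tput.py <server-host>
    server() if sys.argv[1] == "server" else client(sys.argv[1])
```

If the measured number comes in far below what the circuit was sold as, you have a concrete finding to take to the provider or to your own gear.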

Okay, so we did that, but first we had to reverse engineer the network in order to diagram it. These are actual diagrams that we created during the event. Cool.

We also had to find various errors and use various tools to diagnose the problems. We would go out and find where certain errors were occurring, like CRC errors, cyclic redundancy check errors. That's a big fancy term for making sure that the data you received was the data that the sender meant to send. Yeah, that's pretty cool, isn't it? Okay. So CRC errors mean the data got corrupted in transit, and when it arrived, it was wrong. When we see those sorts of things, we know something is errant between two points, and then we can quantify it and say: that shouldn't happen at all; it should be zero. And it has some. So we have to go diagnose those problems, and then we have to look at the network diagram to see where those errors are being created along the set of dependencies.
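Here is a minimal sketch of the CRC idea using Python's zlib.crc32. The payload and the single flipped bit are made up for illustration; real links compute this in hardware, as with Ethernet's CRC-32 frame check sequence:

```python
import zlib

payload = b"routing update: net 10.0.0.0/8 via gw1"
fcs = zlib.crc32(payload)      # sender computes the check value

received = bytearray(payload)
received[3] ^= 0x01            # one bit flips somewhere in transit

if zlib.crc32(bytes(received)) == fcs:
    print("frame OK")
else:
    print("CRC error: data corrupted in transit")
```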

It's pretty simple if you've been there. It's not rocket science. The problem is that people who don't have experience need to be trained by people who do. And then you need to get your entire team trained in how to read your network documentation, how to see where your dependencies are, and how to tell what's broken and what's not working.

One of the problems they had, after moving hundreds of servers, was that their firewalls were all misaligned. They had about seven firewalls; 7, 8, 9 firewalls in this particular picture. We had to go look at statistics and find out why some firewalls were delaying packets and what was going on. So we had these throughput charts, and we'd go from firewall 1 through 7, figure out what kind of traffic was flowing, and work out how to rebalance those firewalls so that things would work better.
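Here is a minimal sketch of that rebalancing idea as a greedy assignment in Python. The flow names, loads, and firewall count are made-up numbers; real rebalancing would also have to respect rule sets and session state:

```python
import heapq

flows_mbps = {"smtp": 120, "web": 300, "file": 450, "db": 80, "vpn": 200}
firewalls = ["fw1", "fw2", "fw3"]

# Min-heap of (current load, firewall); the heaviest flows are placed
# first, each onto whichever firewall is least loaded at that moment.
heap = [(0, fw) for fw in firewalls]
heapq.heapify(heap)
assignment = {}
for flow, mbps in sorted(flows_mbps.items(), key=lambda kv: -kv[1]):
    load, fw = heapq.heappop(heap)
    assignment[flow] = fw
    heapq.heappush(heap, (load + mbps, fw))

print(assignment)  # e.g. {'file': 'fw1', 'web': 'fw2', 'vpn': 'fw3', ...}
```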

We had to reverse engineer those diagrams, and that's another part of the key takeaway. In a large organization, you must have people who can look at and respond to zero-day problems down at the very basic, fundamental levels. If you just have a bunch of clickologists and plugologists (you'll understand just from the terms: all they know how to do is click and install, or buy and plug in), you're in trouble. You need technologists who have the theory behind the understanding, so that they can reverse engineer, troubleshoot, look at deep packet data, look at security fundamentals, and see why someone's trying to break in and what's happening down at a detailed, theoretical level. Very important.

If you want to know a little more about firewalls, listen to my first broadcast, on the denial of service attack against the stock markets. I go through, in great detail, what we did to solve that particular problem using a myriad of different firewall techniques. So take a look at that. It's not super techie, but it does give an executive like yourself, or even a board member, an understanding of what kinds of problems you're solving, how you work through them, what resources you need, what focus you need, what training you need, and what kind of people you need. It helps you understand some of these things. I try to avoid the big buzzwords. Some are inevitable in a data world, but a lot of executives understand some of these things, and hopefully these exhibits will help you relive these events and understand what's going on.

Okay, so here is an example of a circuit that was highly degraded, with very low throughput. We found a problem and then improved it. Now, this is where the iteration comes in. We got it improved by about 50 percent, but that wasn't the full improvement we could get. That's the peeling of the onion; that's the fact that there are multiple problems causing these things. So you have to take an iterative approach: analyze, find a problem, solve it, like we did here. Find another problem, solve it, like we did here. Find another problem, solve it, until the system is working optimally and your users and your business can return to operation.

Talking about best practices in documentation: I prepared this slide some years ago, about the need for visualization of details. At the very top, you'll see disaster recovery. Yes: in the event of a disaster, you have to have visualization of details, because you may have to rebuild a circuit, get a secondary circuit put in, or do all kinds of different redesigns, so you need the most visibility and the most iteration of documentation for disaster recovery. I'm going to show you some examples of these different types of documentation, and you can take them away and benefit from them.

This first one is what your management, your leadership, and your users need to know: the basics of where your systems are connected. The second is an application that was very slow out in California, and it got worse and worse under load. Data was being hauled between Boulder, Colorado, and California across very low-speed links. So I showed the sickness of the data moving back and forth, and the path and the dependencies it was traversing. If you were local, say between a server and a workstation on a local area network, bandwidth is essentially free and you can get very rapid capability. But the offered load here was what you'd expect on a local area network, and it was trying to go back across a very low-speed link. You can't put 10 pounds in a 5-pound bag, and that's what this shows: it visualizes the network and the offered load on the network for the different types of transactions.

This next one is an example of how you might see your technology in your equipment racks, and how the devices might be connected. But how they are connected physically is different from how they are configured, what actually connects to what. We use technologies like VLANs and routing, what we call Layer 2 and Layer 3 technologies in the OSI model. If you're familiar with at least those terms, here we take those same exact devices and break them out, showing Layer 1 and Layer 2 and which VLANs they traverse. Just because a big switch has a plug in it doesn't mean it's connected to everything. Those are logical connections, based on configuration: what is allowed to access what, through firewalls, et cetera. So you have to be able to see your system holistically.

This is a large diagram reflecting Layer 2 and Layer 3 technologies, and I've superimposed some of the details that live on a server: the different interfaces you may have, and the various types of network and system configuration and dependencies. This is even more important in a cloud environment: you need to know how it's connected, at the basics, so that you can see what your dependencies are. When a disaster hits, you need to see how things communicate, from point A to point B and from point C to point D.

And the only way I know of to get there is good old-fashioned WORK, W-O-R-K; I can barely even say it, because it's a four-letter word. Work is required to diagram these systems; there's no automatic shortcut. Remember how I told you that your fingerprint of technology is unique to you? These systems are unique to you. They're unique to every organization. They lay out differently depending on whether you're a centralized bank, a decentralized aerospace company, a retail vendor, and so on. You need to take a look at your enterprise and then help your employees be able to put a finger on a diagram and trace it through to see the dependencies, so that when there's a problem, they can diagnose it.

A lot of organizations don't train their technologists. They pay them a lot of money, hundreds of thousands of dollars a year, and they hire some new person who was at another company and was really smart and did a really good job. So you hired them. But they're not going to come and tell you: "Hey, guess what? I'm impotent. I can't understand your environment, because you don't have any network documentation, and I know you're paying me a lot of money." They're not going to come and tell you this. It's just human nature. But without good documentation and systemization, your people will take years to assimilate and understand a complex architecture, instead of the weeks it takes if you have a diagram.

So your system diagrams should show your people how everything works and where the various dependencies are, so that when they come to work for you, it takes two or three weeks, with some good training on your documentation and your architecture, and then they understand it. Let's say you have a hundred-plus people working on security and networking and that sort of thing. If only two or three of them understand the environment, because they've been there forever, they're inundated; they can't support and help everyone understand every problem, and they become the bottleneck. By documenting your infrastructure, you do away with that bottleneck, and everybody is enabled by the diagram.

Yes, it's costly, and yes, it takes a lot of very focused work to keep it up to date, but you will be glad you did it. When you bring in a new CIO, bring in a new CISO, or there's a disaster, you are going to thank your lucky stars that you had the foresight to document your systems. It's key. And you don't necessarily want to outsource this, because that's outsourcing your architecture. I created a term for this: architecture ownership. Every company has a different architecture. They need to own it, they need to understand it, and they need to make sure it is documented for the future.

This is just a very simple flow diagram that I created while we were documenting large Fortune 500 networks. We started out at Burlington Northern Railroad, building these beautiful diagrams of their train network and of their office automation networks, and did a really awesome job on that. I learned a lot from those engineers, and then we took those techniques into a service we called DocuNet: we could go in, in a matter of weeks, reverse engineer an environment, and build these beautiful documentation systems. I don't do that anymore, at least not necessarily for you, but I do provide the leadership, the wherewithal, the how-to, and I help you build a system and build in these best practices. So contact me if you're interested in some help with that.

Here's a different view: troubleshooting a big routed network when problems happen and bring down an entire energy company. A multi-billion-dollar energy company went down because of problems with their routing that they couldn't diagnose, until we started diagramming it out and seeing where the problem was. We had two different instances of EIGRP 10 in two different areas, and they were competing with one another, but you couldn't see it, because it wasn't diagrammed.

Okay. This one is also very cool. I know it's got a lot of stuff on it, but there are two switches up at the top (it says "trunk" at the top), then the VLANs, color-coded, and down at the bottom, FW1 and FW2: those are firewalls. People buy two of everything today, for redundancy. Here's the fallacy. When you diagram and show the dependencies for each critical transaction, and you see that, for instance, the yellow transaction goes through three devices, then any one of those can fail (you can pull that firewall, or pull that switch, or a disaster can break it) and your entire system goes down, even though you bought two of everything. You have to diagram it out and figure out whether a single point of failure will take those systems down. You buy two of everything because you want it to be redundant and resilient. The problem is, if you don't look at where your transactions flow and what they depend on, you don't see that you've configured a capability that requires all four of those devices, redundant or not, to be up and running for your transaction to complete properly. Then, when something like that happens, you're wondering: why isn't my redundancy working? Exactly.

We've been called in to troubleshoot a number of huge organizations that pulled the plug on something, and then, every time they tried to reconnect their redundant devices, the network would break again and cause a big meltdown, and they didn't want that. So they end up running with a single point of failure, not using the redundant technology. They bought two of some very expensive network components and systems, but they couldn't connect them together to build a network that stays up when one device has a problem. It was just a single point of failure.
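Here is a minimal sketch of that dependency check in Python. The device names and transaction paths are illustrative; the point is that redundancy only helps a transaction that actually has a second path:

```python
def survives(paths, failed_device):
    """True if at least one path avoids the failed device."""
    return any(failed_device not in path for path in paths)

# Every known path each transaction can take through the infrastructure.
transactions = {
    "yellow": [["sw1", "fw1", "sw2"]],                         # one path only
    "blue":   [["sw1", "fw1", "sw2"], ["sw1", "fw2", "sw2"]],  # two paths
}

devices = {"sw1", "sw2", "fw1", "fw2"}
for name, paths in transactions.items():
    spofs = sorted(d for d in devices if not survives(paths, d))
    print(f"{name}: single points of failure -> {spofs or 'none'}")
```

Even with two firewalls bought and racked, the yellow transaction fails when fw1 fails, because nothing was ever configured to carry it through fw2.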

839

:

All right.

So that kind of helps you. Getting down into this, it's a very complex set of configuration variables about transparent bridging and that sort of thing. I'm not trying to teach you those details, because they're irrelevant to 90 percent of you. However, you do need to realize that certain key things will break, and when they break, if you don't know how they're configured and you can't visualize the environment and how it's configured, there's no way you're going to be able to recover gracefully. It's just going to continue to be a kerfuffle, and you're going to keep having great problems until you really get it nailed down. So that's what that is about.

Now, when there is a disastrous problem, caused by a natural disaster or caused by some other thing, take a look at this. You see, I have this disastrous problem, and the status quo, your view, is what I call a square. And so you can see that I have a square up there. It's got your team, your environment, your problem, your symptoms, and all of these different things that we know about the problem. We know things aren't working well, but you can see that it's just basically two dimensional. It's a square.

In order to overcome a disastrous problem, it typically requires what I call a paradigm shift. You cannot solve today's problem with today's information. You need some new input. You need something that tells you, here's where the problem is. And I call that moving from a square, two dimensional, to a three dimensional cube. And I'm going to show you a picture of this. Boom, one, two, three. You can see that with a new input, a six-sided cube allows you two more viewpoints. And that allows you to have new information, whether that's another technologist coming in to help you with the key information. A lot of times, I'm a deep packet inspection guy, and when I come in, I add another input, another perspective, another view, so that you can get a different payoff.
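As one illustration of what that packet-level input can look like, a sketch rather than the actual toolchain I use, here is a small Python script using the scapy library that flags likely TCP retransmissions in a capture file; the capture file name is a placeholder.

# Sketch only: count likely TCP retransmissions in a capture file.
# Requires scapy (pip install scapy); capture.pcap is a placeholder name.
from collections import Counter
from scapy.all import rdpcap, IP, TCP

def likely_retransmissions(pcap_path: str) -> Counter:
    seen = Counter()
    for pkt in rdpcap(pcap_path):
        if IP in pkt and TCP in pkt and len(pkt[TCP].payload) > 0:
            # The same flow re-sending the same sequence number is a
            # likely retransmission.
            key = (pkt[IP].src, pkt[IP].dst, pkt[TCP].sport,
                   pkt[TCP].dport, pkt[TCP].seq)
            seen[key] += 1
    return Counter({k: n - 1 for k, n in seen.items() if n > 1})

if __name__ == '__main__':
    retrans = likely_retransmissions('capture.pcap')
    print(sum(retrans.values()), 'likely retransmissions')

A spike in retransmissions is the kind of new metric that turns "the application is slow" into "this link is dropping packets," which is the paradigm shift in action.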

In some cases, it may be a different cloud person or application person who comes in and sees a new finding, a new visibility, a new diagram, a new metric, a new root cause. And that allows you to shift the paradigm, and you can solve today's problems with new information. And then it's always simple, right? Why didn't I think of that? Yeah, exactly. Problems require new thinking, new information, and the pressure of a disaster is exactly when you can harness those capabilities of your team. It's really awesome. Paradigm shift.

Now, the whole purpose is to build business continuity so that we can have resilient systems and ongoing operations. If we do have a problem, we can recover more rapidly, and we have good systems of communication with management, planning, etc. So we're harnessing all the best practices to maintain business continuity, and it's all part of the system.

Now, all right, at this point I want to introduce you to a fabulous leader, a technologist and leader, and this is Colonel David Wills. He's going to talk to you just for a moment about a little leadership principle, and then you can go on and listen to his longer talk about technology, about managing communications for the entire war in Afghanistan and Iraq, and about building out and diagramming large networks at Central Command, the Joint Chiefs, and Strategic Command. You will not find a more experienced, knowledgeable fellow than this leader, Colonel David Wills.

I'm currently employed by General Dynamics Information Technology. I retired not even a full year ago. And you all say, that's mildly interesting; why are you our keynote speaker? Bill, I'm still trying to figure that one out. But I think it has to do with the fact that Vint Cerf created, spawned the DDN, which is now the DISN. I spent the last 20 years making change on that network and infrastructure. In my current position, I get to continue making change and leading change from a technology perspective. As I talked about, a couple of words that weren't on the slide: trust is what sticks out in my mind. At the end of the day, trust is what leadership boils down to.

We're back. So here we have cybersecurity. Cybersecurity truly sits under disaster recovery, because when a cybersecurity event hits, the disaster recovery team has to go to work in order to figure out what's going on. The disaster recovery field and the professionals in it are key to helping us make certain that we can manage through and navigate cybersecurity incidents well. It's a similar sort of process to what disaster recovery people have been doing for decades, and bringing them in to help lead cybersecurity events and incidents, integrating those two teams, is a really good way to build redundancy and reliability.

Again, you've seen this before, and I'm going to reiterate: take the best practices, the high fidelity, low noise, and amplify that into your team; impute and apply those best practices in advance. And it's always more fun to be ready. So figure out your lessons learned, or learn from other people's lessons learned. Get those things imputed, get your network documentation, get your alerting systems, put all those things together, and then build out some tangible results.

And like I said, even Mark Zuckerberg, who can hire the smartest people in the room all the time, had his network go down on October 4th, 2021. And it lost him 25 to 50 billion in value in a matter of hours. Why is that? There are some best practices that he and his team did not put into their system, and that allowed a very problematic situation, costing his organization billions of dollars. So I'm going to talk to you about that in a future analysis of the problem with Facebook.com going down, and then we'll discover those sorts of things.
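I'll save the analysis for that episode, but it is widely reported that the October 2021 outage began with BGP route withdrawals that took Facebook's authoritative DNS off the internet. In the spirit of the alerting systems mentioned above, here is a minimal outside-in reachability probe, a sketch only, with a placeholder domain and interval, that would notice that class of failure from the outside.

# Sketch only: outside-in probe that alerts when the world can no longer
# resolve or reach your public front door. Domain/interval are placeholders.
import socket
import time

DOMAINS = ['example.com']   # replace with your public-facing names
INTERVAL_SECONDS = 60

def reachable(domain: str) -> bool:
    try:
        addr = socket.gethostbyname(domain)              # is DNS answering?
        with socket.create_connection((addr, 443), timeout=5):
            return True                                  # is TCP/443 reachable?
    except OSError:
        return False

if __name__ == '__main__':
    while True:
        for d in DOMAINS:
            if not reachable(d):
                print(f'ALERT: {d} unreachable from outside')  # wire to paging
        time.sleep(INTERVAL_SECONDS)

The catch, and part of the lesson, is that a probe like this has to live outside your own network, or it dies with the very disaster it's meant to report.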

But I just need you to know that this is more relevant now that we are more dependent upon data. It doesn't matter what kind of disaster we have, natural disaster or otherwise; today it always involves data, because our environment is so dependent upon data. Here are some additional things that we're going to talk about in the future: biometric systems and the federal government.

Matter of fact, it sounds like I've done more with the military than anything else, and that's not true. Actually, most of my work is with Fortune 100 and Fortune 500 companies, energy companies, financial and healthcare organizations, various types of data disasters, and cybersecurity events. And it required some experience to make certain that you're ready, because like I always say, it's a lot more fun to be ready.

I'd love to tell your story. So if you have a story as a planner, as an implementer, or as a responder to some type of disaster, my job today is to bring your stories in and pick out all the lessons learned and the best practices so that other people can benefit from these things. We will be out there serving you and helping you solve those problems.

And we're always happy to be a friend to you when you are in need. Whether you need to review your architecture or make sure you're ready, we can take a look at that. And if you have a disaster, you can click on our website and, boom, go in and say, I have a disaster and I need some help; whatever that is, we're happy to help you. And we really enjoy teaching, training, and helping you impute the best practices that will save you time and money, and possibly, obviously, lives when those things are at stake. Thank you so much for being with me today. I look forward to seeing you in our next broadcast.


About the Podcast

Disaster.Stream
Disaster Stream is a podcast series that delves into the world of disaster recovery, cybersecurity incidents, and critical problem resolution in major organizations. Hosted by Bill Alderson, the podcast features expert insights, case studies, and interviews with leaders and pioneers in the technology and cybersecurity fields. Each episode shares lessons learned and best practices for crisis management, aiming to help organizations prepare for and respond to disasters effectively. Available in both audio and video formats, Disaster Stream is your go-to resource for understanding and navigating the complexities of disaster recovery and cybersecurity.

About your host


Bill Alderson

Bill Alderson is a historian at heart, a storyteller by nature, and a technologist by trade. For more than four decades, he has solved some of the toughest challenges in cybersecurity and networks — from helping restore communications at the Pentagon on 9/11 to training thousands of professionals worldwide.

But beyond technology, Bill is the proud grandson of Mabel and Ed Plaskett, California pioneers who passed down stories of resilience, family, and the rugged Big Sur coast. As the family historian, he has gathered photographs, journals, and documents to preserve the heritage of the Plaskett family for future generations.

Through this podcast, Bill shares those stories — weaving together history, heritage, and personal reflections — so that listeners, whether family or friends, can connect with the enduring spirit of the Monterey County coast.