Published on: 27th Aug 2025

US Military Biometric Systems: Digital War Lessons from Iraq

In this episode of the Disaster.Stream Podcast, host Bill Alderson takes you inside one of the most critical — and least understood — battles of the Iraq and Afghanistan wars: the fight to keep U.S. military biometric intelligence systems running in the middle of the digital war.

Bill recounts how he was called by Army G2 at the Pentagon and deployed with U.S. CENTCOM to Iraq when a biometric watchlist system—used to identify insurgents through fingerprints, iris scans, and facial recognition—failed at a crucial moment. With millions of records in play and soldiers depending on accurate intelligence at checkpoints and bases, the failure threatened operational readiness.

🔑 What you’ll learn in this episode:

  • How biometric systems were used to enroll entire populations in Fallujah and beyond
  • The technical challenges of scaling biometric databases during wartime
  • Why replication delays and network bottlenecks nearly crippled the system mid-war
  • How battlefield packet analysis and root cause troubleshooting restored operations
  • Lessons learned that still apply today in cybersecurity, disaster recovery, and digital identity management

👥 Special segments also feature:

  • Charlene Deaver-Vasquez (FISMACS) on mathematical models for forecasting cyberattacks
  • Jon DiMaggio, author of The Art of Cyberwarfare, with insights into nation-state cyber threats

From the Pentagon and CENTCOM to forward-deployed teams in Iraq and Afghanistan, Bill shares firsthand experiences of solving mission-critical failures when lives were on the line. These stories carry timeless lessons on incident response, readiness, and building resilient systems.

Transcript
Speaker: Thank you for joining me today. I'm Bill Alderson with the Disaster Stream Podcast. We're coming to you from beautiful Austin, Texas. Right behind me is Lady Bird Lake and downtown Austin. And if you want to have a good time, come to Austin, the live music capital of the world. I've got a great responder story for you today.

I got a call. I got a call because the United States military had a problem with one of their digital war-fighting systems. It was an intelligence system that used biometrics, and it was key to maintaining a watchlist of the worst insurgents in the wartime effort.

I was first contacted by the G-men, actually the Army G2 at the Pentagon. I flew back there, got briefed on what was going on, and then talked to various folks on the Joint Chiefs of Staff. Then we went down to US CENTCOM, where we got outfitted with the rest of the team to take us over into Iraq. They said, would you mind deploying with the troops? And I said, it'd be my honor. The rest of our team were excited about it too, and we all jumped at the opportunity to fly to Iraq and help solve this incredible digital-war problem.

I'll talk to you a little bit more about it, but before I do, I want to make sure that you all remember that it's definitely more fun to be ready. And that is the message I am talking about today. We were ready to go to Iraq to help with the problem, and we want you to be ready when you get that call or your company calls upon you in time of need. Here we go.

I call it Disaster Stream. Our website is disaster.stream. That's it, just disaster.stream. Put that in your browser and you'll go to our website, where you'll find all of our podcasts and other information.

This is season one, episode three; we're brand new. This is something I've been developing for the last six months or so: stories where responders have to respond to a particular disaster. Mainly these disasters are of a nature that affects data, because that's where I have spent my entire career, nearly 40 years now, looking at packets on the wire, diagnosing critical problems, and leading teams to solve a problem or help recover from some sort of disastrous event.

We've got the US military in Iraq and Afghanistan and the digital war they were fighting there, which a lot of people probably don't really know about: we have been fighting predominantly a digital war. How do we do that? We have insurgents, and it's very difficult to identify your insurgents, to tell the average person in a community from an insurgent.

What the US military did was, first, the Marines took this system and went to Fallujah, which was the hometown of Saddam Hussein. They went to that town, they made everyone leave, and then they bermed the whole city so no one could go in or out. They had three points of ingress and egress. When somebody went in, they had to give their fingerprints, an iris scan, and other identifying information, and they built a small dossier on everyone in that particular area. Later on, you're going to see why that was important.

But when they got to 1.2 million enrollments in the system, it stopped working. And that's why I got the call to go in and figure out why this incredible biometric intelligence system stopped working mid-war. We went in and started taking some action along with US CENTCOM.

Part of our mission is to tell your story. I talk about my stories as an example so that you can see how your story might fit into our formula. What we do is try to tell a story from the responder's viewpoint and what happened, and then we pick out the lessons learned: the things we did really well, and the things that maybe we didn't do so well. Those are also lessons learned. We can become more ready to respond to a disaster professionally. If you have a story, your team has a story, or someone you know has a story, let them know to come and watch the disaster recovery responder stories here, and we will show them and talk to them about being part of our broadcast.

On every broadcast, I like to talk about other people that I know and work with, who have spoken at our conferences or who I've worked with in the past. I have two people today. The first is Charlene Deaver-Vasquez of FISMACS. She's going to talk about something really cool, and you're going to want to understand it: it takes a very short time to grasp something that can give you real insight into what's going on in mathematical models for forecasting cyber attacks. What I do is queue up a promo of her session. Her session is about 30 minutes long, and in it she teaches on mathematical models for forecasting cyber attacks. I give you the link in the show notes to the roughly 30-minute piece that she did for us at our recent Austin Cyber Show.

Another treat that we have for you today is Jon DiMaggio. Jon has written this great book, The Art of Cyberwarfare. He has looked at nation-state cyber threats, and he has some cogent information for you. He'll introduce himself, and then I will give you the link so you can listen to him for about 30 minutes talk about insights from his experience doing nation-state analysis. And then, of course, his book is a great resource to add to your library.

Over my career I have had to respond to a lot of data crises, a lot of disasters. This is a list of some of the problems I have gone in on and disasters that I've had to help recover from.

But I want to just point out how relevant this is. What lesson learned from the past might have prevented Facebook being down on October 4th, 2021, just about a year ago, when they lost full connectivity for their entire organization for four to six hours? It was a disaster. They lost about 5% of their market value, which was about $25 to $50 billion. It popped down, and of course it may have gone back up later, but somebody lost $25 to $50 billion of market value on that day. Why? Because there was a lesson learned, a best practice, that Facebook didn't implement, and it cost them mightily. I will talk about this in a future session.

Basically, I want you to understand that even though some of these things are in the past, the Pentagon work was 20 years ago, the things you can learn from the past can save you millions and possibly billions of dollars in the future if you are prepared. And that's my message.

I usually go over some little tidbit of a disaster recovery timeline. I'm not going to do that today, but as I introduce myself and what we're doing, I talk about the timelines. I talk about tiger team formations, how to document, and how to peel the onion and do analysis. I'll always bring you a little bit on that. This is just boilerplate that I go over; I'm introducing the topics and the ideas so that you get an understanding of what we're talking about on the show.

Now, the main purpose: once I find and identify lessons learned, we want to move them into best practices. And those best practices often are not easily implemented. You may need some help. Maybe you're a large Fortune 500 or Fortune 100 company, a government institution, or a nation; you still want those best practices, but it's hard to get those things embedded into your organization.

If you take a look over on the left, I've got the high-fidelity, low-noise input that you want to amplify. That's why the center of this slide shows a little transistor amplifier. Your organization is going to amplify the results of those best practices. That's what we're talking about: embedding and applying those best practices. Sometimes you need some help. If you're a defense contractor, you might want to look at Capgemini, General Dynamics IT, or other consulting organizations like McKinsey, et cetera.

Here's today's story: in the US digital war in Iraq and Afghanistan, the biometric watchlist system failed at a very critical time. Now, about authentication and biometric recognition and identification: it's very difficult in a wartime situation to take the population and identify people. They don't always have driver's licenses, fingerprints on file, or any other sort of definitive record, and the things that are definitive about identification are your fingerprints, your iris scan, facial recognition, and other such things. So the military used these in building this particular capability out, and it's very powerful.

We're talking about taking Fallujah, for instance. The Marines took the city of Fallujah, which was Saddam Hussein's hometown. They made everyone leave, and then they bermed the city so that only three points of ingress and egress were allowed. Marines were stationed at those points, and every person going in and out had to submit to having a biometric identification dossier built on them, so that you had fingerprints on file. The whole purpose of this was so that we could identify the insurgents by biometric information.

I'm going to show you in subsequent slides how you can actually find an IED bomb maker by finding their fingerprint, searching through those fingerprints, and finding that needle in the haystack. That is why the Iraq war surge was so successful: they had this tool to identify, out of millions of people, who the bad guys on the watchlist were. And I'm going to go through and show you how some of that happened.

Now, fingerprints, iris scans; they're coming up with palm prints, they're coming up with various kinds of facial recognition. There are a lot of different biometrics, and this system allows you to put in almost any type of biometric. They were adding to it on a regular basis and most likely still are.

This is the biometric ID path to identify, find, and catch a bomb maker. Up here on the top left, you have an IED that exploded and killed some people. Some of the fragments of that IED still have fingerprints or other identifying information, maybe a hair sample or something of that nature. Or they found a place where they were making those IEDs, and the makers scrambled away and left, but they left behind incriminating evidence: the fingerprints of the people who were there making those IEDs.

Having taken the general population's fingerprints and iris scans, what happens is that after they find those IED fingerprints, they do CSI work. You watch television; you can see the CSI work. They go in and recover hair, DNA, and fingerprints off of maybe some tape that was used to tape up the electrical components of an IED. They take those biometric components, bring them together centrally, and communicate them across large networks. And this is what was happening. I'll explain a little bit more, but basically, by identifying all of these people, we can win the digital war. It's an enablement: being able to find bad guys from their fingerprints, iris scans, and other identifying information that's definitive.

Intelligence systems go out and identify that there was a cell of people making bombs in a certain area. Then they go in, take fingerprints, and send them to the CSI lab, right there in country. Then they bring all that data back to the intel analysts, who put everything together. Pretty soon we were able to match the fingerprints that were on that IED to somebody we had enrolled. And then, when that person comes through a security checkpoint and puts their finger on the platen: boom. We know that this person was a maker of an IED, and that's how we prosecuted the digital war in Iraq. Pretty cool.

This is the system. You take a variety of inputs, fingerprints, iris scans, pictures, and other such things, and you build them into a system. Now, this is a diagram that I made so that I could understand it a little bit better. Once we had an enrollment with a dossier, it would go in and hit a SQL database in various files, and those files were then distributed all around so that people could share those particular enrollments. They also went back to the US, to the Biometric Fusion Center, and even back to the FBI, so that people could see what was going on with this.

But this system got overloaded. It would replicate between all of the various servers all over the world that were involved with the war effort across Iraq and Afghanistan, and it stopped being able to replicate. That's when I got that call: Bill, would you go over with the troops to Iraq and help us figure out what is wrong with our system? Why is it not working? Why did it stop working mid-war?

I'll show you a few things. As you can imagine, these are enrollments in Iraq and Afghanistan. There were a lot more enrollments in Iraq than in Afghanistan at that particular time, and there were other parts of the world where we were taking these enrollments as well. I took these statistics and put them into a chart so that I could see what was going on as I worked on this particular system. And here's another, similar kind of chart, but it shows the enrollments by source: how many did SOCOM do? How many did BAT do, across the various biometric enrollments in the entire war area of the US military?

They would also keep track of how many latent matches we were getting, and how productive the system was at identifying people and putting them on the watchlist, so that when someone came through a checkpoint, boom, or we had a suspect, we would know they were in this particular area.

They also used this system for things like vetting who could go to work on a military base, because they used various TCNs, third-country nationals, as well as Iraqi and Afghan nationals, to help run the military bases. They did labor, they got paid, they worked like a regular job. But every day they would be let onto the base and then leave the base, and these biometric systems were used to allow them on and off. The military also used them for base access, so that they knew it was definitively a military member, or a member of the institution, being allowed in and out.

So we analyzed all the biometric systems. Now I want to talk to you a little bit about the workflow. Whenever you're trying to solve a problem, you have to have some telemetry data. You see at the top, in the blue, you have strategic instrumentation and various metrics and systems that you have to prepare in advance. You have to instrument your environment, and that's what we spent a lot of time doing: instrumenting the environment. We instrumented the entire war network in multiple locations around the world. Then we would take the findings from that, break them down, and work through them problem by problem, issue by issue, through all the logical findings. Then we would discuss, troubleshoot, test, and make recommendations. It was a continual improvement program to solve the problems. And we would record all of our findings over time so that we could make certain we used these new lessons learned continuously. That's how we did the work over there.

I developed this methodology over 40 years of doing critical problem resolution, and this is how I work. I peel the onion back; I do analysis. And the problems are many. There's not usually one problem; there's usually a myriad of problems. That's why I call it peeling the onion. You peel the onion back on the first problem, and then you incrementally break it down until you get to the heart of the problem. That's the most material problem, because you can always find problems. The thing is that you want to find the problem that is causing the most pain. That's what this is designed to do.

Now, here are some pictures of me and some of the team members when we were going over or coming back. I made six trips to Afghanistan, Iraq, Qatar, Kuwait, Bahrain, Djibouti, you name it. I was in all of those various countries. They had various communication systems going in and out. And then I also went to all the locations around the US where the data would flow in, to have that intelligence analyzed. We analyzed every step, from the war fighter putting in the initial fingerprint and doing the initial enrollment, through that enrollment moving through the system and then replicating around the entire world. I was analyzing all of those different things.

And this is just a couple of pictures. This is Al Faw Palace. This is where Saddam Hussein was often located, and his sons Uday and Qusay had houses nearby. They had a lot of these palaces, I think over a dozen of them, and they were quite palatial. Here's Saddam Hussein's big chair that was out in the middle of the area, and we would all take turns sitting down to have a picture there. And here you see me inside a C-17; there was a big power generator on board, and I was joking around, saying, hey, here's how I can hook up my razor to this system. Anyway, little joke.

And we had Marines, Army, Navy, and Air Force people on the team. We also had civilians; quite a large civilian contingent came with us. Each time, a colonel would lead our 10 to 20 people who would go into country to do these various analysis tasks and help us accomplish the goals. The colonels would then brief the generals as we were coming through on what work we were going to do on which trips. And it was very well coordinated by US CENTCOM, who led the entire operation.

We had to instrument the entire theater: Afghanistan, Iraq, Qatar, Kuwait, Djibouti, CENTCOM, all over the world. We had to put systems in that would monitor the network and watch these transactions as they went from the war fighter through the intelligence systems and replicated to other servers inside the AOR, the area of responsibility; that's the name for the region US CENTCOM was responsible for, covering both Iraq and Afghanistan. There was a lot of instrumentation we had to set up in advance. This is what that looked like. Okay.

Now, like I said, as I go through, I like to introduce you to some of the folks I work with, or that I know and who have spoken at our various conferences. Here we're going to listen to Charlene talk about mathematical models for forecasting cyber events.

Hi, and welcome to this session, Mathematical Models for Forecasting Cyber Attacks. My name is Charlene Deaver-Vasquez. There's actually quite a lot that I want to share with you, so I want to dive right in and get started. Some of the things I want to cover in today's session: first, an overview of the mathematical methods we use for estimating the probability of cyber attacks and that we use in some of the models. I also want to cover some of the mathematical model use cases, and then talk a little bit about what the analytical process itself is. And at the very end, I want to share with you a brand-new model that will give you a glance at what the future of these kinds of models might look like. It's based on some math that was actually only theorized a year ago, so I'm excited about that. Let's get started.

Okay, here we are. We're back. Thanks for listening to Charlene for a minute. I hope you'll go back and check out her longer video and learn a little bit more about what she's talking about.

Now, remember how I told you that all of these servers and systems would replicate around the world? There was a problem with it, right? We had to replicate that watchlist and that system around the world, and this is a picture of some of the servers, how the replication would work, and how many hours it would take. We had to calculate all that out, how much data was moving back and forth, to figure out, for the intel people, from the time a new enrollment came in, how long would it take to get to the US? How long would it take to get to other areas? So that we would know whether it was 10 hours, 20 hours, or 72 hours from the time a bad guy was enrolled until that watchlist was replicated around the world, in case we encountered them.
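As a rough illustration of that kind of calculation, here is a back-of-the-envelope sketch in Python. The link speed, efficiency factor, and data volume are made-up illustrative numbers, not the actual wartime figures:

```python
# Rough estimate of how long a watchlist delta takes to replicate over a
# slow WAN link -- illustrative figures only, not the actual deployed links.

def replication_hours(delta_bytes: float, link_bps: float, efficiency: float = 0.5) -> float:
    """Hours to push `delta_bytes` over a link of `link_bps`, assuming only
    `efficiency` of the raw bandwidth is usable (protocol overhead,
    retransmissions, competing traffic)."""
    seconds = (delta_bytes * 8) / (link_bps * efficiency)
    return seconds / 3600

# e.g. 10 GB of new enrollments over a 1.5 Mbps satellite link
print(round(replication_hours(10e9, 1.5e6), 1))  # -> 29.6 hours
```

Plugging in each link's real throughput and the measured delta sizes is what turns "it's slow" into a concrete 10-, 20-, or 72-hour propagation estimate.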

Of course, just thinking about this, it would be a very good thing to have at our southern border right now, as we have thousands of people coming through a day from 150 different countries. We ought to have them put their finger on the platen, or do the iris scan, so that we know whether they are on the terrorist watchlist or not. I think they do that, and I wouldn't be surprised if that was how they were catching a lot of these folks.

:

now, when you replicate from server

to server, it says, first of all,

366

:

what do I need to send to the other

side and how long does it take

367

:

to get there across the network?

368

:

we're talking about seconds

here of going back and forth.

369

:

So it would say I need to

have all the new updates.

370

:

And that would take a certain amount

of time, and they were a certain amount

371

:

of Bytes and that's what we were doing.

372

:

We were analyzing how all of this

stuff worked and how long each

373

:

one of these processes would take.

374

:

I broke it down.

375

:

To, each individual software function,

fetching the keys, send the keys.

376

:

Now the keys are the differences

in the entries in the database.

377

:

And then it would say, I

need this many rows of data.

378

:

And then it would send that data repeat

and it would continue on until the

379

:

whole database was replicated to another

server in another area around the world.
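The fetch-the-keys, request-rows, send-and-repeat loop described here can be sketched roughly as follows. All names and data shapes are illustrative; the real system's interfaces are not public:

```python
# Sketch of a key-exchange replication pass like the one described above.
# Names and structures are illustrative, not the actual system's API.

def replicate(source_db: dict, target_db: dict, batch_size: int = 100) -> int:
    """One replication pass: find the keys the target is missing,
    then ship the corresponding rows over in batches."""
    missing = [k for k in source_db if k not in target_db]   # "fetch the keys"
    sent = 0
    for i in range(0, len(missing), batch_size):             # "I need this many rows"
        for key in missing[i:i + batch_size]:                # "send that data, repeat"
            target_db[key] = source_db[key]
            sent += 1
    return sent

src = {f"enroll-{n}": {"prints": "..."} for n in range(250)}
dst = {f"enroll-{n}": {"prints": "..."} for n in range(100)}
print(replicate(src, dst))  # -> 150 rows shipped; dst now matches src
```

The important property is that each pass moves only the delta, so the cost is proportional to new enrollments rather than the full 1.2-million-record database.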

Then that server would be updated, and that server would update another server. It's a very complex system. We had to time each one of these steps to determine why it would not continue to replicate once it got to a certain size.

We had to figure all of these things out so that we could then go back, ultimately to Fort Huachuca, the Army's communications center in Arizona, and help them rewrite the code. And that's what I did. I went to Fort Huachuca, took all of this information, and helped them rewrite the code, and then we tested it. We had a huge lab there where we sent simulation traffic, simulating the satellite links and all the various links, and simulating the replication across the AOR, only in a lab, so that we could see whether we were improving things and where we were going. It was very interesting, and a very capable team, with some incredible programmers who would improve each one of these functions.

So here are some views over time. This is time, and each one of these is a packet. The packets would come through, the server would process the data and send packets to the next server, but there were some problems that would happen. There's what's called packet loss, and there's packet duplication; in other words, they were sending the same packet multiple times. It was not productive. And you can see that it would basically stop right here, and it would slow down the ultimate completion of these replications, which many times would take well over 24 hours to replicate.

So that's what we were doing there, and that's what we were analyzing. And I have some more details here I'll show you. I know that you're not necessarily a technologist, but these pictures help you understand what your technologists should be looking at. We missed two packets here; dozens of retransmissions; you can see the same packets sent twice. I call that data duplication. In other words, you're using the bandwidth twice or three times or even more. It's not just retransmitting something that was lost; it's actually sending the same data multiple times. And of course, that can contribute to congesting all the links and to increasing the time to finish the synchronization.
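A minimal way to quantify that kind of duplication from a capture, assuming each packet has been reduced to a (sequence number, payload length) pair; this is a simplified sketch, not the actual analysis tooling:

```python
# Minimal duplicate-segment detector for a packet trace.  Each packet is
# reduced to its TCP sequence number and payload length; seeing the same
# (seq, length) pair again means the same bytes crossed the wire more than
# once, whether by retransmission or outright duplication.

from collections import Counter

def duplicated_bytes(packets: list[tuple[int, int]]) -> int:
    """Count payload bytes that were sent more than once."""
    seen = Counter(packets)
    return sum(length * (count - 1)
               for (seq, length), count in seen.items() if count > 1)

trace = [(1000, 500), (1500, 500), (1000, 500), (2000, 500), (1000, 500)]
print(duplicated_bytes(trace))  # -> 1000 (the segment at seq 1000 went 3 times)
```

On a congested satellite link, that wasted-byte count is bandwidth the replication job paid for twice.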

Now, as with any problem, you first quantify: what's my problem? What do I have? And I use the difference between a square and a cube, and I'm going to define this here for you. A disastrous problem is something I have diagnosed my entire 40-year career. I come in and I take a look at what the problem is, what the environment is, what the team is saying, and what the reported symptoms are, and they're never quite the same. There's a lot of different conjecture, and you cannot solve today's problem with today's information; otherwise you would have solved it already, right? There's something that is missing, and many times it's a different analysis, a different viewpoint, a different perspective. That's why I use the difference between a square, which just gives you one two-dimensional piece of information, and a cube, which gives you three-dimensional information. That third dimension is a new input, something new. There was something missing in the disastrous problem discussion or the findings.

441

:

There was something that was

not understood about it, and in

442

:

order to solve the problem, we

had to have some new information.

443

:

Once we are equipped with some new

information, yesterday's problem can

444

:

be solved with new information, and

that's what we call a paradigm shift.

445

:

A paradigm shift is yesterday.

446

:

I couldn't solve it.

447

:

I knew what the problem was,

but I didn't have the key

448

:

understanding of what was going on.

449

:

And then today I have some new findings,

new visibility, some new knowledge.

450

:

I have a new expert.

451

:

I have a new piece of forensics.

452

:

I have new diagrams.

453

:

Illustrator, help me understand.

454

:

I have a new metric or some

root cause analysis that helps

455

:

me understand and get a payoff.

456

:

In other words, the new data

allows me to solve the problem.

457

:

And it's actually typically quite simple.

458

:

It's a little disappointing because

when I go in to solve a problem, it's

459

:

a square: nobody knows the answer.

460

:

And then I come in and I solve it,

and I diagnose it with new information

461

:

or analysis, and I figure it out.

462

:

And we have a paradigm shift.

463

:

And sometimes the paradigm

shift is rather simple.

464

:

It's some new piece of information

that told us what the problem was.

465

:

And then we took a hypothesis,

tested it, made the new improvement,

466

:

tested it, and validated that it

actually was the solution.

467

:

That's what I've been doing all my life.

468

:

So that's why I have these kinds of views.

469

:

Now, the response time from the servers,

remember how I showed you the diagram of

470

:

where we put instrumentation all around?

471

:

We put that instrumentation in

order to do server response time.

472

:

When a packet comes to a server

and then there's a response

473

:

from the server application, we

would classify and figure out.

474

:

How much of the time it was just sitting

there listening and calculating what

475

:

the answer was going to be, how long

it took for retransmissions, how long

476

:

it took for the data to traverse across

the speed of light and back and forth

477

:

with the various protocols and systems.

478

:

And we would calculate this out

so that we could visualize and

479

:

see when the response time went

high, maybe it's due to congestion.

480

:

There's a lot of requests coming in, and

they're queued and there is a queue depth

481

:

and there's a response time delay.

482

:

You basically come up with these response

times, what I call a rule of thumb or a

483

:

general rule: hey, 449 milliseconds.
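As an illustration of that breakdown, here is a minimal sketch; the sample values and field names are hypothetical, chosen only so the buckets add up to the 449-millisecond figure mentioned:

```python
# Classify a measured request/response exchange into the buckets
# described: server "think" time, retransmission overhead, wire transit.
# Field names and sample values are illustrative, not from any real tool.
def decompose_response_time(samples):
    """Sum per-packet timings into named buckets plus a grand total."""
    buckets = {"server_ms": 0.0, "retransmit_ms": 0.0, "transit_ms": 0.0}
    for s in samples:
        buckets[s["kind"]] += s["ms"]
    buckets["total_ms"] = sum(v for k, v in buckets.items() if k != "total_ms")
    return buckets

measured = [
    {"kind": "transit_ms", "ms": 50.0},     # request crosses the WAN
    {"kind": "server_ms", "ms": 300.0},     # server computes the answer
    {"kind": "retransmit_ms", "ms": 49.0},  # one lost segment recovered
    {"kind": "transit_ms", "ms": 50.0},     # response returns
]
print(decompose_response_time(measured))  # total_ms adds up to 449.0
```

Breaking the total apart this way is what lets you say whether the delay was the server, the network, or loss recovery.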

484

:

Now when packets are going across a

network, it's about one millisecond

485

:

per hundred miles of distance latency.

486

:

That's physics.

487

:

That's not just a good idea.

488

:

It's the law.

489

:

It's the speed of light.

490

:

when you calculate all these things

out, if you're going to send a

491

:

packet, for instance, to London from

the US, it's probably 5,000 miles.

492

:

You go 5,000 miles over, that's 50

milliseconds, and 5,000 miles back,

493

:

and that's another 50 milliseconds.

494

:

You are going to have a hundred

milliseconds roundtrip delay.
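As a rough sketch of that rule of thumb (about 1 ms per 100 miles, with the 5,000-mile US-to-London distance being the episode's own round number):

```python
# Rule-of-thumb distance latency: ~1 ms per 100 miles, one way.
def one_way_latency_ms(miles, ms_per_100_miles=1.0):
    """Approximate one-way propagation delay for a given distance."""
    return miles / 100.0 * ms_per_100_miles

def round_trip_latency_ms(miles):
    """Round trip is simply twice the one-way distance delay."""
    return 2 * one_way_latency_ms(miles)

# US to London, roughly 5,000 miles each way
print(one_way_latency_ms(5000))     # 50.0  -> ~50 ms one way
print(round_trip_latency_ms(5000))  # 100.0 -> ~100 ms round trip
```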

495

:

It depends upon how far you're going

because of distance latency. And

496

:

then there's satellite delay, which I'm

going to talk about because satellite

497

:

delay can be incredibly significant.

498

:

And now we have these new Starlink

systems that Elon Musk has launched.

499

:

He's got thousands of low

earth orbit satellites.

500

:

Most satellite communications have to

go up to a satellite that's over the

501

:

equator, orbiting around it.

502

:

And of course the equator is the

longest distance around the Earth.

503

:

And these satellites are over the

equator: it's 22,500 miles up to

504

:

the satellite and 22,500 miles back down.

505

:

So it would take about 125

milliseconds each way.

506

:

250 milliseconds just to

traverse a satellite link.
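Those numbers fall straight out of the speed of light; here is a back-of-the-envelope check using the episode's round figures (22,500 miles up to a geostationary satellite, about 100 miles up to a low Earth orbit one):

```python
# Back-of-the-envelope satellite propagation delay at the speed of
# light. Altitudes are the episode's round numbers, not precise orbits.
SPEED_OF_LIGHT_MPS = 186_000  # miles per second, in vacuum

def leg_delay_ms(miles):
    """Propagation delay for one leg of the trip, in milliseconds."""
    return miles / SPEED_OF_LIGHT_MPS * 1000.0

geo_up = leg_delay_ms(22_500)     # ground up to a geostationary satellite
geo_hop = 2 * geo_up              # up and back down: one link traverse
leo_hop = 2 * leg_delay_ms(100)   # the low-Earth-orbit figure quoted here

print(round(geo_hop))   # roughly 240 ms, in line with the ~250 ms above
print(round(leo_hop, 1))  # a millisecond or two: why LEO feels so fast
```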

507

:

And that's a lot of latency and that's

why you can't really use a telephone

508

:

across a satellite link very effectively

because there's a lot of latency.

509

:

It's kind of like you say something

and then you have to say "over," and

510

:

then you have to wait for the response.

511

:

And it takes a lot of time

because of the latency.

512

:

When you have low Earth orbit

satellites, those satellites are about

513

:

a few hundred miles straight up above

you, and they're very fast, right?

514

:

Because you just, you don't have to

go 22,500 miles down to the equator.

515

:

And that's why Elon Musk

Starlink is brilliant.

516

:

It's just

very simple physics.

517

:

And here with the Ukraine War, we deployed

those, and very powerfully were able to

518

:

continue to get Internet service across

Ukraine, even though the Russians

519

:

destroyed all the infrastructure.

520

:

Of the terrestrial links on the ground.

521

:

Okay, that's a little bit about that.

522

:

Now, when you're analyzing these packets,

I developed some of these charts and

523

:

I basically just helped people who

weren't as technical understand

524

:

what was happening and the delays

and various things, and what happens

525

:

every time there's a packet loss.

526

:

These things would happen: the recovery

of the TCP/IP protocol oftentimes

527

:

was not very efficient, and it would

actually take a lot of time to recover,

528

:

or it would retransmit the same data

multiple times, wasting bandwidth.
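A simple way to quantify that waste from a packet trace is to count bytes whose sequence ranges appear on the wire more than once; this sketch uses a made-up trace, not data from the actual analysis:

```python
# Count bytes carried more than once in a trace. Each entry is a
# (sequence_start, length) pair; the trace below is invented.
def duplicate_bytes(segments):
    """Return the number of bytes retransmitted at least once."""
    seen = set()
    dup = 0
    for start, length in segments:
        for b in range(start, start + length):
            if b in seen:
                dup += 1   # this byte already crossed the wire
            else:
                seen.add(b)
    return dup

trace = [(0, 1460), (1460, 1460), (0, 1460), (1460, 1460), (2920, 1460)]
print(duplicate_bytes(trace))  # 2920 bytes sent twice: wasted bandwidth
```

On a low-speed satellite link, every duplicated byte is capacity the war fighters never got to use.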

529

:

We ended up having to visualize

this for generals and other folks to

530

:

understand, and then we had variations

of variables that we could change

531

:

on the servers that would affect

and improve that for certain things.

532

:

And we had to visualize this.

533

:

I took this and built these charts

and graphs so other people who were

534

:

not technical could understand

what I was talking about, because

535

:

I don't need those charts.

536

:

Now I like them because I can very rapidly

see and explain, but I don't need those.

537

:

When I look at a problem, I know

what the problem is and I can

538

:

say, Hey, you need to do this.

539

:

But other people, they have to spend

money, they have to change products,

540

:

they have to change protocol stacks,

they have to change applications.

541

:

You have to help them understand.

542

:

And the best way I know to do that is

to help people visualize a problem.

543

:

That is a visualization of

data duplication across a

544

:

TCP/IP Internet type circuit.

545

:

The other problem that we have

is you have a lot of interfaces.

546

:

You have a router and then you have

a satellite, and then you have a

547

:

satellite, and then you have another

router, and then you have another

548

:

satellite, then you have another router,

then you have a terrestrial link.

549

:

And many of these circuits,

550

:

because it's a wartime network, would

have problems and they would have errors.

551

:

We would count the errors.

552

:

We would quantify the errors, because

here we are in a noisy environment,

553

:

bad cables or cables that were

longer than they were supposed to be.

554

:

By necessity.

555

:

They had to build cables that were a

little bit longer and we ended up having

556

:

interfaces having errors and drops.

557

:

And that's why we had to show facts.

558

:

These are scientific facts

about why it was slow.

559

:

So that if you want to mitigate

that, you have to fix that by

560

:

spending some money or changing

some parameters or doing some study.

561

:

Here's another example

of the very same thing.

562

:

How often errors occur,

and where they're occurring.

563

:

And then that way you can go to those

devices, you can go to those systems

564

:

and fix them more efficiently if you

know where problems are occurring.

565

:

And this was packet

loss and poor recovery.

566

:

TCP is supposed to handle loss on the Internet.

567

:

Every time you use your

browser, you click on something.

568

:

Sometimes you keep

clicking because it's slow.

569

:

Sometimes that's because there's some

error on the Internet that's causing

570

:

your packets to be discarded.

571

:

There's either congestion, too much

traffic offered to the same router,

572

:

or there's some physical error

somewhere, or some link in this huge

573

:

worldwide network that's broken.

574

:

So I take these packets, one by one, and I

575

:

show why they're delayed.

576

:

What's happening?

577

:

What is the protocol doing?

578

:

Again, to visualize for people who are

not technical, for the chief network

579

:

engineer or the chief financial officer,

why do we have to buy these new circuits?

580

:

This type of information helps people

understand why they have to do this.

581

:

You don't, as a technologist,

just say, trust me, I'm brilliant.

582

:

No, it doesn't work that way.

583

:

You have to help people understand

and begin to trust you, and the way

584

:

that you do that is by showing them

585

:

scientific facts and showing them the

delay, what's happening, the knowledge

586

:

of the theory, the knowledge of

operating the test equipment and systems,

587

:

and then describing what's going on so

that people who are making decisions

588

:

about where to spend the money, where

to spend the time, where to spend the

589

:

energy, have the confidence in your

analysis to be able to say, yes, verily,

590

:

this is what we need to accomplish.

591

:

This is very true in the security world.

592

:

You can't just guess at these things.

593

:

You have to show people facts, figures,

scientific information so that they

594

:

can trust what it is that you're doing.

595

:

Now, over there, in these environments,

we had a lot of low speed lines, and

596

:

those low speed lines slowed things down.

597

:

You can't put 10 pounds

in a five-pound bag.

598

:

The military bought these Riverbed

WAN optimization gizwatchies, and

599

:

they basically put one on one end of

the circuit in Afghanistan and the

600

:

other on the other end of the circuit

in the US, and then they compress.

601

:

They do a variety of latency and

throughput optimizations so that you

602

:

hopefully can put more through. You

know how on your disk you

603

:

can encrypt or you can compress

your disk? It uses bit compression.

604

:

That's the sort of thing that

these devices would do only on a

605

:

link, so that you could get twice

as much, three times as much

606

:

bandwidth out of the same particular link.

607

:

The problem was that

several things happened.

608

:

The packets would go from the war area

to the US across one path, and they

609

:

would come back across a different path

and they would not hit the same link.

610

:

So in order to use these WAN optimization

methods, you have to send your

611

:

packets across to this other one.

612

:

They do all this magic compression,

et cetera, and then they

613

:

decompress it on the other side.

614

:

Same thing when they're

going back the opposite way.

615

:

The trouble is that if you send

your packets over here and they go

616

:

and they don't hit the other device.

617

:

They can't decrypt, they can't

decompress, they can't do the

618

:

opposite job of the optimization

and then send the packets out.

619

:

They were sending a lot of packets across

the network because of asymmetrical routing.

620

:

In other words, it used a different

path in one direction than it

621

:

did in the other direction.

622

:

They were not symmetrical.
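The asymmetry problem can be modeled in a few lines; the device and path names here are purely illustrative, not the actual topology:

```python
# Toy model: a WAN-optimizer pair only works when both directions of a
# flow traverse the same two boxes. All names below are hypothetical.
def optimized(path_out, path_back, peer_a="opt-A", peer_b="opt-B"):
    """True only if BOTH directions pass through the optimizer pair."""
    pair = {peer_a, peer_b}
    return pair <= set(path_out) and pair <= set(path_back)

symmetric = ["camp", "opt-A", "sat-1", "opt-B", "conus"]
asymmetric_back = ["conus", "sat-2", "camp"]   # misses both optimizers

print(optimized(symmetric, symmetric))        # True: compression works
print(optimized(symmetric, asymmetric_back))  # False: nothing to decompress it
```

When the return path skips the peer, the compressed traffic has no box to undo the optimization, which is exactly the failure described.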

623

:

Consequently, there were a lot of

problems because of this, and I

624

:

was showing them how I could

prove that in the analysis here.

625

:

Again, same sort of stuff: visualizing

the TCP protocol, visualizing

626

:

this, the selective ACK process,

visualizing the window size and

627

:

that sort of thing on a TCP circuit.

628

:

Here are some more enrollments.

629

:

This was how many enrollments

they had in a day, right?

630

:

Enrollment volume, how many bytes

by country, bytes by protocol.

631

:

How did I do all this?

632

:

I took all of these statistics from

the instrumentation that we put in

633

:

so that we could understand what

was happening out there on the network.

634

:

So that was the enrollment volume, a

thousand people a day, or 4,000 bytes,

635

:

and how large an enrollment was. Okay.

636

:

Now this is basically a very simple thing.

637

:

Here you have your war fighters in

Fallujah, Mosul, Kabul, et cetera.

638

:

And they were using various types

of links, CENTCOM links, dis links,

639

:

email, various types of communications

methodologies to get these over to

640

:

the biometric fusion center, the FBI,

where they did additional analysis

641

:

. And then they would come back and

forth using various communications

642

:

methodologies, and they used a variety

of security networks. You have

643

:

the red and the black, and this is

very complex, but basically, you're

644

:

trying to get your enrollment from

war fighters in the field back to

645

:

the FBI and biometric fusion center

and other intelligence resources.

646

:

And this was over in the war area.

647

:

And this was in the

continental United States.

648

:

That's basically showing how the

communications flowed, and then I'm going

649

:

to break this down and see that

there was severe packet loss in

650

:

this part of the network, which is

from the war network over to the US.

651

:

Now, inside the US we didn't

have that many problems

652

:

because we have good networks.

653

:

It's not a wartime network.

654

:

There's not sand in all the servers.

655

:

There's not all sorts of problems.

656

:

And then this is end-to-end communications

and what kind of problems we had with

657

:

satellites in Bagram and Baghdad

and different locations around the world.

658

:

And then this basically helped

them understand where their

659

:

problems were most material and

helped them make better decisions.

660

:

All right, now we've come to the

part where we are going to listen to

661

:

Jon DiMaggio for about a minute. He is

going to introduce himself, tell you a

662

:

little bit about himself, and then you

are going to, down in the show notes,

663

:

you'll have links to Jon's 30 minute

session where he is going to talk about

664

:

the art of Cyber warfare and nation state

financial attacks and other types of.

665

:

Cyber problems that Jon has analyzed.

666

:

He is going to help you understand

a lot of that firsthand.

667

:

Let's listen to Jon.

668

:

I spent about the first 14 years

of my career working for one of the

669

:

government intelligence agencies,

and I was really fortunate.

670

:

I came in at a time where nation

states were really starting to put

671

:

together the programs that eventually

began to attack the United States

672

:

using Cyber espionage campaigns.

673

:

So I got to spend the better

half of my career really digging

674

:

into nation state espionage actors,

and learning a lot along the way.

675

:

Since then I've transitioned

into the private sector.

676

:

I've done a number of

investigations, many of them

677

:

things that have been in newspaper

headlines, things that you'd be

678

:

familiar with. You can

see on the right-hand side, this

679

:

is just the past year, some of the

ransomware research that I've done.

680

:

And I've also recently authored a book

called The Art of Cyber Warfare, which

681

:

we are going to talk about some of the

content today as well as some of the

682

:

research publications that went into the book.

683

:

So everything today, don't worry, it's

not a sales pitch for my book, but the

684

:

book revolves around threat intelligence.

685

:

So that's really what I wanted

to talk to you about today.

686

:

And with that specifically some of

the objectives that I wanted to convey

687

:

are why you need to treat advanced

threats differently than you do

688

:

traditional day-to-day Cyber threats.

689

:

Okay.

690

:

Thanks for listening to Jon for a

minute. Now I'm going to talk to

691

:

you about some other problems with routed

692

:

networks, like the TCP/IP

network, like your Internet.

693

:

Sometimes things are changing

out there in the Internet proper.

694

:

We typically, in the US have very

good networks, very low errors.

695

:

Very few packet problems:

packet errors, packet loss.

696

:

But at certain times there's more or

less churn. One of the attributes is, in order

697

:

to update how big the Internet is or

how small it is, we add networks in and

698

:

there are constantly adds, deletes,

and changes occurring.

699

:

And we have millions of

networks that are all connected.

700

:

Routers are updating which IP

network numbers are going where.

701

:

I put a system on here called

the BGP activity summary,

702

:

showing the number of changes.

703

:

So you can see here that we

would have what I call churn.

704

:

You'd have sometimes 10,000

changes in a very small period.

705

:

Now inside the.

706

:

These are usually very small down here.

707

:

But once in a while we'd hit

a window where there was an

708

:

incredible amount of churn.

709

:

And here you can see when networks

were coming on, new networks were

710

:

coming on, and old networks were leaving.

711

:

So you have withdrawals and you have

updates, and that all contributes

712

:

toward the changes in all the routers.
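A churn summary like the one described can be sketched by bucketing update and withdrawal events into time windows; the event list here is invented for illustration:

```python
# Count BGP-style "churn" per time window: updates vs. withdrawals.
# Event tuples (timestamp_seconds, kind) below are made up.
from collections import Counter

def churn_by_window(events, window_s=60):
    """Tally events into (window_index, kind) buckets."""
    counts = Counter()
    for ts, kind in events:
        counts[(ts // window_s, kind)] += 1
    return counts

events = [(5, "update"), (12, "withdraw"), (61, "update"),
          (65, "update"), (70, "withdraw")]
print(churn_by_window(events))
# window 0: 1 update, 1 withdraw; window 1: 2 updates, 1 withdraw
```

A quiet network shows small, steady counts; a spike of thousands of changes in one window is the kind of churn described above.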

713

:

And that's why packets would go over in

one direction and come back in another

714

:

direction, defeating the ability of these

WAN optimization capabilities, because the

715

:

packets wouldn't come back on the same path.

716

:

And that was a big problem and they had

a lot of those things because they were

717

:

trying to use lower speed satellite links.

718

:

Now when they had true

719

:

drone operations or weapon systems,

that was a different network.

720

:

This is mainly communications for the

war fighters, thousands of war fighters.

721

:

It was not as real-time

intensive as flying a drone

722

:

or something of that nature.

723

:

And they'd use a different

kind of network for that.

724

:

But this is just showing you those

changes, and this just shows you that

725

:

there's a lot of routing metrics changing,

and I have some examples of this.

726

:

A packet comes into a router, pops

through, goes to another router, and

727

:

then it pops up 22,500 miles to a

satellite that's on the equator, and

728

:

then it comes back down over here.

729

:

And that's going to take a lot of time.

730

:

Now, if that were

731

:

a 500-mile hop from camp A to

camp B, it's only 500 miles.

732

:

It should be five

milliseconds, and yet it's 250.

733

:

What was the problem here?

734

:

In Iraq, Afghanistan, and other such

places, the war destroyed all the fiber

735

:

optics and a lot of the terrestrial

systems so that you had to depend upon

736

:

satellite communications in order to

communicate even shorter distances.

737

:

And that increased the latency in a

lot of our applications. No matter

738

:

what it is, waiting half a second

for communications adds up

739

:

if you have to get something 10 times.

740

:

And at

741

:

a half a second every time.

742

:

That's five seconds.

743

:

It ends up being a big problem.

744

:

This just highlights that.

745

:

That's just one satellite link.

746

:

What I want to show you is that because

of some routing inefficiencies, instead

747

:

of going just between camp A and camp

B, it would go from camp A to

748

:

camp C to camp B, and it would cascade

those problems and it would double the

749

:

latency or triple the latency and go

90,000 miles with a packet instead of

750

:

the 500 miles that you would normally

experience if you had a terrestrial

751

:

network under normal operating conditions.
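The arithmetic behind that cascade is simple; this sketch uses the episode's round numbers (a 22,500-mile geostationary altitude), so each unnecessary extra hop adds another full up-and-down trip:

```python
# Delay for N geostationary satellite hops, each being an up-and-down
# trip at the speed of light. Altitude is the episode's round number.
SPEED_OF_LIGHT_MPS = 186_000  # miles per second

def hops_delay_ms(n_sat_hops, altitude_miles=22_500):
    """Each hop travels roughly 2 * altitude at the speed of light."""
    return n_sat_hops * 2 * altitude_miles / SPEED_OF_LIGHT_MPS * 1000.0

print(round(hops_delay_ms(1)))  # ~242 ms: camp A -> satellite -> camp B
print(round(hops_delay_ms(2)))  # ~484 ms: the cascade via a third camp
```

Two hops also means roughly 90,000 miles of travel, versus the 500 miles a terrestrial link would have covered.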

752

:

Okay, now remember: if during the

Iraq and Afghan

753

:

Wars, we would've had the

low Earth orbit satellites that Elon

754

:

Musk launched, they don't have

to go up 22,500 miles to the equator.

755

:

They go straight up, hit a satellite,

and then there's an array

756

:

instead of one satellite that's

sitting there on the equator.

757

:

And you go up and back and up

and back to the same satellite.

758

:

These are an array of satellites all around;

they call them low Earth orbit satellites,

759

:

and you go up and it's like maybe one or

two milliseconds, and then they go boom,

760

:

around the world, and then they come down.

761

:

You have satellite-to-satellite

communications, and it's almost as if

762

:

you had terrestrial links,

and you don't have to go under the

763

:

ocean, you can just use the air up

inside the atmosphere, or

764

:

just beyond the ionosphere out there.

765

:

We have better technology now, but

these were the existing technologies.

766

:

So we used all of them.

767

:

We deployed all of those satellite

links, but it took a lot of time

768

:

and we ended up with routing errors.

769

:

We would compound our problem.

770

:

And so now you can understand why we

had some of those particular problems.

771

:

As we went through, we would find these

problems and we would improve. We'd

772

:

lower the latency, lower the number

of sessions, we would improve it.

773

:

And we would basically look at

quantifying the improvement.

774

:

So we had a big improvement, then another

improvement, and that's what we're

775

:

basically looking at: incrementally

improving things. As you have a problem,

776

:

you have a diagnosis, you have

777

:

a fix.

778

:

And then you start putting those fixes in

and sooner or later you get down to where

779

:

you've got a very low, see that green?

780

:

That's the best response time you could

get, or the best number of sessions.

781

:

And here you've got the best situation

that we possibly could have, and

782

:

it's improved by orders of magnitude.

783

:

Okay, what lessons learned did we have?

784

:

We had quite a few lessons

learned: if we went back and

785

:

fought another war, or our military

wanted to upgrade some systems.

786

:

We already know what we could do to

improve operations pretty significantly.

787

:

And it's only been recently, actually; we

788

:

didn't end the war until last year.

789

:

And consequently, they were still

using some of these systems.

790

:

They hadn't

791

:

fully replaced them yet.

792

:

But if we were going to fight a

new war, we would probably do it

793

:

and architect it a lot differently.

794

:

And you would take the lessons learned

from this war and you would apply

795

:

them to military communications in the

future to improve all of these things.

796

:

So we wouldn't have

asymmetrical network paths.

797

:

We would have the same path

coming and going so that we could

798

:

use WAN optimization equipment.

799

:

We would watch for the continual

churn, and we would not have flapping

800

:

routes and flapping packets and

packets would go over and then stop

801

:

because they had nowhere else to go.

802

:

And they would spin inside the network

until their hop count expired.

803

:

And there was no convergence.

804

:

There was

805

:

just no quiescent happiness.

806

:

Things were always changing dramatically.

807

:

And then of course, satellite delays.

808

:

We would've probably designed our

satellites to be more like the low Earth orbit

809

:

satellites that Elon Musk is using with

Starlink, instead of these satellites

810

:

that have to go 22,500 miles in each

direction. We would have better battlefield

811

:

communications and we would have better

instrumentation because we learned

812

:

a lot by instrumenting these things

so that we could find the errors.

813

:

Now we know how we need to instrument

so that we can find the errors and

814

:

then solve them rather quickly.

815

:

Rather than just suffer and suffer

for months and months or years with

816

:

the same errors that were occurring.

817

:

We can find the errors, solve them,

mitigate them, and move on and keep

818

:

an error-free environment.

819

:

Okay.

820

:

And one of the problems was that they

threw all this stuff up and it was

821

:

thrown up by different people and

different organizations, the Army,

822

:

Air Force, Navy, Marines, different

battalions, different things.

823

:

And it was very difficult to have

the network architecture documentation.

824

:

And that's one of the things that

we worked hard to try and help

825

:

USCENTCOM take care of was the

network architecture documentation and

826

:

automating some parts of that so that

it wasn't rare and always different.

827

:

It became more symmetrical,

more reasonable.

828

:

And then we made sure we had diagnostic

tools and we would ensure that we had

829

:

good technical training for war fighters.

830

:

There's a lot of lessons

learned that can save

831

:

billions and billions of dollars in

the next war, or the next war effort or

832

:

the next incursions around the world.

833

:

Pretty cool stuff.

834

:

All right, this is a picture

of our fearless leader.

835

:

There he is, second

from the right.

836

:

That's Colonel James Kirby.

837

:

He took us out there and he was the one

who brought us through the first time.

838

:

And Ben Kohler there on the right.

839

:

Steve Mercklein in the center, and myself.

840

:

And then we had a major, I can't even

pronounce her name, but she was a

841

:

Marine and she was pretty tough.

842

:

Okay, that's a little bit about

what happened. Now, the next

843

:

time: I went a total of six times,

one time with Colonel Kirby, and

844

:

then came back and got paired up with

Colonel David Wills, who I went on the

845

:

next five trips across the next several

years over to Iraq and Afghanistan

846

:

and other points around the world.

847

:

We talked about him in one of the last

sessions that I did, and I'll include the

848

:

link so you can hear his keynote address.

849

:

Hope Is Not a Plan. I'm not going to

put his little intro here, because

850

:

I've already done that in a previous

broadcast, but you can find his link

851

:

to his talk in this particular episode.

852

:

So now I've gone through the

stock market denial of service.

853

:

I've gone through the Pentagon 9/11.

854

:

This is now the US Military Biometric

Intelligence Application in Iraq,

855

:

Afghanistan, and around the world.

856

:

I have a whole bunch of others with the

federal agencies, everybody from the IRS

857

:

to the Department of Justice, to the

Department of State, energy companies,

858

:

financial companies, telecommunications

carriers, other Fortune 500 data

859

:

disasters, problems that Cisco had with

their products or other companies had

860

:

with their products that I diagnosed.

861

:

Anyway, I'll be going through

those particular things as we go.

862

:

Remember, if you're one of the

responders or some of the responders

863

:

or your team responded and you did

some really cool stuff, I want to hear

864

:

about it. I want to interview you on the

show and then pull out the lessons

865

:

learned so that we can all learn.

866

:

We are going to hear from a 9/11 New

York responder, somebody who was working

867

:

with all the trading floors and trading

companies, a little bit in the future,

868

:

So be on the lookout for that.

869

:

All right.

870

:

Thank you for joining us today.

871

:

I really appreciate it.

872

:

You can go to Disaster Stream and look at

a little bit more, watch other episodes.

873

:

Thank you so much for being with us today.

874

:

We really appreciate

you hanging out with us.


About the Podcast

Disaster.Stream
Disaster Stream is a podcast series that delves into the world of disaster recovery, cybersecurity incidents, and critical problem resolution in major organizations. Hosted by Bill Alderson, the podcast features expert insights, case studies, and interviews with leaders and pioneers in the technology and cybersecurity fields. Each episode shares lessons learned and best practices for crisis management, aiming to help organizations prepare for and respond to disasters effectively. Available in both audio and video formats, Disaster Stream is your go-to resource for understanding and navigating the complexities of disaster recovery and cybersecurity

About your host


Bill Alderson

Bill Alderson is a historian at heart, a storyteller by nature, and a technologist by trade. For more than four decades, he has solved some of the toughest challenges in cybersecurity and networks — from helping restore communications at the Pentagon on 9/11 to training thousands of professionals worldwide.

But beyond technology, Bill is the proud grandson of Mabel and Ed Plaskett, California pioneers who passed down stories of resilience, family, and the rugged Big Sur coast. As the family historian, he has gathered photographs, journals, and documents to preserve the heritage of the Plaskett family for future generations.

Through this podcast, Bill shares those stories — weaving together history, heritage, and personal reflections — so that listeners, whether family or friends, can connect with the enduring spirit of the Monterey County coast.