Episode 3

Published on:

14th Nov 2024

Catherine Gregory | Ada Lovelace Institute

How do you craft consistent communications in a field as rapidly evolving as AI and data? Can we shift the narrative from perceived future existential risk to actual present day harms? What's the best way of making your messages resonate with policy makers?

Join Peter Barker in conversation with Catherine Gregory - Head of Communications and Content at Ada Lovelace Institute, an independent research institute with a mission to ensure that data and AI work for people and society.

We hope you enjoy, rate, review and share this episode - and we encourage you to join the conversation with any comments and questions!

---

📕 This week's recommendation is a documentary called 'Daughters' available on Netflix.

---

Visit us at www.orinococomms.com | Contact me at peter@orinococomms.com | Join our community and subscribe to our newsletter at orinococomms.substack.com | Tell us what you think bit.ly/orinoco-speakpipe_pod

Transcript
Catherine Gregory: In a complex and rapidly moving area like AI and data, you have to reevaluate your comms and messaging and definitions periodically. It's not going to be something that's evergreen. It's not going to be something that's static.

Peter Barker: Welcome to Research Unraveled, the podcast where we take a deep dive into the somewhat niche but hugely impactful world of research communications. In particular, we're exploring the idea of complexity, which lies at the heart of this field, and hearing from communications experts about how they navigate and overcome, or unravel, that complexity. Research Unraveled is brought to you by me, Peter Barker, owner of Orinoco Communications, an agency where we specialize in working with research-based organizations.

This month, we're talking about a topic that has had a fair bit of coverage over the past couple of years: artificial intelligence. I don't know about you, but even though this emerging technology has dominated the headlines and social media feeds during that period, I still feel like there are huge gaps in my understanding of how AI is being used in ways that affect me and other members of society on a daily basis. We hear about the apocalyptic visions of what might happen if we lose control of AI, and about the potential for whole professions to be replaced by artificial intelligence. And personally, I'm becoming quite comfortable with using platforms like ChatGPT on a daily basis to help me with tasks both at home and in my professional life. But both those dramatic visions of the future and ChatGPT, which suck up quite a lot of the conversation around AI, are really just the tip of the iceberg when it comes to how artificial intelligence can impact, and is impacting, our lives.

Thankfully, there's a research organization that's dedicated to investigating that very question. The Ada Lovelace Institute is an independent, UK-based research institute that was set up in 2018. In my conversation today, we will hear about its mission to ensure that the opportunities and benefits of data and AI are shared by all, and the ways in which the Institute communicates that mission, from this month's guest, Ada's Head of Communications and Content, Catherine Gregory.

In what I hope you'll agree is a lively, fun and, most of all, fascinating conversation with Catherine, we cover how to communicate with policymakers, how to reframe narratives, and some techniques to increase the likelihood that your message will stick. We talk, in particular, about how the Ada Lovelace Institute attempted to influence the conversations being had about AI at the AI Safety Summit held at Bletchley Park in 2023. We also talk about the thorny issue of how to pin down a definition of a constantly evolving technology like AI, and about the benefits of bringing researchers and communicators together early on in a project to generate the highest quality communications content. Of course, we'll finish by hearing Catherine's favorite bit of comms advice that she's ever received and her book recommendation, which actually this time is a documentary recommendation. Here it is.

Peter: Catherine, thank you so much for being here today. I'm really looking forward to our conversation. And I just wanted to start by hearing a bit from you about the Ada Lovelace Institute: when it was set up, why, and what its mission is.

Catherine: Brilliant. Well, first of all, thank you so much for having me, Peter. So Ada is an independent research institute with the mission to ensure that data and AI work for people and society. We occupy a really interesting place in the AI and data ecosystem: we work in this space, but we don't actually focus on the types of technologies we want to build; we focus on the types of societies we want to build and how AI and data fit into that. We are part of the Nuffield Foundation and we are a relatively young organization; we were set up about six years ago.

Peter: Wonderful, fantastic. And so what's your sort of connection, your story with Ada? How did you get involved in the organization?

Catherine: So first of all, I definitely don't have a technical background, and I hope that's helpful to some of your listeners who may want to get into this space but might not be techie people, so to speak. One of my passions, and one of the things that really attracted me to this job, is that I love to translate complexity into easy-to-digest, compelling messages that resonate with audiences.

My first comms job ever was working in the Obama administration. I mean, if you want a place for storytelling, that was definitely the place to be. I was a speechwriter at the Treasury Department over the summer while I was getting my master's degree, and I would say that learning the discipline of speechwriting is one of the things that's really impacted my career the most. I learned how to translate topics like the debt ceiling, which is something you hear about but no one really gets what it is, into punchy messages that would really resonate and land with the different audiences that we were speaking to. And I think so much of those skills are transferable into other areas of comms as well.

Generally, I've spent my career in the public and third sectors, so purpose-driven work is really important to me. I've overseen comms in Europe for the Urban Land Institute, and then just before Ada, I was head of comms at Working Families, the national charity for working parents and carers. So yeah, and then Ada happened. As I said, I'd never worked in the tech space before. I'm a digital native, but I definitely walked into Ada with very little knowledge of the AI ecosystem and the complex dynamics around it. Luckily, I work with experts, and I would say I probably spent the first three months in the role just asking my colleagues to explain things to me like I'm five years old. I think that was my constant refrain: just boiling things down to their simplest explanations, simplest parts, so I could really get my brain around what we were talking about.

Peter: That is great, great advice, I think, isn't it? I mean, often when we're plunged into an unfamiliar environment, especially a professional environment where we're working within it, the temptation is sometimes to pretend that you know more than you do. But just asking questions and being honest about what you don't know is always the best way to learn. So that sounds like great advice. And funnily enough, asking for things to be explained like I'm a five-year-old is pretty much all I use ChatGPT for at the moment, which is quite interesting. I pretend I'm doing it for the benefit of my kids, but only because that's the level stuff needs to be pitched at for me as well.

But yeah, so that's amazing. So you're now fully immersed in this world, I'm sure still asking lots of questions, but presumably by now you have a much stronger grasp of the field than you did when you started. This podcast is all about the complexity associated with the communication of research. So in the time that you've been working there, is there one area of what you do, one aspect of your work, that has come across as particularly complex at Ada?

Catherine: Yes. I think we can really go back to basics with this and just talk about the definition of AI itself. Right? A definition of what you work on seemingly should be this very straightforward thing. But because AI is so rapidly evolving, it's hard to pin down. I think if we'd had to define AI a year and a half ago, pre-ChatGPT, the definition would have been entirely different than it is today.

And so that complexity introduces this tension, and a kind of discomfort, from a comms perspective. Because if you think about, for example, an elevator pitch, you really want to just have that, have it be evergreen, leave it alone, and that's the end of it. But with something like AI, and also data-driven technologies, it's not that simple and straightforward. So we've really had to reckon with this tension, and I think what it really comes down to is the fact that there's not one definition that is going to satisfy every context and satisfy every audience.

So for us, at least at Ada, it's about going back to our evidence base. It's about acknowledging the limitations of the definitions upfront, actually saying, as part of defining AI, that it's a rapidly evolving area and there's not a universally accepted definition. It's about looking to our evidence base, ensuring that we're approaching the definition from a very Ada perspective, which for us is looking at people and society, the interaction between these technologies and people in society; there's a word for that: sociotechnical. And then it's about reevaluating the definition periodically. In a complex and rapidly moving area like AI and data, you have to reevaluate your comms and messaging and definitions periodically. It's not going to be something that's evergreen. It's not going to be something that's static. And that can be uncomfortable for comms, but I think you just sort of have to embrace the awkwardness and discomfort and reevaluate. It's messy, right?

Peter: That's so interesting. You said, I think, that one way to deal with that complexity is to remain constantly focused on Ada's mission of considering these emerging technologies and AI in the context of their impact on society. That makes me think about your audience. The mission is making AI and data work for people in society, but interestingly, your audience from a comms point of view is not society itself so much as the policymakers who work on behalf of that society. How are you considering that audience? That's another way of sort of cutting through the complexity. Can you talk a bit about who they are? Because I think that's one of the things as well: policymakers are often considered to be a bit of a homogenous mass, as it were, but there are a lot of different elements within policymaking, different roles and different values and needs and so on. So can you talk to us a bit about that audience and how you take them into consideration?

Catherine: Yeah, absolutely. So one of our key audiences, and one of our key routes to impact when you're thinking about legislation and best practice when it comes to AI and data, is policymakers. And my personal definition of policymakers would extend beyond Parliament. It would be decision makers in local authorities; it would be decision makers in the NHS; anywhere where these technologies are being deployed and have the potential to impact the lives of everyday people.

And so, as you said, first of all, policymakers are not a monolith. This encompasses a lot of different people. But if you boil it down: I think it's easy to think that policymakers might be subject matter experts, but in reality, they really need that complexity distilled down to the question of how will this thing, how will this technology, impact the daily lives of the people I represent, of the people who use the service that I'm in charge of, et cetera? And for us, if we can always keep that question in the back of our minds, that will really help us communicate with policymakers.

I also think, just in very practical terms, and this is probably common knowledge: policymakers are incredibly time poor. So anything that is overly long, overly complex and riddled with jargon is not going to get read. And so it's really that translation aspect coming in. If you have a very dense research report, you also need a translation that will work for this audience. And there's some research, and I wish I could cite it because it's really interesting, that someone on my team brought up the other day, which says that policymakers like to read a short briefing, like a two-pager, but they like to know that there's a 100-page paper backing up that two-page paper, even if they don't really want to read that longer output. So that's a really interesting dynamic as well.

Peter: Yeah, that's fascinating. I interviewed Caroline Wood from the University of Oxford recently. She did a big piece of research, a report into communicating evidence to policymakers. One of the things she mentioned, in addition to the sort of overwhelm that lots of them feel about just the quantity of information they're having to process on a daily basis, and, as you said, they're not all experts, they can't all be experts in every field that they're dealing with, so it does need to be explained accessibly; the other aspect was that they really appreciate policy recommendations as well. Not just "here's the information", but "here are some of the actions that could be taken off the back of that". Is that something that you work to do as well?

Catherine: Yes, this really chimes with our strategy as well. Thinking back to theories I learned in graduate school, this really comes down to choice architecture, right? If you are giving choices to these policymakers, if you are doing some of the legwork for them, it makes it so much easier for them to take action. And so as much as possible, particularly when we're writing a policy-facing briefing, the recommendations are the thrust of the report, the most important part of the report, and we work really hard to get those right.

Peter: And what sort of recommendations and communications are you having with them? Do you have any examples of the research that you've been doing at Ada, and the outputs that were directed towards policymakers?

Catherine: Yes, so I'd like to give an example of a summit that happened last year. This was the global AI Safety Summit; it was in the news, you may have heard about it. It took place at Bletchley. It was organized by the UK government, and it was a gathering of world leaders, luminaries and tech company executives, not so much civil society, which we will get to, to talk about the future of AI. And it was really framed around this existential risk of AI: the idea that in the distant future AI could be used maliciously, in terms of nuclear warfare and sort of apocalyptic scenarios, that kind of thing.

And Ada, as well as our colleagues in civil society, wanted to bring a new perspective to that, which is that harms are actually happening right now. When you apply for a loan, AI can decide whether or not you get a loan. When you apply for benefits, it's the same thing. Biometrics and facial recognition technology can have adverse impacts when it comes to misidentifying someone as a criminal. So these are real harms happening now, and we didn't really want to talk about the distant future when there are things that are happening right now. That was a framework that wasn't really in place for the summit. And so we, along with our civil society colleagues, worked together to shift this narrative. There are a few ways that we did this.

First of all, we wanted to provide a frame that connected to our mission of AI and data working for people in society. And so when our director had the opportunity to speak at the summit, her first words were: this summit isn't fundamentally about technology or regulation. It is about people: the people who make technologies, the people who regulate them, and most importantly, the people whose lives are affected by them. These lines were incredibly powerful, and our director had multiple senior people in government coming up to her after her speech and acknowledging the importance of this framing, and the fact that this narrative wasn't loud enough. So that was really powerful. The first thing we did was provide a frame.

The second tactic we used was giving examples and analogies that really stick. Often, talking about something like regulation or governance is really intangible, and you need these analogies so that they just stick in people's brains. So we had a few things that we wanted to see in the regulation of AI, and we connected them to things that are already happening. For example: testing for AI manufacturers, in the same way that cars are crash tested; liability for when things go wrong, like when a restaurant gives you food poisoning; and a voice for people who are going to be impacted, like what happens with planning permission. So we used these three everyday examples that most people are going to know about, and we connected them to how we think AI should be governed and regulated. And those were really effective as well.

A third tactic, which is one of my personal favorites: if you want to influence policymakers, it's really helpful to use their messaging to get your own messaging across. One of the key messages ahead of this summit was that the UK wants to be an AI superpower; they want to be at the forefront of innovation. And so we made sure to use this language to say that if the UK is going to be an AI superpower, then we need protections for everyday people. We need a safety net, we need regulation. So that was a tactic for us to turn the government's messaging on its head and use it to make sure that we really emphasized the importance of people and society in this ecosystem.

Peter: You mentioned something you'd come back to, which was the lack of representation of, I think you said, civil society at the Bletchley Park summit. Even though your target audience there is policymakers, how are you actually involving people? Not that policymakers aren't people, but other people, other members of society, in the research that you do? And how do their ideas and values and concerns feed into the research and the reports that you're creating?

Catherine: Public participation research is a really important workstream at Ada, and it actually cuts across a lot of the research that we do. One of our main objectives as an organization is to elevate the voice of the public, to understand what they want and need when it comes to AI. One way we did this in the context of the summit is that we supported a brilliant organization called Connected by Data to hold a public deliberation on AI. It was essentially a five-day exercise where a group of members of the public heard from experts, heard sessions that were happening both at the Bletchley AI summit and at the AI fringe event that was happening in London, and then came together to make some recommendations for what they want to see in terms of governance and regulation, and how people can be protected as these technologies evolve. So that was a really interesting and concrete way that the public can be involved in these decisions. And I think that's probably another podcast, actually: how can you use public voice to shape policy? But it's definitely something that's really important to us at Ada.

And I would say, from a comms perspective too, this summit was a really unique situation for me as a head of communications. In past organizations, I've been used to media coverage being sort of the pinnacle of comms victories: you want to get as much coverage as possible. What was really unique about this summit is that we had a convening role at Ada. We helped to amplify civil society's voice; that was our objective, even if it wasn't necessarily our voice. So there was a real sense of community. We were sharing media opportunities around with other civil society organizations, which, with my PR hat on, I had never really done before; usually you're very protective of that, right? But I think what happened is that the diversity of those voices really did shift the narrative in the media as well.

The media had really been covering this existential risk that I talked about before, sort of the apocalyptic scenarios. And I really think, and we monitored the media coverage, even to this day the coverage of existential risk is almost nil, because in that moment we, the media and policymakers alike, realized that that wasn't really the issue to be talking about. It is the current harms that are happening now. A big indicator of impact for us was that we saw our messaging echoed in Kamala Harris's remarks around the summit. She made a very pointed remark that it's not about these future existential scenarios; it's about the harms that are happening here now. And so that was a big win for us and for the rest of civil society, which felt quite underrepresented at the summit.

Peter: And that focus on present-day dangers and risks over the existential: was that reflected in the concerns that came through the public deliberations that you did as well?

Catherine: Yeah, absolutely. And I think the number one consensus that came out of those deliberations, and that has come out of our research as well, is that the public wants regulation with teeth, as in robust regulation and safeguards as this sort of unpredictable new technology is deployed more and more in our everyday lives.

Peter: This isn't particularly a comms question, I guess, more just a question because I'm interested to know: who stands to gain from not having new regulation? It's industry, it's these sort of technology companies, right?

Catherine: Yeah. That's it, essentially.

Peter: That's it. Okay, fine.

Catherine: That's it, exactly. But I think how that becomes a comms question is the whole "regulate to innovate" theory. Right? It's: how can we convince tech companies that regulation and innovation are not mutually exclusive? And obviously that's not just a comms question, but comms plays a really big role in it, in that storytelling.

Peter: We've spoken about, obviously, the work that you and your team at Ada have done as a comms team to get these messages out and so on. How about the researchers themselves? I guess the first question is: what kind of research are they doing? I'm sure it's very broad, but it would be fascinating to hear about some of the things that they're exploring at the moment. And secondly, what's your sort of working relationship with them? Are you trying to empower and enable them to tell their own stories as well? Or do they just get on with the research, feed that to you, and then you go and communicate it on their behalf? How does that relationship work?

Catherine: So first of all, we have some really interesting projects coming out soon. Our scope is very wide in terms of the actual topics we cover, because if you imagine it, the intersection of technology and people and society is very broad; there are lots of ways that people and society can be impacted by the deployment of technology. Some of the reports we're working on right now are around AI-powered genomic health prediction, which is really fascinating. It's around how AI can be used to predict disease risk, or to predict the way that someone responds to a certain medication, and the report is about the implications, positive and negative, that this could have on the healthcare system and people in society. We're doing a report on gender and AI, which explores the lived experience of transgender and non-binary people and how data-driven systems in healthcare have affected them. And then we're also doing a big workstream on the use of AI and data-driven systems in public services. I think often there is a lot of optimism about how AI can transform public services, and we know at Ada that there is a lot of nuance to this. Technology doesn't exist in a vacuum; it has knock-on effects for people and society when it is deployed in a system that already exists and functions. So we're looking at those knock-on effects and offering recommendations on how this technology can be deployed in the public sector in a responsible way. Those are some of the things we're working on.

And in terms of our actual research process, this is also something unique about Ada that I've never experienced in another research institute that I've worked at before. Typically, the workflow at other places I've worked would be that researchers draft a report fully, pass the draft to comms, comms polishes it, and then it goes out into the world. So there's very little dialog back and forth at those early stages. What's really unique about Ada is that comms is involved from the inception of an idea. And what that means is that at those very early stages, we can define audiences; we can ensure that the framing reflects our mission, vision and values as an organization. I think when you're writing about technology, it's very easy to fall into a slightly jargony and overly technical way of describing things. So our comms team is there to ensure that people and society are centered, and also that these reports are written in a way that is accessible for the audiences that we want to reach. What it means is that the process is probably a bit longer than it would be if it were just "research has it and then comms has it", because there is lots of back and forth, lots of dialog. And we've been working to really clearly define editorial roles and responsibilities between research and comms, so that everyone is comfortable with the process and it runs as smoothly as possible. But what we get out of it are evidence-based, well-written and really robust reports that we can be proud of and that we know reflect really, really excellent work.

Peter: And presumably, I don't know if you've had feedback from the researchers about this, but the hope would be that by always asking the question of who this is for whilst the research is happening, having that constant focus on, I don't know what the term is, the sort of end user, the people who they're doing the research on, throughout the process, one would hope it would have a beneficial effect on the research itself as well.

501

:

Yes, absolutely. Defining that.

502

:

Yeah.

503

:

Defining that even before

the research really begins.

504

:

And but also keeping in mind

that these things can shift and evolve.

505

:

Right.

506

:

So there's nothing about this process

that we want to be too

507

:

set in stone and too rigid

because we know that research can evolve.

508

:

We know that writing can evolve.

509

:

And and also the external environment

can evolve, right?


511

:

So, you know,

there could be a surprise election.

512

:

There could be something that means

that your audience has changed and

513

:

and that's built into the process.

514

:

This slight flexibility.

515

:

But if you have an idea

of who you're talking to

516

:

before

you even start conducting the research,

517

:

then that really helps

to focus the research team.

518

:

Yeah.

519

:

So that's really interesting

to hear about,

520

:

sort of the process of that report

writing embedded in with the researchers

521

:

and the reports themselves, presumably,

you know,

522

:

they're quite sort of text heavy,

I suppose, sort of dominated

523

:

by copy and the writing

and the statistics and findings and so on.

524

:

One of the areas

I was sort of interested in as well

525

:

was visuals in communications
about artificial intelligence.

526

:

I spoke a few years ago with a
researcher

527

:

who was talking
about global narratives in AI.

528

:

And one of the side projects she was doing

529

:

was about representations
of artificial intelligence.

530

:

And it was all those sort of

531

:

glossy

532

:

white robots,

basically. So one of the things:

533

:

how are you coping

534

:

with that side of things

with the visuals? Yes.

535

:

I think this is a perennial challenge

for anyone in comms who works in the tech

536

:

or AI space: if you go on a stock

537

:

photo library, you're exactly right.

538

:

What you will see is fluffy white robots.

539

:

You will see Tech Blue, as we call it,

which is a very specific

540

:

shade of blue

that's used to illustrate technology.

541

:

You will see lots of people holding

smartphones right.

542

:

You will see very futuristic images.

543

:

And for us as a brand, this does not

reflect what we want to talk about.

544

:

As I said at the beginning of this chat,

545

:

we want to focus on people in society,

not on the actual technologies themselves.

546

:

And the solution that we found is that
we're actually working on a project

547

:

now where we are creating our own library

548

:

of illustrations alongside a designer.

549

:

We're focusing on our areas of work,

which are society,

550

:

justice and public services,

emerging technologies,

551

:

policy, governance and regulation,

and then public participation.

552

:

It is a real

553

:

challenge to illustrate

something like governance or regulation.

554

:

And we've had some really interesting

chats with our designers

555

:

about how to convey these things

556

:

in images in a way

that also centers people in society.

557

:

So this is definitely a work

in progress for us, but it's something

558

:

that will hopefully be

a really useful tool for us and our brand.

559

:

Right.

560

:

And when will we be able

to see those illustrations?

561

:

Do you have a...

562

:

We're hoping

by the end of the year. Great.

563

:

I look forward to it.

564

:

I'm going to ask

a couple of sort of final questions

565

:

that are a bit more general.

566

:

But before we move away from AI, I do have

one question I'm very curious to know

567

:

because there's a lot of conversations,

568

:

obviously, around

how is AI impacting society?

569

:

There's a lot of conversations

570

:

within the research
communications sector around

571

:

how AI is going to impact
that sector in particular.

572

:

So very curious to know

your thoughts on that, on how you feel.

573

:

It's going to sort of influence

and transform

574

:

the work of professional communicators

and also how you currently, if at all,

575

:

I imagine you are in some capacity

using AI in

576

:

your own kind of daily or weekly

workflows.

577

:

This is my personal view,

and I'm very much

578

:

in the creative writing
and editorial space.

579

:

I realize that other comms professionals

might think of using

580

:

AI in different ways.

581

:

So I think that generative

AI is the thing that I think about

582

:

when it comes to potential threats

for comms professionals.

583

:

And at the moment,

I can say with confidence that ChatGPT

584

:

is not going to replace any good writers.

585

:

I can give you an example of that.

586

:

I occasionally use ChatGPT

587

:

to brainstorm title ideas for reports. So,

588

:

you know, giving a prompt

that says, I'm writing a report about X,

589

:

please could you give me ten titles

that have puns?

590

:

I think one example recently is that

we were writing a report about education,

591

:

and one of the titles it gave back to me

592

:

was 'AI of the Tiger:
Taking a Bite out of Education'.


594

:

I'm just very confident, by the
outputs

595

:

I get from ChatGPT, that they are not

replacing actual writers any time soon.

596

:

I'm very confident with that. Yeah.

597

:

Are there any sort of useful...

598

:

I mean, sounds like

you're experimenting with it thus far.

599

:

It's not doing that job,

600

:

you know,

but are there any useful applications

601

:

you're finding or you're still just

in the kind of experimental phase?

602

:

So at Ada, we are really cautious about

603

:

using AI in our research

and in our evidence building.

604

:

And that's simply

because we want to make sure that we're

605

:

examining all of the knock on effects

of the technologies that we use.

606

:

And so, like, ironically,
because of that,

607

:

I think we're probably slower

608

:

adopters of these technologies because

we really want to just examine them.

609

:

I mean, I would say that in a

perfect world, if there were some kind of

610

:

AI system that generated perfect footnotes

that I didn't have to edit,

611

:

I would really appreciate it.
I would pay a lot of money for that.

612

:

I have not found that yet,

613

:

but that would be
my dream application of AI.

614

:

Yeah, I think that's it, isn't it?

615

:

Most people's hope is that

it will be able to replace the sort

616

:

of the drudgery of some of the work,

the sort of more mundane tasks

617

:

rather than the fun, creative work.

618

:

Yes, that's really fascinating.

619

:

Thank you.

620

:

So finally, a couple of questions.

621

:

One or two final questions
that I ask all my guests.

622

:

The first is whether you have a particular

piece of advice that's been given to you

623

:

throughout your career and comms

that you come back to most often,

624

:

or that has sort of helped to steer
the way you work.

625

:

For me, that's a really easy question.

626

:

It is: always go back to the strategy.

627

:

When in doubt,

always go back to the strategy.

628

:

Go back to the mission.

629

:

When it comes to making tough

630

:

decisions,

when it comes to crisis communications,

631

:

just go back to the strategy.

632

:

What does your organization stand for?

633

:

Who are the people that you're talking to?

634

:

Yeah, but it's just a fundamental thing.

635

:

And it's helped me through

lots of tough situations.

636

:

That's the sort of overarching

strategy

637

:

of the organization as a whole.

638

:

The comms strategy. Exactly, exactly.

639

:

Yeah.

640

:

And is that

641

:

I mean, that's something presumably

periodically you're looking to kind of

642

:

renew or revisit those strategies

643

:

anyway, as another sort of evolving
process, right?

644

:

Yes, absolutely.

645

:

At Ada, we have a new director
and we're actually working on

646

:

a new three year strategy right now,

which should launch next year.

647

:

And so, yes, I think this might tie back

to what I was saying

648

:

about the definitional piece earlier

in that

649

:

it feels comfortable to have something

that's evergreen,

650

:

but in reality, a definition will evolve,

an organization will evolve.

651

:

And I'm lucky to work in an organization

where comms

652

:

is always a really close collaborator

653

:

in those processes.

654

:

Final question is a recommendation.

655

:

So often people recommend

a particular book that has

656

:

sort of inspired the work that they do,

but it could be anything,

657

:

could be documentary,

radio show, podcasts, whatever it is.

658

:

What? What's yours?

659

:

So mine is very topical.

660

:

It's not exactly about communication,

but it is the perfect example

661

:

of using storytelling

to illustrate a complex issue.

662

:

So it is a new documentary

called Daughters.

663

:

It's on Netflix in the UK,

and it zooms in on a father

664

:

daughter dance that is being held

in a prison in Washington, DC.

665

:

So that's the story that's being told.

666

:

You see interviews with the fathers,

667

:

you see the daughters outside of prison

living their everyday lives and the

668

:

struggles that they're encountering,

you know, with their fathers in prison.

669

:

And then everyone

is preparing for this dance.

670

:

So that is the story that's being told.

671

:

But it illustrates the systemic problems

of racism,

672

:

of poverty,

of the prison industrial complex,

673

:

and without actually,

674

:

you know, being heavy-handed about it
or talking about these things

675

:

in an academic way, it shows it through

the stories of these actual human beings.

676

:

And it is so powerful and moving.

677

:

And I think it's a really good lesson,

actually.

678

:

I can't wait to check

that out. Thank you so much.

679

:

What a fascinating and fun chat that was.

680

:

I'm enormously grateful

to Catherine for her time,

681

:

and also very grateful to everybody

working at the Ada Lovelace Institute

682

:

for all they're doing to keep society's

interests front and center

683

:

when it comes to the development

of new technologies involving data and AI.

684

:

Next episode is going to be

another Research Unravelled extra.

685

:

That's our bonus episode where Bianca,

my co-host, and I will reflect

686

:

on this interview and more importantly,

we'll hear what you thought about it.

687

:

So please do let us know what resonated.

688

:

Do you have any of your own tips

for how to engage with policy makers?

689

:

I know that this is an area that lots of

you want to explore in greater depth,

690

:

so it would be wonderful to hear

what you think.

691

:

As ever, you can either write to me

directly at peter@orinococomms.com,

692

:

or spark conversation on social media

using the hashtag Research Unraveled.

693

:

Or if you'd like your actual voice

to be heard and featured in that bonus

694

:

episode, you can drop us a voice memo

via the link in the episode notes.

695

:

If you do one thing today to

696

:

help us, it would be to share this episode

with someone in your network.

697

:

If you could do two things,

698

:

it would be to also leave a review and

rating wherever you listen to podcasts.

699

:

All would be massively appreciated

and help us to grow this podcast

700

:

for the benefit of the wider

research communications community.

701

:

So thank you and see you next time.


About the Podcast

Research Unravelled
A podcast exploring the complex world of research communications
Welcome to Research Unravelled - a podcast exploring the impactful world of research communications.

We'll be digging into the complexity that lies at the heart of this field and hearing from expert practitioners about how they navigate or unravel that complexity.

Research Unravelled is hosted by Peter Barker and brought to you by Orinoco Communications - the creative agency where we specialise in helping research organisations to tell their stories and give their research the reach it deserves.

In addition to his monthly conversations with experts, Peter will also be joined by colleague and co-host, Bianca Winter, for bonus episodes where they will respond to listeners' comments and questions and discuss the latest news from the world of research comms.

About your host


Peter Barker

Peter is a multimedia producer with more than fifteen years' experience creating documentaries, animations and other forms of digital content for TV and online. Before starting Orinoco Communications in 2016, Peter worked as a television producer and director, a job that took him all over the world, filming everywhere from NASA bases in the U.S.A. to volcanic islands in the Pacific Ocean to ancient Mayan ruins in Central America. Now he has a more sedate life, living with his family by the sea on the east coast of England.