
Red Fern Book Review by Amy Tyler
Find your book club picks and get your literary fix here. I lead bookish discussions with authors, friends and family, minus the scheduling, wine, charcuterie board and the book you didn't have time to finish. My tastes skew toward the literary, but I can't resist a good thriller or the must-read book of the season. If you like authors like Donna Tartt, Ann Patchett, Jonathan Franzen, Marie Benedict and Rachel Hawkins, this podcast is for you.
Canadian Book Club Awards Winners Part 2: Ignacio Cofone
Oxford Professor of Law and AI Regulation Ignacio Cofone joins the podcast to talk about his book The Privacy Fallacy: Harm and Power in the Information Economy. Ignacio is a 2024 Canadian Book Club Awards Winner in the Non-Fiction/Education category. He talks about what we need to think about when we sign data consent forms and convinces me that computers are not taking over the world anytime soon.
Books discussed:
The Privacy Fallacy: Harm and Power in the Information Economy
About The Canadian Book Club Awards:
- Canada's largest readers' choice awards
- Open to all authors, regardless of publishing type (self-published, hybrid, traditional)
- Readers just want good books!
- Submissions for the 2025 awards are open
- Sign up to be a verified reader and help select the 2025 finalist books
Follow the Canadian Book Club Awards:
Instagram: @thecanadianbookclubawards
Website: canadianbookclubawards.ca
Follow Ignacio:
Follow Red Fern Book Review:
Website and to leave a voicemail: https://www.redfernbookreview.com
Instagram: @redfernbookreview
Facebook: https://www.facebook.com/redfernbookreview/
Newsletter: https://www.redfernbookreview.com/newsletter
Amy Tyler:Hello! Welcome back to the Red Fern Book Review. I am your host, Amy Tyler, and today I am interviewing one of the winners of the Canadian Book Club Awards in the Education and Non-Fiction category, and his name is Ignacio Cofone, and he's written a book called The Privacy Fallacy. So before we get to him, I wanted to let you know this is a little bit of a mini episode, a bit of a baby episode, because he was supposed to be in the last episode, but he's in the UK, and there was some scheduling stuff. It's kind of my fault, but the good news is he gets his own episode. So let me tell you a little bit about the awards. The Canadian Book Club Awards are Canada's largest readers' choice awards. They're open to all authors, regardless of publishing type, whether you're self-published, traditionally published, or a combination of the two. And submissions for the 2025 awards are open, so if you want to submit your book, you can, and I'll put a link in the show notes. And what makes these awards Canadian? You don't actually have to be a Canadian to win them, but they're read and voted on by Canadian readers. So if you want to be a verified reader for the 2025 awards, which will be announced later this year, you can do that, and I'll put a link in the show notes to that too. A little bit about Ignacio. He's a professor of law and regulation of AI at the University of Oxford, he's also affiliated with the Yale Information Society Project, and he's a former professor at McGill University. His interest is in AI and data governance, with a focus on regulatory design and remedies. So with that, we're going to move over and talk with Ignacio. It's so nice to meet you, Ignacio, and thanks for joining the podcast.
Ignacio Cofone:It's very nice to meet you too. Thank you so much for having me here.
Amy Tyler:I wanted to start with just the first question, which is: how did you decide to specialize in ethics and AI? It's super topical, but how did that come about for you?
Ignacio Cofone:Yeah, so I became interested in AI ethics by working on privacy, as you might have noticed from the book. Over time, when I was working on privacy, both from a policy and from a law angle, it started becoming clear to me that many of the pressing issues regarding data and privacy, like how people are profiled, how they're nudged, how they're scored, couldn't really be understood without also thinking about the systems and technologies that drive those data practices. And so the book tried in different ways to bridge those two ideas, to show how connected thinking about AI ethically and thinking about data and privacy are. What drew me in was seeing how technology, and AI systems in particular, can concentrate power in lots of invisible ways. They can amplify power imbalances between individuals and large platforms. And because of that increased power imbalance, they can often escape public scrutiny, creating harms that traditional frameworks fail to recognize and address. And so that tension between what the law and public policy see in AI and what actually happens and causes harm is what I try to keep at the core of my work.
Amy Tyler:Okay, so I'm very curious about the writing process for you. You're an academic, you're probably very comfortable writing book chapters and scholarly articles, and you're obviously comfortable public speaking and presenting material, but writing a full book is something different. So I just wanted to know: what was that process like for you? And maybe tell us something that surprised you about the process.
Ignacio Cofone:So I wanted to write this in a way that would be exciting for people who work in the field but could also be accessible to a general audience, because writing a book gave me a unique opportunity. It gave me the opportunity to write a narrative that connects the dots between many of the different specific problems that I was working on. When you write an article or a book chapter, you get to focus on a specific aspect of a specific problem, but you don't always get to share your broad view of a field or your broad view of why you think something is a problem. The writing process was different in the sense that I tried to pitch it to a different audience. I didn't write it for a group of experts; I wrote it for a general audience, which meant that I read more broadly than I normally read, both in terms of academic work and non-academic work. And I workshopped it differently than how I would normally workshop it, with policy audiences, with non-academic audiences. And the exciting thing about it is that it gave me space and the opportunity to ask not only why something was a problem, but also what caused that problem and what avenues we could potentially have to fix that problem.
Amy Tyler:What do you mean by workshop? So you would test out your material? Would that be through a lecture? Or what do you mean by that?
Ignacio Cofone:Sometimes, yes. Sometimes it literally means going to a workshop and presenting it, not only at an academic conference but also at an event that has legal and non-legal professionals, for example, or an event that has lots of regulators. And sometimes it just meant informal conversations with different people. I sent it to friends who are not academics and are not lawyers. I sent it to regulators that I've met through other work on privacy and AI to see what feedback they would have. I sent it to people working on the technology to make sure that whatever I say about it is realistic and accurate. Because the timeline was longer and the argument is more general than other pieces that I'm used to, there was this pretty nice opportunity to get input from several different groups.
Amy Tyler:What's something that someone told you you had to take out or you had to change? Can you give an example?
Ignacio Cofone:Something that I had to take out or had to change? I used to have a very short conclusion, and that's because I tend to never read the conclusions of the books that I read, which I now notice is probably a terrible habit. I think most people just read the book and then skip the conclusion, because it usually doesn't say much that is new. And people told me that was a terrible idea and I should actually write a longer conclusion. Now that I did, I'm happy that I wrote it, because lots of things that would have stayed implicit throughout the book if I hadn't written it, I could make explicit at the end, about the more general view. And I think that was part of the writing process of moving away from writing academic papers, where in the conclusion you just want to summarize or restate the arguments that you made before, versus writing a book for a different kind of audience, where you have to be more explicit about what the general view is.
Amy Tyler:Okay, so let's explain to listeners what you mean by the privacy fallacy. So explain your book.
Ignacio Cofone:The privacy fallacy is the contradiction that I think is at the core of how many legislators, companies, and even sometimes advocacy groups treat privacy. We often say that privacy is a very important social value, but then when it comes to protecting it, we often find ourselves caring only about the tangible and measurable harm that can happen as a consequence, like identity theft or financial loss. But saying that privacy is a social value means saying that there's something valuable in it, not just around it, and if we forget that there's something valuable in privacy, there are lots of deeper harms that we stop paying attention to, like how people can be manipulated, how they can be excluded from opportunities, or how decisions about their lives can be made by opaque systems. So I thought the book needed to address and push back against the privacy fallacy, because it is a book about the information economy. The information economy is the system in which companies profit not only from the money that we give them but also from the data about us that they have. And so by addressing the privacy fallacy, I wanted to cut through a number of misconceptions that interfere with the way that we can understand the information economy. One of them, for example, is the idea that if you have nothing to hide, you have nothing to fear. Well, it could be that you actually have nothing to hide, and no negative material consequences will happen to you if you don't hide or keep something private, but it could be that you want to keep that information private nevertheless. Another idea that cuts across the information economy, and that relates to this fallacy, is that consent ensures safety: that if we ask people to consent to data practices, then everything is okay. But lots of bad things can happen to people even if they agree to certain data practices. So I think taking seriously the idea that we should not exploit people through personal information helps us see lots of aspects of the information economy a lot better.
Amy Tyler:So this is sort of a what-if question. If you were writing the terms and conditions for a company, and no one really reads those. I mean, you might, because you're a lawyer. I don't, you know; I'll just be in a hurry to do something, and I'll just check it off. What would you do differently? Or what would you highlight?
Ignacio Cofone:Yeah, in this hypothetical, would the company let me write anything that I like? An ethical company? Well, I would first thank this wonderful company for being so ethical and wonderful and open to new ideas. And then I would put in place, and it sounds quite general, but I think it's quite important, a corporate duty not to exploit its users. I think just by having this idea of non-exploitation embedded at the core, we can get through a lot. Part of the problem that we have, both with how we write terms and conditions and with how we write our privacy laws, is that we try to be as specific as possible, but data harms are contextual. So when we try to be so specific and prohibit very targeted practices, lots of harms end up falling through the cracks, because they're context-dependent, because we didn't predict them. But if we tell companies, don't exploit the users that gave you their information, then we can catch all of those things, and lots of these things end up being common sense. If I could go further, then I would also write some disclosure requirements, where the company discloses not just the mechanics of data collection, as most terms and conditions do, but also the potential for aggregation, for influence, for harms at scale. And I think that would be more helpful for whoever ends up actually reading the terms and conditions, if anyone.
Amy Tyler:If anyone! Now, when you say non-exploitation: normally you see something that says you have to give permission to share your data. How is this different? Like, what's something other than giving your data to another company that could then market to you? Give me another example, because you're making it broader.
Ignacio Cofone:Yeah. So data sharing can be very good in many contexts and very bad in other contexts. Data sharing can be something as simple as a tech startup that is very responsible in how they handle data but doesn't have enough space on their own servers to keep all of the data there, so they need to share it with a third party because they're renting their servers, and that's a totally fine way to share data. Data sharing can be as bad as a company selling health information about its users to another company in a way that exposes them to discrimination and financial harm, or that exposes them to manipulation. So when we say data sharing is okay or not okay, we're putting lots of things that are very different in the same bag, and it is impossible to come up with a rule that says, oh, this company should do data sharing, this company should not do data sharing. And it is wrong to put on the shoulders of users the information-gathering process and the decision process to figure out which kinds of data sharing are okay and which ones are not, particularly because saying data sharing is okay is basically handing that company a blank check to do the good types of data sharing and also the bad types of data sharing.
Amy Tyler:Okay, so I'm curious. I think everyone's afraid of AI. I mean, I am. Is there anything in this space that gives you hope or makes you optimistic? Because it's scary.
Ignacio Cofone:Yeah, I think we're often scared of AI for the wrong reasons. A lot of us are sometimes scared of AI because we think that it will become too intelligent and will try to wipe us off the surface of the earth, and that's highly unlikely. I'm not saying that no one should pay attention to that, but that shouldn't be the center of attention. I'm more worried about other types of AI harms. I'm worried about misinformation and disinformation created by generative AI. I'm worried about the different ways in which we use predictive AI wrong and make decisions about people that aren't fair. But what does give me hope is that I think the conversation is shifting in a number of ways, and we increasingly see efforts, both in the regulatory space and in the activism space, that pay more attention to the question of power. And the question of power is at the core of data, and is at the core of AI. Holding data about people is holding power over people, and holding that power over people allows one either to do lots of great things for them or to exploit them. And the ways that we traditionally have to handle that power just don't cut it anymore. So when we see these new activist and regulatory efforts seriously take power into account, and when we see people filing lawsuits and trying to push back against abuses of that power, I think that's the kind of shift where meaningful change can happen, and I am cautiously optimistic about things improving.
Amy Tyler:Okay, and then I guess the last question, just sort of a fun question: what's the experience been like for you now that your book is done? Do you just enjoy it, talk to people, have your book around? What's that been like for you, being a first-timer?
Ignacio Cofone:Oh, it's great. It's great fun. Also, I think the publishing process gave me a wonderful break. I spent so much time editing in the last couple of months that I couldn't look at it anymore for a little bit, and the couple of months that a publisher takes to have it come out was just the break that I needed. And Cambridge University Press was wonderful. They did it so fast, they did it so efficiently, and they were really great to work with. So, yeah, it was nice to get to talk about it with people, to speak with groups that I don't usually get to speak with when I do academic work, like you and your listeners, and to try to bring people into the conversation a bit more, to try to have this not be just a conversation between lawyers and regulators about the technicalities of how to design AI regulation or privacy law, but rather to get input from people working on technology or in the corporate sector or in other fields about something that, I mean, it sounds distant, but does affect everyone's daily lives in lots of hidden ways.
Amy Tyler:Thank you so much. Thanks for joining the podcast, and I really enjoyed our conversation.
Ignacio Cofone:Oh, thank you. Thank you for having me, and I enjoyed the conversation as well. Thanks.
Amy Tyler:So again, the book is The Privacy Fallacy: Harm and Power in the Information Economy, by Ignacio Cofone, and I really enjoyed our discussion. This is not a beach read, but it's a very well laid out book that addresses a very topical problem that we're facing. And really what he's talking about is power imbalances, and how it's a fallacy that when we give our consent, when we click "I agree," we're really protecting ourselves, and that we need to look at stronger systemic regulation that addresses why corporations have the power that they do. But what I really enjoyed from talking with him is that you can feel his passion about the subject matter. I personally am pretty afraid of AI; I'm nervous when I sign those forms that, as I mentioned, like most of us, I don't read closely. But he was more about, let's just explain what's going on and come up with ways to make positive change. And it's good to know that he doesn't think that computers are taking over the Earth anytime soon, so hopefully that won't happen. Anyway, I will be back with you in a few more weeks with another set of interviews from the Canadian Book Club Awards. Thanks so much for tuning in. Bye.