Safety by Design with Julie Inman-Grant

In this safeguarding podcast we talk about Safety by Design with Australia’s e-Safety Commissioner Julie Inman-Grant, an American in Australia protecting children around the world. She discusses the Child Dignity Alliance, how the suicide of a top TV presenter, caused by online abuse, led to the creation of the e-Safety Office, and the three key principles of the Safety by Design code.


Below is a transcript of the podcast, lightly edited for legibility, for those who can’t use podcasts, or for those who simply prefer to read.

Neil Fairbrother

Welcome to another edition of the SafeToNet Foundation’s safeguarding podcast where we talk about all things to do with safeguarding children in the online digital context.

The online digital context comprises three areas: technology, law, and ethics and culture, with child safeguarding right in the centre of this Venn diagram. It encompasses all stakeholders between a child using a smartphone and the content or the person online that they are interacting with.

Different countries have different cultures, different laws and different regulations, and you can’t get much further away from the UK than Australia. Joining me on the line from Australia is Australia’s e-Safety Commissioner Julie Inman-Grant. Welcome to the podcast, Julie.

Julie Inman-Grant

Thank you so much for having me, Neil. And just to confuse your listeners further, I am in Australia, about as far away as you can be, but I gather you’ve picked up my American accent, so I’m in Australia by way of Seattle and Washington D.C.

Neil Fairbrother

Yes, you have quite an illustrious CV. You spent a lot of time with Microsoft, I think. Perhaps you could start by giving us a brief resume of your career and your background, Julie, and outline the work that your team does.

Julie Inman-Grant

Sure. Well, I left Uni in the early 1990s full of big ideals and even bigger hair because it was the 1990s, after all. But I went to work for my hometown Congressman in Washington DC and I started working on a range of social issues and he came to me one day and said, we’ve got this small little software company in our district called Microsoft, maybe you could work on telecommunications and technology issues as well.

So I started working at the intersection of technology and social policy before there was even an internet. I followed that up by living in Europe for a while, working out of Brussels, and then joined Microsoft as their first lobbyist in Washington DC in 1995, right before the US Department of Justice sued them for antitrust violations. So that was a fascinating time in terms of policy development. I was part of the development of the Communications Decency Act of 1996, which essentially gave the online companies (of course, there wasn’t social media then) intermediary liability protection, immunity from responsibility for what users were doing on their platforms. Now I think those laws are seriously being reconsidered, twenty-three years later.

I also worked on the first White House summit on online safety when Clinton was in the White House. So fast forward five years, and a bruising engagement with the Government later, Microsoft sent me out to the former penal colony in Australia to sort out their community affairs, industry and government relations programs, which I expanded to cover safety, privacy and security across Asia Pacific through 2009. And then I finished up with them working as their global lead on internet safety and privacy outreach out of Redmond.

Three children later, I joined Twitter here in Australia, covering all of Southeast Asia, trying to help them turn their safety fortunes around, and did that for a couple of years. Then I joined Adobe for something a little bit different. And then I was recruited by our Prime Minister to no longer work within the technology companies, but to start regulating them for the harms that were happening on their platforms.

And I’ve got to say it’s the most rewarding, most difficult job I’ve ever done, but it does marry all those unique experiences I’ve had over the past 25 years working at the intersection of technology, social policy and safety.

Neil Fairbrother

Excellent. Brilliant. So you’re also involved with the Child Dignity Alliance. What is the Child Dignity Alliance?

Julie Inman-Grant

That’s right. Well, Pope Francis gathered a group of, I guess, probably a hundred of the top online safety experts at the Vatican in November 2017, and made some really important acknowledgements about the role of the Church in not only combating child sexual abuse within their institution, but also acknowledging that child sexual abuse is a huge issue online.

So he issued the Rome Declaration, which was really about preserving the dignity of children in the digital world. And out of that, Baroness Joanna Shields and Ernie Allen, who used to head up the National Center for Missing & Exploited Children in the US, formed the Child Dignity Alliance. And I was honoured that they asked me to serve on the technical task force.

Now, technology can be used in so many positive ways and harnessing technology to be able to combat child sexual exploitation and identify victims and go after perpetrators is important. But we also know how the internet has helped with the proliferation of child sexual abuse. In the earlier days, child sexual abuse was really about a predator having direct contact or access to a child, whether as a family member or a family friend, a coach or a teacher.

But now we see predation online, and we’re giving predators a vast new repository of people who can be groomed and reached. And of course, there’s also the extremely traumatizing challenge of these children being tortured and abused, and their abuse being memorialized, recorded and proliferating online, following those individuals for the rest of their lives, so we can’t underestimate the trauma that that causes.

So there are positive and negative roles that technology can play in this space. This working group was about harnessing the positives, identifying the true barriers to collaboration in terms of using technology to combat the scourge, but also creating an inventory of all the advanced technologies and products that are available to NGOs, law enforcement, and industries that are willing to join the fight against child sexual abuse. So some really important work’s been done there.

Neil Fairbrother

Okay, excellent. Thank you. Now we don’t have an e-Safety Commissioner in the UK. What is the role of the e-Safety Commissioner and how does that position fit into Australia’s political system?

Julie Inman-Grant

Sure. The e-Safety Office was established in 2015. It actually came about… there was “Australia’s Next Top Model” [an Australian “reality TV” series], believe it or not, and there was a woman by the name of Charlotte Dawson who was the host, and she was very visibly trolled on Twitter. She was open about issues that she struggled with, mental health issues, depression. She went and got some help, tried to get better again, got back on Twitter, received horribly targeted abuse and sadly ended up committing suicide. That was referred to as the Twitter suicide. It blew up in the media, a petition was started, and there was a lot of pressure on government to do something about all the harms the internet was creating, particularly around online abuse.

The Government’s response in 2015 was to establish the Children’s e-Safety Commissioner and to start small, focusing on those most vulnerable to online abuse: children. That’s how the legislative cyberbullying scheme works. If a child under the age of 18 experiences serious cyberbullying, and that’s anything seriously threatening, harassing, intimidating or humiliating, and they report it to a social media site and it doesn’t come down, they can come to the e-Safety Office, which serves as that safety net and advocates on behalf of the child.

We’ve done so in about 1400 cases where we’ve gotten that harmful content taken down rapidly. We’ve done that by building strong relationships with the social media sites, but also gaming platforms and apps. And we have a hundred percent compliance rate.

Neil Fairbrother

That’s fantastic.

Julie Inman-Grant

Yes. Well, we see that the companies that are responsible clearly don’t want abuse happening on their platforms. And this is where “Safety by Design” comes in. Had they actually been doing the risk assessments up front, using all their intellectual capital on advanced technologies, investing in the resources they need to build safety protections in at the get-go, rather than retrofitting protections after the damage has been done, we may not be seeing this as an epidemic on the internet.

That hasn’t happened to date, and things fall through the cracks. Think about the fact that Facebook has 2.4 billion users, 400 hours’ worth of content is uploaded to YouTube every minute, and there are a billion tweets on Twitter every two days. Things fall through the cracks. Some of these moderators have 30 seconds to a minute to decide whether or not a single post or tweet or video violates their terms of service.

So that’s why we were set up, as that safety backstop and to advocate on behalf of children. When we phone up or communicate with the social media sites, there is an inherent power imbalance with the big behemoths: when you report something and you get a “No, it doesn’t contravene our standards or our terms of service”, you don’t get a description as to why, if you get a communication at all. So we’re fulfilling that function. If the companies don’t take down content we deem to be serious cyberbullying content, we can fine them up to $26,000 a day, which isn’t a huge amount.

Neil Fairbrother

Well, it would soon mount up though, wouldn’t it? It’s more than a token symbolic payment.

Julie Inman-Grant

Right. That’s right.

Neil Fairbrother

So I understand the background behind that. In Australia you have the Australian Competition and Consumer Commission, which has “Product Safety Australia” emblazoned across its website, and it says that it’s an offence to supply goods that do not comply with mandatory standards. Why couldn’t digital online services be covered by existing product safety standards, as opposed to creating a new e-Safety Commission?

Julie Inman-Grant

All right, well, that’s what Safety by Design is all about. I should take you a step back, because I gave a long-winded description of how we started, but I didn’t tell you what we do in its entirety.

So if I were to describe how we’ve evolved as the e-Safety Commission over the past four years: not only do we deal with youth-based cyberbullying, we deal with adult cyber-abuse. We take down intimate images that have been shared online without a user’s consent, and we have a 90% success rate in doing that. And we also have an online content scheme that targets child sexual abuse material. We’ve done about 43,000 investigations since we were established, and we work through the INHOPE network.

Neil Fairbrother

Yes. This is analogous to the Internet Watch Foundation we have in the UK.

Julie Inman-Grant

Exactly, yes. Our Cyber Report team is part of that network, and the National Center for Missing & Exploited Children and the IWF are part of it too. And then we’ve been given a new set of powers in the wake of the Christchurch tragedy.

One is around issuing notices to bad websites and content hosts around abhorrent violent material, and that could be child penetrative rape, torture, murder, terrorist acts, anything that’s meant to incite further violence, terrorism or hate, and we’ve just recently been given powers to issue directions to ISPs to block terrorist or abhorrent violent content.

So our regulatory schemes have expanded, but we also have a significant focus on prevention and what I call “proactive change”. Two of the initiatives that we’re working on are around proactive change, because it’s one thing to just respond to complaints and play that game of whack-a-mole in terms of notice and take-down.

What we’re really trying to do is think about how we can create a safer, more secure, more powerful, empowering internet. And part of that starts with shifting the responsibility back onto the technology providers and that’s exactly what Safety by Design does and you’re asking exactly the right questions.

Remember back in the 70s, when we could just drive a car without a seatbelt? We’ve evolved since then: there were too many casualties, too many car accidents where people’s lives were lost, and now we have mandatory standards about the brakes working, the seatbelts being effective and the airbags deploying.

We asked ourselves exactly the same question: why shouldn’t we require the same standards from technology companies? I see tech casualties on the internet every day. So what are the things we can do to get there? We can regulate, right? But my approach, having spent 25 years in the technology industry, was to sit down with a broad number of members of the industry and say: why don’t we come to a consensus about what a Safety by Design framework looks like? What are the principles we’re trying to get to? What are the outcomes we’re trying to achieve?

Neil Fairbrother

That’s right. And you’ve got three principles, I think, in the Safety by Design code: Service Provider Responsibilities, User Empowerment and Autonomy, and Transparency and Accountability. Shall we focus on those three?

Julie Inman-Grant

We came to that consensus. We started with four in our deliberations and discussions with a broad range of industry and other stakeholders about what was going to be actionable, what was going to be achievable and what was going to be effective. And this is the consensus we came to.

We’re starting phase two of the Safety by Design proposal next week, where we’re spending a lot of time in Silicon Valley, Seattle, Los Angeles and elsewhere, developing best practice guidance for companies. So not only are we saying you should do this, but we’re going to show you how you can do it, whether you’re a very mature, multi-layered company that needs to do detailed risk and safety impact assessments, or an app developer in a garage. Here’s the checklist of the things you need to think about.

Neil Fairbrother

Okay. So in the Service Provider Responsibilities principle, one of the points you make is that online companies must “take steps to ensure their services are less likely to facilitate, inflame or encourage illegal and inappropriate behaviours”. Now, cyberbullying, particularly when it comes to children, is obviously inappropriate, and you touched a little bit on this earlier, but is it actually illegal in Australia?

Julie Inman-Grant

Well, I think this is the challenge that you have. We have significant civil penalties for serious cyberbullying and yes, we do have some laws. You cannot use a carriage service to menace, harass or cause offence. There are criminal penalties there, but the simple fact of the matter is that police and prosecutors are overwhelmed, so you rarely see criminal laws being used to target those who are cyberbullying.

I’m not sure that it would be desirable anyway, and you’re applying that after the damage has been done. So that’s why prevention is so important: basically doing target hardening by getting companies to do more of the right thing.

Neil Fairbrother

Sorry, what do you mean by target hardening?

Julie Inman-Grant

I guess that’s a term that’s used in security a lot. Think about how the platforms and tools you develop for social interaction can be misused, and limit the ability for people to misuse them in ways that cause harm. And you’re starting to see companies do some of that through AI and machine learning: when they spot keywords that could be damaging, you know, reminders like “Are you sure you want to send that?”

Developing user empowerment tools: instead of just a blocking tool, coming up with a muting tool, so you don’t necessarily let the person on the other end know that they’re getting to you, but you’re not being damaged by the invective that’s being thrown at you.

Neil Fairbrother

Okay. So this is what I think is also known as personal resilience: you’re educating, informing and equipping individuals so that they become more resilient to the online environment. One of the other Service Provider Responsibilities you talk about is implementing a Social Contract at the point of registration. So when someone signs up, not only do you have the T’s and C’s to go through, but you’ve also got some kind of Social Contract. But we all skip past T’s and C’s; no one ever reads them. Won’t we simply do the same with a Social Contract? And how could you enforce this kind of Social Contract?

Julie Inman-Grant

Well, in the legislation that governs our work, we have minimum safety standards. We’re in the process of updating our legislation, and I fully anticipate that we will bolster those minimum safety standards, to require companies, as a social licence, as a contract to operate, to do a better job of investing, innovating, building, responding and providing these tools so that their platforms aren’t used to cause harm.

Neil Fairbrother

The second principle, User Empowerment and Autonomy, says that service providers should “…provide tools that allow users to manage their own safety”, but not all harms are a matter of safety. If you look at online gaming, for example, the online gaming industry is beginning to gain a fairly poor reputation for some of the practices it uses, which are often seen as exploiting children in particular for money and getting them involved in online gambling, using things like loot boxes, skin gambling and so on.

But as far as the children are concerned, they don’t know any different. For them, that is what gaming is because that’s the environment that they’ve grown up in and they don’t see that it’s harmful because it’s fun. How can you convey to children in that kind of environment that this is a risky behaviour and it is harmful?

Julie Inman-Grant

I mean, you’re asking the perennial parenting question of our time. You know, we don’t send our kids off when they’re five and six to join a pickup game of soccer with a bunch of strangers. We as parents can’t let them out into the big wide world of online gaming or online content without providing them with what we call the four R’s of the digital age: teaching them about Respect and Responsibility, and helping them build their digital Resilience and critical Reasoning skills; co-playing and co-viewing with them, letting them know what the guidelines are, setting up parental controls, all of these things. You know, these games are designed to make money. They’re designed to engage young people, and often they’re designed, whether it’s through the selling of skins or loot boxes or other means, to trick kids and then get their parents to pay money.

So these are practices that do need to be rooted out. I think there’s been a lot more visibility and transparency that this is happening, and governments are responding. Australia did its first inquiry on in-app purchases probably back in 2013. So this has been happening for a long time, and this is where we’re really hoping to see that independent regulatory authority be established in the UK, and we’re working with the Irish on their development of a digital safety commissioner.

These are global issues that require global solutions, and we need to be working together collectively to address them. I expect that over the next five or ten years, just as you have a global community of data protection authorities, we’ll see a range of commissioners and regulators beyond the e-Safety Office. Right now we’re the only ones doing this, so we welcome others, and I think every country will bring a different approach and a different set of tools, but as long as we’re working together, I think we can achieve a lot more.

Neil Fairbrother

Okay. The third of the three principles that you’ve identified is Transparency and Accountability, and within that, one of the things you’re asking for is for the online companies to publish an annual assessment of reported abuses, plus open publication of meaningful analyses of abuse metrics, the effectiveness of moderation and so on. But isn’t this simply asking them to mark their own homework?

Julie Inman-Grant

Well, you can never make improvements if you aren’t measuring and evaluating whether or not the tools that they’re putting in place, and the policies, people and practices they’re using, are being effective. Transparency and accountability are really challenging. You’ve got a number of companies that are putting out transparency reports. I wouldn’t describe any of them as demonstrating radical transparency; I’d say it’s selective transparency. But at the same time, you are talking apples and oranges, and so I understand the reticence of companies to publish abuse statistics, because they mean different things.

So this is where we’ve said: these are the outcomes that we want to see. We need to understand what the threat vectors look like, what your abuse rates look like, what interventions you’re applying and whether or not they’re being effective, because I think where we’re going to get to is that parents need guidance. They’re going to want a scorecard as to how relatively safe or unsafe a given platform is, and companies are going to have to realize that they’re going to need to start competing based on safety, on reducing toxicity and increasing civility on their platforms. This is a real turning point.

Having worked in the industry for 25 years, the whole idea for Safety by Design came out of my frustration, working for years within Microsoft, that they invested in Security by Design, Privacy by Design and Accessibility by Design, and I kept saying, why have you stopped short of Safety by Design? Skype is being used as a primary vector for livestreamed child sexual abuse shows and sextortion, so why aren’t we doing this in safety? What does it take, a reputational, regulatory or revenue concern, to make you act? It has quite frankly dumbfounded me that more companies haven’t been focused on safety as a real attribute, a way of building trust, a way of reducing risk and building reputation.

Neil Fairbrother

Yes, and reputation is coming under increasing pressure, obviously, as more and more cases of dreadful things happening come to light. But when it comes to transparency and accountability: you’ve worked in the industry, you’ve worked at Twitter, and there’s a lot of secret sauce in the code and the algorithms used by these companies, and transparency means that they need to open up how these things work. But they’re not going to do that, because they will claim it’s competitive privilege. It’s their secret sauce, it’s how they differentiate themselves. So why would they be transparent about how all of that technology works?

Julie Inman-Grant

Well, I think you’re right to point out that the algorithms are their secret sauce. They’re their trade secrets, and no, I don’t think they’re going to open up the hood there, but that’s not what we were asking. We were really asking about understanding the rates of abuse. What is the nature and type of abuse? How are they addressing these abuses? What’s working and what isn’t? That way we can help assess the overall safety of the platform, and they as companies can assess whether or not their interventions are being successful.

Neil Fairbrother

In the detail of the Safety by Design overview, there’s an interesting graph showing that the perception of online risks is far higher than what is actually experienced. So is there a danger that all of this is a bit of an overreaction and we’re going to kill the golden goose?

Julie Inman-Grant

It depends, again, on what the reaction and the interventions are. We take a very pragmatic approach. As I said, we try to work cooperatively with the companies where we can; we’re not trying to run them out of business. We’re trying to push them to do better and to work with us to create a safer environment for their users and for our citizens, and when things fall through the cracks, we’re asking them to act and to take down the content.

There is a risk. We see a lot of fearmongering, a lot of scaremongering, and we know from working with parents and educators and others that if you use too much of that and talk about all the ills, it results in an amygdala hijack, a fight-or-flight response. So parents end up engaging in device denial, which isn’t constructive, because it means that when something goes wrong online for your child, they won’t talk to you or another trusted adult.

Neil Fairbrother

And what do you mean by device denial?

Julie Inman-Grant

Device denial. It’s a common, I suppose blunt-force, approach that we see parents or even schools taking to limit the bad things happening online: to say, oh, we’re going to take away your phone, all this terrible stuff is happening, or we’re going to shut down your internet.

Neil Fairbrother

Yes. And I think there’s some pressure coming from, is it your Federal Education Minister, Dan Tehan, is that his name? He’s pushing to have an immediate ban on mobile phones in Australian classrooms, I think.

Julie Inman-Grant

Yeah. And we’ve seen that happening around the globe. This is not the first moral tech panic that the world has ever experienced, whether it’s the radio, the TV, or the internet. Now I don’t think anybody thinks it’s a great idea for little Johnny to be sending snaps during math class, but I think we need to target the behaviours and not the technologies.

I’ve never seen a case where pure technology bans have resulted in minimizing cyberbullying, image-based abuse or sexting. We know in Australia that most teenagers spend more than 33 hours online outside of the classroom. So we want to make sure that we’re coming up with interventions that don’t just shift the problem, or absolve schools or others of responsibility for incidents that may be happening on their platforms.

We want schools to be teaching these tools and these values, the Four R’s of the digital age: Responsibility, Respect, Resilience and critical Reasoning skills. We need to prepare our children for the workforce of tomorrow. Technology is here to stay. We need to teach them the skills and the behaviours and that includes the resilience and the critical reasoning skills to be able to navigate this world successfully.

Neil Fairbrother

Okay. We are running out of time, Julie, but I do want to ask you one final question, if I may. What is the e-Safety Women program? That sounds very interesting.

Julie Inman-Grant

e-Safety Women is a dedicated program under which we’ve trained about 10,000 of what we call “domestic frontline workers”. We know that TFA, or Technology Facilitated Abuse, is a feature in 98% of family and domestic violence cases. Usually it’s low-tech, you know, a hundred harassing text messages in a day, using GPS to track a former partner’s movements, putting a surveillance device on a teddy bear or under a pram.

But we’re seeing increasingly sophisticated abuse vectors with the proliferation of the Internet of Things. We’ve seen former partners flying drones over a safe house. We’ve seen women whose cars have inexplicably stalled when they drive more than five kilometres from their home, thermostats that are controlled remotely by phone, digital locks, smart speakers and AI, different kinds of surveillance devices that are being used to surveil, harass and stalk.

And so what we’re trying to do is teach women, particularly in domestic violence cases when they’re escaping a violent situation, to be able to use their technology and their phones as a lifeline for support, and to keep that normalcy of being connected to friends and family, rather than having that technology weaponized against them and used as a tool for debasement or harassment.

Neil Fairbrother

Okay. Well all the very best of luck with that program, it sounds like it’s very much needed today. Thank you so much for your time. It’s been a fascinating discussion. It’s great to hear what’s going on in another part of the world.

Julie Inman-Grant

Thank you so much for having me. And I look forward to seeing how the Online Harms white paper and all of that work pans out. We’re looking forward to having a friend in the UK.

 
