Advancing Ethics in the Games Industry

A Round Table

Francine Blumberg

August 12, 2024

Abstract

How can we institute good ethical practice in the games industry, safeguarding the welfare of players and employees alike? Inspired by the code of ethics drafted by the Ethical Games Initiative, this roundtable convenes industry regulators and leaders from Minecraft to Sesame Workshop to discuss these questions. The panelists highlight the importance and difficulty of finding common ground amid cultural diversity; the leverage offered by laws and regulations, positive aspirations, business cases, and groundswell employee demands; and the need to align scalable systems with contextual player experiences and behavior.

1 Introduction

How can we safeguard the rights and welfare of video game players and the developers who make their games? What are the main ethical issues facing the games industry? What does “good” ethical practice look like? How can we drive its awareness and adoption? Concretely, is a shared code of ethics for the games industry, as proposed by the Ethical Games Initiative, a plausible and desirable way to advance ethics in the industry? These are the wider questions informing the inaugural Ethical Games Conference (January 11–12, 2024) and the closing roundtable documented in the following transcript. Convened by Celia Hodent, it featured different voices and perspectives from policy makers and leaders in the digital game industry. The panelists touch on issues including the diversity of industry stakeholders and their interests; barriers to the adoption of codes of ethics, such as public perception and the business case; the difference between legal compliance and ethical aspiration; cultural diversity; and squaring the holistic and contextual nature of experienced harm with the need for technically and socially scalable solutions. Statements have been edited for length and clarity by Fran Blumberg.

2 Participants

Moderator Celia Hodent holds a PhD in psychology and is an independent game user experience consultant and founder of the Ethical Games Initiative (ethicalgames.org).
Régis Chatelier is Innovation and Foresight Project Manager at the Commission nationale de l’informatique et des libertés (CNIL), an independent French administrative regulatory body whose mission is to ensure that data privacy law is applied to the collection, storage, and use of personal data.
Eve Crevoshay is Executive Director of Take This, a nonprofit organization dedicated to decreasing stigma and increasing support for mental health in games. This includes considering the cultural and structural factors inside games that contribute to or detract from well-being.
Kate Edwards is CEO and principal consultant of Geogrify, a game culturalization consultancy, and CXO and co-founder of SetJetters. She is the former executive director of the International Game Developers Association (IGDA) and the Global Game Jam.
Carlos Figueiredo is Director of Player Safety at Minecraft (Mojang Studios, Microsoft) and co-founder of the Fair Play Alliance, a global coalition of gaming professionals and companies that develop and share best practices to encourage healthy communities and player interactions in online gaming free of harassment, discrimination, and abuse. He started his journey in online safety through his work on the Disney game Club Penguin.
Alan Gershenfeld is president and co-founder of E-Line Media, co-founder of the design agency Experimental, co-author of Designing Reality (Basic Books, 2017), and former chairman of Games for Change. Before that, he was SVP of Activision Studios, which he helped rebuild.
Katherine Lo is the Content Moderation Lead for Meedan, a technology nonprofit that builds software and programmatic initiatives to strengthen journalism, digital literacy, and accessibility of on- and offline information. She is currently a visiting researcher in the Center for Responsible Ethical and Accessible Technology (CREATE) at UC Irvine.
Miles Ludwig is Vice President, Digital Media, at Sesame Workshop, an American nonprofit responsible for the production of several educational children’s programs, including its first and best known, the internationally televised Sesame Street.

3 The Roundtable

3.1 Challenges in Establishing a Shared Code of Ethics

Moderator: What do you think are the main challenges to putting together a code of ethics and then enforcing it?
Alan Gershenfeld: I’m going to start by zooming out a bit. Interestingly, my oldest brother—shout-out to him—specializes in things like creating charters and codes of ethics in lateral non-hierarchical systems: trade groups, internet governance, really what we’re talking about here. So I’m going to talk a little bit about the methodology that I’ve worked on with him. The first step is identifying the stakeholders. There are many beyond developers and publishers: players, parents, trade groups, investors, regulators, researchers. But I’m going to focus on the industry because that is the focus of this panel.
Even within the industry, there are different stakeholders: the leadership of AAA companies, that is, the studios and publishers working on high-budget games. AAA companies’ top creative talent and employees are actually different stakeholders. The indie sector of independent game studios differs yet again from AAA, and the same goes for folks who focus on mobile games versus PC and console games—there is some overlap between these groups, but not complete overlap. As Miles mentioned, publishers who focus on children under 13 are different from those focusing on players over 13 years of age. So rather than looking at developers and publishers as one body, really understanding who the different stakeholder groups are is important.
Second is identifying their interests. Where do they align, where do they overlap? I’ll mention a few that may be intuitive but in some ways not totally intuitive. Clearly one is making or saving money. That tends to hold across most of the commercial industry. I think the folks from the Fair Play Alliance can speak to this: for instance, toxicity in the gaming community was becoming such a business issue for Riot Games—they weren’t able to attract new players—that they saw this as something they needed to address from a business perspective.
Another interest is improving the player experience. If players are happy, the business tends to do well. Another interest that I wouldn’t underestimate is employee motivation. Here, Playing for the Planet is another example of a successful alliance in the game industry, an alliance of commercial companies making environmental commitments. And in many ways, it grew out of employees who had a passion for the environment and games—not necessarily the top leadership at these companies. There were a few, but it was often employees. Leadership has to keep their top talent happy and motivated. If part of that is recognizing that they care about a code of ethics or they care about the environment, that’s an interest.
And then there’s reducing liability. I think identifying stakeholders, their interests, and then alignment and misalignment are key steps towards developing a code of ethics.
Now obviously, such a code will be much stronger and more likely to succeed if the industry is directly involved in the process of developing it. Here, the two most important questions for any game company are: “How important is this to you?” and “How difficult do you think it would be to implement?” If it’s really important and not that difficult to implement, you’ve got low-hanging fruit. I don’t think that’s the case here.
When I left Activision after helping to build the studios there, and I was seeing the billions of hours of lean-forward engagement, I got very interested in how we could create more value and mitigate more harm. Since then, a lot of value has been created through games for learning, health, and social impact, but much less has been done to mitigate harm. So I think having an aspirational code of ethics is a big step forward that people can start discussing. Then employees at companies will start to embrace it and it can bubble up internally. In many ways, that’s how Playing for the Planet had success.
Carlos Figueiredo: I definitely agree, Alan. Understanding the motivations and goals of the different people we need to involve in the conversation is absolutely critical. But while business is such a chief motivator, the genesis of the Fair Play Alliance wasn’t business. That was something we stumbled upon in the practice of forming the alliance. The genesis was really this shared aspiration—you used that word, and I love that. It was this shared aspiration to design games that are conducive to prosocial behavior and a thriving community. We stumbled into the wider conversation realizing that the main barrier many companies faced around implementing positive social systems and making sure that safety was top notch was understanding the relationship between that and business outcomes.
After 15 years in the industry, I prefer to think that this is just the right thing to do. We live in an increasingly unsafe world, and games can be a safe haven for a lot of people—I really believe that as well. I’ve seen a lot of good in games, and we should lean into this positive element and be aspirational, as you said, Alan. How can we aspire to something that we all share as values, as principles that will guide our practice?
I want to talk a little bit more about barriers. And I think one of the key barriers to implementing something like a code of ethics is understanding our biases. When we talk about ethics, do we run the risk of really over-indexing on Western ethics? I think we easily run that risk. Not to necessarily tee that up for Kate, but I know Kate has a much deeper experience in this area of global thinking.
Kate Edwards: When I worked for Microsoft, I established a team to handle cultural and geopolitical risk. One of the things that I ended up doing was being part of the five-member team that wrote the code of ethics for Microsoft. When they approached me to do that, I was kind of surprised because that was 20 years into the existence of the company and they had no code of ethics yet. You have to ask yourself why it took Microsoft so long in the first place.
I think part of the barrier is that when you’re talking to C-level people, when you say we need a code of ethics, the optics of that for a lot of people is: Why do we need a code of ethics? Are we doing something wrong? That’s often the perception, that now we need to contain wrong behavior by actually coming up with a code of ethics. That’s been my experience in some of the companies in which I’ve worked: to take that step, as good and positive as it is, can also be seen as an implicit admission of guilt. So tread very carefully around this issue. Even when I was approached to do the Microsoft task, it was kept extremely secret because at the time they did not want people to know that we actually did not have an official code of ethics. If you went through the internal website, you would not find anything. So that’s part of the barriers too: navigating those optics.
I think Diversity, Equity, and Inclusion (DEI), for example, is an easier thing to embrace and more universally accepted, whereas with the ethics issue I still think you have that implicit liability that is worrying to companies.
Moderator: Just for context, in 2020, Kate Edwards, Katherine Lo, Carlos Figueiredo, and I started the Ethical Games Initiative. We put together a draft, an aspirational draft, as Alan would say, to try to address broadly what a code of ethics could look like. It’s really hard to articulate something that would work globally for all different games and situations, but the idea was to stay as broad as possible, and then derive specific guidelines for specific situations. Does anyone want to add to what has been said so far?
Eve Crevoshay: I think what’s so interesting is this juxtaposition of business case and ethics. We recently did some work with the audience research company Nielsen. We released data showing how many players are playing less or spending less because they experienced toxicity online. And the numbers are really high. The right thing to do serves people. And when people are served, they are happier and they tend to stay engaged and spend more money. The same is true on the developer side. When developers are treated like human beings, they do better work. So one thing follows the other. You treat people well, you recognize their humanity, and then from that comes something better.
I also want to touch on something else that Kate was saying about the industry: not wanting to say something publicly and being very afraid of the blow-back—this fear of players and desire to cater to them. As you know, some really harmful things have come out of that pattern in terms of the culture and expectations of players, the norms and behaviors. We think about games as shaping and reflecting culture all the time. But player behavior around games shows the same thing: it shapes and reflects online culture and broader cultural norms. I do think there’s a moment here to think critically about the culture that we want to reinforce, the norms that we want to set—not just around content and the way we treat people in studios, but the behavior that we’re encouraging. I think the Fair Play Alliance and what they’re doing is really exciting, because we can set new norms to change the feedback cycle around what is acceptable among players, what is appropriate, and how we think about it.
Katherine Lo: I really appreciate you calling out the interplay between online gaming spaces and broader internet culture and how a code of ethics really has greater implications beyond specifically developing a particular game.
I like to look at this through the lens of implementation and measurement: how are we expecting people to implement these codes of ethics, and how are we measuring what we’re implementing? That’s why I also appreciate that at this conference, people are sharing tool kits, standards, ways to think about ethics in more structured, concrete, granular ways.
The Fair Play Alliance has documentation to help us consider what metrics we are collecting and what incentives we have for collecting them. We’re talking about making business cases, and in many cases, metrics themselves end up introducing issues. You don’t want to admit that you’re screwing up, and by committing to measuring ethics, you are now promising to reveal how unethical you are, either internally or externally, in terms of PR coverage and reporting to investors. And then, it’s the player base reacting to some of this. So transparency really is an important value, but also adds to the reluctance to implement. And that’s where a code can help, not just as guidance but also as a standard of what we expect of everyone equally.
Alan Gershenfeld: One thing I would encourage is looking at other sectors where codes of ethics have evolved over the past 10 years. There are complex sectors, like DNA sequencing, where codes of ethics have emerged and even gotten to the point of being monitored and enforced, either through self-monitoring or external monitoring. That should be part of the conversation.
I also want to put out there that not everybody is in agreement on what is and isn’t ethical. That is part of identifying stakeholders and value alignment and misalignment. Similarly, in some areas, there might be conclusive science; in others, the science could be emerging or conflicting. I wouldn’t assume there is a single, monolithic “what’s ethical” in some areas.
I was going to just make a point about aspirational futures because I’ve been thinking a lot about that and we actually have a whole business focused on designing aspirational futures. Half the puzzle is having the collective vision of an aspirational code of ethics, which I think can and should emerge. The way it can then develop from aspiration to realization is rooted in game design theory: You have a goal, let’s call that the aspirational vision. You have agency to act. You can fail safely. And you get feedback as you pursue your goal—from “the game,” from peers, and from mentors.
Say we’re reducing toxicity and that is creating a better player experience, which yields economic and social benefits. That’s a feedback loop where a code of ethics affords agency and is making a difference. So rather than a monolithic view, we should help actors see these areas where, individually and collectively, actions are making a difference. It’s critical to communicate that, especially where acting aligns with people’s interests, for that then starts to build the stable steps toward the larger vision. So I think insights into what makes games work are relevant to how we turn a code of ethics into a reality.
Moderator: There’s a lot to unpack there. To speak to the science, the whole point of this conference is to see the state of science today so that we can find at least some settled elements: What are known true harms in what precise contexts?
I think the economic argument is going to have more weight, especially now with a more active Federal Trade Commission (FTC), the US regulatory agency. I don’t know if you followed it, but Epic Games had agreed with the FTC to pay a settlement of half a billion dollars for using dark patterns and violating children’s privacy.
On different values and views of what counts as ethical, the intent of the Ethical Games Initiative is to have many other discussions in the future that invite a lot of other people, so that we have diversity in perspectives and cultures.

3.2 Challenges in Enforcing a Shared Code of Ethics

Moderator: I would like to talk more about monitoring and compliance. If we manage to have a code of ethics, how can we make sure that we enforce it? Since many people might not want to jump in voluntarily, this is where regulatory bodies can have an impact. If we have a code of ethics and identify specific harms, then we can have laws and policies around that.
I would like to turn to Régis. What do you think would be the challenges of enforcing such a code of ethics? And maybe you can talk a little bit about what you’ve experienced so far in terms of data protection.
Régis Chatelier: I think we have two different topics here: one is law; the other is ethics. In France, my organization, CNIL, is responsible for implementing the EU General Data Protection Regulation (GDPR). We also have a mission, under French law, to lead the debate on the ethical and social issues raised by digital technologies. That’s why a few years ago, we published a report on the ethics of artificial intelligence and organized an ethics event. So, our mission is not directly ethical—although the core of EU data protection rules is based on fundamental rights.
So in terms of law, we work with the text and regulations of the GDPR for data protection. But now we have new regulations in Europe. We have the Digital Services Act for online content, the Digital Markets Act, and now the AI Act. All these regulations are based on translating some choices, sometimes ethical, into law. Here, compliance is not negotiable. These are legal obligations.
It can be different with ethics. That’s why I think a code of ethics should be based on this dual framework of law and ethics. It should not just draft an ethical charter, nor just be limited to compliance with the law: it should use the law as a basis for making proposals that then aim to do better, not just deliver the bare minimum as we saw with data protection.
In 2018, when we were promoting the GDPR to everyone, our idea was that you had to be compliant, but you should also try to do better and understand data protection as something that can be positive for your business. If game developers can similarly approach their rules not as constraints but as opportunities and goals, maybe they can use a code of conduct or ethical charter within the organization to make people aware of why we need ethics.
With the GDPR, our studies showed that making it mandatory to understand what data protection is in an organization made people aware of the risks in their personal life. At times, people who had viewed the GDPR as a constraint came to understand that it’s for themselves, and they now use data protection for themselves.
This is really interesting: if you put constraints within a company, it can work to educate people, and the same people will educate others. Just as an example, among the recommendations we made on artificial intelligence was the principle of vigilance and reflexivity: ethical issues are engaged methodically and deliberately and are not just the responsibility of an external ethics committee. This is what we call Privacy by Design in data protection. For that to work, it has to be “by design,” where you work with all stakeholders within the company to implement dialogue on every part of your ethical charter. That’s why I think it can work in a company, not just as a banner, but really be integrated into practices. And that’s why I think the GDPR imposes on everyone in an organization the need to safeguard the fundamental rights of customers.
The limit to this approach may be that it relies on an enforceable law. With ethics, the idea is not to enforce it, but to make people really take care of ethics. As Alan said, we don’t all have the same ethics, but we have to co-create it within the scope of the data regulation, which sometimes is based on ethics.
Moderator: That’s a great point that is resonating with accessibility specialists I speak with. They say you need to be compliant with accessibility laws like the Americans with Disabilities Act in the US, but that doesn’t result in inclusive design. We need laws, especially to protect specific people such as children and adolescents here. People should design in an ethical way and think about it early on.
Eve Crevoshay: I’m very sensitive to the different political environments that we are operating in and how much more functional from a policy making and social engineering perspective the EU is than the US. In the US, even developing a legal framework is difficult. So from my very American perspective, developing a code of ethics is a good start even though there’s not a culture here in the States to do so.
Alan Gershenfeld: To speak to monitoring and enforcement, I think there are, in some ways, three big buckets if the goal is to get to more positive behavior at an institutional and a company level that happens over time.
One is government regulation, which can look like China at the most extreme. It exists in every country in different forms, and it’s hard for government regulation to move as fast as our constantly changing industry. There is continually new research and there are new challenges. But government regulation is one tool in the toolbox.
Another category is industry self-regulation, which people are justifiably cynical about. That said, if you look at America, the current age rating system, the ESRB, was basically the trade group ESA [Entertainment Software Association] saying we’re going to self-regulate. Everyone was skeptical, but the effort has been of some value to players and parents. Is that better than the government doing it? Fair debate. Is it better than nothing? Fair debate. But that is another tool in the toolbox: industry self-regulation through trade groups, which exist around the world.
The third category is one we have alluded to, one that is very powerful and speaks to aspirational goals, which is having key employees embrace it and just start the patterns of behavior in their own companies and in their own projects. And again, if this is a talent-driven industry, if the folks who care about and are making the games are saying, “this is important to us,” and they start doing it, there will be a repeated pattern of behavior. I’m seeing this with the Playing for the Planet initiative regarding environmental commitments. I think it’s similarly happening with the Fair Play Alliance. All three are powerful ways to help move this process forward.

3.3 Challenges of Cultural Diversity

Kate Edwards: I think it’s time to talk about cultural–ethical relativism, which is the elephant in the room when it comes to this topic: we have spoken about it mainly from a Western perspective.
I’ve done consulting work for Chinese companies, quite large and well known, and I can guarantee that a code of ethics is probably not going to come into existence there. And if it does, it’s going to look nothing like anything that we would come up with here. It’s true in China and elsewhere that industry’s concept of ethics is going to be more about reinforcing whatever cultural hegemony they are pushing. That’s really what it comes down to: ethics fits into cultural hegemony.
We’re already seeing it with game content—how different content is allowed or disallowed in certain regions. I’m not talking about the big three—violence, profanity, sex—I am talking about all of the explicit and implicit messaging in games, the use of allegory and the like, which more and more ratings boards in different countries are getting in tune with. They’re already in tune with this for film and television programs, and getting more savvy around video games. It is going to be a very complicated issue, but I could see us conceiving of these codes of ethics as reflecting a certain aggregation of values in a certain region. And then maybe we find ways to translate them and have some level of interoperability between them. Because I do think the baseline will be the same in terms of what we want to achieve for players’ experience and in the workplace, but the reasons why it’s being done will be vastly different.
Eve Crevoshay: I have a clarifying question for Kate. Do you think an outcomes-based code of ethics may be the place to start?
Kate Edwards: Exactly, because what do we all want at the end? If we start with why we want a code of ethics—usually some moral or values-based realm of discussion—that already kind of torpedoes what could come after. So instead, if we ask concretely, what do we all want? We want this, we want this, we want this. If we work backwards from those outcomes, that might be more productive.

3.4 Challenges around Safeguarding Children and Young People

Moderator: I want to take some time to talk about minors and children more specifically. We can all agree that this is a population that we want to protect, and we can see that across the globe, including in China, where there are laws to protect children. I want to hear Miles’s perspective, given his work at Sesame Workshop, which has to speak to children, but also [be] inclusive and benevolent. How do you do that?
Miles Ludwig: As a nonprofit, for a long time, we thought of not collecting data as an ethical value. You know: “We’re the gold standard, we’re better than the rest, we don’t collect data.” So that was an ethical value, and we were quite self-satisfied with that. And now, we see that parents are shocked. Like, “My kid just logged in again and you don’t remember what they did last week? How could you not know my kid?” So now parents and users of all ages—children as well—have this expectation that there is in fact a persistent, two-way “know and be known” relationship between media, including games and platforms, and themselves. It’s now mandatory.
We’re a 54-year-old organization, so it can take some time to turn our battleship around. But we are moving quickly to become intentional about how we instrument our games, how we keep data, build profiles, present experiences that persist across sessions, and where we can actually assess and build on where children are at from a skills, interests, and progression point of view, to lead to better educational outcomes.
We want a code of ethics because, broadly speaking, we want a better world. But I think selfishly for Sesame, we are unable to do some things that people want us to do and that we should do. Social gaming is a very obvious example, especially with a pre-literate age group. We want to be able to create interactions between children and other real children safely. We are also in a time of real concern about mental health. You know, we have a fairly significant presence on Roblox, and this is one of the spaces where online child welfare is going to be tackled and hopefully solved first. We’re very hopeful that working with Roblox will be one of the places where safe social gaming emerges and we can answer some of the questions that are coming up here.
At the moment, we have a lot of conversations about the downside of competition, and how creating competitive play spaces and competitive gaming actually promotes toxicity in the current culture. But guess what? Humans want to play games where they keep score. It’s really fun. So right now, I find myself having to push back internally against anxiety that, “Oh, if we keep score, if we allow children to compete rather than force them to cooperate, it’s going to lead to toxic behavior and a lot of bad feelings!” Again, this is just something that is so broadly broken that we’re facing that same challenge. I think that if we want to have children compete against one another, and we want to do so in a way that doesn’t lead to toxicity and doesn’t leave kids feeling worse, we desperately need a code of ethics of the sort that this community is working on.
There are so many great things that we could do to help and educate children, and many of them we can’t do because of these problems today.

3.5 Challenges in Content Moderation

Moderator: So one thing we did not get a chance to touch on yet is content moderation. In the online survey we ran in parallel to the panel, many people have been asking: How can we develop a code of ethics when players themselves may be unethical or even adversarial? We can design many things differently so that they’re not fostering so much competition or toxicity. But there’s still that question of how we can deal with unethical and adversarial players. I know Take This has also been working on extremism as something that exists outside of games as well. So Kat, Carlos, what are the biggest challenges here?
Katherine Lo: The question that everybody raises with content moderation is scale. Games have a challenge that’s in some ways much harder than social media platforms: there are bodies in the world acting in certain ways—it’s not simply text or images. So how do we scale systems where the information about toxicity that is automatically collected is often just text and images, classified with whatever AI technology is available? When users want to advocate for themselves and report abuse when it is happening to them, how much agency and scope do they have to reflect their own experience in that? It’s often not much, because to scale, content moderation systems typically focus on a single unit of content or interaction.
So how can we holistically reflect the experiences of players who are experiencing harm? Any number of things can be used to harm people, to cause negative feelings in association with someone’s identity, nationality, and the like. Are the ways that users can protect themselves through muting or kicking or reporting effective? How do you make these systems scale in a way that’s still human centered? This is an area that moderators arguably understand best—how to reduce harm. Content moderation teams can help build better systems and work with designers to account for toxicity that might occur if a game feature is introduced. But are we going to account for these likely harms before we introduce a new feature? There are so many different parts to this ecosystem, in terms of designing the system, implementing it at scale, and then making that system accountable and protecting the moderators who will also need support.
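Katherine Lo’s point about systems that reduce a report to a single unit of content can be made concrete. What follows is a minimal, purely hypothetical sketch of a report payload that also carries surrounding chat, gameplay signals, and the reporter’s own framing; every name in it (AbuseReport, build_moderator_view, and so on) is invented for illustration and does not describe any particular platform’s reporting API.

```python
# Illustrative sketch only: a report that carries context, not a single message.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class ChatMessage:
    sender_id: str
    text: str
    timestamp: float  # seconds since the match started


@dataclass
class AbuseReport:
    reporter_id: str
    reported_id: str
    category: str                    # e.g., "harassment" or "hate_speech"
    flagged_message: Optional[ChatMessage] = None
    surrounding_chat: List[ChatMessage] = field(default_factory=list)  # context window
    gameplay_signals: List[str] = field(default_factory=list)          # e.g., "repeated friendly fire"
    reporter_note: str = ""          # free text: how the reporter experienced the harm


def build_moderator_view(report: AbuseReport) -> str:
    """Assemble a human-readable summary so a moderator sees the whole
    interaction, not a single message in isolation."""
    lines = [f"Report: {report.category}, filed by {report.reporter_id} against {report.reported_id}"]
    if report.reporter_note:
        lines.append(f"Reporter's account: {report.reporter_note}")
    for msg in report.surrounding_chat:
        marker = ">>" if msg is report.flagged_message else "  "
        lines.append(f"{marker} [{msg.timestamp:7.1f}s] {msg.sender_id}: {msg.text}")
    if report.gameplay_signals:
        lines.append("Gameplay signals: " + ", ".join(report.gameplay_signals))
    return "\n".join(lines)
```

The point of the sketch is only that what reaches a human reviewer is the interaction, not an isolated message.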
Moderator: That’s a great point and one reason why the code of ethics that we’re trying to put together is concerned with workers—they’re critical to the process.
Carlos Figueiredo: I want to touch on content moderation as part of a larger ecosystem, because with content moderation we’re typically talking about something that happens after the behavior has manifested. It’s downstream. But when we think about content moderation within the larger ecosystem of what I call player safety, with game systems and social systems, the things that are upstream, the things that could lead to negative and disruptive behavior, come into view.
For instance, not everyone experiences safety the same way. I can almost guarantee that if we poll all seven of us here in this panel right now, we will have different safety needs, different levels of comfort and what we’re willing to put up with. I say this because you can see the potential pitfalls in trying to use a one-size-fits-all approach to safety already. And safety is only one element of ethics for this particular conversation.
Further, when we’re talking about ethics, we’re not just talking about bad stuff; we’re talking about the ideal state of things. When I think about games, I think about connection and sense of belonging, specifically in multiplayer games, and creativity or the ability to do things together, to create and imagine things together. So if we are aligning around a rough set of principles of what we want to see happening in games, that’s a good start. From there, we can hopefully align on what the real destructive types of behaviors are, which run completely contrary to the very purpose of online games, to our positive principles. If we can align across the industry or have this shared language, hopefully there’s an opportunity even to bridge across cultures. We need to find a similar footing at least for some of those negative outcomes. I encourage us to think in those terms.
When you asked about the hardest challenges in content moderation, I couldn’t think of anything harder right now than moderating gaming communities where there’s discourse about the various conflicts and wars happening in the world.
Eve Crevoshay: Carlos, I’ll jump in. Where can we find common ground? I think a lot of people want to find common ground around children and safety, and there are some basic shared ideas around grooming and radicalization and extremist recruitment. But I think there are different cultural ideas about what it means to radicalize or recruit someone or groom someone. So I wonder how far we can go here in terms of shared safety standards. And I also think that we currently work pretty much exclusively in a Western context on this issue. Basic cultural differences crop up here. How do we give cultures and spaces and people the opportunity, the flexibility to bring their own values to the table as long as those values don’t hurt people who are not part of that community or not opting in? How do we give as much choice as possible in spaces, recognizing that that is also fraught?
Katherine Lo: One related issue I want to point out is companies in the US giving a mandate to people outside of the US, or localizing games for places and people outside of the US. Here, you’re often just sort of carrying Western values through, like a Google Translate. Also, when companies hire teams and moderators, they often don’t understand the sociopolitical or regional context, say the history of a conflict within a country, which affects what constitutes hate speech. For instance, when I was studying toxicity in Nigeria and Kenya, tribal hatred was not covered in a lot of social media companies’ policies, and in many Western contexts we don’t understand what that means.
Kate Edwards: I agree because I deal with that every day. That’s partly why my job exists: navigating culturalization is pretty nightmarish. One person’s hate speech is another person’s cry for freedom. How do you discern that? What qualifies you to discern that, based on what? That comes back to forcing companies to take a stand. They can’t just hide in the shadows.
Katherine Lo: One component of this is AI automation of moderation: it can be very helpful for reducing harm and for scaling well. But it’s important to use AI to assist, not replace, people—for example, helping people understand the cultural context: when they search a term, making everything a content moderator needs available in front of them. How do we make it so that automation is helping us practice our values rather than trusting it to enact our values for us?
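Katherine Lo’s “assist, not replace” framing can likewise be sketched in code. The snippet below is a deliberately simplistic, hypothetical illustration in which automated scoring only triages and annotates messages for a human review queue; the keyword classifier, the context lookup, and all names (triage_for_human_review and so on) are invented stand-ins rather than any platform’s actual tooling.

```python
# Illustrative sketch only: automation that assists a human moderator rather
# than replacing one. The classifier and context lookup are stand-in stubs;
# the final decision is always left to a person.
from typing import Callable, Dict, List


def keyword_toxicity_score(text: str) -> float:
    """Stand-in classifier: a real system would use a trained model here."""
    flagged_terms = {"idiot", "trash", "uninstall"}
    hits = sum(term in text.lower() for term in flagged_terms)
    return min(1.0, hits / 3)


def cultural_context_notes(text: str, notes: Dict[str, str]) -> Dict[str, str]:
    """Stand-in for the 'search a term' workflow: surface regional or historical
    notes a moderator may lack, instead of deciding for them."""
    return {word: notes[word.lower()] for word in text.split() if word.lower() in notes}


def triage_for_human_review(
    messages: List[str],
    notes: Dict[str, str],
    score_fn: Callable[[str], float] = keyword_toxicity_score,
    threshold: float = 0.3,
) -> List[dict]:
    """Rank messages for a human review queue; nothing is removed automatically."""
    queue = []
    for text in messages:
        score = score_fn(text)
        if score >= threshold:
            queue.append({
                "text": text,
                "score": score,
                "context": cultural_context_notes(text, notes),
                "decision": "PENDING_HUMAN_REVIEW",  # automation never issues the verdict
            })
    return sorted(queue, key=lambda item: item["score"], reverse=True)
```

The design point is simply that automation narrows and enriches what the moderator sees; it never issues the verdict.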

3.6 Engaging Reluctant Stakeholders

Moderator: I would like to conclude with a great question from one of our conference speakers, Adam Jarrett: “We’re all at this conference because we care about increasing ethical behavior. But how can we reach people who don’t want to come to the table?” I’ve certainly talked to people who don’t think it’s actually a good idea or don’t want to even have to tackle this because it’s really complicated.
Kate Edwards: It’s pretty simple: you have to tie it to money and how it is going to help the bottom line of companies in some way. “How can it help the bottom line?” is a better framing than “How can it hurt?” because a lot of companies already know how it could hurt them. In my culturalization work, I needed to convince companies that taking account of political and cultural nuance in their game content was necessary to show respect for players and their cultures. That argument plays better today than 30 years ago, when the argument needed to be that making content viable in different markets would yield more money because your content would meet local expectations around the world. I think for a lot of people in the industry, especially at the C-level of senior executives, that connection has to be made in some form.
Katherine Lo: I think surveying what people are doing right now is key. One of the projects I’m working on with Alex Leavitt at Roblox is cataloging across games what every user reporting form is like, what all its features are, and how it works. If we do more of these types of surveys across companies, we can name and shame the people who aren’t doing it. We can also show how well some people use some of these measures.
Carlos Figueiredo: It’s exactly the folks who disagree with the idea of a shared code of ethics that probably could offer a lot of great insights. We need to hear their voices, however that can be facilitated, from conferences to industry gatherings. Actively seeking dissenting voices and understanding why people think it’s a bad idea probably would teach us a lot.
Eve Crevoshay: I think this industry is so diverse that there are spaces where this is a welcome conversation and spaces where it isn’t. Using the welcoming spaces to be generative, to create possibility, examples, and diverse models, and then have those percolate through the industry—I think that’s a possible and hopeful shift. We don’t have to work with the biggest companies from the outset; we can have these conversations and embrace the diversity of the industry to get this started.
Régis Chatelier: I think the money is important and you have to convince the big companies that they have to tackle ethics, maybe by looking at other businesses, like social media and big tech platforms. In the end, many countries and regions like Europe are trying to tackle these topics, with some enforcing fines and passing new laws. Maybe you can first try to work with the people who want to do this, so that you reach a critical mass. We have to work with the people who want to work on it, and not use the cultural differences in the world as an argument not to start the process. We have to start somewhere; we can’t wait until everyone agrees. We have to create with what we have now.
Moderator: I agree with you. I come from user experience, so I’m familiar with such an iterative design process, which I think is a good way to approach it. So here are the takeaways for me: We need more science so that we can distinguish harms versus moral panics. We need to define what it means to be safe, depending on the context and the population. We have to be more diverse and multicultural, so we need to invite more people to this conversation. We can think of our efforts as developing an aspirational code, so that we can be broad and get people to join us bottom up. Elements from such an aspirational code can be taken by policy makers and regulatory bodies to help them create and enforce new laws when we know that there are clear harms. And finally, money is a forcing function that can also help convince the people who have yet to be convinced. Thank you all so much.
Francine Blumberg

Blumberg's research interests concern the development of children's attention and problem solving skills in the context of informal and formal digital learning settings.
