From the course: Ethics in the Age of Generative AI

Applying Vilas' framework in a real-world situation


- Now it's time to put the framework we covered in the last video into practice. I want you to consider the following scenario involving the CTO of a technology company. Sarah enters the conference room for an emergency meeting; something serious is happening. She's told that the company's new AI-driven chatbot, designed to help customers with online orders, has been making inappropriate, inaccurate, and even offensive responses to customers. Sarah knows this isn't just a product issue; it's an issue grounded in ethical decision making. Her immediate step is easy: she needs to take the chatbot offline, and she does so. But then she has to figure out what her next step is.

As a technologist, she knows to start with data. In other words, how was this tool trained? From talking with her team, she learns that the underlying dataset came from an unscrubbed set of internet conversations. In the rush to production, the team didn't run the dataset through a set of filters and tools. She knows what the next step is. She directs her team to use a new dataset composed primarily of the company's own database of customer interactions, and only after scrubbing the data of any personal information. She then directs that the model be run through a number of bias detection processes and filters.

But she knows that data isn't the end of the problem. As she continues to inquire, Sarah finds out that customers are using the chatbot for more than customer service. They're taking the opportunity to have far-ranging conversations on topics that have nothing to do with the company or the product. She knows the technology team should have reviewed other ways that customers might use the tool and considered ethical safeguards. Because the scope of use has widened, the team needs to limit what subjects the chatbot discusses with a customer, and they need to make sure that responses are tailored to the specific expertise that the chatbot is supposed to have. Sarah brings in the customer support team and engages frontline workers on how they experience conversations with customers. She wants to know: what do clients usually want to talk about? Using that information and a shared design process, the team builds entirely new boundary conditions for the topics that are relevant for the chatbot to discuss and limits excessive non-business conversation.

Finally, as Sarah queries the tool herself, she realizes she has no way to explain some of the really insensitive outputs the chatbot is producing. Her team needs to allow for better traceability and evaluation of the tool's outputs. So Sarah encourages the team to build multiple input-output checkpoints, and she encourages the creation of an internal audit process to regularly monitor and check outputs from the chatbot. Accompanying this, she adds a risk assessment and response framework that allows a user to flag inappropriate conversations in real time so the team can address issues immediately.

Now, let's acknowledge that this is a significant effort by the company. It could range from weeks to months, but if the company had followed ethical practices when designing the chatbot, this expense, time, and stress could have been avoided entirely. Ethical analysis needs to be intertwined with the initial design of new products and with every phase of deployment.
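To make the data-scrubbing step concrete, here is a minimal sketch of the kind of redaction pass Sarah's team might run before customer interactions enter a training set. The two patterns and placeholder labels are illustrative assumptions; a production pipeline would use a dedicated PII-detection tool and cover many more categories (names, addresses, account numbers, and so on).

```python
import re

# Illustrative patterns for two common kinds of personal information
# (assumption: a real pipeline would detect far more than these two).
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b(?:\+?1[-. ]?)?\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}\b"),
}

def scrub_pii(text: str) -> str:
    """Replace detected personal information with typed placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

# Example: scrub a support transcript before it enters the training set.
record = "Hi, this is Jo. Reach me at jo.smith@example.com or 555-123-4567."
print(scrub_pii(record))
# -> "Hi, this is Jo. Reach me at [EMAIL] or [PHONE]."
```

Note that the customer's first name survives this pass, which is exactly why real scrubbing relies on trained entity detectors rather than a handful of regexes.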
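The boundary conditions the team builds with frontline workers could take many forms. Below is a minimal sketch assuming a keyword allowlist distilled from the support team's input; a deployed system would use a trained intent classifier instead of keywords, and `generate_reply` is a hypothetical stand-in for the call into the chatbot model.

```python
# Topics the support team identified as in scope (illustrative assumption).
ALLOWED_TOPICS = {
    "orders":   ["order", "tracking", "delivery", "shipping"],
    "returns":  ["return", "refund", "exchange"],
    "products": ["widget", "size", "color", "stock"],
}

REFUSAL = ("I can help with orders, returns, and product questions. "
           "For other topics, please contact our support team.")

def generate_reply(message: str) -> str:
    # Hypothetical stand-in for the actual chatbot model call.
    return f"Happy to help with: {message}"

def in_scope(message: str) -> bool:
    """Check whether the message touches any allowed topic."""
    text = message.lower()
    return any(kw in text for kws in ALLOWED_TOPICS.values() for kw in kws)

def respond(message: str) -> str:
    if not in_scope(message):
        return REFUSAL              # politely decline out-of-scope chat
    return generate_reply(message)  # otherwise answer within its expertise

print(respond("Where is my order?"))          # in scope -> model reply
print(respond("What do you think of politics?"))  # out of scope -> refusal
```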
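For the traceability piece, one simple way to realize input-output checkpoints plus real-time flagging is an append-only audit log keyed by an exchange ID. The file name and record fields here are assumptions for illustration, not a prescribed design.

```python
import json
import time
import uuid

AUDIT_LOG = "chatbot_audit.jsonl"  # assumed append-only record for the audit team

def log_exchange(user_msg: str, bot_reply: str) -> str:
    """Record every input-output pair so problem replies can be traced later."""
    exchange_id = str(uuid.uuid4())
    entry = {
        "id": exchange_id,
        "timestamp": time.time(),
        "input": user_msg,
        "output": bot_reply,
        "flagged": False,
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return exchange_id  # the UI can attach a "flag this reply" control to this ID

def flag_exchange(exchange_id: str, reason: str) -> None:
    """Called when a user flags a conversation; queues it for immediate review."""
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps({"id": exchange_id, "flagged": True,
                            "reason": reason, "timestamp": time.time()}) + "\n")
```

Writing flags as new log entries rather than editing old ones keeps the record tamper-evident, which is what an internal audit process needs.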
Now, there's a happy ending here: acting on our framework, Sarah's company is able to address the ethical dilemma surrounding its chatbot and get back online, selling happy widgets to happy customers. How would your organization handle a dilemma like the one faced by Sarah's company? What steps will you take today to center ethical analysis in your decisions about AI product design?
