Understanding the Ethical Principles for Standards-Based Conversational AI (Linux Foundation)
If you do ask, it is important to check the results and incorporate the learnings back into the chatbot. New York City has passed a law requiring companies to audit their AI systems for harmful bias before using them to make hiring decisions, and members of Congress have introduced bills that would require businesses to conduct algorithmic impact assessments before using AI for lending, employment, insurance, and other consequential decisions. The Open Voice Network invites you to explore the fast-paced world of conversational AI and voice technology while ensuring that ethics and responsible practices remain at the forefront of every AI interaction.
The framework highlights the significance of user feedback, integrating it as a core component of evaluation alongside subjective assessments and interactive evaluation sessions. By combining these elements, the paper contributes to the development of a comprehensive evaluation framework that fosters responsible and impactful advancement in the field of conversational AI. Although the paper mentions various approaches, its focus is on the ethical aspects rather than on an analysis of the approaches themselves.
Missing Ethical Issues
She has 15 years of experience in software engineering, working with many technologies across the full stack. She understands the changing paradigms that serverless is bringing to software architecture and that cognitive technology is bringing to human-computer interaction. She loves big ideas, discussing technology, sharing what she is learning, and building things that make life better for people. There are several conceivable tools that are not clearly present in the above analysis. Fairness, explainability, and privacy are most often addressed with algorithms and software approaches.
One of the most challenging aspects of conversational AI is building the Natural Language Processing (NLP) / Natural Language Understanding (NLU) model. Given the unstructured nature of conversation, it is difficult to anticipate everything a user may ask, or how they may ask it. Cost savings is often a driver for chatbots and voice assistants, as enterprises hope to reduce the load on live agents. As the influence of AI continues to grow, ethical conversational design standards must become a framework that all developers and designers follow.
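The difficulty of anticipating user phrasings can be made concrete with a toy intent classifier. This is a minimal sketch, not a production NLU model; the intent names, keyword sets, and example utterances are invented for illustration.

```python
# Toy keyword-overlap intent classifier. Real NLU models generalize across
# phrasings; this naive matcher fails on any wording it has not seen,
# which is exactly the challenge described above.

INTENT_KEYWORDS = {
    "check_balance": {"balance", "account", "funds"},
    "transfer_money": {"transfer", "send", "move", "money"},
    "human_agent": {"agent", "human", "person", "representative"},
}

def classify(utterance: str) -> str:
    """Score each intent by keyword overlap; fall back to a safe default."""
    tokens = set(utterance.lower().split())
    scores = {intent: len(tokens & kws) for intent, kws in INTENT_KEYWORDS.items()}
    best_intent, best_score = max(scores.items(), key=lambda kv: kv[1])
    # Unanticipated phrasings score zero for every intent.
    return best_intent if best_score > 0 else "fallback"

print(classify("please transfer money to savings"))  # transfer_money
print(classify("I wanna wire some cash"))            # fallback (unseen phrasing)
```

The second utterance expresses the same intent as the first, yet the matcher cannot see that; closing this gap is what makes NLU modeling hard.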
Simulated Emotion and Empathy Can Help Build Trust
The approaches vary greatly in their degree of specificity and operationalizability. Many are concrete and address a very specific potential ethical shortcoming, e.g., regarding bias or explicability. One interesting case describes a design pattern for achieving privacy using the Unified Modeling Language (UML). Three observations about principlism stand out. Firstly, policy documents such as the EC guidelines mentioned above strongly focus on principlism and practically adopt, or at least implicitly suggest, principlism as an approach towards ensuring the ethicality of AI systems. Secondly, philosophical principlism often focuses on debating the underlying rationales of the principles, while many framework documents focus on just the set of principles. Thirdly, the principles, although laudable, provide very few concrete constraints on system design.
Take, for example, the case of removing bias from a model and ensuring that the AI system treats everybody fairly. One of the problems is that fairness has many different interpretations, and it is not straightforwardly clear which mathematical fairness function to use in a given application context. On the other hand, although such functions may be hard to design ex ante, a running system will usually implicitly define one. The question of mapping the notion of fairness onto a computable function is thus inescapable, albeit very difficult to settle without a concrete application context. In several frameworks, there is little distinction between concepts and principles. For example, explainability can be taken both as a requirement (a principle) and as a basic concept requiring further conceptual clarification.
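One such mathematical fairness function is demographic parity, sketched below on invented decision data. Competing definitions, such as equalized odds, can disagree with it on the same decisions, which is precisely why the choice depends on the application context.

```python
# Demographic parity: do two groups receive positive decisions at the
# same rate? The decision lists below are hypothetical, for illustration.

def selection_rate(decisions):
    """Fraction of positive (e.g. 'hire') decisions in a group."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_a, decisions_b):
    """Absolute difference in selection rates between two groups.
    Zero means both groups are selected at identical rates."""
    return abs(selection_rate(decisions_a) - selection_rate(decisions_b))

# Hypothetical hiring decisions (1 = positive outcome) for two groups.
group_a = [1, 1, 0, 1, 0]   # selection rate 0.6
group_b = [1, 0, 0, 0, 0]   # selection rate 0.2

print(demographic_parity_gap(group_a, group_b))  # 0.4
```

A deployed system that produces these decisions has implicitly committed to some fairness function, whether or not one was chosen deliberately; auditing it means picking a definition like the one above and measuring it.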
We may still want to deploy such systems, as creating a new data set may not be feasible in terms of time or cost, and artificial data may not solve the problem at hand. In such situations, the usual approach is to be transparent and warn about the identified potential threat or shortcoming. The analysis of approaches demonstrates a huge interest in improving the ethical design of AI systems and a broad range of proposals from researchers and practitioners in engineering and other academic fields. For example, privacy-preserving machine learning is now a whole subdiscipline of machine learning, and explicability is a major research topic in AI. Similarly, a range of standard process models is being developed with the aim of improving the ethicality of AI systems; for example, IEEE P7000 is one of the first standards for ethical system engineering.
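As a glimpse of what privacy-preserving techniques involve, here is a minimal sketch of the Laplace mechanism from differential privacy, which releases a counting query with calibrated noise. The records, predicate, and epsilon value are illustrative assumptions, not taken from the original text.

```python
import random

def laplace_noise(scale: float) -> float:
    """Laplace(0, scale) noise, drawn as the difference of two exponentials."""
    return random.expovariate(1 / scale) - random.expovariate(1 / scale)

def private_count(records, predicate, epsilon=1.0):
    """Release a counting query (sensitivity 1) with Laplace(1/epsilon) noise.
    Smaller epsilon gives stronger privacy but a noisier answer."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical records: user ages. Query: how many users are 30 or older?
ages = [23, 35, 41, 29, 52, 34, 27]
print(private_count(ages, lambda a: a >= 30, epsilon=1.0))  # near the true count of 4
```

The noise masks any single individual's contribution to the count; the trade-off between epsilon and accuracy is one concrete design decision this subdiscipline studies.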