Posted: 07/11/2024
Generative AI is a powerful technology that can create realistic and engaging content, such as text, images, audio, and video. It can also be used to design and deploy ‘digital assistants’, such as chatbots, voice assistants, and virtual agents that interact with humans in natural language. Because these assistants deal directly with people, an emphasis on ethical AI is essential to keeping those interactions respectful and trustworthy.
At C5 Alliance, we are proud to offer our HR digital assistant, Vega, as a solution for streamlining and enhancing the employee experience. Vega can answer common questions, provide guidance and offer feedback on various HR topics such as benefits, policies, performance and career development.
However, as with any technology, generative AI also poses ethical challenges and risks. One of the questions we often get from clients is: what should we do if the digital assistant has concerns about the wellbeing of the individual? What happens if the conversation becomes personal, or a straightforward exchange about feeling ill descends into something more serious about mental health? Is there a duty to flag this to a human or raise an alert? A human colleague certainly would; should a digital assistant do the same?
In this article, we will explore these questions and share some of our perspectives on how organisations could handle sensitive conversations with a digital assistant while respecting the privacy, autonomy, and dignity of their employees. This is a complex area to navigate, and we’re certainly not saying we’ve got it 100% right, but these are our current thoughts on an evolving topic.
How should a digital assistant detect and respond to sensitive conversations?
One of the key features of generative AI is its ability to learn from data and generate relevant, coherent responses based on the context and the user’s intent. In an HR context, however, this also means the digital assistant may encounter situations where the user expresses negative emotions, raises more personal issues, or shows genuine distress.
Alternatively, an employee may ask the digital assistant for feedback on their performance and then express dissatisfaction, frustration, or anger with their job or line manager.
What happens then? How should a digital assistant handle these scenarios? Should it ignore them, redirect them, or escalate them to a human?
At C5 Alliance, we believe the best approach is to balance the following principles (a simplified sketch of how they might combine in practice follows the list):
Empathy
The digital assistant should acknowledge the user’s emotions and show compassion and support.
Accuracy
The digital assistant should provide accurate and consistent information and avoid giving misleading or incorrect answers.
Privacy
The digital assistant should respect the user’s privacy and confidentiality and not disclose or record any sensitive or personal information without the user’s consent.
Autonomy
The digital assistant should respect the user’s autonomy and choice and not coerce or manipulate them into taking any action or decision.
Safety
The digital assistant should protect the user’s safety and wellbeing and alert human support if there is a risk of harm or personal danger.
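To make these principles concrete, here is a minimal, hypothetical sketch of how they might combine in a single conversational turn. This is an illustration only, not Vega’s actual implementation: the keyword lists stand in for a real classification model or moderation API, and every name here (RiskLevel, classify_risk, AssistantReply, triage_message) is invented for this example.

# Hypothetical sketch only -- not Vega's actual implementation.
# The keyword lists below stand in for a real classification model
# or moderation API; every name here is invented for illustration.

from dataclasses import dataclass
from enum import Enum


class RiskLevel(Enum):
    ROUTINE = "routine"      # an ordinary HR query
    SENSITIVE = "sensitive"  # personal or emotional content
    URGENT = "urgent"        # possible risk of harm


_URGENT_TERMS = {"hurt myself", "can't go on", "self-harm"}
_SENSITIVE_TERMS = {"stressed", "anxious", "depressed", "overwhelmed"}


def classify_risk(message: str) -> RiskLevel:
    """Crude stand-in for a proper risk/sentiment classifier."""
    text = message.lower()
    if any(term in text for term in _URGENT_TERMS):
        return RiskLevel.URGENT
    if any(term in text for term in _SENSITIVE_TERMS):
        return RiskLevel.SENSITIVE
    return RiskLevel.ROUTINE


@dataclass
class AssistantReply:
    text: str
    escalate_to_human: bool   # safety: alert human support
    consent_required: bool    # autonomy: only escalate if the user agrees


def triage_message(message: str) -> AssistantReply:
    """Balance empathy, privacy, autonomy and safety for one turn."""
    risk = classify_risk(message)

    if risk is RiskLevel.URGENT:
        # Safety overrides autonomy: alert a human, and say so openly.
        return AssistantReply(
            text=("I'm concerned about what you've shared, so I'm going "
                  "to connect you with a member of the HR team now."),
            escalate_to_human=True,
            consent_required=False,
        )

    if risk is RiskLevel.SENSITIVE:
        # Empathy first; offer, but never force, a human handover.
        return AssistantReply(
            text=("That sounds difficult, and I'm sorry you're dealing "
                  "with it. Would you like me to put you in touch with "
                  "someone in HR, confidentially?"),
            escalate_to_human=False,
            consent_required=True,
        )

    # Routine query: answer accurately, record nothing sensitive.
    return AssistantReply(
        text="Happy to help - here is the information you asked for.",
        escalate_to_human=False,
        consent_required=False,
    )


print(triage_message("I've been feeling really overwhelmed lately").text)

The key design choice in this sketch is that escalation is consent-gated for sensitive conversations and automatic only when safety is at stake, so the user’s autonomy is overridden only where the risk of harm demands it.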
Based on these principles, we have designed Vega, our HR digital assistant, to detect and respond to sensitive conversations in the following ways:
Ensuring ethical and responsible use of generative AI
While we believe our approach to handling sensitive conversations with Vega is ethical and responsible, we also recognise that generative AI is not an infallible technology, and both the technology and its governance are moving fast. There may be cases where Vega makes a mistake, misinterprets the user’s intent, or generates an inappropriate or harmful response.
With these scenarios in mind, we also take the following measures to ensure the ethical and responsible use of generative AI in our HR digital assistant:
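By way of illustration, one such measure could be a final moderation check that screens every generated reply before it reaches the employee. Again, this is a hypothetical sketch rather than Vega’s actual safeguard: check_reply, BLOCKED_PATTERNS and FALLBACK are invented names, and a real deployment would use a proper moderation model or service rather than regular expressions.

# Hypothetical sketch of one safeguard: screen each generated reply
# before it is shown. A real system would call a moderation model or
# service here; the regex rules below are illustrative stand-ins.

import re

BLOCKED_PATTERNS = [
    r"\bmedical advice\b",  # the assistant should not diagnose
    r"\blegal advice\b",    # ...or offer legal opinions
]

FALLBACK = ("I'm not able to help with that directly, but I can "
            "connect you with the right person in HR.")


def check_reply(generated: str) -> str:
    """Return the generated reply, or a safe fallback if it trips a rule."""
    for pattern in BLOCKED_PATTERNS:
        if re.search(pattern, generated, flags=re.IGNORECASE):
            return FALLBACK
    return generated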
Conclusion
In the HR domain, generative AI is a game-changing technology that can revolutionise the employee experience. However, it also comes with ethical challenges and risks, especially when it comes to handling sensitive or personal conversations with a digital assistant.
At C5 Alliance, we are committed to using generative AI in a way that is ethical, responsible, and beneficial for our clients and their employees. We have designed our HR digital assistant, Vega, to detect and respond to sensitive conversations in line with our five principles: empathy, accuracy, privacy, autonomy and safety. We also take a range of measures to ensure the ethical and responsible use of generative AI in Vega.
We believe there are two critical factors to ensure the above can be maintained:
For more information about how we can support your organisation with data and AI solutions, email us at [email protected]