Using AI in a Responsible, Value-driven Way


Another article on Artificial Intelligence (AI)? Yes, but instead of assessing the extensive technological possibilities, I would like to broaden the focus to consider business value as well as the positive (and potentially negative) disruption that Generative AI (GenAI) will bring to our work environments.

AI, as a concept, has been with us since the beginning of computing. With the expansion of computing power and our understanding of advanced analytics models and algorithms, we already use AI technology to help people with physical and mental disabilities, to support medical professionals, and to enable administrative processes within healthcare and health insurance.

While we consider the exciting possibilities of AI, let's also recognize that there are established processes and approaches for achieving customer and business value from ever-advancing technology. We have proven techniques, including digital transformation, organizational change management, and innovation, to think about what we are doing, why we are doing it, and how we can use it.

This expertise has taught us the importance of understanding business challenges, outcomes, and the necessary adaptations that often accompany new technology. Some of these changes call for updated training, organizational structures, rules, roles and governance processes.

Right now, we are seeing GenAI evolve rapidly. Using publicly available data, AI models can be trained not just to retain and regurgitate information, but also to learn from it and use it as a base for intelligent responses. Roles and functions that humans perform today could be augmented or even fully replaced by computing technology. In addition, Bard, ChatGPT, and similar products are available globally to consumers, and people are already starting to experiment with their creative possibilities. The closest technological and digital transformation we have experienced in the last two decades was the introduction of the iPhone. It put unprecedented power in the hands of the consumer and forever changed the way we work, socialize, and operate. However, GenAI presents an even greater quantum leap.

Assuming we have a defined business need and have projected possible outcomes, how do we venture towards this new horizon?

Some perspicacity and imagination are required. Human-centered AI design identifies all human interaction points with AI and intentionally incorporates them into an overall solution. This type of thoughtful, purposeful design helps ensure that appropriate use and oversight are built into the process from inception, and it forces relevant questions about what can and should be done with AI.

It is important to note that AI carries a significant risk of unintended consequences arising from the data used to train these models. The first releases of ChatGPT, the most well-known GenAI solution, were trained on publicly available online data through 2021. This has changed with the latest releases, but these are still closed models, which makes it difficult to fully explain how tools like ChatGPT arrive at their answers. This can lead to the creation of false information, known as AI hallucinations, and could produce unexpected biases.

There is also a wide spectrum of rules across industries and countries regarding data use, and different industries and countries have different appetites for risk. For example, the European Union has tighter controls around personal data privacy than the United States. We know that even before GenAI, people were using technologies outside the bounds of their original intent. There are few, if any, technical controls that prevent people from doing things that have damaging, and sometimes legally actionable, outcomes. So, policies and guardrails that exist to protect healthcare data, such as HIPAA and other government regulations, must be embedded in the end product.

At NASCO, we want to embrace AI because we see it as a transformative technology that offers the possibility of significant competitive advantage, but we want to do so in a responsible, value-driven way. We will take an intentional design approach, and we will embed appropriate security, data and risk management controls in our products. We are determined to understand the data used, where it’s being used, and how it’s being used both within our company and in the products we provide to support our customers.

This article was not authored by ChatGPT or any GenAI tool.

David Weeks is the Senior Vice President, Chief Digital and Technology Officer at NASCO. He is responsible for technology innovation, product digital transformation and information security at the company. He also leads NASCO’s technology office, collaborating with stakeholders and health tech partners, and oversees the strategic technology direction of NASCO’s platforms and products.