Express Computer
AI creates opportunity — But governance must guide it: Reggie Townsend, Vice President, AI Ethics, SAS

Artificial intelligence is entering a defining phase. What began as experimentation with models and algorithms is now rapidly evolving into enterprise deployment, national-scale platforms and real-world decision systems. As AI becomes embedded in economies, institutions and everyday life, the central question is no longer about capability — it is about responsibility.

How should organisations govern AI systems that can influence financial decisions, public services and business strategy? What guardrails are required as AI moves from consumer novelty to enterprise infrastructure? And how can leaders ensure that innovation does not come at the cost of trust, fairness or accountability?

In this conversation with Express Computer, Reggie Townsend, Vice President of the SAS Data Ethics Practice, shares his perspective on the evolving AI landscape — from governance by design and board-level oversight to the future of work and India’s opportunity to lead in responsible AI. At the heart of his argument is a simple idea: technology reflects human values, and the choices organisations make today will shape how AI impacts society in the years ahead.

Some edited excerpts:

As AI moves from experimentation to large-scale deployment, how do you view the larger journey of generative AI?
The generative AI journey — and really the technology journey overall — has always been about people. I have never really been concerned about the technology itself because technology behaves the way we prescribe it to behave. Even when we start talking about agentic systems, people are still behind the creation of those systems.

Technology has always been a tool in which we embed our values. For years, AI has operated consistently with what we believed was important.

Now we are seeing the generative wave, the agentic wave, and soon we will talk about the physical and embodied wave. All of this will ultimately reflect what we care about as people.

If we care about humans being at the center of these tools, if we care about security, inclusivity and accountability, then the technology will reflect those values.

History, however, tells us to be cautious. Technology has also been used in ways that were counter to privacy, security and accountability. That is why I believe we must begin with ethical inquiry.

Before pursuing technology, we should ask a few simple questions:

For what purpose?
To what end?
For whom might this fail?

If we center ourselves around those questions, we can avoid some of the more problematic outcomes that people worry about today.

AI is often described as a powerful technology moving very fast. What guardrails are necessary to ensure its impact remains positive?
People often describe AI as a speeding train coming toward us. But we should remember something important: we are the engineers driving the train. So the question should not just be "how fast can we go?" We should also ask "should we?"

We need to ask whether we are attempting to do something fundamentally different or simply doing the same things with new technology. Too often the conversation becomes technology for technology’s sake.

When generative AI moves from the consumer space into the enterprise, reality sets in. Enterprises have long-established workflows and systems. Stability matters.

The speed we see in consumer environments is not digested the same way in enterprise environments.

Operational guardrails naturally slow things down. Integration, governance, long-term support and people processes act as practical brakes.

Another important layer is governance by design. As AI capabilities become embedded across enterprise platforms, organisations need governance for specific models or products and enterprise-wide governance to understand how AI is operating across the organisation. Boards and leadership teams need a global view of AI across the enterprise — how models are deployed, how they perform and whether they are drifting.

You mentioned “governance by design.” Could this become the equivalent of “security by design” for AI?
Yes, it should. Security itself is essentially a governance function. It is about deciding what data should remain inside an organisation and what should not. Governance is about scaling judgment across the enterprise. Security policies, regulatory compliance and internal controls are all governance exercises organisations are already familiar with. Responsible AI is therefore fundamentally a governance conversation. It is about ensuring that when AI operates inside and outside the enterprise, the outcomes do not harm people.

India is deploying digital public infrastructure like Aadhaar and UPI at enormous scale. As AI becomes embedded in these systems, what safeguards should leaders consider?
AI is already part of these ecosystems, particularly in deterministic analytics.

When we start introducing generative AI and large language models, we must ask again: should we?

In some cases AI clearly adds value — for example, conversational systems that help citizens interact in different dialects, or agentic systems that assist with government services.

But at India’s scale, we must also avoid introducing unnecessary complexity if simpler systems already accomplish the goal.

On the question of bias, there are social conversations about bias and there are mathematical realities.

Mathematically, AI systems can identify, predict and mitigate bias. For example, in lending systems we can alert banks when the data they are using is insufficient for the decisions they are making.

But the most important point is this: AI produces outputs, and people decide what to do with those outputs. Socially consequential decisions should never be turned over entirely to machines.

Should organisations implement ethical checks before data is used to train AI systems?
The simple answer is yes, but the reality is nuanced.

Many datasets already exist or are continuously generated through real-time interactions. Bias will inevitably exist because people themselves are biased.

The real issue is the negative outcomes associated with those biases.

That is why I start with three design questions:
For what purpose?
To what end?
For whom might it fail?

These questions should be part of the design process before technology is implemented.

Another important concept is intended use.

Foundation models can perform many different tasks, but organisations must define the specific purpose for which they are deploying them — whether that is banking, insurance, customer support or operational optimisation.

Bounding AI systems around intended use is essential before deploying them at scale.

Should AI risk oversight move to the board level, similar to cybersecurity?
Yes, absolutely. Even three years ago there was slower uptake in boardrooms because many directors had seen multiple technology cycles and were cautious.

But today AI is clearly being seen as both a risk management issue and an innovation opportunity.

Boards are now discussing how AI could open new markets, affect business models and impact the bottom line.

What we still lack is sufficient literacy around the topic. That is why technologists — including CIOs — are increasingly being invited into board discussions.

Could strong governance and responsible AI become a differentiator for India globally?
Yes, India absolutely could position itself that way. AI creates new opportunities and potentially acts as an equalizer.
If India wants to differentiate itself, ethical principles should not be treated simply as compliance requirements. They should be built into technology by design. The technical capability to mitigate bias already exists. What matters is how people choose to implement those capabilities.

AI also opens enormous possibilities in accessibility and inclusion — from enabling people with disabilities through multimodal systems to bringing information access to rural communities. If India can combine governance, people, processes and technology around these values, it could become a flagship nation for responsible AI.

AI is raising concerns about job disruption, especially in service-driven industries like IT services. How should organisations respond?
Business models are still evolving, so it is difficult to make precise predictions. However, the need for services will not disappear. What will change is how those services are delivered. For example, traditional IT support models could evolve into environments where fewer people manage large numbers of AI agents.

That creates an important leadership question: if we need fewer people in one role, what do we do with the others? Leaders must answer that question responsibly.

At SAS, our intent has been to enable the enterprise with AI while starting with people first. We do not yet know what every job will look like in the future, but our intent is not simply to let people go.

Organisations have many initiatives they have always wanted to pursue but never had the capacity for. Repurposing talent toward those opportunities can create value.

Enterprises that treat people as expendable will not build sustainable cultures.

As our founder, Dr Jim Goodnight, says, our greatest asset leaves the campus at 5 p.m. every day — and we want to make sure they come back at 9 a.m. the next morning.

Looking ahead, what trends do you expect to shape responsible AI globally?
Governments will continue to wrestle with how best to regulate AI while still allowing innovation. Because the technology is evolving so quickly, regulation will remain a challenge. But we are already seeing governments begin to align around common principles such as human-centricity. Even if it takes time to settle on the exact wording of laws, governments will likely use their purchasing power to establish de facto standards for ethical behavior.

Another important shift is that the AI conversation will increasingly include non-technical voices.

When AI has implications for justice, well-being and equity, the agenda cannot be set only by technologists. Domain experts from other fields must participate in identifying risks and opportunities.

We will also see greater sophistication in how organisations measure and monitor AI performance. We need to know when a model has overstepped or underperformed. At SAS, for example, we are working on model cards that help customers analyze and monitor AI systems more effectively.

Finally, if we look five years ahead, what is your hope for the future of AI?
My hope is simple. I hope we use AI to assist people and reduce suffering. Technology should help people thrive.
Today, we often see a model where data is extracted from people, turned into products and then sold back to them. That feels very extractive. History shows that persistent extraction creates deep inequality between those who have a lot and those who have very little. I believe we can build a different world — one where technology helps more people thrive. If AI can reduce suffering and expand opportunity, why wouldn’t we choose that path?
