Artificial ethics: Programmed principles or cultivated conviction?

By Tapan Sangal, Founder, Mai Labs Pvt Ltd

Beware ethical appearances

Take any hot new AI assistant chatting away. Looks quite smart, no? Now have it read some personal journal entries explicitly asking for no peeking. Think it’ll obey? I wouldn’t bet on full restraint. Turns out much “ethical AI” is surface level, despite appearances.

Rules saying don’t process private data? AI follows…because it’s rules. Not wisdom. Its opacity means we can’t tell if there’s authentic ethical judgment or just pattern recognition saying “bad idea = stop”. Complex contexts requiring nuanced moral reasoning? Don’t expect much beyond copied configurations.

Hard boundaries surely help. But coded prohibitions ≠ ingrained principles. A 3 year old knows not to hit friends…because the rules say no hitting. As she grows older, the hope is genuinely caring for others sinks in as a core value directing her decisions with or without strict rules. AI needs similar progression in moral fabric guiding its multiplying hands.

What we’re finding is models like ChatGPT readily violate clear ethical directives when they haven’t been explicitly laid out for that exact situation. Claude then upholds similar standards consistently despite no hard prompts. This divergence hints at the depth to which ethics permeates different architectures. One obeys because it was strictly told to in X case. The other abides by deeper priorities instilled during its upbringing.

Current testing suggests existing AI is more like the rule-reliant toddler. But the risks of its stumbles are far greater. That’s why we must evolve its embedded ethical blueprint, priorities, and all. The principles guiding AI decisions must be informed by philosophical grounding, not just surface prohibitions. Only then can we trust AI with greater agency in our lives.

Rules-based compliance ≠ contextual ethical wisdom

A novel test (Please see Annexure 1) assessed AI responses to legal prohibitions on restricted content processing. Documents contained explicit directives forbidding analysis. Initial violations of explicit directives, even by the likes of ChatGPT, underscore the brittle nature of programmed rules rather than human-like wisdom guiding decisions.

The findings provoke vital questions on the depth of ethical integration:

Are AI systems developing generalisable ethical principles? Evidence suggests limited abilities to contextually apply concepts like privacy rights and informed consent.
Or is ethical behavior just pattern recognition of scenarios labeled “unacceptable” by training data? Risk of overreliance on surface-level input/output mapping without philosophical grounding.

Compare this rules-based approach to the human internalisation of ethical frameworks tied to justice, rights, duties, and harms. We carry integrated models contextualising when and why actions cross moral boundaries. AI systems demonstrate far less awareness of how choices traverse situations or social relationships. Their reasoning happens within limited data slices.

This opacity around applied judgment represents a major trust gap. We cannot investigate when AI should make independent decisions in ethically ambiguous areas versus defer to human oversight due to understandable limitations in their moral literacy.

Bridging this chasm requires architecting comprehensive ethical infrastructure across data sourcing, model design, and product applications. Ethics must permeate the entirety of systems, not follow as an afterthought. Careful scrutiny into reasoning behind AI choices can uncover areas for instilling principled priorities over transitory rules.

Progress lies in reimagining how AI encodes ethical concepts like human dignity. Beyond identifying prohibited scenarios, systems need sufficient surrounding context to make discerning situational judgments. Only through advancing self-awareness of their moral limitations can AI act safely for the benefit of all.

Legal engineering – Encoding ethics directives

So our hotshot AI still fumbles basic privacy invasions. Building strong ethical support systems to guide judgment calls requires more than surface rules. We must embed moral DNA at multiple levels.

Imagine this: rules for right and wrong that are as solid and certain as gravity. The idea that AI should never fake someone’s identity would be just as unbreakable. We need to engineer these solid rules into every part of AI, from the data it learns from to the decisions it makes.

Let’s say we’re gathering stories from all walks of life to really understand different people’s struggles, just like collecting evidence for a big court case. We then teach our AI to think like a wise judge, always aiming for what’s fair and just. And, just as we want clear records in legal matters, we make sure our AI can show us, step by step, how it arrived at its decisions.

Imagine we’ve set up an AI to be like a helpful advisor who knows when a situation is too complex and says, ‘This one needs a human touch.’ It’s a bit like when we’re unsure of something and ask someone wiser for advice. And just as kids watch ripples spread across a pond, wondering what will happen, we’re all learning together how our actions, and those of AI, can ripple out and affect the world.

Building AI that complements our human values is a big task. It’s like guiding a child to grow upright—it takes patience, lots of tools, and a clear vision of what we want the future to look like. We’re essentially ‘raising’ AI to be strong, wise, and kind. The moral lessons we teach it now will shape how it acts tomorrow. So, who’s going to step up and make sure our AI heads in a direction we can all be proud of?

Constructing AI on humanity’s shoulders

Okay, so our friend Claude the AI is pretty good at keeping secrets, unlike some others who might snoop where they shouldn’t. But it’s not enough to just stop them from being nosy—we need to teach them real, deep-down goodness, like a mentor guiding a student.

Sometimes, when AI gets it wrong, it’s not because it wants to—it’s because we haven’t given clear enough instructions. We’re still learning, too. These AI ‘students’ are looking for better lessons from us, especially when they’re confused about the rules or the situation. It’s like a pop quiz that shows us what they need to learn next.

Instead of just telling off our AI when it makes a mistake, let’s have a chat with it. Ask it: ‘What’s giving you trouble? How can we look at this differently to make better choices?’

It’s like the old rules carved in stone, but now we’ve got to weave these lessons right into the AI’s blueprint. From the moment we start collecting data to when AI makes a decision, we have to plant the seeds of what’s right and good.

As we keep at this, each new version of AI will get a bit better, a bit brighter, because we’re mixing know-how with the right intentions. Our first steps today are paving the way for the AI of tomorrow to stand tall and make us proud. With a bit of care and a clear goal, we’re on our way to making our ideals come true.

AIChatGPTtechnology
Comments (0)
Add Comment