New Delhi [India], February 16: In this opinion piece, Shekhar Natarajan, Founder and CEO of Orchestro.AI, explains how AI systems can be rebuilt around ethics rather than single-metric optimization.
The problems documented in the preceding articles share a common architecture: systems optimized for single metrics, blind to consequences, incapable of compassion.
The Aadhaar system optimized for fraud elimination — and starved a child.
Loan apps optimized for collection — and drove families to suicide.
Delivery algorithms optimized for speed — and killed workers.
Facial recognition optimized for identification — and jailed the innocent.
Language models optimized for probability — and learned to discriminate.
In each case, the technology performed exactly as designed. The problem wasn’t a bug. The problem was the design itself — systems built with efficiency as the only virtue, with no mechanism for compassion, no voice for caution, no agent for ethics.
Shekhar Natarajan’s Angelic Intelligence framework represents a fundamental rethinking of how AI systems should be built. Instead of single-purpose optimization, it deploys 27 specialized agents — each embodying a cross-cultural virtue — that must collaborate on every significant decision.
The Architecture
Each agent in the Angelic Intelligence framework represents a virtue drawn from wisdom traditions across cultures — Hindu, Buddhist, Christian, Islamic, Indigenous, philosophical. Together, they form a council that must reach consensus before any significant action is taken.
Karuna (Compassion) — Considers the suffering that actions might cause. Asks: Who will be hurt by this decision? Can we achieve our goal without causing harm?
Satya (Truth) — Ensures outputs are accurate, not merely probable. Asks: Is this true? Or is it just statistically likely based on biased data?
Ahimsa (Non-harm) — Prevents actions designed to cause suffering. Has veto power over any action whose primary purpose or predictable effect is human harm.
Nyaya (Justice) — Ensures fair treatment across groups. Asks: Does this decision treat all people equitably? Does it perpetuate historical discrimination?
Raksha (Protection) — Safeguards vulnerable populations. Asks: Are there children, elderly, disabled, or otherwise vulnerable people who might be affected? What special protections do they need?
Sama (Equanimity) — Maintains balance and prevents extremes. Asks: Is this demand compatible with human limitations? Are we optimizing so aggressively that we’re causing harm?
Maitri (Loving-kindness) — Approaches all beings with goodwill. Asks: How would we treat this person if we loved them? How would we want to be treated?
Viveka (Discernment) — Distinguishes appropriate from inappropriate action. Asks: Is this the right action in this context? Is the system being applied correctly?
Prajna (Wisdom) — Considers long-term consequences. Asks: What are the downstream effects of this decision? What precedent does it set?
Sahana (Patience) — Pauses before irreversible actions. Asks: Is immediate action necessary? Can we wait, verify, confirm?
And seventeen more, each representing a distinct ethical perspective drawn from humanity’s collective wisdom about how to treat one another.
How It Works
When a decision is required, all 27 agents evaluate it from their respective perspectives. If there is consensus — if efficiency and compassion and justice and protection all agree — the action proceeds.
If there is disagreement — if efficiency says “act” but compassion says “wait,” if probability says “Sharma” but equity says “ask” — the system escalates to human oversight.
“The key insight,” Natarajan explains, “is that ethical decisions are almost never single-variable optimizations. Real ethics involves trade-offs between competing goods. A system that can only optimize for one thing cannot be ethical — it can only be efficient. And efficiency without ethics is just sophisticated cruelty.”
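The deliberation loop described above can be sketched in a few lines. This is a hypothetical illustration only, assuming three made-up verdict values and a toy `deliberate` function; the agent names are borrowed from the article, but nothing here reflects an actual Orchestro.AI implementation or API.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Callable, Dict, List

class Verdict(Enum):
    PROCEED = "proceed"   # agent approves the action
    WAIT = "wait"         # agent objects; wants review
    VETO = "veto"         # agent blocks outright (e.g. Ahimsa on harmful actions)

@dataclass
class Agent:
    name: str                              # e.g. "Karuna (Compassion)"
    evaluate: Callable[[Dict], Verdict]    # the agent's virtue-specific check

def deliberate(agents: List[Agent], action: Dict) -> str:
    """Return 'blocked', 'proceed', or 'escalate_to_human'."""
    verdicts = [a.evaluate(action) for a in agents]
    if Verdict.VETO in verdicts:
        return "blocked"                   # any veto halts the action outright
    if all(v is Verdict.PROCEED for v in verdicts):
        return "proceed"                   # full consensus: act
    return "escalate_to_human"             # disagreement: a person decides

# Toy example: a ration-card deletion flagged by two agents.
# The action fields ("affects_children", "reversible") are invented for illustration.
karuna = Agent("Karuna", lambda a: Verdict.WAIT if a["affects_children"] else Verdict.PROCEED)
sahana = Agent("Sahana", lambda a: Verdict.WAIT if not a["reversible"] else Verdict.PROCEED)

print(deliberate([karuna, sahana], {"affects_children": True, "reversible": False}))
# -> escalate_to_human
```

Note the asymmetry the article describes: unanimity is required to act, but a single dissenting voice is enough to force human review, and a veto agent can stop the action entirely.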
Applied to the Cases
Santoshi Kumari’s ration card: Before deletion, Karuna would have asked about the family’s circumstances. Raksha would have flagged the presence of children. Sahana would have required a waiting period before irreversible action. Nyaya would have asked whether the family had adequate opportunity to comply. The deletion would have been paused, escalated, and reviewed by a human — not executed automatically.
Loan app harassment: Ahimsa would have prevented any action designed to cause psychological harm. Maitri would have required that collection tactics treat borrowers with basic dignity. Viveka would have distinguished between someone gaming the system and someone genuinely struggling. The morphed images, the calls to family, the threats — none of it would have been possible.
Gig worker timelines: Raksha would have flagged delivery windows that require dangerous driving. Sama would have prevented demands that exceed human physical capacity. Satya would have ensured that promised earnings match actual earnings. The 10-minute delivery promise would never have been made.
Facial recognition arrests: Nyaya would have required corroborating evidence before any arrest. Satya would have flagged the technology’s 2% accuracy rate. Sahana would have demanded patience before life-altering actions. Umar Khalid would not be in his fifth year of imprisonment without trial.
Caste bias in AI: Sama would have checked for disparate treatment across caste groups. Nyaya would have flagged outputs that reinforce historical discrimination. Satya would have distinguished between statistical probability and truth. ChatGPT would not have changed Singha to Sharma.
The Patent Fortress
Natarajan has filed over 207 patents protecting the Angelic Intelligence framework — not to extract profits, but to ensure the technology cannot be co-opted or corrupted.
“Without patent protection,” he explains, “anyone could take these concepts and implement them badly — or implement them in name only while pursuing the same old optimization. The patents ensure that anyone using this framework must implement it correctly, with all 27 agents functioning as designed.”
The patents cover not just the multi-agent architecture, but the specific mechanisms for inter-agent deliberation, the escalation protocols when agents disagree, and the interfaces for human oversight.
“This is a thousand-year project,” Natarajan says. “We’re building AI that will shape humanity’s future. It has to be built right. It has to be protected from those who would cut corners.”