July 2025
Vice President, AI/ML
What does your job entail within your organization?
As a seasoned technology and AI leader, I serve as Vice President within the Global AI Practice and Innovation Team at Genpact, where I lead both the Responsible AI competency and the broader AI Engineering capabilities, shaping the strategic direction and operationalization of ethical, scalable, and enterprise-grade AI solutions.
In my role, I lead the development and deployment of robust AI systems by building and managing high-performing global teams across Machine Learning Engineering, MLOps, LLMOps, and AI Governance. My work ensures that every AI initiative we deliver is not only technically sound but also aligned with the ethical, regulatory, and business imperatives of our clients.
As a certified Artificial Intelligence Governance Professional (AIGP) from the IAPP, I bring a deep governance lens to AI adoption. In my role, I have rolled out a comprehensive, technology-agnostic Responsible AI framework across our practice. This framework enables our teams and clients to embed principles of explainability, traceability, fairness, accountability, privacy, security, and reliability into every stage of the AI lifecycle, from ideation through deployment and post-production monitoring.
This included the development of an AI Risk Score Framework, a dynamic model that enables the quantification and continuous monitoring of AI-related risks spanning business, operational, privacy, and security domains. By integrating likelihood and impact scoring, the framework provides a systematic way to qualify AI risks and activate appropriate governance controls. This not only improves the reliability and trustworthiness of AI systems but also empowers organizations to make informed, risk-adjusted decisions around AI adoption.
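To make the scoring idea concrete, here is a minimal sketch of a likelihood × impact risk score of the kind described above. The domain names, 1-to-5 scales, and tier thresholds are hypothetical illustrations, not the actual AI Risk Score Framework:

```python
# Minimal sketch of a likelihood x impact AI risk score.
# Scales, thresholds, and tier names are hypothetical examples.

from dataclasses import dataclass

@dataclass
class DomainRisk:
    domain: str       # e.g. "business", "operational", "privacy", "security"
    likelihood: int   # 1 (rare) .. 5 (almost certain)
    impact: int       # 1 (negligible) .. 5 (severe)

    @property
    def score(self) -> int:
        # Classic risk-matrix scoring: likelihood x impact, range 1..25.
        return self.likelihood * self.impact

def overall_tier(risks: list[DomainRisk]) -> str:
    """Map the worst domain score to a governance tier."""
    worst = max(r.score for r in risks)
    if worst >= 15:
        return "high"    # e.g. mandatory human oversight and impact assessment
    if worst >= 8:
        return "medium"  # e.g. enhanced monitoring and periodic review
    return "low"         # e.g. standard controls

risks = [
    DomainRisk("business", 3, 4),
    DomainRisk("operational", 2, 3),
    DomainRisk("privacy", 4, 5),
    DomainRisk("security", 2, 4),
]
print({r.domain: r.score for r in risks})  # per-domain scores
print(overall_tier(risks))                 # "high" (privacy scores 20)
```

Taking the worst domain score rather than an average is a deliberately conservative aggregation choice; a real framework might weight domains or combine both approaches.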
Overall, my work is at the intersection of cutting-edge AI engineering and rigorous governance, ensuring our AI solutions deliver measurable value while upholding the highest standards of responsibility and trust.
What do you think are the biggest challenges your organization faces related to integrating AI into your business?
One of the most significant challenges organizations face in integrating AI is moving from experimentation to scalable, enterprise-grade deployment. While many companies have piloted AI in isolated use cases, integrating it deeply into core business processes, at scale and with accountability, remains a hurdle. This gap stems not only from technical complexity but also from organizational, governance, and ethical considerations.
Key challenges I see consistently include:
- Lack of Enterprise-Ready AI Governance
Without a clear Responsible AI framework, organizations struggle to manage the ethical, legal, and reputational risks of AI. Issues like algorithmic bias, lack of explainability, and weak accountability mechanisms often become roadblocks during scaling. Integrating AI responsibly requires a robust governance structure that spans data, models, human oversight, and continuous monitoring.
- Fragmented Data and Technology Infrastructure
AI thrives on clean, connected, and accessible data. Most organizations face legacy systems, siloed data environments, and inconsistent metadata, all of which hamper model performance and reliability. Without modern data engineering and MLOps foundations, AI initiatives often fail to survive in production.
- AI Risk Management and Regulatory Readiness
As regulations and global privacy laws evolve, enterprises must adopt proactive AI risk assessment and classification frameworks. The challenge lies in translating regulatory ambiguity into tangible risk policies, impact assessments, and control mechanisms that can be embedded into real-world AI solutions (a minimal classification sketch follows this list).
- Change Management and Talent Readiness
Integrating AI is not just a technology shift; it's an organizational transformation. Many companies underestimate the cultural shift required, from enabling AI fluency across business and operations teams to redefining roles in an AI-augmented workforce. Coupled with the global shortage of experienced AI professionals, this becomes a key barrier.
- Trust and Stakeholder Confidence
Earning trust from employees, customers, and regulators is essential. Organizations must demonstrate that their AI systems are not only performant but also safe, fair, and aligned with human values. Building explainable, auditable, and transparent AI is still an evolving capability in many enterprises.
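As referenced in the risk management point above, here is a minimal sketch of how a tiered risk classification can be turned into concrete controls. The rules and control lists are hypothetical, loosely inspired by the EU AI Act's tiered approach; a real framework would rely on structured impact assessments rather than keyword matching:

```python
# Sketch: map an AI use case to a risk tier and its required controls,
# loosely modeled on the EU AI Act's tiers (illustrative assumptions only).

REQUIRED_CONTROLS = {
    "unacceptable": ["do not deploy"],
    "high": ["conformity assessment", "human oversight",
             "logging", "post-market monitoring"],
    "limited": ["transparency notice to users"],
    "minimal": ["voluntary code of conduct"],
}

# Illustrative keyword rules; a real assessment would use a
# structured questionnaire, not string matching.
HIGH_RISK_DOMAINS = {"hiring", "credit", "healthcare",
                     "education", "critical infrastructure"}

def classify(use_case: str, interacts_with_humans: bool = False) -> str:
    text = use_case.lower()
    if "social scoring" in text:
        return "unacceptable"
    if any(domain in text for domain in HIGH_RISK_DOMAINS):
        return "high"
    if interacts_with_humans:
        return "limited"
    return "minimal"

tier = classify("resume screening for hiring", interacts_with_humans=True)
print(tier, REQUIRED_CONTROLS[tier])  # high ['conformity assessment', ...]
```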
In my experience, the organizations that succeed in integrating AI at scale are the ones that treat Responsible AI and AI Engineering as joint pillars and invest early in governance, risk mitigation, and cross-functional collaboration. It's not just about building smarter algorithms but about building trustworthy systems that the enterprise can rely on.
Why must organizations prioritize responsible AI governance and systems in their business?
Because if you’re serious about scaling AI, then you have to be serious about governing it.
Responsible AI governance isn't just about ethics; it is about business resilience, risk reduction, and long-term value. As AI moves deeper into decisions that impact customers, employees, operations, and compliance, organizations need a solid foundation that ensures AI is used wisely, fairly, and safely.
Here’s why it matters, both strategically and practically:
- Drive ethical innovation at scale
Governance isn't a roadblock; it is the ramp that lets you experiment freely while keeping the enterprise safe and sound.
- Avoid expensive AI mistakes
One flawed model can make bad calls at scale, causing financial, reputational, or operational damage.
- Enable faster, safer deployment
A governance framework acts like a seatbelt: it doesn't slow you down, it keeps you safe at high speed.
- Build trust inside and out
Whether it is customers, regulators, or your own employees, people need to know your AI is reliable and accountable.
- Stay ahead of regulation
The EU AI Act and others are just the beginning. Being proactive with AI risk classification and controls is smart business.
- Make better decisions with confidence
AI that is explainable and auditable helps leaders validate outputs, challenge assumptions, and trust recommendations.
- Reduce compliance and rework costs
Embed checks and balances early to avoid late-stage fixes, audit nightmares, or last-minute model redesigns.
- Boost adoption across teams
Clear rules of engagement help business and operations teams use AI confidently, knowing it won't break things or cross lines.
What’s a lesson you’ve learned that has shaped your work in responsible AI?
One of the most defining lessons I’ve learned in my Responsible AI journey is that technology moves fast but trust moves slow.
That hit home for me not during a project but in an everyday moment several years ago, something we have all experienced: scrolling through a streaming platform and wondering, "Why is it recommending this to me?" I remember searching for a documentary, and then suddenly my feed was flooded with a single type of content, narrowing what I saw. It made me realize how invisible and influential these AI systems are, and how quickly they can shape our choices, our worldviews, and even our behavior.
That moment crystallized something important: AI doesn't have to be harmful to be unhelpful. A lack of transparency, explainability, or balance, even in a consumer app, can quietly erode trust, reinforce bubbles, and leave users feeling manipulated or excluded. And if that's how we feel about entertainment content, imagine the impact when AI drives decisions about jobs, credit, healthcare, or education.
Why is this work important to you?
Because AI doesn't just live in code; it lives in people's lives. AI is becoming invisible infrastructure, guiding what we see, what we choose, and what we are offered. It's not just about algorithms anymore; it's about agency. Ensuring that AI serves people fairly, safely, and transparently isn't just part of my job; it is the reason I do this work. As leaders in this space, we owe it to society to ensure that the AI we build doesn't just perform well but performs responsibly, transparently, and fairly. That is the legacy we all must strive to leave behind.
About the Responsible AI Institute
Founded in 2016, the Responsible AI Institute (RAI Institute) is a global, member-driven non-profit dedicated to enabling successful responsible AI efforts in organizations. We accelerate and simplify responsible AI adoption by providing our members with AI assessments, benchmarks, and certifications that are closely aligned with global standards and emerging regulations.
Members include leading companies such as Amazon Web Services, Boston Consulting Group, Genpact, KPMG, Kennedys, Ally, ATB Financial, and many others dedicated to bringing responsible AI to all industry sectors.
Media Contact
Responsible AI Institute
news@responsible.ai
+1 (515) 715-6899