Further Team
February 2025
What does your job entail within your organization?
Cal Al-Dhubaib: As Head of AI & Data Science at Further, I lead a team focused on developing AI solutions that are not only innovative but also responsible and trustworthy. Our work involves balancing technical excellence with regulatory and ethical considerations, ensuring that AI governance is embedded in every stage of design and deployment. Beyond development, we collaborate with our clients’ leadership teams to align AI strategies with business objectives, helping organizations navigate risk while maximizing value.
I helped bring this vision to life by making AI governance a core part of our expertise. To support this, we certified seven team members, including myself, as Artificial Intelligence Governance Professionals (AIGP) through the International Association of Privacy Professionals (IAPP). This effort united experts across technical, privacy, and strategy domains, including Lauren Burke-McCarthy, Julie Novic, PhD, Kristy Hollingshead, PhD, Wendy Erter, Alan Hyman, and Cory Underwood.
What do you think are the biggest challenges your organization faces related to integrating AI into your business?
Once organizations move beyond the experimentation phase, the real challenge begins: scaling AI. Many organizations struggle with model reliability over time, integration with existing business processes, and maintaining explainability as AI systems grow more complex.
One significant hurdle is model degradation—what works in testing doesn’t always hold up in production. Without continuous monitoring and governance, AI can drift, producing unreliable results and subjecting organizations and their customers to risk. Emerging AI capabilities add complexity. As AI shifts from prediction to decision-making and autonomous actions, human oversight, accountability, and risk controls become critical.
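To make the drift-monitoring idea concrete, here is a minimal sketch of one common approach: comparing a model input's production distribution against its training distribution using the Population Stability Index (PSI). The variable names, sample data, and thresholds below are illustrative assumptions, not Further's actual tooling.

```python
import math
import random

def psi(reference, current, bins=10):
    """Population Stability Index between two samples of one feature.
    A common (but not universal) rule of thumb: < 0.1 stable,
    0.1-0.25 moderate shift, > 0.25 significant drift."""
    # Bin edges come from the reference distribution's quantiles
    ref_sorted = sorted(reference)
    edges = [ref_sorted[int(len(ref_sorted) * i / bins)] for i in range(1, bins)]

    def proportions(sample):
        counts = [0] * bins
        for x in sample:
            idx = sum(1 for e in edges if x >= e)  # bin index for x
            counts[idx] += 1
        # Small floor avoids log(0) / division by zero for empty bins
        return [max(c / len(sample), 1e-6) for c in counts]

    p_ref, p_cur = proportions(reference), proportions(current)
    return sum((c - r) * math.log(c / r) for r, c in zip(p_ref, p_cur))

# Illustrative data: production inputs whose mean has drifted upward
random.seed(0)
train_scores = [random.gauss(0.0, 1.0) for _ in range(5000)]
prod_scores = [random.gauss(0.6, 1.0) for _ in range(5000)]

print(f"PSI vs. itself:       {psi(train_scores, train_scores):.3f}")
print(f"PSI vs. shifted data: {psi(train_scores, prod_scores):.3f}")
```

In practice, a check like this would run on a schedule against live inputs and model outputs, with alerts wired to the thresholds so that drift triggers human review rather than silent degradation.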
A common misconception is that governance slows innovation when, in reality, it enables scalable, sustainable AI. Without oversight, organizations risk deploying AI that is unpredictable, legally risky, and difficult to manage. Integrating governance from the start ensures companies can move fast while staying in control.
Why must organizations prioritize responsible AI governance and systems in their business?
AI is no longer experimental—it’s driving critical decisions across enterprises. Without governance, it can amplify risks, create security gaps, and produce results that are hard to explain.
Our experience in healthcare, life sciences, finance, and higher education has shown that AI governance isn’t just about compliance—it’s about building trust. Organizations that proactively manage AI risk don’t just avoid regulatory challenges—they create AI systems that are more reliable, transparent, and scalable.
That’s why we’ve invested in AIGP certification, ensuring our team stays ahead with the latest governance frameworks and best practices.
What’s a lesson you’ve learned that has shaped your work in responsible AI? Why is this work important to you?
A key lesson we’ve learned as a team is that responsible AI isn’t just a set of policies—it’s a mindset. While governance frameworks and compliance measures are essential, true AI responsibility comes from creating a culture where teams feel empowered to ask tough questions, challenge biases, flag risks, and ensure AI serves the right purpose. This work is important to us because AI is rapidly becoming embedded in everything we do. If we don’t get it right today, we risk scaling problems at an unprecedented level tomorrow. By embedding governance into AI development, we ensure that our clients can build with confidence and deploy with accountability.
About Further
Further is a privacy-first data, cloud, and AI company dedicated to helping enterprises harness the power of data while maintaining control, compliance, and trust. We work with industry leaders across healthcare, finance, and energy to develop AI solutions that are high-performing, explainable, and risk-aware.
Our team includes seven certified AI Governance Professionals (AIGP) from the International Association of Privacy Professionals (IAPP), positioning us among the most credentialed organizations in AI risk management. Through our partnership with the Responsible AI Institute, we stay ahead of emerging regulations and best practices, equipping our clients with the tools to navigate AI governance with confidence.
About the Responsible AI Institute
Since 2016, Responsible AI Institute (RAI Institute) has been at the forefront of advancing responsible AI adoption across industries. As a non-profit organization, RAI Institute partners with policymakers, industry leaders, and technology providers to develop responsible AI benchmarks, governance frameworks, and best practices. RAI Institute equips organizations with expert-led training, real-time assessments, and implementation toolkits to strengthen AI governance, enhance transparency, and drive innovation at scale.
Members include leading companies such as Boston Consulting Group, Genpact, KPMG, Kennedys, Ally, ATB Financial, and many others dedicated to bringing responsible AI to all industry sectors.
Media Contact
For all media inquiries, please contact Nicole McCaffrey, Head of Strategy and Marketing.
nicole@responsible.ai
+1 440-785-3588