January 2025
Sr. Silicon Design Engineering Manager
What does your job entail within your organization?
Building a safe and inclusive AI Ecosystem for AMD, its partners and customers. The Global AI Markets team at AMD is responsible for expanding AMD’s strategic vision for AI, driving new ecosystem capabilities and accelerating strategic AI engagements globally across public and private sectors.
What do you think are the biggest challenges your organization faces related to integrating AI into your business?
1. Aligning AI Adoption with Business Goals
Challenge: Ensuring that AI initiatives align with the organization’s strategic objectives, such as improving hardware design, optimizing supply chain management, or enhancing customer support.
Impact: Misaligned efforts can lead to wasted resources and missed opportunities for value creation.
2. Cost of AI Implementation
Challenge: Implementing AI across the organization requires substantial investment in infrastructure, talent, and tools. Balancing these costs with profitability targets can be difficult.
Impact: High upfront costs may deter comprehensive adoption or limit innovation.
3. Navigating Regulatory and Export Controls
Challenge: Keeping pace with innovation while striking a balance with governance, and understanding the benchmarks that must be run to comply with partner requirements. Staying on par with competitors and keeping up with governance worldwide, across different regulatory and policy bodies, are additional challenges.
Why must organizations prioritize responsible AI governance and systems in their business?
1. Upholding Trust and Reputation
The company’s hardware forms the backbone of AI systems used globally, making it a key player in the AI ecosystem. If the AI systems powered by its hardware are used irresponsibly or lead to harm, the company may face reputational damage, even if it isn’t directly responsible for the software.
2. Managing Ethical Risks
AI applications often involve sensitive and high-stakes use cases, such as healthcare, autonomous driving, and surveillance. Hardware providers play an indirect but crucial role in how these systems operate. Ensuring responsible AI practices helps mitigate risks like bias, misuse, or unintended harm, reducing the likelihood of negative societal and legal consequences.
3. Meeting Regulatory and Compliance Requirements
Governments and regulatory bodies worldwide are enacting stringent AI regulations (e.g., EU AI Act, data privacy laws) that increasingly hold hardware providers accountable as part of the AI supply chain. Proactively implementing responsible AI governance ensures compliance with existing and emerging regulations, safeguarding the company’s global operations and market access.
4. Addressing Open-Source Model Risks
Open-source AI models are widely accessible and can be misused for unethical purposes, such as deepfakes or misinformation campaigns. Responsible governance in releasing and managing open-source models can establish safeguards against misuse, ensuring these tools are used for innovation and positive applications.
5. Supporting Customer Needs and Expectations
Hyperscalers, OEMs, and enterprise customers increasingly demand responsible AI practices as part of their procurement criteria to meet their own ethical and governance standards.
6. Safeguarding Intellectual Property and Security
Hardware and open-source contributions can be exploited for malicious purposes if not secured appropriately. This includes risks of adversarial attacks, data leaks, or espionage.
7. Ensuring Long-Term Sustainability
AI workloads are resource-intensive, consuming significant energy and materials. Customers and stakeholders increasingly prioritize energy-efficient and sustainable solutions. Incorporating responsible governance helps align the company with global sustainability goals and reduce its environmental footprint.
8. Staying Competitive in a Changing Market
Responsible AI is becoming a differentiator in the marketplace, as organizations and governments prioritize ethical, sustainable, and secure AI solutions. Prioritizing responsible AI governance positions the company as a forward-thinking leader, enhancing competitiveness and long-term success.
What’s a lesson you’ve learned that has shaped your work in responsible AI? Why is this work important to you?
One key lesson I’ve learned is that AI’s pervasive influence means it is no longer confined to niche applications—it is becoming the backbone of nearly every hardware device, software application, and decision-making process across industries. This ubiquity amplifies its transformative potential but also highlights the immense responsibility we have as developers, manufacturers, and providers of AI technology.
AI doesn’t just change the way we do things—it redefines our systems, decisions, and interactions. Whether in healthcare, finance, education, or public safety, AI has the power to enhance efficiency, improve outcomes, and create new possibilities. However, it also carries risks like bias, misuse, and unintended consequences. Responsible AI ensures that, as we innovate, we uphold ethical principles, transparency, and accountability.
This work is important to me because it goes beyond technology—it’s about trust, fairness, and doing the right thing. When we integrate AI into hardware, applications, and systems, we are not just creating tools; we are shaping the world people will live in. Responsible AI ensures that the solutions we build are safe, equitable, and aligned with societal values. AI’s integration into every aspect of life underscores the need for responsibility at every step—from development to deployment. It’s not just good business practice; it’s a moral imperative that aligns with our values, ensures trust, and ultimately drives positive change.
About AMD
Aarti Choudhary comes from the AMD Research and Advanced Development (RAD) team, where she worked on pathfinding and incubating performant, efficient AI computing, and from the Security and Responsible AI program at AMD, which is the foundation of the company’s responsible AI (RAI) program. She is now joining the Global AI Markets team at AMD, which is responsible for expanding AMD’s strategic vision for AI, driving new ecosystem capabilities, and accelerating strategic AI engagements globally across public and private sectors.
About the Responsible AI Institute
Founded in 2016, the Responsible AI Institute (RAI Institute) is a global, member-driven non-profit dedicated to enabling successful responsible AI efforts in organizations. We accelerate and simplify responsible AI adoption by providing our members with AI assessments, benchmarks, and certifications that are closely aligned with global standards and emerging regulations.
Members include leading companies such as Amazon Web Services, Boston Consulting Group, Genpact, KPMG, Kennedys, Ally, ATB Financial and many others dedicated to bringing responsible AI to all industry sectors.
Media Contact
Nicole McCaffrey
Head of Strategy & Marketing
Responsible AI Institute
nicole@responsible.ai
+1 (440) 785-3588