And the RAISE Award nominees are…

Mark your calendars for December 7th, 5:30pm-7:30pm—our annual community gathering RAISE 2022 is just 22 days away!

We’ll be hosting an in-person celebration at the Javits Center in NYC as well as a virtual livestream of the event. Check out our website for more on the RAISE 2022 agenda.

Read on for more details about the event and to find out who we’ve nominated for our annual RAI Institute Leadership Awards!

What is RAISE 2022?

RAISE is the Responsible AI Institute’s annual community event, dedicated to driving positive impact in the field and identifying how we can collectively raise the bar for responsible AI.

RAISE 2022 will include a social segment where attendees can get to know our work, an overview of how the landscape has changed over the past year, and a discussion of why AI standards are crucial to the success of AI adoption and deployment.

To close out the event, we will host our annual RAI Institute Leadership Awards to honor and celebrate the amazing work being done by leaders across fields.

We’ll do so by announcing the winners of the following four awards:

  • Leading Community Organization Award
  • Leading Start-up Award
  • Leading Individual Award
  • Leading Enterprise Award

Our RAISE 2022 Award Nominees are…

Without further ado, here are the nominees for our RAISE 2022 Awards by category! Thank you to each of you for the amazing work you do to improve responsible AI worldwide!

In the coming days, we’ll be sharing more about the incredible work that each nominee is doing, but we encourage you to check out their work now!

Leading Community Organization Award

To recognize an outstanding nonprofit or academic organization that has had a meaningful impact in the responsible AI community through its research, initiatives, open-source projects, or partnerships.

Mila’s AI for Humanity

Socially responsible and beneficial development of artificial intelligence is a fundamental component of Mila’s mission. As a leader in the field, Mila hopes to contribute to social dialogue and the development of applications that will benefit society.

CITRIS Policy Lab, University of California

The CITRIS Policy Lab at the University of California (UC) supports research, education, and engagement with policymakers to promote the responsible development and use of AI. The Policy Lab is leading work with the State of California to guide its responsible AI strategy and with the UC Office of the President to establish the UC Responsible AI Principles and Practices, a first-of-its-kind strategy that could become a national model for policies and practices in higher education.

Data Nutrition Project

The Data Nutrition Project believes that technology should help us move forward without mirroring existing systemic injustice. Their team:

1. Creates tools and practices that encourage responsible AI development

2. Partners across disciplines to drive broader change

3. Builds inclusion and equity into their work

First Nations Information Governance Centre (FNIGC)

The First Nations Information Governance Centre (FNIGC) envisions that every First Nation will achieve data sovereignty in alignment with its distinct world view. They assert data sovereignty and support the development of information governance and management at the community level through regional and national partnerships.

Data for Black Lives

Data for Black Lives is a movement of activists, organizers, and scientists committed to the mission of using data to create concrete and measurable change in the lives of Black people. Through research, advocacy, and movement-building, they support the vital work of grassroots racial justice organizations to challenge discriminatory uses of data and algorithms across systems.

Leading Start-Up Award

To recognize an outstanding start-up organization that has made meaningful contributions to responsible AI through its work. This can include any organization that identifies as a start-up, whether for-profit or not-for-profit, launched within the last few years.

Armilla AI

Armilla AI is a quality assurance platform for models, allowing large enterprises to govern, deploy, and scale their AI/ML systems. Our clients leverage Armilla AI to test, validate, and monitor any and all models across the enterprise in a reliable, consistent, and repeatable manner. We make responsible and trustworthy AI a reality.

Fairly AI

Fairly AI’s mission is to accelerate the broad use of fair and responsible AI by helping organizations bring safer, compliant AI models to market faster. Fairly AI started as an interdisciplinary research project spanning philosophy, cognitive science, and computer science. After extensive product concept and design iterations, Fairly AI was formally incorporated in April 2020 and is now a global operation.

Skinopathy

Skinopathy is developing ground-breaking digital health technology that will forever change how we practice medicine in Canada and abroad. Skinopathy’s mission is to provide accessible, on-demand healthcare to everyone, enabling people to live healthier and longer lives by getting the care they need when they need it.

SkyHive

SkyHive’s mission is to democratize labor opportunities so we can all benefit from a more capable workforce and a more efficient global economy. SkyHive has built the world’s only Quantum Labor Analytics platform to optimize human economies in real time for companies, communities, and countries. Essential for digital transformation, our platform informs and benefits the entire job cycle from the individual worker to the corporation to the global economy.

BrainBox AI

As innovators of the global energy transition, BrainBox AI’s game-changing HVAC technology leverages AI to make buildings smarter and greener. Working together with our trusted global partners, BrainBox empowers building owners to reduce their carbon footprints. BrainBox AI brings sustainability to the built environment to significantly reduce energy consumption and costs.

Leading Individual Award

To recognize an outstanding individual who has led meaningful work in the field of responsible AI.

Tina Lassiter, PhD Student studying AI and Ethics at the University of Texas at Austin

Tina Lassiter is a researcher, legal expert, and communicator in the AI industry. Tina recently co-authored a paper on AI in hiring that was presented at the 2021 Conference on AI, Ethics, and Society (AIES). Her work encompasses privacy issues, human-AI interaction, and AI ethics. Her KUNGFU.AI project researched how to implement ethics in the technology sector.

Heather Benko, Senior Manager at the American National Standards Institute (ANSI)

Heather Benko is a senior manager at the American National Standards Institute (ANSI), where her work includes serving as Committee Manager for the International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC) Joint Technical Committee (JTC) 1, Subcommittee 42 on Artificial Intelligence. JTC 1/SC 42 develops standards for the entire AI ecosystem.

Lama Nachman, Intel Fellow and Director of Human & AI Systems Lab in Intel Labs

Lama Nachman is an Intel Fellow and Director of the Human & AI Systems Lab in Intel Labs. Her research focuses on creating contextually aware experiences that understand users through sensing and sense-making, anticipate their needs, and act on their behalf. Lama’s expertise lies in the areas of context-aware computing, multi-modal interactions, sensor networks, computer architecture, and embedded systems.

Reva Schwartz, Research Scientist in the Information Technology Laboratory (ITL) at National Institute of Standards and Technology (NIST)

Reva Schwartz is a research scientist in the Information Technology Laboratory (ITL) at the National Institute of Standards and Technology (NIST) where she serves as Principal Investigator on Bias in Artificial Intelligence. She has advised federal agencies about how experts interact with automation to make sense of information in high-stakes settings and advocates for a socio-technical systems approach to AI practice, including human-centered design processes and evaluating AI in real world contexts.

Elizabeth Adams, AI Ethics Advisor at Stanford HAI

Elizabeth Adams is a technology integrator working at the intersection of cybersecurity, AI ethics, and AI governance, focused on ethical tech design. She is a member of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, building global standards for AI Nudging & Emotion AI. Elizabeth has also effected change in the local Civic Tech & Tech Design Racial Equity framework in Minneapolis.

Renée Cummings, Assistant Professor and Data Activist in Residence at the University of Virginia’s School of Data Science

Renée Cummings joined the School of Data Science in 2020 as the School’s first Data Activist in Residence. Her research examines the impact of AI on criminal justice, specifically in communities of color and among incarcerated populations. Cummings specializes in implicit bias, AI ethics, and best-practice criminal justice. She founded Criminal Justice Intelligence Inc. and Urban AI and is an East Coast Regional Leader for Women in AI Ethics.

Matissa Hollister, Assistant Professor at McGill University, RAII Employment Working Group Co-Chair

Matissa Hollister is an assistant professor in the Organizational Behavior Area at the Desautels School of Management at McGill University, interested broadly in patterns of employment and inequality. Matissa also serves as Co-Chair of the RAI Institute / WEF Global AI Action Alliance’s Automated Employment Working Group, helping develop RAII’s certification program to assess use cases in automated employment.

Barbara Cosgrove, Vice President and Chief Privacy Officer at Workday, RAII Employment Working Group Co-Chair

Barbara Cosgrove is vice president, chief privacy officer at Workday. She has extensive expertise in leading data protection, ethics, and compliance programs: global data privacy programs, implementation of technology compliance standards, and development of machine learning ethics-by-design frameworks. Barbara is also Co-Chair of the RAI Institute/WEF Global AI Action Alliance’s Automated Employment Working Group.

Karen Silverman, CEO and Founder of The Cantellus Group

Karen Silverman is a leading global expert in practical governance strategies for AI and other frontier technologies. She is CEO and Founder of The Cantellus Group, which advises governments, startups and Fortune 50 companies on governing cutting-edge technologies in a rapidly changing policy environment. Karen is a WEF Global Innovator and sits on its Global AI Council doing ongoing work on AI, data, and cybersecurity issues.

Leading Enterprise Award

To recognize an outstanding company or enterprise that has made meaningful contributions to responsible AI adoption, solutions, or research.

Boston Consulting Group (BCG)

Boston Consulting Group is a global consulting firm that partners with leaders in business and society to tackle their most important challenges and capture their greatest opportunities. Our success depends on a spirit of deep collaboration and a global community of diverse individuals determined to make the world and each other better every day.

Hugging Face

Hugging Face is on a mission to democratize good machine learning, one commitment at a time. They aim to recruit people who have generalist and diverse mindsets. Hugging Face thrives on ‘multi-disciplinarity’ and is passionate about the full scope of machine learning, from science to engineering to its societal and business impact.

Westpac

Westpac is Australia’s first bank and oldest company, one of four major banking organisations in Australia and one of the largest banks in New Zealand. Westpac provides a broad range of consumer, business and institutional banking and wealth management services through a portfolio of financial services brands and businesses.

Jackson

Jackson is committed to reducing the complexity of retirement planning. Their retirement products, financial know-how, history of award-winning service and streamlined experiences strive to reduce the confusion that complicates customer plans.

ATB Financial

ATB was started to help Albertans through tough economic times. They’ve since grown from one small Treasury Branch to become the largest Alberta-based financial institution. ATB has helped transform people’s understanding of what banking can, and should, make possible. For the past seven years, ATB has been named a top employer by Great Place to Work Canada.

Vote Now!

To help determine our award winners, we need your help!

We’re asking our community to vote for which individual, community organization, start-up, and enterprise you think most deserves recognition. You can cast your vote using the voting form, and be sure to share it with a friend!

Voting closes at midnight EST on Friday, November 25th, so please submit your ballot before then! And stay tuned for more updates from us about the event!
