Today’s business environment is more complex than ever. Whether it’s new regulatory requirements or the battle for talent, our customers share a common opportunity: finding new ways to make smarter decisions that lead to better outcomes. With this in mind, Workday is incorporating machine learning (ML) technologies—a subset of artificial intelligence (AI)—into our applications so that customers can make more informed people and business decisions, accelerate operations, and assist workers with data-driven predictions.
At Workday, ML isn’t about supplanting human decision-makers. Rather, ML-fueled applications make predictions that, when combined with human judgment, help inform better decisions. But the success of ML, like any emerging technology, depends upon trust, and that trust will exist only if companies adhere to responsible, ethical practices.
Workday believes ML in the enterprise will fundamentally improve the way we work and live, but in the face of such a profound technological and societal change, it’s vital that we commit to an ethical compass. Ours comprises six key principles that guide how we develop ML for the enterprise responsibly and work to help address its broader societal impact:
We Put People First
Workday always respects fundamental human rights. We apply ML to deliver better business outcomes and help people in their decision-making. Our solutions give customers control over how recommendations are used.
We Care about Our Society
We believe that humans will always be at the center of work. We focus on how ML can align opportunity with talent, and on contributing to the development of an ML-ready workforce.
We Act Fairly and Respect the Law
Workday acts responsibly in our design and delivery of ML products and services, and strives to identify, address, and mitigate bias in our ML technologies. We aim to ensure that ML recommendations are equitable. Our products and services are developed and designed to enable compliance and we are engaged in the policy dialogue around regulation of new technologies.
We Are Transparent and Accountable
We explain to customers how our ML technologies work and the benefits they offer, and we describe the data needed to power them. We demonstrate accountability in our ML solutions and give customers a wide range of choice in how they deploy them.
We Protect Data
Workday’s Privacy Principles apply to all of our products and services, including our ML efforts. We minimize the data used, and embrace good data stewardship and governance processes.
We Deliver Enterprise-Ready ML Technologies
We apply our leading quality processes—with input from customers—when developing and releasing ML technologies. We deliver meaningful ML-powered solutions that help our customers tackle real-world challenges.
But it isn’t enough just to have ethics principles: we are building them into the fabric of our product development and are ensuring we have processes that drive continued compliance with them. We have a long history of this in the privacy space, including privacy-by-design processes as well as third-party audits against our controls and standards.
We are embracing a similar set of ethics-by-design controls for ML, and already have in place robust review and approval mechanisms for release of new technologies, as well as any new uses of data. We’re committed to ongoing reviews of our processes, and evolving them to incorporate new industry best practices and regulatory guidelines.
Above all, as we look forward and as with all our product efforts, we are focused on our customers’ needs and requirements so we can provide new services and technologies that allow them, and their people, to achieve more. By partnering with our customers, we can ensure the ethical development and use of ML in the enterprise.