The Equipment Leasing and Finance Association’s Accounting & Technology Conference featured a breakout session titled Leading the Future of Work – Considering Unconscious Bias in Design, Data & Technology. Led by Stephanie Case, global chief human resources officer at DLL, the discussion focused on delivering a more inclusive user experience for all by addressing unconscious bias.
Bias is defined as a prejudice in favor of or against a thing, person or group compared with another, usually in a way that’s considered to be unfair. Biases may be held by an individual, group or institution and can have negative or positive consequences. Not limited to ethnicity or race, bias can exist toward any social group whether that be based on age, gender, gender identity, physical abilities, religion, sexual orientation, weight, location, income and more.
Bias exists in two forms. The first is conscious bias (also known as explicit bias), which is manifested in the words, actions and thoughts that affect other groups. The second is unconscious bias, which manifests in social stereotypes about particular groups of people that develop outside of our conscious awareness.
Everyone holds some level of unconscious thoughts and ideas about social and identity groups, as these biases stem from our innate tendency to organize social worlds by categorizing them.
Unconscious bias is far more prevalent in our society than conscious bias and can form in multiple ways for a litany of reasons, often without our recognition and with unintended consequences. Human flaws, however unavoidable, are expected, but it’s a different story when those flaws find their way into programming code, laws and legal proceedings, and products and services.
Identifying unconscious bias is the key to eliminating it, according to Keelie Fitzgerald, senior vice president of marketing at Odessa.
“There are three things to consider from a business perspective when concentrating on reducing implicit biases: systems, processes and data,” Fitzgerald says. “If you work to improve a data set but don’t have adequate processes in place to regularly audit the effectiveness of ongoing data collection and application of automated outcomes, then very quickly you will find yourself on the back foot.
“At Odessa, the primary question we ask ourselves as we build solutions for the market is who are we engaging with and how can that experience be as equitable and enjoyable as possible? Then, we try to be humble about what we don’t know. Are there users that are experiencing something differently? Let’s ask them! Is there a broad enough cross-section of people involved? How do we measure that? The amount of diversity on the teams building and auditing your technology will directly inform how well they can hypothesize the broadest set of outcomes and hopefully reduce the presence of bias.”
An issue inherent in artificial intelligence learning and algorithmic data collection and prediction is that the majority of user data is ‘conveniently provided data,’ meaning data that existed before the notion to use it for a specific purpose and therefore doesn’t account for every parameter of that purpose (such as race, gender or income). Lisa Nowak, director of platform product management at Solifi, says trust is the foundation on which most industries are built.
“We all have some degree of unconscious bias (as does historical data), so building a culture where this is accepted and transparent helps us all drive toward more effective applications of technology,” Nowak says.
Divulging details on ways equipment finance companies can objectively approach this inherent bias, Nowak suggests a few key actions.
So how do we eliminate unconscious bias from the equipment finance industry?
“We will always have biases, so it’s not necessarily ‘how do we eliminate biases’ but instead ‘how do we keep our biases in check’ through robust systems and processes designed to uncover biased practices,” Fitzgerald says. “It is critically important to ensure our teams are sufficiently diverse. Only then are we able to affect an outcome that considers a larger breadth of worldviews and experience.”
“To correct bias within data requires a modified construct for assessing and governing risk,” Nowak says. “And we in equipment finance are experienced in developing and governing risk models.”
Nowak says combating bias in AI should occur in the design phase and should include audits and conversations designed to detect and eliminate bias. Data itself should be scrutinized for the highest relevance and quality, including the data sets fed into the model and those used to test its efficacy. Acknowledging algorithmic risks, defined as systemic or repeated errors in a computerized system or data set, and critically interpreting outputs are important as well because no stone should be left unturned. Nowak also suggests that stakeholder diversity will directly reduce bias potential.
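One way such a design-phase audit can be sketched is a disparate-impact check: compare a model’s approval rates across groups and flag gaps for human review. The data, group labels and the “four-fifths” threshold below are illustrative assumptions, not a prescribed method from the article.

```python
# Hypothetical design-phase bias audit: compare a model's approval rates
# across groups and flag disparate impact using the common "four-fifths"
# rule of thumb. Data and group names are illustrative only.

def approval_rates(decisions):
    """decisions: list of (group, approved) tuples -> approval rate per group."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(decisions):
    """Ratio of the lowest group approval rate to the highest.
    Values below 0.8 are a conventional red flag worth investigating."""
    rates = approval_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Illustrative audit data: (applicant group, model approved?)
sample = [("A", True)] * 80 + [("A", False)] * 20 \
       + [("B", True)] * 50 + [("B", False)] * 50

ratio = disparate_impact(sample)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.50 / 0.80 = 0.62
if ratio < 0.8:
    print("Flag for review: outcomes differ materially across groups.")
```

A check like this doesn’t prove or disprove bias on its own; it surfaces disparities so the diverse review teams Nowak and Fitzgerald describe can investigate the cause.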
The equipment finance industry faces certain unique challenges in utilizing and maximizing the potential of data collection as well as automated processes. Fitzgerald says the sheer gravity of actions taken within the industry alone is enough reason for every precaution to be taken to ensure bias is eliminated early and consistently.
“When you look at the application of these capabilities to data sets and how that impacts the financial sector, there can be significant consequences,” Fitzgerald says. “For example, and circumstances like this have certainly been litigated, if your data set collects zip code and your algorithm starts to draw correlations between zip code and creditworthiness, even if that was not by design, you can be the owner of a process or practice that is biased. While a regulatory body to specifically consider bias in technology may not quite have taken shape yet, it is important to self-govern and be critical of outcomes as you start to implement more automation.”
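Fitzgerald’s zip-code example describes proxy bias, which can be demonstrated with a small simulation. The groups, zip codes and correlation strength below are synthetic assumptions made purely for illustration: the scoring rule never sees group membership, yet its outcomes still split along group lines because zip code and group are correlated in the historical data.

```python
# Hypothetical illustration of proxy bias: the model never receives the
# protected attribute, only a zip code, yet outcomes still diverge by
# group because zip code correlates with group in the synthetic data.

import random

random.seed(42)

def sample_applicant():
    group = random.choice(["X", "Y"])
    # Assumed correlation: 90% of group X lives in zip 10001,
    # 90% of group Y lives in zip 20002.
    if group == "X":
        zip_code = "10001" if random.random() < 0.9 else "20002"
    else:
        zip_code = "20002" if random.random() < 0.9 else "10001"
    return group, zip_code

# A "neutral" rule that learned zip code predicts repayment.
def approve(zip_code):
    return zip_code == "10001"

by_group = {"X": [], "Y": []}
for _ in range(10_000):
    group, zip_code = sample_applicant()
    by_group[group].append(approve(zip_code))

for group, outcomes in sorted(by_group.items()):
    rate = sum(outcomes) / len(outcomes)
    print(f"Group {group}: approval rate {rate:.0%}")
# Approval rates diverge sharply even though group was never an input.
```

This is why Fitzgerald’s advice to self-govern matters: removing a protected attribute from the inputs does not remove the bias if a correlated feature remains.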
“As in all healthy relationships between people and technology, we must build trust and confidence with customers, stakeholders and our industry by effectively managing risks associated with AI creation, use and bias,” Nowak says.
To understand and drive trust, Nowak believes building it in as a design principle is paramount. Where relevant, Nowak suggests collaborating with external partners on regulation and best practices for ethical and non-biased AI and data outcomes.
“We all benefit from the technology if it can be more broadly applied in the future, driven by trust principles,” Nowak says.
Embracing industry-level and anonymized data through mechanisms such as public data sets, partnerships, consortiums, open-data platforms and marketplaces, and using that data when building systems, processes and algorithms, is another way to reduce bias because these mechanisms tend to yield broader, more representative and more widely trusted data.
“In building trust, it’s important to drive the right level of transparency around how a data or AI decision is made,” Nowak says. “While the technology is innovating rapidly, its application should be ‘explainable.’ I’ve heard this described as the model should be comprehensible by the person on the street. It’s important to remember the goal isn’t to create a black box of decisions or outcomes. Rather, we want to ensure higher quality decisions and outcomes that have considered the potential impacts or implications of unconscious bias.
“In designing the solution, we must understand where needed data resides, the human needs and implications around it, the potential blind spots or gaps in its use cases or collection historically, all of which can feed into how we utilize it. It’s important to consider the AI governance structure and how it may need to be modified to effectively oversee the use of AI. To provide diverse insight, involve a diverse group of stakeholders to set objectives, test and govern. Include end users and groups they interact with to understand their impact and perceptions.”
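Nowak’s point that a model should be ‘explainable’ rather than a black box can be sketched as a transparent, points-based scorecard whose output includes the factors that drove the decision. The factors, weights and threshold below are invented for illustration and do not represent any real underwriting model.

```python
# Hypothetical "explainable" credit decision: a transparent scorecard
# that reports the factors behind each outcome as plain reason codes.
# All weights, factors and thresholds are illustrative assumptions.

WEIGHTS = {
    "years_in_business": 4,      # points per year, capped at 10 years
    "on_time_payment_pct": 0.5,  # points per percentage point
    "debt_to_income_pct": -0.6,  # penalty per percentage point
}
APPROVAL_THRESHOLD = 50

def score_with_reasons(applicant):
    contributions = {
        "years_in_business": WEIGHTS["years_in_business"]
            * min(applicant["years_in_business"], 10),
        "on_time_payment_pct": WEIGHTS["on_time_payment_pct"]
            * applicant["on_time_payment_pct"],
        "debt_to_income_pct": WEIGHTS["debt_to_income_pct"]
            * applicant["debt_to_income_pct"],
    }
    total = sum(contributions.values())
    # Reason codes: factors ranked by how strongly they moved the score.
    reasons = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return total >= APPROVAL_THRESHOLD, total, reasons

approved, total, reasons = score_with_reasons(
    {"years_in_business": 3, "on_time_payment_pct": 92, "debt_to_income_pct": 35}
)
print("approved:", approved, "score:", round(total, 1))
for factor, points in reasons:
    print(f"  {factor}: {points:+.1f} points")
```

Because every decision carries its ranked contributing factors, a declined applicant or an auditor can see what drove the outcome, which is the ‘comprehensible by the person on the street’ standard Nowak describes.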
Ian Koplin is an editor of Monitor.