Consider AI’s “Personality Traits” Before Adding It to Your Team

by Scott Nelson

Scott Nelson is the president and chief technology officer of Tamarack Technology. He is an expert in technology strategy and development including AI and automation as well as an industry expert in equipment finance. Nelson leads the company’s efforts to expand its impact on the industry through innovation using new technologies and digital transformation strategies. In his dual role at Tamarack, Nelson is responsible for the company’s vision and strategic planning as well as business operations across professional services and Tamarack’s suite of AI products. He has more than 30 years of strategic technology development, deployment and design thinking experience working with both entrepreneurs and Fortune 500 companies.



Hollywood loves a good team story – but what about AI as the newest “teammate”? Scott Nelson, president and CTO of Tamarack Technology, explores the idea of AI not as a cold, calculating tool but as a peer with unique “personality traits” that can transform how teams operate. From AI’s tireless consistency to its innovative ability to challenge assumptions, this article examines how empathy and understanding AI’s strengths and weaknesses can unlock its potential as a true collaborator.

Hollywood loves teams – Hoosiers, Ocean’s 11, Remember the Titans, Miracle, That Thing You Do and Invictus are a few of my favorites. Even superheroes, the ultimate unique individuals, get team movies like The Avengers. Yet even though they have evolved from being cast only as villains, AI characters are still misunderstood and never seem to fit in. AI is anthropomorphized more than any other technology, even automobiles – sorry, Knight Rider. But from the HAL 9000 to I, Robot to the “blue shirt guy,” AI characters are always on their own.

Hollywood’s storytellers seem to have missed a key opportunity of anthropomorphism: empathy. Teams are built with empathy: empathy for each member as well as for the collective. Without it, on-screen AI always works alone. While many view AI as a threat to their professional lives, the more optimistic among us see AI becoming “a new member of the team” who will make us better.

We all know that understanding a new member’s personality traits, strengths and weaknesses is critical in designing a role for that member to play on the team. While anthropomorphizing AI is entertaining and sells tickets, it can also be constructive when adding it to an organization. Just as we do with a new human teammate, we can consider what AI likes to do and what AI is good at. When compared to its human teammates, what are AI’s strengths and weaknesses? What is AI not able to do? And where does the human team need help with tedious and/or deeply analytic tasks? Putting AI in human form can help us empathize and build stronger teams with the technology.

AI Doesn’t Have Bad Days

AI is built on prediction models derived from the data records of past outcomes and decisions like the ones to which the AI will be applied, such as Approve|Decline, Payment Delinquency and Contract Term/Buyout. AI models are basically just math, so they can’t have a bad day or feel bad about a decision. Further, once a model is trained on a defined set of available data inputs and set into the workflow, the AI cannot be influenced or distracted by arbitrary or unrelated conditions unless by design. A human underwriter might feel better about difficult applications on a sunny day or after a favorite NFL team wins a big game. But most AI underwriting models don’t see either sunshine or NFL scores.
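To make the idea concrete, here is a minimal sketch in Python of a predictor trained on records of past decisions. The column names, figures and model choice are hypothetical illustrations, not anyone’s production underwriting model:

```python
# Minimal sketch: an Approve|Decline predictor trained on past decisions.
# All column names and data are hypothetical illustrations.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Data records of past outcomes and decisions
history = pd.DataFrame({
    "paynet_score":      [712, 655, 748, 601, 690, 725, 580, 660],
    "years_in_business": [5, 2, 12, 1, 7, 3, 1, 4],
    "debt_to_income":    [0.31, 0.55, 0.22, 0.61, 0.40, 0.28, 0.70, 0.45],
    "approved":          [1, 0, 1, 0, 1, 1, 0, 1],  # the past decision
})

X = history[["paynet_score", "years_in_business", "debt_to_income"]]
y = history["approved"]
model = LogisticRegression().fit(X, y)  # "basically just math"

# The model has no moods: the same application always scores the same,
# and sunshine or NFL scores never enter the calculation.
new_application = pd.DataFrame([[705, 2, 0.35]], columns=X.columns)
print(model.predict_proba(new_application))  # [P(decline), P(approve)]
```

The point of the sketch is the last comment: nothing outside the defined inputs can move the score unless someone designs it in.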

The classic team of Mr. Spock and Dr. McCoy is a winner.
What to do:

An AI team member will be durable and consistent in its efforts. Early adopters of AI predictors often use AI to recommend or advise rather than decide, letting it increase the breadth of analysis while a human decision-maker balances any emotion in the decision. Whether we like it or not, AI is non-emotional to the point of “not caring.” Think of the classic team of Mr. Spock and Dr. McCoy. Mr. Spock’s logic is durable and consistent, but Dr. McCoy’s passion brings emotion and empathy that can address subtleties that are sometimes critical to a good decision. The team combination is a winner.

  • Use AI in the context of checks and balances for your human teams. An objective AI predictor can help with decision workflows that have high variance and require empathetic context, like story-credit underwriting.
  • Leverage the objectivity of AI, but don’t ignore the value of human awareness, instinct, and judgment in more complex decision-making environments. AI is very good at following rules and considering massive data inputs, but subtlety and “minor rule interpretation” is not its strength.
  • Consider carefully whether to include external influences in AI models. Don’t exclude outside context, but make sure you understand the scope of the data you are allowing, e.g., labor reports, GDP announcements, Dow Jones indexes, weather, etc.
  • Prudent AI implementations have guardrails both on the authority granted to an AI automation and on the pace at which new learnings are “released to the wild.” Think of an implementation in the framework of onboarding a new, less experienced human team member. Governance and trust grow with good results.

AI Can’t Take Anything for Granted

Taking something for granted requires a narrow focus and confidence, often overconfidence, in the permanence of a particular assumption. The teenager assumed that mom and dad would always provide whatever transportation he needed. When he graduated from college, he quickly learned that he had been taking his access to a car for granted.

AI makes an assumption every time it finds and uses a pattern, but those patterns are found within large expanses of data coming from many sources. Confidence, and more specifically overconfidence, allows one to ignore the input or opinions of others before deciding. But confidence is a feeling, an opinion of one’s performance against others. AI can be neither overconfident nor unsure – it knows what it knows and makes decisions using the mathematical outputs of its training. As such, AI cannot take any one input or simple combination for granted as “an experienced human expert” might.

Consider the pattern “PayNet score >700 leads to good business outcomes.” An AI predictor will not take a PayNet score for granted because its training model requires it to also consider and evaluate a wide range of additional credit inputs that put a credit score into proper context. An overextended borrower with a 700 PayNet score is probably not a good risk. Similarly, not all startups have bad credit. Our survey of over 3,000 borrowers with less than three years in business showed 85% had no delinquency.

Beyond avoiding the risk of expansive assumptions, not taking things for granted is a strength of innovators. Innovators often solve a problem in new ways that others assume will not work as well as the status quo. The story of Move 37 is an example of how AI’s trait of not taking things for granted can lead to surprising new outcomes – innovation.

Perhaps Move 37 is the perfect example of what we can expect in the coming decades of AI.

Beating a human at Go has always been viewed by AI developers as more challenging than winning at chess because the number of possible next moves after each turn is so large that the usual computational methods are not practical. But in 2016, a new AI-based algorithm known as AlphaGo engaged the world Go champion. During the contest, AlphaGo made a move that human observers, and its competitor, assumed was a mistake because it violated certain move conventions. But AlphaGo had a plan, based on experimentation it did during training when it was unconstrained by those assumptions. The plan was a winning strategy, with AlphaGo victorious in four of five games. Today, Move 37 is studied and used by human champions.

Assumptions are a fact of life in business. They are necessary when one doesn’t have the data needed to identify unknowns. Sometimes they are required due to time constraints. But when assumptions mature into “taking things for granted,” the discussions and experiments that might lead to new learnings and innovation come to an end. This is an area where AI can help a human team.

What to do:
  • Look for ways to have AI help with innovation on core business problems like risk management. Use AI to find and try the unexpected. Allow AI to challenge both policy and expertise in managed risk processes.
  • Use AI to help the team find patterns but validate the value and durability of the patterns before they expose the business to assumptive risk.
  • Do not oversimplify ML models when building and integrating AI predictors. Keep as many inputs as possible to cross-check the patterns identified for situational value to the desired outcome.
  • Find ways to enable your AI to try something new. Design to learn, even through trial and error, but without fatal consequences.

AI Doesn’t Have Judgment

The application of AI in equipment finance has one primary objective – improving decision making. “Improve” takes the form of better, faster and less expensive decisions. This reality uncovers the most important personality trait to consider when integrating AI with your team.

A decision has four key components – action, judgment, prediction and outcome. It is not a coincidence that a Prediction Machine, the functional unit of AI described by Agrawal, Gans and Goldfarb in their seminal AI book Prediction Machines, has the same four functional blocks.

But Agrawal et al. point out that judgment is a uniquely human characteristic: “Prediction machines don’t provide judgment. Only humans do, because only humans can express the relative rewards from taking different actions.”

AI does not have judgment; it can only measure and choose alternatives as its programming instructs. When the outcomes of a prediction are quantified, either by prioritized classifications or a numeric result, making the best choice can be programmed as a straightforward logical or mathematical calculation.
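A minimal sketch of that idea, with hypothetical payoff numbers: the model supplies a probability, humans supply the “relative rewards from taking different actions,” and the best choice falls out of a simple expected-value calculation.

```python
# Sketch: prediction supplies probabilities; human judgment supplies payoffs.
# The payoff figures are hypothetical and would come from the business team.

p_no_delinquency = 0.80   # the model's prediction for one application

# Judgment, expressed as the relative reward of each action/outcome pair
payoffs = {
    ("approve", "pays"):      1_000,   # margin earned on a good contract
    ("approve", "defaults"): -10_000,  # loss on a bad one
    ("decline", "pays"):          0,   # forgone deal
    ("decline", "defaults"):      0,
}

def expected_value(action: str) -> float:
    return (p_no_delinquency * payoffs[(action, "pays")]
            + (1 - p_no_delinquency) * payoffs[(action, "defaults")])

best = max(["approve", "decline"], key=expected_value)
print(best, {a: expected_value(a) for a in ["approve", "decline"]})
# approve: 0.8*1000 + 0.2*(-10000) = -1200, so decline (EV 0) wins here
```

Change the payoffs and the same 80% prediction yields a different decision: the judgment, not the math, is doing the deciding.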

But the world is rarely black and white or deterministic – every business event has a distribution of outcomes because they all involve humans. Sports prediction engines are a good example. No sports betting model is always right because human behavior, while statistically describable, is not deterministic. And an AI predictor has no way to select the underdog in a game where the probability of a win vs. a loss is 51% to 49%.

Every decision requires judgment, and AI cannot decide outside of or contrary to its construction. This is where AI must have the help of its human teammates. They can define and judge the value of future states or outcomes. AI can only be deployed well if it can learn good judgment from its human teammates.

What to do:
  • Leverage the value of your teams’ expertise and judgment by training AI on the decision data that they create and test predictors against their judgment before deploying.
  • Lean on your experts to design judgment policies built on AI predictor probability distributions for multi-state outcomes like delinquency, lender matches, approve | decline, etc. They will know which is a better choice for your business – approving at a 75% probability of no delinquency or denying all deals with greater than a 5% probability of loss. Or both, as sketched in the code after this list.
  • Make sure your team, human and AI, is always on the lookout for outliers and contradictions in the data. These are the cases where judgment will be both critical to resolving the situation and trained by the decision experience.
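Here is a minimal sketch of what such a judgment policy might look like in code, using the hypothetical thresholds from the bullet above (your experts would set the real ones). Note what happens when both rules apply at once:

```python
# Sketch of a human-designed judgment policy layered on an AI predictor.
# Thresholds are the hypothetical examples from the text, not recommendations.

def judgment_policy(p_no_delinquency: float) -> str:
    approve_rule = p_no_delinquency >= 0.75        # approve at >=75% P(no delinquency)
    decline_rule = (1 - p_no_delinquency) > 0.05   # deny at >5% P(loss)
    if approve_rule and not decline_rule:
        return "approve"
    if decline_rule and not approve_rule:
        return "decline"
    return "refer_to_human"  # the rules conflict: human judgment decides

for p in (0.97, 0.85, 0.60):
    print(f"P(no delinquency)={p:.2f} -> {judgment_policy(p)}")
# 0.97 -> approve, 0.85 -> refer_to_human, 0.60 -> decline
```

The gray zone in the middle, where the two expert rules disagree, is exactly where the outliers and contradictions surface and where the human teammate earns their keep.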

No one is arguing anymore about whether AI is going to change our world. ChatGPT has embedded itself in almost every news stream, and AI is no longer just the subject of movies – it’s writing and illustrating them. Anyone over the age of 20 has first-hand experience with technology change – the iPhone was released in 2007. The human-like intelligence of AI makes this change a little more emotional and a little more uncertain. But those same human-like characteristics can help us integrate it into our organizations and our teams if we think about it as more of a peer – if we empathize.

Applying empathy to technology may be a big ask, but Hollywood has given us a range of examples from which to draw. The HAL 9000 may not be huggable, but its ability to analyze a situation would be reliable. Team building, like judgment, is a uniquely human practice. Apply a little human empathy to both your new AI partner and the existing team to match personality traits and build a higher-performing organization.
