Advice & The Empathy Factor

by Nick Stewart

The debate around AI is ongoing. Some hail it as the greatest of tools; others see it as the downfall of creativity and individualism, as AI scrapes existing works for its inspiration. In financial services, you may have noticed it in the rise of chatbots and digital services in place of in-person interactions.

The problem with new technology remains that we mere humans do not trust it to have our best interests at heart. While a machine could theoretically arrive at the same conclusion as human reasoning and experience would, it would not share the motivations that drive us.

Research into attitudes towards AI points to three key factors in trusting others: mutual concern, a shared sense of vulnerability, and confidence in competence.[i] Most important to developing rapport is mutual concern. Psychology tells us that we tend not to trust other people unless we believe they have our best interests at heart. Herein lies the rub with AI: no such concern exists. AI does not have the empathy factor, and we (to paint humanity with a broad brush) find that unnerving, unrelatable and ultimately untrustworthy.

You may have heard of the infamous trolley problem, which is generally used to determine whether someone is a deontological reasoner or a consequentialist. It poses the following scenario: you see a runaway trolley headed towards five people, who are tied to the tracks ahead and unable to escape. You happen to be standing by a lever which can divert the trolley onto an auxiliary set of tracks… but if you pull it, the trolley will hit one man on those tracks who cannot be warned in time. Do you save the five and sacrifice the one? What is the best choice when the outcome of both scenarios is undesirable?[ii]

Scholars have been debating this problem since it was posed in 1967. An impossible scenario like this prompts a great deal of human deliberation. If a machine were posed the same problem, however, it would simply pick the outcome it was programmed to favour – no philosophical or moral agony involved. This is because AI works on the logic it knows. Even ‘learning’ AI is bound to follow a programmed logic to serve its purpose.

Logic may be fine for AI in the many applications where it is used as a tool. Ethics and morality, however, remain a quintessentially human domain.

Another issue with trust in the AI sphere is our inclination to lose faith in once-celebrated new technology more easily than we lose faith in people. If your coworker sent an email to the wrong person (provided it didn’t cause too large a problem), it would likely be brushed off. If a bot did the same, you would likely see it as evidence of the technology being inferior and therefore useless. After all, to err is human… so when AI or other new technology errs, we tend to see it as a failure of the concept.

The explosion of digital services powered by AI in the financial sector has been particularly noticeable post-Covid. Many financial providers thinned out their ranks and pushed clients towards online services – more convenient in theory, but also more impersonal, with human contact stripped from these interactions.

Then there’s the fact that most major banks have sold off their advice arms in the past few years. This can be partly attributed to the Australian Royal Commission into banking misconduct, which led to a much firmer focus on client-first products and core services across the industry. In NZ, we also implemented a transition path to much higher capital ratios for the four major (Australian-owned) banks. Essentially, it has become more trouble than it is worth for banks to invite such scrutiny when the financial services sector is only becoming more regulated in every aspect.[iii]

The rise in sophisticated online scams also makes converting to a completely digital process unappealing, even for the most techno-fervent among us. While checks and safeguards can be put in place, little can replace the relationship formed through face-to-face business. This is a stumbling point for most scammers, who can be anyone they want over the phone or by email but rarely have a physical footprint.

There is something to be said for going local in a time of technological advance and digital globalisation. A locally-based fiduciary can provide professional advice in person, suited to your unique goals, situation and timeframe. If you have found your experience with robo-advice less than reassuring, don’t worry – you can seek out human advisers with your best interests in mind, should you wish to have a chat with the real deal about your financial journey.

At the very least, their chat will be more scintillating than a helpdesk bot.

 

· Nick Stewart (Ngāi Tahu, Ngāti Huirapa, Ngāti Māmoe, Ngāti Waitaha) is a Financial Adviser and CEO at Stewart Group, a Hawke's Bay-based CEFEX and B Corp certified financial planning and advisory firm. Stewart Group provides personal fiduciary services, wealth management, risk insurance and KiwiSaver scheme solutions. Article no. 316.

· The information provided, and any opinions expressed in this article, are of a general nature only and should not be construed or relied on as a recommendation to invest in a financial product or class of financial products. You should seek financial advice specific to your circumstances from a Financial Adviser before making any financial decisions. A disclosure statement can be obtained free of charge by calling 0800 878 961 or by visiting our website, www.stewartgroup.co.nz.

 


[i] https://www.dukece.com/insights/what-psychology-tells-us-about-why-we-cant-trust-machines/

[ii] https://www.theguardian.com/science/head-quarters/2017/apr/24/why-are-we-reluctant-to-trust-robots

[iii] https://www.ausbanking.org.au/priorities/royal-commission/