New Tech, Same Story: Be Wary of the Hype

Trendy venture capital picks do not solid investments make.

AI has come a long way. But is it a good investment?

By Nick Stewart (CEO, Financial Adviser)

For all those tired of hearing about crypto, you’ll be pleased to hear there’s a new flavour-of-the-month investment to talk about: AI.

It’s exciting when new tech launches. Artificial intelligence has been around in some form for quite some time now, but the latest chat AI programmes are gaining traction in the media. ChatGPT prompted Google to issue a “code red” over fears it could shake up its search business. For most of January, ChatGPT even surpassed Bitcoin – the usual lightning rod for trending tech – in search terms. AI chatbots are being hailed as the latest brutally efficient disruptors in a long line of industrial revolutions, with the potential to bring about job losses as they make some human efforts obsolete.[i]

It can be tempting to jump into new and shiny things. Everyone wants to be the person who got in on the ground floor and rode it all the way up. AI has been around in the venture capital world for a while, though it’s only now getting buzz among regular folk. There are startups galore – generative AI, the area ChatGPT specifically falls under, saw a 425% uptick in venture capital investment between 2020 and 2022.[ii] Now it’s out there in the public zeitgeist, and everyday investors may be wondering: ‘Is this something I should buy into?’

Anything being hailed as a disruptor is usually something to be wary of. It’s high risk, and as we’ve seen historically with Bitcoin and the NFT space, often largely unregulated as innovation and growth outpace the legal framework. This means that should things turn sour, it would be hard to hold anyone liable... at least until the law catches up.

Then there are the existing legal implications of AI-generated content. It opens up an interesting debate around copyright law, which usually states that the creator of a work is its first owner. If the creator isn’t a person, who owns the content? Or does ownership automatically fall to the creators of the creator – whoever programmed the AI? And if the opposite happens and an AI programme generates content which infringes on someone else’s copyright, who is at fault?[iii]

The Turing Test is a well-known proposal from the 1950s, when people were first questioning whether machines could think: could a machine play the imitation game and successfully fool a human into thinking they were talking to another human? It’s a standard that still comes up as ethical discussions around AI use circulate. While AI can’t think like a human brain, many systems can pass the Turing Test – and ChatGPT is fluent enough to do so, unless you ask it a question it cannot answer and get its deliberately bot-like canned messaging.[iv]

We also have to keep in mind that while AI is capable of learning, it still has limitations via the parameters set by its masters. While these can be well-intentioned (such as ChatGPT’s restrictions on producing content it deems offensive – like when a user experimented and tried to generate a poem admiring Donald Trump[v]), it does mean AI content is not an unbiased source of information at its core. It’s modelled by its makers, for better or worse. ChatGPT seems to be pro-Western, left-leaning, and controversy-minimising.[vi]

This particular limit might be a cautionary measure in response to Microsoft’s disastrous Tay project, an artificial intelligence bot that was designed to mimic an American teen girl on the internet. Tay interacted with Twitter users and, quite notoriously, started learning to copy and create the antisemitic, racist and misogynistic content it saw from other users – including tweets supporting Hitler. Another example of how AI is only as good as its algorithm...[vii]

Admittedly, not many things on the internet can be trusted as sources of unbiased information. And some advancements in AI may be welcome... I’m sure we’ve all had the same frustrating experiences with online chatbots failing to understand seemingly simple customer service queries.

So, should you be jumping at the opportunity to be part of the latest tech wave? Let’s quickly run through the facts. The AI industry is largely unregulated, hugely overhyped, and still in relative infancy. It’s a huge risk. If you really can’t live without being on the pulse, that’s your decision as an investor – but I wouldn’t bet the house on yet another venture capital gamble.

If you’re looking to grow or protect your wealth, you are much better off following philosophies which subscribe to scientific, evidence-based practices. And if you question your current approach or want to get into investing but are stuck on where to start, get in touch with a trusted local (non-AI) fiduciary to discuss your options.

At the very least you’ll be talking face to face with a real, human person.


  • Nick Stewart (Ngāi Tahu, Ngāti Huirapa, Ngāti Māmoe, Ngāti Waitaha) is a Financial Adviser and CEO at Stewart Group, a Hawke's Bay-based CEFEX-certified financial planning and advisory firm. Stewart Group provides personal fiduciary services, Wealth Management, Risk Insurance & KiwiSaver scheme solutions.

  • The information provided, or any opinions expressed in this article, are of a general nature only and should not be construed or relied on as a recommendation to invest in a financial product or class of financial products. You should seek financial advice specific to your circumstances from an Authorised Financial Adviser before making any financial decisions. A disclosure statement can be obtained free of charge by calling 0800 878 961 or visit our website, www.stewartgroup.co.nz

[i] https://time.com/6252404/mira-murati-chatgpt-openai-interview/

[ii] https://finance.yahoo.com/news/ai-quickly-becoming-hottest-startup-164830123.html

[iii] https://www.lexology.com/library/detail.aspx?g=71298ed8-482e-446d-8b45-148d517ddeda

[iv]

[v] https://www.brookings.edu/blog/techtank/2023/02/07/building-guardrails-for-chatgpt/

[vi] https://marginalrevolution.com/marginalrevolution/2022/12/what-are-the-politics-of-chatgpt.html

[vii] https://dailywireless.org/internet/what-happened-to-microsoft-tay-ai-chatbot/