February 21, 2024

AI in Trading: Risks, Opportunities and What’s to Come


While the inception of AI dates back to the 1950s, it wasn't until platforms like ChatGPT took the world by storm in 2023 that generative AI entered the limelight and sparked the great AI debate.

Our recent 'State of Tech Report', surveying senior tech leaders across industries, shows increased interest in AI adoption, with generative AI (both for content creation and coding) ranked as the highest priority, followed by predictive AI and the use of AI for data processing and stream processing. This upsurge in AI adoption prompts us to take a closer look at how AI has impacted the capital markets and where we see the conversation heading.

AI Use in Trading - Overview

AI adoption within the capital markets emerged as far back as the 1980s, with expert systems being used for trading and financial analysis. Utilising rule-based algorithms, these systems were able to analyse patterns in market data, predict stock performance and identify potentially profitable trades.

As technology has advanced, computers have become faster and smarter, data sets more comprehensive, and algorithms more sophisticated. Given AI's ability to comb through large amounts of data and detect hard-to-identify patterns, its increased popularity and adoption amongst both new and veteran traders is no surprise. In particular, algorithmic trading (also known as automated trading), which executes trades with minimal human intervention, is estimated to account for between 60% and 75% of trading on all major global stock markets.

However, AI usage in trading is in no way fool-proof, with market uncertainty and volatility remaining difficult to predict regardless of how much data you have or how many patterns you analyse. As one would expect with any application of technology, utilising AI in trading comes with its own risks and opportunities.


AI Bias

While AI is great at automating processes or analysing data without human supervision, it is reliant on the data it is provided, making it highly susceptible to data quality problems and, more importantly, bias.

By bias, we're not talking about an artificial consciousness developing its own bias (which is a whole different can of worms), but rather the way AI reflects, and sometimes even amplifies, the bias in its human inputs.

To enable AI systems to run, someone has to set up the algorithms and train them on data, which unfortunately also creates opportunities for bias to creep in. For example, the algorithm might be set up to prioritise and value certain data sets over others, or the historical data might itself encode bias, which the AI then learns to reproduce in its decisions. In these circumstances, AI introduced to remove emotional bias in trading may end up reinforcing other forms of bias, in turn perpetuating market inefficiencies and creating a vicious cycle.

Cybersecurity Concerns

Cybersecurity is always a prevalent concern when it comes to technology, and AI is no different. In fact, because of its efficiency, cybercriminals often exploit AI to further their schemes, from automating attacks to creating deepfakes, with 85% of cybersecurity leaders attributing upticks in cybercrime to the use of AI.

The amount of personal and financial data AI trading systems carry makes them a prime target for cyber attacks. Even though these systems are less prone to risks associated with audio or video deepfakes, they do face data poisoning problems. Data poisoning refers to inserting fraudulent data into training datasets in the hope of skewing the AI's ability to make decisions, which can easily lead to false predictions and financial loss in AI trading. And because AI requires so little human intervention to carry out its processes, data poisoning attacks are not easy to detect, and can compromise the AI's integrity and reliability before the issue is even noticed.
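To make the idea concrete, here is a minimal, hypothetical sketch (all figures invented, and the "model" deliberately naive) of how a handful of poisoned training rows can quietly skew a prediction without any obvious error:

```python
# Illustrative sketch of data poisoning: a few fraudulent training rows
# shift a naive model's output, with no crash or warning to notice.

def predict_next_return(history):
    """Naive 'model': predicts the average of past daily returns."""
    return sum(history) / len(history)

clean = [0.01, -0.02, 0.015, 0.005, -0.01]   # genuine daily returns
poisoned = clean + [0.25, 0.30]              # two injected fraudulent rows

print(f"clean prediction:    {predict_next_return(clean):+.4f}")
print(f"poisoned prediction: {predict_next_return(poisoned):+.4f}")
```

The poisoned prediction is wildly optimistic, yet nothing about the pipeline looks broken, which is exactly why these attacks are hard to spot.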


Customisable Efficiency

There are huge amounts of data flowing through stock exchanges every day, contributing to the volatility and constant fluctuation of the stock market. This sheer volume of data would be too much to process without automation, especially for those trading on multiple stock exchanges globally, which is where AI trading tools and platforms come into play.

AI doesn't just analyse real-time data to provide feedback; it can also take historical data and past or current patterns into account for better-informed decision-making. Many existing AI trading platforms also let users customise dashboards based on personal preferences, presenting the desired information in a digestible, at-a-glance format.
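As a simple illustration of combining recent and longer-term data, here is a hypothetical sketch (invented prices, a toy rule rather than a real strategy) of the kind of signal such platforms automate:

```python
# Toy signal: compare a short-term price average against a longer
# historical window -- 'buy' if recent momentum is above trend.

def moving_average(prices, window):
    """Average of the last `window` prices."""
    return sum(prices[-window:]) / window

def signal(prices, short=3, long=6):
    """Return 'buy', 'sell' or 'hold' from a moving-average comparison."""
    s = moving_average(prices, short)
    l = moving_average(prices, long)
    if s > l:
        return "buy"
    if s < l:
        return "sell"
    return "hold"

prices = [100, 101, 99, 102, 104, 106]  # made-up daily closes
print(signal(prices))  # → buy
```

Real platforms layer far richer models on top, but the principle of weighing real-time movement against historical context is the same.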

Cybersecurity Measures

Just as AI can be exploited maliciously by hackers, it can also be used to bolster cybersecurity measures - the classic cat-and-mouse chase but now with shiny new weapons. 

With AI's ability to comb through large volumes of data in a short amount of time, it can provide continuous surveillance to identify cyber threats, preempt future attacks, and pinpoint system vulnerabilities for the cybersecurity team to remedy or prioritise. It can also detect fraud by flagging anomalies in transactions, helping prevent the likes of insider trading and market manipulation. Furthermore, AI can provide an added layer of security through behavioural biometrics: a form of authentication that measures a user's behavioural traits, such as the way they type or use a mouse. These behavioural 'fingerprints' are highly distinctive and far harder to steal or replicate than passwords or devices, making them more reliable and effective.
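As a rough illustration of the anomaly-flagging idea, here is a minimal sketch using a robust, median-based rule (hypothetical transaction amounts; production systems use far richer models than a single statistic):

```python
# Flag transactions that sit far from the median, using the
# median-absolute-deviation (MAD) rule. Unlike a mean/stdev z-score,
# a single huge outlier cannot inflate the baseline and mask itself.
import statistics

def flag_anomalies(amounts, threshold=3.5):
    """Return indices of amounts whose robust z-score exceeds `threshold`."""
    med = statistics.median(amounts)
    mad = statistics.median(abs(a - med) for a in amounts)
    return [i for i, a in enumerate(amounts)
            if mad and 0.6745 * abs(a - med) / mad > threshold]

txns = [120, 95, 110, 105, 98, 102, 5000, 101]  # one suspicious spike
print(flag_anomalies(txns))  # → [6]
```

Real fraud detection considers many features per transaction (counterparty, timing, account history), but the core pattern of surfacing the few records that deviate from the norm is the same.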

What’s next?

As AI adoption continues to rise, the great AI debate will likely remain a hot topic for the foreseeable future, particularly around how to take advantage of this double-edged sword while minimising risk and ensuring data security and integrity.

If you’re looking for more insights into AI or want to join in on the AI discussion, check out our next London Tech Leaders event which will be around all things AI! It will feature an esteemed speaker line-up including David Crawford, Steven Higgon (TAPP), Sophie Valentine (Healios), Tim Gordon (Best Practice AI), Lyubomira Dimitrova (Onfido), and keynote speaker Rachel Coldicutt (Careful Trouble)! 

About the Author
Chaim Li

As our Content Lead, Chaim is currently looking after WeShape's content efforts, from managing our social media profile to creating our insights reports and long form content pieces.

He is a creative copywriter, marketer & D&I enthusiast who is actively working to change the world one story at a time! 
