AI, short for artificial intelligence, refers to the simulation of human intelligence by machines, especially computer systems. Within the expansive realm of AI applications, generative AI has taken the world by surprise, mesmerising with its near real-time, human-like creativity, not to mention the immense financial impetus behind startups pioneering technologies like ChatGPT. These advanced AI systems can generate a wide range of outputs, including text, images, sounds, videos and even computer code, giving rise to numerous ethical, legal and professional challenges.
Unsurprisingly, the finance sector has not remained untouched by the influence of AI and is witnessing substantial transformations. These changes span various domains within the finance sector, including banking, insurance, fund management and legal services. What was once a futuristic concept has evolved into a present-day reality, reshaping these industries in profound ways. To begin this series of articles on AI development within the finance sector, let’s start by understanding how AI came into being and its current state of development.
Early AI development
The journey of AI began at the 1956 Dartmouth Conference, where the term ‘artificial intelligence’ was officially coined. Visionaries like John McCarthy and Marvin Minsky were optimistic about AI’s potential. Early AI programmes like ELIZA, a simple chatbot, and SHRDLU, a virtual block-manipulating programme, offered glimpses into this potential. However, inflated expectations led to a period of reduced funding and interest, known as the ‘AI winter’, in the late 1970s and 1980s. Despite these challenges, AI research continued, leading to significant advancements in speech and video processing in the 1990s.
Pioneering breakthroughs
The turn of the century witnessed AI fulfilling its promise through advancements in neural networks and deep learning. Pioneers like Geoffrey Hinton, Yoshua Bengio and Yann LeCun revolutionised image and speech recognition with deep neural networks. The success of DeepMind’s AlphaGo using reinforcement learning, and the emergence of transformer architectures, which significantly improved performance on language tasks for BERT and generative pre-trained transformer (GPT) models, marked notable milestones in AI development.
Machine learning and large language models
Advances in machine learning and deep learning reshaped information processing in machines. The development of ML tools such as decision trees, support vector machines and random forests in the 1990s and 2000s, and the introduction of convolutional and recurrent neural networks for image recognition and sequence data, respectively, were pivotal. Not long after came large language models such as BERT and GPT, which shifted natural language processing from rule-based computing to large-scale unsupervised learning on vast bodies of text.
High-performance computing and custom AI chips
To deliver output in reasonable time, the computational needs of these models skyrocketed. Thus came the evolution of high-performance computing and purpose-built custom chips, such as Google’s tensor processing units (TPUs), optimised for machine learning tasks, and Graphcore’s intelligence processing units (IPUs), tailored for language models. High-performance computing machines, with massively parallel processing capabilities, tackled large, intricate problems and drove advances in supercomputing and cluster computing, while custom AI chips provided much-needed energy-efficient solutions for AI-specific operations, trading off numerical precision where statistically acceptable without compromising the quality of output.
AI in 2023
For an AI to generate output with near human-like intelligence within an acceptable timeframe, the best training sets are those with high-quality, interconnected data points. Thus, by 2023, AI’s reach had become vast and instrumental in precision-based and data-intensive sectors such as healthcare, the automotive industry and finance. Its uses range from predictive analytics, diagnostics and drug discovery to powering autonomous vehicles and high-frequency trading. However, this omnipresence has also sparked concerns. Issues of bias, fairness and transparency have surfaced, leading to calls for ethical considerations and responsible regulations.