The early development of Artificial Intelligence (AI) in the latter half of the twentieth century was marked by limited, hand-crafted systems and fluctuating perceptions of the field's potential. Early research explored a range of paradigms – including symbolic, neural and probabilistic approaches – but was constrained by severe hardware and data limitations. Key technological advances, such as the invention of microchips, GPUs and later TPUs, significantly enhanced computational capacity, enabling more complex AI experimentation. Concurrently, the proliferation of digital data through the internet addressed longstanding bottlenecks in data availability. The most transformative shift, however, came from architectural innovations in neural networks, culminating in the deep learning revolution. This unfolded in two phases: the emergence of Recurrent and Convolutional Neural Networks, followed by the development of transformer-based models, which underpin today's Large Language Models (LLMs).