Organised in association with the British Academy, this year's Anna Morpurgo Davies lecture will be held at the British Academy, 10-11 Carlton House Terrace, London SW1Y 5AH; it will also be broadcast online.
Attendance is free, but registration is required whether you attend in person or online; please register using this link.
Like all ordinary meetings of the Society, the lecture will commence at 4:15pm. Instead of the usual tea beforehand, this lecture will be followed by a drinks reception.
Large Language Models have shown remarkable abilities in natural language processing, tempting many to speak of them as if they used and understood language as humans do. However, doing so overlooks the distinction between the structural systems that support meaning and reasoning and the mechanisms for predicting what will come next in a text on the basis of similar passages in the vast amount of training data that LLMs encode. LLMs excel at prediction, and it is surprising how much can be done by memorization indexed by similarity alone. In this way LLMs can answer abstruse questions, generate text of astonishing fluency on any subject in any style, and produce workable computer code.
However, the limitations of LLMs are becoming increasingly clear. They struggle with sound logical inference, their output may include convincing yet wholly inaccurate information, and they have difficulty generalizing code beyond superficial similarity to examples they have encountered during training. This lecture will present recent research that highlights both the capabilities and the constraints of these systems. Its conclusion will be that the future of natural language processing lies in hybrid approaches that combine the precision and structure of symbolic reasoning with the power of neural computation to recall and retrieve by similarity of content.
