ECC Seminar
17 October 2024

ECC Seminar: José Hernández-Orallo

Our Early Career Community Seminars are about to begin!

We’re thrilled to announce that José Hernández-Orallo (Universitat Politècnica de València, and Leverhulme Centre for the Future of Intelligence), will be the speaker at our first seminar on Wednesday, 9 October at the Syndics Room, 17 Mill Lane.

José will introduce his recent study published in Nature. He will dive into the fascinating world of large language models (LLMs) and how their rapid scaling and evolution have led to incredible advancements. But here's the twist: while LLMs are getting bigger and more sophisticated, their reliability isn't always improving.

Abstract:

The discrepancy between what is easy and difficult for humans and large language models (LLMs) has been well known since the first LLMs solved complex problems yet failed at very simple ones. This discordance with human difficulty expectations strongly affects the reliability of these models, as users cannot identify a safe operating condition where the model is expected to be correct. In 2022, Ilya Sutskever suggested that “perhaps over time that discrepancy will diminish”. With the extensive use of scaling up and shaping up (such as RLHF) in newer generations of LLMs, we question whether this is the case. In a recent Nature paper, we examined several LLM families and showed that instances that are easy for humans are usually easy for the models. However, scaled-up, shaped-up models do not secure areas of low difficulty in which either the model does not err or human supervision can spot the errors. We also found that early models often avoid user questions, whereas scaled-up, shaped-up models tend to give apparently sensible yet wrong answers much more often, including errors on difficult questions that human supervisors frequently overlook. Finally, we disentangled whether this behaviour arises from scaling up or shaping up, and discovered new scaling laws showing that larger models become more incorrect and especially more ultracrepidarian, operating beyond their competence. These findings highlight the need for a fundamental shift in the design and development of general-purpose artificial intelligence, particularly in high-stakes areas where a predictable distribution of errors is paramount.

Full article here.

José Hernández-Orallo

Professor at the Universitat Politècnica de València

José Hernández-Orallo is Professor at the Universitat Politècnica de València, Spain, and Senior Research Fellow at the Leverhulme Centre for the Future of Intelligence, University of Cambridge, UK. He received a B.Sc. and an M.Sc. in Computer Science from UPV, partly completed at the École Nationale Supérieure de l'Électronique et de ses Applications (France), and a Ph.D. in Logic and Philosophy of Science, with an extraordinary doctoral award, from the University of Valencia. His academic and research activities have spanned several areas of artificial intelligence, machine learning, data science and intelligence measurement, with a focus on a more insightful analysis of the capabilities, generality, progress, impact and risks of artificial intelligence. He has published five books and more than two hundred journal articles and conference papers on these topics. His research on machine intelligence evaluation has been covered by several popular outlets, such as The Economist, New Scientist and Nature. He continues to explore a more integrated view of the evaluation of natural and artificial intelligence, as advocated in his book "The Measure of All Minds" (Cambridge University Press, 2017, PROSE Award 2018). He is a member of AAAI, CLAIRE and ELLIS, and a EurAI Fellow.