A fresh set of benchmarks is needed to assess artificial intelligence's understanding of the real world.
Artificial intelligence (AI) models have shown impressive performance on law exams, answering multiple-choice, short-answer, and essay questions as well as humans do [1]. Yet they struggle with real-world legal tasks.
Some lawyers have learned this the hard way: several have been fined for filing AI-generated court briefs that misrepresented principles of law and cited non-existent cases.
Chaudhri, a principal scientist at Knowledge Systems Research in Sunnyvale, California, argues that new benchmarks could help specialists better understand AI's capabilities.