When a research team led by Amrit Kirpalani, a medical educator at Western University in Ontario, Canada, evaluated ChatGPT’s performance in diagnosing medical cases back in August 2024, one of the things that surprised them was the AI’s propensity to give well-structured, eloquent but blatantly wrong answers.
Now, in a study recently published in Nature, a different group of researchers tried to explain why ChatGPT and other large language models tend to do this. “To speak confidently about things we do not know is a problem of humanity in a lot of ways. And large language models are imitations of humans,” says Wout Schellaert, an AI researcher at the University of Valencia, Spain, and co-author of the paper.
Early large language models like GPT-3 had a hard time answering simple questions about geography or science. They even struggled with simple arithmetic, such as “how much is 20 + 183.” But in most cases where they couldn’t identify the correct answer, they did what an honest human being would do: They avoided answering the question.