A version of the human ability to apply new vocabulary in flexible ways has been achieved by a neural network.

Scientists have created a neural network with the human-like ability to make generalizations about language¹. The artificial intelligence (AI) system performs about as well as humans at folding newly learned words into an existing vocabulary and using them in fresh contexts, which is a key aspect of human cognition known as systematic generalization.

The researchers gave the same task to the AI model that underlies the chatbot ChatGPT, and found that it performs much worse on such a test than either the new neural net or people, despite the chatbot’s uncanny ability to converse in a human-like manner.

The work, published on 25 October in Nature, could lead to machines that interact with people more naturally than do even the best AI systems today. Although systems based on large language models, such as ChatGPT, are adept at conversation in many contexts, they display glaring gaps and inconsistencies in others.

The neural network’s human-like performance suggests there has been a “breakthrough in the ability to train networks to be systematic”, says Paul Smolensky, a cognitive scientist who specializes in language at Johns Hopkins University in Baltimore, Maryland.

Systematic generalization is demonstrated by people’s ability to effortlessly use newly acquired words in new settings. For example, once someone has grasped the meaning of the word ‘photobomb’, they will be able to use it in a variety of situations, such as ‘photobomb twice’ or ‘photobomb during a Zoom call’. Similarly, someone who understands the sentence ‘the cat chases the dog’ will also understand ‘the dog chases the cat’ without much extra thought.

But this ability does not come innately to neural networks, a method of emulating human cognition that has dominated artificial-intelligence research, says Brenden Lake, a cognitive computational scientist at New York University and co-author of the study. Unlike people, neural nets struggle to use a new word until they have been trained on many sample texts that use that word. AI researchers have sparred for nearly 40 years over whether neural networks could ever be a plausible model of human cognition if they cannot demonstrate this type of systematicity.

To attempt to settle this debate, the authors first tested 25 people on how well they deploy newly learnt words in different situations. The researchers ensured that the participants would be learning the words for the first time by testing them on a pseudo-language consisting of two categories of nonsense words. ‘Primitive’ words such as ‘dax’, ‘wif’ and ‘lug’ represented basic, concrete actions such as ‘skip’ and ‘jump’. More abstract ‘function’ words such as ‘blicket’, ‘kiki’ and ‘fep’ specified rules for using and combining the primitives, resulting in sequences such as ‘jump three times’ or ‘skip backwards’.

Participants were trained to link each primitive word with a circle of a particular colour, so a red circle represents ‘dax’, and a blue circle represents ‘lug’. The researchers then showed the participants combinations of primitive and function words alongside the patterns of circles that would result when the functions were applied to the primitives. For example, the phrase ‘dax fep’ was shown with three red circles, and ‘lug fep’ with three blue circles, indicating that ‘fep’ denotes an abstract rule to repeat a primitive three times.

Finally, the researchers tested participants’ ability to apply these abstract rules by giving them complex combinations of primitives and functions. They then had to select the correct colour and number of circles and place them in the appropriate order.

**Cognitive benchmark**

As predicted, people excelled at this task: they chose the correct combination of coloured circles about 80% of the time, on average. When they did make errors, the researchers noticed that these followed a pattern that reflected known human biases.

Next, the researchers trained a neural network to do a task similar to the one presented to participants, by programming it to learn from its mistakes.
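The circle-placement rules of the pseudo-language described above can be sketched as a tiny interpreter. This is an illustrative toy only: the study’s actual grammar is richer, and the colour assigned to ‘wif’ here, along with the exact behaviour of ‘fep’, are assumptions for the sake of the example.

```python
# Toy sketch of the study's pseudo-language (illustrative, not the paper's
# actual grammar). Primitive words map to coloured circles; 'fep' is assumed
# here to mean "repeat the preceding primitive three times".
PRIMITIVES = {"dax": "RED", "lug": "BLUE", "wif": "GREEN"}  # 'wif' colour assumed


def interpret(phrase: str) -> list[str]:
    """Translate a phrase of primitives and function words into circle colours."""
    out: list[str] = []
    for word in phrase.split():
        if word in PRIMITIVES:
            out.append(PRIMITIVES[word])
        elif word == "fep":
            # Abstract rule: triple the circle produced by the last primitive.
            out.extend([out[-1]] * 2)
        else:
            raise ValueError(f"unknown word: {word}")
    return out


print(interpret("dax fep"))  # → ['RED', 'RED', 'RED']
print(interpret("lug fep"))  # → ['BLUE', 'BLUE', 'BLUE']
```

The point of the task is that a learner who has seen ‘dax fep’ can apply ‘fep’ to any newly learned primitive, which is exactly the systematic generalization the study probes.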