NEW DETAILS ON HUMAN VISION REVEALED THANKS TO THE USE OF AI AND A GRAPHICS ENGINE

The functioning of the brain holds many mysteries that science seeks to solve. One of them is the ability of this powerful organ to create graphic representations of our environment or, in other words, to process in real time all the information captured through our eyes.

An MIT team set out to advance this task, since the research carried out to date had only achieved, through computational vision models, the execution of narrower tasks, such as recognizing objects or faces.

Unlike past attempts, a team led by cognitive researchers from the Massachusetts Institute of Technology (MIT) managed to produce a computer model that captures the human visual system's ability to quickly generate a detailed description of a scene from an image, offering for the first time an account of how the brain achieves this.

This model suggests that when the brain receives a visual stimulus, it performs a series of computations at very high speed, following a procedure similar to that of a computer graphics engine, but run in reverse: instead of turning a description of a 3D scene into a 2D image, it infers the 3D scene description from the image.
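
To make the "graphics engine run in reverse" idea concrete, here is a minimal toy sketch that casts perception as analysis by synthesis: a forward renderer maps scene latents to an image, and inference searches for the latents whose rendering best matches the observation. The language (Python), the function names and the 1-D "image" are illustrative assumptions, not the paper's actual model.

```python
# Toy illustration of inverse graphics: all names here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def render(latents):
    """Toy forward graphics engine: map scene latents (shape width, pose)
    to an 'image' (here just a 1-D signal)."""
    shape, pose = latents
    x = np.linspace(-1.0, 1.0, 64)
    return np.exp(-((x - pose) ** 2) / (0.05 + shape ** 2))

def invert(image, n_samples=5000):
    """Run the engine 'in reverse' by analysis by synthesis: search for
    the latents whose rendering best explains the observed image."""
    best_z, best_err = None, np.inf
    for _ in range(n_samples):
        z = rng.uniform([-0.5, -1.0], [0.5, 1.0])  # candidate (shape, pose)
        err = np.sum((render(z) - image) ** 2)
        if err < best_err:
            best_z, best_err = z, err
    return best_z

true_z = np.array([0.2, 0.4])
observed = render(true_z)
print("recovered latents:", invert(observed))  # close to [0.2, 0.4]
```

Search like this is slow by design; the appeal of the model described below is that a neural network can produce the same kind of inversion in a single fast feedforward pass.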

Reaching this point was possible thanks to knowledge accumulated over decades, a period in which numerous detailed studies of the brain's visual system have sought to understand how the light entering through the eye's retina is transformed into cohesive scenes. Thanks to those efforts, and amid the current rise of artificial intelligence, researchers have been able to develop computer models that emulate some aspects of this system.

Building on this investigation, the team constructed a special type of deep neural network model to show how a neural hierarchy can quickly infer the underlying properties of a scene, such as a specific face. Unlike most studies of this kind, the AI was not trained with labelled data indicating the class of the object in each image; instead, it was trained with a model that reflects the brain's internal representations.
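
As a hedged illustration of this training regime, the sketch below reuses the toy renderer from the earlier example and fits a recognition model whose targets are the graphics latents themselves, not class labels. The linear least-squares model is a deliberate stand-in for the deep network and is not the paper's architecture.

```python
# Amortized inverse graphics, sketched with a linear recognition model.
import numpy as np

rng = np.random.default_rng(1)

def render(z):
    """Toy graphics engine (as in the earlier sketch): latents -> 1-D 'image'."""
    shape, pose = z
    x = np.linspace(-1.0, 1.0, 64)
    return np.exp(-((x - pose) ** 2) / (0.05 + shape ** 2))

# Build training data by running the graphics engine forward:
# the targets are the latents themselves, not object-class labels.
Z = rng.uniform([-0.5, -1.0], [0.5, 1.0], size=(2000, 2))
X = np.stack([render(z) for z in Z])

# Fit the recognition model by least squares; it learns a fast
# feedforward mapping from an image back to approximate latents.
W, *_ = np.linalg.lstsq(X, Z, rcond=None)

test_z = np.array([0.1, -0.3])
print("inferred latents:", render(test_z) @ W)  # one feedforward pass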

In this way, the model learned to reverse the steps a computer graphics generator performs to produce faces. Such programs start from a three-dimensional representation of an individual face and turn it into a 2D image, as seen from the viewer's particular point of view and set against a random background. The researchers' theory holds that the brain's visual system may do something very similar when it dreams or conjures up a mental image of a face.
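
For intuition, here is a minimal sketch of such a forward pipeline under the description above: a 3-D face representation is rotated to a given viewpoint, projected to 2-D, and composited over a random background. The point-cloud "face" and all function names are hypothetical simplifications, not the graphics program used in the study.

```python
# Toy forward pipeline: 3-D face latents -> posed 2-D image + background.
import numpy as np

rng = np.random.default_rng(2)

def project(points_3d, viewpoint):
    """Rotate the 3-D points by a yaw angle (the viewer's point of view)
    and drop the depth axis -- a stand-in for camera projection."""
    c, s = np.cos(viewpoint), np.sin(viewpoint)
    R = np.array([[c, 0.0, -s], [0.0, 1.0, 0.0], [s, 0.0, c]])
    return (points_3d @ R.T)[:, :2]

def composite(points_2d, size=32):
    """Rasterize the projected points onto a randomly textured background."""
    img = rng.uniform(0.0, 0.2, size=(size, size))      # random background
    idx = np.clip(((points_2d + 1.0) * size / 2).astype(int), 0, size - 1)
    img[idx[:, 1], idx[:, 0]] = 1.0                     # draw the face points
    return img

face_3d = 0.3 * rng.normal(size=(100, 3))               # toy 3-D face "shape"
image = composite(project(face_3d, viewpoint=0.5))      # one rendered 2-D view
print(image.shape)  # (32, 32)
```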

This experiment does not fully emulate the complexity of human vision, but it is a substantial step beyond what the scientific community had achieved so far. Following this advance, the researchers plan to keep refining the technology: to explain how the brain handles other kinds of scenes, to develop higher-performance AI systems and, with all the relevant evidence gathered, to eventually decipher completely how the brain accomplishes vision.

Ilker Yildirim, a former MIT researcher who is now an assistant professor of psychology at Yale University, is the lead author of the article. Josh Tenenbaum, professor of computational cognitive science at MIT, and Winrich Freiwald, professor of neurosciences and behaviour at Rockefeller University, are the senior authors of the study. Mario Belledonne, a graduate student at Yale, also participated as an author. The full study was published in the journal Science Advances, where its full text (in English) can be consulted.
