OpenAI reveals human brain-like workings of artificial neurons

OpenAI, the artificial intelligence research lab co-founded by Elon Musk, has discovered multimodal neurons in an AI system that respond much like certain neurons in the human brain.

The revelation comes 15 years after the discovery that the human brain possesses multimodal neurons that respond to clusters of abstract concepts centred around a common high-level theme, rather than any specific visual feature.

The most famous of these was the “Halle Berry” neuron that responds to photographs, sketches, and the text “Halle Berry” — but not other names.

Two months ago, OpenAI announced a neural network called CLIP which efficiently learns visual concepts from natural language supervision.

CLIP can be applied to any visual classification benchmark by simply providing the names of the visual categories to be recognised.
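
To illustrate, here is a minimal sketch of that zero-shot classification workflow using the open-source CLIP package (https://github.com/openai/CLIP). The category names and image path below are illustrative placeholders, not taken from OpenAI's examples.

```python
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Provide the names of the visual categories to be recognised.
categories = ["a spider", "Spider-Man", "a cat", "a dog"]  # illustrative labels
text = clip.tokenize([f"a photo of {c}" for c in categories]).to(device)

# An arbitrary input image; "example.jpg" is a placeholder path.
image = preprocess(Image.open("example.jpg")).unsqueeze(0).to(device)

with torch.no_grad():
    # Similarity between the image and each category prompt,
    # converted to probabilities with a softmax.
    logits_per_image, _ = model(image, text)
    probs = logits_per_image.softmax(dim=-1)

for name, prob in zip(categories, probs[0].tolist()):
    print(f"{name}: {prob:.3f}")
```

Changing the entries of `categories` is all that is needed to repurpose the same model for a different classification benchmark; no retraining is involved.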

It is trained to recognise people and objects within abstract contexts, including sketches, cartoons, and even statues of the objects.

In a new paper, OpenAI researchers have now reported the discovery of multimodal neurons in CLIP.

One such neuron, for example, is a “Spider-Man” neuron (bearing a remarkable resemblance to the “Halle Berry” neuron) that responds to an image of a spider, an image of the text “spider,” and the comic book character “Spider-Man” either in costume or illustrated.
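
The paper identifies such units by examining activations inside CLIP's image encoder. As a rough, hypothetical illustration of the idea, the sketch below uses a PyTorch forward hook to record one channel's activation across images from different modalities; the layer choice, the channel index, and the file names are placeholders, not the actual "Spider-Man" unit reported by OpenAI.

```python
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("RN50x4", device=device)  # a CLIP ResNet variant

activations = {}

def capture(_module, _inputs, output):
    # Record the feature maps produced by the hooked layer.
    activations["out"] = output.detach()

# Hypothetical layer choice: the last residual stage of the image encoder.
hook = model.visual.layer4.register_forward_hook(capture)

NEURON = 550  # illustrative channel index, not the documented unit

for path in ["spider_photo.jpg", "spider_text.png", "spiderman_drawing.jpg"]:
    image = preprocess(Image.open(path)).unsqueeze(0).to(device)
    with torch.no_grad():
        model.encode_image(image)
    # Max over spatial positions gives a single score for this channel.
    score = activations["out"][0, NEURON].max().item()
    print(f"{path}: channel {NEURON} peak activation = {score:.3f}")

hook.remove()
```

A channel that scores high on all three kinds of input, a photograph, rendered text, and an illustration, would be behaving multimodally in the sense described above.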

“Our discovery of multimodal neurons in CLIP gives us a clue as to what may be a common mechanism of both synthetic and natural vision systems – abstraction,” OpenAI said in a blog post on Friday.

“We discover that the highest layers of CLIP organise images as a loose semantic collection of ideas, providing a simple explanation for both the model’s versatility and the representation’s compactness.”

–IANS
