Although Soul Machines and its backers are counting on corporate accounts to bring in revenue, building attractive faces for customer service chatbots isn’t exactly what Sagar set out to do when he left the movie business.

His real goal has always been modeling the human brain itself, and in between churning out those brand ambassadors, he and his team have been tinkering away at that original project. The result is a virtual toddler that Sagar says represents an altogether different approach to artificial intelligence than the machine learning algorithms now steering autonomous vehicles, creating deepfakes, and trading stocks.

BabyX appears to be around 18 months old. Her eyes are wide and curious, her hair flaxen. “Hi, baby. Hi, baby. Hellooo,” Sagar says, as the child on the screen turns to face him, catches his eye, and smiles. Modeled on a stereoscopic capture of Sagar’s own daughter, BabyX was created shortly after Sagar opened his lab at the University of Auckland in 2012.

He punches some keys and fiddles with a series of sliders, and the toddler’s skin fades away, revealing the digital layers underneath: a musculature, a skeleton, a simple respiratory system, and an elaborate model of the various regions of the brain — every element painstakingly rendered using some of the same high-end graphics techniques Sagar developed for the movies. Each system has been built separately — “like Lego bricks,” he explains — enabling the team to add further layers of complexity as they go.

“What you’re seeing now is a representation in an anatomical form of what’s going on in the computational models,” Sagar explains as we gaze at the screen. “Oh, she’s waking up.”

“Let’s see if she can pick up a rhythm,” he says, clapping his hands in a steady beat, as BabyX begins to shimmy in her high chair. “That’s her vestibular system, which has got to do with balance,” he explains. Then he stops clapping, and she gradually slows down, a quizzical expression on her face. “I wonder what she’s giggling about,” he says.

Sagar shows me the visualization of the baby’s brain stem, along with her thalamus and hypothalamus, her basal ganglia, and adrenal system. Zooming in deeper still, he reveals a fine network of millions of nerves and synapses, lighting up as the toddler responds to various stimuli.

Examining the Soul Machines brain model in such exquisite visual detail, I have to remind myself not to take the labels too literally. These finely wrought representations of various anatomical structures aren’t real. They’re collections of algorithms, represented in graphical form. But in a simplified way, they work on the same general principles as their biological counterparts, and when stitched together, they create a rudimentary facsimile of certain simple mental processes.

Soul Machines claims that its “Digital Brain,” as it’s known, can do more than smile: It’s a system that “can sense its world, learn, adapt, make decisions, act, and communicate interactively” through both nonverbal and spoken language.

That’s a pretty extravagant claim, but Sagar is eager to demonstrate.

BabyX can’t play chess, Go, or StarCraft II, like the machine-learning A.I. developed by Alphabet’s DeepMind, and she can barely babble. But with the flick of a mouse, Sagar can place a transparent, internet-connected touch screen in front of her face à la Minority Report, allowing her to noodle away on a web-based piano and even manipulate the simple games on sesamestreet.com. Having been programmed to seek novelty — her curiosity rewarded with bursts of digital dopamine — she “enjoys” playing the games. She also gurgles when looked at or caressed via a touch pad, and gets stressed — her “cortisol” levels visibly spiking — when ignored for too long. And according to Sagar, she can learn, form memories, identify patterns, and even make predictions about future actions.
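
Soul Machines has not published how these drives are wired together, but the idea Sagar describes (novelty as reward, neglect as stress) can be illustrated with a toy loop like the sketch below. The class and variable names are hypothetical and the constants arbitrary; it is a conceptual illustration, not the company’s code.

```python
# Illustrative sketch only: a toy novelty-driven reward loop. All names
# (VirtualInfant, perceive, tick) and numbers are hypothetical placeholders.
import time
from collections import Counter

class VirtualInfant:
    def __init__(self):
        self.seen = Counter()        # how often each stimulus has been encountered
        self.dopamine = 0.5          # "reward" signal, rises with novelty
        self.cortisol = 0.2          # "stress" signal, rises when ignored
        self.last_interaction = time.time()

    def perceive(self, stimulus: str) -> None:
        novelty = 1.0 / (1 + self.seen[stimulus])    # unfamiliar stimuli are more novel
        self.seen[stimulus] += 1
        self.dopamine = min(1.0, self.dopamine + 0.3 * novelty)  # novelty is rewarding
        self.cortisol = max(0.0, self.cortisol - 0.1)            # attention reduces stress
        self.last_interaction = time.time()

    def tick(self) -> None:
        # Called periodically: being ignored slowly raises "cortisol",
        # and "dopamine" decays back toward baseline.
        if time.time() - self.last_interaction > 30:
            self.cortisol = min(1.0, self.cortisol + 0.05)
        self.dopamine = max(0.0, self.dopamine - 0.02)

baby = VirtualInfant()
baby.perceive("piano_game")   # a new game: large novelty bonus
baby.perceive("piano_game")   # repetition: smaller bonus
```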

He picks up a toy spider on his desk and waves it in front of the Mac’s webcam. “Baby, oooh spider! Spider!” he exclaims, as her expression grows worried. She’s programmed to identify alarm in a human voice, and as her stress levels increase, she is “learning,” Sagar says, to associate the image of a spider with a fear response. He has opened up an array of tiny windows on the screen — each monitoring a different aspect of BabyX’s “brain activity” in real time: the levels of her various neurotransmitters, visual and auditory processing, metasensory processing, object recognition, episodic memory, and on, and on.

He puts down the spider and picks up a toy dinosaur. “Oooo, nice dino,” he says soothingly. Her eyes follow the toy. Her “cortisol” levels drop, “serotonin” rises, and she starts to smile.
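
The spider-and-dinosaur demonstration reads like simple associative conditioning: an object repeatedly paired with an alarmed or soothing voice takes on that emotional charge. A rough sketch of that idea, with invented names and made-up constants rather than anything from Soul Machines, might look like this:

```python
# Illustrative sketch of associative "emotional" learning, not Soul Machines' model.
# An object seen together with an alarmed (or soothing) voice gradually acquires
# that valence, which in turn drives the toy "cortisol"/"serotonin" readouts.

class AffectiveMemory:
    def __init__(self, learning_rate: float = 0.3):
        self.valence = {}            # stimulus -> learned valence in [-1 (fear), +1 (calm)]
        self.lr = learning_rate

    def observe(self, stimulus: str, voice_valence: float) -> None:
        # Nudge the stored valence toward the valence carried by the caregiver's voice.
        old = self.valence.get(stimulus, 0.0)
        self.valence[stimulus] = old + self.lr * (voice_valence - old)

    def react(self, stimulus: str) -> dict:
        v = self.valence.get(stimulus, 0.0)
        return {
            "cortisol": max(0.0, -v),    # negative valence pushes the stress signal up
            "serotonin": max(0.0, v),    # positive valence pushes the calm signal up
        }

memory = AffectiveMemory()
memory.observe("spider", voice_valence=-0.9)    # "Oooh, spider!" in an alarmed tone
memory.observe("dinosaur", voice_valence=+0.8)  # "Oooo, nice dino," said soothingly
print(memory.react("spider"))     # high "cortisol", little "serotonin"
print(memory.react("dinosaur"))   # the reverse
```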

A computerized baby, even one that can be trained to detect patterns and associate words, images, and its own internal states, may seem like little more than an elaborate toy, and indeed, Sagar hopes to eventually release a home version, which he likens to “a super Tamagotchi.” But he sees more profound implications in the technology.

“What I’m trying to build is essentially the foundation of a teachable sentient machine,” he says. “A machine that learns through experiences. Globally, this is probably the most crazy attempt to try to link emotion, behavior, intelligence, planning, memory, all these elements together.”

Several computer scientists contacted by OneZero agree — it is crazy. “Different groups in computational neuroscience are making models of different functions, but in constrained and narrow domains,” notes Thomas Serre, an associate professor in cognitive, linguistic, and psychological sciences at Brown University. “What [Soul Machines] claims to be doing is synthesizing an entire brain, which literally nobody does. The graphics are very realistic, and they are able to synthesize behavior — and different facial expressions and poses — better than most things I’ve seen. But it seems like the main contribution is in the rendering side of things rather than the intelligence side.”

“Mark is highly dedicated, motivated, and creative, and anything he does is going to look really good,” says Stacy Marsella, a professor at Northeastern and the University of Glasgow who specializes in computational models of human behavior. “But what’s really hard from a neuroscience perspective is to figure out how these functions overlap. A lot of people worry about subsets of these functions. He’s trying to model them all, at multiple levels, and that’s seriously difficult.”

Sagar acknowledges as much. “With the brain so ridiculously complicated, we’re just doing pieces of things that have a direct impact on behavior,” he says. “I like to think of it as a large functioning sketch, really, because there’s so much debate in these areas. Nobody can claim anything, really.”

Most of what we call artificial intelligence involves programs, called neural networks, that can be trained to recognize patterns in enormous amounts of data. These sets of algorithms can become quite effective at certain tasks — for instance, learning to correctly identify a tumor in a medical scan or recognize a stop sign for a self-driving car. By contrast, Sagar seems less interested in achieving a particular end result than in mimicking biological processes and seeing what happens.
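
As a rough illustration of that conventional approach, here is a minimal, self-contained example of a single-unit network learning a pattern from labeled examples. The synthetic data and parameters are placeholders, standing in for the scans and road signs mentioned above.

```python
# Minimal sketch of the conventional approach the article contrasts with BabyX:
# a single logistic unit trained by gradient descent to recognize a labeled pattern.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))               # 200 examples, 2 features each
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # label: a simple linear pattern

w, b = np.zeros(2), 0.0
for _ in range(500):                         # gradient descent on cross-entropy loss
    p = 1 / (1 + np.exp(-(X @ w + b)))       # predicted probabilities
    grad_w = X.T @ (p - y) / len(y)
    grad_b = np.mean(p - y)
    w -= 0.5 * grad_w
    b -= 0.5 * grad_b

accuracy = np.mean((p > 0.5) == y)
print(f"training accuracy: {accuracy:.2f}")  # the unit has learned the pattern
```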

“That’s a hard path to go down,” Marsella says, “but it’s also a very interesting path.”

“I think it’s one of the most important and interesting projects on the planet right now,” notes Peter Huynh, a partner and co-founder of the VC firm Qualgro, who has done extensive research on A.I. companies. “Deep learning is great for things where you can feed in lots of data,” he explains, “but it’s not great for advancing artificial general intelligence.” AGI is the term of art for a machine that can truly think for itself. “I’m not saying Soul Machines is AGI, but I believe the building blocks will come from what Mark is doing. They’re moving us closer toward creating AGI in the future.”


