
Inspiration or Imitation: How Closely Should We Copy Biological Systems?



Neuromorphic computing was born in the 1980s in Carver Mead’s lab, when Mead described the first analog silicon retina. In Mead’s day, “neuromorphic” meant emulating biological neural processes in silicon, copying them as closely as possible. But nowadays the word has a broader meaning. Different approaches to biology-inspired sensing and computing are beginning to proliferate, and some are only vaguely brain-inspired. With Moore’s law slowing and accelerated computing growing, neuromorphic sensing and computing are gaining attention as we look towards technologies that will enable the next frontier of silicon.

A recent panel discussion at the Embedded Vision Summit addressed both the contemporary meaning of neuromorphic and the balance between taking inspiration from nature and copying it directly. While all neuromorphic technologies are based on biomimicry — taking inspiration from, or directly copying, biological systems and structures — the panelists disagreed on the right balance between inspiration and imitation.

Steve Teig (Source: Embedded Vision Summit)

“Neuromorphic is used to mean dozens of different things,” said Steve Teig, CEO of AI accelerator chip company Perceive. “It doesn’t really matter what the morph or shape of something is, it matters what function it has, so I don’t see either benefit or liability in trying to resemble a neuron.”

Teig cited the classic example of bird flight having little relevance to the design of modern airplanes.

“We want something that does the same thing a bird does, but it doesn’t have to do it in the same way a bird does,” Teig said. “I don’t see any intrinsic advantage in trying to mimic how the bird flies in [aircraft], as long as you get flying at the end.”

James Marshall, chief scientific officer at Opteran and professor of theoretical and computational biology at the University of Sheffield, said that the company takes a very wide view of the definition of neuromorphic.

“At Opteran, we’ve broadened the definition of neuromorphic even further to include algorithms — we reverse engineer how real brains work,” said Marshall.

James Marshall (Source: Embedded Vision Summit)

Opteran uses standard cameras and standard digital compute hardware in its robotics systems (no event-based cameras or spiking neural networks).

“For us, what’s important is getting the information processing that real brains do, and reproducing that in some contemporary silicon technologies,” he added.

Garrick Orchard, research scientist at Intel Labs, agreed that the meaning of the word neuromorphic has evolved since it was coined in the 1980s.

“The neuromorphic term is so broad now that it means very little,” he said.

Intel Labs is the birthplace of Intel’s neuromorphic computing offering, Loihi. Orchard said Intel Labs’ approach is to try to understand the principles at work in biology and apply them to silicon where it makes sense to do so.

“What principles that we see in biology are really important, for us to achieve something better in silicon?” said Orchard. “There may be [biological] things that do offer advantages, but they may not translate well to silicon and therefore we shouldn’t force the silicon to do things that may make something worse.”

Ryad Benosman, professor at the University of Pittsburgh and adjunct professor at the CMU Robotics Institute, said that the right balance may not be struck until we have a full understanding of how biological brains work.

“Historically, neuromorphic was about replicating neurons in silicon, and it has evolved a lot,” said Benosman. “But nobody really knows how the brain works — we don’t even know how a real neuron works.”

Ryad Benosman (Source: Embedded Vision Summit)

Benosman pointed out that before the Hodgkin–Huxley mathematical model of the giant squid neuron (1952), there were many different ideas about how neurons worked, ideas that effectively disappeared at that point. In his view, the way neurons work is still very much an open question.
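For context, the Hodgkin–Huxley model describes the neuron’s membrane voltage V as a balance of sodium, potassium and leak currents. In its standard textbook form:

C_m \frac{dV}{dt} = -\bar{g}_{\mathrm{Na}}\, m^3 h\,(V - E_{\mathrm{Na}}) - \bar{g}_{\mathrm{K}}\, n^4\,(V - E_{\mathrm{K}}) - \bar{g}_{L}\,(V - E_{L}) + I_{\mathrm{ext}}

where the gating variables m, h and n each obey their own voltage-dependent differential equation. Engineered spiking neurons typically replace all of this biophysical detail with a much simpler integrate-and-fire abstraction.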

“Neuromorphic is impressive, it’s cool, but it’s very much tied to how much we know of the brain,” Benosman said. “We agree that before we get there, there are many stages of what we can gather from [how the brain works] and what we can build in this era.”

Perceive’s Steve Teig disagreed, arguing that a complete understanding of biology isn’t required to improve neuromorphic systems, since we don’t need to copy biological systems exactly.

“Suppose we have perfect knowledge of how the retina works — it’s still biological evolution that ended up with the retina,” he said. “The retina had all kinds of constraints that are not identical to the constraints we have in building technology now. So there might be benefits in mimicking the other things that the retina is spectacularly good at, but not per se because the retina does this; that’s not an appropriate engineering strategy.”

Opteran’s James Marshall raised the point that not all brains work in the same way.

“We don’t really understand if spiking is important,” Marshall said. “There are actually lots of different kinds of neuron types, they’re not all integrate and fire — in insects, you have chemical synapses, continuous action potentials, and in early visual processing that’s really important.”

Marshall explained that Opteran doesn’t use spiking in its algorithms — “just simple linear filters, but combined in a clever way, like so much of biology.”
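As a rough illustration of that distinction (a minimal sketch, not Opteran’s actual algorithms), the code below contrasts a textbook leaky integrate-and-fire neuron, which communicates only through discrete spikes, with a plain first-order linear filter, which passes on a continuous value at every time step:

import numpy as np

def leaky_integrate_and_fire(inputs, tau=0.9, threshold=1.0):
    # Textbook LIF neuron: leaky integration, binary spike on threshold, then reset.
    v, spikes = 0.0, []
    for x in inputs:
        v = tau * v + x              # integrate the input with leak
        if v >= threshold:           # threshold crossing emits a spike
            spikes.append(1)
            v = 0.0                  # reset the membrane potential
        else:
            spikes.append(0)
    return spikes                    # sparse, binary output

def leaky_linear_filter(inputs, tau=0.9):
    # First-order linear filter (exponential moving average): no spikes at all.
    v, out = 0.0, []
    for x in inputs:
        v = tau * v + (1.0 - tau) * x
        out.append(v)                # dense, graded output every step
    return out

signal = np.random.rand(20)
print(leaky_integrate_and_fire(signal))
print(leaky_linear_filter(signal))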

Intel Labs’ Garrick Orchard took the opposite view. Intel’s Loihi chip is designed to accelerate spiking neural networks with asynchronous digital electronics.

“In our lab, we try to look at what principles we see in biological computation that we think are key principles, and apply them where they make sense to silicon, and spiking is one of those principles, we think,” Orchard said. “But you have to think about what properties of a spike make sense and what don’t.”

Garrick Orchard (Source: Embedded Vision Summit)

While Intel’s first-generation Loihi chip used binary spikes, mirroring biology, where a spike’s information is encoded entirely in its timing, the second-generation Loihi chip has a programmable neuron that can accept different spike magnitudes.

If the spike magnitude isn’t critical, how do we know what is important about spikes?

“[Spikes] really help us with the idea of sparsity,” Orchard said. “If you have a bunch of neurons that are only communicating very sparsely with each other, you can imagine there’s several advantages. You’re shuttling less data around and your buses have less traffic flowing over them, which can reduce the latency as things are flying around the chip, and we think that in this area there are significant advantages to operating within the spiking domain.”
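A back-of-the-envelope sketch makes that saving concrete (illustrative only; this is not how Loihi’s mesh actually encodes traffic): a dense layer ships every activation at every time step, while an event-driven layer only sends a message for each neuron that actually fires.

import numpy as np

np.random.seed(0)
n_neurons, n_steps = 1000, 100
spike_prob = 0.02                         # sparse activity: ~2% of neurons fire per step

# Dense scheme: every neuron sends a value at every time step.
dense_messages = n_neurons * n_steps

# Event-driven scheme: only spiking neurons put traffic on the bus.
spikes = np.random.rand(n_steps, n_neurons) < spike_prob
event_messages = int(spikes.sum())        # one (neuron id, timestamp) event per spike

print("dense: ", dense_messages)          # 100000 messages
print("events:", event_messages)          # roughly 2000 messages at 2% activity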

What about using analog compute — the brain is an analog computer, after all?

Orchard pointed out that we could argue about where the line is between analog and digital — if spikes’ magnitude is not important, they can be represented by 0 or 1.

Loihi is digital in part due to Intel’s expertise in digital electronics, he added.

“We see a significant advantage to being able to use our latest technology for manufacturing, to go down to really small node sizes and still get digital circuits to work very reliably, so there’s a significant advantage for us there in sticking to the digital domain and coming up with repeatable computations, which is of course very helpful when you’re debugging things,” he said.

Opteran’s James Marshall said that tradeoffs imposed by the constraints of biology may make spikes the optimal solution for biological systems, but that this doesn’t necessarily translate to silicon, and the same applies to analog computing.

“If you’re taking the brain as a reference, the brain doesn’t just do information processing, it also has to keep itself alive,” Marshall pointed out. “You don’t want to reproduce the details of neurons that are to do with housekeeping… living things have to recycle chemicals and all kinds of things to avoid dying, which is fundamental, and completely independent of the information processing components.”

Perceive’s Steve Teig is more open to analog hardware.

“It is possible that there’s value in analog, in that the average power that you spend doing analog can be significantly lower than that of digital,” Teig said. “I personally don’t have religion either for or against analog. I think that it’s an interesting form of computation. To me, this is all about stepping back to say what do you want your computer to do? What do you want your interconnect to look like? And then design something that’s like that.”

Ryad Benosman came out in favor of asynchronous digital approaches to neuromorphic computing, such as Intel’s.

“For computation, if you want to make products today… I can count on one hand analog products that you have and can use, it’s unsustainable,” he said. “I think what you need is to be asynchronous. Get rid of your clocks… I think that’s the way to go in the future.”

Overall, the panelists agreed that it isn’t necessary to copy biology blindly; instead, we should borrow the parts that are useful to us. There remains some disagreement, however, about exactly which parts those are.

“We have no idea how it is that we model the world and teach ourselves to learn and absorb information,” Steve Teig said. “To me, that thread, while scientifically interesting, has nothing to do with whether event-based hardware is a good thing, whether spikes are a good thing, or whether analog is a good thing.”




