With this AI chip, you won’t have to buy every new cellphone model anymore.
It may soon be time to stop rushing out for every new smartphone model that hits the market. Imagine a more sustainable future, where cellphones, smartwatches, and other wearable devices don’t have to be shelved or discarded for a newer model. MIT engineers have taken a step toward that modular vision with a LEGO-like design for a stackable, reconfigurable artificial intelligence chip. The design comprises alternating layers of sensing and processing elements, along with light-emitting diodes (LEDs) that allow the chip’s layers to communicate optically. Other modular chip designs employ conventional wiring to relay signals between layers; such intricate connections are difficult, if not impossible, to sever and rewire, which makes those stackable designs effectively non-reconfigurable.
Taking a Look at the Design
The MIT design uses light, rather than physical wires, to transmit information through the chip. The chip can therefore be reconfigured, with layers that can be swapped out or stacked on, for instance, to add new sensors or updated processors. “You can add as many computing layers and sensors as you want, such as for light, pressure, and even smell,” says MIT postdoc Jihoon Kang. “We call this a LEGO-like reconfigurable AI chip because it has unlimited expandability depending on the combination of layers.” The researchers are eager to apply the design to edge computing devices with self-sufficient sensors and other electronics that work independently from any central or distributed resources such as supercomputers or cloud-based computing. “As we enter the era of the internet of things based on sensor networks, demand for multifunctioning edge-computing devices will expand dramatically,” says Jeehwan Kim, associate professor of mechanical engineering at MIT. “Our proposed hardware architecture will provide high versatility of edge computing in the future.”
The team’s design is currently configured to carry out basic image-recognition tasks. It does so via layering of image sensors, LEDs, and processors made from artificial synapses — arrays of memory resistors, or “memristors,” that the team previously developed, which together function as a physical neural network, or “brain-on-a-chip.” Each array can be trained to process and classify signals directly on a chip, without the need for external software or an Internet connection.
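Conceptually, a memristor array of this kind behaves like an analog matrix-vector multiply: input voltages applied across the rows combine with each memristor’s conductance, and the currents summed along the columns are the network’s outputs. Here is a minimal sketch of that idea in plain Python; the conductance values and input voltages are invented for illustration and are not taken from the MIT work.

```python
# Conceptual sketch of a memristor crossbar acting as an analog
# matrix-vector multiply. All values are illustrative, not device data.

def crossbar_output(conductances, voltages):
    """Each column current is the sum of V * G down that column
    (Ohm's law per device, Kirchhoff's current law per column)."""
    n_cols = len(conductances[0])
    return [
        sum(v * row[col] for v, row in zip(voltages, conductances))
        for col in range(n_cols)
    ]

# 3 input rows x 2 output columns: conductances in arbitrary units
G = [
    [0.5, 0.1],
    [0.2, 0.9],
    [0.7, 0.3],
]
V = [1.0, 0.0, 1.0]  # input voltages encoding a signal

print(crossbar_output(G, V))  # summed column currents
```

Training such an array means adjusting the conductances so that the column currents discriminate between input patterns, which is why each array can classify signals without external software.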
In their new chip design, the researchers paired image sensors with artificial synapse arrays, each of which they trained to recognize certain letters — in this case, M, I, and T. While a conventional approach would be to relay a sensor’s signals to a processor via physical wires, the team instead fabricated an optical system between each sensor and artificial synapse array to enable communication between the layers, without requiring a physical connection. “Other chips are physically wired through metal, which makes them hard to rewire and redesign, so you’d need to make a new chip if you wanted to add any new function,” says MIT postdoc Hyunseok Kim. “We replaced that physical wire connection with an optical communication system, which gives us the freedom to stack and add chips the way we want.”
The team’s optical communication system consists of paired photodetectors and LEDs, each patterned with tiny pixels. The photodetectors form an image sensor for receiving data, and the LEDs transmit data to the next layer. As a signal (for instance, an image of a letter) reaches the image sensor, the image’s light pattern encodes a certain configuration of LED pixels, which in turn stimulates another layer of photodetectors and an artificial synapse array, which classifies the signal based on the pattern and strength of the incoming LED light.
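The relay described above can be caricatured in a few lines of Python: the sensor’s pixel readings set a pattern of LED intensities, and the next layer’s photodetectors read that pattern back as their input. The threshold, coupling efficiency, and pixel values below are all hypothetical; the real chip performs this step in analog hardware, not software.

```python
# Toy model of the optical inter-layer link: sensor readings drive an
# LED pixel pattern, which the next layer's photodetectors receive.
# The threshold and coupling factor are hypothetical, for illustration.

LED_THRESHOLD = 0.5

def sensor_to_led(pixel_readings):
    """Encode sensor output as an on/off LED pixel pattern."""
    return [1.0 if p >= LED_THRESHOLD else 0.0 for p in pixel_readings]

def photodetect(led_pattern, coupling=0.8):
    """Photodetectors in the next layer receive the LED light,
    attenuated by an assumed optical coupling efficiency."""
    return [coupling * intensity for intensity in led_pattern]

image = [0.9, 0.2, 0.7, 0.1]              # raw sensor readings
received = photodetect(sensor_to_led(image))
print(received)                            # input to the synapse array
```

The point of the sketch is the interface, not the numbers: because the layers only exchange light patterns, a layer can be swapped out without rewiring anything, as long as its LED and photodetector pixels line up.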
Resembling the Blocks Pattern
The team fabricated a single chip, with a computing core measuring about 4 square millimeters, or about the size of a piece of confetti. The chip is stacked with three image recognition “blocks,” each comprising an image sensor, optical communication layer, and artificial synapse array for classifying one of three letters, M, I, or T. They then shone a pixelated image of random letters onto the chip and measured the electrical current that each neural network array produced in response. (The larger the current, the larger the chance that the image is indeed the letter that the particular array is trained to recognize.)
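The decision rule in that parenthetical amounts to an argmax over the three blocks’ outputs: whichever letter’s array produces the largest current wins. A short sketch, with current readings invented purely for illustration:

```python
# Largest-current-wins readout across the three per-letter synapse
# arrays. The current values here are invented for illustration.

def classify(currents_by_letter):
    """Return the letter whose array produced the largest current."""
    return max(currents_by_letter, key=currents_by_letter.get)

currents = {"M": 0.31, "I": 0.87, "T": 0.12}  # hypothetical readings
print(classify(currents))  # the strongest response wins
```

With the values above, the "I" array responds most strongly, so the chip would report the letter I.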