
As more connected devices demand increasing amounts of bandwidth for tasks such as teleworking and cloud computing, it will become extremely challenging to manage the finite amount of wireless spectrum available for all users to share.
Engineers are employing artificial intelligence to dynamically manage the available wireless spectrum, with an eye toward reducing latency and boosting performance. But most AI methods for classifying and processing wireless signals are power-hungry and cannot operate in real time.
Now, MIT researchers have developed a novel AI hardware accelerator that is specifically designed for wireless signal processing. Their optical processor performs machine-learning computations at the speed of light, classifying wireless signals in a matter of nanoseconds.
The photonic chip is about 100 times faster than the best digital alternative, while converging to about 95 percent accuracy in signal classification. The new hardware accelerator is also scalable and flexible, so it could be used for a variety of high-performance computing applications. At the same time, it is smaller, lighter, cheaper, and more energy-efficient than digital AI hardware accelerators.
The device could be particularly useful in future 6G wireless applications, such as cognitive radios that optimize data rates by adapting wireless modulation formats to the changing wireless environment.
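To make that adaptive-modulation idea concrete, here is a minimal sketch in Python; the thresholds and format names are illustrative assumptions, not values from the paper or any wireless standard. A cognitive radio estimates channel quality and picks the densest modulation format the channel can support.

```python
# Toy adaptive-modulation policy (illustrative thresholds, not from
# the paper or any standard): better channels support denser formats.
def pick_modulation(snr_db: float) -> str:
    """Map an estimated channel SNR to a modulation format."""
    if snr_db >= 20:
        return "64-QAM"   # strong channel: more bits per symbol
    if snr_db >= 12:
        return "16-QAM"
    if snr_db >= 6:
        return "QPSK"
    return "BPSK"         # weak channel: favor robustness

for snr in (25, 14, 8, 2):
    print(snr, "dB ->", pick_modulation(snr))
```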
By enabling an edge device to perform deep-learning computations in real time, this new hardware accelerator could provide dramatic speedups in many applications beyond signal processing. For instance, it could help autonomous vehicles make split-second reactions to environmental changes or enable smart pacemakers to continuously monitor a patient’s heart health.
“There are many applications that would be enabled by edge devices that are capable of analyzing wireless signals. What we’ve presented in our paper could open up many possibilities for real-time and reliable AI inference. This work is the beginning of something that could be quite impactful,” says senior author Dirk Englund, a principal investigator in the Research Laboratory of Electronics (RLE).
He is joined on the paper by lead author Ronald Davis III PhD ’24; Zaijun Chen, a former MIT postdoc who is now an assistant professor at the University of Southern California; and Ryan Hamerly, a visiting scientist at RLE and senior scientist at NTT Research. The research appears today in Science Advances.
Light-speed processing
State-of-the-art digital AI accelerators for wireless signal processing convert the signal into an image and run it through a deep-learning model to classify it. While this approach is highly accurate, the computationally intensive nature of deep neural networks makes it infeasible for many time-sensitive applications.
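As a rough illustration of that digital baseline (a sketch under our own assumptions, not the paper's exact pipeline), the snippet below turns a raw IQ signal into a spectrogram, the kind of 2-D image a deep-learning classifier would then consume.

```python
# Convert a raw IQ signal into a spectrogram "image" via a short-time
# Fourier transform; a deep neural network would classify this image.
# Window/hop sizes and the test signal are illustrative.
import numpy as np

def iq_to_spectrogram(iq: np.ndarray, win: int = 64, hop: int = 32) -> np.ndarray:
    """Return the STFT magnitude: rows are frequency bins, columns are time."""
    frames = [iq[i:i + win] * np.hanning(win)
              for i in range(0, len(iq) - win, hop)]
    return np.abs(np.fft.fft(frames, axis=1)).T

# Example input: a noisy QPSK-like burst of 2,048 complex samples.
rng = np.random.default_rng(0)
symbols = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=256)
iq = np.repeat(symbols, 8) + 0.1 * rng.standard_normal(2048)
print(iq_to_spectrogram(iq).shape)  # the image a classifier would see
```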
Optical systems can accelerate deep neural networks by encoding and processing data using light, which is also less energy-intensive than digital computing. But researchers have struggled to maximize the performance of general-purpose optical neural networks when they are used for signal processing, while also ensuring that the optical device is scalable.
The researchers tackled that problem head-on by developing an optical neural network architecture specifically for signal processing, which they call a multiplicative analog frequency transform optical neural network (MAFT-ONN).
MAFT-ONN addresses the problem of scalability by encoding all signal data and performing all machine-learning operations within what is known as the frequency domain, before the wireless signals are digitized.
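The following toy numerical analogue (our simplification, not the paper's actual encoding scheme) shows the basic idea of carrying data in the frequency domain: values ride as the amplitudes of distinct tones in one analog waveform, and they can be read back out of the spectrum.

```python
# Encode data values as tone amplitudes in a single waveform, then
# recover them from the frequency domain (illustrative parameters).
import numpy as np

fs, n = 10_000, 1_000                         # sample rate (Hz), samples
t = np.arange(n) / fs
data = np.array([0.2, 0.9, 0.5, 0.7])         # values to encode
tones = np.array([500, 1_000, 1_500, 2_000])  # one carrier per value (Hz)

# Encode: superpose amplitude-weighted tones into one waveform.
waveform = sum(a * np.cos(2 * np.pi * f * t) for a, f in zip(data, tones))

# Decode: read the amplitudes straight out of the spectrum.
spectrum = np.abs(np.fft.rfft(waveform)) / (n / 2)
freqs = np.fft.rfftfreq(n, 1 / fs)
print([round(spectrum[np.argmin(np.abs(freqs - f))], 2) for f in tones])
# -> [0.2, 0.9, 0.5, 0.7]
```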
The researchers designed their optical neural network to perform all linear and nonlinear operations in-line. Both types of operations are required for deep learning.
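For readers less familiar with that distinction, a minimal numpy layer makes it concrete (purely illustrative, not the optical implementation): the matrix-vector product is the linear operation, and the activation applied afterward is the nonlinear one.

```python
# One deep-learning layer = linear operation + nonlinear operation.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))  # layer weights
x = rng.standard_normal(3)       # input vector

linear_out = W @ x               # linear: matrix-vector multiplication
layer_out = np.tanh(linear_out)  # nonlinear: activation function
print(layer_out)
```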
Thanks to this innovative design, they only need one MAFT-ONN device per layer for the entire optical neural network, as opposed to other methods that require a device for each individual computational unit, or “neuron.”
“We can fit 10,000 neurons onto a single device and compute the necessary multiplications in a single shot,” Davis says.
The researchers accomplished this using a technique called photoelectric multiplication, which dramatically boosts efficiency. It also allows them to create an optical neural network that can be readily scaled up with additional layers without requiring extra overhead.
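A simplified numerical stand-in for that multiplication (our illustration in software; the hardware performs it with light) shows what multiplying two tones does in the frequency domain: the product lands at their sum and difference frequencies, scaled by the product of the amplitudes, which is the kind of mixing a frequency-domain architecture can exploit to compute many weighted products at once.

```python
# Multiplying two tones mixes them: energy appears at the sum and
# difference frequencies with amplitude a1 * a2 / 2 each.
import numpy as np

fs, n = 100_000, 10_000
t = np.arange(n) / fs
signal = 0.8 * np.cos(2 * np.pi * 3_000 * t)  # input tone, amplitude 0.8
weight = 0.5 * np.cos(2 * np.pi * 1_000 * t)  # weight tone, amplitude 0.5

product = signal * weight                     # the analog "multiply"
spectrum = np.abs(np.fft.rfft(product)) / (n / 2)
freqs = np.fft.rfftfreq(n, 1 / fs)

for f in (2_000, 4_000):                      # 3 kHz +/- 1 kHz
    print(f, "Hz:", round(spectrum[np.argmin(np.abs(freqs - f))], 3))
# -> 0.2 at both, i.e. 0.8 * 0.5 / 2
```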
Results in nanoseconds
MAFT-ONN takes a wireless signal as input, processes the signal data, and passes the information along for later operations the edge device performs. For instance, by classifying a signal’s modulation, MAFT-ONN would enable a device to automatically infer the type of signal in order to extract the data it carries.
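As a hypothetical sketch of that downstream step (the function names and dispatch table are ours, not from the paper), an edge device could route the samples to the matching demodulator once the accelerator labels the modulation:

```python
# Dispatch samples to the right demodulator based on a classifier label.
import numpy as np

def demod_bpsk(iq: np.ndarray) -> np.ndarray:
    return (iq.real > 0).astype(int)             # 1 bit per symbol

def demod_qpsk(iq: np.ndarray) -> np.ndarray:
    bits = np.stack([iq.real > 0, iq.imag > 0], axis=1)
    return bits.astype(int).ravel()              # 2 bits per symbol

DEMODULATORS = {"BPSK": demod_bpsk, "QPSK": demod_qpsk}

label = "QPSK"  # in practice, this label would come from the classifier
samples = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j])
print(DEMODULATORS[label](samples))              # [1 1 0 1 0 0 1 0]
```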
One of the biggest challenges the researchers faced when designing MAFT-ONN was determining how to map the machine-learning computations to the optical hardware.
“We couldn’t just take a normal machine-learning framework off the shelf and use it. We had to customize it to fit the hardware and figure out how to exploit the physics so it would perform the computations we wanted,” Davis says.
When they tested the architecture on signal classification in simulations, the optical neural network achieved 85 percent accuracy in a single shot, which can quickly converge to more than 99 percent accuracy using multiple measurements. The entire process required only about 120 nanoseconds.
“The longer you measure, the higher the accuracy you will get,” Davis says.
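A back-of-the-envelope simulation (our illustration, not the paper's analysis) shows why extra measurements help so much: if each independent single-shot classification is correct 85 percent of the time, a simple majority vote over a handful of shots climbs quickly toward the reported 99-plus percent.

```python
# Monte Carlo estimate of majority-vote accuracy over repeated shots,
# assuming independent single-shot accuracy of 85 percent.
import numpy as np

rng = np.random.default_rng(0)
p_single, trials = 0.85, 100_000

for n_shots in (1, 3, 5, 9):
    correct = rng.random((trials, n_shots)) < p_single
    majority = correct.sum(axis=1) > n_shots / 2
    print(n_shots, "shots:", round(majority.mean(), 3))
# Accuracy rises toward 1.0, and at nanoseconds per shot the extra
# measurements cost very little time.
```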
While state-of-the-art digital radio-frequency devices can perform machine-learning inference in microseconds, optics can do it in nanoseconds or even picoseconds.
Moving forward, the researchers want to employ what are known as multiplexing schemes so they can perform more computations and scale up MAFT-ONN. They also want to extend their work into more complex deep-learning architectures that could run transformer models or LLMs.
The work was funded by the U.S. Army Research Laboratory, the U.S. Air Force, MIT Lincoln Laboratory, Nippon Telegraph and Telephone, and the National Science Foundation.