
Philip Burr is Head of Product at Lumai, with more than 25 years of experience in global product management, go-to-market, and leadership roles at leading semiconductor and technology companies, and a proven track record of building and scaling products and services.
Lumai is a UK-based deep tech company developing a 3D optical computing processor to accelerate artificial intelligence workloads. By performing matrix-vector multiplication with beams of light in three dimensions, its technology offers up to 50x the performance of traditional silicon-based accelerators while consuming 90% less power. This makes it especially well suited for AI inference workloads, including large language models, while significantly reducing energy costs and environmental impact.
What inspired Lumai’s founding, and how did the idea develop from research at the University of Oxford into a commercial venture?
The initial spark came when one of Lumai’s founders, Dr. Xianxin Guo, was awarded an 1851 Research Fellowship at the University of Oxford. The interviewers recognized the potential of optical computing and asked whether Xianxin would consider patenting and spinning out a company if his research was successful. That set Xianxin’s creative mind firing, and when he, together with Lumai’s other co-founder, Dr. James Spall, proved that using light to perform the computation at the heart of AI could dramatically boost AI performance and reduce its energy consumption, the foundation for the company was laid. They knew that existing silicon-only AI hardware was struggling (and still is) to increase performance without greatly increasing power and cost, and that if they could solve this problem with optical compute, they could create a product that customers wanted. Xianxin took the idea to a number of VCs, who backed him to form Lumai. Lumai recently closed its second round of funding, raising more than $10M and bringing on additional investors who also believe that optical compute is the way to meet the demand for ever-greater AI performance without ever-increasing power consumption.
You have had an impressive career spanning Arm, indie Semiconductor, and more – what drew you to join Lumai at this stage?
The short answer is the team and the technology. Lumai has an impressive team of experts in optics, machine learning, and data centers, bringing experience from the likes of Meta, Intel, Altera, Maxeler, Seagate, and IBM (alongside my own experience at Arm, indie, Mentor Graphics, and Motorola). I knew that a team of such remarkable people, focused on solving the challenge of reducing the cost of AI inference, could do amazing things.
I firmly believe that the future of AI demands new breakthroughs in computing. The prospect of being able to offer 50x the AI performance while cutting the cost of AI to 1/10th of today’s solutions was too good to pass up.
What were some of the early technical or commercial challenges your founding team faced in scaling the research success into a product-ready company?
The research success proved that optics can be used for fast and highly efficient matrix-vector multiplication. Despite the technical success, the biggest challenge was convincing people that Lumai could succeed where other optical computing startups had failed. We had to spend time explaining that Lumai’s approach was very different: instead of relying on a single 2D chip, we use 3D optics to reach the required scale and level of efficiency. There are, of course, many steps to get from laboratory research to technology that can be deployed at scale in a data center. We recognized early on that the key to success was bringing in engineers with experience of developing products in high volumes and for data centers. The other area is software – it is essential that standard AI frameworks and models can benefit from Lumai’s processor, and that we provide the tools and frameworks to make adoption as easy as possible for AI software engineers.
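As an illustration of the kind of framework integration described above, here is a minimal, hypothetical sketch of how an accelerator’s matrix-vector product might be exposed as a drop-in PyTorch layer. The OpticalLinear class and its CPU fallback are illustrative assumptions, not Lumai’s actual software stack.

```python
# Hypothetical sketch (not Lumai's actual API): an nn.Module that would
# offload its matrix-vector product to an accelerator card, falling back
# to a plain CPU matmul so the example runs anywhere.
import torch

class OpticalLinear(torch.nn.Module):
    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.weight = torch.nn.Parameter(torch.randn(out_features, in_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # A real integration would dispatch this call to the device driver.
        return x @ self.weight.t()

layer = OpticalLinear(1024, 1024)
y = layer(torch.randn(1024))  # used exactly like torch.nn.Linear
```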
Lumai’s technology is said to use 3D optical matrix-vector multiplication. Can you break that down in simple terms for a general audience?
AI systems rely on a huge number of mathematical operations called matrix-vector multiplications. These are the computational engines that power AI responses. At Lumai, we perform them using light instead of electricity. It works like this:
- We encode information into beams of light
- These light beams travel through 3D space
- The light interacts with lenses and special optical elements
- These interactions perform the mathematical operation
By using all three dimensions of space, we can process more information with each beam of light. This makes our approach extremely efficient – reducing the energy, time, and cost required to run AI systems.
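To make the operation concrete, here is a minimal numpy sketch of the matrix-vector product described above; the 1024-wide dimensions match the figures quoted later in the interview, and the random data is purely illustrative.

```python
# The core AI operation described above, written out with numpy: a single
# matrix-vector multiplication. Done digitally this takes ~1M multiply-
# accumulates; in Lumai's design it maps to one pass of light through the optics.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((1024, 1024))  # weights, encoded in the optical elements
x = rng.standard_normal(1024)          # input vector, encoded onto light beams

y = W @ x  # one matrix-vector product = one optical pass
```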
What are the main advantages of optical computing over traditional silicon-based GPUs and even integrated photonics?
Because the rate of advancement in silicon technology has slowed significantly, each step up in the performance of a silicon-only AI processor (e.g. a GPU) comes with a significant increase in power. Silicon-only solutions consume incredible amounts of power while chasing diminishing returns, which makes them incredibly complex and expensive. The advantage of using optics is that computation inside the optical domain consumes virtually no power. Energy is used to get data into the optical domain, but in the Lumai processor, for example, we can perform more than 1,000 computation operations per beam in each cycle, which makes it very efficient. This scalability cannot be achieved using integrated photonics, due to both physical size constraints and signal noise; the number of computation operations in silicon-photonic solutions is around 1/8th of what Lumai can achieve today.
How does Lumai’s processor achieve near-zero latency inference, and why is that such an important factor for modern AI workloads?
While we wouldn’t claim that the Lumai processor delivers zero latency, it does execute a very large (1024 x 1024) matrix-vector operation in a single cycle. Silicon-only solutions typically divide a matrix into smaller matrices, which are processed individually, step by step, and the results then combined. This takes time and consumes more memory and energy. Reducing the time, energy, and cost of AI processing is vital, both to allow more businesses to benefit from AI and to enable advanced AI in the most sustainable way.
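For contrast, here is a small numpy sketch of the tiling a digital accelerator typically performs; the 128 x 128 tile size is an illustrative assumption, chosen only to show how one large matrix-vector product becomes many sequential passes.

```python
# Illustrative tiling of a 1024 x 1024 matrix-vector product into 128 x 128
# tiles: 64 sequential passes with intermediate accumulation, versus the
# single-cycle operation described above.
import numpy as np

def tiled_mvm(W: np.ndarray, x: np.ndarray, tile: int = 128) -> np.ndarray:
    n, m = W.shape
    y = np.zeros(n)
    for i in range(0, n, tile):
        for j in range(0, m, tile):      # each (i, j) pair is one hardware pass
            y[i:i+tile] += W[i:i+tile, j:j+tile] @ x[j:j+tile]
    return y

rng = np.random.default_rng(0)
W = rng.standard_normal((1024, 1024))
x = rng.standard_normal(1024)
assert np.allclose(tiled_mvm(W, x), W @ x)  # 64 tiled passes, same result
```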
Can you walk us through how your PCIe-compatible form factor integrates with current data center infrastructure?
The Lumai processor uses a standard PCIe form factor card, paired with a standard CPU inside a standard 4U shelf. We are working with a number of data center rack equipment suppliers to ensure that the Lumai processor integrates with their equipment. We use standard network interfaces, standard software, and so on, so that externally the Lumai processor looks like any other data center processor.
Data center energy use is a growing global concern. How does Lumai position itself as a sustainable solution for AI compute?
Data center energy consumption is increasing at an alarming rate. According to a report by Lawrence Berkeley National Laboratory, data center power use in the US is expected to triple by 2028, accounting for up to 12% of the country’s electricity. Some data center operators are considering installing nuclear power stations to provide the necessary energy. The industry needs to look at different approaches to AI compute, and we believe that optics is the answer to this energy crisis.
Can you explain how Lumai’s architecture avoids the scalability bottlenecks of current silicon and photonic approaches?
The initial performance of the Lumai processor is only the start of what is achievable. We expect our solution to keep delivering huge leaps in performance: by increasing the optical clock speed and the vector width, all without a corresponding increase in energy consumption. No other solution can achieve this. Standard digital silicon-only approaches will continue to consume more and more power with every increase in performance. Silicon photonics cannot achieve the required vector width, which is why companies that were looking at integrated photonics for data center compute have moved to address other parts of the data center – for example, optical interconnect or optical switching.
What role do you see optics playing in the future of AI – and, more broadly, in computing overall?
Optics as a whole will play a huge role in data centers going forward – optical interconnect, optical networking, optical switching, and of course optical AI processing. The demands of the AI data center are the major driver of the move to optical. Optical interconnect will enable faster connections between AI processors, which is essential for large AI models. Optical switching will enable more efficient networking, and optical compute will enable faster, more power-efficient, and lower-cost AI processing. Together, they will help enable even more advanced AI, sidestepping both the slowdown in silicon scaling on the compute side and copper’s speed limitations on the interconnect side.
Thank you for the great interview; readers who want to learn more should visit Lumai.