
Artificial intelligence systems such as ChatGPT provide plausible-sounding answers to any question you might ask. But they don’t always reveal the gaps in their knowledge or the areas where they’re uncertain. That problem can have huge consequences as AI systems are increasingly used to develop drugs, synthesize information, and drive autonomous cars.
Now, the MIT spinout Themis AI is helping to quantify model uncertainty and correct outputs before they cause bigger problems. The company’s Capsa platform can work with any machine-learning model to detect and correct unreliable outputs in seconds. It works by modifying a model so that it can detect patterns in its data processing that indicate ambiguity, incompleteness, or bias.
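The wrap-and-estimate idea can be illustrated with a generic uncertainty technique. The sketch below is not Capsa’s actual API; it is a hypothetical wrapper built on ensemble disagreement, where the spread across ensemble members stands in for the ambiguity signals described above.

```python
import statistics

def wrap_with_uncertainty(models):
    """Hypothetical wrapper: query an ensemble of models and report a
    prediction plus a disagreement-based uncertainty for each output."""
    def predict(x):
        outputs = [model(x) for model in models]
        prediction = statistics.mean(outputs)
        # Large spread across ensemble members flags an unreliable output.
        uncertainty = statistics.pstdev(outputs)
        return prediction, uncertainty
    return predict

# Toy ensemble: three linear "models" that agree on small inputs
# and diverge on large ones.
ensemble = [lambda x, w=w: w * x for w in (0.9, 1.0, 1.1)]
predict = wrap_with_uncertainty(ensemble)

p_small, u_small = predict(1.0)    # members nearly agree: low uncertainty
p_large, u_large = predict(100.0)  # members diverge: high uncertainty
```

A caller could then threshold the returned uncertainty to decide whether to trust, flag, or discard an output.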
“The idea is to take a model, wrap it in Capsa, identify the model’s uncertainties and failure modes, and then enhance the model,” says Themis AI co-founder and MIT Professor Daniela Rus, who is also the director of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). “We’re excited about offering a solution that can improve models and offer guarantees that the model is working correctly.”
Rus founded Themis AI with Alexander Amini ’17, SM ’18, PhD ’22 and Elaheh Ahmadi ’20, MEng ’21, two former research affiliates in her lab. Since then, they’ve helped telecom companies with network planning and automation, helped oil and gas companies use AI to understand seismic imagery, and published papers on developing more reliable and trustworthy chatbots.
“We want to enable AI in the highest-stakes applications of every industry,” Amini says. “We’ve all seen examples of AI hallucinating or making mistakes. As AI is deployed more broadly, those mistakes could lead to devastating consequences. Themis makes it possible for any AI to forecast and predict its own failures before they happen.”
Helping models know what they don’t know
Rus’s lab has been researching model uncertainty for years. In 2018, she received funding from Toyota to study the reliability of a machine learning-based autonomous driving solution.
“That is a safety-critical context where understanding model reliability is very important,” Rus says.
In separate work, Rus, Amini, and their collaborators built an algorithm that could detect racial and gender bias in facial recognition systems and automatically reweight the model’s training data to eliminate that bias. The algorithm worked by identifying the unrepresentative parts of the underlying training data and generating new, similar data samples to rebalance it.
In 2021, the eventual co-founders showed that a similar approach could be used to help pharmaceutical companies use AI models to predict the properties of drug candidates. They founded Themis AI later that year.
“Guiding drug discovery could potentially save a lot of money,” Rus says. “That was the use case that made us realize how powerful this tool could be.”
Today, Themis AI is working with enterprises in a variety of industries, and many of those companies are building large language models. By using Capsa, those models are able to quantify their own uncertainty for each output.
“Many companies are interested in using LLMs that are based on their data, but they’re concerned about reliability,” says Stewart Jamieson SM ’20, PhD ’24, Themis AI’s head of technology. “We help LLMs self-report their confidence and uncertainty, which enables more reliable question answering and flagging of unreliable outputs.”
Themis AI is also in talks with semiconductor companies building AI solutions on their chips that can work outside of cloud environments.
“Normally these smaller models that run on phones or embedded systems aren’t very accurate compared to what you could run on a server, but we can get the best of both worlds: low latency and efficient edge computing without sacrificing quality,” Jamieson explains. “We see a future where edge devices do most of the work, but whenever they’re unsure of their output, they can forward those tasks to a central server.”
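One way to picture the edge-to-server handoff Jamieson describes is a simple confidence-threshold router. Everything here, including the `route` policy, the toy models, and the threshold value, is a hypothetical illustration rather than Themis AI’s implementation.

```python
def route(edge_model, server_model, x, threshold=0.2):
    """Hypothetical policy: answer on-device when the edge model is
    confident, otherwise forward the task to a central server."""
    prediction, uncertainty = edge_model(x)
    if uncertainty <= threshold:
        return prediction, "edge"
    return server_model(x), "server"

def edge_model(x):
    # Toy edge model: confident only on the small inputs it "knows".
    uncertainty = 0.05 if abs(x) < 10 else 0.9
    return 2 * x, uncertainty

def server_model(x):
    # Toy stand-in for the larger, more capable server-side model.
    return 2 * x

result_near, where_near = route(edge_model, server_model, 3)   # stays on-device
result_far, where_far = route(edge_model, server_model, 50)    # forwarded
```

The threshold trades latency against quality: a lower value forwards more tasks to the server, a higher one keeps more work on the device.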
Pharmaceutical companies can also use Capsa to improve AI models used to identify drug candidates and predict their performance in clinical trials.
“The predictions and outputs of these models are very complex and hard to interpret; experts spend a lot of time and effort trying to make sense of them,” Amini remarks. “Capsa can give insights right out of the gate to show whether predictions are backed by evidence in the training set or are just speculation without much grounding. That can accelerate the identification of the strongest predictions, and we think it has huge potential for societal good.”
Research with impact
The Themis AI team believes the company is well-positioned to improve the cutting edge of constantly evolving AI technology. For instance, the company is exploring Capsa’s ability to improve accuracy in an AI technique known as chain-of-thought reasoning, in which LLMs explain the steps they take to reach an answer.
“We’ve seen signs that Capsa could help guide those reasoning processes to identify the highest-confidence chains of reasoning,” Jaimison says. “We think that has huge implications for improving the LLM experience, reducing latency, and reducing computation requirements. It’s an extremely high-impact opportunity for us.”
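The idea of steering toward the highest-confidence chain of reasoning can be sketched with a toy scoring rule. The per-step confidences and the product-based score below are assumptions for illustration, not how Capsa actually scores reasoning chains.

```python
import math

def chain_confidence(chain):
    # Score a reasoning chain as the product of its per-step confidences,
    # so a single uncertain step drags the whole chain down.
    return math.prod(confidence for _, confidence in chain)

# Two hypothetical candidate chains, each a list of (step, confidence) pairs.
candidate_chains = [
    [("factor the expression", 0.90), ("solve for x", 0.95)],
    [("guess a pattern", 0.99), ("extrapolate", 0.40)],
]

# Keep only the highest-confidence chain instead of exploring all of them,
# which is where the latency and compute savings would come from.
best_chain = max(candidate_chains, key=chain_confidence)
```

Pruning low-confidence chains early is what would reduce latency and computation, since the model spends tokens only on the reasoning path it trusts most.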
For Rus, who has co-founded several companies since coming to MIT, Themis AI is an opportunity to ensure her MIT research has impact.
“My students and I have become increasingly passionate about going the extra mile to make our work relevant to the world,” Rus says. “AI has tremendous potential to transform industries, but it also raises concerns. What excites me is the opportunity to help develop technical solutions that address these challenges and also build trust and understanding between people and the technologies that are becoming part of their daily lives.”