Generative artificial intelligence models have had such a lasting impact on digital content creation that it’s hard to remember what the Internet was like before them. You can use these AI tools for clever projects like videos and photos, but their flair for the creative hasn’t yet crossed over into the physical world.
So why haven’t we seen generative AI-enabled personalized items, like phone cases and utensils, in homes, offices, and stores yet? According to researchers at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), a key issue is the mechanical integrity of 3D models.
While AI can help generate personalized 3D models, those systems often don’t consider the physical properties of the resulting object. Faraz Faruqi, a PhD student in the MIT Department of Electrical Engineering and Computer Science (EECS) and a CSAIL researcher, has explored this trade-off, creating one generative AI-based system that makes aesthetic changes to designs while preserving functionality, and another that modifies structures to produce the tactile properties users want to feel.
Make it real
Faruqi, in collaboration with researchers from Google, Stability AI, and Northeastern University, has now found a way to create real-world objects with AI: objects that are durable and exhibit the user’s desired appearance and texture. With the AI-powered “MechStyle” system, users simply upload a 3D model or select a preset asset, such as a vase or hook, and prompt the tool with text or images to create a personalized version. A generative AI model then modifies the 3D geometry while MechStyle simulates how those changes will affect particular parts, ensuring the object remains structurally sound. When you’re happy with this AI-enhanced blueprint, you can 3D print it and use it in the real world.
You can select a model of, say, a wall hook and the material with which you will print it (for example, a plastic such as polylactic acid). Then, you can prompt the system to create a personalized version with instructions like “Generate a cactus-like hook.” The AI model then works in conjunction with the simulation module to generate a cactus-styled 3D model that retains the hook’s structural properties. This green, embossed accessory can be used to hang mugs, coats, and backpacks. Such creations are possible, in part, thanks to a stylization process in which the system changes a model’s geometry based on its understanding of the text prompt, guided by feedback from the simulation module.
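The feedback loop described above, where style edits are only kept if a simulation says the part can still bear its load, can be sketched in miniature. Everything here is a toy illustration and an assumption, not MechStyle’s actual code: the “mesh” is just a list of wall thicknesses, `simulate_stress` is a stand-in for real finite element analysis, and the function names and the 4.0 stress limit are invented for this example.

```python
import random

def simulate_stress(thickness):
    """Toy stand-in for FEA: stress concentrates where the part is thinnest."""
    return [1.0 / t for t in thickness]

def stylize_with_feedback(thickness, steps=100, stress_limit=4.0, seed=0):
    """Apply random 'style' perturbations to the geometry, vetoing any step
    that would push simulated stress past the limit."""
    rng = random.Random(seed)
    for _ in range(steps):
        i = rng.randrange(len(thickness))
        proposal = list(thickness)
        proposal[i] += rng.uniform(-0.05, 0.05)  # one stylization edit
        if proposal[i] > 0 and max(simulate_stress(proposal)) <= stress_limit:
            thickness = proposal  # accept only structurally sound edits
    return thickness

styled = stylize_with_feedback([0.5, 0.4, 0.6])
```

By construction, every geometry the loop returns satisfies the stress limit, which is the core idea: the generator proposes, the simulator vetoes.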
According to CSAIL researchers, 3D stylization came with unintended consequences. Their initial study showed that only 26 percent of 3D models remained structurally viable after being modified, meaning the AI system did not understand the physics of the models it was modifying.
“We want to use AI to create models that you can actually build and use in the real world,” says Faruqi, lead author of a paper presenting the project. “So MechStyle actually simulates how GenAI-based changes would affect a structure. Our system lets you personalize the tactile experience of your item, incorporating your personal style while ensuring that the object can withstand everyday use.”
These capabilities could eventually help users personalize their accessories, for example by creating a unique pair of glasses speckled with blue and beige dots that resemble fish scales. The system also produced a pillbox with a rocky texture, speckled with pink, watery-looking spots. Its range extends to unique home and office decor, such as a red, magma-like lampshade, and to assistive technology tailored to users’ specifications, such as finger splints for hand injuries and pot grips for users with motor impairments.
In the future, MechStyle may also be useful for prototyping accessories and other handheld products that could be sold in a toy store, hardware store, or craft boutique. The goal, say the CSAIL researchers, is for both expert and novice designers to spend more time brainstorming and testing different 3D designs rather than assembling and customizing objects by hand.
Staying strong
To ensure that MechStyle’s creations can withstand daily use, the researchers augmented their generative AI technology with a type of physics simulation called finite element analysis (FEA). Imagine a 3D model of an object, such as a pair of glasses, overlaid with a kind of heat map showing which regions remain structurally viable under real-world loads and which do not. As the AI refines the model, the physics simulation highlights which parts are becoming weak and prevents further changes there.
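The “heat map” idea reduces to comparing per-region stress against what the print material can take. The sketch below is a deliberately simplified stand-in: real FEA solves elasticity equations over a mesh, whereas this computes only uniaxial stress (sigma = F / A) per cross-section, and the 50 MPa yield strength for printed PLA is a rough ballpark assumed for illustration.

```python
def element_stress(load_n, areas_mm2):
    """Per-element uniaxial stress, sigma = F / A, in MPa (N per mm^2).
    A real FEA solver computes full stress fields; this is a toy model."""
    return [load_n / a for a in areas_mm2]

def weak_elements(stresses_mpa, yield_mpa=50.0):
    """Indices of regions whose stress exceeds the material's yield strength.
    50 MPa is a rough figure for printed PLA, assumed for illustration."""
    return [i for i, s in enumerate(stresses_mpa) if s > yield_mpa]

# A 200 N load on a hook whose cross-sections are 10, 3, and 8 mm^2:
stresses = element_stress(200.0, [10.0, 3.0, 8.0])
hot_spots = weak_elements(stresses)  # only the thin 3 mm^2 region overloads
```

The flagged indices play the role of the red zones on the heat map: regions the stylizer must not thin out any further.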
Running these simulations after every change would significantly slow down the AI process, Faruqi says, so MechStyle is designed to know when and where to perform additional structural analysis. “MechStyle’s adaptive scheduling strategy keeps track of what changes are occurring at specific points in the model. When the GenAI system makes changes that jeopardize certain areas of the model, our approach re-simulates the physics of the design. MechStyle then makes subsequent modifications to ensure that the model does not break after fabrication.”
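One minimal way to read that adaptive scheduling idea: accumulate the geometry changes per vertex and only pay for a fresh simulation when the edits near already-critical regions add up. This is a hypothetical sketch; the function name, the change-budget threshold of 0.1, and the decision rule are all assumptions for illustration, not MechStyle’s actual parameters.

```python
def should_resimulate(deltas, critical, budget=0.1):
    """Trigger a fresh FEA pass only when the accumulated geometry change
    at critical (high-stress) vertices exceeds a budget; edits elsewhere
    are assumed safe to defer."""
    return sum(abs(deltas[i]) for i in critical) > budget

# Edits concentrated on critical vertices 1 and 2 trigger re-simulation;
# even a large edit on a non-critical vertex does not.
near = should_resimulate([0.00, 0.08, 0.05], critical=[1, 2])
far = should_resimulate([0.20, 0.01, 0.02], critical=[1, 2])
```

The design choice this illustrates is the trade-off in the quote: tracking *where* changes land lets the system skip most simulations without letting risky edits go unchecked.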
Combining the FEA process with adaptive scheduling allowed MechStyle to generate objects that were structurally viable up to 100 percent of the time. Testing 30 different 3D models with styles resembling bricks, stones, and cacti, the team found that the most effective way to create structurally viable objects was to dynamically identify weak areas and steer the generative AI process to minimize its impact on them. In these scenarios, the researchers could either stop stylization altogether once a particular stress threshold was reached, or make small, gradual refinements to keep at-risk areas below that mark.
The system also offers two different modes: a Freestyle feature that lets the AI quickly visualize different styles on your 3D model, and a MechStyle mode that carefully analyzes the structural effects of your changes. You can explore different ideas in Freestyle, then switch to the MechStyle mode to see how an artistic flourish will affect the durability of particular areas of the model.
The CSAIL researchers note that although their system can ensure a model remains structurally sound before it is 3D printed, it is not yet able to improve 3D models that were not viable to begin with. If you upload such a file to MechStyle, you will receive an error message; Faruqi and his colleagues intend to add the ability to repair those faulty models in the future.
Additionally, the team hopes to use generative AI to create 3D models for users from scratch, instead of only styling preset and user-uploaded designs. This would make the system even more user-friendly, so that people who are less familiar with 3D modeling, or who can’t find their design online, can create objects easily. Say you wanted a unique type of bowl and that 3D model wasn’t available in the repository; instead, the AI could create it for you.
“Although style-transfer works incredibly well for 2D images, not much work has explored how this transfers to 3D,” says Google research scientist Fabian Manhardt, who was not involved in the paper. “Essentially, 3D is a more difficult task, because training data is sparse and changing the geometry of an object can damage its structure, making it unusable in the real world. MechStyle helps solve this problem, allowing 3D stylization without breaking the structural integrity of the object through simulation. It empowers people to be creative and better express themselves through their tailored products.”
Faruqi co-wrote the paper with senior author Stefanie Mueller, an MIT associate professor and CSAIL principal investigator, and two other CSAIL colleagues: researcher Leandra Tejedor SM ’24 and postdoc Jiaji Li. Additional co-authors include Amira Abdel-Rahman PhD ’25, now an assistant professor at Cornell University; Martin Nisser SM ’19, PhD ’24; Google researcher Vrishank Phadnis; Varun Jampani, vice president of AI research at Stability AI; Neil Gershenfeld, MIT professor and director of the Center for Bits and Atoms; and Megan Hofmann, assistant professor at Northeastern University.
The work was supported by the MIT-Google Program for Computing Innovation and was presented at the Association for Computing Machinery’s Symposium on Computational Fabrication in November.