Designers, manufacturers and others often use 3D printing to rapidly prototype a range of functional objects, from movie props to medical devices. Accurate print previews are essential so that users know that a created item will work as expected.
But the previews generated by most 3D-printing software focus on function rather than aesthetics. A printed item may end up with a different color, texture, or shade than the user expected, resulting in multiple reprints that waste time, effort, and material.
To help users imagine how a constructed object will look, researchers at MIT and elsewhere have developed an easy-to-use preview tool that puts appearance first.
Users upload a screenshot of the object from their 3D-printing software, along with an image of the printed material. From these inputs, the system automatically generates a rendering of what the built object is likely to look like.
The artificial intelligence-powered system, called VisiPrint, is designed to work with a range of 3D-printing software and can handle any material example. It accounts not only for the color of the material, but also for its shine, its transparency, and the way nuances of the manufacturing process affect the object's appearance.
Such aesthetics-focused previews may be particularly useful in fields such as dentistry, helping practitioners ensure that temporary crowns and bridges match the appearance of the patient’s teeth, or in architecture, assisting designers in assessing the visual impact of models.
“3D printing can be a very wasteful process. Some studies estimate that up to a third of the material used goes straight to landfill, often from prototypes that the user discards. To make 3D printing more sustainable, we want to reduce the number of attempts it takes to get the prototype you want. A user shouldn’t have to try every printing material at their disposal before settling on a design,” says lead author Maxine Peroni-Scharf, a graduate student in electrical engineering and computer science (EECS).
She is joined on the paper by EECS graduate student Faraz Farooqui; Raul Hernandez, an MIT graduate; Suyeon Ahn, a graduate student at Gwangju Institute of Science and Technology; Szymon Rusinkiewicz, professor of computer science at Princeton University; William Freeman, Thomas and Gerd Perkins Professor of EECS at MIT and member of the Computer Science and Artificial Intelligence Laboratory (CSAIL); and senior author Stefanie Mueller, associate professor of EECS and mechanical engineering at MIT and a member of CSAIL. The research will be presented at the ACM CHI Conference on Human Factors in Computing Systems.
Perfect aesthetics
The researchers focused on fused deposition modeling (FDM), the most common type of 3D printing. In FDM, a filament of print material is melted and then extruded through a nozzle to build an object one layer at a time.
Generating accurate aesthetic previews is challenging because the melting and extrusion processes can alter a material's appearance, as can the height of each deposited layer and the path the nozzle follows during fabrication.
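To see why layer height alone changes how a print looks, consider the periodic "layer line" banding FDM leaves on a surface. The sketch below is a toy illustration (the layer heights and shading model are illustrative assumptions, not values from the paper): each extruded bead is roughly rounded, so brightness peaks mid-layer and dips in the groove between layers, and thinner layers produce finer banding over the same height.

```python
import numpy as np

def layer_line_shading(height_mm, layer_height_mm=0.2, samples_per_mm=50):
    """Simulate the periodic light/dark banding that FDM layer lines
    produce on a vertical surface, as a 1-D brightness profile."""
    z = np.linspace(0.0, height_mm, int(height_mm * samples_per_mm))
    # Position within the current layer, in [0, 1).
    phase = (z % layer_height_mm) / layer_height_mm
    # A rounded bead is brightest at its center (phase 0.5) and
    # darkest at the groove between layers (phase 0 or 1).
    return 0.6 + 0.4 * np.sin(np.pi * phase)

profile = layer_line_shading(height_mm=10.0)                       # 0.2 mm layers
fine = layer_line_shading(height_mm=10.0, layer_height_mm=0.1)     # finer texture
```

Halving the layer height doubles the number of bright/dark bands over the same 10 mm, which is exactly the kind of appearance change a purely functional preview ignores.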
VisiPrint uses two AI models that work together to tackle those challenges.
A VisiPrint preview starts from two inputs: a screenshot of the digital design from the user’s 3D-printing software (called “slicer” software), and an image of the print material, which can be taken from an online source or captured from a printed sample.
From these inputs, a computer vision model extracts features from the material sample that are important to the appearance of the object.
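The article doesn't say which features the vision model learns, but a hand-rolled stand-in gives the flavor: summarizing a material swatch by a few appearance statistics. Everything below (the specific features, thresholds, and function name) is a hypothetical illustration, not VisiPrint's actual feature extractor.

```python
import numpy as np

def material_features(swatch):
    """Summarize an RGB material swatch (H x W x 3 array, values in [0, 1])
    with a few appearance statistics. These hand-picked features stand in
    for the learned embedding a real vision model would produce."""
    mean_color = swatch.reshape(-1, 3).mean(axis=0)          # average RGB
    luminance = swatch @ np.array([0.2126, 0.7152, 0.0722])  # per-pixel brightness
    contrast = float(luminance.std())                        # texture-roughness proxy
    # Crude gloss proxy: fraction of pixels much brighter than average,
    # i.e. candidate specular highlights.
    gloss = float((luminance > luminance.mean() + 2 * contrast).mean())
    return {"mean_color": mean_color, "contrast": contrast, "gloss": gloss}

# A flat mid-gray swatch has no texture contrast and no highlights.
flat = np.full((32, 32, 3), 0.5)
feats = material_features(flat)
```

A learned model captures far subtler cues than these three numbers, but the principle is the same: compress the swatch image into properties that matter for rendering.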
It feeds those features into a generative AI model that renders the geometry and structure of the object, while incorporating the so-called “slicing” pattern the nozzle will follow as it extrudes each layer.
The key to the researchers’ approach is a special conditioning method, which involves carefully adjusting the internal workings of the model so that its output follows the slicing pattern and the constraints of the 3D-printing process.
Their conditioning method uses a depth map that preserves the shape and shadow of the object, as well as an edge map that represents internal contours and structural boundaries.
“If you don’t have the right balance of these two things, you can end up with bad geometry or the wrong slicing pattern. We had to be careful to combine them in the right way,” says Peroni-Scharf.
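The paper's exact conditioning mechanism isn't described in the article, but the balance Peroni-Scharf mentions can be sketched as a weighted stack of the two maps fed to a generative model. The function name, normalization, and weights below are illustrative assumptions only.

```python
import numpy as np

def build_conditioning(depth_map, edge_map, depth_weight=0.5):
    """Combine a depth map (overall shape) and an edge map (slicing
    contours) into one multi-channel conditioning tensor. depth_weight
    balances the two: too much depth and the slicing pattern is
    ignored; too much edge and the geometry drifts."""
    def norm(m):
        # Normalize to [0, 1] so neither map dominates by scale alone.
        span = m.max() - m.min()
        return (m - m.min()) / span if span > 0 else np.zeros_like(m)

    d, e = norm(depth_map), norm(edge_map)
    # Stack as weighted channels; a real model would consume this
    # alongside the material features when generating the preview.
    return np.stack([depth_weight * d, (1.0 - depth_weight) * e], axis=0)

depth = np.linspace(0, 1, 16).reshape(4, 4)     # toy depth gradient
edges = np.zeros((4, 4)); edges[2, :] = 1.0     # one horizontal slicing contour
cond = build_conditioning(depth, edges, depth_weight=0.7)
```

Shifting `depth_weight` toward 1 emphasizes geometry at the expense of the slicing pattern, and vice versa, which is the trade-off the researchers describe having to tune.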
A user-centered system
The team also designed an easy-to-use interface where users can upload the required images and evaluate the resulting preview.
The VisiPrint interface enables more advanced makers to adjust many settings, such as the effect of certain colors on the final look.
Finally, the aesthetic preview is intended to complement the functional preview generated by the slicer software, as VisiPrint does not estimate printability, mechanical feasibility, or probability of failure.
To evaluate VisiPrint, the researchers conducted a user study in which participants compared the system to other approaches. Almost all participants said its previews more closely matched both the overall appearance and the texture of the printed objects.
Additionally, the VisiPrint preview process took about one minute on average, more than twice as fast as any competing method.
“VisiPrint really shined compared to other AI interfaces,” she says. “If you gave the same screenshot to a more general AI model, it might randomly resize the object or use the wrong slicing pattern because it has no direct conditioning.”
In the future, the researchers want to address artifacts that can arise when model previews contain extremely fine details. They also want to add features that allow users to customize parts of the printing process beyond the color of the material.
“It’s important to think about the way we manufacture objects,” says Peroni-Scharf. “We need to continue to try to develop methods that reduce waste. Ultimately, this marriage of AI with the physical manufacturing process is an exciting area of future work.”
“‘What you see is what you get’ was the main thing that made desktop publishing ‘happen’ in the 1980s, because it allowed users to get what they wanted on the first try. Now it’s time for 3D printing to get WYSIWYG. VisiPrint is a great step in this direction,” says Patrick Bowdish, professor of computer science at the Hasso Plattner Institute, who was not involved with this work.
This research was partially funded by an MIT Morningside Academy for Design Fellowship and an MIT MathWorks Fellowship.