
Adobe on Thursday announced an upgrade to its Firefly video model and introduced new third-party artificial intelligence (AI) models that will be available on its platform. The California-based software giant said it is improving the Firefly video model's motion generation to make output more natural and smooth. The company is also adding advanced video controls that let users generate more consistent video outputs. Additionally, Adobe introduced four new third-party AI models that are being added to Firefly Boards.
In a blog post, the software giant detailed the new features and tools that Adobe Firefly users will soon receive. These features will be accessible only to paying customers, and some are exclusive to the web app for now.
Adobe’s Firefly video model already produces videos with realistic, physics-based motion. Now, the company is upgrading its motion generation capabilities to deliver smoother, more natural transitions. These improvements apply to both 2D and 3D content, and not only to characters: elements such as floating bubbles, rustling leaves, and drifting clouds also gain more fluid motion.
The recently released Firefly app is also getting support for new third-party AI models. Adobe is bringing image and video models from Topaz Labs, as well as Moonvalley's Marey, to Firefly Boards soon. Meanwhile, Luma AI's Ray2 and Pika 2.2, already available in Boards, will soon gain video generation capability (currently, they can only be used to generate images).
Turning to the new video controls, Adobe has added tools intended to reduce trial-and-error prompting and cut down on the need for inline edits. The first tool lets users upload a video as a reference, and Firefly will follow its composition in the generated output.
Another new addition is a style preset tool. Users generating AI video can now choose a style such as claymation, anime, line art, or 2D alongside their prompt, and Firefly will follow the style instructions in the final output. Keyframe cropping is also now available on the platform: users can upload the first and final frames of a video, and Firefly will generate a video that matches the format and aspect ratio.
In addition, Adobe is also introducing a new tool, currently in beta, that generates sound effects. The tool allows users to create custom audio using voice or text prompts and layer it onto AI-generated video. When using their voice, users can also control the timing and intensity of the sound, and Firefly will produce custom audio matching the energy and rhythm of the voice.
Finally, the company is also introducing an avatar feature that converts a script into an avatar-led video. Users will be able to select their preferred avatar from Adobe's pre-listed library, customise the background, and even choose the accent of the generated speech.