Adobe is introducing a new family of creative generative AI tools, known as Adobe Firefly, into its Creative Cloud, Document Cloud, Experience Cloud and Adobe Express workflows.

Initially focused on generating images and text effects, the new tools will improve the precision, power, speed and ease of editing content for distribution. Adobe Firefly will be part of a series of new Adobe Sensei generative AI services across Adobe’s clouds. Users will be able to generate content using their own words, producing images, audio, vectors, videos and 3D assets through access to creative tools such as brushes, selections, colour gradients and video transformations. Creators can produce limitless variations of content, making changes again and again.

Adobe Firefly will be made up of multiple models, tailored to serve customers with a wide array of skill sets and technical backgrounds, working across a variety of different use cases. Adobe’s first model, trained on Adobe Stock images, openly licensed content and public domain content where copyright has expired, will focus on images and text effects and is designed to generate content safe for commercial use. The first applications to benefit from Adobe Firefly integration will be Adobe Express, Adobe Experience Manager, Adobe Photoshop and Adobe Illustrator.

Ahead of the final launch, the date of which has not been specified, Adobe has released a beta version that showcases how creators of all experience and skill levels can generate high-quality images and text effects. Through the beta process, the company will engage with the creative community and its customers as it evolves this transformational technology and begins integrating it into its applications.