FIRST LOOK: Adobe’s Generative Fill Tool

      In late May, Adobe announced a new Generative Fill tool based on the Adobe Firefly family of generative AI models. Firefly was created as part of Adobe’s Sensei artificial intelligence and machine learning framework, which was launched in 2016 to help business professionals streamline their workflows. Although it’s not a stand-alone application, the Firefly toolset underpins the new Generative Fill function in the desktop edition of Photoshop (Beta) v. 24.7, which we used for this review. This version of Photoshop is only available to existing Creative Cloud subscribers.


To install the Photoshop (Beta) app, click on the Beta apps tab (circled in red) and select Install next to Photoshop (Beta).

While some aspects of the Generative Fill function may resemble AI image generators like DALL-E 2, Stable Diffusion and Midjourney, which have attracted a lot of press in recent months, the term ‘fill’ signals that it can’t create images out of nothing, as those applications can. You need a base image to start with.

      What can Generative Fill do?
Generative Fill can be used to apply a number of changes, including removing unwanted elements from an image, extending its borders, changing a background or updating a subject’s outfit. In each case, the new AI-generated content is blended seamlessly into the original image with appropriate shadows, reflections, lighting and perspective. Adobe’s website has more details. But it can also offer more to photographers – and some other potential users – than the current AI image generators because there appears to be no limit to the size of the base image you can use. It also works a lot better than some of them!

The new tool is similar to Photoshop’s existing Content-Aware Fill tool, which was introduced in 2018 to help users remove unwanted objects from an image by analysing the pixels within a selection, comparing them to pixels in the surrounding area and creating new pixels that blend in as seamlessly as possible. Success depends upon the user’s ability to set the brush parameters and apply the changes where the tool will work best (which requires practice).
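For contrast, this classical (non-generative) approach can be sketched in a few lines with OpenCV’s inpainting function, which rebuilds a masked region purely from the surrounding pixels. This is our own illustrative stand-in for the idea, not Adobe’s actual implementation, and the file names are placeholders.

# Classical (non-generative) inpainting: fill a masked region using only
# the surrounding pixels. An illustrative stand-in, not Adobe's algorithm.
# Requires: pip install opencv-python
import cv2

image = cv2.imread("photo.jpg")
# Non-zero pixels in the mask mark the unwanted object to remove.
mask = cv2.imread("object_mask.png", cv2.IMREAD_GRAYSCALE)

# Telea's method propagates colour and texture inward from the mask border.
result = cv2.inpaint(image, mask, inpaintRadius=5, flags=cv2.INPAINT_TELEA)
cv2.imwrite("object_removed.jpg", result)

Because this method only ever copies information from nearby pixels, it works best on simple, repetitive backgrounds – which is exactly the limitation the AI-based approach is designed to overcome.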

      The AI processing enables Generative Fill to work on images in which usable clues in the base image are relatively small, quite subtle or absent. The main Firefly-based tools include adjustable brushes, insert and remove tools and a ‘match shape’ function that determines how closely the generated image will match the selection’s shape. A ‘preserve content’ setting lets you choose how much of the original content will be kept in the image, while ‘prompt guidance’ lets you specify how closely the generated content should adhere to your prompt.

      How does it work?
All AI content generators work in a similar fashion, using iterative processing, machine learning and a large dataset of relevant content, which is used to ‘train’ the software to produce new content. This is as true for text generators like ChatGPT as it is for image generators like those mentioned above.

      New content is created by inputting text-based prompts (you ‘tell’ the software what picture or text you want it to produce).

The size and scope of the training database have a huge influence on the software’s ability to deliver the desired result, as does the quality of the prompts it receives. The larger the database, the more information the software has to draw on; the more precise the prompt, the better the software can focus on assembling content that satisfies the request.
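Adobe hasn’t published Firefly’s internals, but the general technique of text-prompted ‘fill’ can be sketched with an open-source inpainting model. The example below is a minimal sketch using the Hugging Face diffusers library with the runwayml/stable-diffusion-inpainting checkpoint – our own stand-in, not anything Adobe uses. Note how guidance_scale and strength loosely parallel the ‘prompt guidance’ and ‘preserve content’ settings described earlier.

# A minimal sketch of text-prompted inpainting with an open-source model.
# It illustrates the general technique only - NOT how Firefly works internally.
# Requires: pip install torch diffusers transformers pillow
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

base = Image.open("sky.jpg").convert("RGB").resize((512, 512))
# White pixels in the mask mark the region to regenerate; black is kept.
mask = Image.open("mask.png").convert("RGB").resize((512, 512))

variations = pipe(
    prompt="a hot air balloon in a blue sky with clouds",
    image=base,
    mask_image=mask,
    guidance_scale=7.5,       # loosely parallels 'prompt guidance'
    strength=0.9,             # lower values keep more of the original pixels
    num_images_per_prompt=3,  # mirrors the three variations Photoshop offers
).images

for i, img in enumerate(variations):
    img.save(f"variation_{i}.png")

The masked region is the only part the model regenerates, which is why a base image is always required – the same constraint that applies to Generative Fill.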


      The new Generative Fill user interface in Photoshop.

Photoshop’s Generative Fill function introduces a new Contextual Task Bar, which pops up whenever one of the selection tools is used. This bar presents the most relevant next steps in your workflow, providing icons for Select and Mask, Feather, Invert, Create Adjustment Layer and Fill Selection.

      The bar moves as you work on the image so it’s close to the area you’re working on. You can reset its position, pin it to a place on the desktop, hide it or remove it completely.

Clicking on the Generative Fill button displays a text field where you can input a description of the kind of fill to be created within the selection. If you leave this field blank, the software will determine the fill to use, based on the surrounding area.

When you hit the Generate button, Generative Fill creates three new layers of content for users to choose from. Once the best one has been selected, the image must be flattened before any further processing can take place or the image can be saved.

      Performance
      Generative Fill is still in its infancy and will almost certainly be refined and improved over time, at least in part through user feedback gained through this beta release. Our tests show this tool still has a way to go – although it’s already pretty impressive when the prompts supplied integrate well with the training database.

Each time you use the tool, Photoshop should present three different options to choose from, along with a three-dot menu where you can register each result as good, indifferent or bad. You can also submit additional information to assist with further refinement of the Generative Fill tool, or simply delete the picture.

Despite some claims to the contrary, we found the Generative Fill tool we used is not yet capable of adding enough new content to a vertical image to turn it into a horizontal one. We tried it with 2:3 and 3:4 aspect ratio images and it failed every time, probably because the area of new content required was simply too big: turning a 2:3 portrait into a 3:2 landscape of the same height means widening the canvas from two units to 4.5, so more than half of the final frame would have to be invented.

      We found the best results are achieved when it’s used with small areas and in pictures where the backgrounds make it easy to recognise and replicate their hues, tones and patterns. Where totally unrelated new content is required, the end results can be patchy.

      Here are some of the results from our trials of the Generative Fill tool’s capabilities.

Adding new content. We asked the software to insert a hot air balloon into a shot of clouds in a blue sky. This was easily accomplished, although the inserted layer overlapped some of the clouds, obscuring them. We had to use Photoshop’s Eraser tool to ‘rub out’ the unwanted parts of the overlay so the clouds behind the balloons were revealed.


      The source image.


      The software inserted a shape containing two balloons that overlaid the clouds.  This screen grab shows the partial erasure of the unwanted parts of the layer to reveal the clouds behind the balloons.


      The end result.

We also set the software the task of inserting an Aurora Australis into the evening sky in a photo, but this time we selected only the sky.


      The original selection.


      The software produced three options, shown above. One (Version 2) is clearly better than the others.

      Some insertions are less successful because they fail to match the insertion with the shape of the frame or the character and age of the background. An example is provided below.

      We took a photo showing the cathedral-like interior of The Stick Shed in Murtoa, Victoria, which was constructed in the early 1940s, and asked for a tractor to be inserted.


      The original image with the insertion area selected.


The three options offered by the software all show modern tractors, which don’t fit in well with the ‘antique’ background. The orientation of the first two is also uninspiring, while the third looks a bit out of proportion.

      When we asked Photoshop to insert a rainbow into a selected area of sky in this outback landscape shot, the Generative Fill tool went well beyond the boundaries of the original selection. Judge for yourself whether you like any of the three options offered.


      The original image with the insertion area selected.




      Each of the options provided extended the landscape into the selected area of sky.

      Extending the canvas. We had a few tries at extending the canvas, starting by asking the software to convert a vertical photo into a horizontal one. Unsurprisingly, it was unable to do so as there was too much blank space to fill. However, it was very successful when it came to filling in small white areas, created when a picture was rotated to straighten the horizon.


      The rotated image showing white areas that could benefit from being filled in.

      We found it was best to work on one selected area at a time, rather than selecting the entire frame and asking the software to ‘Fill in the white areas’. One of the three attempts for each selection was usually better than the others.


      This screen grab shows the result of using Generative Fill on the top left corner of the rotated image. It’s not perfect but would be quick and easy to fine-tune with Photoshop’s Spot Healing brush.

      Removing and/or replacing unwanted content. Asking the software to remove a person from a photo produced some interesting results. When the person was a small component in a landscape, the software usually did a good job, as shown in the screen grab below.


      However, when we tried to remove a person from a family photo line-up, which required a more precise selection, the software replaced the area with another person, created from the program’s database. While some of the three replacements shown below might look somewhat credible, family members wanting a precious memento wouldn’t be impressed.


      The original photo.




      The three options offered by the software.

      Changing a background. Generative Fill processing did a really good job when we asked it to replace the background in the image below with the interior of a pub.


      The original image with the subject selected.


      The three background replacement options offered.

      It was impressive to see how well the software matched the lighting and tones of the replacements to the original subject. All backgrounds were nicely out-of-focus as well, adding to their credibility.

      Changing a subject’s outfit. We found asking the software to change a subject’s outfit was usually successful – provided the area of clothing to be replaced was selected with sufficient precision.


      The original photo showing the selected replacement area.


      The three options provided by the software all look completely credible.

       How does it compete with AI image generators?
      It doesn’t; Adobe’s Generative Fill is actually quite different and more like a potentially useful extension to the existing software. In some important respects, it’s a superior tool, even though it can’t create new images from scratch. As mentioned above, there appears to be no limit to the size of the base image you use. The resulting image after using Generative Fill is the same size as the original – unless you’ve chosen to crop it.

      Being based in Photoshop, the Adobe tool also provides many more opportunities for users to apply creative adjustments because all the other tools provided by Photoshop are at their fingertips. In contrast, the AI image generators permit few or no adjustments – and their small output sizes further restrict what you can do with the images created.

      Essentially, images created with the aid of Generative Fill are still photographs by virtue of their origin. Those produced with AI image generators are AI-generated pictures.

      Potential issues
Adobe updated Photoshop (Beta) while we were trying out the Generative Fill tool, but we were unable to find out what changed over the three-week period of our tests. More updating will certainly take place as the tool moves towards full integration into the software. As this happens, the results will become increasingly ‘realistic’, which means it will be important for photographers to disclose the extent to which Generative Fill was used in the creation of each image they produce.

Back in 2019, Adobe joined forces with Twitter and The New York Times to launch the Content Authenticity Initiative (CAI), aiming to ‘fight misinformation and add a layer of verifiable trust to all types of digital content, starting with photo and video, through provenance and attribution solutions’. The initiative has since published guidelines for a secure end-to-end system for digital content provenance, with full details available on its website.

Implementing content authentication will embed secure capture metadata in all images created with generative AI processing, along with history data recording any alterations to the content. Anyone who wants to use an image will be able to view this information through a dedicated Verify site.
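As a rough illustration of what viewable provenance data could look like in practice, the sketch below uses Python to dump an image’s embedded metadata with the ExifTool utility and flag anything that looks provenance-related. It assumes ExifTool is installed, and the keyword matching is our own heuristic rather than an official Content Credentials check; the file name is a placeholder.

# Rough sketch: list an image's embedded metadata and flag entries that look
# provenance-related. Assumes the ExifTool utility is installed; the keyword
# matching is a heuristic, not an official Content Credentials check.
import json
import subprocess

def provenance_hints(path: str) -> dict:
    # 'exiftool -j' prints every readable tag in the file as JSON.
    out = subprocess.run(
        ["exiftool", "-j", path], capture_output=True, text=True, check=True
    )
    tags = json.loads(out.stdout)[0]
    keywords = ("credential", "provenance", "history", "jumbf", "c2pa")
    return {key: value for key, value in tags.items()
            if any(word in key.lower() for word in keywords)}

if __name__ == "__main__":
    hints = provenance_hints("edited_image.jpg")
    print(hints or "No provenance-related tags found.")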

      It is hoped content transparency becomes a universal standard across the internet, thereby making authentication easier for everybody.  The CAI is working toward a future where informative, public and tamper-evident data is attached to all content — no matter where it goes.

      Questions have also been asked about the ethics of generative AI with respect to copyright. Who owns the copyright in an image that was produced on software ‘trained’ on other people’s work?

      Most of the current AI image generators started out as ‘research’ projects and were trained on image data that was ‘scraped’ from websites, usually without permission. Since these images were likely to have automatic copyright protection in most parts of the world, the question arises: should the creators receive a fee when their images are used?

This moral dilemma is yet to be settled. The fair use (US) and fair dealing (UK) provisions in copyright law specifically classify using someone else’s work with the intent of competing with them commercially as not fair use/dealing.

However, it would be near impossible to track down all the people whose images were used in any particular generated picture. Furthermore, if software developers are required to restrict their training sources to images in the public domain, that will certainly limit their options – and the capabilities of their software. But paying individual image creators for enough images to train the software properly would be prohibitively expensive.

Adobe gets around these issues by training its AI generator on content the company has a right to use, such as Adobe Stock images and public domain images. Stock photographers who have contributed to the AI training library should also be paid when their work is used, although Adobe has yet to say what the payment structure will be or how contributors will be notified.