Origami Workflow
Using generative AI to bring a concept to life.
The voracious appetite for content
To whip up the visuals for this report, I kicked things off with some classic brainstorming. Origami seemed spot-on, representing entities sprouting from a single source: animals popping out of paper. Using Adobe Firefly, I refined the concept and then scaled it into a series of more than 56 images. What used to be a weeks-long grind was cut down to just days, producing a volume of assets that simply wasn't feasible before.
Workflow
It might be passé to expose your workflow in generative AI circles, but I am an open book. I created an initial image and concept that was approved by the IBM team. After some back and forth, I added that image to all of the prompts to maintain consistency, what's called a reference image. I then started each piece from a similar prompt structure, which I would edit to fit the individual needs of the image. Once I had guided the prompt to a satisfactory result, I would download the image from Firefly, bring it into Photoshop, and tweak it as needed. I would also upscale it using the new generative AI scaling feature in Photoshop and/or Content-Aware Fill on the background to reach a high-quality resolution. Once the image was complete, I would match it to the appropriate IBM color palette.