Can AI be our friend or foe in Design?
Unless you've been hiding in a cave in Granada for the last 500 days, you can't have failed to notice the massive upsurge in using artificial intelligence for just about everything. Be that ChatGPT, Stable Diffusion, Midjourney, Deepbrain, Synthesia or one of a thousand other emerging tools destined to change how we work forever.
As a creative industry, should we be scared? We expect users, customers and our audiences to embrace new technologies and engage with brands in new and magical ways. But our methods and outputs have typically always been in our control: people collaborating, designing and creating, pushing pixels and staying in charge of the execution with hands-on tools. Now the fear is that technology is seemingly taking us away from being the ones who create and control the output. Is it time to dust off your Great Enoch (the massive hammer wielded by the technology-fearing Luddites two centuries ago) and go and smash some servers rather than Spinning Jennies? Or should we embrace AI as just another input or tool to make our creative process smoother and faster?
Up until a few months ago I was firmly in favour of the Great Enoch, but then I realised I could be an illustrator, I could make screen prints, I could write a script and time it to 20 seconds. Now I'm hooked.
It started with a quick dabble in Midjourney, the text-to-image service, very late in 2022 on version 4. I joined the free version, fired up Discord and straight away threw in some prompts: cute gerbils, a dragon made of strawberries, Harry Potter reimagined as a Pixar character. Meh. It was rudderless; I couldn't really steer the good ship Midjourney. I was out, bored.
Five months went by and I kept hearing people chatting about it. Why? I'd been there, seen what it could do; it wasn't for me. I'm a designer! If we needed something specific, I'd use my 30 years of amassed learnings and build it in 2D or 3D in whatever reality. Let's design it, not write some prompts. Right?
Then we got a brief, and it was to create works of art: modern twists on classics from a bygone era, Old Masters and Cubism brought to life. I went to YouTube and learned some basics, bought a subscription, set up my own Discord server, and suddenly it all made sense. The dizzying stream of other people's thoughts and images was gone; it was just my images, and with some prompt tweaking it would make pretty much whatever I asked it to make. Midjourney was ace.
I started picking up pace. I watched some more YouTube and joined ChatGPT, teaching it to write prompts that specify cameras, lenses, lighting, contrast, colour grading and depth of focus. I was a prompt GOD. Well, ChatGPT was; I was just holding the tiller, making images it would have taken me months to make in 3D apps, weeks to find on stock imagery websites. Oh, websites, yeah, I made those too, just as a test for a pitch - you can make anything. I found another server on Discord - DSNR - which writes amazing prompts, long-winded, rambling epics that you can really see in your head; you just feed them into the AI and out pops a masterpiece at 1024 x 1024.
But what can you do with a 1K image?
The 1024 x 1024 output from Midjourney is too small for most production work. But then I discovered AI upscalers. My preferred tool is Upscayl, a free, open-source AI tool - grab a copy from GitHub. Feed it your 1K image and out pops a 4096 x 4096 version. Amazing quality, better than the totally faked original, and print-ready. I made stickers and off they went to my favourite online sticker printer, soon to adorn the case of my MacBook Pro.
What a wonderful workflow: nothing to something in my hands, through inspiration, AI realisation, automation and streamlining.
How about video?
I fed Midjourney a photo of myself with long (for me) hair from Lockdown 1 and turned it into a Studio Ghibli-style illustration in coloured pencil. Job done. ChatGPT wrote me a script about a blonde-haired porridge thief, and it all got spoon-fed into d-id.com. Literally 30 seconds later I had a 90-second video of a pencil-sketched version of me reading a bedtime story about some very forgiving bears.
We've tried DALL·E 2, Adobe Firefly, BlueWillow, DreamlikeArt and a host of other text-to-image services; they all have their own quirks.
We're using online AI services to motion capture ourselves and feed that into animation rigs - it's a top-secret brief with mind-blowing results, literally saving days and days of timeline tweaking and the painful building of forward and inverse kinematics rigs for characters. We're using Unreal Engine with an iPhone to make characters talk in real time, with skin that looks like skin and fur that looks like fur. It's the stuff of dreams - good dreams, not nightmares.
So for me and my colleagues, these are tools to be embraced and cherished, like the first time you realised that After Effects was just Photoshop with a timeline, or when you wrote your first bit of ActionScript in Flash and things happened. Oh, and you can write code using ChatGPT: VEX for Houdini, expressions for After Effects, CSS and HTML. It might not always be the best bit of script, but nine times out of ten it will work, and you can tweak it afterwards.
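To give a flavour of the "tweak it afterwards" part, here's the sort of small utility ChatGPT will happily write on request. This is a hypothetical example, not from any of our briefs: a JavaScript helper that turns a frame count into a timecode string, the kind of thing you'd otherwise knock up by hand on a motion project.

```javascript
// Hypothetical ChatGPT-style output: convert a frame count to "HH:MM:SS:FF"
function framesToTimecode(totalFrames, fps = 25) {
  const frames = totalFrames % fps;               // leftover frames
  const totalSeconds = Math.floor(totalFrames / fps);
  const seconds = totalSeconds % 60;
  const minutes = Math.floor(totalSeconds / 60) % 60;
  const hours = Math.floor(totalSeconds / 3600);
  const pad = (n) => String(n).padStart(2, "0");  // zero-pad each field
  return [hours, minutes, seconds, frames].map(pad).join(":");
}

console.log(framesToTimecode(90 * 25)); // 90 seconds at 25 fps -> "00:01:30:00"
```

Nine times out of ten something like this works first go; the tenth time you rename a function or change a default, which is still quicker than writing it cold.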
Stop Rambling Mik.
ABOUT THE AUTHOR
MIK SHAW, DESIGN DIRECTOR
Mik joined Bernadette over 15 years ago, back when we were VCCP Digital. Since then Mik has been at the forefront of growing our digital design capabilities - specialising in all things streamlining, automation, 3D, motion, creative code and everything in between.
This is part of what we're calling iTest - an ongoing content series by real people in Bernadette, documenting experiments and discoveries in the world of AI - focusing on how humans and machines can collaborate, co-create and co-exist in harmony.
Bernadette is proud cohorts with the AI creative agency from VCCP. We have faith that AI, used responsibly, will be an unparalleled accelerator of human creativity.