

18/06/23

Can AI be our friend or foe in Design?

Mik Shaw, Design Director
[Image: circle with a blue and orange spiral]

Unless you've been hiding in a cave in Granada for the last 500 days, you can't have failed to notice the massive upsurge in using Artificial Intelligence for just about everything. Be that ChatGPT, Stable Diffusion, Midjourney, Deepbrain, Synthesia or one of a thousand other emerging tools destined to change how we work forever.

As a creative industry, should we be scared? We expect users, customers and our audiences to embrace new technologies and engage with brands in new and magical ways. But our methods and outputs have typically always been in our control - people collaborating, designing and creating, pushing pixels and controlling the execution with hands-on tools. Now the fear is that technology is seemingly taking us away from being the ones who create and control the execution. Is it time to dust off your Great Enoch (the massive hammer the technology-fearing Luddites wielded two centuries ago) and go and smash some servers rather than Spinning Jennies? Or should we embrace AI as just another input, another tool to make our creative process smoother and faster?

[Image: a man cheering behind a computer in a room full of people in historical-themed clothes]

Up until a few months ago I was firmly in favour of the Great Enoch, but then I realised I could be an illustrator, I could make screen prints, I could write a script and time it to 20 seconds. Now I'm hooked.

/Imagine…

It started with a quick dabble in Midjourney, the text-to-image service, very late in 2022 on version 4. I joined the free version, fired up Discord and straight away threw in some prompts and made some cute gerbils, a dragon made of strawberries, Harry Potter reimagined as a Pixar character. Meh - it was rudderless; I couldn't really steer the good ship Midjourney. I was out, bored.

[Image: three images in one - a rat with red eyes, a boy looking like Harry Potter, and a dragon on a cloud]

Five months went by and I kept hearing people chatting about it. Why? I'd been there, seen what it could do, and it wasn't for me. I'm a designer! If we needed something specific, I'd use my 30 years of amassed learnings and build it in 2D or 3D, in whatever reality. Let's design it, not write some prompts - right?

Then we got a brief, and it was to create works of art: modern twists using classics from a bygone era, Old Masters and Cubism brought to life. I went to YouTube and learned some basics, bought a subscription, set up my own Discord server, and suddenly it all made sense. The dizzying stream of other people's thoughts and images was gone; it was just my images, and with some prompt tweaking it would make pretty much what I asked it to make. Midjourney was going to be ace.

[Image: three women]

I started picking up pace. I watched some more YouTube and joined ChatGPT, and I taught it how to write prompts and specify cameras, lenses, lighting, contrast, colour grading and depth of focus. I was a prompt GOD. Well, ChatGPT was; I was just holding the tiller, making images that would have taken me months to make in 3D apps, or weeks to find on stock imagery websites. Oh, websites - yeah, I made those too, just as a test for a pitch; you can make anything. I found another server on Discord - DSNR - that writes amazing prompts, long-winded, rambling epics that you can really see in your head, and you just feed them into the AI and out pops a masterpiece at 1024x1024.

[Image: redhead woman with colourful balloons in the background]
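If you'd rather script that hand-off than keep copy-pasting between chat windows, a minimal sketch in Python might look like the below - assuming the openai package and an API key, and with a purely illustrative system prompt and model name rather than the exact setup I used.

```python
# Minimal sketch: asking a chat model to write a detailed image prompt.
# Assumptions: pip install openai, OPENAI_API_KEY set in the environment,
# and any chat-capable model name - the system prompt here is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

system = (
    "You write prompts for a text-to-image model. Always specify camera, "
    "lens, lighting, contrast, colour grading and depth of field."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": system},
        {"role": "user", "content": "A portrait of a redhead woman with colourful balloons behind her."},
    ],
)

print(response.choices[0].message.content)  # paste the result into /imagine
```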

But what can you do with a 1K image? 

The output from Midjourney at 1024x1024 is not useful for producing many things - it's too small. But then I discovered AI upscalers. My preferred tool is Upscayl, a free, open-source AI tool; get a copy from GitHub. Feed it your 1K image and out pops a 4096x4096 one. Amazing quality, better than the totally faked original, and print-ready. I made stickers and off they went to my favourite online sticker printer, soon to adorn the case of my MacBook Pro.
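Upscayl is a point-and-click desktop app, but it sits on top of the Real-ESRGAN family of models, so the same step can be scripted when you need to batch it. Here's a rough Python sketch - assuming the realesrgan, basicsr and opencv-python packages and a locally downloaded RealESRGAN_x4plus.pth checkpoint; treat the exact arguments as assumptions and check the Real-ESRGAN repo for current usage.

```python
# Rough sketch: a scripted 4x upscale with Real-ESRGAN (the model family
# Upscayl is built on). Assumes: pip install realesrgan basicsr opencv-python
# and a RealESRGAN_x4plus.pth checkpoint downloaded next to this script.
import cv2
from basicsr.archs.rrdbnet_arch import RRDBNet
from realesrgan import RealESRGANer

# Network definition matching the x4plus checkpoint
model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64,
                num_block=23, num_grow_ch=32, scale=4)

upsampler = RealESRGANer(
    scale=4,
    model_path="RealESRGAN_x4plus.pth",
    model=model,
    tile=0,       # set a tile size (e.g. 256) if you run out of memory
    half=False,   # True on a CUDA GPU for a speed boost
)

img = cv2.imread("midjourney_1024.png", cv2.IMREAD_UNCHANGED)
output, _ = upsampler.enhance(img, outscale=4)  # 1024x1024 -> 4096x4096
cv2.imwrite("midjourney_4096.png", output)
```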

What a wonderful workflow - from nothing to something in my hands, through inspiration, AI realisation, automation and streamlining.

[Image: logo of a monkey with yellow glasses and a green beret]

How about video?

I fed Midjourney a photo of myself with long (for me) hair from Lockdown 1 and had it made into an illustration in the style of Studio Ghibli, in coloured pencil. Job done. ChatGPT wrote me a script about a blonde-haired porridge thief, and this all got spoon-fed into d-id.com. And literally 30 seconds later I had a 90-second video of a pencil-sketched version of me reading a bedtime story about some very forgiving bears.

[Image: colourful image of a man with trees and flowers growing from his hair]
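The website is the easy route, but D-ID also offers a REST API if you want to automate that hand-off. The sketch below is a rough guess at the shape of a "create a talk" call using Python's requests - the endpoint, field names, auth scheme and the placeholder key and image URL are all assumptions, so check their API docs before relying on any of it.

```python
# Very rough sketch of the same step via D-ID's API instead of the website.
# The endpoint shape, field names and auth scheme are assumptions - check
# the official API docs. The key and image URL are hypothetical placeholders.
import requests

API_KEY = "YOUR_D_ID_API_KEY"
PORTRAIT_URL = "https://example.com/mik-ghibli-pencil.png"

resp = requests.post(
    "https://api.d-id.com/talks",
    headers={"Authorization": f"Basic {API_KEY}"},
    json={
        "source_url": PORTRAIT_URL,  # the pencil-sketch portrait
        "script": {"type": "text", "input": "Once upon a time, a blonde-haired porridge thief..."},
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # returns a talk id you can poll for the finished video URL
```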

What else?

We've tried DALL·E 2, Adobe Firefly, BlueWillow, DreamlikeArt and a host of other text-to-image services; they all have their own quirks.

We're using online AI services to motion-capture ourselves and feed that into animation rigs - it's a top-secret brief with mind-blowing results, literally saving days and days of timeline tweaking and the painful building of forward and reverse IK rigs for characters. We're using Unreal Engine with an iPhone to make characters talk in real time, with skin that looks like skin and fur that looks like fur. It's the stuff of dreams - good dreams, not nightmares.

[Image: a person standing in water near the shore]
[Image: Harry Potter]

So for me and my colleagues, these are tools to be embraced and cherished, like the first time you realised that After Effects was just Photoshop with a timeline, or when you wrote your first bit of ActionScript in Flash and things happened. Oh, and you can write code using ChatGPT: VEX for Houdini, expressions for After Effects, CSS and HTML. It might not always be the best bit of script, but nine times out of ten it will work, and you can tweak it afterwards.

Stop rambling, Mik.

ABOUT THE AUTHOR

MIK SHAW, DESIGN DIRECTOR

Mik joined Bernadette over 15 years ago, back when we were VCCP Digital. Since then Mik has been at the forefront of growing our digital design capabilities - specialising in all things streamlining, automation, 3D, motion, creative code and everything in between.

[Image: Mik Shaw]

This is part of what we're calling iTest - an ongoing content series by real people in Bernadette, documenting experiments and discoveries in the world of AI - focusing on how humans and machines can collaborate, co-create and co-exist in harmony.

Bernadette is proud to be cohorts with faith - the AI creative agency from VCCP. We have faith that AI, used responsibly, will be an unparalleled accelerator of human creativity.
