The human role in digital is as important today as it has ever been. When I say human, I’m looking mostly at the end user, the person interacting with an interface of some kind.
It could be ordering food from a menu on a touchscreen, transferring money on a phone, or booking cinema tickets from a watch. But my focus today is using an AI app. The usability of these tools is generally pretty good, but that doesn’t guarantee quality. In fact, good usability is helping to create a lot of rubbish.
This is not to slam good usability – your interfaces really should be intuitive, whatever the user scenario – but using AI requires more engagement than the easy-to-use interface suggests, especially if using it for work purposes.
The AI Sandwich
There’s a principle in technology that’s been around forever: garbage in, garbage out. It’s as true for AI tools as it ever was for anything else – here, the ‘garbage in’ is the brief you feed into the chat. And there’s another concept, equally important when using AI: ‘layers of interpretation’, which you might recognise from English exams. This refers to the curation of the output from your AI chat.
This is what I was introduced to as the AI Sandwich, thanks to Liam Kingswell at an AI and Pizza event run by YATM. The idea is that there are two points of human interaction crucial to quality use: the brief (the top slice of bread) and the curation (the bottom slice), with AI doing its thing as the filling.
Getting the brief right
If the brief is vague, the output will be vague. If the brief is generic, the output will be generic. And generic AI output, handed to a client or a colleague or anyone who’s trusting you with something, isn’t really you doing the work. It’s you outsourcing it.
Getting the brief right can take a few goes. Sometimes my first conversation with AI is just about working out how best to articulate the requirement and then exploring what I need to add to bolster the input.
Curating the output
It’s the same with the output. We shouldn’t just accept the vanilla reply, we must curate it – even when vanilla looks pretty tempting. This is about owning your work and being curious about your delivery. Does the output actually reflect what I know? Is anything missing? Is anything wrong? What ideas do I have that can expand this?

AI is confident even when it shouldn’t be. It sounds authoritative even when it’s guessing. So put your own expertise back in. The knowledge that got you here, your experience, your read on the specific situation or client or person – that’s not in the model. It can’t be. It has to come from you.

Go back and forth with it. Ask questions. Push it. Redirect it. This isn’t a transaction. It’s a conversation.
Why this matters
I think about this a lot in the context of the work we do at Experience UX.
We’re in the business of understanding people. Real people, using real things, in real situations. AI can help me put together a discussion guide, structure a report, think through a problem from a different angle. That’s genuinely useful, but if I rely on AI to do these tasks alone, I lose connection with my own work and my understanding of it. More importantly, I’ll soon lose my intuitive feel and end up producing the same standard of output as everyone else. Why would clients pay for that?
That’s not just a quality issue. It’s an identity one.
The brief is yours. The curation is yours. The judgement, the experience, the instinct built up over years of working with real people – that’s yours too.
AI doesn’t know what you know. It can’t.
So yes, use it. Use it often. But stay in the conversation. Keep your hands on it. Because the moment you step back and let it run, you’re not really there anymore.
And your clients and stakeholders will notice. Maybe not straight away. But they will.
