State of the Market 2024
This piece originally appeared in the AdNews State of the Market Report 2024, and is kindly contributed by StudioSpace agency Nightjar.
Firstly, let’s just clear the air: generative AI will not be replacing any of the team at Nightjar in the next 6 months. We’re a digital product company, and yes, our team is experimenting, using generative AI to perform perhaps 10% of their tasks. But people are multifaceted and perform dozens of tasks every single day, tasks that require just as many different skill sets, so no single role is at risk. If we can use generative AI to improve our efficiency on routine tasks - helping our engineers with repetitive functions and boilerplate code, summarising lengthy documents, writing up actions from meetings, and spitting out 20 bad brand propositions so we steer clear of the clichés - then this is a good thing.
We’ve been cautious about our embrace of AI - the problem we see is that too often the question has been “What work could generative AI do?”, and we get presented with hundreds of things generative AI can (sometimes impressively) do. But that doesn’t mean it’s solving an actual problem. If we start with the question “What work needs to be done?”, we get a very different answer. There are many things it could do that we don’t actually need doing.
Generative AI is currently more convincing than it is correct. True, people are not always correct or truthful either, but as humans we have ways to discern how much we do or don’t trust someone. That disappears with a machine. Big Tech owning these models also worries us from an ethical standpoint (Sam Altman stealing Scarlett Johansson’s voice, anyone?!).
Google’s new AI search feature, AI Overviews (currently rolled out only in the States), has been giving such sage advice as adding glue to pizza sauce to stop cheese from sliding off, and telling people that geologists recommend eating one small rock per day. This is further eroding trust in Google’s flagship product (and main revenue driver), search. People are wise to the fact that Google results already favour ad spend over quality of content, and this further undermining of trust feels like a real threat to the company. We feel Google should go back to getting us to the information, not being the provider of said information. We’re more excited to see how AI is utilised when products are considered from the ground up, rather than squeezed into existing ones. There’s some exciting work and experimentation happening on products that are genuinely trying to improve people’s lives. The trying (and failing) is where good things will come from.
The next 6 months are hard to predict, but beyond that horizon we see a looming (although avoidable) threat of people losing their ability to think critically, and of younger people brought up on generative AI perhaps never developing this ability at all. This is hugely sad. People are reading less, and fewer people are reading - there’s been a sharp decline in the past decade. And there’s now evidence that our brains are shrinking: the skulls of modern humans are 13% smaller than those of our early Homo sapiens forebears. Sure, this decrease started around 17,000 years ago at the end of the last Ice Age, likely as a way to help the body cool down more efficiently in hotter temperatures. But the rise of complex societies, when we stopped being hunter-gatherers and settled into civilisations, surely played a role. We no longer needed to hold as much information in our short-term memory as our forager ancestors did, and the burden of knowledge, and of tasks, could be distributed across more people. We’ve created tools and technologies that allow us to offload our thinking onto them - storing information in computers and using machines to calculate things for us. All of which could mean that our brains are being used less for intelligence and brainpower. (Note that both these theories are contested, but we feel they’ve got legs!)
We’re seeing a rise in outsourcing to overseas teams, as well as clients taking work in-house. Generative AI is definitely a threat in this domain - it can take over menial tasks, and it lifts the baseline quality of output so that agencies of all sizes are now on a more even playing field. But the difference is in the craft: something created by human connection and emotion, something no AI can fake.
The past twelve months have been challenging, but we’re optimistic about the next six, and the next twelve. As budgets have tightened, we’ve been working with our clients on different partnership models. We want to make sure that what we’re creating scales with their business and their customers - so instead of hours, we focus on agile-based outputs, outcomes and impact, and on being compensated accordingly. We’re also putting our money where our mouth is and guaranteeing results, with compensation tied to that performance, for clients who have the appetite. It means we’re in it for the long term, working collaboratively, and not just chasing a quick win.
We also hope that in the next 6 months the stupid Meta AI search bar disappears from WhatsApp!