They said the same thing about writing

Surveyors UK

  • Technology & AI

2,400 years ago, Socrates warned that a dangerous new technology would destroy human expertise.

He wasn’t talking about AI. He was talking about writing.

In Plato’s Phaedrus, Socrates argued that writing would produce forgetfulness. People would stop exercising their memories. They’d rely on external marks instead of their own minds. Worst of all, they’d appear wise while actually knowing nothing.

Sound familiar?

The irony is perfect. We only know Socrates said this because Plato wrote it down.

The pattern repeats. Every single time.

1492. A monk called Johannes Trithemius predicted the printing press would never last. In his essay “In Praise of Scribes”, he argued that handwriting was morally superior to mechanical printing. The monks who worked as scribes were terrified it would put them out of work. It did.

1825. The Stockton and Darlington Railway opened in England. People genuinely believed that travelling at 30 miles per hour would kill you. That your body would melt.

1865. Self-propelled vehicles arrived on British roads. Britain passed a law requiring someone to walk 60 yards ahead of every vehicle carrying a red flag to warn horse riders. In the US, proposed laws would have forced drivers to cover their cars with blankets when horses approached. If the horse still refused to pass, the driver had to dismantle the car and hide it in the bushes.

1877. The New York Times attacked the telephone as an invasion of privacy. Some people feared they’d be electrocuted using one. Others believed it could be used to communicate with the dead.

1889. The Spectator warned that the telegraph was creating a world of information delivered in snippets. Swap “telegraph” for “Twitter” and the complaint is identical.

1920s. Radio was going to ruin children. By 1941, a paediatrician had compared children’s radio habits to chronic alcoholism. Marconi himself asked whether he’d done the world good or just added a menace.

1927. Television would destroy radio, conversation, reading, family life, and American culture. All at once.

1982. The head of the Motion Picture Association of America compared the VCR to a serial killer, telling the US government it would cause Hollywood to “bleed and bleed and hemorrhage.” Hollywood went on to make billions from home video.

1980s. “Home taping is killing music.” Cassette recorders would destroy the music industry. They didn’t.

1995. Newsweek published a piece arguing that no online database would replace newspapers, no computer network would change how government works, and the idea of buying books over the internet was laughable. Seventeen years later, Newsweek stopped printing and went online-only.

That same year, Robert Metcalfe predicted the internet would catastrophically collapse. Metcalfe invented Ethernet. Two years later he blended a copy of his own column with water and ate it at a web conference.

A Dell survey found more than half of Americans were technophobic. A quarter mourned the typewriter.

And a computer science professor noted that senior faculty were humiliated by email. They were used to being looked up to as models of intelligence. Suddenly a 17-year-old knew more than they did.

So why does AI feel different?

Because it is different. Every previous technology automated a process. Something you do.

The printing press automated copying. The loom automated weaving. The calculator automated arithmetic. The internet automated access to information.

AI automates reasoning and intelligence. The thinking that used to take years of experience to develop.

That’s why the fear cuts deeper this time. Previous technologies threatened jobs. AI threatens professional identity, knowledge and expertise. The thing you spent years learning. The expertise you built your career on. The judgement that separates a qualified professional from someone who just looked it up.

For surveyors, this lands hard. Your value may lie in capturing accurate data that others can’t, in knowing where to look and what to capture, or in interpreting what the data means and staking your professional judgement on it. Anyone can buy a drone. Not anyone can deliver survey-grade data from one. Anyone can ask AI a question about a building. Not anyone can be held accountable for the answer.

Socrates was right about the wrong thing

Socrates warned that writing would create people who merely appear wise. He was right about the risk. He was wrong about the outcome.

Writing didn’t destroy wisdom. It democratised access to knowledge. The people who adapted to it gained an enormous advantage. The monks who refused to accept the printing press lost their livelihoods. The ones who learned to use it shaped the modern world.

The same pattern played out with every technology on this list. The people who feared it were often the ones whose existing role was most threatened. And the people who adapted earliest gained the biggest advantage.

What this means for surveying

AI will not replace professional judgement. But it will change what professional judgement looks like in practice.

Where this is heading

A two-tier market is coming. Commoditised work gets squeezed hard. Basic homebuyer reports, routine valuations, anything where the knowledge component can be replicated by AI-assisted competitors. Some of those competitors won’t be surveyors. They’ll be tech platforms staffed by people who’ve never held a tape measure, offering a cheaper version of what you do. Sadly, this is already happening.

Complex, high-value, contentious work stays human-led. It might even command higher fees. Because when the stakes are high, someone still needs to be accountable. Someone still needs professional indemnity insurance and the judgement to back it up.

The dangerous place to be is the middle. Not specialist enough to command premium. Not efficient enough to compete on price.

Right now we’re in a messy middle. Surveyors are using AI but nobody wants to admit it. The fear is simple. If clients know you’re using AI, they’ll expect to pay less. So people stay quiet.

The RICS AI Professional Standard requires disclosure to clients. If you’re a RICS member or regulated firm, you can’t hide it. You have to be open about AI use in your professional services. That tension between wanting the efficiency gains and fearing the pricing conversation is real and it’s happening right now across the profession.

This isn’t unique to surveying. Accountants, lawyers, architects. Every profession using AI is facing the same disclosure pressure. And as adoption of these standards increases, more clients across every sector will become aware that AI is part of professional services. The quiet advantage disappears.

The window to get ahead is shorter than people think. We’re still in the early stages of AI generally. But the pace is exceptional. We’re talking years, not decades, for fundamental changes to how professional services operate and compete.

And the speed of this shift is the part most surveyors underestimate. The gap between AI being a novelty and AI being embedded in competitor workflows is shorter than anyone expects. It won’t arrive with a warning. It won’t send a letter. One day a client will tell you they got the same insight faster and cheaper somewhere else. That’s how you’ll know it happened.

So what do you actually do about it?

This is why I built the GUARD Framework.

GUARD Framework

The RICS AI Professional Standard is now in effect. Most firms treat compliance as a box to tick. That misses the point entirely.

The real value of working through the standard is what it forces you to confront. Where is AI already being used in your firm that you don’t know about? What are your employees doing with client data? What happens when a client asks how AI was involved in their report and nobody has an answer? Where are you exposed right now?

GUARD is a practical AI governance framework mapped directly to the RICS AI Professional Standard. Five pillars. Governance, Understanding, Accountability, Risk, and Disclosure. But this isn’t about producing a policy document and filing it away.

The firms that do the work properly will have uncomfortable conversations. With employees. With IT. With partners. They’ll discover tools being used without oversight. They’ll find gaps in their processes they didn’t know existed. They’ll realise they need answers to questions nobody has thought to ask yet.

That’s the point. The framework doesn’t just make you compliant. It makes you prepared. Prepared for the client complaint. Prepared for the regulatory question. Prepared for the moment a competitor cuts corners with AI and the whole profession comes under scrutiny.

Protection and positioning. That’s what good governance gives you.

Find out more about the GUARD Framework here

Nobody likes the work involved in complying with standards, but that work will head off the risks of AI down the line and soften their impact.

Thank you for reading 🙂

Nina

Nina Young
