Surveyors keep having the wrong argument about AI
Surveyors UK
- Technology & AI
The dominant story in surveying right now is that AI makes mistakes, AI hallucinates, AI cannot be trusted, and therefore AI cannot replace what a competent surveyor does.
Every word of that is true.
It is also easier to criticise a tool that gets things wrong than to admit humans get things wrong too.
The variation the profession does not talk about
Two surveyors can inspect the same property and produce reports of strikingly different quality. Defects identified, defects missed, defects mis-described. Cost analysis that ranges from forensic to back-of-envelope. Lease reviews that catch the onerous clause and lease reviews that do not. Client communication that informs and client communication that confuses.
This is not a slur on the profession. It is what happens in any human-delivered service. Surveyors get tired. Surveyors have bad eyesight. Surveyors get distracted. Surveyors get hungry. Surveyors carry the same biases, blind spots, and off-days as everyone else.
The honest comparison is not AI versus a perfect surveyor. The honest comparison is AI-assisted work versus the actual range of what humans currently produce.
What this argument is really about
The resistance to AI in the profession is not really about accuracy. It is about identity.
It is the fear that if a tool can absorb part of what a surveyor does, then a surveyor is somehow less. That fear is human. It is also a misreading of where professional value actually lives.
A surveyor’s value has never been in producing words on a page. It has been in the judgement that produces those words. The experience that knows which questions to ask. The accountability that signs the report. The relationship that explains the findings to a client who needs to act on them. None of that is replaceable by a model.
What is replaceable is the slower, more mechanical layer underneath. And handing that layer to a tool that does not get tired, does not get distracted, and can audit an output overnight is not a diminishment of the profession. It is a redirection of professional attention to the things that actually matter.
Once that distinction is clear, the rest of the argument changes shape.
What the medical evidence actually shows
The most cited piece of research used to argue against AI in professional work is the Lancet colonoscopy study from 2025, which found doctors who routinely used AI were 20% worse at detecting abnormalities when the AI was taken away. That finding is real and it deserves to be taken seriously.
The medical literature also tells the other half of the story.
The MASAI trial published its full results in The Lancet in January this year. Over 100,000 Swedish women. AI-supported mammography detected 29% more breast cancers than experienced radiologists working in standard double-reading. Interval cancers, the ones that appear between screening rounds, dropped by 12%. Aggressive cancer diagnoses dropped by 27%.
In plain English, an AI working alongside a radiologist found cancers that experienced human readers had missed. Consistently. At scale. In the largest randomised controlled trial ever conducted on AI in cancer screening.
That is the evidence the profession needs to sit with.
Where humans stay essential
Human judgement is not going anywhere. It is the most valuable thing the profession produces and there is nothing on the horizon that changes that.
A surveyor reading a building reads more than the building. They read the client’s situation, the lender’s appetite, the transaction’s pressures, the pattern of similar properties they have walked through over years. They notice the thing that is not in the brief because nobody thought to mention it. They smell damp before they can see it. They climb into the loft when the photograph would have been easier. They sit across from a client who is about to spend half a million pounds and translate what the report means for them, specifically, in their circumstances.
That is not at risk. It is not replicable by a model. It is the work, and it stays with the surveyor.
What is changing is everything that happens around it.
The risk that is not being measured properly
The hallucination conversation is the loudest in the profession and the least useful. Models get things wrong. That matters. The error rate is also dropping every six months and the surveyors using these tools will get better at catching what slips through.
The bigger risk sits somewhere else.
It is the surveyor who runs an inspection report through a model, glances at the output, and signs it off. It is the firm that has not defined what proper review of AI output actually looks like in practice. It is the assumption that because the text reads well, the underlying analysis is sound.
That is a people problem, not an AI problem. And the answer is not to refuse the tools. The answer is governance, audit, and a documented review process for every AI-assisted output that leaves the firm. The same disciplines a firm applies to anything else that carries professional risk.
This is what my GUARD Framework is built for. Not to slow firms down. To give them a structured way to use AI well, evidence that they are doing so, and the audit trail that proves it.
Without that, AI is not the threat. The absence of governance is the threat.
Where RICS members stand on the standard
The RICS Professional Standard on the Responsible Use of AI in Surveying Practice came into effect on 9 March. Several weeks in, most firm owners are asking the same two questions. Does it apply to my firm? And if it does, what do I need to do about it?
I am running a free 60-minute briefing on Wednesday 20 May, 1 to 2pm BST, that answers both.
It is not a walkthrough of all 30 mandatory requirements. It is a plain English briefing on where firms currently sit, the three or four gaps that almost every firm has, and what minimum compliance actually looks like. One hour of informal CPD. Recording included for 14 days afterwards.
The patterns from over 1,000 completed RICS AI Readiness Assessments (you can take the assessment here) are clear. 92% of firms have no documented process for verifying AI outputs. Most do not have a written AI policy. Most have done no vendor due diligence on the AI features already embedded in their everyday software.
If any of that sounds like your firm, the briefing is built for you. Register here.
The surveyors who are getting this right
The best surveyors in the profession right now are not the ones holding the line against AI. They are the ones using it to do better work. Faster reporting without losing rigour. Tighter cost analysis. Lease reviews that catch what the eye misses on the third read.
They have separated their professional identity from the mundane and repetitive tasks AI can absorb. They have located their value in judgement, accountability, experience, and client relationships. They are not afraid because they know what they are for.
Landing
The romantic notion that humans are the standard and AI is the threat to it has been challenged before. Every time it has been challenged, the people who clung to it lost ground to the people who adapted. AI is not temporary. It is not going to fade out of professional services.
It is becoming the working environment. Adoption is very uneven, the pace varies firm to firm, and that is fine.
AI is going to make mistakes. It must be used responsibly.
The question is whether a surveyor's professional identity is robust enough to sit alongside a tool that does not get tired and, used responsibly, can genuinely help them do things faster and better.
Nina
Nina Young
Founder & CEO, Surveyors UK