AI – a growing risk in surveying
Surveyors UK
Everyone is in a different place with AI.
Some surveyors are already using it daily. Some are experimenting. Some are deliberately avoiding it. Some are sick and tired of hearing about it. All of that is understandable.
The question is no longer just “should we use AI?” Increasingly, it is “how would we explain it, evidence it, or defend it if we had to?”
That shift is already happening across professional services, including surveying.
You do not need to be using AI today for this to affect you.
Even if you personally avoid it:
- your employees may already be using it
- your clients are certainly using it
- your reports can be questioned, reinterpreted, or challenged using AI tools
- insurers are starting to ask questions regardless
The risk is not AI itself. It is unmanaged, undocumented, or poorly understood use.
Across surveying, insurance, and legal commentary, AI is now being discussed in terms of governance, accountability, and evidence. Not simply tools.
That is an important shift.
Where the real AI risks sit for surveyors
Data and confidentiality
Surveyors handle sensitive information every day: addresses, plans, reports, emails, data sets, and commercially sensitive details.
Risk arises when that information is copied into AI tools with unclear data handling terms, reused outside its original professional context, or shared without firm-level controls. Even if you do not use AI yourself, your data may still end up in it through others.
Accuracy and confident output
AI produces clean, authoritative language. That is not the same as correct judgement.
It can subtly alter certainty, emphasis, or assumptions in a way that is easy to miss under time pressure. Professional responsibility does not disappear because wording sounds polished.
Bias and model risk
AI systems rely on historic data. In valuation and cost-related work, that can reinforce outdated assumptions, underweight abnormal risks, or miss local nuance entirely. These risks are often invisible unless actively assessed.
Inconsistency inside firms
Different people using AI in different ways can produce different outcomes for similar situations. This kind of drift often goes unnoticed until a complaint or claim highlights it.
Evidence and auditability
When something goes wrong, firms must show how conclusions were reached.
If AI played a role, reassurance is not enough. Records matter.
Client and public challenge has changed
This is one of the most significant shifts.
Clients and the public now use AI to:
- question assumptions line by line
- reinterpret reports
- generate complaints quickly
- challenge professional judgement with confidence
Expertise still matters, but it is no longer unchallenged by default. Professionals no longer hold an automatic advantage in knowledge.
That increases dispute risk, even where advice is sound.
Shadow AI inside firms
Many firms that believe they are “not using AI” already are.
Staff are using AI for emails, summaries, drafting, and admin. Often informally, often without guidance. This shadow use creates risk because it is undocumented, inconsistent, and uncontrolled. From an insurance perspective, that matters more than which tool is being used.
Why firms and insurers are paying attention
Insurers do not insure software. They insure process, judgement, and control.
Risk increases when use is informal, inconsistent, or cannot be traced. That is why AI has become an insurance and governance conversation, not a future tech debate.
Where the AI standard fits
The RICS AI standard, effective 9 March, does not require everyone to use AI.
It requires firms to assess whether AI use is material.
If AI affects advice, interpretation, or outputs in a way that could impact a client, that risk needs to be identified and addressed.
That is why the standard points towards governance, documentation, oversight, and, where appropriate, a risk register. This is about showing risk has been considered, not ticking boxes.
Why a risk register matters
A risk register demonstrates that:
- risks were identified
- likelihood and impact were assessed
- controls were put in place
- responsibility was clear
For insurers, this shows control. For firms, it creates consistency. For surveyors, it protects professional judgement.
One simple action
Before March, speak to your insurer or broker.
Ask them: “What is your position on AI use in surveying, and what evidence would you expect from us?”
Even if you are not using AI today, the answer matters.
Close
Everyone is starting from a different place.
But risk does not wait until we feel ready.
AI has already changed the landscape around surveying. The standard simply gives us a framework to deal with it.
Nina Young
Surveyors UK