Two weeks until the RICS AI standard goes live. What has your firm done?
Surveyors UK – Technology & AI
Many surveyors I speak to are using AI in some form. ChatGPT for drafting. Copilot for emails. AI-powered valuation tools. Measurement software with AI built in.
Some know they are using it. Some do not. And the number is growing every week.
Almost none of them have governance around it. No written record of what they are using. No risk assessment. No client disclosure. No documented process for checking whether the AI output they just relied on was actually reliable.
What many firm directors are missing is that even if your firm has not formally adopted AI, your staff might already be using it. A surveyor drafting a report with ChatGPT on their phone. An admin using Copilot to tidy up client letters. Someone pasting property details into an AI tool to speed up research.
If it is happening in your firm and you do not know about it, you have no governance around it.
And in two weeks, you are expected to.
The RICS professional standard – Responsible Use of Artificial Intelligence in Surveying Practice – takes effect on 9 March 2026. That is two weeks from today.
The risks I am already seeing
I talk to surveyors every day about AI. Here is what worries me.
Client data going into tools that are not secure. Surveyors pasting client names, property addresses, and case details into free AI tools that use inputs to train their models. That data is no longer confidential. It is no longer in your control. And you almost certainly did not get written consent from the client to upload it.
No paper trail on AI-assisted outputs. Report sections drafted by AI being included in final reports without any documented review of whether the output was accurate. If that report is challenged, there is no evidence showing a qualified surveyor applied their professional judgement before relying on it.
Blind trust in software vendors. Most surveyors trust their software providers. You buy a tool, it works, you get on with the job. But when that tool has AI embedded in it, the questions change:
- Where is your data going?
- Is it being used to train the model?
- What data was used to train the AI in the first place?
- Is there known bias in the outputs?
- What happens if the AI gets it wrong – where does the liability sit?
Most surveyors have never asked their software provider any of these questions. Most would not know where to start. The RICS standard requires written due diligence covering exactly these areas before you rely on a third-party AI system. The trust you have always placed in your software vendors now needs documenting.
AI summaries treated as fact. Surveyors relying on AI to summarise legal documents, planning conditions, or technical standards without checking against the source. AI summaries can be confident and wrong. If you rely on one that misses a critical clause, the liability sits with you, not the tool.
Terms of engagement that say nothing about AI. If you are using AI in the delivery of a service and you have not told the client, that is a transparency problem. The standard requires written disclosure in advance. Most firms have not updated their terms.
None of these are edge cases. They are happening right now, across the profession, every day.
This is bigger than compliance
I want to be clear about something. This is not just about ticking boxes for a standard. If that is how you approach it, you will end up with documents in a folder that nobody reads and nothing changes.
This is about understanding what is actually happening in your firm:
- Where is AI being used?
- Who is using it and who approved it?
- What data is going in and where is it going?
- Are AI outputs being checked before they reach a client?
- Could you explain your AI use to a regulator, an insurer, or a client who challenged an outcome?
- Do you know where the liability sits if an AI-assisted output turns out to be wrong?
Most firms cannot answer these questions today. Not because they are negligent. Because nobody has asked them yet.
“The firms that will thrive are not the ones that avoid AI. They are the ones that understand it, govern it, and use it with their eyes open.”
That is why I developed the GUARD framework – Governance, Use, Accountability, Risk, Documentation. It is not a checklist. It is a way of thinking through AI in your practice that surfaces the gaps you did not know you had. Most surveyors who work through it are surprised by what comes up. Not because they have been reckless, but because nobody has framed it this way before.
The standard gives you a reason to start. The framework gives you a way to do it properly.
Why this standard matters even if you are not RICS-regulated
The RICS standard on Responsible Use of Artificial Intelligence in Surveying Practice is the first mandatory AI governance framework for the surveying profession. But the principles it sets out are not unique to RICS. Data governance, risk management, client transparency, professional oversight of AI outputs – these are the baseline expectations that every professional body, every insurer, and every regulator will move toward. Getting ahead of it now is easier than catching up later.
What I am building
I have spent three years talking about AI and surveying, and I use AI every day – Gemini, ChatGPT, NotebookLM, Claude, and hundreds of other tools I have tried along the way. I know what works, what does not, and where the risks are because I have lived it, not just studied it.
Early in my career, I spent over a decade in governance, risk management, compliance, and information security – building risk registers, running risk workshops, setting up compliance frameworks, and making regulatory standards work in practice for regulated firms. That was part of my world before Surveyors UK.
So when I read the RICS standard and saw that it requires firms to build risk registers, governance policies, supplier due diligence processes, and client disclosure frameworks, I recognised the work. This is what I have spent years doing. Just in a different sector.
I am now building a suite of practical AI governance tools through Surveyors UK. Not theory. Not lectures. Tools you can actually use.
The first one is ready now.
The AI Governance Prompt Pack is a free set of five ready-to-use prompts designed to help your firm start preparing. It covers identification of AI use, responsibility and oversight, risk assessment, supplier due diligence, and staff training. Copy, paste, adapt to your practice.
It is a starting point, not a full compliance guide. But it will show you where you stand.
Download it free here: https://ai-survey-guard-workshop.lovable.app/
What is coming next
The first GUARD AI Workshop is on Tuesday 31 March. A focused, 60-minute session where we work through the framework together. It is not a lecture about what the standard says. It is a practical session designed to help you see where your firm’s gaps are and start closing them.
You will leave with a compliance starter toolkit and a CPD certificate. One hour that will save you days of reading, interpreting, and building compliance documents from scratch.
This is the first in a series. More sessions will follow as the standard beds in and the profession’s understanding of AI governance develops.
More details and booking here: https://ai-survey-guard-workshop.lovable.app/
The 9 March deadline is two weeks away. The question is whether you are ready for it.
Until next week,
Nina
Nina Young
Surveyors UK


