Your firm's software tools just got smarter. Did anyone tell you?
Surveyors UK
- Technology & AI
If your vendor updated their software with AI, you may already be subject to requirements you haven’t thought about.
Over the past few years, the tools you already use have been changing. Software vendors across the profession, along with the mainstream platforms you use every day (Microsoft Copilot, for example), are embedding AI into their products. Report writing assistants. Image recognition. Predictive text. Data extraction. Automated comparable searches. Natural language processing is built into the workflows you run every day.
Some vendors have announced it. Many have just rolled it out in an update.
But if your software now uses AI to process, suggest, filter, or generate anything that touches your professional output, you are using AI in your practice. Whether you chose to or not.
That has implications.
You did not adopt AI. Your vendor did it for you.
This is different from a surveyor deciding to open ChatGPT and paste in a report. That is a conscious choice. This is AI arriving inside tools you already pay for, switched on by default, often without a separate notification or a clear explanation of the implications that would make you stop and think about what has changed.
None of that is inherently bad. A lot of it is genuinely useful. But every one of those features changes the nature of how technology is operating inside your practice. And that matters because if AI is influencing your professional output, even in small ways, you need to know about it, document it, and govern it.
The questions most firms have not asked their vendors
If you use any software in your practice that has been updated in the past year, there are questions you should be asking.
Things like: where does AI sit in this product? What data is being processed and where? Is it being used to train the model? What happens if the AI gets something wrong? Can the vendor map their product against the professional requirements you need to meet?
These are not trick questions. They are the basics of responsible technology use.
In tomorrow’s GUARD workshop, every attendee gets a vendor questionnaire as a starter for 10. Data governance, AI transparency, liability, compliance mapping. All covered.
One vendor I know is already doing this well
I want to highlight Valos as an example of a vendor getting this right. Valos is a valuation platform that uses AI for data gathering, document processing, and report automation. They are upfront about the fact that they use large language models from providers including OpenAI, Google, and Anthropic.
But what sets them apart is what they have done proactively. They have produced a detailed compliance document that maps their entire platform against the RICS Professional Standard on Responsible Use of AI, section by section. It covers data governance, system governance, risk management, procurement due diligence, and explainability. They hold ISO 27001 certification. All client data is stored in the UK. Client data is never used to train AI models without explicit written consent. Every AI output requires mandatory human verification before it can be used.
That is what good looks like. A vendor that understands the regulatory environment their clients operate in, and gives them what they need to demonstrate compliance.
Every vendor should be able to answer the basic questions clearly and in writing. If they cannot, that tells you something. If you are a RICS-regulated firm, you need to ensure that your vendor is aware of the RICS standard and its compliance requirements.
This is not just about the RICS standard
Yes, if you are RICS regulated, the Professional Standard on Responsible Use of AI requires you to understand and document the AI tools in your practice, including those embedded in your software. That is mandatory.
But even if you are not RICS regulated, think about it from a PI insurance perspective. Your insurer will want to know how AI is being used in your practice. If a claim arises and it turns out AI was involved in producing or processing part of the work, and you did not know about it or document it, that is a problem.
Think about it from a data protection perspective. If your software is sending client data through AI systems, you need to understand whether that processing is compliant with your obligations under GDPR. Your client’s property address, their personal details, their valuation, their survey findings. Where is all of that going?
And think about it from a professional liability perspective. If AI generated something you relied on and it was wrong, the liability still sits with you. It does not matter that you did not know the AI was there.
What to do this week
Make a list of every piece of software you use in your practice. Every tool. Every platform. Every browser extension.
Then ask one simple question about each: does this use AI?
If the answer is yes, or if the answer is “I don’t know,” that is where you start.
First GUARD Workshop: tomorrow, 1-2pm
Tomorrow I am running the GUARD Framework workshop. A practical session walking through how to build AI governance for your firm, step by step, mapped against the RICS standard. There are a handful of places left. You can attend live or get the recording afterwards.
I am also now running future GUARD sessions for individuals and firms as well as broader intro to AI sessions for teams who want to understand where AI sits in their practice before getting into governance. If either of those is something your firm needs, get in touch.
To keep up to date with workshops, the new AI in surveying hub and more, join the weekly newsletter. No spam.
I am looking forward to tomorrow. It is going to be a good session.
Nina
Nina Young
Surveyors UK