An example output for Royston - well-known for one of its voids in particular - is at:
The predictive part is really interesting, but it needs cooperation from various authorities and data holders to become automatable. Local councils, water companies, English Heritage, and Ordnance Survey hold critical datasets but are not set up to release data for public parsing.
One could process their data for certain structure types: old ecclesiastical buildings of any sort, castles, and mounds in any setting; high-cost old urban buildings such as hotels, neo-classical structures, and ashlar-faced or ashlar-cornered buildings; plus a few others. These are the older structures whose owners could afford to pay for substructures: reredorters, culverts, drains, mill-races, cellars, icehouses, etc. Derive a decimal GPS coordinate for each structure and your machine learning is 80% of the way there. Add in Highways Department subsidence reports and any other known data and you are 95% there.
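As a rough illustration of how simple that preprocessing could be, here is a minimal sketch of turning one structure into a feature row (type flag, decimal GPS, subsidence count) ready for a classifier. Every name, type label, and coordinate here is an invented placeholder, not real survey data.

```python
from dataclasses import dataclass

# Hypothetical list of substructure-prone building types, loosely based
# on the categories described above.
SUBSTRUCTURE_PRONE = {"ecclesiastical", "castle", "mound", "hotel",
                      "neo-classical", "ashlar"}

@dataclass
class Structure:
    name: str
    kind: str                     # e.g. "ecclesiastical", "hotel"
    lat: float                    # decimal GPS latitude
    lon: float                    # decimal GPS longitude
    subsidence_reports: int = 0   # count from Highways Dept records

def to_feature_row(s: Structure) -> dict:
    """Flatten one structure into a feature row for machine learning."""
    return {
        "lat": s.lat,
        "lon": s.lon,
        "prone_type": int(s.kind in SUBSTRUCTURE_PRONE),
        "subsidence": s.subsidence_reports,
    }

# Illustrative entry only; the coordinates are made up.
row = to_feature_row(
    Structure("Example priory site", "ecclesiastical",
              52.047, -0.024, subsidence_reports=1))
print(row)
```

The point of the sketch is that each data source contributes one column, so adding the subsidence reports later is just another field on the record, not a redesign.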
It’s not sophisticated coding. If it were, I couldn’t do it.
There is also a wealth of data outside closed modern databases. The challenge is that its formats range from newspaper reports, through 90-year-old former building workers with incomprehensible local accents and dimming memories, to pieces of vellum in controlled-humidity storage in the country’s museums.