New Release of Datamap

This is a small-scale project for making maps with markers. Why do this when there's Google Maps etc.? In two words: scale and autonomy. This will run happily on a Raspberry Pi and is OpenStreetMap based. Since it's RESTy, it's pretty easy to extend too.

The next couple of iterations will deal with multilingual templating and federation experiments using pub/sub, probably RabbitMQ.

It’s here: https://sourceforge.net/projects/datamap/ and, yes, it’s SourceForge, since Microsoft now owns GitHub.


You always do such useful things!

I’ve been working on a similar-ish thing for mapping underground void risks (which cause settlement and subsidence in roads and buildings). I’m mapping the risks with a combination of Python, Pelican and Leaflet, but my main interest is in the predictive creation of risk data-sets, so I will try your code against my samples.

Or, if you want, I’ll pass you a sample of my data to consider as a potential use-case. My desired mapping outputs are:

  1. Red polylines for reported linear voids (old culverts, sewers, etc.)
  2. Green dashed polylines for suspected linear voids (as above, but less evidenced)
  3. Red polycircles for suspected large voids (usually sinkhole reports)
  4. Green polycircles for high void-risk areas (usually sites of old priories, large ruins, etc.)
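The four output styles above map quite naturally onto a GeoJSON FeatureCollection carrying Leaflet-style properties, which a Leaflet `L.geoJSON` layer can read via a `style` callback. A minimal Python sketch; the category names, property keys, and coordinates are illustrative assumptions, not Lee’s actual schema:

```python
import json

# Hypothetical style table for the four void-risk categories above.
# "dashArray" is a standard Leaflet Path option; values are made up.
STYLES = {
    "reported_linear":  {"color": "red"},                    # 1. red polylines
    "suspected_linear": {"color": "green",
                         "dashArray": "5, 10"},              # 2. green dashed polylines
    "suspected_large":  {"color": "red"},                    # 3. red circles
    "high_risk_area":   {"color": "green"},                  # 4. green circles
}

def linear_void(coords, category):
    """Build a GeoJSON LineString feature for a reported/suspected culvert."""
    return {
        "type": "Feature",
        "geometry": {"type": "LineString", "coordinates": coords},
        "properties": {"category": category, **STYLES[category]},
    }

def void_area(lon, lat, radius_m, category):
    """Build a GeoJSON Point feature; on the Leaflet side a pointToLayer
    callback could render it as a circle of radius_m metres."""
    return {
        "type": "Feature",
        "geometry": {"type": "Point", "coordinates": [lon, lat]},
        "properties": {"category": category, "radius": radius_m,
                       **STYLES[category]},
    }

collection = {
    "type": "FeatureCollection",
    "features": [
        linear_void([[-0.025, 52.047], [-0.020, 52.049]], "reported_linear"),
        void_area(-0.0235, 52.0485, 30, "suspected_large"),
    ],
}
print(json.dumps(collection, indent=2))
```

Keeping the styling in the feature properties means the same JSON file can be served as-is by a REST endpoint and restyled client-side without touching the data.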

Lee

Hi Lee, Thanks. I think what you have may be more sophisticated, or at least differently targeted, than mine, so I’m interested in whatever you publish.

I’m interested in the predictive aspect too. The history of this project was an FOI request to the council to get fly-tipping data, which we then clustered. I was also interested in doing a bit of ML (I hate the ‘AI’ thing) with an LSTM on the data, but that required, and still requires, better cooperation from the council.

Long story short, let’s keep talking. Best, Hugh

An example output for Royston, well known for one of its voids in particular, is at:
https://12voltfarm.com/royston-void-risks.html

The predictive part is really interesting but needs cooperation from various authorities/data-holders to become automatable. Local councils, water companies, English Heritage, and Ordnance Survey hold critical data-sets but are not set up to release data for public parsing.

One could process their data for certain structure types: old ecclesiastical buildings of any sort, castles, and mounds in any setting; high-cost old urban buildings such as hotels, neo-classical structures, and ashlar-faced or ashlar-cornered buildings; plus a few others. These are the older structures whose owners could afford to pay for substructures: reredorters, culverts, drains, mill-races, cellars, icehouses, etc. Derive a decimal GPS coordinate for each structure and your machine learning is 80% there. Add in Highways Department subsidence reports and any other known data and you are 95% there.
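That data-assembly step can be sketched as a small join: structure records carrying a decimal GPS point and a per-type risk weight, matched against subsidence reports to produce labelled training rows. Everything here (field names, risk weights, the crude coordinate-rounding match) is a hypothetical illustration of the idea, not anyone's actual pipeline:

```python
# Hypothetical risk weights per structure type, loosely following the
# categories in the post; real values would come from domain expertise.
TYPE_RISK = {
    "priory": 0.9, "castle": 0.7, "mound": 0.5,
    "hotel": 0.6, "neoclassical": 0.6,
}

def to_training_rows(structures, subsidence_reports):
    """Join structure records (decimal lat/lon) against Highways Dept
    subsidence reports to produce labelled rows for a learner."""
    # Round to 3 decimal places (~100 m) as a crude spatial match.
    reported = {(round(r["lat"], 3), round(r["lon"], 3))
                for r in subsidence_reports}
    rows = []
    for s in structures:
        key = (round(s["lat"], 3), round(s["lon"], 3))
        rows.append({
            "lat": s["lat"], "lon": s["lon"],
            "type_risk": TYPE_RISK.get(s["type"], 0.1),  # 0.1 = unknown type
            "near_subsidence": int(key in reported),
        })
    return rows

# Illustrative records only; coordinates are invented.
structures = [
    {"lat": 52.047, "lon": -0.024, "type": "priory"},
    {"lat": 52.060, "lon": -0.010, "type": "hotel"},
]
reports = [{"lat": 52.0471, "lon": -0.0239}]
rows = to_training_rows(structures, reports)
```

A real version would want a proper spatial index rather than coordinate rounding, but the shape of the output, one row per structure with a type-derived prior and nearby-evidence flags, is the 80%-then-95% progression described above.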

It’s not sophisticated coding. If it was, I couldn’t do it :)

There is also a wealth of data outside closed modern databases. The challenge is that its formats range from newspaper reports, through 90-year-old former building workers with incomprehensible local accents and dimming memories, to pieces of vellum in controlled-humidity storage in the country’s museums.

Lee