Notes from EEO talk on population modelling with GIS

March 22, 2010

David Martin spoke in the EEO seminar series last Friday. Here are my notes:

In recent decades we have become “sophisticated in our tools, but our fundamental techniques and results aren’t very different”. Census data is not the same as demographic data, yet census approaches to modelling population have become dominant – a “long-term reliance on census-based shaded area maps to inform spatial decision-making”.

The importance of small-area population mapping for policy – resource allocation and site location decisions, calculation of prevalence rates. “Who is present in a small area, and what characteristics do they have?” A house or flat becomes a “proxy” for a person, who is tied to the space.

This doesn’t give a clear picture of usage: it captures night-time residence rather than daytime activity, which has very different patterns of repetition and variation in movement.

More general problems with census-taking –

  • underenumeration
  • infrequency
  • spatially concentrated error

“We could cut the city differently and produce variations in the pattern” – research into automated generation of census zones, looking for areas based on social homogeneity, size and population, informed by previous samplings.

“Population distribution is not space-filling but is quasi-continuous”.

“Interest in surfaces, grids and dasymetric approaches” – using a grid to slice and visualise population data gives us a finer-grained depiction of actual activity.

Interestingly, there has been a shift in government policy regarding census-taking. Rapid development of space, and new technology, cause problems – people are more mobile, with multiple bases, and concerns about data privacy are more mainstream.
The US Census Bureau has dropped the “long-form” return which used to go to one in six recipients. In France the idea of a periodic census has been dropped completely; they now conduct a “rolling census” compiled from different data sources.

“Register-based sources” – e.g. demographic data is held by health services, local government, transport providers, business associations, communications companies. It’s possible to “produce something census-like”, but richer, by correlating these sources.

Cross-referencing these other sources also gives an idea of where census records are flawed and persistently inaccurate – e.g. council tax records not corresponding to where people claim to live.

Towards new representations of time-space

Temporal issues are still neglected by geodata specialists; in fact some of them are gnarlier and trickier than spatial representation.

Space–time specific population surface modelling.

Dr Martin identified “emergent issues” affecting this practice – “Spatial units, data sources as streams, representational concepts”. His group has some software in development to document the algorithm for gridding data space – I wanted to ask whether the software, and implicitly the algorithm, would be released as open source.

A thought about gridded data: it’s straightforward to recombine, given that the grid cells for different sources are the same size. Something like OGC WCS, but much simpler.
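As a rough illustration (a minimal sketch, with made-up arrays standing in for real gridded counts), recombining two grids that share an extent and cell size is just array arithmetic:

import numpy as np

# A minimal sketch, assuming two population-style grids that share the
# same extent and cell size; the random counts stand in for real data.
rows, cols = 100, 100
nighttime = np.random.poisson(5.0, size=(rows, cols))  # e.g. residential counts
daytime = np.random.poisson(5.0, size=(rows, cols))    # e.g. workplace counts

assert nighttime.shape == daytime.shape  # same grid, same cell size

change = daytime - nighttime   # cell-by-cell change between time slices
total = daytime + nighttime    # naive combination of the two sources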


OpenSearch Geospatial in progress

March 15, 2010

One promising presentation I saw last week at the Jornadas SIG Libre was Oscar Fonts’ work in the Geographic Information Group at the Universitat Jaume I, building OpenSearch Geospatial interfaces to different services.
(Screenshot: OpenSearch geo query of OSM)

The demonstrator shown during the talk was an OpenLayers map display hooked up to various OpenSearch Geo services.

Some are “native” OpenSearch services, like the GeoCommons data deposit and mapmaking service, and the interfaces published by Terradue as part of the European GENESI-DR earth observation distributed data repository project.

The UJI demo also includes API adapters for sensationally popular web services with geographic content. Through the portal one can search for tweets, geotagged Flickr photos, or individual shapes from OpenStreetmap.

Oscar’s talk highlighted the problem of seeming incompatibility between the original draft of the OpenSearch Geospatial extensions, and the version making its way through the Open Geospatial Consortium’s Catalog working group as a “part document” included in the next Catalog Services for the Web specification.

The issues currently breaking backwards-compatibility between the versions are these:

      1) geo:locationString became geo:name in the OGC draft version.
      2) geo:polygon was omitted from the OGC draft version, and replaced with geo:geometry, which allows complex geometries (including multi-polygons) to be passed through using Well Known Text.

1) looks like syntactic sugar – geo:name is less typing, and reads better. geo:locationString can be deprecated but still supported.

2) geo:geometry was introduced into the spec as a result of work on the GENESI-DR project, which had a strong requirement to support multi-polygons (specifically, satellite passes over the earth which crossed the dateline and thus were made up of two polygons meeting on either side of it).

geo:polygon has a much simpler syntax, just a list of (latitude, longitude) pairs which join up to make a shape. This also restricts queries to two dimensions.
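To make the difference concrete, here is a small sketch – the endpoint and query parameter names are hypothetical, since each service’s OpenSearch URL template maps {geo:polygon} and {geo:geometry} to its own keys – encoding the same triangle both ways:

from urllib.parse import urlencode

# Hypothetical endpoint and parameter names; a real service advertises
# its own OpenSearch URL template for {geo:polygon} / {geo:geometry}.
BASE = "http://example.org/opensearch"

# geo:polygon - a flat list of latitude,longitude pairs, closed back to the start
polygon = "55.95,-3.20,55.90,-3.10,55.95,-3.00,55.95,-3.20"

# geo:geometry - Well Known Text, with coordinates ordered "lon lat"
geometry = "POLYGON((-3.20 55.95,-3.10 55.90,-3.00 55.95,-3.20 55.95))"

print(BASE + "?" + urlencode({"q": "pharmacy", "polygon": polygon}))
print(BASE + "?" + urlencode({"q": "pharmacy", "geometry": geometry}))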

This seems to be the nub of the discussion – should geo:polygon be included in the updated version – risking it being seen as clashing with or superfluous to geo:geometry, leading to end user confusion?

There is always a balance to be struck between simplicity and complexity. Oscar pointed out in his talk what I have heard in OGC Catalog WG discussions too – that as soon as a use case becomes sufficiently complex, CSW is available and likely fitter for the job. geo:geometry is already at the top end of acceptable complexity.

It’s about a year since I helped turn Andrew Turner’s original draft into an OGC consumable form. Anecdotally it seems like a lot more people are interested in seeing what can be done with OpenSearch Geo now.

The OGC version is not a fork. The wiki draft was turned into a draft OGC spec after talking with Andrew and Raj Singh about the proposed changes, partly on the OpenSearch Google Group. The geo:relation parameter was added on the basis of feedback from the GeoNetwork and GeoTools communities. There’s been a Draft 2 page, as yet unmodified, on the OpenSearch wiki since that time.

In order to build the confidence of potential adopters, these backwards-incompatibilities do need to be addressed. My personal point of view would be to update the wiki draft, deprecating locationString and including both the polygon and geometry parameters.

I was impressed by the work of Oscar and collaborators, though I wonder whether they are going to move into aggregation and indexing of the results, search-engine-style, or just use the OpenSearch interface to search fairly fast-moving sources of data in real time. I wish I’d asked this question in the session, now. It all offers reinforcement and inspiration for putting OpenSearch Geo interfaces on services nearby – Go-Geo!, CKAN. The NERC Data Discovery Service could benefit, as could SCRAN. I’m glad we’ll get to see what happens.


Notes on Linked Data and Geodata Quality

March 15, 2010

This is a long post covering geospatial data quality background before moving on to Linked Data about halfway through. I should probably try to break it down into smaller posts – “if I had more time, I would write less”.

Through EDINA’s involvement with the ESDIN project between national mapping and cadastral agencies (NMCAs) across Europe, I’ve picked up a bit about data quality theory (at least as it applies to geography). One of ESDIN’s goals is a common quality model for the network of cooperating NMCAs.

I’ve also been admiring Muki Haklay’s work on assessing the data quality of collaborative OpenStreetmap data using comparable national mapping agency data. His recent assessment of OSM and Google MapMaker’s Haiti street maps showed the benefit of analytical data quality work, helping users assess how well what they have matches the world, and assisting with conflation to join different spatial databases together.

Today I was pointed at Martijn Van Exel’s presentation at WhereCamp EU on “map quality”, ending with a consideration of how to measure quality in OpenStreetmap. Are map and underlying data quite different when we think about quality?

The ISO specs for data quality have their origins in industrial and military quality assurance – “acceptable lot quality” for samples from a production line. One measurement, “circular error probable”, comes from ballistics design – the circle of error was once a literal circle drawn round successive shots from an automatic weapon, indicating how wide a spread between shots, and thus how much inaccuracy in the weapon, was tolerable.

The ISO 19138 quality models apply to highly detailed data created by national mapping agencies. There’s a need for reproducible quality assessment of other kinds of data, less detailed and less complete, from both commercial and open sources.

The ISO model presents measures of “completeness” and “consistency”. For completeness, an object or an attribute of an object is either present, or not present.

Consistency is a bit more complicated than that. In the ISO model there are error elements, and error measures. The elements are different kinds of error – logical, temporal, positional and thematic. The measures describe how the errors should be reported – as a total count, as a relative rate for a given lot, as a “circular error probable”.

Geographic data quality in this formal sense can be measured, either by a full inspection of a data set or in samples from it, in several ways:

  • Comparing to another data set, ideally of known and high quality (a rough sketch follows this list).
  • Comparing the contents of the dataset, using rules to describe what is expected.
  • Comparing samples of the dataset to the world, e.g. by intensive surveying.
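As a rough sketch of the first approach, with invented feature identifiers standing in for a real join between datasets, completeness can be reported as omission and commission counts:

# A minimal sketch, assuming feature IDs are comparable across sources;
# the IDs below are invented for illustration.
reference_ids = {"road:101", "road:102", "road:103", "building:7"}   # known high quality
test_ids = {"road:101", "road:103", "building:7", "building:99"}     # dataset under test

omissions = reference_ids - test_ids    # present in the reference, missing from the data
commissions = test_ids - reference_ids  # present in the data, absent from the reference

omission_rate = len(omissions) / len(reference_ids)
print(f"omission rate: {omission_rate:.0%}, commissions: {len(commissions)}")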

The ISO specs feature a data production process view of quality measurement. NMCAs apply rules and take measurements before publishing data or submitting it to cross-border efforts with neighbouring EU countries, and again after correcting the data to make sure roads join up. Practitioners definitely think in terms of spatial information as networks or graphs, not in terms of maps.

Collaborative Quality Mapping

Muki Haklay’s group used different comparison techniques – in one instance comparing variable-quality data to known high-quality data, in another comparing the relative completeness of two variable-quality data sources.

Not so much thought has gone into the data user’s needs from quality information, as opposed to the data maintainer’s clearer needs. Relatively few specialised users will benefit from knowing the rate of consistency errors vs topological errors – for most people this level of detail won’t provide the confidence needed to reuse the information. The fundamental question is “how good is good enough?” and there is a wide spectrum of answers depending on the goals of each re-user of data.

I also see several use cases for quality information flagging up data which is interesting for research or search purposes, but not appropriate for navigation or surveying purposes, where errors can be costly.

An example: the “alpha shapes” that were produced by Flickr based on the distribution of geo-tagged images attached to a placename in a gazetteer.

Another example: polygon data produced by bleeding-edge auto-generalisation techniques that may have good results in some areas but bizarre errors in others.

Somewhat obviously, data quality information would be very useful to a data quality improvement drive. GeoFabrik made the OpenStreetmap Inspector tool, highlighting areas where nodes are disconnected or names and feature types for shapes are missing.

Quality testing

What about quality testing? When I worked as a Perl programmer I enjoyed the test coverage and documentation coverage packages – a visual interface showing how much progress you’ve made on clearly documenting your code, and how many decisions that should be tested for integrity remain untested.

Software packages come with a set of tests – ideally these tests will have helped with the development process, as well as providing the user with examples of correct and efficient use of the code, and aiding in automatic installation of packages.

Donald Knuth promoted the idea of “literate programming”, where code fully explains what it is doing. For code, this concept can be extended to “literate testing” of how well software is doing what is expected of it.
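A small sketch of what “literate testing” can look like using Python’s doctest module – the function and values are invented for illustration, but the point is that the documentation and the tests are the same text:

def mean(values):
    """Return the arithmetic mean of a sequence of numbers.

    >>> mean([2, 4, 6])
    4.0
    """
    return sum(values) / len(values)

if __name__ == "__main__":
    import doctest
    doctest.testmod(verbose=True)  # prints a summary of tests run, passed and failed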

At the Digimap 10th Birthday event, Glen Hart from Ordnance Survey Research talked about increasing data usability for Linked Data efforts. I want to connect this with the idea of “literate data”, and think about a data-driven approach to quality.

A registry based on CKAN, like data.gov.uk, could benefit from a quality audit. How can one take a quality approach to Linked Data?

To start with, each record has a set of attributes, and to reach completeness they should all be filled in – ranging from the data license to maintainer contact information to a resource download. Many records in CKAN.net are incomplete. Automated tests could be run on the presence or absence of properties for each package, and the results displayed on the web, with the option to view the relative quality of the package collections belonging to groups or tags. The process would help identify areas that need focus and follow-up, help to plan and follow progress on turning records into downloadable data packages, and could reward groups that are being diligent in maintaining metadata.
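A sketch of what such a presence check might look like – the required-field list and the example package below are illustrative, not CKAN’s actual schema:

# A minimal sketch, assuming package metadata is available as JSON-style
# dicts (e.g. fetched from a catalogue API); the field names are assumptions.
REQUIRED_FIELDS = ["license", "maintainer_email", "notes", "resources"]

def completeness_errors(package):
    """Return the required fields that are missing or empty for a package."""
    return [field for field in REQUIRED_FIELDS if not package.get(field)]

packages = [
    {"name": "example-spend-data", "license": "OGL", "resources": []},
]
for pkg in packages:
    missing = completeness_errors(pkg)
    if missing:
        print(pkg["name"], "is missing:", ", ".join(missing))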

The values of properties will have constraints, and these can be used to test for quality – links should be reachable, email contact addresses should get at least one response, locations in the dataset should be near locations in the metadata, time ranges should match, and values that should be numbers actually are numbers.

Some datasets listed in the data.gov.uk catalogues have URLs that don’t dereference, i.e. are links that don’t work. It’s difficult to find out what packages these datasets are attached to, where to get the actual data or contact the maintainers.

To see this in real data, visit the bare SPARQL endpoint at http://services.data.gov.uk/analytics/sparql and paste this query into the search box (it’s looking for everything described as a Dataset, using the scovo vocabulary for statistical data):

PREFIX scv: <http://purl.org/NET/scovo#>

SELECT DISTINCT ?p
WHERE {
?p a scv:Dataset .
}

The response shows a set of URIs which, when you try to look them up to get a full description, return a “Resource not found” error. A quality test suite would catch this kind of incompleteness early in the release schedule, and help provide metrics of how fast identified issues with incompleteness and inconsistency were being fixed.
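A sketch of one such test, using a placeholder URI – in practice the list would come from the query results above, and “dereferences” here simply means an HTTP GET returns a successful status:

import urllib.request

def dereferences(uri, timeout=10):
    """Return True if an HTTP GET on the URI succeeds with a 2xx status."""
    try:
        with urllib.request.urlopen(uri, timeout=timeout) as response:
            return 200 <= response.status < 300
    except Exception:
        return False

dataset_uris = ["http://example.org/doc/dataset/abc"]  # placeholders for the query results
broken = [uri for uri in dataset_uris if not dereferences(uri)]
print(f"{len(broken)} of {len(dataset_uris)} dataset URIs do not dereference")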

The presence of more information about a resource, from a link, can be agreed on as a quality rule for Linked Data – it is one of the Four Principles after all, that one should be able to follow a link and get useful information.

With OWL schemas there is already some modelling of data objects, attributes and their relations. There are rules languages from W3C and elsewhere that could be used to automate some quality measurement – RIF and SWRL – but these languages require a high level of buy-in to the standards, a rules engine, and expertise.

Data package testing can be viewed like software package testing. The rules are built up piece by piece, ideally growing as the code does. The methods used can be quite ad hoc, using different frameworks and structures, as long as the results are repeatable and the coverage is thorough.

Not everyone will have the time or patience to run quality tests on their local copy of the data before use, so we need some way to convey the results. This could be an overall score, a count of completeness errors – something like the results of a software test run:

3 items had no tests...
9 tests in 4 items.
9 passed and 0 failed.
Test passed.

For quality improvement, one needs to see the detail of what is missing. Essentially this is a picture of a data model with missing pieces. It would look a bit like the content of a SPARQL query:

a scv:Dataset .
dc:title ?title .
scv:datasetOf ?package .
etc...

After writing this I was pointed at WIQA, a Linked Data quality specification language by the group behind dbpedia and Linked GeoData, which basically implements this with a SPARQL-like syntax. I would like to know more about in-the-wild use of WIQA and integration back into annotation tools…


“At risk” 7-8am March 23rd, 8-9am March 30th

March 9, 2010

The Unlock services will have a couple of one-hour periods at risk of downtime for network maintenance at the end of this month.

From 7-8am on Tue 23rd March, a planned router upgrade on the University of Edinburgh network may disrupt access to the Unlock service.

8-9am Tue 30th March is an “at risk” period for our connection to JANET while it undergoes resilience testing.

That’s all.


Dev8D: JISC Developer Days

March 5, 2010

The Unlock development team recently attended the Dev8D: JISC Developer Days conference at University College London. The format of the event was fairly loose, with multiple sessions in parallel and the programme created dynamically as the four days progressed. Delegates were encouraged to use their feet to seek out what interested them! The idea is simple: developers, mainly (but not exclusively) from academic organisations, come together to share ideas, work together and strengthen professional and social connections.

A series of back-to-back 15-minute ‘lightning talks’ ran throughout the conference; I delivered two, describing EDINA’s Unlock services and showing users how to get started with the Unlock Places APIs. Discussion after the talks focused on the question of open sourcing and the licensing of Unlock Places software generally – and on what future open gazetteer data sources we plan to include.

In parallel with the lightning talks, workshop sessions were held on a variety of topics such as linked data, iPhone application development, working with Arduino, and Google App Engine.

Competitions
Throughout Dev8D, several competitions or ‘bounties’ were held around different themes. In our competition, delegates had the chance to win a £200 Amazon voucher by entering a prototype application making use of the Unlock Places API. The most innovative and useful application wins!

I gave a quick announcement at the start of the week to introduce the competition and how to get started with the API, and then demonstrated a mobile client for the Unlock Places gazetteer as an example of the sort of competition entry we were looking for. This application makes use of the new HTML5 web database functionality – enabling users to download and store Unlock’s feature data offline on a mobile device. Here are some of the entries:

Marcus Ramsden from Southampton University created a plugin for EPrints, the open access repository software. Using the Unlock Text geoparser, ‘GeoPrints’ extracts locations from documents uploaded to EPrints and then provides a mechanism to browse EPrint documents using maps.

Aidan Slingsby from City University entered some beautiful work displaying point data (in this case a gazetteer of British placenames) as tag maps, density estimation surfaces and chi surfaces rather than the usual map pins! The data was based on GeoNames data accessed through the Unlock Places API.

And the winner was… Duncan Davidson from Informatics Ventures, University of Edinburgh. He used the Unlock Places APIs together with Yahoo Pipes to present data on new start-ups and projects around Scotland. By converting the local council names in the data into footprints, Unlock Places allowed the data to be mapped using KML and Google Maps, so his users could navigate around the data on maps and search it using spatial constraints.

Some other interesting items at Dev8D…

  • <sameAs>
    Hugh Glaser from the University of Southampton discussed how sameAs.org works to establish linkage between datasets by managing multiple URIs for Linked Data without an authority. Hugh demonstrated using sameAs.org to locate co-references between different data sets.
  • Mendeley
    Mendeley is a research network built around the same principle as last.fm. Jan Reichelt and Ben Dowling discussed how, by tracking, sharing and organising journal/article history, Mendeley is designed to help users discover and keep in touch with similarly minded researchers. I heard of Mendeley last year and was surprised by the large (and rapidly increasing) user base – the collective data from its users is already proving a very powerful resource.
  • Processing
    Need to do rapid visualisation of images, animations or interactions? Processing is a Java-based sketchbook/IDE which will help you visualise your data much more quickly. Ross McFarlane from the University of Liverpool gave a quick tutorial on Processing.js, a JavaScript port using <Canvas>, illustrating the power and versatility of this library.
  • Genetic Programming
    This session centred on some basic aspects of Genetic Algorithms/Evolutionary Computing and emergent properties of evolutionary systems. Delegates focused on creating virtual ants (in Python) to solve mazes and on visualising their creatures with Processing (above) – Richard Jones enabled developers to work on something a bit different!
  • Web Security
    Ben Charlton from the University of Kent delivered an excellent walk-through of the most significant and very common threats to web applications. Working from the OWASP Top 10 project, he discussed each threat with real world examples. Great stuff – important for all developers to see.
  • Replicating 3D Printer: RepRap
    Adrian Bowyer demonstrated RepRap – short for Replicating Rapid-prototyper. It’s an open source (GPL) device able to create robust 3D plastic components (including around half of its own components). Its novel capability of self-replication, with material costs of only €350, makes it accessible to small communities in the developing world as well as individuals in the developed world. His inspiring talk was well received, and this super illustration of open information’s far-reaching implications captured everyone’s imagination.

All in all, a great conference. A broad spread of topics, with the right mix of sit-and-listen to get-involved activities. Whilst Dev8D is a fairly chaotic event, it’s clear that it generates a wealth of great ideas, contacts and even new products and services for academia. See Dev8D’s Happy Stories page for a record of some of the outcomes. I’m now looking forward to seeing how some of the prototypes evolve and I’m definitely looking forward to Dev8D 2011.


The Edinburgh Geoparser and the Stormont Hansards

March 4, 2010

Stuart Dunn (of the Centre for e-Research at King’s College London) organised a stimulating workshop on the Edinburgh Geoparser. We discussed the work done extracting and mapping location references in several recently digitised archives (including the Stormont Papers, debates from the Stormont Parliament which ran in Northern Ireland from 1921 to 1972).

Paul Ell talked about the role of the Centre for Digitisation and Data Analysis in Belfast in accelerating the “digital deluge” – over the last 3 or 4 years they have seen a dramatic decrease in digitisation cost, accompanied by an increase in quality and verifiability of the results.

However, as Paul commented later in the day, research funding invested in “development of digital resources has not followed through with a step change in scholarship”. So the work by the Language Technology Group on the Edinburgh geoparser, and by other research groups such as the National Centre for Text Mining in Manchester, becomes essential to “interrogate [digital archives] in different ways”, including spatially.

“Changing an image into knowledge”, and translating an image into a machine-readable text, is only the beginning of this process.

There was mention of a Westminster-funded project to digitise and extract reference data from historic Hansards (parliamentary proceedings) – it would be a kind of “They Worked For You”. I found this prototype site, which looks inactive, and the source data from the Hansard archives – perhaps this is a new effort at exploiting the data richness in the archives.

The place search service used was GeoCrossWalk, the predecessor to Unlock Places. The Edinburgh Geoparser, written by the Language Technology Group in the School of Informatics, sits behind the Unlock Text geo-text-mining service, which uses the Places service to search for places across gazetteers.

Claire Grover spoke about LTG’s work on event extraction, making it clear that the geoparser does a subset of what LTG’s full toolset is capable of. LTG has some work in development extracting events from textual metadata associated with news imagery in the NewsFilmOnline archive.

This includes some automated parsing of relative time expressions, like “last Tuesday”, “next year”, grounding events against a timeline and connecting them with action words in the text. I’m really looking forward to seeing the results of this – mostly because “Unlock Time” will be a great name for an online service.

The big takeaway for me was the idea of searching and linking value implicit in the non-narrative parts of digitised works – indexes, footnotes, lists of participants, tables of statistics. If the OCR techniques are smart enough to (mostly) automatically drop this reference data into spreadsheets, without much more effort it can become Linked Data, pointing back to passages in the text at paragraph or sentence level.

At several points during the workshop there were pleas for more historical gazetteers of placename and location information, available for re-use outside a pure research context (such as enriching the archives of the Northern Irish assembly). Claire raised the intriguing possibility of generating names for a gazetteer, or placename authority files, automatically as a result of the geo-text-parsing process – “the authority file is in effect derived from the sources”.

At this point the idea of a gazetteer goes back beyond simply place references, to include references to people, to concepts, and to events. One could begin to call this an ontology, but for some that has a very specific technical meaning.

The closing session discussed research challenges, including the challenge of getting support for further work. On the one hand we have scholarly infrastructure, on the other scholarly applications. A breadth of disciplines can benefit from infrastructure, but they need applications; applications may be developed for small research niches, but have as yet unknown benefits for researchers looking at the same places or times in different ways.

Links:
Embedding GeoCrossWalk final report (PDF)