Moving the Unlock blog

April 30, 2010

We have proper blog hosting set up at EDINA so we’re moving the Unlock service blog to a new home:

http://unlock.blogs.edina.ac.uk/

The past contents will stay here and also be duplicated at the new blog. Thanks.


Unlock Places API — version 2.2

April 21, 2010

The Unlock Places API was recently upgraded to include Ordnance Survey’s Open Data. This feature-rich data from Code-Point Open, Boundary-Line and the 1:50,000 gazetteer includes placenames and locations (points, boxes and shapes), and is now open for all to use! You can get started with the API straight away.

We’ve also added new functionality to the service, including an HTML view for features, more feature attributes and the ability to request results in different coordinate systems, as well as the usual speed improvements and bug fixes.
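As a rough illustration of what a search against the API looks like in code (the endpoint path, parameter names and response structure below are assumptions for the sake of the sketch – the example queries page has the definitive forms), here is a minimal Python call asking for results as JSON in a chosen coordinate system:

import json
import urllib.parse
import urllib.request

# Assumed endpoint and parameter names, for illustration only --
# check the Unlock Places example queries page for the real ones.
BASE = "http://unlock.edina.ac.uk/ws/nameSearch"
params = urllib.parse.urlencode({
    "name": "Edinburgh",
    "format": "json",      # assumed: JSON/GeoJSON output rather than XML
    "srs": "EPSG:27700",   # assumed: the new coordinate-system parameter
})

with urllib.request.urlopen(BASE + "?" + params) as resp:
    results = json.load(resp)

# The response structure here is also an assumption for the sketch.
for feature in results.get("features", []):
    props = feature.get("properties", {})
    print(props.get("name"), feature.get("geometry", {}).get("type"))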

The new data and features are available from Tuesday, 20th April 2010. Please visit the example queries page to try out some of the queries.

We welcome any feedback on the new features – and if there’s anything you’d like to see in future versions of Unlock, please let us know. Alternatively, why not just get in touch to let us know how you’re using the service, we’d love to hear from you!

Full details of the changes are listed below the fold.



Linking Placename Authorities

April 9, 2010


I’m putting together a proposal for JISC call 02/10, based on a suggestion from Paul Ell at CDDA in Belfast. Why post it here? I think there’s value in working on these things in a more public way, and I’d like to know who else would find the work useful.

Summary

Generating a gazetteer of historic UK placenames, linked to documents and authority files in Linked Data form. This means both working with existing placename authority files and generating new authority files by extracting geographic names from text documents, using the Edinburgh Geoparser to “georesolve” placenames and link them to widely-used geographic entities on the Linked Data web.

Background

GeoDigRef was a JISC project to extract references to people and places from several very large digitised collections, to make them easier to search. The Edinburgh Geoparser was adapted to extract place references from large collections.

One roadblock in this and other projects has been the lack of an open historic placename gazetteer for the UK.

Placenames in authority files, and placenames text-mined from documents, can be turned into geographic links that connect items in collections with each other and with the Linked Data web; a historic gazetteer for the UK can be built as a byproduct.

Proposal

Firstly, working with placename authority files from existing collections, starting with the digitised volumes of the English Place Name Survey as a basis.

Where place names are found, they can be linked to the corresponding Linked Data entity in geonames.org, the motherlode of place name links on the Linked Data web, using the georesolver component of the Edinburgh Geoparser.
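As a rough sketch of the kind of lookup involved (this is not the georesolver’s actual implementation, just the general shape of it), the GeoNames search web service returns candidate entities whose ids map directly onto Linked Data URIs; the "demo" account name below is a placeholder for a registered GeoNames username:

import json
import urllib.parse
import urllib.request

def candidate_uris(placename, username="demo"):
    # Ask the GeoNames search service for candidate entities matching the name.
    # "demo" is a placeholder; real use needs a registered GeoNames account.
    url = ("http://api.geonames.org/searchJSON?" +
           urllib.parse.urlencode({"q": placename, "maxRows": 5,
                                   "username": username}))
    with urllib.request.urlopen(url) as resp:
        hits = json.load(resp).get("geonames", [])
    # Each GeoNames id corresponds to a dereferenceable Linked Data URI.
    return [("http://sws.geonames.org/%d/" % hit["geonameId"],
             hit["name"], hit.get("countryCode"))
            for hit in hits]

print(candidate_uris("Dunfermline"))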

Secondly, using the geoparser to extract placename references from documents and using those placenames to seed an authority file, which can then be resolved in the same way.

An open source web-based tool will help users link places to one another, remove false positives found by the geoparser, and publish the results as RDF using an open data license.
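To give an idea of the sort of output I have in mind (the namespace, the identifier, the historic spelling and the choice of skos:closeMatch rather than owl:sameAs are all placeholders, not a settled schema), a linked record might be published roughly like this, here built with rdflib:

from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDFS, SKOS

g = Graph()
EPNS = Namespace("http://example.org/epns/")   # placeholder namespace

# A hypothetical entry from a placename authority file...
place = EPNS["edinburgh-historic-form"]
g.add((place, RDFS.label, Literal("Edwinesburch")))   # illustrative historic spelling
# ...linked to the contemporary GeoNames entity suggested by the georesolver.
g.add((place, SKOS.closeMatch, URIRef("http://sws.geonames.org/2650225/")))

print(g.serialize(format="turtle"))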

Historic names will be imported back into the Unlock place search service.

Context

This will leave behind a toolset for others to use, as well as creating new reference data.

Building on work done at the Open Knowledge Foundation to convert MARC/MADS bibliographic resources to RDF and add geographic links.

Making re-use of existing digitised resources from CDDA to help make them discoverable and provide a path in for researchers.

Geonames.org has some historic coverage, but it is hit and miss (e.g. “London” has “Londinium” as an alternate name, but at the contemporary location). The new OS OpenData sources are all contemporary.

A placename found in a text may not be found in any gazetteer. The more places correctly located, the higher the likelihood that other places mentioned in a document will also be correctly located. More historic coverage means better georeferencing for more archival collections.


Work in progress with OS Open Data

April 2, 2010

The April 1st release of many Ordnance Survey datasets as open data is great news for us at Unlock. As hoped for, Boundary-Line (administrative boundaries), the 50K gazetteer of placenames and a modified version of Code-Point (postal locations) are now open data.

Boundary-Line of Edinburgh shown in Google Earth. Contains Ordnance Survey data © Crown copyright and database right 2010

We’ll be putting these datasets into the open access part of Unlock Places, our place search service, and opening up Unlock Geocodes based on Code-Point Open. However, this is going to take a week or two, because we’re also adding some new features to Unlock’s search and results.

Currently, registered academic users are able to:

  • Grab shapes and bounding boxes in KML or GeoJSON – no need for GIS software, re-use in web applications
  • Search by bounding box and feature type as well as place name
  • See properties of shapes (area, perimeter, central point), useful for statistics visualisation

And soon we’ll be publishing these new features, currently in testing:

  • Relationships between places – cities, counties and regions containing found places – in the default results
  • Re-project points and shapes into different coordinate reference systems
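Re-projection itself is standard coordinate transformation; as a rough illustration of what requesting results in a different coordinate reference system means in practice (this is not Unlock’s internal code), here is a WGS84 to British National Grid conversion using the pyproj library:

from pyproj import Transformer

# Transform a longitude/latitude point (WGS84, EPSG:4326) into
# British National Grid eastings/northings (EPSG:27700).
to_bng = Transformer.from_crs("EPSG:4326", "EPSG:27700", always_xy=True)

lon, lat = -3.1883, 55.9533   # roughly central Edinburgh
easting, northing = to_bng.transform(lon, lat)
print(round(easting), round(northing))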

These new features have been added so we can finally plug the Unlock Places search into EDINA’s Digimap service.

Having Boundary-Line shapes in our open data gazetteer will mean we can return bounding boxes or polygons through Unlock Text, which extracts placenames from documents and metadata. This will help to open up new research directions for our work with the Language Technology Group at Informatics in Edinburgh.

There are some organisations we’d love to collaborate with (almost next door, the Map Library at the National Library of Scotland and the Royal Commission on Ancient and Historical Monuments of Scotland) but have been unable to, because Unlock and its predecessor GeoCrossWalk were limited by license to academic use only. I look forward to seeing all the things the OS Open Data release has now made possible.

I’m also excited to see what re-use we and others could make of the Linked Data published by Ordnance Survey Research, and what their approach will be to connecting shapes to their administrative model.

MasterMap, the highest-detail OS dataset, wasn’t included in the open release. Academic subscribers to the Digimap Ordnance Survey Collection get access to places extracted from MasterMap, and improvements to other datasets created using MasterMap, with an Unlock Places API key.


Notes from EEO talk on population modelling with GIS

March 22, 2010

David Martin spoke in the EEO seminar series last Friday. Here are my notes:

In the last decades we have become “sophisticated in our tools, but our fundamental techniques and results aren’t very different”. Census data is not the same as demographic data; however, census approaches to modelling population have become dominant – a “long-term reliance on census-based shaded area map to inform spatial decision-making”.

Importance of small area population mapping for policy – resource allocation and site location decisions, calculation of prevalence rates. “Who is present in a small area, and what characteristics do they have”. A house or flat becomes a “proxy” for a person, who is tied to the space.

This doesn’t give a clear picture of usage; specifically, it captures night-time rather than daytime activity, which has very different patterns of repetition and variation of movement.

More general problems with census-taking –

  • underenumeration
  • infrequency
  • spatially concentrated error

“We could cut the city differently and produce variations in the pattern” – research in automated generation of census zones, looking for areas with social homogeneity, size, population, based on previous samplings.

“Population distribution is not space-filling but is quasi-continuous”.

“Interest in surfaces, grids and dasymetric approaches”. Using a grid to slice and visualise population data. The grid gives us a finer-grained depiction of actual activity.

Interestingly, there has been a shift in government policy regarding census-taking. Rapid development of space and new technology cause problems – people are more mobile, with multiple bases, and concerns about data privacy are more mainstream.
The US Census Bureau has dropped the “long-form” return which used to go to one in six recipients. In France the idea of a periodic census has been dropped completely; they now conduct a “rolling census” compiled from different data sources.

“Register-based sources” – e.g. demographic data is held by health services, local government, transport providers, business associations, communications companies. It’s possible to “produce something census-like”, but richer, by correlating these sources.

Also the cross-section of other sources gives an idea of where census records are flawed and persistently inaccurate, e.g. council tax records not corresponding to where people claim they live.

Towards new representations of time-space

Temporal issues are still neglected by geodata specialists; in fact, some of them are gnarlier and trickier than spatial representation.

Space–time specific population surface modelling.

Dr Martin identified “emergent issues” affecting this practice – “Spatial units, data sources as streams, representational concepts”. His group has some software in development to document the algorithm for gridding data space – I wanted to ask whether the software, and implicitly the algorithm, would be released as open source.

A thought about gridded data: it’s straightforward to recombine (given that grid cells from different sources are the same size). Something like OGC WCS, but much simpler.
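A toy example of what that recombination looks like, assuming two sources have already been resampled onto the same cell size and extent (numpy arrays standing in for the gridded surfaces):

import numpy as np

# Two population surfaces on the same grid (same cell size and extent),
# e.g. a night-time (residential) estimate and a daytime (workplace) estimate.
night = np.array([[120,  40,   5],
                  [ 60,  10,   0],
                  [ 15,   0,   0]])
day   = np.array([[ 30, 200,  80],
                  [ 10,  90,  20],
                  [  5,  10,   5]])

# Because the cells line up, combining sources is just element-wise arithmetic.
print(day - night)   # net daytime shift per cell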


OpenSearch Geospatial in progress

March 15, 2010

One promising presentation I saw last week at the Jornadas SIG Libre was Oscar Fonts’ work in the Geographic Information Group at the Universitat Jaume I, building OpenSearch Geospatial interfaces to different services.

OpenSearch geo query of OSM

The demonstrator shown during the talk was an OpenLayers map display hooked up to various OpenSearch Geo services.

Some are “native” OpenSearch services, like the GeoCommons data deposit and mapmaking service, and the interfaces published by Terradue as part of the European GENESI-DR earth observation distributed data repository project.

The UJI demo also includes an API adapter for sensationally popular web services with geographic contents. Through the portal one can search for tweets, geotagged Flickr photos, or individual shapes from OpenStreetmap.

Oscar’s talk highlighted the problem of seeming incompatibility between the original draft of the OpenSearch Geospatial extensions, and the version making its way through the Open Geospatial Consortium’s Catalog working group as a “part document” included in the next Catalog Services for the Web specification.

The issues currently breaking backwards-compatibility between the versions are these:

  1) geo:locationString became geo:name in the OGC draft version.
  2) geo:polygon was omitted from the OGC draft version, and replaced with geo:geometry, which allows for complex geometries (including multi-polygons) to be passed through using Well Known Text.

1) looks like syntactic sugar – geo:name is less typing and reads better. geo:locationString can be deprecated but still supported.

2) geo:geometry was introduced into the spec as a result of work on the GENESI-DR project, which had a strong requirement to support multi-polygons (specifically, satellite passes over the earth that crossed the dateline and were thus made up of two polygons meeting on either side of it).

geo:polygon has a much simpler syntax, just a list of (latitude, longitude) pairs which join up to make a shape. This also restricts queries to two dimensions.
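To make the difference concrete, here is roughly how the same box would be expressed in each style; the encodings below are my reading of the drafts, so treat the details as illustrative rather than definitive:

# geo:polygon - a flat, comma-separated list of latitude,longitude pairs,
# closing back on the first point; two dimensions only.
polygon = "55.99,-3.30,55.99,-3.10,55.90,-3.10,55.90,-3.30,55.99,-3.30"

# geo:geometry - Well Known Text, which can also carry multi-polygons
# (e.g. a satellite pass split into two shapes at the dateline).
geometry = ("POLYGON((-3.30 55.99, -3.10 55.99, -3.10 55.90, "
            "-3.30 55.90, -3.30 55.99))")

print("geo:polygon  =", polygon)
print("geo:geometry =", geometry)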

This seems to be the nub of the discussion – should geo:polygon be included in the updated version – risking it being seen as clashing with or superfluous to geo:geometry, leading to end user confusion?

There is always a balance to be struck between simplicity and complexity. Oscar pointed out in his talk what I have heard in OGC Catalog WG discussions too – that as soon as a use case becomes sufficiently complex, CSW is available and likely fitter for the job. geo:geometry is already at the top end of acceptable complexity.

It’s about a year since I helped turn Andrew Turner’s original draft into an OGC consumable form. Anecdotally it seems like a lot more people are interested in seeing what can be done with OpenSearch Geo now.

The OGC version is not a fork. The wiki draft was turned into a draft OGC spec after talking with Andrew and Raj Singh about the proposed changes, partly on the OpenSearch Google Group. The geo:relation parameter was added on the basis of feedback from the GeoNetwork and GeoTools communities. There’s been a Draft 2 page, as yet unmodified, on the OpenSearch wiki since that time.

In order to build the confidence of potential adopters, these backwards-incompatibilities do need to be addressed. My personal point of view would be to update the wiki draft, deprecating locationString and including both the polygon and geometry parameters.

I was impressed by the work of Oscar and his collaborators, though I wonder whether they are going to move into search-engine-style aggregation and indexing of the results, or just use the OpenSearch interface to search fairly fast-moving sources of data in real time. I wish I’d asked this question in the session. It all offers reinforcement and inspiration for putting OpenSearch Geo interfaces on services nearby – Go-Geo!, CKAN. The NERC Data Discovery Service could benefit, as could SCRAN. We’ll get to see what happens, which I’m glad of.


Notes on Linked Data and Geodata Quality

March 15, 2010

This is a long post, covering geospatial data quality background before moving on to Linked Data about halfway through. I should probably try to break this down into smaller posts – “if I had more time, I would write less”.

Through EDINA’s involvement with the ESDIN project between national mapping and cadastral agencies (NMCAs) across Europe, I’ve picked up a bit about data quality theory (at least as it applies to geography). One of ESDIN’s goals is a common quality model for the network of cooperating NMCAs.

I’ve also been admiring Muki Haklay’s work on assessing the data quality of collaborative OpenStreetmap data using comparable national mapping agency data. His recent assessment of OSM and Google MapMaker’s Haiti streetmaps showed the benefit of analytical data quality work, helping users assess how well what they have matches the world and assisting with conflation to join different spatial databases together.

Today I was pointed at Martijn Van Exel’s presentation at WhereCamp EU on “map quality”, ending with a consideration of how to measure quality in OpenStreetmap. Are the map and the underlying data quite different things when we think about quality?

The ISO specs for data quality have their origins in industrial and military quality assurance – “acceptable lot quality” for samples from a production line. One measurement, “circular error probable”, comes from ballistics design – the circle of error was once a literal circle drawn round successive shots from an automatic weapon, indicating how wide a spread between shots, and thus how much inaccuracy in the weapon, was tolerable.

The ISO 19138 quality models apply to highly detailed data created by national mapping agencies. There’s a need for reproducible quality assessment of other kinds of data, less detailed and less complete, from both commercial and open sources.

The ISO model presents measures of “completeness” and “consistency”. For completeness, an object or an attribute of an object is either present, or not present.

Consistency is a bit more complicated than that. In the ISO model there are error elements, and error measures. The elements are different kinds of error – logical, temporal, positional and thematic. The measures describe how the errors should be reported – as a total count, as a relative rate for a given lot, as a “circular error probable”.
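As a small worked example of those measures (my own toy numbers, using the median radial error as a simple empirical stand-in for circular error probable):

import math
import statistics

# Toy sample: positional offsets in metres between surveyed control points
# and their positions in the dataset under test.
offsets = [(0.4, 0.1), (1.2, -0.8), (0.0, 0.3), (2.5, 1.9), (-0.6, 0.2)]

radial = [math.hypot(dx, dy) for dx, dy in offsets]
tolerance = 1.0   # metres; an assumed acceptance threshold for this lot

error_count = sum(1 for r in radial if r > tolerance)   # error total count
error_rate = error_count / len(radial)                  # relative rate for the lot
cep = statistics.median(radial)   # radius containing half the observations

print(error_count, error_rate, round(cep, 2))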

Geographic data quality in this formal sense can be measured, either by a full inspection of a data set or in samples from it, in several ways:

  • Comparing it to another data set, ideally one of known and high quality.
  • Comparing the contents of the dataset against rules describing what is expected.
  • Comparing samples of the dataset to the world, e.g. by intensive surveying.

The ISO specs take a data production process view of quality measurement. NMCAs apply rules and take measurements before publishing data or submitting it to cross-border efforts with neighbouring EU countries, and again after correcting the data to make sure roads join up. Practitioners definitely think in terms of spatial information as networks or graphs, not in terms of maps.

Collaborative Quality Mapping

Muki Haklay’s group used different comparison techniques – in one instance comparing variable-quality data to known high-quality data, in another comparing the relative completeness of two variable-quality data sources.

Not so much thought has gone into what the data user needs from quality information, as opposed to the data maintainer’s clearer needs. Relatively few specialised users will benefit from knowing the rate of consistency errors versus topological errors – for most people this level of detail won’t provide the confidence needed to reuse the information. The fundamental question is “how good is good enough?”, and there is a wide spectrum of answers depending on the goals of each re-user of the data.

I also see several use cases for quality information that flags up data which is interesting for research or search purposes, but not appropriate for navigation or surveying, where errors can be costly.

An example: the “alpha shapes” that were produced by Flickr based on the distribution of geo-tagged images attached to a placename in a gazetteer.

Another example: polygon data produced by bleeding-edge auto-generalisation techniques that may have good results in some areas but bizarre errors in others.

Somewhat obviously, data quality information would be very useful to a data quality improvement drive. GeoFabrik made the OpenStreetmap Inspector tool, highlighting areas where nodes are disconnected or names and feature types for shapes are missing.

Quality testing

What about quality testing? When I worked as a Perl programmer I enjoyed the test coverage and documentation coverage packages: a visual interface to show how much progress you’ve made on clearly documenting your code, and how many decisions that should be tested for integrity remain untested.

Software packages come with a set of tests – ideally these tests will have helped with the development process, as well as providing the user with examples of correct and efficient use of the code, and aiding in automatic installation of packages.

Donald Knuth promoted the idea of “literate programming”, where code fully explains what it is doing. The concept can be extended to “literate testing” of how well software is doing what is expected of it.

At the Digimap 10th Birthday event, Glen Hart from Ordnance Survey Research talked about increasing data usability for Linked Data efforts. I want to connect this with the idea of “literate data”, and think about a data-driven approach to quality.

A registry based on CKAN, like data.gov.uk, could benefit from a quality audit. How can one take a quality approach to Linked Data?

To start with, each record has a set of attributes, and to reach completeness they should all be filled in, ranging from the data licence to maintainer contact information to the resource download. Many records in CKAN.net are incomplete. Automated tests could be run on the presence or absence of properties for each package. The results could be displayed on the web, with an option to view the relative quality of the package collections belonging to groups or tags. The process would help identify areas that need focus and follow-up. It would help to plan and follow progress on turning records into downloadable data packages. Quality testing could also help reward groups that are diligent in maintaining metadata.
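A rough sketch of what such a completeness test might look like against the CKAN package API – the endpoint path and the set of required fields below are assumptions for illustration, not an agreed rule set:

import json
import urllib.request

REQUIRED = ["license", "maintainer_email", "notes", "resources"]   # assumed rules

def missing_fields(package_name):
    # Fetch the package record from the CKAN REST API (endpoint path assumed).
    url = "http://ckan.net/api/rest/package/" + package_name
    with urllib.request.urlopen(url) as resp:
        record = json.load(resp)
    # A field counts as present only if it is non-empty.
    return [field for field in REQUIRED if not record.get(field)]

missing = missing_fields("example-package")   # hypothetical package name
print("complete" if not missing else "missing: " + ", ".join(missing))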

The values of properties will have constraints, and these can be used to test for quality: links should be reachable, email contact addresses should get at least one response, locations in the dataset should be near the locations given in the metadata, time ranges should likewise match, and values that should be numbers actually are numbers.

Some datasets listed in the data.gov.uk catalogues have URLs that don’t dereference, i.e. are links that don’t work. It’s difficult to find out which packages these datasets are attached to, where to get the actual data, or how to contact the maintainers.

To see this in real data, visit the bare SPARQL endpoint at http://services.data.gov.uk/analytics/sparql and paste this query into the search box (it’s looking for everything described as a Dataset, using the scovo vocabulary for statistical data):

PREFIX scv: <http://purl.org/NET/scovo#>

SELECT DISTINCT ?p
WHERE {
?p a scv:Dataset .
}

The response shows a set of URIs which, when you try to look them up to get a full description, return a “Resource not found” error. The presence of a quality test suite would catch this kind of incompleteness early in the release schedule, and help provide metrics of how fast identified issues with incompleteness and inconsistency are being fixed.
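The dereferencing check is the kind of test that is cheap to automate; a minimal sketch, treating anything we can’t retrieve as a failure:

import urllib.error
import urllib.request

def dereferences(uri, timeout=10):
    # True if the URI answers with something retrievable, False otherwise.
    req = urllib.request.Request(uri, headers={"Accept": "application/rdf+xml"})
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except (urllib.error.URLError, ValueError):
        return False

print(dereferences("http://services.data.gov.uk/analytics/sparql"))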

The presence of more information about a resource, from a link, can be agreed on as a quality rule for Linked Data – it is one of the Four Principles after all, that one should be able to follow a link and get useful information.

With OWL schemas there is already some modelling of data objects and attributes and their relations. There are rules languages from W3C and elsewhere that could be used to automate some quality measurement – RIF and SWRL. These languages require a high level of buy-in to the standards, a rules engine, and expertise.

Data package testing can be viewed like software package testing. The rules are built up piece by piece, ideally growing as the code does. The methods used can be quite ad hoc and use different frameworks and structures, as long as the results are repeatable and the coverage is thorough.

Not everyone will have the time or patience to run quality tests on their local copy of the data before use, so we need some way to convey the results. This could be an overall score, a count of completeness errors – something like the results of a software test run:

3 items had no tests...
9 tests in 4 items.
9 passed and 0 failed.
Test passed.

For quality improvement, one needs to see the detail of what is missing. Essentially this is a picture of a data model with missing pieces. It would look a bit like the content of a SPARQL query:

?dataset a scv:Dataset .
?dataset dc:title ?title .
?dataset scv:datasetOf ?package .
etc...

After writing this I was pointed at WIQA, a Linked Data quality specification language by the group behind dbpedia and Linked GeoData, which basically implements this with a SPARQL-like syntax. I would like to know more about in-the-wild use of WIQA and integration back into annotation tools…

