I'm a (digital) librarian from Italy, and I'm involved with Medialibrary, a digital library that hosts a collection of open/free educational content.
You can explore it here: http://openmlol.it/
We work a lot with schools, and some music teachers have expressed interest in your content, especially classical choral music. What we do is simply harvest the metadata and link to your site, explicitly recognizing attribution and the license.
A few years ago I spent weeks downloading metadata via the API from IMSLP, so I was wondering whether there is a dump of the database somewhere, or some CSV/XML/JSON export: I don't need the most up-to-date version, but of course it would save me a lot of work.
The problem with the API is that you always have to parse the wiki page afterwards, and in my experience that is always complicated and never consistent (I've been a Wikipedian for more than a decade...).
Thanks in advance
(and keep up the awesome work!)
aubreymcfato wrote:The problem with the API is always to parse the wikipage afterwards
This is true when metadata are embedded in the text of the page, so the only way to extract them is to parse the contents. However, in all wikis based on the MediaWiki software, metadata are largely represented by categories: given a page (for example, the page of a certain work), one may get through the API the list of categories that page belongs to, and those categories can be regarded as the metadata pertaining to the page in question. I think that using the API would be better than using a dump, because a dump becomes obsolete over time, while the API can be queried at any time and always provides the most up-to-date information.
Am I missing anything?
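To illustrate the category approach described above, here is a minimal sketch using only the Python standard library. It queries a MediaWiki `api.php` endpoint with `action=query&prop=categories` to list the categories of a single page. The endpoint URL and the page title are assumptions for illustration; adjust them to the actual wiki you are harvesting.

```python
import json
import urllib.parse
import urllib.request

# Hypothetical endpoint: replace with the real api.php location of the wiki.
API_ENDPOINT = "https://www.cpdl.org/wiki/api.php"

def build_category_query(title: str, endpoint: str = API_ENDPOINT) -> str:
    """Build a MediaWiki API URL that asks for the categories of one page."""
    params = {
        "action": "query",
        "prop": "categories",
        "titles": title,
        "cllimit": "max",   # ask for all categories, not just the default first batch
        "format": "json",
    }
    return endpoint + "?" + urllib.parse.urlencode(params)

def fetch_categories(title: str) -> list[str]:
    """Fetch the category names for a page (performs a network request)."""
    with urllib.request.urlopen(build_category_query(title)) as resp:
        data = json.load(resp)
    categories = []
    for page in data["query"]["pages"].values():
        for cat in page.get("categories", []):
            categories.append(cat["title"])
    return categories
```

Because the categories come back as structured JSON, no wikitext parsing is needed; for bulk harvesting you would page through results with the API's `continue` mechanism rather than fetching one title at a time.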