From Markup to Linked Data: Mapping NISO JATS v1.0 to RDF using the SPAR (Semantic Publishing and Referencing) Ontologies

Good afternoon. The short answer to how I got here is that I met Debbie at the XML Summer School in Oxford last summer. We had already been working with papers from PubMed Central, which I knew were marked up using the NLM DTD, and when I learned that she was involved in JATS, the all-singing, all-dancing new version of the NLM DTD, and she knew that I was interested in mapping that to RDF, there was a marriage of minds over beer. As a result, Silvio and I spent some weeks over the early part of the summer mapping the metadata elements of JATS to RDF, and that is what I'm here to talk to you about today. I want to give two introductory sections first: one to discuss where I think we are in scholarly communication today, which involves both publishers and scholars, and then to give you a sort of beginner's guide to RDF.
I think scholarly articles really haven't changed much since the first issue of the Philosophical Transactions of the Royal Society: we still have a title and some authors, an introduction or an abstract of sorts, and then a series of semantically separate sections (Introduction, Methods, Results, Discussion), ending with a reference list. The big change today, I guess, is that we've gone from paper to online, and the limiting factor there is that most publishers still publish facsimiles of the printed page, in other words PDFs. So we are maybe at the midpoint of this digital revolution, in an ill-defined transitional state which I liken here to the "horseless carriage" stage, in which the legacy of print is still very much with us. The point of the analogy is that we started with the horse and carriage, we are now at the transitional stage of the horseless carriage, and what we want to get to is the noisy red Ferrari, in which all the data in a paper would be actionable, all the links to datasets elsewhere would work, all the references would take you to other papers, and so on. We are in the throes of a publishing revolution, but as we have heard today already, publishers are all employing a variety of proprietary XML markup systems to annotate manuscripts, and the problem with these is that they need mapping from one to the other; we've heard people today talking about the mapping and hand-crafting they are doing to get from one to another. Wouldn't it be nice instead if we used modern information-management techniques employing global standards such as RDF and OWL, encoding the information about all scholarly communications in such a way that computers could query the metadata and integrate information from multiple resources in an automated manner? Since this process of scholarly communication is central to the practice of science, we think it is essential that publishers move to adopt these web standards and stop pussyfooting around, so that we can move forward to the Ferrari era of scholarly communication.

Now a few words for those of you who have heard about RDF but never used it and don't understand its principles, yet are very familiar with XML. It's not so very different, and the principles are very simple. You define everything you talk about, the classes and their relationships, by unique URIs, so that you can identify them on the web, and these URIs resolve to terms in publicly available structured vocabularies (ontologies), so that the meaning of the terms is unambiguous to a third party. You express relationships as subject-predicate-object triples, RDF triples, whose syntax is defined by the World Wide Web Consortium's RDF standard. For example, if I say "my article is of type journal article", that is one RDF triple, ending in a full stop. I can also say "my article has a creator with the name David Shotton", and "it has the title 'CiTO, the Citation Typing Ontology'". You can write these as little atomic statements, but you can also combine them into interconnected information networks, RDF graphs, forming what is called Linked Data, and thereby create an element of this web of knowledge that is out there, freely available, on the Semantic Web. The nice thing about this, if you do it correctly, is that when you combine RDF statements from different sources, created by people who have never met one another, the truth content of the individual statements is maintained as you bring them together.
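Those three atomic statements can be written concretely in Turtle, one standard RDF syntax. (The `ex:` URIs below are hypothetical placeholders; `fabio:` is the FaBiO namespace I'll come back to in a minute.)

```turtle
@prefix dcterms: <http://purl.org/dc/terms/> .
@prefix fabio:   <http://purl.org/spar/fabio/> .
@prefix foaf:    <http://xmlns.com/foaf/0.1/> .
@prefix ex:      <http://example.org/> .   # hypothetical namespace

# "My article is of type journal article" -- one RDF triple, ending in a full stop.
ex:my-article a fabio:JournalArticle .

# "My article has a creator with the name David Shotton."
ex:my-article dcterms:creator ex:david-shotton .
ex:david-shotton foaf:name "David Shotton" .

# "It has the title 'CiTO, the Citation Typing Ontology'."
ex:my-article dcterms:title "CiTO, the Citation Typing Ontology" .
```

Because each statement is an independent triple, a single graph can be assembled from triples published by people who have never met, which is exactly the point just made.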
So you can create a larger information network. Next, take a traditional bibliographic record for the CiTO paper, just as an example. Here are a number of tags, according to the PubMed tagging system, for the journal, the title, the publication date and so on. That can be represented in RDF by saying that the paper has the type journal article, that it has a bunch of metadata, and that it has relationships to people and to institutions, and those things in their turn have metadata about them. So we start to build up a very small RDF graph of information relationships. And that you can express in RDF syntax in a very simple way: you start with the URI for the paper; you say that it has the type fabio:JournalArticle (I'll say more about FaBiO in a minute); you give the metadata, saying it has a title, a publication date, a bibliographic citation; you say who the publisher is; and in doing so you can say that the publisher is an organization that has a particular name and a particular homepage, so people can find out more about the publisher, and similarly for the rest. This is trivially straightforward, and this is an RDF graph. You could say more things about the paper, but this is a basic bibliographic record for it, in effect.
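A sketch of that bibliographic record as a single Turtle graph (the `ex:` names and all literal values are illustrative, not taken from the actual slide):

```turtle
@prefix dcterms: <http://purl.org/dc/terms/> .
@prefix fabio:   <http://purl.org/spar/fabio/> .
@prefix foaf:    <http://xmlns.com/foaf/0.1/> .
@prefix xsd:     <http://www.w3.org/2001/XMLSchema#> .
@prefix ex:      <http://example.org/> .

ex:paper a fabio:JournalArticle ;
    dcterms:title            "An example article title" ;
    fabio:hasPublicationYear "2010"^^xsd:gYear ;
    dcterms:publisher        ex:publisher .

# The publisher, in its turn, has metadata of its own.
ex:publisher a foaf:Organization ;
    foaf:name     "Example Publishing House" ;
    foaf:homepage <http://www.example.org/> .
```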
For several years RDF and the concept of the Semantic Web were something of an academic exercise, without any big players outside academia. That has now changed. A number of big players, of which the BBC is one closer to my home than others, have now embraced RDF and are using it under the hood for their content. The BBC World Cup website used RDF and appropriate ontologies under the hood, as does BBC Music, as does BBC Natural History; in each case what the BBC did was hire clever people who knew about these technologies to write the appropriate ontologies to power the websites, which now deliver very rich content. One of the benefits of this is that these open services become global resources to which other people can link. What we're seeing here is a small part of the Linked Data world as mapped by Richard Cyganiak two or three years ago; since that time it has grown so complex that it's very difficult to see on a picture like this, as more sites have come on board. But you can see that the node I've highlighted, BBC Music, in the centre there, has a large number of incoming links from other people who go to BBC Music to collect open linked data about musicians and bands, publication dates and so on.

I want to say another few words of introduction about the SPAR (Semantic Publishing and Referencing) ontologies that Silvio Peroni and I have developed to describe the world of publishing in RDF, in its various aspects. We started with CiTO, the Citation Typing Ontology, which I've mentioned and which some of you will have heard of already. This enables you to state not just that paper A cites paper B, but more particularly why the second paper is being referenced: because you took data from that paper, for instance, or because your conclusions agree with it. We have about thirty different types of citation that you can use in RDF statements to characterize the nature of a reference. We then developed FaBiO, the FRBR-aligned Bibliographic Ontology, to allow you to describe the objects of references: the books and journal articles and so on. All these ontologies are freely available on the web. BiRO, the Bibliographic Reference Ontology, allows you to describe bibliographic records and references, and their aggregation into larger things like library catalogues and reference lists. The last two are structured according to the FRBR model, which I shouldn't need to explain to this audience: a way of thinking about publications in terms of original conceptual works, their expressions, their various manifestations (as PDF, online and so on), and individual items such as the copy I own. We have added to those formal FRBR classes further RDF properties, shown here as the coloured arrows: links between works and their manifestations and items, and links between expressions and items, which act as shortcuts through the normal FRBR chain. There are a number of other ontologies besides those two: C4O allows you to count citations and characterize their context, DoCO allows you to describe the component parts of documents, and PSO and PRO describe the roles and statuses of people and documents through the publication process.
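The CiTO part of this can be made concrete in a few lines of Turtle. The properties cito:cites, cito:usesDataFrom and cito:agreesWith are among the published CiTO terms; the `ex:` URIs are placeholders:

```turtle
@prefix cito: <http://purl.org/spar/cito/> .
@prefix ex:   <http://example.org/> .

# Paper A cites paper B, and says why:
ex:paper-a cito:cites        ex:paper-b .   # the bare fact of citation
ex:paper-a cito:usesDataFrom ex:paper-b .   # we took data from it
ex:paper-a cito:agreesWith   ex:paper-b .   # our conclusions agree with it
```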
Here is a bit of RDF; you can't read it, it's a little too small, but it contains some CiTO statements about why one paper cites another, and also some statements about the cited paper. It's fairly compact and easy to write, and it leads into a discussion of what we did with JATS. Now, I don't have to tell you what JATS is; I put this slide in for completeness, for anybody who might watch this presentation later. Last July, guided by Debbie, who helped us to understand what JATS was really trying to say, Silvio and I mapped the key metadata elements of JATS to RDF. We didn't try to do the whole thing: we haven't done anything about paragraphs and tables and the like, the content of the document. What we tried to encompass is the metadata describing a document that you might want to have in RDF to allow resource description and integration. This mapping is online, and we've used the SPAR ontologies and other well-known appropriate ontologies such as Dublin Core.
Now, JATS as you know is very large. We chose to map these five metadata elements with their component elements and attributes: <article>, <article-meta>, <journal-meta>, <contrib> and <ref-list>. In all we've made a little over 240 separate mapping statements in that mapping document. We can handle translated titles, name alternatives and alternative languages, which we discussed earlier today, and using the Collections Ontology developed at Harvard we can also encode ordered lists: for example, the order of authors in an author list, or of references in a reference list. Here is an example: the first four elements from the <ref-list> mapping table, one of the five tables in our paper. On the left you see the element or attribute name, in the middle an XML example of how it is used in a JATS document, and on the right the RDF translation. What we have here is a statement that some textual entity contains a reference list, something called the ref-list, and that the ref-list is described using the BiRO ontology. In the next statement you see that the reference list has a list item, ordered using the Collections Ontology, and then we say that that list item has as its content a particular reference. So we distinguish the container for the reference from the content of the reference: a reference list might have ten items in it, each of them with particular content, and each of the content items is typed as a bibliographic reference. We can then say, in the third box, that that reference references some other textual entity, and that the first textual entity cites the other one, two closely related things. And in the final row we say that the cited textual entity is a book chapter: it's part of a larger collection which has a particular title and which is a book. All of these are very simple statements, and I show them just to exemplify the nature of the work that we undertook.
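The pattern just described might be sketched in Turtle as follows. The biro:, cito: and fabio: terms are from the published SPAR ontologies; the exact Collections Ontology property names (co:firstItem, co:itemContent) are given from memory, so the published mapping tables should be treated as authoritative; all `ex:` URIs are placeholders.

```turtle
@prefix biro:    <http://purl.org/spar/biro/> .
@prefix cito:    <http://purl.org/spar/cito/> .
@prefix co:      <http://purl.org/co/> .
@prefix fabio:   <http://purl.org/spar/fabio/> .
@prefix frbr:    <http://purl.org/vocab/frbr/core#> .
@prefix dcterms: <http://purl.org/dc/terms/> .
@prefix ex:      <http://example.org/> .

# The citing textual entity contains a reference list (the container) ...
ex:article frbr:part ex:ref-list .
ex:ref-list a biro:ReferenceList ;
    co:firstItem ex:ref-list-item-1 .

# ... whose list items are distinguished from their content, the references.
ex:ref-list-item-1 a co:ListItem ;
    co:itemContent ex:ref-1 .
ex:ref-1 a biro:BibliographicReference ;
    biro:references ex:cited-entity .

# The citing entity cites the cited one; the cited entity is a book chapter.
ex:article cito:cites ex:cited-entity .
ex:cited-entity a fabio:BookChapter ;
    frbr:partOf ex:book .
ex:book a fabio:Book ;
    dcterms:title "An Example Book Title" .
```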
What was interesting in undertaking this was grappling with the philosophical differences that exist between the minds of people who deal with XML all the time and the minds of people who deal with RDF all the time. I'm going to discuss these (I can see chuckling in the front row already) under the six categories I've listed here.

The first is to understand the open-world philosophy of the Semantic Web technologies. This is commonly contrasted with the closed world of databases. In a database, if an item is not represented, its converse is assumed to be true: for example, if you have a table with a column for "open access" and you don't have a "yes" in that column for a particular article, then you assume that article is not open access. That's not the case in the RDF world. If an article is not described as being open access, one just has to keep an open mind: it might be open access, it might not; you simply don't know, since nothing has been said about it. So there is a difference in the assumed meaning of unstated assertions. In the JATS documentation, for the attribute @publication-format, one of the suggested values is "online only", and that is something we would never wish to encode in RDF. The article might be online only today, but because we want our statements to be universally true, and someone might read the RDF statement in five years' time, we don't know whether a print version will have appeared between now and then. So we would never say it is online only; we would say it is online, and we simply would not say whether it is in print: the rest is left unsaid, quite intentionally.
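The open-world point can be made concrete: in RDF we assert only what we know, and silence carries no meaning. A minimal sketch (`ex:` URIs hypothetical):

```turtle
@prefix frbr: <http://purl.org/vocab/frbr/core#> .
@prefix ex:   <http://example.org/> .

# We say that the article's expression IS embodied online ...
ex:article-expression frbr:embodiment ex:web-version .

# ... and we deliberately say nothing about a print embodiment.
# The absence of such a triple does not mean "not in print";
# it means "unknown" -- someone may truthfully assert a print version later.
```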
The second interesting contrast came when comparing RDF and XML descriptions in terms of the semantic meaning of markup terms. A cornerstone of the Semantic Web is the use of open, published ontologies to define terms, giving precise and universally agreed meanings to any particular statement. This is not the case in the XML world, where, I discovered, markup terms can take on different meanings depending on who is using them. This reminded me a little of an Oxford scholar a few decades before me, a mathematician at Christ Church, who wrote a story called Alice's Adventures in Wonderland. One of his characters was Humpty Dumpty, who said: "When I use a word, it means just what I choose it to mean, neither more nor less." And apparently this is how JATS is intended to be used by publishers. It is a descriptive, not a prescriptive, model: its intent is to capture what publishers are actually doing, and it is deliberately vague as to the meaning of a particular term, because it doesn't intend to tell publishers what they should be doing. The suggested values for JATS attributes are just that: suggestions. To take an example, the JATS <article> element, the spec says, can be used to describe not only typical journal articles, research articles, but also much of the non-article content within a journal, such as book or product reviews, editorials, commentaries and news items. Thus a JATS <article> may be used to describe an article, or other sorts of journal content, or even something that has never been published in a journal, like a preprint. This goes a little beyond what most people understand by a journal article. So we cannot say that the JATS <article> element should be translated in RDF as fabio:JournalArticle, because it can also mean those other things. It could be mapped to something we might call a fabio:PeriodicalItem, but even that is too specific, because it doesn't cover preprints: a preprint is not a periodical item. And so we have chosen to use something deliberately vague, the term "textual entity" that I mentioned a little earlier, which is broad enough to include all the relevant possibilities. This achieves semantic accuracy, but at the expense of detailed specificity.
Another thing we had to find a workaround for was the XML use of nested elements to define hierarchical relationships, for example among subject terms, as in the example given here. RDF triples are what you might describe as flat: you can't nest them. So to indicate a subject hierarchy like this we use SKOS, a vocabulary specifically developed to describe taxonomies and hierarchical relationships. In RDF we can then say that a textual entity has particular subject terms, terms 1 and 2; term 1 is a subject term with a particular name, and a narrower version of it is term 2, which is also a subject term with a name.
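That flattened hierarchy, expressed in Turtle using SKOS (dcterms:subject is used here as a generic linking property, and the labels are invented for illustration):

```turtle
@prefix skos:    <http://www.w3.org/2004/02/skos/core#> .
@prefix dcterms: <http://purl.org/dc/terms/> .
@prefix ex:      <http://example.org/> .

ex:textual-entity dcterms:subject ex:term-1 , ex:term-2 .

ex:term-1 a skos:Concept ;
    skos:prefLabel "Information storage and retrieval" ;   # invented label
    skos:narrower  ex:term-2 .        # the hierarchy, without XML nesting

ex:term-2 a skos:Concept ;
    skos:prefLabel "Query processing" .                    # invented label
```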
In FaBiO we decided some time ago to adopt the FRBR model for describing bibliographic entities, because it gives us greater precision in what we want to talk about: we found it useful to be able to talk about the work, the various expressions of the work, and the various manifestations of a particular expression. JATS just doesn't do that, so we had to decide how to smuggle FRBR into our mapping of the JATS world. We use these four terms, conceptual work, textual entity, digital embodiment and digital item, each referring to a particular level of the FRBR model, and those things are related to one another by the set of triples at the bottom here: a textual entity is a fabio:Expression; it is the realization of some conceptual work; it has an embodiment in some digital embodiment (we have made the assumption that everything JATS talks about is digital); and it has a representation in a digital item.
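The four-level FRBR chain just described, as Turtle (class and property names from FaBiO and the FRBR Core vocabulary, as best I recall; `ex:` URIs hypothetical):

```turtle
@prefix fabio: <http://purl.org/spar/fabio/> .
@prefix frbr:  <http://purl.org/vocab/frbr/core#> .
@prefix ex:    <http://example.org/> .

ex:textual-entity a fabio:Expression ;
    frbr:realizationOf ex:conceptual-work ;     # Expression realizes a Work
    frbr:embodiment    ex:digital-embodiment .  # Expression -> Manifestation

ex:conceptual-work    a fabio:Work .
ex:digital-embodiment a fabio:Manifestation ;
    frbr:exemplar ex:digital-item .             # Manifestation -> Item
ex:digital-item       a fabio:Item .
```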
We also had to think carefully when it came to revisions and retractions. Our conclusion is that the FRBR work is the only layer that can change over time, from the first draft to the final published version, or indeed to subsequent online corrected versions; each individual expression at each stage is a static document that doesn't itself change. Each revision is a revision to the work, which results in a new expression. Thus @date-type attribute values such as rev-request, rev-recd, accepted and corrected attach to the work, and to the expression, in the way shown. Retractions, on the other hand, refer to particular expressions: you can't retract a work, but you can retract a particular publication that's out there. So there we used the term textual entity.
One of the most interesting and confusing areas was when we came to MIME media types, in mapping the attribute @publication-format. The JATS documentation suggests that formats can have the following values: print, electronic, video, audio, ebook and online only, with others being proposed elsewhere as additional possibilities. The problem with this is that it groups apples and oranges, or chalk and cheese perhaps: it conflates the following independent categories: the nature of the information (is it text, image or sound?), the nature of the storage medium (is it on paper or on the web?), and the file format in which the digital object, if it is a digital object, is encoded. In FaBiO we have different ways of encoding these separate things, and the exact nature of our mapping therefore must depend on precisely what is intended, as explained in somewhat more detail in the mapping paper. It's reminiscent of the talk we had this morning from the people from the American Institute of Physics: how do you treat these problems? Do you change JATS, do you change the XSLT, or do you do a manual fix?
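Keeping the three categories separate, a FaBiO-style description might look like this (the MIME-type literal and `ex:` URIs are illustrative, and generic frbr/fabio classes are used rather than any specific FaBiO subclass):

```turtle
@prefix fabio:   <http://purl.org/spar/fabio/> .
@prefix frbr:    <http://purl.org/vocab/frbr/core#> .
@prefix dcterms: <http://purl.org/dc/terms/> .
@prefix ex:      <http://example.org/> .

# 1. Nature of the information: this is a textual journal article.
ex:article-expression a fabio:JournalArticle ;
    frbr:embodiment ex:web-version , ex:print-version .

# 2. Storage medium: one embodiment on the web, one on paper.
# 3. File format: a MIME type applies only to the digital embodiment.
ex:web-version   a fabio:Manifestation ;
    dcterms:format "text/html" .
ex:print-version a fabio:Manifestation .
```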
Then finally, among the lessons learned through these problems, we met roles. Someone could be an editor of one paper and an author on another, but because we want RDF statements to be independently and universally true, we can't just say "this person is an editor", because that is only true in the context of one particular journal or one particular paper. To solve that problem we used PRO, the Publishing Roles Ontology, which allows you to specify the context, and also, if you need to, the period over which a role is held. So you can be the editor of a journal for five years, say from 2000 to 2005, then cease to hold that role when you get promoted to editor-in-chief; you can have different roles at different times; you can be PI on a project at particular times; and so on. For non-publishing roles, like being a principal investigator or a photographer, we use a different ontology which complements PRO, and in fact imports and expands on it, called SCoRO, the Scholarly Contributions and Roles Ontology.
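The PRO pattern for context-dependent roles, sketched in Turtle (property names such as pro:holdsRoleInTime and pro:relatesToDocument are from the published ontology as best I recall; `ex:` URIs hypothetical):

```turtle
@prefix pro: <http://purl.org/spar/pro/> .
@prefix ex:  <http://example.org/> .

# Jane is an editor, but only in the context of paper 1 ...
ex:jane pro:holdsRoleInTime ex:role-1 .
ex:role-1 a pro:RoleInTime ;
    pro:withRole          pro:editor ;
    pro:relatesToDocument ex:paper-1 .

# ... and an author in the context of paper 2.
ex:jane pro:holdsRoleInTime ex:role-2 .
ex:role-2 a pro:RoleInTime ;
    pro:withRole          pro:author ;
    pro:relatesToDocument ex:paper-2 .

# A pro:RoleInTime can also carry a time interval,
# e.g. journal editor from 2000 to 2005.
```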
Last month I wasn't able to participate in all the work going on around this topic, so I left it to Silvio, and he has created an XSLT transform that permits the automated conversion of a document marked up in JATS XML into RDF. I put into my presentation the statement that the documents are available online, and they should be, but someone has already discovered that today they are not. I've been in email conversation with Silvio to get it sorted, and what you should find, and will find by tomorrow, is that this XSLT, and the examples (he has taken the JATS examples and automatically converted them to RDF), should be available at the URLs given in our paper. I'm embarrassed to say they are not there today; I'm sorry about that. In principle at least, that mapping allows us to take the JATS metadata elements and attributes from a marked-up document and convert them automatically to RDF, enabling this information to be published on the Semantic Web as Linked Open Data.
Now I want to tell you a little about some work that Tanya Gray has done in my group recently, which is to develop a very nice web input system for XML. You start with an XML model, and using XForms you generate a web form that you can complete according to that underlying model; an example is shown here. This is the JATS input form. It has five tabs, article, article-meta, journal-meta, contrib and ref-list, matching the five tables that we created, and in each of those there are fields you can complete with the element or attribute values as appropriate. For @article-type, for example, there is a drop-down menu giving you all the different values suggested in the JATS documentation. We can do the same for article-meta (this is a partial view of all those metadata elements, and again there are pull-down lists where appropriate), and similarly for the journal metadata, for contrib and for ref-list. What's significant about this development, and it fits very nicely with the previous talk we heard (I would love to see them combined), is that it is based on an XML model, in this case the JATS model, and it automatically generates the form: if the model changes, the form will change appropriately. I don't anticipate the JATS model changing tomorrow, but eventually there will be a 1.1, and that will be immediately reflected in the form. You can enrich the form with web services that, given a DOI, go off and pull back all the bibliographic information from PubMed or wherever you want; you can have drop-down lists, date controls, data-type restrictions and so on. The output you get is a document marked up in XML, which can then be transformed into RDF using the XSLT I described, or into anything else you want: it is structured data. I think this is quite useful.
To summarize, two recommendations. Where do we go in this revolution we're going through, from the horse and carriage to the Ferrari? The first is to think web, not print: to get away from the mindset we've had for three, four or five decades of thinking about print objects. The web is the platform now, electronic resources are really becoming the norm, and physical libraries are changing their usage patterns out of all recognition, as you know. With that, cataloguing paradigms based on card indexes are being replaced by intelligent faceted browsing and semantic search over rich metadata, which is why it's important that we have open access to rich metadata.

The second recommendation is that we adopt Semantic Web technologies, as indeed the US Library of Congress has already done, and as major national libraries in Europe are committed to do by next year, starting to use Linked Data as a means of describing their metadata.
It's difficult to change a publisher's direction. I use the analogy of the supertanker because it was told to me by an Elsevier person, so I don't feel I'm being rude to Elsevier, who said that changing the Elsevier publication workflow was like trying to turn a supertanker. But we have some tools to help us: the SPAR ontologies and this JATS mapping are just a few of the available tools that will make it easier to publish bibliographic metadata about journal articles as open linked data. I hope that the work we have done will mesh with the work that others are doing and will enable others to take this and run with it, and move us from these non-interoperable, in many cases proprietary, XML markups that different publishers are using, to the world of open linked data.

I want to thank Katie Portwin, Alistair Miles and Graham Klyne for work that I didn't actually talk about today (I took the slide out for time and forgot to take the acknowledgment out): all the semantic publishing work which preceded what I talked about today. I thank Silvio Peroni for doing all the interesting ontology development and mapping work with me, and Tanya Gray for developing the web form. I also want to thank JISC, who funded all our recent work. I want to conclude by giving an example of how we will use this, how we have already started to use it. We have taken all the open access articles from PubMed Central, extracted their reference lists, encoded those reference lists as named graphs in RDF, and published them as the Open Citations Corpus. There, all the references to some three and a half million articles, occurring in the articles of the open-access subset of PMC, are now freely available online as open linked data. The good news is that we now have a little funding to take that small project forward, and we are going to be joined in that not only by CrossRef, who have their hands on a great volume of reference data, but also by three of the major subscription-access publishers, who have already agreed to open the references from their journals: Nature, Science and Oxford University Press. I know there are other publishers here, and I would encourage them to get in contact with me if you want to open your reference data and join this open resource, which can then be used for all sorts of things that nobody has yet imagined: visualization of citation networks and co-author networks in an open way that you don't have to pay Thomson Reuters for; an automated reference-correction service, so that as an author, or as an editor who has just received a manuscript, you can run your references against this corpus and get back notification of which ones might contain errors, from trivial things like having "eta" instead of a beta symbol in a title to having the wrong year. Going through the PubMed Central references we've found all sorts of examples; we reckon about one per cent of all references have errors of some sort. And there are other things we haven't thought of on which you can build. So I'll stop there. Thank you very much; it's nice to be welcomed into the JATS community.

[Question] Damian, from Avalon Consulting. Forgive me if this is naive.
I'm not much of an RDF expert, although I do work with colleagues in the RDF field. I know you said that in your mapping you are looking only at the metadata and not at the actual content itself. Let me ask about mapping the textual content of the articles, with two questions. First, what level of granularity would you go to in making RDF assertions: are you making a decision at each paragraph, at each word?

[Answer] Everything is possible; it's just a question of what effort you want to put in and what benefit you might expect to get out. We have already written DoCO, the document components ontology; we just haven't had time to do that mapping, but it's quite feasible, and I don't think it has any more problems than the mapping we've done already. That will take you to the paragraph or sentence level, and it also allows you to define the context in which a reference occurs, so that you can match by text mining if you like. Stephen Wan from CSIRO has already done this: you take the citing sentence and text-mine the cited article to see what similarities it has.

[Question] The follow-up question is: do you think it is more valuable to do things like extracting the entities and concepts from the text and then making RDF assertions about those?

[Answer] That's a very deep question, because we've just been talking about metadata. Text mining the text, or even manually marking up the text, for semantic content is a different story. We did it as an exemplar for a paper from PLoS Neglected Tropical Diseases: we marked it up manually, and it took seven days. We went through and marked up content items according to nine semantic categories (names, places, diseases, animals, various things; it was a paper to do with disease), and that's available on the web for you to look at. We did it to encourage publishers to do the same. Would it be nice if all papers had this semantic content marked up? Yes. Microsoft have a tool that allows you to do that; I haven't seen it used in anger yet, but as a plug-in to Word 2007 it lets you tag a particular concept in the text with an underlying term from an ontology. I would like them to rewrite it to allow us to tag references with CiTO terms. But the world of text mining and automatically retrieving semantic content is something I have no personal expertise in; it is clearly a research area.

[Question] You mentioned several problems associated with JATS: ambiguous data and data-field values and so forth, and a lot of publishers using them in different, sometimes interesting, ways. I wonder if you have any way to feed that experience back to the JATS community, for example via the tagging guidelines, perhaps some kind of guidelines based on your experience?

[Answer] So far we've been constrained by time and resources, but we have written the paper on which my talk is based, available on the JATS-Con website, which discusses the various issues, and I've also fed comments back to Debbie, who I hope will take them to the JATS community that is actually developing this; I think that has already led to one or two changes.

[Question] We have done something similar in RDF for PMC.
[Question] We have done something similar in RDF for PubMed Central; however, we used the BIBO ontology there. Your paper is the first I have seen that uses the FaBiO ontology for bibliographic entities. Why did you choose it over BIBO?

[Answer] When we wrote the SPAR ontologies we decided not to use BIBO but to invent FaBiO, because we wanted to have the richness of FRBR, and we also wanted to include many terms that BIBO lacks. So you can say things using SPAR that you cannot say as nicely in BIBO; but if you just want to say "this is an article", you can do that equally well in either.

[Question] Do you have a mapping between the two?

[Answer] Yes, we have created a mapping between FaBiO and BIBO, and it is all available on the web.

[Question] Since we are doing something similar, it would be nice to be able to align our work rather than both inventing the same thing. You also said that you are planning to include the content?

[Answer] No, I said it is possible to do. I am not doing it in the short term, because I want to concentrate on the Open Citations work. And actually, as was said in the previous talk about tables, for instance, some of the tables we encountered were in a format where indexing them and putting them into the RDF did not make sense for us, so I will probably approach the people concerned to see how they think tables should be handled in RDF; it is something on which we should collaborate.

[Question, Nature Publishing Group] Just one comment about RDF and Linked Data, because there is a confusion that arises when people try to map this content. When we talk about linked data and triple stores, it is all about metadata. At Nature we have built a triple store, and you can query the metadata from it; but the actual content of an article is all in XML, stored in a separate index, so there is a very clear separation between metadata and content. Do you think mapping the content itself into RDF could express things that the XML cannot?

[Answer] Well, that is partly why we did not attempt it offhand, but I quite agree. Of course, what we are doing, and what many publishers fail to do, is to expose the XML. Lots of publishers use XML internally for their workflow and then publish only a PDF, so all this wonderful markup is thrown away. In the long run it does not really matter exactly how the data are structured; publishing structured data is what is important, since you can map from one form to another, and XML is much better than nothing. But having the metadata in RDF allows you to do things that you cannot do with the XML alone. Thank you.
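The point that either ontology can say "this is an article", while FaBiO offers additional FRBR-aware vocabulary, can be illustrated with a hedged Turtle sketch; the resource `ex:paper1` and its namespace are invented for illustration, while `bibo:Article`, `fabio:JournalArticle` and `fabio:hasPublicationYear` are actual terms from the two ontologies:

```turtle
# Hypothetical example resource; ex: is an invented namespace.
@prefix fabio:   <http://purl.org/spar/fabio/> .
@prefix bibo:    <http://purl.org/ontology/bibo/> .
@prefix dcterms: <http://purl.org/dc/terms/> .
@prefix xsd:     <http://www.w3.org/2001/XMLSchema#> .
@prefix ex:      <http://example.org/> .

# Both ontologies can make the basic typing statement:
ex:paper1 a bibo:Article .

# FaBiO makes the same statement with a more specific, FRBR-aware class,
# and supplies additional bibliographic properties:
ex:paper1 a fabio:JournalArticle ;
    dcterms:title "From Markup to Linked Data" ;
    fabio:hasPublicationYear "2012"^^xsd:gYear .
```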


Formal Metadata

Title From Markup to Linked Data: Mapping NISO JATS v1.0 to RDF using the SPAR (Semantic Publishing and Referencing) Ontologies
Series Title JATS-Con 2012
Part 11
Number of Parts 16
Author Shotton, David
License CC Attribution - NonCommercial - ShareAlike 3.0 Unported:
You may use, modify, copy, distribute, and make publicly available the work or its content, in unmodified or modified form, for any legal, non-commercial purpose, provided that you credit the author/rights holder in the manner specified by them and that you pass on the work or this content, including in modified form, only under the terms of this license
DOI 10.5446/30580
Publisher River Valley TV
Publication Year 2016
Language English
Production Year 2012
Production Place Washington, D.C.

Content Metadata

Subject Area Computer Science
