Combining Public and Proprietary Data and Requirements for a possible RDF 2.0

Date: February 4, 2010

Location: New York's Hotel Pennsylvania, 401 7th Ave (33rd St & 7th Ave), PennTop North, 18th Floor, New York, NY 10001

Short introduction to NLP and Semantic Web agenda (10 min) - Breck Baldwin (alias-i)

Combining public and proprietary data to add value to both - Bob DuCharme

Some semantic web applications convert all data to RDF triples, store them in a triplestore database, and then build applications around that stored data. Many tools are available, though, that let you treat data from diverse sources such as spreadsheets, relational databases, and Wikipedia as triples, and once you do, you can easily mix and match them to create new value from the combinations. Bob will demonstrate how to do this with both open source tools and those of his company, TopQuadrant, using dummy data about analyst recommendations for stock picks.
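The following sketch (not part of the talk itself) shows one way to mix local and public triples with Python's rdflib library; the CSV file, the ex:analyst / ex:rating vocabulary, and the DBpedia resource are hypothetical stand-ins for the kind of spreadsheet and public data described above.

<pre>
# A minimal sketch of mixing triples from two sources with rdflib.
# File names, URIs, and the analyst/stock vocabulary are hypothetical
# placeholders, not the actual demo data from the talk.
import csv
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/stocks#")

g = Graph()

# Source 1: proprietary analyst recommendations kept in a local CSV
# (columns assumed: ticker, analyst, rating).
with open("recommendations.csv", newline="") as f:
    for row in csv.DictReader(f):
        stock = EX[row["ticker"]]
        g.add((stock, EX.analyst, Literal(row["analyst"])))
        g.add((stock, EX.rating, Literal(row["rating"])))

# Source 2: public RDF about one of the companies, e.g. from DBpedia
# (assumes DBpedia still serves RDF via content negotiation).
g.parse("http://dbpedia.org/resource/IBM")

# One SPARQL query now runs over both sources in the merged graph.
results = g.query("""
    PREFIX ex: <http://example.org/stocks#>
    SELECT ?stock ?analyst ?rating
    WHERE { ?stock ex:analyst ?analyst ; ex:rating ?rating . }
""")
for stock, analyst, rating in results:
    print(stock, analyst, rating)
</pre>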

Requirements for a possible "RDF 2.0" discussion - Moderator: Chris Welty (IBM Research)

RDF 2.0 W3C Workshop

RDF 2.0 links, features & requirements list