Semantic Web Media Summit Meetup
In September we are teaming up with Mediabistro.com to present the Semantic Web Media Summit NYC 2011, and to celebrate the event we will also host a meetup exploring new, even speculative and fun, Semantic Web developments in the news, publishing and media industries. The Semantic Web Media Summit NYC is the leading event in 2011 for media professionals to explore the emerging Semantic Web and to discuss professional solutions for applying Semantic Technologies in their organizations. http://tinyurl.com/5whkkkn
We hope to see you at the Semantic Web Media Summit 2011 in NYC on September 14th.
Short community presentations:
==OpenAmplify - Mike Petit==
OpenAmplify is a web service that brings human understanding to content. Using patented Natural Language Processing technology, OpenAmplify reads and understands every word used in a text. It identifies the significant topics, brands, people, perspectives, emotions, actions and timescales, and presents the findings as actionable structured data.
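As a rough illustration of how a client might consume a text-analysis web service of this kind, the sketch below POSTs text to a REST endpoint and reads back structured results. The endpoint URL, parameter names and response shape are illustrative assumptions, not OpenAmplify's documented API.

<syntaxhighlight lang="python">
# Hedged sketch: posting text to an NLP analysis web service.
# The endpoint URL, parameter names ("apiKey", "inputText") and the
# response layout are assumptions for illustration only.
import requests

API_URL = "https://api.example.com/amplify"  # hypothetical endpoint
API_KEY = "your-api-key"

def analyze(text):
    """Send raw text to the service and return its structured analysis."""
    response = requests.post(
        API_URL,
        data={"apiKey": API_KEY, "inputText": text, "outputFormat": "json"},
    )
    response.raise_for_status()
    return response.json()

# A service like this would return topics, brands, people, emotions,
# actions and timescales as structured data ready for downstream use.
result = analyze("Reviewers praised the new tablet, but supplies are short.")
print(result)
</syntaxhighlight>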
==Callimachus - Bernadette Hyland==
Callimachus is an Open Source Linked Data management system with a wiki-like interface and a class-based template engine that lets you visualize and create Linked Data easily and quickly. We are using Callimachus to turn a collection of Linked Data resources into a read/write Linked Data warehouse. This approach is being embraced by organizations that want the benefits of data warehouses but are looking to leverage the speed and standards of the Web to create, view and manage data. Callimachus gives customers the ability to visualize Linked Data via "follow your nose" navigation and familiar map and chart images. Today Callimachus runs on a variety of triple stores, including Sesame, Mulgara and OWLIM, with support for additional RDF database management systems coming soon.
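"Follow your nose" navigation can be demonstrated independently of Callimachus: dereference a resource URI, parse the RDF that comes back, and then dereference the URIs it links to. Below is a minimal sketch of that general Linked Data pattern using the Python rdflib library; the starting URI is only an example, and this is not Callimachus code.

<syntaxhighlight lang="python">
# Minimal "follow your nose" sketch with rdflib: fetch a Linked Data
# resource, then dereference the resources it links to. This shows the
# general pattern, not Callimachus internals.
from rdflib import Graph, URIRef

def follow_your_nose(start_uri, hops=1):
    graph = Graph()
    graph.parse(start_uri)  # content negotiation returns RDF for the URI
    frontier = {o for o in graph.objects(URIRef(start_uri), None)
                if isinstance(o, URIRef)}
    for _ in range(hops):
        next_frontier = set()
        for uri in frontier:
            try:
                graph.parse(uri)  # dereference each linked resource
            except Exception:
                continue  # not every URI resolves to parseable RDF
            next_frontier |= {o for o in graph.objects(uri, None)
                              if isinstance(o, URIRef)}
        frontier = next_frontier
    return graph

g = follow_your_nose("http://dbpedia.org/resource/Linked_data")
print(len(g), "triples gathered")
</syntaxhighlight>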
==[http://www.tagasauris.com/ Using Computer Algorithms and Human Intelligence to Enrich Digital Media with Linked Open Data] - Todd Carter==
Tagasauris believes in the power of knowledge to inform, enrich, and improve our world and our lives. Our technology combines computer algorithms with human intelligence to solve complex problems that cannot be solved by either component alone. Our software helps collections managers and archivists factor and scale their curatorial operations. We achieve this by decomposing complex workflows into discrete, interconnected micro-tasks that can be distributed and processed cooperatively by machine and human agents. We manage the full stack (content, workflows, tasks, workers, pricing, payments, quality and integration) so you don't have to. By making content accessible in new ways, Tagasauris enables companies to profoundly increase both their market reach and their customer engagement. Todd Carter is the CEO and Co-founder of Tagasauris.
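The decomposition described above can be sketched abstractly: a workflow becomes a list of micro-tasks, each routed to either a machine or a human agent. This is a toy illustration of the pattern, not Tagasauris's implementation; all names are hypothetical.

<syntaxhighlight lang="python">
# Toy sketch of a workflow decomposed into micro-tasks that are routed
# to machine or human agents. Illustrates the pattern only; it is not
# Tagasauris's actual system.
from dataclasses import dataclass
from typing import Optional

@dataclass
class MicroTask:
    name: str
    payload: dict
    agent: str                      # "machine" or "human"
    result: Optional[dict] = None

def machine_tag(task):
    # Stand-in for an algorithmic step, e.g. automatic media tagging.
    return {"tags": task.payload["text"].lower().split()}

def human_review(task):
    # Stand-in for a step dispatched to a human worker queue.
    print(f"[human queue] please verify: {task.payload}")
    return {"verified": True}

AGENTS = {"machine": machine_tag, "human": human_review}

def run_workflow(tasks):
    for task in tasks:
        task.result = AGENTS[task.agent](task)  # route to the right agent
    return tasks

run_workflow([
    MicroTask("auto-tag", {"text": "Mayor opens new bridge"}, "machine"),
    MicroTask("verify-tags", {"tags": ["mayor", "bridge"]}, "human"),
])
</syntaxhighlight>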
==KASABI Linked Data in Action - Leigh Dodds==
Kasabi is a new marketplace for hosting your Linked Data. We take a look at Kasabi in action.
==[http://kontekst.us/ Creating Ontologies from News] - Vijay Raj==
"Today's News is Tomorrow's Knowledge". We believe to be relevant, our Knowledge Base needs to be current. We focus on analyzing news to build our ontologies and tap into the rich structure and use Wikipedia to "prime" our Knowledge Base. We have created more than 700 granular, hierarchical ontologies about various topics related to US. The hierarchy allows us to create contexts based on any topic, such as a sports team or a state. The context is essentially a small subset (usually ten) ontologies most relevant to the topic. We analyze each news article in a "context" based on the news source. We then iteratively enhance or reduce the scope of the context based on the topics found in the article. This essentially eliminates the need for disambiguation! The context also helps us to more accurately classify the news article. New assertions found in the news article are added in that context for further verification. The curation of news over time helps us create hyper local ontologies for every person place or thing. Vijay Raj started working on Knowledge Systems about 5 years ago. His first contribution to Linked Open Data was a mapping from DBPedia to OpenCyc. Based on the DBPedia (Wikipedia) to OpenCyc mapping, he doubled the Cyc Knowledge Base by extracting assertions from Wikipedia article text. This enhanced KB was used to analyze news. Following is an app to "visualize" news as a map of articles and concepts.[ *] Lately his focus is to create a granular hierarchical knowledge base, with assertions extracted from Wikipedia and continuously updated from news analysis. The knowledge base, Kontekst.Us, has over 700 ontologies, and is continuously generating more data from news. A SPARQL endpoint and related web-services are hosted here and he blogs here.
==[http://www.semanticengines.com/ Contextual Targeting Using Semantics] – Dmitri Soubbotin==
Advertisers are always looking to increase conversion rates for their ads. If an ad is relevant to the user’s interests, it is more likely to be clicked than a randomly chosen ad. There are a number of techniques for targeting users, including demographics, geo-location, history, etc. One efficient technique that our tools support is contextual targeting, i.e. picking an ad category that matches the content of the page. The simplest form of contextual targeting would be to use the words from the URL of the page, but the accuracy of such targeting would be quite low. Our approach, by contrast, uses text mining of the page and semantic analysis. Our semantic engine SenseBot “understands” what the webpage is about and extracts key concepts from the page. We then map them to an extensive advertising taxonomy, picking the most relevant ad category. The result is an ad that matches the content of the page and is more likely to be viewed by the user as added value rather than a nuisance. Our tool is implemented as a REST API accessed by the client application. The Contextual Targeting API is deployed in the cloud, providing a scalable, high-performing solution. The API is part of a family of APIs developed by our company. Dmitri Soubbotin is the CEO and Founder of Semantic Engines LLC.
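From the client's side, using such a service is a single HTTP call: send the page (or its URL) and receive the best-matching ad category. The sketch below is a hypothetical client; the endpoint URL, parameters and response fields are assumptions, not Semantic Engines' documented API.

<syntaxhighlight lang="python">
# Hypothetical client for a contextual-targeting REST API. Endpoint URL,
# parameter names and response shape are illustrative assumptions.
import requests

API_URL = "https://api.example.com/contextual-targeting"  # assumed endpoint

def ad_category_for(page_url):
    """Ask the service which advertising-taxonomy category fits the page."""
    response = requests.get(API_URL,
                            params={"url": page_url, "key": "YOUR_KEY"})
    response.raise_for_status()
    return response.json()["category"]  # assumed response field

print(ad_category_for("http://example.com/reviews/road-bikes"))
</syntaxhighlight>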