
ZONE: towards a better news feed


Formal Metadata

Title
ZONE: towards a better news feed
Subtitle
and a way to create newspapers according to topics
Alternative Title
ZONE: towards a better news feed
and a way to create customized newspapers according to your favorite topics
Number of Parts
90
License
CC Attribution 2.0 Belgium:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.

Content Metadata

Abstract
Nowadays we can use RSS feeds, Twitter, Google Reader, Yahoo Pipes or aggregators to keep up with the news. However, those solutions do not guarantee data privacy, and they mostly organize news by origin. The ZONE project proposes an innovative solution to overcome these issues, using the power of the Semantic Web to group related information together. ZONE-project provides a new way to follow news. At its core, the system aggregates news items from various RSS feeds. Using the Semantic Web, we are able to efficiently tag and annotate each news item. Those tags are the basis of filters. Filters allow users to see only the news that is relevant to them: for instance, a user can retrieve all news containing a given tag or, on the contrary, never see news containing specific tags. In short, each user can create custom news feeds according to his or her interests. Since it may be tedious for John Doe to build his own filters, it will be possible to exchange filters with other users, or to read news feeds built by other users. This enables users to create group news feeds focused on specific topics such as technology, health, industry, transport, agriculture, communication, or the environment. No field can escape the ZONE search and news feed mechanism!
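The tag-based filtering the abstract describes can be sketched in a few lines; the item structure and tag names below are illustrative assumptions, not the project's actual data model:

```python
# Each news item carries the tags produced by the annotator.
# (Hypothetical structure, for illustration only.)
news = [
    {"title": "EU summit opens", "tags": {"politics", "European Union"}},
    {"title": "New vaccine trial", "tags": {"health", "science"}},
    {"title": "Transfer rumours", "tags": {"sport"}},
]

def custom_feed(items, wanted=frozenset(), blocked=frozenset()):
    """Keep items having at least one wanted tag (if any are given)
    and none of the blocked tags."""
    return [it for it in items
            if (not wanted or it["tags"] & set(wanted))
            and not (it["tags"] & set(blocked))]

feed = custom_feed(news, wanted={"politics", "health"}, blocked={"sport"})
print([it["title"] for it in feed])
# ['EU summit opens', 'New vaccine trial']
```

Exchanging filters between users then amounts to sharing the `wanted`/`blocked` tag sets.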
Transcript: English(auto-generated)
OK, hello. I'm really happy to be here to present my project, called the ZONE project; I will show you everything you need to know about it. First, I have a problem. When I'm on Twitter, my timeline is really, really big. I can't follow everything. Who has this problem? Everybody.
It's a big problem. How can you sort all this information? How can you say, I want to see this thing but not that thing? We need solutions that do this sorting for us, so we don't have to read everything. That's what I want to work on. And it's not only Twitter; it's all information. When you read a newspaper, you skip one page and read another: you need your own personal newspaper. The alternative is to buy every newspaper and read everything every morning, but that's not a good solution. We need to do this quickly; in computing, if we want to follow the news, we need to do it efficiently. Now, there are existing solutions, a lot of them. First, you can trust providers: buy a newspaper you like and say, OK, this is good information, I just have to read this. Or you can go to Google News or other websites. But it's not your information.
To get your own information, you can follow RSS feeds with an aggregator. You say, I want news from this website and from that website, and it puts all the news from those sites on one page. But it does no filtering: I still have to read everything in the aggregator, so for me it's not a good solution either. Finally, I found one solution called Yahoo Pipes. It's really easy to use: you create boxes, link the boxes together, and it works. You can say, I want these ten or fifteen RSS feeds, keep the news containing this word or that word, sort them, shuffle them maybe, and you get your filtered news. But it's Yahoo Pipes: it runs at Yahoo, not on your computer. So I have two requirements. First, I need to work on mining. Why mining? Because when I'm filtering news, I can't just ask for this word or that word; I need to say, I want information about this person or that person. Second, I need an open solution. My solution can't be hosted by Google or Yahoo; it needs to run on my computer, for me, by me. So why couldn't I build it myself? I was a student in an engineering school.
I can write code. I have studied graph analysis, I can handle all of that, I can do web hosting, and I love open source. So why not build this solution? I started to code, and I made my solution. It's pretty beautiful; I love it. It works for me: I have my news, my filtering, everything I wanted. But maybe it has the same problem as Yahoo Pipes: it's not beautiful for anyone else. So I said, OK, I have my solution, it works for me, I use it. But how can I make it better? How can I make a solution for other people, for you? Then I saw a contest run by Inria. They said: we give you one year, full time, to work on an open-source project, and you manage the project as you want. You make it open source, you go to conferences to show that Inria is doing good work, and so on, but you work as you want, on open source. I've been working on this for five months now, and I will present what I'm working on and how it works. The tech. I have my feeds; like any other feed aggregator, I need to read them. But now I do better: I annotate my news. How do I annotate news? I use an annotator. The annotator takes a text; here, for example, I have a text from BBC News. The service reads the text and underlines the important things, the named entities. It can see that the text is talking about David Cameron. It sees that he is named twice, first as "David Cameron" and then as "Mr. Cameron", and it concludes: this person is really important in this text. It sees other things too, such as the European Union, and says: this is a news item about David Cameron and the European Union. With these annotators I can mine my texts; I can extract the sense of every text I analyse.
OK, I have this. But I can do a lot more. Why not use open data? With open data, information is organized so that it can be analysed by computers; with open data, you have the Semantic Web. The aim of this research topic is to take information sources like Wikipedia, link them to one another, and make it possible for a computer to analyse a text. For example, my text was talking about David Cameron, but through Wikipedia I know that David Cameron is a politician, so I can now say that my news item is about politics. I can attach a lot of meaning to my texts this way, and that is really important for my filtering.
Another thing: I can do data mining on my news. I take all the news items and group the similar ones together, so I can say: all these items are about one subject, all those are about another. That lets me do more and more with my news and my filtering. OK, now I have all my news; I just need to store it. For that I use a NoSQL database called Virtuoso, a graph-oriented database used in the Semantic Web. It works really well: it lets me mine and organize my data, and it performs efficiently in my project. I have my database, I have my information, and now I need to write a web interface to see my news. I have my topics; I just need to translate my questions for the computer. For this I use SPARQL queries. SPARQL comes from the Semantic Web; the idea is to write queries, a bit like in MySQL, but against a graph database. Take the David Cameron example: I can say, give me all items that talk about this person and about the European Union, and it returns exactly that information. It's very efficient, and it lets me filter on the person, not just on words; it's really about context. I can ask for the person David Cameron and the organization European Union, and it handles this really efficiently.
Now I have my result, the information I want, and I just need to deliver it to a client, as an RSS feed or through a web interface. So the solution has two workflows: the annotation workflow, running on a server, and the clients on the other side, web clients, mobile phones, whatever you want. Now a short demo. Here is the solution. You can see all the news and all the annotations. If we click on one annotation, say White House, we get every item about the White House: RSS items, Twitter news, all annotated, across several pages. I can also click to see the news tagged Maison Blanche, and then say I also want news about Amit Kazi, and I get all those items from my RSS feeds and from Twitter. So it works; I have my solution. And now I can do more with open data.
For example, I can ask for all news about the départements of France, take all that information, and build my filtering on top of it. OK, it's not much more beautiful than the first version, but I'm really working on that, and I think in one or two months we will have a much better interface. That was the demo. You can try it; please do. I really need users: I need to know how you want to use this solution. We have built a big system; we just need to find the usages, to find out how you want to use it. That's really important, because I can use it for my own needs, but I think there are many usages you would be interested in: medical news feeds, news monitoring, a lot of things.
And I think we need to find out what people want to use it for. Last slide. For me it's a really good project; I love going to work on it every morning. If someone wants to work on this project, we have an internship; I think it's a good opportunity. Thank you.

Thank you for sharing this new software with us. Maybe time for one question? Yes: about named-entity recognition, what kind of tool do you use in your annotator to extract named entities?

Yes. We use tools from the named-entity recognition domain. Personally I use Wikimeta, but there are other solutions: you can go to the NERD site, N-E-R-D, and you will find a lot of tools for this kind of application. There are not many open-source solutions, but we are working with Inria on one that works in French and in English, called Spotlight. It builds on Wikipedia, and we will try to adapt it to French so it works for my project.

Thanks. Your solution is really powerful. Another question, about the topics users can select their news with: have you thought about criteria for suggesting topics, or anything like that? Because I think it's a big problem for people to select what they are interested in.

Yes. One thing I really want in the solution is a language for building categories and meta-categories. A user will say: I create a category with this thing and that thing, but not that other thing, and from such categories we can build meta-categories. Everything we use for this filtering today comes from the Semantic Web, but there is a lot more we could use: with other annotators we could work on medical news, for instance. Having rich annotators is really important to get a lot of information about each news item. You're welcome.
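The category and meta-category language described in that last answer could be sketched as composable predicates over an item's tag set; the combinator names here are illustrative assumptions, not the project's actual design:

```python
# Filters are predicates over a news item's tag set; categories and
# meta-categories are boolean compositions of them. (Sketch only.)
def has(tag):
    return lambda tags: tag in tags

def all_of(*filters):
    return lambda tags: all(f(tags) for f in filters)

def any_of(*filters):
    return lambda tags: any(f(tags) for f in filters)

def none_of(*filters):
    return lambda tags: not any(f(tags) for f in filters)

# A user-built category: politics news about the EU, but not sport.
category = all_of(has("politics"), has("European Union"),
                  none_of(has("sport")))

# A meta-category combining a user category with another filter.
meta = any_of(category, has("health"))

print(category({"politics", "European Union"}))  # True
print(meta({"health", "technology"}))            # True
print(category({"politics", "sport"}))           # False
```

Because filters are just values, they could be serialized and exchanged between users, as the abstract suggests.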