Monthly Archives: September 2007

MashMaker — Intel entering the Semantic Web "through the back door"?

   That’s at least the way they put it. As I unfortunately haven’t yet managed to be among the lucky ones who got the first batches of early-adopter tickets for Intel’s MashMaker, I have to stick with the documentation when it comes to figuring out the details:
[Screen thumbnail of the Intel MashMaker website]
In order to bring mash-up creation to "the rest of us", the service provides a toolbar for Firefox 2+ (with other versions still to come) which brings pre-programmed access to service APIs like Google Maps or Yahoo! Search with it. Whenever you then visit a webpage and choose a mash-up type from the toolbar’s menu, or simply push one of the buttons for the more popular services, the program tries to relate and mash up the (optionally selected) webpage content with the popular web service of your choice.

   And while the toolbar software seems to use semantic content extraction more in the way search engines do, users are apparently able to share with others whether they are happy with the automated processing results, and especially whether they could successfully use the mash-up they created. So you can annotate and refine the results later on, and in return the MashMaker server will learn about web pages’ content and soon start proposing suitable mash-ups by itself.

   While it currently still looks a bit like messing up DabbleDB and del.icio.us, it may indeed have a real semantic, RDF-based backend, making sure it won’t mess up its databases either (which has not yet been confirmed) ;-) . Nevertheless, MashMaker finally seems to be a really powerful new tool for collaborative webpage annotation.
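   Just to illustrate what such an RDF-based backend could store, here is a minimal sketch of a shared page annotation, written in Java with the Apache Jena library. This is purely my own guesswork, not anything Intel has published: the vocabulary namespace, property names and page URL are made up for the example (and note that the Jena releases of 2007 still lived in the com.hp.hpl.jena.* packages).

```java
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Property;
import org.apache.jena.rdf.model.Resource;
import org.apache.jena.vocabulary.DC;

public class AnnotationSketch {
    public static void main(String[] args) {
        // Hypothetical vocabulary; MashMaker's actual schema (if any) is not public.
        String ns = "http://example.org/mashmaker-annotation#";

        Model model = ModelFactory.createDefaultModel();
        model.setNsPrefix("mm", ns);

        Property annotates = model.createProperty(ns, "annotates");
        Property suggestsMashup = model.createProperty(ns, "suggestsMashup");

        // A user-contributed annotation saying that this page worked well
        // with a mapping service, so the same mash-up can be proposed to others.
        Resource annotation = model.createResource(ns + "annotation/42")
                .addProperty(DC.creator, "some-user")
                .addProperty(annotates, model.createResource("http://example.com/somepage"))
                .addProperty(suggestsMashup, model.createResource(ns + "service/GoogleMaps"));

        // Serialize the triples so they could be shared with the MashMaker server.
        model.write(System.out, "TURTLE");
    }
}
```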

Buzz: Semantic Web in The Economist and on Tim’s Radar

It was Tim O’Reilly who wrote on his "Radar" about the perception of the Semantic Web as it was recently reported by The Economist in a mistaken context: as a sort of name for any Web 2.0 application that seems to think, of which there are supposedly many.
   Still, Tim uses the opportunity for an explanation of where he thinks the differences are and where the two may come together. The day after, he cites different approaches and hurdles towards making semantic content a reality.
 
Here is what I commented on the articles:

Being a Semantic Web project leader myself, I think one major difference between the Semantic Web (in the narrower sense) and Web 2.0 is that the latter, for most people (including myself), describes an outcome or at least a resulting type of application, while the former is obviously just one of many available vehicles to achieve this outcome.
 
Using the Semantic Web as a way to build a Web 2.0 application has obvious disadvantages:

  1. You need much longer to get your pants on: there are about 1,400 pages of standards and methods between you and your first app, and even more documentation is assumed missing for the tools you are about to use to get it programmed…
  2. Your learning curve is pretty steep, even before you and your crew have grasped the most basic concepts. We ended up splitting the tasks of programming (Java recommended) and data modelling/markup writing between different people, as it turned out that the two tasks require quite different mindsets (a small sketch of this split follows right after this list).
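To illustrate that split (a minimal sketch with made-up class and vocabulary names, again based on Jena): the data modeller thinks in vocabularies and writes RDF/Turtle markup, while the programmer thinks in objects and iterators and only loads and navigates the resulting model.

```java
import java.io.StringReader;

import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.vocabulary.RDF;
import org.apache.jena.vocabulary.RDFS;

public class SchemaSplitSketch {

    // What the data modeller writes: a tiny, made-up RDFS vocabulary
    // (normally kept in a separate .ttl file, inlined here for brevity).
    static final String SCHEMA =
        "@prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .\n" +
        "@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .\n" +
        "@prefix ex:   <http://example.org/schema#> .\n" +
        "ex:TradingPartner a rdfs:Class ; rdfs:label \"Trading partner\" .\n" +
        "ex:offersProduct  a rdf:Property ; rdfs:domain ex:TradingPartner .\n";

    public static void main(String[] args) {
        // What the programmer writes: load the model and walk over it,
        // without having to care how the markup was authored.
        Model schema = ModelFactory.createDefaultModel();
        schema.read(new StringReader(SCHEMA), null, "TURTLE");

        schema.listSubjectsWithProperty(RDF.type, RDFS.Class)
              .forEachRemaining(cls -> System.out.println("Class: " + cls.getURI()));
    }
}
```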

On the other hand, it turned out that with all of this additional work come some quite unexpected results:

  1. If you do it properly, you really only have to do it once. We actually found ourselves reusing our first creations quite early, as well as deploying those provided by other people with ease. The Semantic Web’s consistent standardization approach really allows you to repurpose your stuff quite early, just like many others have claimed before without you ever actually getting there.
  2. Semantic Web data clearly takes the pain out of sharing data across company or other technical boundaries, because you already have your processing in place for whatever is going to come in from out there, and vice versa (see the sketch right below).
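To make that point a bit more concrete, here is a minimal sketch (with made-up URLs and property names, again using Jena plus its ARQ query engine): once an external party publishes its data as RDF using a shared vocabulary, the very same SPARQL query works on it without writing a new parser or importer.

```java
import org.apache.jena.query.QueryExecution;
import org.apache.jena.query.QueryExecutionFactory;
import org.apache.jena.query.ResultSet;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;

public class CrossBoundarySketch {
    public static void main(String[] args) {
        // Data published by an external partner; the URL is hypothetical.
        Model partnerData = ModelFactory.createDefaultModel();
        partnerData.read("http://partner.example.org/catalogue.rdf");

        // The same SPARQL query works no matter which party produced the triples,
        // as long as they use (or map to) the shared vocabulary.
        String query =
            "PREFIX ex: <http://example.org/schema#> " +
            "SELECT ?partner ?product WHERE { ?partner ex:offersProduct ?product }";

        try (QueryExecution qe = QueryExecutionFactory.create(query, partnerData)) {
            ResultSet results = qe.execSelect();
            results.forEachRemaining(row ->
                System.out.println(row.get("partner") + " offers " + row.get("product")));
        }
    }
}
```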

Our result: If you are really about to create a single-domain application with only a limited need to exchange data with the outside (such as users uploading files or developers submitting a bunch of parameters), most likely the Semantic Web approach will be a waste of production time and therefore money (at least until better developer tools become available).
 
Nevertheless, the more diverse and independent (!) of each other the various parties are who are supposed to use the resulting application (for instance when you are building an exchange or trading platform), the more it is probably worth going through the accompanying hassle and getting your feet wet with Semantic Web technologies.

Some currently argue that the Semantic Web (which I am admittedly very passionate about…) will never become real or at least useful, because it would need too many people to translate everything on the web and in the world into semantic expressions.
   But who said anything about everything? Prominent non-semantic applications like Wikipedia or even search engines’ ‘suggest’ features have shown us that "enough to be useful" can be reached with several thousand entries, rather than millions or billions.
   Which is actually not very much with regard to the web’s global scale… especially as Semantic Web technology has already been adopted for real applications (often just for internal use) by companies such as Adobe, Vodafone, Audi (the carmaker) and a bunch of other well-known names.
 
This seems to me quite similar to early XML adoption at the end of the last century: no-one really knew whether it was going to be useful or just another IT fad, as they had already seen so many.

So let’s handle Semantic Web technology just like we did back then with XML: wait and see which purposes people will figure out it is usable for… :-) :-)

 
What do YOU think? Is this the way to go? Any other elaborate concepts? I’d really appreciate hearing your input on this!!!