Category Archives: Semantic Web & Web 2.0

The Empire Strikes Back:
NOKIA’s Return To Your Pocket And The Magic Of Context

Sometimes you just don’t see it coming… We all remember the times, not too long ago, when Nokia was selling essentially the same Series 60 phone models for years, just changing their outer design along with the price tag.
 
And despite a couple of surprising successes, like the N770 PDA tablet (which, at the time, we also used to showcase the early SemaWorx prototypes), there obviously was not much initiative inside the company to push the technological boundaries and widen the market for cellphone usage.
 
After all, one could still make a good living from diversifying the existing models, and could even afford to turn Apple away back in the days when the late Steve Jobs approached them to speed up the initial iPhone product development.
 

Though the times, they’re a-changin’: With Apple’s stock price ranging about as high as it may ever get, but their once-superior phone engineering at the same time showing increasingly more weaknesses in both concept and execution, this is the dawn for their competitors again. And since it can be hard to impossible to beat huge corporations such as Apple at the business they themselves created, the guys at Nokia wisely chose to leave to others what others can do better.
 
In the past, Nokia’s own developments were never really on par with their competitors’ products in terms of UI and OS engineering, which in recent years more often than not resulted in quite limited software capabilities even on the newer Nokia handsets. This inevitably led to a shrinking community of application developers willing to take on the hassle of writing software for platforms like Symbian or maemo.org.
 

With the introduction of their new LUMIA smartphone series, however, the NOKIA management left building the smartphone OS to Microsoft and their Windows Phone 8 product. This not only brought a new, sophisticated yet proven system base to their latest generation of completely re-engineered phones without causing too much hassle along the way, but also brought in a huge crowd of new application developers who already had years of experience programming for the Microsoft platform.
 

The other strength which NOKIA has just turned into a USP again is their experience in engineering high-standard cellphones. And even though the current models’ hardware may not always be able to keep up with the installed Windows operating system’s hunger for computational power yet, they focus on the customers’ outcomes by providing a high level of tooling to ease a phone user’s life. And no, I am not thinking so much of the literal “bells & whistles” here, but more of features like
 

  • Multiband Phone Networks (including LTE broadband): Where competitors stick to supporting only certain partner vendors’ networks, Nokia Lumia customers always get the full spectrum.
     
  • High Quality Multi-Brand Location Services: With the strategic purchases of NAVTEQ and earthmine, Nokia product designers get their hands on powerful features like sight-based navigation, making for a nicely designed AR display of your environment, accompanied by corresponding quality guides: Besides the omnipresent crowd-sourced content, Nokia services provide a wide range of professionally authored information from providers like Michelin, HRS or Expedia.
     
  • Realtime Public Transport Information: The one-of-a-kind just-in-time routing for those of us travelling on foot and by public transport.
     
  • Phone Calls: Almost forgot about these. No, they haven’t forgotten to build reliable voice calling into the latest-generation phones. No antenna issues involved here…
     
  • Above-Standard Camera: With photo shooting among the most popular phone features of recent years, Lumia devices come with ZEISS lenses, optical image stabilization, sequence shots and HDR-like processing tools.
     

In my opinion, these new approaches will, in the longer run, put NOKIA at the heart of an ecosystem focusing much less on what is possible than on the services users actually need to navigate through life on a daily basis.
 

The Grassroots Dilemma, The Return Of Browser Wars And
The Death Of The Plug-In

It is with sullenness that I look back to the early 2000s, when webpages always required “specific engineering” simply in order to display properly in certain vendors’ web browsers. It was only a couple of years ago that the market had consolidated and standardized enough to start fading out that practice. All strictly for the birds.

It has indeed been beneficial to online production that web standards (quite unlike their associated markets…) have tended to change only slowly over time, so chances are good that web content will keep displaying properly for years from the date it goes live.

Unfortunately, though, over the years the agony caused by conflicts of interest, political games and bureaucracy at the W3C, the standards-setting institution for the web, created more and more resentment among the more practically engaged part of the online creative community. It had simply become too obvious that the old standards (more often than not an entire decade old) could not keep up with the functional requirements of today’s advanced web applications.

That said, the absence of a credible authority sparked the rise of open opposition by ambitious revolutionaries, putting themselves and their daily needs at the heart of their very own web standards revolt.

As enlightening as the ideas of these freshly founded “working groups” are, by their very nature these concepts lack any kind of official recognition. With a group’s core members (often just 1 to 5 people…) sometimes even refusing to name a final publishing date or a versioning system for their so-called “standards”, it is, from a creator’s perspective, becoming more and more impossible to publish online content which can be reliably assumed to work for most of its prospective users. So, who cares at all?

The part which makes the topic worth discussing is that (after years of rather slow, incremental improvements, and despite the missing assurance about the outcome) browser vendors just seem to have waited for a chance to add tons of brand new funky bells and whistles to their widely adopted software, in order to show off their superiority over any anticipated competitor. But, just as with the standards revolutionaries themselves, every company also tries to add its own approach to deployment. With features unique to each web browser product, the most advanced developers shall eventually be lured away from the competitors’ software, in pretty much the same way Microsoft wasted billions pushing its free Internet Explorer web browser in the late nineties.

The current result appears only too familiar to long-time web developers: we go back in time and again start to engineer every webpage template separately for each software client in question, including the upcoming new mobile ones. And to really get the results right, the required adjustments add at least a third to development costs, which, of course, may be fun for web agencies, but much less so for their customers: companies simply needing these websites to run their business.
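For readers who never had the pleasure, here is a minimal TypeScript sketch of what this per-client plumbing looks like in practice, using the historical requestAnimationFrame prefix dance as an example (a browser environment is assumed; the fallback chain reflects the old vendor-prefixed names):

```typescript
// Accessing one feature across browsers once meant probing every vendor's prefix.
type RAF = (callback: FrameRequestCallback) => number;

const raf: RAF = (
  window.requestAnimationFrame ||
  (window as any).webkitRequestAnimationFrame || // old Chrome/Safari builds
  (window as any).mozRequestAnimationFrame ||    // old Firefox builds
  (window as any).msRequestAnimationFrame ||     // old IE builds
  // Last resort: approximate 60 fps with a plain timer.
  ((cb: FrameRequestCallback) => window.setTimeout(() => cb(Date.now()), 1000 / 60))
).bind(window); // native DOM methods must be called with window as `this`

raf(() => console.log("one animation frame, hopefully on every client"));
```

Multiply this pattern by every new API and every stylesheet property, and the extra third of development costs mentioned above stops looking surprising.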

After all: is there anything in it for the average web user in this game? Sure. Since most revolutions, despite considerable collateral damage, tend not to turn out all bad, there are clear end-user advantages involved:

  1. The death of the plug-in: Already today it is very unlikely that you will need to install additional software just to properly display an average webpage’s content, such as sound, video, animation, immersive imagery or even 3D objects (see the sketch after this list).
     
  2. Easier-to-handle forms: Web forms will start verifying your input as you type and assist you in easily entering appropriate values, especially on mobile devices.
     
  3. High performance content: Previously unseen display quality for web content will become common, at a level that only a couple of years ago was reserved for high-end computer games.
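To make points 1 and 2 concrete, here is a minimal TypeScript feature-detection sketch (a browser DOM environment is assumed) probing whether a client handles media playback and form validation natively, with no plug-in involved:

```typescript
// 1. Native video playback: canPlayType() answers "", "maybe" or "probably".
const video = document.createElement("video");
const nativeVideo =
  typeof video.canPlayType === "function" &&
  video.canPlayType('video/mp4; codecs="avc1.42E01E"') !== "";

// 2. Native form validation: browsers without support fall back to type="text"
//    and will not flag the deliberately invalid value below.
const input = document.createElement("input");
input.setAttribute("type", "email");
input.value = "not-an-address"; // deliberately invalid
const nativeForms =
  input.type === "email" &&
  typeof input.checkValidity === "function" &&
  input.checkValidity() === false;

console.log(`native video: ${nativeVideo}, native form validation: ${nativeForms}`);
```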

So please prepare for the most innovative technical changes to the online experience since the late nineties, and watch out for those just wanting to cash in on you with plain eye candy that will likely be available to a select few only anyway.

QR Chalking: Viable to Mark Up the Real World?

 
Well, admittedly, my first experiments’ results proved less than wonderful. As you can see, I went with a QR stencil made of cardboard, diligently hand-cut from the Amazon package the chalk spray came in.

 

Screen, spray chalk and stencil to add a “quiet zone” around the pattern

 

While the first tests using white paper instead of chalk colour under the matrix stencil turned out quite promising, the final result on asphalt was not readable at all, despite the mandatory “quiet zone” having been added around the actual code display (not shown in the picture) before running the readability tests.
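For anyone wanting to reproduce this: both the quiet zone and the code’s damage tolerance can be dialled in when generating the stencil template. A minimal TypeScript sketch using the npm qrcode package (my tool choice here is an assumption; any generator exposing margin and error-correction options will do, and the encoded URL is a placeholder):

```typescript
// Render a stencil template with a generous quiet zone and maximum
// error correction, so a few bleeding squares stay recoverable.
import QRCode from "qrcode";

QRCode.toFile("stencil-template.png", "https://semaworx.example/chalk", {
  errorCorrectionLevel: "H", // level H tolerates roughly 30% damaged modules
  margin: 4,                 // quiet zone: the 4 modules the spec mandates
  scale: 40,                 // big modules make for easier cardboard cutting
}).catch(console.error);
```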

Anyway, in future experiments I’ll start with a more even surface as a background and use less colour, so that the individual squares hopefully will not bleed into each other as much as they did with this first print.
 

Will keep you updated about future outcomes.
 

The Semantic Web is Meaning Less
(at least to search engines…)

 
When recently launching my SemaWorx SEO and internet marketing shop here in Leipzig, I carefully considered where to put the focus of the work, in order to avoid ending up in the same pot as all the other more or less notable SEOs in the area.
 

So I initially thought it a great idea to deploy my existing experience with semantic data logic for search engine optimization. This has become increasingly popular lately with the rise of RDFa and the adoption of commerce ontologies like GoodRelations by major search engines.
 

As experience in this field is not easily replicable, the knowledge about and deployment of Semantic Web technology could have made for a great USP.
 

But after playing around with these fresh options for a while, and much to my disappointment, I discovered that (at least at the time of this writing) most relevant search engines, including Google, do not actually parse the semantic markup, but rather string-search it along with the rest of the respective page.
 

What may sound quite reasonable from an efficiency or productivity point of view unfortunately also misses an important opportunity derived from the triple nature of RDF data: to match and correlate information across different domains, which could help filter a lot of false positives out of search engine results. It also leads to strange feature restrictions, like the ability to recognize only one product per page, which makes semantic markup rather useless e.g. for catalogues or category overviews.
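To illustrate what gets lost, here is a minimal TypeScript sketch (all identifiers and the GoodRelations-style property names are simplified placeholders): because RDF statements are subject-predicate-object triples, two datasets that merely share an identifier can be joined across domains, something a flat string search of each page cannot do.

```typescript
// An RDF statement boils down to subject-predicate-object; shared
// identifiers (here, an EAN) act as join keys across data sources.
interface Triple { s: string; p: string; o: string }

// Offer data as it might be extracted from a shop page (GoodRelations-style).
const shop: Triple[] = [
  { s: "shop:offer42", p: "gr:includes", o: "ean:4012345678901" },
  { s: "shop:offer42", p: "gr:hasCurrencyValue", o: "199.00" },
];

// Review data from an entirely different domain.
const reviews: Triple[] = [
  { s: "rev:item77", p: "rev:describes", o: "ean:4012345678901" },
  { s: "rev:item77", p: "rev:rating", o: "4.5" },
];

// Correlate the two sources via the shared EAN (a join, not a string match).
for (const offer of shop.filter(t => t.p === "gr:includes")) {
  for (const match of reviews.filter(r => r.p === "rev:describes" && r.o === offer.o)) {
    const rating = reviews.find(r => r.s === match.s && r.p === "rev:rating")?.o;
    console.log(`${offer.s} is reviewed by ${match.s} with rating ${rating}`);
  }
}
```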
 

That said, apart from early island solutions like Intel’s Mash Maker, not a lot of companies have by now successfully managed to use the structured data of semantically rich webpages and correlate it with content from other domains.
 

Nonetheless, I’ll dare to offer semantically enhanced SEO services which we create, for use with Google’s Universal Search or Yahoo!’s SearchMonkey, as fully query-able semantic data endpoints, so that not only search robots, but any application willing to use and promote the outcomes, will be able to access these in real time.
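As a sketch of what such a query-able endpoint means in practice (the endpoint URL is a hypothetical placeholder, and the query pattern is simplified GoodRelations): any client can send a standard SPARQL query over plain HTTP and get machine-readable JSON back.

```typescript
// Query a SPARQL endpoint over HTTP, per the standard SPARQL Protocol.
const endpoint = "https://semaworx.example/sparql"; // placeholder URL
const query = `
  PREFIX gr: <http://purl.org/goodrelations/v1#>
  SELECT ?offer ?price WHERE {
    ?offer gr:hasCurrencyValue ?price .
  } LIMIT 10
`;

fetch(`${endpoint}?query=${encodeURIComponent(query)}`, {
  headers: { Accept: "application/sparql-results+json" }, // standard result format
})
  .then(res => res.json())
  .then(data => console.table(data.results.bindings))
  .catch(console.error);
```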
 

Sounds interesting? Wanna give it a try? Then you are very welcome to get in touch with us via the Semantic Search Engine Optimization site.
 

Fascinating Insights from the First Ontologist Workshop at Max-Planck-Institute Leipzig

 
Having joined the OBML workgroup‘s 1st workshop at the Max Planck Institute for Evolutionary Anthropology in Leipzig for the last two days, I am left quite impressed.
 

Not only by the presenters’ track records, but also by the sheer number and diversity of researchers and practitioners Prof. Dr. rer. nat. habil. Heinrich Herre managed to bring together for the event. Usually, such gatherings here in Germany consist of ten people at most ;-), but this time there were more than 50 experts, including international guests, from institutions and companies all over the country.
 

Nevertheless, something extremely reassuring that I am taking away is not only the obvious progress in the ontology sector, but also the fact that even the most renowned experts in the field struggle with the very same technical insufficiencies as we do when designing new data models for our SemaWorx application base.
 

We therefore also had a very enlightening (and, it goes without saying, entertaining…) evening at the local Cafe Madrid, where over a delicious meal the discussion focused more on the practically achievable short-term benefits of current semantic modeling approaches than on their long-term optimization wishlist (hopefully I’m not getting too imprecise here… ;-)).