If HTML is the way developers get information into Google’s search engine, metadata is the way developers will get data into Facebook’s semantic search engine, which will be built on the company’s “Open Graph”. Through easy-to-implement plugins, Facebook is rapidly collecting structured data on every user. Facebook has also upgraded its API to make building on top of the Open Graph a much easier process. What’s pretty clear is that this is an attempt to take on the reigning search giant.
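That metadata takes the form of Open Graph meta tags in a page’s `<head>`. A minimal sketch for a hypothetical movie page (the URL and values below are illustrative placeholders, not from any real site):

```html
<head prefix="og: http://ogp.me/ns#">
  <!-- Core Open Graph properties; values are placeholders for illustration -->
  <meta property="og:title" content="The Rock" />
  <meta property="og:type" content="movie" />
  <meta property="og:url" content="http://www.example.com/movies/the-rock" />
  <meta property="og:image" content="http://www.example.com/images/the-rock.jpg" />
  <meta property="og:site_name" content="Example Movies" />
</head>
```

With these tags in place, a Like button on the page tells Facebook not just that a user liked a URL, but that the URL represents a movie with a title and an image — which is exactly the structured data a semantic search engine needs.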
While the concept of the semantic web has been around for years, very few companies have a shot at building a semantic search engine. Companies like Adaptive Blue and others have attempted it, but none of them have the scale that Facebook has. As Mark Zuckerberg said on stage an hour ago, by the end of the day Facebook should have more than 1 billion likes, and that data will grow exponentially.
It’s an insanely ambitious project, but Facebook has two things working in its favor: a user base of more than 400 million people and a suite of easy-to-implement social plugins that will help collect structured data from around the web. Overnight, Facebook has essentially launched the semantic search engine of the future. We’ll have to wait and see how Google, the current search incumbent, attempts to compete in this space.
As some developers have pointed out, a number of standards for structured markup already exist — microformats being the most widely adopted — but the lower implementation friction means Facebook has a better shot at collecting the data quickly. The race to build the semantic web is on, and developers and website owners now have the tools to start immediately.
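For comparison, microformats embed structure in the class attributes of visible HTML rather than in separate meta tags. A minimal hCard sketch (the names and URL here are illustrative):

```html
<!-- hCard microformat: machine-readable contact info inside visible markup -->
<div class="vcard">
  <a class="fn url" href="http://example.com">Jane Doe</a>,
  <span class="org">Example Corp</span>
</div>
```

It works, but it asks publishers to restructure their visible markup; Facebook’s approach only asks them to paste a few tags into the page head, which is part of why adoption friction is so much lower.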