Web 3.0? Is there really such a thing or did someone run out of post names?
So here we have the term Web 3.0 coming into common usage. The problem is, what does it actually mean? Many are scratching their heads and asking, “isn’t that what Web 2.0 was supposed to be about?”
The reference to Web 3.0 was made by members of the W3C to describe the desire for a Semantic Web. This new “Semantic Web” was to be user-friendly and interactive… (I know; sounds familiar, right?). In layman’s terms it sounds virtually the same as Web 2.0; the difference appears when we look at the core problems with Web 2.0 and the theories behind it.
Web 2.0 was meant to give us an interactive world of exchanging ideas in an intelligent way. Many would argue about how intelligent the exchange has actually been, but intelligent communication was the intent. People have built on these principles to pass information across the internet in a manner that resembles intelligence, but have found one major piece of the equation was left out…
The computers didn’t know what we were saying!
The main purpose of Web 3.0 is to cooperatively enable computers to understand the connections being made within the internet. By enabling them to understand the importance that real people place on content, the systems will be able to weed out the Black Hat and scam jobs (at least for a little while).
Here’s the relevant passage from Wikipedia:
Humans are capable of using the Web to carry out tasks such as finding the Irish word for “directory”, reserving a library book, and searching for a low price for a DVD. However, one computer cannot accomplish all of these tasks without human direction, because web pages are designed to be read by people, not machines. The semantic web is a vision of information that is understandable by computers, so computers can perform more of the tedious work involved in finding, combining, and acting upon information on the web.
Tim Berners-Lee originally expressed the vision of the semantic web as follows:
I have a dream for the Web [in which computers] become capable of analyzing all the data on the Web – the content, links, and transactions between people and computers. A ‘Semantic Web’, which should make this possible, has yet to emerge, but when it does, the day-to-day mechanisms of trade, bureaucracy and our daily lives will be handled by machines talking to machines. The ‘intelligent agents’ people have touted for ages will finally materialize.
– Tim Berners-Lee, 1999
Semantic publishing will benefit greatly from the semantic web. In particular, the semantic web is expected to revolutionize scientific publishing, such as real-time publishing and sharing of experimental data on the Internet. This simple but radical idea is now being explored by W3C HCLS group’s Scientific Publishing Task Force.
Semantic Web application areas are experiencing intensified interest due to the rapid growth in the use of the Web, together with the innovation and renovation of information content technologies. The Semantic Web is regarded as an integrator across different content and information applications and systems, and provides mechanisms for the realisation of Enterprise Information Systems. The rapidity of the growth experienced provides the impetus for researchers to focus on the creation and dissemination of innovative Semantic Web technologies, where the envisaged ‘Semantic Web’ is long overdue. Often the terms ‘semantics’, ‘metadata’, ‘ontologies’ and ‘Semantic Web’ are used inconsistently. In particular, these terms are used as everyday terminology by researchers and practitioners, spanning a vast landscape of different fields, technologies, concepts and application areas. Furthermore, there is confusion with regard to the current status of the enabling technologies envisioned to realise the Semantic Web. In a paper presented by Gerber, Barnard and Van der Merwe, the Semantic Web landscape is charted and a brief summary of related terms and enabling technologies is presented. The architectural model proposed by Tim Berners-Lee is used as a basis to present a status model that reflects current and emerging technologies.
What does this mean for developers?
More work… more education… longer nights.
For those true to the desire of solving the next development or SEO equation, this is more tantalizing than tantrum-causing. Those who enjoy doing quality SEO work will see this only as a challenge. It is also a much-needed improvement that will kill off the scammers who tarnish the face of our business.
What this means for the “No Talent A$$ Clowns” using Black and Grey Hat Techniques:
You had better learn some real SEO, or your days are numbered. The W3C has issued threats like these many times before, but never before has it published exact concepts that will be so easily incorporated by both Google and MSN. There will always be those who skirt the system, but most will find their cheap tricks no longer working after these new rules are implemented.
Here is the basic visual guide and reference to the new Semantic Solution. I’ll leave links to the entire Wiki article.
We will be evaluating the software available for the changes and should have a review within the next few weeks…
“People keep asking what Web 3.0 is. I think maybe when you’ve got an overlay of scalable vector graphics – everything rippling and folding and looking misty – on Web 2.0 and access to a semantic Web integrated across a huge space of data, you’ll have access to an unbelievable data resource…”
– Tim Berners-Lee, 2006
Relationship to the hypertext web
Limitations of HTML
Many files on a typical computer can be loosely divided into documents and data. Documents, like mail messages, reports, and brochures, are read by humans. Data, like calendars, address books, playlists, and spreadsheets, are presented using an application program which lets them be viewed, searched, and combined in many ways.
Currently, the World Wide Web is based mainly on documents written in Hypertext Markup Language (HTML), a markup convention that is used for coding a body of text interspersed with multimedia objects such as images and interactive forms. Metadata tags, for example
<meta name="keywords" content="computing, computer studies, computer">
<meta name="description" content="Cheap widgets for sale">
<meta name="author" content="John Doe">
provide a method by which computers can categorise the content of web pages.
With HTML and a tool to render it (perhaps web browser software, perhaps another user agent), one can create and present a page that lists items for sale. The HTML of this catalog page can make simple, document-level assertions such as “this document’s title is ‘Widget Superstore’”, but there is no capability within the HTML itself to assert unambiguously that, for example, item number X586172 is an Acme Gizmo with a retail price of €199, or that it is a consumer product. Rather, HTML can only say that the span of text “X586172” is something that should be positioned near “Acme Gizmo” and “€199”, etc. There is no way to say “this is a catalog” or even to establish that “Acme Gizmo” is a kind of title or that “€199” is a price. There is also no way to express that these pieces of information are bound together in describing a discrete item, distinct from other items perhaps listed on the page.
Semantic HTML refers to the traditional HTML practice of markup following intention, rather than specifying layout details directly: for example, the use of <em>, denoting “emphasis”, rather than <i>, which specifies italics. Layout details are left up to the browser, in combination with Cascading Style Sheets. But this practice falls short of specifying the semantics of objects such as items for sale or prices.
Semantic Web solutions
The Semantic Web takes the solution further. It involves publishing in languages specifically designed for data: Resource Description Framework (RDF), Web Ontology Language (OWL), and Extensible Markup Language (XML). HTML describes documents and the links between them. RDF, OWL, and XML, by contrast, can describe arbitrary things such as people, meetings, or airplane parts. Tim Berners-Lee calls the resulting network of Linked Data the Giant Global Graph, in contrast to the HTML-based World Wide Web.
These technologies are combined in order to provide descriptions that supplement or replace the content of Web documents. Thus, content may manifest itself as descriptive data stored in Web-accessible databases, or as markup within documents (particularly, in Extensible HTML (XHTML) interspersed with XML, or, more often, purely in XML, with layout or rendering cues stored separately). The machine-readable descriptions enable content managers to add meaning to the content, i.e., to describe the structure of the knowledge we have about that content. In this way, a machine can process knowledge itself, instead of text, using processes similar to human deductive reasoning and inference, thereby obtaining more meaningful results and helping computers to perform automated information gathering and research.
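To make this concrete, here is a sketch of how the catalog item from the HTML discussion above could be described in RDF, written in Turtle notation. The URIs and the schema.org vocabulary are illustrative choices, not something prescribed by the Semantic Web itself:

```turtle
@prefix ex:     <http://example.org/catalog#> .
@prefix schema: <http://schema.org/> .
@prefix xsd:    <http://www.w3.org/2001/XMLSchema#> .

# The item is identified by a URI, not by a span of text on a page
ex:X586172 a schema:Product ;
    schema:name   "Acme Gizmo" ;
    schema:offers [
        a schema:Offer ;
        schema:price         "199"^^xsd:decimal ;
        schema:priceCurrency "EUR"
    ] .
```

Unlike the HTML catalog page, these statements say unambiguously that X586172 is a product, that “Acme Gizmo” is its name, and that €199 is its price, all bound to the same item.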
An example of a tag that would be used in a non-semantic web page:
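A minimal sketch (the <item> element and its content are illustrative):

```html
<item>cat</item>
```

To a machine this is just an element wrapping the string “cat”; nothing says what a cat is.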
Encoding similar information in a semantic web page might look like this:
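A minimal sketch, where the rdf:about attribute points at an illustrative DBpedia URI identifying the concept itself rather than just the text:

```html
<item rdf:about="http://dbpedia.org/resource/Cat">Cat</item>
```

Here the element no longer merely displays the word “Cat”; it refers to a globally identified resource that other data on the Web can also point to.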
Relationship to object oriented programming
A number of authors highlight the similarities which the Semantic Web shares with object-oriented programming (OOP). Both the Semantic Web and object-oriented programming have classes with attributes and the concept of instances or objects. Linked Data uses dereferenceable Uniform Resource Identifiers in a manner similar to the common programming concept of pointers or “object identifiers” in OOP. Dereferenceable URIs can thus be used to access “data by reference”. The Unified Modeling Language is designed to communicate about object-oriented systems, and can thus be used for both object-oriented programming and Semantic Web development.
When the web was first being created in the late 1980s and early 1990s, it was built using object-oriented technologies such as Objective-C, Smalltalk and CORBA. In the mid-1990s this development practice was furthered with the announcement of the Enterprise Objects Framework, Portable Distributed Objects and WebObjects, all by NeXT, in addition to the Component Object Model released by Microsoft. XML was then released in 1998, and RDF a year after, in 1999.
Similarity to object-oriented programming also came from two other routes: the first was the development of the very knowledge-centric “Hyperdocument” systems by Douglas Engelbart, and the second came from the usage and development of the Hypertext Transfer Protocol.