Google Authorship Is Dead & What That Means For Your SEO Strategy

Authorship markup was first unveiled by Google in June 2011, and SEO techs everywhere rejoiced. Its roots can be traced back to the company's Agent Rank patent of 2007. Bill Slawski, an expert on Google's patents, describes Agent Rank as a system wherein multiple pieces of content are connected by a digital signature representing one or more "agents" (authors).

Three years after Google Authorship was launched, the company decided to discontinue the project, and SEOs everywhere cried a little. The announcement came from John Mueller of Google Webmaster Tools, who posted it on Google+. According to Mueller, Google would stop displaying authorship in Google Search, and would likewise no longer track data from content carrying the rel=author markup in SERP rankings.
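For reference, the markup being retired tied a page to an author's Google+ profile. A typical implementation looked roughly like this (the profile URL below is a placeholder):

<!-- byline link on the article page pointing to the author's Google+ profile -->
<a href="https://plus.google.com/112345678901234567890?rel=author">Jane Doe</a>
<!-- or, site-wide, as a link element in the document head -->
<link rel="author" href="https://plus.google.com/112345678901234567890"/>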

Google noticed that displaying the authorship information wasn't as useful as the company had thought it would be, and in some cases it could even distract from the results. For these reasons, Google decided to axe the Authorship project.

Don't discount Author Rank as a result of this change and the reduced snippets, though.

According to Search Engine Land:

Author Rank Is Real — And Continues!

Schmidt was just speculating in his book, not describing anything that was actually happening at Google. From Google itself, there was talk several times last year of making use of Author Rank as a way to identify subject experts and somehow boost them in the search results:

  • Google Authority Boost: Google’s Algorithm To Determine Which Site Is A Subject Authority, May 2013
  • Google’s Matt Cutts: Someday, Perhaps Ranking Benefits From Using Rel=”Author”, June 2013
  • Google Still Working On Promoting Subject-Specific Authorities In Search Results, December 2013

That was still all talk. The first real action came in March of this year. After Amit Singhal, the head of Google Search, said that Author Rank still wasn't being used, the head of Google’s web spam team gave a caveat of where Author Rank was used: for the “In-depth articles” section, when it sometimes appears, of Google’s search results.

Google has said that dropping Authorship shouldn't have an impact on how the In-depth articles section works, so strong writers' SEO platforms should remain intact. Google also explained that dropping Authorship won't affect its other efforts to reward authors who consistently produce quality, engaging content.

If you read the portion above, you're likely scratching your head: how can there be Author Rank without authorship, when Google has also said it's ignoring authorship markup? The answer is that Google has other ways to identify the author of a quality story, if it wants. In particular, Google is likely to look for the visible bylines and citations that commonly appear on news stories and blog posts. These existed before Google Authorship, and they aren't going away. One thing to keep in mind: you will want to ensure that all of your titled work is consolidated under the one author name you want tracked.
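In practice, that consolidation means little more than keeping an ordinary, visible byline consistent across everything you publish. A minimal sketch (the class names here are arbitrary, not anything Google requires):

<!-- a plain, human-readable byline; no rel=author markup involved -->
<div class="byline">By <span class="author-name">Jane Doe</span>, Staff Writer</div>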

What Are The Best Management Groups For My PPC Campaign?

Last week we met with a client concerning his pay-per-click (PPC) managed account. While reviewing his keywords for SEO, we looked at the keywords he was paying for. To our surprise, he was paying top dollar for items that seemed just short of criminal. While Naper Design has at times taken a stance against PPC campaigns, we do recognize the need for strong advertising management in professional settings. Often the view has been that we are completely anti-PPC ads; that would be a very incorrect interpretation of our core beliefs. The problems we have are with managed accounts that sell and bid on keywords that are unneeded or too expensive, that sell to competing groups, and that engage in dishonest practices. There are PPC management groups out there that are honest and provide quality service, but when looking for an advertising group, it would be wise to check that the following keyword issues are not arising:

1. Paying For What You Don't Have To

Our encounter last week began with a web design client of ours asking us to add some of his PPC keywords into his site. This is perfectly common and highly encouraged: the better the keyword scores in the content, the less the keyword will cost... (so use it). What surprised us was that the client was being charged for any and all variations of his own business name. Even worse, he was being charged for his competitors' business names. Keywords are generally for prospective customers who don't yet know the business they are looking for, not for assured clients who already know who you are. Title tags should be used to make sure that your business name keeps you up top; paying for a listing directly above your own organic listing is absolutely ridiculous and highly dishonest on the part of your marketing management team. If you find that you are paying for your own business name, demand an audit of clicks to that keyword and a refund for them, and ensure it is never added to a campaign again. (The only exception to this rule is for large companies plagued by scam sites attempting to siphon their traffic; sadly, those still need to pay for their own name.)
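As an illustration (the wording here is hypothetical, not an actual client's tag), the kind of title tag that keeps your organic listing on top for your own name costs you nothing per click:

<!-- hypothetical example: the business name leads the page title -->
<title>Naper Design | Naperville Web Design and Marketing</title>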

2. Wanted But Unneeded PPC Keywords

At what point are you happy with your position in the search engines? Some argue for first place on everything; others settle for the bottom of the first page. The debate on placement will continue to rage with no winners but several losers, and it's so case-specific that nothing can be considered doctrine. The only certainty is that there is a point where the owner should feel comfortable in his organic rankings and comfortable in his search engine marketing (SEM). If a keyword costs $50.00 per click but you rank in the middle of page 2, is it worth paying for a listing? The hard answer is "probably not, but possibly maybe." No one can say with absolute assurance that this keyword will inspire a client, and no one can say with any certainty that a client will not be born from someone clicking the ad. The point is to maximize the performance of your ad campaign. This requires sitting down with a calculator and a rank-checking script (we use Traffic Travis because the cartoon is cute) and weighing which keywords will pay for themselves and provide the best profit for the money spent. This process will not be painless, and it should not be quick: both parties should delve into the need for each keyword to make sure it is effective. If this step is blown off or rushed by your advertising representative, be cautious. Nothing is worse than moving too quickly through a situation and purchasing something that wasn't needed. Proper management of your funds will lead to better returns on your investments.
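To put rough numbers on that calculator session (the figures are hypothetical): at $50.00 per click and a 2% conversion rate, one new customer costs on average $50.00 / 0.02 = $2,500. If an average customer is worth less than $2,500 to the business, the keyword cannot pay for itself no matter how well the ad is written; if an average customer is worth $10,000, it may be a bargain. Run that same division for every keyword before you bid.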

3. Dishonest Practices

Most advertising management companies that you can meet face to face are going to be reputable. There are a few out there who are not honest, but they often don't survive in a market built on the value of public image. Most dishonest practices tend to come from cold-calling representatives who either refuse to meet in person or can't because they live somewhere completely out of range. It is highly suggested that you run a simple Google search on a rep's phone number as soon as you hear from them. In this day and age of shared information, people are quick to report harassing calls from marketers and, more importantly, the ones selling a scam. While it's true that not every company can be met with face to face, it is wise to teleconference only with companies that have proven themselves reputable. If you conduct a search and find nothing but complaints, there's a good possibility you will want to avoid their practices. There are companies that have been scammed, and as much as we fight against these practices, the ultimate goal for you as a business owner is to safeguard your business.

Keep in mind that not every marketing company is suitable for your business; likewise, your business might not be suitable for the marketing agency. Each agency has its strong suits, and they may differ from your company's goals. Come prepared with questions relevant to your business goals. After all, no matter how reputable, honest, and effective the marketing company may be, if they don't know your exact goals for the future, they will be unable to perform at peak. For a list of trusted marketing agents, we suggest asking businesses that have been around for some time which companies they have worked with, and studying the trends in the conversation. As in all situations, if you stick with the winners, you'll follow their path.

Web 3.0? Is there really such a thing or did someone run out of post names?

So here we have the term Web 3.0 coming into common usage. The problem is, what does it actually mean? Many are scratching their heads and asking, "Isn't that what Web 2.0 was supposed to be about?" The term Web 3.0 was used by members of the W3C to describe the desire for a Semantic Web. This new "Semantic Web" was to be user-friendly and interactive... (I know; sounds familiar, right?). In layman's terms it sounds virtually the same as Web 2.0; the difference appears when we look at the core problems with Web 2.0 and the theories behind it.
LOL Cats were the greatest gift of Web 2.0
Web 2.0 was to give us an interactive world for exchanging ideas in an intelligent way. Many would argue about how intelligent the exchange has actually been, but intelligent communication was the intent. People have built on these principles to pass information across the internet in a manner that resembled intelligence, but one major piece of the equation was left out...
The computers didn't know what we were saying!
The main purpose of Web 3.0 is to enable the computers to understand the connections being made within the internet. By enabling them to understand the importance that real people place on content, the systems will be able to weed out the black-hat and scam jobs (at least for a little while). Here's the relevant passage from Wikipedia: "Humans are capable of using the Web to carry out tasks such as finding the Irish word for "directory", reserving a library book, and searching for a low price for a DVD. However, a computer cannot accomplish all of these tasks without human direction, because web pages are designed to be read by people, not machines. The semantic web is a vision of information that is understandable by computers, so computers can perform more of the tedious work involved in finding, combining, and acting upon information on the web. Tim Berners-Lee originally expressed the vision of the semantic web as follows:[6]
I have a dream for the Web [in which computers] become capable of analyzing all the data on the Web – the content, links, and transactions between people and computers. A ‘Semantic Web’, which should make this possible, has yet to emerge, but when it does, the day-to-day mechanisms of trade, bureaucracy and our daily lives will be handled by machines talking to machines. The ‘intelligent agents’ people have touted for ages will finally materialize. – Tim Berners-Lee, 1999
Semantic publishing will benefit greatly from the semantic web. In particular, the semantic web is expected to revolutionize scientific publishing, such as real-time publishing and sharing of experimental data on the Internet. This simple but radical idea is now being explored by the W3C HCLS group's Scientific Publishing Task Force. Semantic Web application areas are experiencing intensified interest due to the rapid growth in the use of the Web, together with the innovation and renovation of information content technologies. The Semantic Web is regarded as an integrator across different content and information applications and systems, and provides mechanisms for the realisation of Enterprise Information Systems. The rapidity of the growth experienced provides the impetus for researchers to focus on the creation and dissemination of innovative Semantic Web technologies, where the envisaged 'Semantic Web' is long overdue. Often the terms 'Semantics', 'metadata', 'ontologies' and 'Semantic Web' are used inconsistently. In particular, these terms are used as everyday terminology by researchers and practitioners, spanning a vast landscape of different fields, technologies, concepts and application areas. Furthermore, there is confusion with regard to the current status of the enabling technologies envisioned to realise the Semantic Web. In a paper presented by Gerber, Barnard and Van der Merwe,[7] the Semantic Web landscape is charted and a brief summary of related terms and enabling technologies is presented. The architectural model proposed by Tim Berners-Lee is used as a basis to present a status model that reflects current and emerging technologies."

What this means for developers? More work, more education, and longer nights. For those true to the desire of solving the next development or SEO equation, this is more tantalizing than tantrum-causing. Those who enjoy doing quality SEO work will only see this as a challenge. It is also a much-needed improvement that should kill off the scammers who blacken the face of our business.

What this means for the "No Talent A$$ Clowns" using black- and grey-hat techniques: you had better learn some real SEO, because your days are numbered. The W3C has issued threats like these many times before, but never before has it published exact concepts that can be so readily incorporated by both Google and MSN. There will always be those who skirt the system, but most will find their cheap tricks no longer working once these new rules are implemented.

Below is the basic visual guide and reference to the new Semantic Solution; I'll leave links across to the entire Wiki article. We will be evaluating the software available for the changes and should have a review within the next few weeks... Cheers.

Web 3.0

Tim Berners-Lee has described the semantic web as a component of 'Web 3.0'.[9]
"People keep asking what Web 3.0 is. I think maybe when you've got an overlay of scalable vector graphics - everything rippling and folding and looking misty - on Web 2.0 and access to a semantic Web integrated across a huge space of data, you'll have access to an unbelievable data resource..."

Tim Berners-Lee, 2006

Relationship to the hypertext web

Limitations of HTML

Many files on a typical computer can be loosely divided into documents and data. Documents like mail messages, reports, and brochures are read by humans. Data, like calendars, addressbooks, playlists, and spreadsheets are presented using an application program which lets them be viewed, searched and combined in many ways. Currently, the World Wide Web is based mainly on documents written in Hypertext Markup Language (HTML), a markup convention that is used for coding a body of text interspersed with multimedia objects such as images and interactive forms. Metadata tags, for example
<meta name="keywords" content="computing, computer studies, computer">
<meta name="description" content="Cheap widgets for sale">
<meta name="author" content="John Doe">
provide a method by which computers can categorise the content of web pages. With HTML and a tool to render it (perhaps web browser software, perhaps another user agent), one can create and present a page that lists items for sale. The HTML of this catalog page can make simple, document-level assertions such as "this document's title is 'Widget Superstore'", but there is no capability within the HTML itself to assert unambiguously that, for example, item number X586172 is an Acme Gizmo with a retail price of €199, or that it is a consumer product. Rather, HTML can only say that the span of text "X586172" is something that should be positioned near "Acme Gizmo" and "€199", etc. There is no way to say "this is a catalog" or even to establish that "Acme Gizmo" is a kind of title or that "€199" is a price. There is also no way to express that these pieces of information are bound together in describing a discrete item, distinct from other items perhaps listed on the page.

Semantic HTML refers to the traditional HTML practice of markup following intention, rather than specifying layout details directly. For example, the use of <em> denoting "emphasis" rather than <i>, which specifies italics. Layout details are left up to the browser, in combination with Cascading Style Sheets. But this practice falls short of specifying the semantics of objects such as items for sale or prices. Microformats represent unofficial attempts to extend HTML syntax to create machine-readable semantic markup about objects such as retail stores and items for sale.

Semantic Web solutions

The Semantic Web takes the solution further. It involves publishing in languages specifically designed for data: Resource Description Framework (RDF), Web Ontology Language (OWL), and Extensible Markup Language (XML). HTML describes documents and the links between them. RDF, OWL, and XML, by contrast, can describe arbitrary things such as people, meetings, or airplane parts. Tim Berners-Lee calls the resulting network of Linked Data the Giant Global Graph, in contrast to the HTML-based World Wide Web. These technologies are combined in order to provide descriptions that supplement or replace the content of Web documents. Thus, content may manifest itself as descriptive data stored in Web-accessible databases [10], or as markup within documents (particularly, in Extensible HTML (XHTML) interspersed with XML, or, more often, purely in XML, with layout or rendering cues stored separately). The machine-readable descriptions enable content managers to add meaning to the content, i.e., to describe the structure of the knowledge we have about that content. In this way, a machine can process knowledge itself, instead of text, using processes similar to human deductive reasoning and inference, thereby obtaining more meaningful results and helping computers to perform automated information gathering and research. An example of a tag that would be used in a non-semantic web page:
<item>cat</item>
Encoding similar information in a semantic web page might look like this:
<item rdf:about="http://dbpedia.org/resource/Cat">Cat</item>
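To tie this back to the "Acme Gizmo" catalog example above, here is a rough sketch of the same listing annotated with RDFa so that the product name and price become machine-readable statements. The vocabulary and attribute choices are illustrative only; RDFa permits many vocabularies:

<div vocab="http://schema.org/" typeof="Product">
  <!-- each property attribute turns visible text into a machine-readable assertion -->
  <span property="sku">X586172</span>:
  <span property="name">Acme Gizmo</span>,
  <span property="offers" typeof="Offer">
    <span property="price" content="199">€199</span>
    <meta property="priceCurrency" content="EUR"/>
  </span>
</div>

A crawler reading this can now assert that X586172 is a Product priced at 199 euros, which is exactly the kind of statement the plain-HTML catalog page could not make.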

Relationship to object oriented programming

A number of authors highlight the similarities which the Semantic Web shares with object-oriented programming (OOP).[11][12] Both the semantic web and object-oriented programming have classes with attributes and the concept of instances or objects. Linked Data uses Dereferenceable Uniform Resource Identifiers in a manner similar to the common programming concept of pointers or "object identifiers" in OOP. Dereferenceable URIs can thus be used to access "data by reference". The Unified Modeling Language is designed to communicate about object-oriented systems, and can thus be used for both object-oriented programming and semantic web development. When the web was first being created in the late 1980s and early 1990s, it was done using object-oriented programming languages[citation needed] such as Objective-C, Smalltalk and CORBA. In the mid-1990s this development practice was furthered with the announcement of the Enterprise Objects Framework, Portable Distributed Objects and WebObjects all by NeXT, in addition to the Component Object Model released by Microsoft. XML was then released in 1998, and RDF a year after in 1999. Similarity to object oriented programming also came from two other routes: the first was the development of the very knowledge-centric "Hyperdocument" systems by Douglas Engelbart[13], and the second comes from the usage and development of the Hypertext Transfer Protocol.[14][clarification needed]

Skeptical reactions

Practical feasibility

Critics (e.g. Which Semantic Web?) question the basic feasibility of a complete or even partial fulfillment of the semantic web. Cory Doctorow's critique ("metacrap") is from the perspective of human behavior and personal preferences. For example, people lie: they may include spurious metadata into Web pages in an attempt to mislead Semantic Web engines that naively assume the metadata's veracity. This phenomenon was well known with metatags that fooled the AltaVista ranking algorithm into elevating the ranking of certain Web pages: the Google indexing engine specifically looks for such attempts at manipulation. Peter Gärdenfors and Timo Honkela point out that logic-based semantic web technologies cover only a fraction of the relevant phenomena related to semantics.[15][16] Where semantic web technologies have found a greater degree of practical adoption, it has tended to be among core specialized communities and organizations for intra-company projects.[17] The practical constraints toward adoption have appeared less challenging where domain and scope are more limited than that of the general public and the World-Wide Web.[17]

The potential of an idea in fast progress

The original 2001 Scientific American article by Berners-Lee described an expected evolution of the existing Web to a Semantic Web.[18] A complete evolution as described by Berners-Lee has yet to occur. In 2006, Berners-Lee and colleagues stated that: "This simple idea, however, remains largely unrealized."[19] While the idea is still in the making, it seems to evolve quickly and inspire many. Between 2007 and 2010, several scholars explored first applications and the social potential of the semantic web in the business and health sectors, for social networking,[20] and even for the broader evolution of democracy, specifically, how a society forms its common will in a democratic manner through a semantic web.[21]

Censorship and privacy

Enthusiasm about the semantic web could be tempered by concerns regarding censorship and privacy. For instance, text-analyzing techniques can now be easily bypassed by using other words (metaphors, for instance) or by using images in place of words. An advanced implementation of the semantic web would make it much easier for governments to control the viewing and creation of online information, as this information would be much easier for an automated content-blocking machine to understand. In addition, the issue has also been raised that, with the use of FOAF files and geolocation metadata, there would be very little anonymity associated with the authorship of articles on things such as a personal blog.

Doubling output formats

Another criticism of the semantic web is that it would be much more time-consuming to create and publish content because there would need to be two formats for one piece of data: one for human viewing and one for machines. However, many web applications in development are addressing this issue by creating a machine-readable format upon the publishing of data or the request of a machine for such data. The development of microformats has been one reaction to this kind of criticism. Specifications such as eRDF and RDFa allow arbitrary RDF data to be embedded in HTML pages. The GRDDL (Gleaning Resource Descriptions from Dialects of Language) mechanism allows existing material (including microformats) to be automatically interpreted as RDF, so publishers only need to use a single format, such as HTML.

Need

The idea of a semantic web, able to describe and associate meaning with data, necessarily involves more than simple XHTML mark-up code. It is based on an assumption that, for machines to be able to accurately interpret web-homed content, far more than the mere ordered relationships of letters and words is necessary as underlying infrastructure (attendant to semantic issues). Otherwise, most of the supportive functionality would have been available in Web 2.0 (and before), and it would have been possible to derive a semantically capable Web with minor, incremental additions. Additions to the infrastructure to support semantic functionality include latent dynamic network models that can, under certain conditions, be 'trained' to appropriately 'learn' meaning based on order data, in the process 'learning' relationships with order (a kind of rudimentary working grammar). See, for example, latent semantic analysis.

Components

The semantic web comprises the standards and tools of XML, XML Schema, RDF, RDF Schema and OWL that are organized in the Semantic Web Stack. The OWL Web Ontology Language Overview describes the function and relationship of each of these components of the semantic web:
  • XML provides an elemental syntax for content structure within documents, yet associates no semantics with the meaning of the content contained within.
  • XML Schema is a language for providing and restricting the structure and content of elements contained within XML documents.
  • RDF is a simple language for expressing data models, which refer to objects ("resources") and their relationships. An RDF-based model can be represented in XML syntax (a brief sketch follows this list).
  • RDF Schema extends RDF and is a vocabulary for describing properties and classes of RDF-based resources, with semantics for generalized-hierarchies of such properties and classes.
  • OWL adds more vocabulary for describing properties and classes: among others, relations between classes (e.g. disjointness), cardinality (e.g. "exactly one"), equality, richer typing of properties, characteristics of properties (e.g. symmetry), and enumerated classes.
  • SPARQL is a protocol and query language for semantic web data sources.
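As a minimal sketch of how the lower layers combine (my own illustration, not taken from the W3C documents), here is the "Cat" statement from the earlier example expressed in RDF/XML, with the Dublin Core vocabulary supplying the title property:

<?xml version="1.0"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:dc="http://purl.org/dc/elements/1.1/">
  <!-- one statement: the resource at this URI has the title "Cat" -->
  <rdf:Description rdf:about="http://dbpedia.org/resource/Cat">
    <dc:title>Cat</dc:title>
  </rdf:Description>
</rdf:RDF>

XML supplies the syntax, RDF supplies the subject-property-value data model, and RDF Schema or OWL would define what a term like dc:title may be applied to.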
Standardization work continues on the layers that are not yet fully realized:
  • Unifying Logic and Proof layers are undergoing active research.
The intent is to enhance the usability and usefulness of the Web and its interconnected resources through:
  • Servers which expose existing data systems using the RDF and SPARQL standards. Many converters to RDF exist from different applications. Relational databases are an important source. The semantic web server attaches to the existing system without affecting its operation.
  • Documents "marked up" with semantic information (an extension of the HTML <meta> tags used in today's Web pages to supply information for Web search engines using web crawlers). This could be machine-understandable information about the human-understandable content of the document (such as the creator, title, description, etc., of the document) or it could be purely metadata representing a set of facts (such as resources and services elsewhere in the site). (Note that anything that can be identified with a Uniform Resource Identifier (URI) can be described, so the semantic web can reason about animals, people, places, ideas, etc.) Semantic markup is often generated automatically, rather than manually. A small sketch of such document-level metadata follows this list.
  • Common metadata vocabularies (ontologies) and maps between vocabularies that allow document creators to know how to mark up their documents so that agents can use the information in the supplied metadata (so that Author in the sense of 'the Author of the page' won't be confused with Author in the sense of a book that is the subject of a book review).
  • Automated agents to perform tasks for users of the semantic web using this data.
  • Web-based services (often with agents of their own) to supply information specifically to agents (for example, a Trust service that an agent could ask if some online store has a history of poor service or spamming)
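Touching on the "marked up" documents bullet above, here is a hedged sketch of document-level metadata in ordinary HTML meta tags, using Dublin Core terms (the values are invented for illustration):

<head profile="http://dublincore.org/documents/2008/08/04/dc-html/">
  <!-- declare the Dublin Core vocabulary, then state facts about this document -->
  <link rel="schema.DC" href="http://purl.org/dc/elements/1.1/"/>
  <meta name="DC.creator" content="Jane Doe"/>
  <meta name="DC.title" content="Widget Superstore Catalog"/>
  <meta name="DC.description" content="A machine-readable summary of this page"/>
</head>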

Challenges

Some of the challenges for the Semantic Web include vastness, vagueness, uncertainty, inconsistency and deceit. Automated reasoning systems will have to deal with all of these issues in order to deliver on the promise of the Semantic Web.
  • Vastness: The World Wide Web contains at least 48 billion pages as of this writing (August 2, 2009). The SNOMED CT medical terminology ontology contains 370,000 class names, and existing technology has not yet been able to eliminate all semantically duplicated terms. Any automated reasoning system will have to deal with truly huge inputs.
  • Vagueness: These are imprecise concepts like "young" or "tall". This arises from the vagueness of user queries, of concepts represented by content providers, of matching query terms to provider terms and of trying to combine different knowledge bases with overlapping but subtly different concepts. Fuzzy logic is the most common technique for dealing with vagueness.
  • Uncertainty: These are precise concepts with uncertain values. For example, a patient might present a set of symptoms which correspond to a number of different distinct diagnoses each with a different probability. Probabilistic reasoning techniques are generally employed to address uncertainty.
  • Deceit: This is when the producer of the information is intentionally misleading the consumer of the information. Cryptography techniques are currently utilized to alleviate this threat.
This list of challenges is illustrative rather than exhaustive, and it focuses on the challenges to the "unifying logic" and "proof" layers of the Semantic Web. The World Wide Web Consortium (W3C) Incubator Group for Uncertainty Reasoning for the World Wide Web (URW3-XG) final report lumps these problems together under the single heading of "uncertainty". Many of the techniques mentioned here will require extensions to the Web Ontology Language (OWL) for example to annotate conditional probabilities. This is an area of active research.

New Website Builder at Intuit

For years now, Intuit has been a shining star to many small businesses because of its highly acclaimed TurboTax, Quicken, and QuickBooks software. I myself have been an avid user of their small-business tools and have enjoyed the time they save in handling budgets, bills, and payroll.

Recently, Intuit began a major push for its new website-building software, and I thought I'd give it a shot. If it's better than GoDaddy's horrid "Website Tonight" software, that would be a great improvement for businesses beginning a web footprint. I've decided to break my critique of their builder into three basic categories for this assessment. There are certainly other criteria that could be used, but I would likely write much more than anyone would be willing to read. After discussing the criteria, we will make a founded assessment of the value versus the cost of their development price. The three categories we will discuss are design structure, code development, and the SEO and SEM capability their builds give a site. We'll use a scoring method of 1 to 10, with 1 equaling GoDaddy's Website Tonight and 10 equaling a free WordPress theme.

Design

Score= 4

First of all, their design structure, while a bit cliché, is one of the first autobuilder formats with some decent design structure in its templates. That being said, these are templates, and they will not move far. Of the roughly 2,000 templates they offer, only about 100 are really going to get people's attention. Of those 100 templates, take a guess at how many times each one will be recycled continuously across the internet. On the positive side, the builder does allow you to tweak certain aspects and containers of each template to allow for some impression of originality on your site. This will help to an extent, but eventually your site will be repeated throughout your market, maybe even by your direct competitors. While far better than the templates most autobuilders use, there's still a lot to be desired here.

Code Development

Score= 7

Some of you familiar with their templates are going to be scratching your heads after seeing a score of 7 on this one. Yes, it may seem a little high for legacy HTML, but there is a good reason behind it.

They're using quality CSS and don't have tons of tables being shot out at your browser. If you're familiar with other autobuilders, you know that the majority of their builds are conducted in long, dirty table-based design. The CSS in the Intuit sites is actually decent. Anyone hoping to see PHP within their listings: I hope you can find one, but after an hour of searching and reading "index.html" as many times as I did, it was apparent that there are no PHP-driven sites in their selection. This means less functionality is available through these sites, but if you're using them, you probably won't even notice. All in all, the HTML is written well for template use, and the CSS is clean enough to earn a 7 in this category.
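For contrast, here is an invented, simplified sketch of the two patterns being compared: the table-and-font markup typical of older autobuilders versus the structure-plus-stylesheet approach the Intuit templates lean toward:

<!-- the old autobuilder pattern: presentation trapped inside table markup -->
<table><tr><td bgcolor="#cccccc"><font size="4">Welcome to our store</font></td></tr></table>

<!-- the cleaner pattern: structure in the HTML, presentation in the CSS -->
<div class="welcome">Welcome to our store</div>
<style>
  .welcome { background: #ccc; font-size: 1.25em; }
</style>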

SEO and SEM

Score= 2

OK, here is the category that hurts them. While it was hard to find any sites built with this tool (besides spending money and making one), I was able to find a few that we could run through Website Grader from HubSpot. The sites saw a best ranking of 30, and at no point in my life has a score of 30 been a passing grade. The same occurred when running them through the tools at SEOmoz (I know some of you hate them, but they have easy tools). There's simply no excuse for a website builder that leaves a client with a site of limited ability that still has to be promoted mainly through PPC-style advertising. If there is any suggestion for Intuit, or specifically for people thinking of using their site builder, it is that there needs to be more focus within the code structure on benefiting the customer's needs. If you are looking for a website that lets people find your business, and not just a vanity plate to put on a business card, avoid this builder.

Final Score= 4.3

While some of my fellow designers will vehemently disagree with my assessment, I think it's likely to be the best summarized review that could be offered. It should be mentioned, as well, that this is only a grade of the Starter system, since that's the one being offered for $4.99/month (while the only package with the tools a business actually needs costs $49.99/month). The problem with the price is the bait and switch: if we look at the pricing table, it becomes apparent that this is not the fantastic deal they make it out to be in the commercial (imagine that happening). Here are their price columns per service.

As you can see, there's not much you're getting for their advertised $4.99/month, or even for their Cadillac package, that could legitimize using this site build for a long period of time. It is understandable that some businesses must use one of these autobuilders to get started, but it should only be a temporary fix, kept no longer than six months at most; otherwise, it becomes a money pit instead of a money producer. A quality CMS built for your business will save you money in the long run and will do more for your income as well...
