Author - sicksens

Downtime Can Be A Nightmare

We're glad to be back in the land of the tubes. The past two weeks have made for an incredible experience of getting our site back up and working out the finer points with our host. Before we get into that portion, though, we should probably explain why we were down.

As we often point out, we are hosted by Byethost. They and Bluehost are the two hosting companies we've come to trust the most for shared server space. Unfortunately, we discovered that a plugin we were using for the feeds was not well accepted by our neighbors on the server. We were capturing all of our feeds with the WordPress plugin WP-o-Matic. What we didn't realize is that this plugin causes a major drain on the server side, and on shared space that eats into your neighbors' bandwidth... oops! We honestly never considered that such a widely-used plugin would be the cause of such issues, but tests afterward have shown that it couldn't have been anything else. Because of the excessive bandwidth, resources, and memory that WP-o-Matic was stripping from the shared space, we were booted to a Virtual Private Server... i.e., banished to the Dagobah system. It took a week of negotiating, but Byethost was willing to settle the dispute once we were all well assured that it had to be the plugin causing the trouble. One week later, we're back up and on the way again. Needless to say, our feeds will no longer run through WP-o-Matic.

Now for the sob story. While we were only down about a day and bounced between IPs for one week, Google took a lot of notice. Our impressions went from an average of 1,000/day to 22 tonight. Our traffic went from an average of 200/day to 30 tonight. The past week of being back up has amounted to absolute downtime in the SERPs. Notice in the screenshot that the exact date of being down is visible, as well as the subsequent fall in Google rankings. While this is a pretty steep setback for the short term, we expect to return to our previous standing rather quickly. We've run into similar situations with client sites, but never thought we would need to do damage control of our own.

Here are the steps that must be taken directly after an event like this:

  1. Make an immediate push for backlinks. During the month following a downtime like this one, having sites around the web verify your existence is crucial. The pages that Google attempts and fails to crawl can be put on the back burner for a period of time. Making a strong push for links and acknowledgment can get a faster crawl to those skipped pages than would likely come otherwise.
  2. Add more content. We all know that the best way to get Google's attention is to give it something new to look at. While we feed several different blogs on this site, we also enjoy adding our own content on a regular basis. At this moment it's pertinent to add more than we normally would. The additional content signals that the site is not dead and is in fact very much alive and active.
  3. In addition to adding the content, make sure Google is aware of it being added. To drive home this point, we submit sitemaps... several sitemaps. From XML to ROR, we submit every possible format of sitemap available. Some may think this a bit eccentric, but in dire moments like these it can be the difference between being demoted for weeks or for months. For additional sitemaps in WordPress, we suggest using a dedicated sitemap plugin; we also have some made at xml-sitemaps.com in url.txt and rss.ror format. A rough sketch of generating two of these formats yourself follows below.
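As a rough illustration of the multi-format idea (this is a sketch added for clarity, not the plugin we use; the URLs and file paths are placeholders), here is a tiny PHP script that writes the same list of URLs out as both an XML sitemap and a plain url list:

  <?php
  // Sketch only: emit one URL list in two sitemap formats (XML and plain text).
  // The URLs below are placeholders - in practice you would pull them from the database.
  $urls = array(
      'http://www.example.com/',
      'http://www.example.com/blog/',
      'http://www.example.com/contact/',
  );

  // sitemap.xml
  $xml  = '<?xml version="1.0" encoding="UTF-8"?>' . "\n";
  $xml .= '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">' . "\n";
  foreach ($urls as $url) {
      $xml .= '  <url><loc>' . htmlspecialchars($url) . '</loc></url>' . "\n";
  }
  $xml .= '</urlset>' . "\n";
  file_put_contents('sitemap.xml', $xml);

  // urllist.txt (one URL per line, the plain-text sitemap format)
  file_put_contents('urllist.txt', implode("\n", $urls) . "\n");
  ?>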
Our running experiment will now be to see how long it takes to return to the traffic and ranking that we had last month. We will share this experience with you all and welcome any suggestions or thoughts you would like to add.
Read more...

PrestaShop Development for e-Commerce in Chicago

We are pleased to announce that Naper Design will be developing our first major PrestaShop template and website. While we've enjoyed playing with it for some time now, we will finally have the opportunity to develop a major site with this highly versatile e-commerce solution. Here's a description of PrestaShop from the core site: "PrestaShop e-Commerce Solution was built to take advantage of essential Web 2.0 innovations such as dynamic AJAX-powered features and next-generation ergonomy. PrestaShop guides users through your product catalog intelligently and effortlessly, turning intrigued visitors into paying customers." PrestaShop began as a network- and community-maintained e-commerce package, and the script is maintained in much the same way WordPress is. The most up-to-date version of PrestaShop can be downloaded here. It offers the following:

Front-office

  • Featured products on homepage
  • Top sellers on homepage
  • New items on homepage
  • 'Free shipping' offers
  • Cross-selling (Accessories)
  • Product image zoom
  • Order out-of-stock items
  • Customer subscription & user accounts
  • Payment by bank wire
  • Google Checkout module
  • Cash-On-Delivery (COD)
  • Preconfigured for Paypal

Back-office

  • Unlimited categories & subcategories, product attribute combinations, product specs, images, currencies, tax settings, carriers & destinations
  • Real-time currency exchange rates
  • Inventory management
  • Unlimited languages & dialects
  • 100% modifiable graphic themes
  • Newsletter contact export
  • Alias search
  • SSL encryption
  • Visitors online
  • Customer groups
Our first site built on the platform (for a mainstream client, anyway) will be Malloys Finest. The site we are preparing for them can be found here.
Read more...

Template Monster Releases New PrestaShop Themes

Brooklyn, New York, July 7th 2010 - TemplateMonster.com, the Internet's largest template provider, introduces brand new PrestaShop themes: a new eCommerce offering designed specifically to give customers a speedy, lightweight tool for setting up their own PrestaShop stores.

PrestaShop is a new feature-rich, open-source eCommerce package with a very powerful back end for its size. Combined with decent and affordable designs, PrestaShop is an efficient eCommerce shopping cart solution for small and medium-sized online businesses. The PrestaShop multilingual eCommerce system is fully customizable, quick, and simple to install, and site owners will appreciate its flexibility, as it supports unlimited categories, sub-categories, and product images. Store admins can also manage inventory, customers, orders, and payments easily.

According to the company, the PrestaShop themes provided by TemplateMonster are developed to take advantage of this new shopping cart system. The themes are HTML/CSS validated and optimized for fast page loading, plus they support all major browsers and have built-in jQuery elements for an even more spectacular design.

David Braun, CEO of Template Monster, said, "Our customers have been asking us to launch PrestaShop themes for many months now. And today we can proudly claim that TemplateMonster now offers innovative PrestaShop themes brushed and polished by our website design pros. Dedicated to providing unbeatable functionality for customers' online eCommerce presence, our PrestaShop design solutions ensure you'll fully enjoy the fantastic functionality of PrestaShop. Not to mention that the themes are extremely easy to modify. And of course we are eager to extend our PrestaShop selection by adding more and more new designs to this product type. So be sure to check back for more premium quality PrestaShop templates!"

Previously the company launched a free PrestaShop theme for a cell phone store. This template is still available for download at the Free PrestaShop Theme download page.
Read more...

Spotting a Scam Online

So, it's not like the Acai Berry is the first or will be the last scam online. Every day, hundreds of affiliate sites are added to gather information about prospective customers and leads. I actually love hearing people laugh and pride themselves on how many of these "leads" they've collected. The one problem with calling them leads (or thinking of them as a potential group of people to be marketed to) is that they usually don't want to be on the list of leads to begin with. Affiliate email lead generation is the laziest and lowest-performing form of marketing there is.

The only people who make any form of true wealth from running these scams are the leaders at the pinnacle of their particular pyramid scheme. I call them pyramid schemes because that's just what they are. If you prefer "multi-level marketing scheme" or "platform scheme," please understand that changing the name of a scam does not change the fact that it's a pyramid scheme and a scam. Having websites that take two minutes of scrolling to reach the bottom and serve only to harvest email addresses or to sell junk is still scam sales and weak marketing. It's like the guy who claims to be the modern Fonz: he gets a decent number of dates and interactions, but he has to work nonstop to do so, burning every bridge as he goes.

I'm not certain when it happened, but at some point people began perpetuating a fake reality of internet marketing. With the lowest conversion ratio of any form of sales, ever, these people are bragging about the rewards to be had from internet affiliate marketing. I should qualify my statements, though: I am an affiliate for Woo Themes, Theme Forest, and many others. I am not speaking of that form of affiliation; I'm speaking of the obvious crooks like "incomehome55.com" (I just don't like giving those crooks a link to prove this point) or the crooks in the Rx or Acai camps. They blast massive amounts of scam material and give internet marketers a bad name... for what? To make sales as poor salesmen. The sad truth is that these affiliates with long, draping websites are so disastrous as salesmen that they've taken the scam route as their only option to make a living. At this point, there are far more scammers than true marketers out there.

There is hope, though. There are true marketing methods still being practiced online. There are still marketing groups that demand of themselves a conversion rate of at least 10%. If you are with a group that doesn't even mention their conversion ratio, it would be wise to ask. A quality group will be glad (prideful, to say the least) to show that their conversion ratios are working in their clients' benefit. The first benefit of numbers like that is that more of the people who find your product actually purchase through this marketing method. Purchasers and subscribers are definite cause for celebration... but don't discount the other reward: your future clients won't perceive you to be a crook because of all the spam and brute-force marketing techniques the other guys are associated with. This is the true reward: having true "leads" recognize your brand as one to be trusted, not run from.
Read more...

A Grateful Month in Naperville

This month we have picked up some very exciting work. Over the course of July, we will be building or restyling seven websites in the Chicago and Naperville areas. Some are just in need of a redesign, but some are complete database developments. We are looking forward to the opportunity to expand our range. Here are some of the websites we are inheriting; we'll show progress on them to all of you in the coming month. While we are also engaged in the creation of content and SEO for other clients, we offer these as sites to visit and grade the experience this month. Over the course of July, you should have several opportunities to witness growth with them all.
Read more...

40 Superb Photoshop Tutorials For Attractive Photo Effects

By Obaid ur Rehman

Photoshop is a basic requirement for a designer, and that's why designers all around the world regularly look for tutorials that can help them polish their Photoshop skills. Since the demand for Photoshop tutorials is so high these days, we prepared a post that can truly help you achieve amazing results with your Photoshop skills.

Photoshop Tutorials for Attractive Photo Effects

  • Graceful Lady In The Dark – some of the best techniques to beautify your photos.
  • The Colors Of Love – some great tips for modifying your photographs.
  • Simple Make Up – shows how to enhance a normal portrait photo with beautifully fair skin, well-done makeup, and shiny, healthy hair, just like what you see on TV.
  • Easy composition but nice result – a Photoshop tutorial that will be very useful for beginners.
  • Impressive Photo's Background – some of the best techniques to beautify your photos.
  • Adding Light Streaks To A Photo – how to add colorful streaks of light to a photo.
  • How to create Glowing Fashion Photo Manipulation – how to create a glowing fashion photo manipulation using Photoshop techniques, starting from a basic model shot.
  • Creating a Vector Composite Effect from a Photo – how to take an image, in this case a woman's face, and give the appearance that it is entirely composed of vector shapes.
  • Selective Sepia – how to use Photoshop to selectively add a dramatic sepia effect to a photo; works best with Photoshop CS3 but will also work with CS2 or older using an alternative method.
  • Hot & Fiery Photo Effect – how to create a fiery photo effect, accomplished almost entirely with filters, as most photo effects are; photo manipulation, on the other hand, uses many tools of the trade, most commonly "airbrushing."
  • Creative Photo Of A Bride – some of the best techniques to beautify your photos.
  • Adding Reflections To Sunglasses In Photoshop – how to add reflections, or at least different reflections, to sunglasses.
  • Beautiful Lighting Effect – some great tips for modifying your photographs.
  • Create a photoshop mermaid image manipulation – how to create your very own water-splash techniques using only Photoshop, and how to customize the brushes that are essential to this effect's success.
  • Remove Freckles – how to subtract freckles using a layer; this process provides the most natural results for light-brown freckles (ephelides) but will not work for dark freckles (lentigines).
  • Airbrushing – Natural Smooth Skin – an effective Photoshop technique to simulate airbrushing without losing the texture.
  • Photo Retouching – practice some amazing photo effects using a Wacom tablet; of course, you can also do it with a PC mouse.
  • Thermal Photo Effect – how to give your photos a thermal-imaging effect.
  • Glamour Model – how to retouch a face photo and turn a simple picture into a very original one.
  • Retouch A Girl with Lighting Focus – how to use a lighting effect to emphasize the face, skin, and hair.
  • Star Diffusion – creates diffusion in the shape of a four-point star; an ideal effect for portraits or any photo with a strong background blur.
  • Galaxy Angel – how to retouch a model's skin, imitate make-up, draw tattoos, render hair out of smoke, use different textures, render clouds, insert wings into the picture, enlarge the background without cutting the model out, and create the effect of an old picture, at least at its edges.
  • Add Dynamic Lighting to a Flat Photograph – how to spice up a fairly dull and flat photograph.
  • Create a Funky Perspective of a Model Riding Digital Volume – how to achieve a feeling of depth and motion by creating volume that fades away and adding foreground and background images.
  • Add Cool Fireworks to the Photo – if you are interested in pyrotechnics, then this tutorial is for you.
  • Create an Artistic Photo by Yourself – a not-so-hard but very interesting tutorial on how to make an artistic photo.
  • Old Style Photo Effect – how to create an old-style photo effect with an interesting accent.
  • Make Up Effect for the Face – some of the best techniques to beautify your photos.
  • Coloring Effects – some great tips for modifying your photographs.
  • Fantasy Art – some of the best techniques to beautify your photos.
  • Make Perfect Selection for Human Object by Utilising Channel Mask Technique in Photoshop – some great tips for modifying your photographs.
  • How to Turn Humdrum Photos into Cinematic Portraits – how to transform a regular, humdrum photo into a cinematic portrait, even faking HDR effects a little bit.
  • How To Create A Neat Bird House From Scratch – just read the steps carefully, and you'll be surprised how easily this can be achieved with Photoshop.
  • Adding Light Streak – how to add a light streak to a photo.
  • Reducing 5 O'Clock Shadow And Beard Stubble In Photoshop – how to reduce the appearance of 5 o'clock shadow and beard stubble in a photo.
  • Retouching a Studio Portrait – how to enhance backgrounds, dodge and burn, brighten eyes, and add hair shine while keeping the layer count low.
  • Dream Photo Effect – how to create a dreamy effect in your photographs.
  • Create a Beautiful and Dramatic Scene With Photo Manipulations – how to combine two pictures into a picturesque background, draw long hair manually using Photoshop brushes, and add adjustment layers for dramatic effects in the final image.
  • Super Fast and Easy Facial Retouching – how to repair image noise from a low-quality shot; the woman in this shot is not a model, and she has a lot of character, so you don't want to overdo the smoothing.
  • Quick and Effective Facial Photo Retouching – another method for quick and easy facial retouching.
  • Achieve Brilliant Lighting Effects in Photoshop – lighting effects can make or break any digital artwork; done properly, lighting can add visual impact, draw the viewer's eye, convey depth and emotion, and tie together all the elements of the piece for a quality finished result.
Read more...

Statistics a Win for SEO

Posted by bhendrickson

We recently posted some correlation statistics on our blog. We believe these statistics are interesting and provide insight into the ways search engines work (a core principle of our mission here at SEOmoz). As we will continue to make similar statistics available, I'd like to discuss why correlations are interesting, refute the math behind recent criticisms, and reflect on how exciting it is to engage in mathematical discussions where critiques can be definitively rebutted.

I've been around SEOmoz for a little while now, but I don't post a lot. So, as a quick reminder: I designed and built the prototype for SEOmoz's web index, and wrote a large portion of the back-end code for the project. We shipped the index with billions of pages nine months after I started on the prototype, and we have continued to improve it since. Recently I built the machine learning models that are used to produce Page Authority and Domain Authority, and I am working on some fairly exciting stuff that has not yet shipped. As I'm an engineer and not a regular blogger, I'll ask for a bit of empathy for my post - it's a bit technical, but I've tried to make it as accessible as possible.

Why does Correlation Matter?

Correlation helps us find causation by measuring how much variables change together. Correlation does not imply causation; variables can be changing together for reasons other than one affecting the other. However, if two variables are correlated and neither is affecting the other, we can conclude that there must be a third variable that is affecting both. This variable is known as a confounding variable. When we see correlations, we do learn that a cause exists -- it might just be a confounding variable that we have yet to figure out.

How can we make use of correlation data? Let's consider a non-SEO example.

There is evidence that women who occasionally drink alcohol during pregnancy give birth to smarter children with better social skills than women who abstain. The correlation is clear, but the causation is not. If the relationship is causal, then light drinking will make the child smarter. If it is a confounding variable, light drinking could have no effect, or could even make the child slightly less intelligent (which is suggested by extrapolating from the data showing that heavy drinking during pregnancy makes children considerably less intelligent).

Although these correlations are interesting, they are not black-and-white proof that behaviors need to change. One needs to consider which explanations are more plausible: the causal ones or the confounding variable ones. To keep the analogy simple, let's suppose there were only two likely explanations - one causal and one confounding. The causal explanation is that alcohol makes a mother less stressed, which helps the unborn baby. The confounding variable explanation is that women with more relaxed personalities are more likely to drink during pregnancy and less likely to negatively impact their child's intelligence with stress. Given this, I probably would be more likely to drink during pregnancy because of the correlation evidence, but there is an even bigger take-away: both likely explanations damn stress. So, because of the correlation evidence about drinking, I would work hard to avoid stressful circumstances. *

Was the analogy clear? I am suggesting that as SEOs we approach correlation statistics like pregnant women considering drinking - cautiously, but without too much stress.

* Even though I am a talented programmer and work in the SEO industry, do not take medical advice from me, and note that I construed the likely explanations for the sake of simplicity :-)

Some notes on data and methodology

We have two goals when selecting a methodology to analyze SERPs:

  1. Choose measurements that will communicate the most meaningful data
  2. Use techniques that can be easily understood and reproduced by others

These goals sometimes conflict, but we generally choose the most common method still consistent with our problem. Here is a quick rundown of the major options we had, and how we decided between them for our most recent results:

Machine Learning Models vs. Correlation Data: Machine learning can model and account for complex variable interactions. In the past, we have reported derivatives of our machine learning models. However, these results are difficult to create, they are difficult to understand, and they are difficult to verify. Instead we decided to compute simple correlation statistics.

Pearson's Correlation vs. Spearman's Correlation: The most common measure of correlation is Pearson's correlation, although it only measures linear correlation. This limitation is important: we have no reason to think interesting correlations to ranking will all be linear. Instead we chose to use Spearman's correlation. Spearman's correlation is still pretty common, and it does a reasonable job of measuring any monotonic correlation.

Here is a monotonic example: The count of how many of my coworkers have eaten lunch for the day is perfectly monotonically correlated with the time of day. It is not a straight line and so it isn't linear correlation, but it is never decreasing, so it is monotonic correlation.

Here is a linear example: assuming I read at a constant rate, the number of pages I can read is linearly correlated with the length of time I spend reading.
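To make the difference concrete, here is a small sketch (illustrative only, not our measurement code) that computes both coefficients over an exponentially related pair of variables. Spearman's comes out at exactly 1 because the relationship is monotonic, while Pearson's falls short of 1 because it is not linear:

  <?php
  // Compare Pearson's (linear) and Spearman's (rank-based) correlation on data
  // that is exponentially, and therefore monotonically, related - but not linearly.

  function pearson($x, $y) {
      $n = count($x);
      $mx = array_sum($x) / $n;
      $my = array_sum($y) / $n;
      $num = 0; $dx = 0; $dy = 0;
      for ($i = 0; $i < $n; $i++) {
          $num += ($x[$i] - $mx) * ($y[$i] - $my);
          $dx  += ($x[$i] - $mx) * ($x[$i] - $mx);
          $dy  += ($y[$i] - $my) * ($y[$i] - $my);
      }
      return $num / sqrt($dx * $dy);
  }

  function to_ranks($values) {
      // Convert values to ranks 1..n (no tie handling; fine for this example).
      $sorted = $values;
      sort($sorted);
      $ranks = array();
      foreach ($values as $v) {
          $ranks[] = array_search($v, $sorted) + 1;
      }
      return $ranks;
  }

  function spearman($x, $y) {
      // Spearman's correlation is Pearson's correlation applied to the ranks.
      return pearson(to_ranks($x), to_ranks($y));
  }

  $x = range(1, 10);
  $y = array();
  foreach ($x as $v) {
      $y[] = pow(2, $v);   // 2, 4, 8, ... 1024: monotonic but far from linear
  }

  printf("Pearson:  %.3f\n", pearson($x, $y));   // noticeably below 1
  printf("Spearman: %.3f\n", spearman($x, $y));  // exactly 1
  ?>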

Mean Correlation Coefficient vs. Pooled Correlation Coefficient: We collected data for 11,000+ queries. For each query, we can measure the correlation of ranking position with a particular metric by computing a correlation coefficient. However, we don't want to report 11,000+ correlation coefficients; we want to report a single number that reflects how correlated the data was across our dataset, and we want to show how statistically significant that number is. There are two techniques commonly used to do this:

  1. Compute the mean of the correlation coefficients. To show statistical significance, we can report the standard error of the mean.
  2. Pool the results from all SERPs and compute a global correlation coefficient. To show statistical significance, we can compute standard error through a technique known as bootstrapping.

The mean correlation coefficient and the pooled correlation coefficient would both be meaningful statistics to report. However, the bootstrapping needed to show the standard error of the pooled correlation coefficient is less common than using the standard error of the mean. So we went with #1.
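As a sketch of technique #1 (the per-query coefficients below are invented for illustration, not values from our dataset), the computation is just a mean plus the standard error of that mean:

  <?php
  // Average per-SERP correlation coefficients and report the standard error
  // of the mean. The $coefficients array is made-up illustrative data.
  $coefficients = array(0.21, 0.35, 0.05, 0.18, 0.27, 0.11, 0.30, 0.22);

  $n    = count($coefficients);
  $mean = array_sum($coefficients) / $n;

  $sumSquares = 0;
  foreach ($coefficients as $r) {
      $sumSquares += ($r - $mean) * ($r - $mean);
  }
  $sampleStdDev  = sqrt($sumSquares / ($n - 1));
  $standardError = $sampleStdDev / sqrt($n);   // standard error of the mean

  printf("mean correlation coefficient: %.3f +/- %.3f\n", $mean, $standardError);
  ?>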

Fisher Transform vs. No Fisher Transform: When averaging a set of correlation coefficients, instead of computing the mean of the coefficients directly, one sometimes computes the mean of the Fisher transforms of the coefficients and then applies the inverse Fisher transform to that mean. This would not be appropriate for our problem because:

  1. It will likely fail. The Fisher transform involves dividing by one minus the coefficient, so it explodes when an individual coefficient is near one and fails outright when a coefficient equals one (the transform is written out just after this list). Because we are computing hundreds of thousands of coefficients, each with small sample sizes to average over, it is quite likely the Fisher transform would fail for our problem. (Of course, we have a large sample of these coefficients to average over, so our end standard error is not large.)
  2. It is unnecessary, for two reasons. First, the advantage of the transform is that it can make the expected average closer to the expected coefficient, and we do nothing that assumes this property. Second, when mean coefficients are near zero this property approximately holds even without the transform, and our coefficients were not large.
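For reference, the transform in question and its inverse are:

  z = \operatorname{arctanh}(r) = \tfrac{1}{2}\ln\frac{1+r}{1-r}, \qquad r = \tanh(z)

The 1 - r in the denominator is what blows up as an individual coefficient approaches 1.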

Rebuttals To Recent Criticisms

Two bloggers, Dr. E. Garcia and Ted Dzubia, have published criticisms of our statistics.

Eight months before his current post, Ted Dzubia wrote an enjoyable and jaunty post lamenting that criticizing SEO every six to eight months was an easy way to generate controversy, noting "it's been a solid eight months, and somebody kicked the hornet's nest. Is SEO good or evil? It's good. It's great. I <3 SEO." Furthermore, his Twitter feed makes it clear he sometimes trolls for fun. To wit: "Mongrel 2 under the Affero GPL. TROLLED HARD," "Hacker News troll successful," and "mailing lists for different NoSQL servers are ripe for severe trolling." So it is likely we've fallen for trolling...

I am going to respond to both of their posts anyway because they have received a fair amount of attention, and because both posts seek to undermine the credibility of the wider SEO industry. SEOmoz works hard to raise the standards of the SEO industry, and protect it from unfair criticisms (like Garcia's claim that "those conferences are full of speakers promoting a lot of non-sense and SEO myths/hearsays/own crappy ideas" or Dzubia's claim that, besides our statistics, "everything else in the field is either anecdotal hocus-pocus or a decree from Matt Cutts"). We also plan to create more correlation studies (and more sophisticated analyses using my aforementioned ranking models) and thus want to ensure that those who are employing this research data can feel confident in the methodology employed.

Search engine marketing conferences, like SMX, OMS and SES, are essential to the vitality of our industry. They are an opportunity for new SEO consultants to learn, and for experienced SEOs to compare notes. It can be hard to argue against such subjective and unfair criticism of our industry, but we can definitively rebut their math.

To that end, here are rebuttals for the four major mathematical criticisms made by Dr. E. Garcia, and the two made by Dzubia.

1) Rebuttal to Claim That Mean Correlation Coefficients Are Uncomputable

For our charts, we compute a mean correlation coefficient. The claim is that such a value is impossible to compute.

Dr. E. Garcia : "Evidently Ben and Rand don’t understand statistics at all. Correlation coefficients are not additive. So you cannot compute a mean correlation coefficient, nor you can use such 'average' to compute a standard deviation of correlation coefficients."

There are two issues with this claim: a) peer reviewed papers frequently publish mean correlation coefficients; b) additivity is relevant for determining whether two different meanings of the word "average" will have the same value, not whether the mean is computable. Let's consider each issue in more detail.

a) Peer Reviewed Articles Frequently Compute A Mean Correlation Coefficient

E. Garcia is claiming something is uncomputable that researchers frequently compute and include in peer reviewed articles. Here are three significant papers where the researchers compute a mean correlation coefficient:

"The weighted mean correlation coefficient between fitness and genetic diversity for the 34 data sets was moderate, with a mean of 0.432 +/- 0.0577" (Macquare University - "Correlation between Fitness and Genetic Diversity", Reed, Franklin; Conversation Biology; 2003)

"We observed a progressive change of the mean correlation coefficient over a period of several months as a consequence of the exposure to a viscous force field during each session. The mean correlation coefficient computed during the force-field epochs progressively..." (MIT - F. Gandolfo, et al; "Cortical correlates of learning in monkeys adapting to a new dynamical environment," 2000)

"For the 100 pairs of MT neurons, the mean correlation coefficient was 0.12, a value significantly greater than zero" (Stanford - E Zohary, et al; "Correlated neuronal discharge rate and its implications for psychophysical performance", 1994)

SEOmoz is in a camp with reviewers from the journal Nature, as well as researchers from MIT, Stanford, and the authors of 2,400 other academic papers that use the mean correlation coefficient. Our camp is being attacked by Dr. E. Garcia, who argues that our camp doesn't "understand statistics at all." It is fine to take positions outside of the scientific mainstream, although when Dr. E. Garcia takes such a position he should offer more support for it. Given how commonly Dr. E. Garcia uses the pejorative "quack," I suspect he does not mean to take positions this far outside of academic consensus.

b) Additivity Is Relevant For Determining If Different Meanings Of "Average" Are The Same, Not If The Mean Is Computable

Although "mean" is quite precise, "average" is less precise. By "average" one might intend the words "mean", "mode", "median," or something else. One of these other things that it could be used as meaning is 'the value of a function on the union of the inputs'. This last definition of average might seem odd, but it is sometimes used. Consider if someone asked "a car travels 1 mile at 20mph, and 1 mile at 40mph, what was the average mph for the entire trip?" The answer they are looking for is not 30mph, which is mean of the two measurements, but ~26mph, which is the mph for the whole 2 mile trip. In this case, the mean of the measurements is different from the colloquial average which is the function for computing mph applied to the union of the inputs (the whole two miles).

This may be what has confused Dr. E. Garcia. Elsewhere he cites Statsweb when repeating this claim, and Statsweb makes the point that this other "average" is different from the mean. Additivity is useful in determining whether these averages will differ. But even if another interpretation of average is valid for a problem, and even if that other average is different from the mean, that neither makes the mean uncomputable nor meaningless.

2) Rebuttal to Claim About Standard Error of the Mean vs. Standard Error of a Correlation Coefficient

Although he has stated unequivocally that one cannot compute a mean correlation coefficient, Garcia is quite opinionated on how we ought to have computed standard error for it. To wit:

E. Garcia: "Evidently, you don’t know how to calculate the standard error of a correlation coefficient... the standard error of the mean and the standard error of a correlation coefficient are two different things. Moreover, the standard deviation of the mean is not used to calculate the standard error of a correlation coefficient or to compare correlation coefficients or their statistical significance."

He repeats this claim even after making the point above about mean correlation coefficients, so he clearly is aware the correlation coefficients being discussed are mean coefficients and not coefficients computed after pooling data points. So let's be clear on exactly what his claim implies. We have some measured correlation coefficients, and we take the mean of these measured coefficients. The claim is that we should have used the same formula for standard error of the mean of these measured coefficients that we would have used for only one. Garcia's claim is incorrect. One would use the formula for the standard error of the mean.

The formula for the mean, and for the standard error of the mean, apply even if there is a way to separately compute standard error for one of the observations the mean was over. If we were computing the mean of the count of apples in barrels, lifespans of people in the 19th century, or correlation coefficients for different SERPs, the same formula for the standard error of this mean applies. Even if we have other ways to measure the standard error of the measurements we are taking the mean over - for instance, our measure of lifespans might only be accurate to the day of death and so could be off by 24 hours - we cannot use how we would compute standard error for an observation to compute standard error of the mean of those observations.
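For concreteness, that formula is the usual one, where s is the sample standard deviation of the observed per-SERP coefficients and n is the number of SERPs:

  SE_{\bar{r}} = \frac{s}{\sqrt{n}}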

A smaller but related objection is over language. He objects to my use of "standard deviations" in reference to a count of how far away a point is from a mean, in units of the mean's standard error. As Wikipedia notes, the "standard error of the mean (i.e., of using the sample mean as a method of estimating the population mean) is the standard deviation of those sample means." So the count of how many lengths of standard error a number is away from the estimate of a mean would, according to Wikipedia, be standard deviations of our mean estimate. Beyond being technically correct, it also fits the context, which was the accuracy of the sample mean.

3) Rebuttal to Claim That Non-Linearity Is Not A Valid Reason To Use Spearman's Correlation

I wrote "Pearson’s correlation is only good at measuring linear correlation, and many of the values we are looking at are not. If something is well exponentially correlated (like link counts generally are), we don’t want to score them unfairly lower.”

E. Garcia responded by citing a source whom he cited as "exactly right": "Rand your (or Ben’s) reasoning for using Spearman correlation instead of Pearson is wrong. The difference between two correlations is not that one describes linear and the other exponential correlation, it is that they differ in the type of variables that they use. Both Spearman and Pearson are trying to find whether two variables correlate through a monotone function, the difference is that they treat different type of variables - Pearson deals with non-ranked or continuous variables while Spearman deals with ranked data."

E. Garcia's source, and by extension E. Garcia, are incorrect. A desire to measure non-linear correlation, such as exponential correlations, is a valid reason to use Spearman's over Pearson's. The point that "Pearson deals with non-ranked or continuous variables while Spearman deals with ranked data" is true in the sense that to compute Spearman's correlation, one can convert continuous variables to ranked indices and then apply Pearson's. However, the original variables do not need to be ranked indices to begin with. If they did, Spearman's would always produce the same results as Pearson's and there would be no purpose for it.

My point that E. Garcia objects to - that Pearson's only measures linear correlation while Spearman's can measure other kinds of correlation, such as exponential correlations - was entirely correct. We can quickly quote Wikipedia to show that Spearman's measures any monotonic correlation (including exponential) while Pearson's only measures linear correlation.

The Wikipedia article on Pearson's Correlation starts by noting that it is a "measure of the correlation (linear dependence) between two variables".

The Wikipedia article on Spearman's Correlation starts with an example in the upper right showing that a "Spearman correlation of 1 results when the two variables being compared are monotonically related, even if their relationship is not linear. In contrast, this does not give a perfect Pearson correlation."

E. Garcia's position neither makes sense nor agrees with the literature. I would go into the math in more detail, or quote more authoritative sources, but I'm pretty sure Garcia now knows he is wrong. After E. Garcia made his incorrect claim about the difference between Spearman's correlation and Pearson's correlation, and after I corrected E. Garcia's source (which was in a comment on our blog), E. Garcia has stated the difference between Spearman's and Pearson's correctly. However, we want to make sure there's a good record of the points, and explain the what and why.

4) Rebuttal To Claim That PCA Is Not A Linear Method

This example is particularly interesting because it is about Principal Component Analysis (PCA), which is related to PageRank (something many SEOs are familiar with). In PCA one finds principal components, which are eigenvectors; PageRank is also an eigenvector. But I am digressing - let's discuss Garcia's claim.

After Dr. E. Garcia criticized a third party for using Pearson's Correlation because Pearson's only shows linear correlations, he criticized us for not using PCA. Like Pearson's, PCA can only find linear correlations, so I pointed out his contradiction:

Ben: "Given the top of your post criticizes someone else for using Pearson’s because of linearity issues, isn’t it kinda odd to suggest another linear method?"

To which E. Garcia has respond: "Ben’s comments about... PCA confirms an incorrect knowledge about statistics" and "Be careful when you, Ben and Rand, talk about linearity in connection with PCA as no assumption needs to be made in PCA about the distribution of the original data. I doubt you guys know about PCA...The linearity assumption is with the basis vectors."

But before we get to the core of the disagreement, let me point out that E. Garcia is close to correct in his actual statement. PCA defines its basis vectors such that they are linearly de-correlated, so it does not need to assume that they will be. But this is a minor quibble. The issue with Dr. E. Garcia's position is the implication that the linear aspect of PCA is not in the correlations it finds in the source data, as I claimed, but only in the basis vectors.

So, there is the disagreement - analogous to how Pearson's Correlation only finds linear correlations, does PCA also only find linear correlations? Dr. E. Garcia says no. SEOmoz, and many academic publications, say yes. For instance:

"PCA does not take into account nonlinear correlations among the features" ("Kernel PCA for HMM-Based Cursive Handwriting Recognition"; Andreas Fischer and Horst Bunke 2009)

"PCA identifies only linear correlations between variables" ("Nonlinear Principal Component Analysis Using Autoassociative Neural Networks"; Mark A. Kramer (MIT), AIChE Journal 1991)

However, besides citing authorities, let's consider why his claim is incorrect. As E. Garcia imprecisely notes, the basis vectors are linearly de-correlated. As the source he cites points out, PCA tries to represent the source data as linear combinations of these basis vectors. This is how PCA shows us correlations - by creating basis vectors that can be linearly combined to get close to the original data. We can then look at these basis vectors and see how aspects of our source data vary together, but because it only combines them linearly, it only shows us linear correlations. Therefore, PCA provides insight into linear correlations - even for non-linear data.
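In symbols (a standard formulation, not notation from the original posts), PCA approximates each centered data point as a linear combination of the top k principal components:

  \mathbf{x}_j \approx \bar{\mathbf{x}} + \sum_{i=1}^{k} a_{ij}\,\mathbf{v}_i

where the v_i are the basis vectors (eigenvectors of the covariance matrix) and the a_ij are scalar weights; the combination is linear in the v_i, which is exactly the sense in which PCA only captures linear correlations.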

5) Rebuttal To Claim About Small Correlations Not Being Published

Ted Dzubia suggests that small correlations are not interesting, or at least are not interesting because our dataset is too small. He writes:

Dzubia: "out of all the factors they measured ranking correlation for, nothing was correlated above .35. In most science, correlations this low are not even worth publishing. "

Academic papers frequently publish correlations of this size. On the first page of a Google Scholar search for "mean correlation coefficient" I see:

  1. The Stanford neurology paper I cited above to refute Garcia is reporting a mean correlation coefficient of 0.12.
  2. "Meta-analysis of the relationship between congruence and well-being measures"  a paper with over 200 citations whose abstract cites coefficients of 0.06, 0.15, 0.21, and 0.31.
  3. "Do amphibians follow Bergmann's rule" which notes that "grand mean correlation coefficient is significantly positive (+0.31)."

These papers were not cherry-picked from a large number of papers. Contrary to Ted Dzubia's suggestion, the size of correlation that is interesting varies considerably with the problem. For our problem, looking at correlations in Google results, one would not expect any single high correlation value from the features we were looking at unless one believed Google has a single factor they predominantly use to rank results, and one were only interested in that factor. We do not believe that. Google has stated on many occasions that they employ more than 200 features in their ranking algorithm. In our opinion, this makes correlations in the 0.1 - 0.35 range quite interesting.

6) Rebuttal To Claim That Small Correlations Need A Bigger Sample Size

Dzubia: "Also notice that the most negative correlation metric they found was -.18.... Such a small correlation on such a small data set, again, is not even worth publishing."

Our dataset was over 100,000 results across over 11,000 queries, which is much more than sufficient for the size of correlations we found. The risk when having small correlations and a small dataset is that it may be hard to tell if correlations are statistical noise. Generally 1.96 standard deviations is required to consider results statistically significant. For the particular correlation Dzubia brings up, one can see from the standard error value that we have 52 standard deviations of confidence the correlation is statistically significant. 52 is substantially more than the 1.96 that is generally considered necessary.
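To unpack that "52 standard deviations" figure: it is simply the mean coefficient measured in units of the standard error of the mean, so the reported -0.18 coefficient implies a standard error of roughly 0.18 / 52, on the order of 0.0035:

  z = \frac{|\bar{r}|}{SE_{\bar{r}}} \approx \frac{0.18}{0.0035} \approx 52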

We use a sample size so much larger than usual because we wanted to make sure the relative differences between correlation coefficients were not misleading. Although we feel this adds value to our results, it is beyond what is generally considered necessary to publish correlation results.

Conclusions

Some folks inside the SEO community have had disagreements about our interpretations and opinions regarding what the data means (and where/whether confounding variables exist to explain some points). As Rand carefully noted in our post on correlation data and his presentation, we certainly want to encourage this. Our opinions about where/why the data exists are just that - opinions - and shouldn't be ascribed any value beyond their use in informing your own thinking about the data sources. Our goal was to collect data and publish it so that our peers in the industry could review and interpret it.

It is also healthy to have a vigorous debate about how statistics such as these are best computed, and how we can ensure accuracy of reported results. As our community is just starting to compute these statistics (Sean Weigold Ferguson, for example, recently submitted a post on PageRank using very similar methodologies), it is only natural there will be some bumbling back and forth as we develop industry best practices. This is healthy and to our industry's advantage that it occur.

The SEO community is the target of a lot of ad hominem attacks which try to associate all SEOs with the behavior of the worst. Although we can answer such attacks by pointing out great SEOs and great conferences, it is exciting that we've been able to elevate some attacks to include mathematical points, because when they are arguing math they can be definitively rebutted. On the six points of mathematical disagreement, the tally is pretty clear - SEO community: Six, SEO bashers: zero. Being SEOs doesn't make us infallible, so surely in the future the tally will not be so lopsided, but our tally today reflects how seriously we take our work and how we as a community can feel good about using data from this type of research to learn more about the operations of search engines.



Read more...

Must-Have SEO Recommendations: Step 7 of the 8-Step SEO Strategy

Posted by laura

This post was originally in YOUmoz, and was promoted to the main blog because it provides great value and interest to our community. The author's views are entirely his or her own and may not reflect the views of SEOmoz, Inc.

You know the client. The one that really needs your help. The one that gets pumped when you explain how keywords work. The one whose site is one big image file. Or maybe the one that insists that if they copy their competitor's title tags word-for-word, they'll do better in search results (I had a product manager make his team do that once; needless to say, I was thrilled when it didn't work).

In Step 6 of the SEO Strategy document I noted that this strategy document we’ve been building isn’t a best practices document, and it’s more than a typical SEO audit.  It is a custom set of specific, often product-focused recommendations and strategies for gaining search traffic.  For that reason I recommended linking out to SEO basics and best practices elsewhere (in an intranet or a separate set of documents).

But most of the time you'll still need to call out some horizontal things that must be put right in front of this client's face, or else they will be missed completely. SEO/M is your area of expertise, not theirs, so help them make sure they've got their bases covered. You can create an additional section for these call-outs wherever you feel it is appropriate in your document.

WHAT CAN I INCLUDE HERE?

Here are some examples of things you could include if you felt your client needed this brought to their attention:

  1. Press Release optimization and strategy
  2. SEO resources for specific groups in the company:
    1. SEO for business development (linking strategies in partner deals)
    2. SEO for writers/editorial
    3. SEO for designers
  3. SEO for long term results rather than short term fixes
  4. International rollout recommendations
  5. Content management system – how it is impairing their SEO
  6. Risks and avoidances
  7. Anything that you feel should be covered in more detail for this particular client, that wasn’t covered in your strategy in the last step. This is a catchall – a place to make sure you cover all bases.
  8. Nothing - if you don't feel anything extra is needed.

If the client really needs a lot of help, you'd want to provide training and best practices, either as separate deliverables along with the strategy document, or better yet, work on training and best practices with them first, then dive into more specific strategy. You don't want to end up with a 15-page (or even 4-page, for that matter) best practices document inside your strategy doc. Remember, we're beyond best practices here, unless there's something specific for this particular client that needs to be called out.

If the client needs more than one thing called out, do it.  If it’s several things, consider either adding an appendix, or as I mentioned, creating a separate best practices document.

The reason I recommend best practices as a separate document is because it is really a different project, often for an earlier phase.

EXAMPLE 1:

Let's say, for example, my client has the type of content the press loves to pick up. They don't do press releases, mostly because they don't know exactly how to write them or where to publish them, but they want to. I'll add a Press Releases section after the strategy, and I might give them these simple tidbits:

  • High level benefit of doing press releases
  • What person or group in the company might be best utilized to manage press releases
  • Examples of what to write press releases about
  • Channels they can publish press releases to
  • Optimization tips
  • References they can go to for more detailed information

EXAMPLE 2:

My client gets it. They’re pretty good at taking on most SEO on their own. This strategy document I’m doing for them is to really dig in and make sure all gaps are closed, and that they’re taking advantage of every opportunity they should.  Additionally, in a few months they are going to roll out the site to several international regions. 

My dig into the site and its competitors (and the search engines) for this strategy has all been for the current site in this country. Because the international rollout hasn't started yet, I will add a section to my document with specific things they need to keep in mind when doing this rollout.

  • Localized keyword research (rather than using translate tools)
  • ccTLD  (country code top level domain) considerations
  • Tagging considerations (like “lang”)
  • Proper use of Google Webmaster Tools for specifying region
  • Potential duplication issues
  • Maybe even a list of popular search engines in those countries
  • Point to more resources or list as a potential future contract project

Make sense?  Use your judgment here. Like we’ve seen in the rest of the steps, this strategy document is your work of art, so paint it how your own creative noggin sees it, Picasso.

Other suggestions for what you might include here? Love it? Hate it? Think this step stinks or mad I didn’t include music to listen to for this one? Let’s hear about it in the comments!



Read more...

Patience is an SEO Virtue

Posted by Kate Morris

We have all been there once or twice, maybe a few more times than that. You just launched a site or a project, and a few days pass; you log in to analytics and webmaster tools to see how things are going. Nothing is there.

WAIT. What?!?!?! 

Scenarios start running through your mind, and you check to make sure everything is working right. How could this be?

It doesn't even have to be a new project. I've noticed things on clients' sites that needed fixing: XML sitemaps, link building efforts, title tag duplication, or even 404 redirection. The right changes are made, and a week later nothing has changed in rankings or in the webmaster consoles across the board. You are left thinking, "What did I do wrong?"


A few client sites, major sites mind you, have had issues recently like 404 redirection and toolbar PageRank drops. One even had to change a misplaced setting in Google Webmaster Tools that was pointing to the wrong version of their site (www vs. non-www). We fixed it, and their homepage dropped in the rankings for their own name.

That looks bad. Real bad. Especially to the higher ups. They want answers and the issue fixed now ... yesterday really.

Most of these things are being measured for performance and some can even have a major impact on the bottom line. And it is so hard to tell them this, even harder to do, but the changes just take ...

Patience

That homepage drop? They called on Friday, and as of Saturday night things were back to normal. The drop most likely lasted 2-3 days, but this is a large site. Another, smaller client had redesigned their entire site. We put all the correct 301 redirects in place for the old pages and launched the site. It took Google almost 4 weeks to completely remove the old pages from the index. There were edits to URLs that caused 404 errors; though they were fixed within a day, it took over a week for that to show in Google Webmaster Tools.

These are just a few examples where changes were made immediately, but the actions had no immediate return. We live in a society that thrives on the present, on immediate return. As search marketers, we make C-level executives happy with our ability to show immediate returns on our campaigns. But like the returns on SEO itself, the reflection of SEO changes takes time.

The recent Mayday and Caffeine updates are sending many sites to the bottom of the rankings because of a lack of original content. Many of them are doing everything "right" in terms of on-site SEO, but now that isn't enough. They can change their site all they want, but until there is relevant, good content plus traffic, those rankings are not going to return for long tail terms.

There has also been a recent crackdown on over-optimized local search listings. I have seen a number of accounts suspended or just not ranking well because they are, in effect, trying too hard. There is such a thing as over-optimizing a site, and too many changes at once can raise a flag with the search engines.

One Month Rule


Here is my rule: Make a change, leave it, go do social media/link building, and come back  to the issue a month later. It may not take a month, but for smaller sites, 2 weeks is a good time to check on the status of a few things. A month is when things should start returning to normal if there have been no other large changes to the site. 

We say this all the time with PPC accounts. It's like statistical analysis: you have to have enough data to work with before you can see results. And when you are waiting on a massive search engine to make some changes, once they do take effect in the system, you then have to give them time to work. 

So remember the next time something seems to be not working in Webmaster Tools or SERPs:

  1. If you must, double check the code (although you’ve probably already done this 15 times) to ensure it’s set up correctly. But then,
  2. Stop. Breathe. There is always a logical explanation. (And yes, Google being slow is a logical one)
  3. When did you last change something to do with the issue?
  4. If it's less than 2 weeks ago, give it some more time.
  5. Major changes, give it a month. (Think major site redesigns and URL restructuring)




Another Saturday Night in Web Development

How many times must it be said?

"BACK UP YOUR SITE REGULARLY"

I'm 5 cups of coffee past sanity and still have an hour or two before I can sleep. Recently, many of our clients began getting hacked by the children on the Defacement Logging Website that shall remain nameless. (Quite frankly, I don't want to add ourselves to the hit list.) They targeted three of our clients' sites this past week. Their targeting was very general in nature and used a few different methods: two were injections, and one is still being debated.

The part that hurts is that one of our clients didn't back up his database. After a forced entry into your website, it is generally considered a good idea to burn the damage, i.e., kill the database and erase the data from the server to ensure that no back-door code has been left in the site. Tonight, that is not an option. Our client had apparently gone three months without an XML backup and has misplaced the copy he did make.

So instead of the famous "5 Minute Install," or in some cases the "5 Minute Reset," we get to comb through thousands of lines of the MySQL database to make sure we eliminate any code that may have been left behind. I will not be a very happy person in the morning, and I'm grateful that it will be Sunday. Hopefully we get a day off.
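
Since the whole mess comes down to missing backups, here is a minimal sketch of the kind of nightly dump script we're talking about. This is not our exact setup: the database name, credentials, and paths are placeholders, and it assumes the host allows shell_exec() and ships the mysqldump client, which is not guaranteed on every shared plan.

<?php
// Minimal nightly-backup sketch. Database name, credentials, and paths are
// placeholders; assumes shell_exec() is enabled and mysqldump is installed.
$dbHost = 'localhost';
$dbName = 'wordpress_db';   // placeholder database name
$dbUser = 'backup_user';    // placeholder read-only backup user
$dbPass = 'change_me';      // placeholder password

$backupDir = __DIR__ . '/backups';
if (!is_dir($backupDir)) {
    mkdir($backupDir, 0700, true);
}

// One gzipped dump per day, e.g. backups/wordpress_db-YYYY-MM-DD.sql.gz
$file = sprintf('%s/%s-%s.sql.gz', $backupDir, $dbName, date('Y-m-d'));

$cmd = sprintf(
    'mysqldump --host=%s --user=%s --password=%s %s | gzip > %s',
    escapeshellarg($dbHost),
    escapeshellarg($dbUser),
    escapeshellarg($dbPass),
    escapeshellarg($dbName),
    escapeshellarg($file)
);
shell_exec($cmd);

Point a daily cron job at a script like this and copy the dumps somewhere off the server, and a hacked database becomes an inconvenience instead of an all-nighter.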

Protecting Against SQL Injections

No, this is not a replay of 2002. SQL injection is still an active exploit for hijacking and defacing a database-driven website. While there are several methods and site builds that can be attacked this way, we will be confining our feeds to SQL injections into a Wordpress MySQL database. Here is the first article we will be referencing. It comes from G4B1DEV and is a good article on ways to protect your PHP and MySQL database. In none of these feeds will we reference the "how to" of SQL injection, but we will be adding regular posts on how to protect your website. -Enjoy-

SQL Injection Protection in PHP With PDO

Database abstraction layers like PHP's Portable Data Objects (PDO) are not a new concept, but a lot of developers don't seem to realise the security benefit they're getting for free by using them: inherent protection against SQL injection.

SQL injection is the buffer overflow of the web application world. It's been around forever, and every web application developer should know how to write secure code that's not vulnerable to it. For those not in the know, SQL injection is a technique whereby a malicious attacker can exploit inadequate data validation to inject arbitrary SQL code into your application's queries and have it executed as though it is a legitimate query.

I won't go too deeply into SQL injection in this article, but here's a simple example. The front page of your application has a login form, which is submitted to a PHP script to validate the user's credentials and allow or deny access to the application. The login form submits two variables by POST as follows:

username=fred&password=Fr3dRul3z

The POSTed data is then used to build an SQL query to validate the credentials, like this:

$sql = "SELECT * FROM users WHERE username = '".$_REQUEST['username']."' AND password = '".$_REQUEST['password']."'";

This would result in the SQL query:

SELECT * FROM users WHERE username = 'fred' AND password = 'Fr3dRul3z'

Assuming a row exists in the database with these credentials, the user would be allowed to log in. An attacker could easily circumvent this authentication scheme by escaping out of the username field into the SQL query, entering nothing into the password field and this into the username field:

fred' OR 1=1 --

The resulting SQL query string would look like this:

SELECT * FROM users WHERE username = 'fred' OR 1=1 -- ' AND password = ''

Which, as I'm sure you can see, would select all users from the database, as the condition 1=1 will always be true. The rest of the query is discarded with the comment operator '--'.

The way to avoid this kind of attack is to sanitise the data submitted to the form by escaping everything that could be used to escape the confines of the quotes around the fields (e.g. mysql_real_escape_string() if you're using MySQL). However, in a land far away somebody was inventing database abstraction layers...

The primary objective of database abstraction layers like PDO is clean abstraction in your code away from the database platform, so, theoretically, you could switch database platforms from, say, MySQL to PostgreSQL or Oracle with minimal changes to the code. In practice this depends heavily on how much your code relies on platform-specific features like triggers and stored procedures, but if you're not relying on them at all and you're just doing simple INSERT/UPDATE/DELETE operations, it's a free ride.

Sounds moderately useful, but nothing exciting, right? Right. Another neat feature invented a long time ago is prepared statements, and most database abstraction layers (including PDO) implement this as a way to perform the same query multiple times with different data sets (e.g. inserting a whole bunch of new rows). 
Now, when building statements with PDO, instead of building the SQL string manually as demonstrated earlier, we build the statement with placeholders like this:

$sql = "INSERT INTO fruits (name, price) VALUES (?, ?)";

and then execute the query with a data set passed to the abstraction layer as follows:

$sth = $dbh->prepare($sql);
$sth->execute(array($fruit, $price));

When the data is handed to PDO like this, it either passes the data on to the database driver directly, or builds the query internally in a safe manner with any potentially malicious data encoded or escaped. As you can see, this is an easy way around the problem of SQL injection.

However, prepared statements with PDO aren't all puppies and rainbows. Using prepared statements can introduce a number of interesting caveats of which developers should be aware. For example, in the MySQL client API prepared statements cannot execute certain types of queries[1] and they do not use the query cache[1][2], which may have an impact on your application's performance.

The inherent security in using prepared statements sounds great, but developers should not let PDO and other abstraction layers/prepared statement implementations lull them into a false sense of security. Untrusted data should always be validated and sanitised; PDO is just another line of defense. It doesn't cover the territory of a multitude of other input validation vulnerabilities like cross-site scripting, but it does do a good job of protecting applications against SQL injection. The best strategy is only allowing known good data by whitelisting characters and matching input data against regular expression patterns, then using prepared statements to catch anything SQL injection-wise that the input validation misses, all in conjunction with a web application firewall like ModSecurity.

PDO has been built in to PHP since version 5.1.0, which was released in November 2005. Unless you've got a good reason for not using it in your PHP apps, you should be: it is a portable replacement for the old mysql_* functions and other platform-specific functions, with the added benefit of protection against SQL injection.

Author: Loukas Kalenderidis
Article Source: EzineArticles.com
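
To tie this back to the login form the article opens with, here is a minimal sketch of that same credential check rewritten with a PDO prepared statement plus the whitelist-style validation the author recommends. The DSN, credentials, and username pattern are placeholder assumptions for illustration, and the plain-text password comparison is kept only to mirror the article's example.

<?php
// Minimal sketch of the article's login check using a PDO prepared statement.
// DSN, credentials, and the username pattern are placeholders, not a drop-in.
$dsn = 'mysql:host=localhost;dbname=myapp;charset=utf8';
$dbh = new PDO($dsn, 'db_user', 'db_pass', array(
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
));

$username = isset($_POST['username']) ? $_POST['username'] : '';
$password = isset($_POST['password']) ? $_POST['password'] : '';

// Whitelist-style validation first, as the article recommends: only allow
// characters we expect to see in a username.
if (!preg_match('/^[A-Za-z0-9_.-]{1,32}$/', $username)) {
    exit('Invalid username.');
}

// The placeholders mean user input is never concatenated into the SQL string,
// so an input like  fred' OR 1=1 --  is treated as a literal value.
$sql = 'SELECT * FROM users WHERE username = ? AND password = ?';
$sth = $dbh->prepare($sql);
$sth->execute(array($username, $password));
$user = $sth->fetch(PDO::FETCH_ASSOC);

if ($user) {
    // Credentials matched; log the user in.
} else {
    // No match; show the login form again.
}
// (A real application should store hashed passwords rather than comparing
// plain text; the plain comparison here just mirrors the article's example.)

With the placeholders in place, the injected string never escapes the query, which is exactly the free protection the article is describing.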

New Directories

We've revived and added new directories to the ever-expanding Naper Design Business Network. While the number of business directories is currently limited, there are many more to come. We are using DirectoryPress, produced by Mark Fail, and will be asking everyone to list a business they know of. Our hope is to add 200 businesses a month to each directory. Here is the first batch; more will be added in the days to come.

  • Naperville Business Directory
  • Raleigh Business Directory (serving the entire Triangle Area)
  • Chicago Business Directory
  • Aurora Business Directory (Aurora, Illinois)
  • Charleston Business Directory (for the entire Tri-County and Low Country Area)

There will be many more directories added to this list, and all will be managed for accuracy. After implementation on July 1st, listings will be automatic for all registered users of each site. Leave your thoughts, and we'll see what we can add to make the directories more user friendly.