Sampling goes to the heart of good quantitative research. The research agency will be responsible for it, but you need to be able to judge whether their proposals are appropriate.

It matters: there's no point doing research unless it's with the audience you actually want to affect.

Sampling is a trade-off between making the sample big enough to give confidence in the findings and keeping the cost down. It needs to be big enough to give a reliable reflection of what your audience as a whole thinks, but size is nothing without recruiting a REPRESENTATIVE sample.

As a rule of thumb:

Samples of less than 100 should be treated with caution.

Samples of 500 or more are generally robust, with only a small margin of error.

Remember, if you want to sub-sample, such as looking at two life stages within a sample, the 100 rule still applies to each sub-group.

Of course, you can get scientific and look at ‘sample error’. This is a calculation that tells you how much the answer would be likely to vary if you repeated the survey. Bigger samples always mean smaller error.

The error depends on the scale of the answer too, though. If 95% of your sample say they like chocolate, that’s pretty clear cut. But if opinion is split 50/50, the error is at its largest.
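To see both effects, here is a minimal sketch using the textbook formula for the 95% margin of error of a proportion (z × √(p(1−p)/n), with z = 1.96); the function name is just for illustration:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Standard margin of error for a proportion.

    p: observed proportion (e.g. 0.95 for a 95% answer)
    n: sample size
    z: z-score for the confidence level (1.96 = the usual 95%)
    """
    return z * math.sqrt(p * (1 - p) / n)

n = 400
print(f"95% answer: ±{margin_of_error(0.95, n):.1%}")  # ±2.1%
print(f"50% answer: ±{margin_of_error(0.50, n):.1%}")  # ±4.9%
```

On the same sample of 400, a clear-cut 95% answer carries roughly half the error of a 50/50 split.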

Ask the research agency. It matters. Here’s an example:

Spontaneous brand awareness = 20%

Sample size = 400

Sample error = ±4.3%

So the real answer = 20% plus or minus 4.3%, i.e. somewhere between 15.7% and 24.3%
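For the figures above, the standard 95% formula can be sketched as follows. Note it gives roughly ±3.9% rather than the quoted 4.3%; an agency's figure may reflect a different confidence level or an adjustment for how the sample was drawn, so treat this as an illustration and ask them how they calculated it:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Textbook margin of error for a proportion at 95% confidence."""
    return z * math.sqrt(p * (1 - p) / n)

# The example from the post: 20% awareness from a sample of 400
moe = margin_of_error(0.20, 400)
print(f"±{moe:.1%}")  # ±3.9%
```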

Since quant is about proof, make sure your proof is reliable!


2 responses to “Sampling”

  1. Lee

    Thanks for this post NP.
    I just wanted to reiterate your point that size isn’t everything when it comes to sampling. It may sound a little trite but it is very important to check who was actually interviewed.
    The composition of the sample is, if anything, more important than the sample size in determining the robustness of a piece of research data.
    Firstly, always check that the population (the group the research is supposed to represent) is actually relevant to the task at hand. I often see debriefs that do not even include a description of the target population (you should see something like ‘British weekly drinkers aged 21-45’ at the foot of every slide). If it’s not there, always ask who the sample is supposed to represent.
    Secondly, as you say NP, check that the researcher has done their best to ensure that the completed interviews are representative of the desired population. This is what people mean when they talk about the quality of a sample. The source of the sample (the sample frame) is critical here. In the same way that you wouldn’t just approach people who live in London to get a sample that is representative of the UK, neither should you be blind to the possible risks of just approaching broadband internet users on a self-selecting panel if you want a sample that is representative of all citizens.
    I could go on at length about the different methods that researchers use to control for biases in the sample selection and response but I’m sure you’re falling asleep already.
    Suffice to say that understanding more about who was interviewed is critical to assessing the robustness of a piece of research. As response rates fall and researchers are getting more creative in their use of different sample frames (e.g. Facebook polls) the more important this assessment will become.


  2. NP

    Thanks for that Lee, and for making more of the composition issue; I didn’t do enough of that.
    Good point about newer recruitment methodology not always benefiting the rigour, even if it’s quicker and cheaper!

