Do you need to eat a whole pie to taste the pie? #throwbackthursday

I am often asked about sample sizes – both in relation to quantitative data and qualitative studies. Issues of sampling, reliability and validity are thorny ones for museum audience researchers, who are often tasked with getting results quickly and often have to resort to the sample that is available – museum visitors onsite, or an online sample weighted towards the population.

In this #throwbackthursday post I’m revisiting two key texts for researchers that address sample sizes – Judy Diamond’s 1999 classic, Practical Evaluation Guide: tools for museums and other informal education settings, and Paulette McManus’s 1991 work Towards Understanding the Needs of Museum Visitors.

Before thinking about the size of the sample, we need to make sure that we are sampling the right kinds of people, paying attention to both reliability and validity. Reliability is defined as ‘… a measure of how consistent a research method is’ (Diamond, 1999, p.77). Validity is defined as a ‘… measure [that] measures what it is intended to measure’ (de Vaus, 1991, p.55) – it relates to how well the analysis actually represents the phenomena it purports to represent: ‘… to know [that] the means of assessment you have developed is accurate and appropriate’ (Diamond, 1999, p.75).

In qualitative research, validity is often addressed through triangulation – using several different ways to collect and analyse data about the same phenomena. Triangulation has been defined as ‘… the use of two or more methods of data collection in the study of some aspect of human behaviour’ (Cohen and Manion, 1994, p.233). It enables the complexity of human behaviour and thought to be uncovered, offers opportunities for introducing more creative and flexible elements into the research, and provides validity checks by comparing data sets gathered in different ways.

So, how many to sample? As McManus reminds us, ‘… there comes a point in sampling where increasing the sample size does not increase the reliability of the information gained … Two well-selected samples of 250 are unlikely to differ in response by more than 8 percent’ (p.42). And, as Diamond puts it, ‘… as a population gets larger, you eventually reach a point where the required sample changes only very little, or not at all’ (p.42).
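Diamond’s point about diminishing returns can be sketched with the standard sample-size formula for estimating a proportion, plus a finite-population correction. Neither author gives this formula, so treat the function name and defaults below as my illustration, assuming 95 percent confidence and a ten percent margin of error:

```python
import math

def required_sample(population, margin=0.10, z=1.96, p=0.5):
    """Sample size needed to estimate a proportion at the given margin of
    error and 95% confidence (z = 1.96), with a finite-population
    correction. p = 0.5 is the conservative worst-case assumption."""
    n0 = (z ** 2 * p * (1 - p)) / margin ** 2       # infinite-population size
    return round(n0 / (1 + (n0 - 1) / population))  # finite-population correction

for pop in (1_000, 10_000, 100_000, 1_000_000):
    print(f"population {pop:>9,}: sample of {required_sample(pop)}")
```

For populations of 1,000 / 10,000 / 100,000 / 1,000,000 this gives roughly 88, 95, 96 and 96 respondents – the plateau Diamond describes.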

McManus outlined the processes for evaluation and suggested sample sizes for each:

  • Demographic/population sample – 300-600 for a small museum; 600-700 respondents for a national museum
  • Front-end/formative/summative samples – between 100 and 150
  • Qualitative surveys asking open-ended, in-depth questions – up to 40
  • Formative qualitative studies (for example, to test an exhibition’s themes) – 15-20

Diamond suggests the following:

  • Exploratory evaluations – 5-10 subjects, and 15-20 in focus groups
  • Most quantitative analyses – 40-60

Overall, thinking about numbers for a quantitative (say, exit) survey, Diamond (1999) suggested that 96 visitors is a sufficient sample to make generalisations and draw conclusions for a museum with one million visitors per year, with a ten percent sampling error. Of course, the more surveys completed, the smaller the sampling error, but for front-end and formative evaluation specifically I believe around 100 onsite surveys should do it. With the ease of online surveys that can be weighted towards the population, I recommend an online sample of 150 for a front-end study.
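Diamond’s figure of 96 is consistent with the standard worst-case margin-of-error formula for a proportion at 95 percent confidence. As a sketch under that assumption (the formula and names here are my illustration, not taken from either text):

```python
import math

def margin_of_error(n, z=1.96, p=0.5):
    """Worst-case margin of error at 95% confidence for a sample of n."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (96, 150, 250, 500):
    print(f"n = {n:>3}: ±{margin_of_error(n):.1%}")
```

A sample of 96 gives roughly a ±10 percent margin, and 150 brings it down to about ±8 percent – one way to see why a slightly larger online sample tightens the estimate.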

Finally, when sampling onsite and online we need to select the best possible sample within the constraints of these informal settings, where there is often less control over who is actually available to take part, and pressure to get results fast.

So, you don’t need to eat the whole pie to know what it tastes like (although you may like to!) – just ensure that you have sliced it correctly.

References

  • Cohen, L., and Manion, L. (1994). Research Methods in Education. London: Routledge.
  • de Vaus, D. (1991). Surveys in Social Research (3rd ed.). London: UCL Press.
  • Diamond, J. (1999). Practical Evaluation Guide: tools for museums and other informal education settings. Walnut Creek, CA: Alta Mira Press. [revised and updated in 2016]
  • McManus, P. (1991). Towards Understanding the Needs of Museum Visitors. In Lord, G., and Lord, B. (Eds.), The Manual of Museum Planning. London: HMSO.

Further references and an explanation of exhibition evaluation can be found here.