What is the standard error for Viva Glint survey scores? Related: what constitutes a statistically meaningful change in scores? Research shows that the standard deviation of raw scores on a 5-point scale for survey items is typically between .7 and 1.0. Conservatively, for a group of 100 participants and an SD of 1, the standard error would be .10 (which is .02 of the 5-point scale). This suggests that 2 of the 100 points in the Viva Glint score range would be the standard error, and so a difference of less than 4 points is likely NOT indicative of a real change in scores. Or so I would assume. Thoughts?

Hi Paul,

Not directly on point, but if you haven’t seen these already you might find them helpful based on your questions:


Thanks, Brian. It’s not really helping me with the Glint Score. The first table refers to percent favorable, and the second refers to a mean (but whether that mean is on the 1-5 raw scale or the 0-100 conversion is not clear). I would have expected this to be easier to find… It may need to be an office hours question. Thanks again for getting me closer. The truth is out there. P


Okay, some fake data made me realize this was simple algebra. Every 1-unit change on the raw score (1, 2, 3, 4, 5) corresponds to a 25-point change on the Viva Glint scale (0, 25, 50, 75, 100). Therefore, if the standard deviation for a survey item on the raw scale is 1, then the standard deviation on the Viva Glint scale is 25. For a group of 100 participants, the standard error on the raw score is 1/100^.5 = 1/10 = .1, and the standard error on the Viva Glint score is 25/100^.5 = 25/10 = 2.5.
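If it helps to see the arithmetic spelled out, here is a minimal sketch (Python is just my choice here, nothing Viva Glint provides), assuming the usual SE = SD / sqrt(N) formula and the SD of 1 used above:

```python
import math

def standard_error(sd: float, n: int) -> float:
    """Standard error of a mean: SD divided by the square root of the group size."""
    return sd / math.sqrt(n)

sd_raw = 1.0            # assumed SD on the 1-5 raw scale (upper end of the .7-1.0 range)
sd_glint = sd_raw * 25  # each raw-scale unit spans 25 Viva Glint points

n = 100
print(standard_error(sd_raw, n))    # 0.1 on the 1-5 raw scale
print(standard_error(sd_glint, n))  # 2.5 on the 0-100 Viva Glint scale
```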

The implication is that a second survey (B) of those 100 participants would need a Viva Glint score more than 1.96 × 2.5 = 4.9 points above or below the score of group A to be significant at the p = .05 level. So a crude rule of thumb would be that a change in score of 5 points or greater (for a group of 100) would be statistically significant.

So, if you enter various group sizes, you get this table:

N of 20: 11-point difference or greater is significant
N of 50: 7-point difference or greater is significant
N of 100: 5-point difference or greater is significant
N of 150: 4-point difference or greater is significant
N of 250: 3-point difference or greater is significant
N of 500: ~2-point difference or greater is significant
N > 2,000: ~1-point difference or greater is significant
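
For what it’s worth, here is a small sketch that reproduces the table above under the same assumptions (an SD of 25 on the 0-100 Glint scale, a two-tailed z of 1.96); the group sizes are just the ones listed:

```python
import math

SD_GLINT = 25.0  # assumed: SD of 1 on the raw 1-5 scale, i.e. 25 Viva Glint points
Z = 1.96         # two-tailed critical value for p = .05

for n in (20, 50, 100, 150, 250, 500, 2000):
    threshold = Z * SD_GLINT / math.sqrt(n)
    print(f"N of {n}: a difference of ~{threshold:.1f} Glint points or greater is significant")
```

Rounding to whole points gives the rule-of-thumb values listed above.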

Comparing group scores of different sizes (e.g., company of 1,000 vs benchmark of 10,000) is a bit beyond my stats knowledge at the moment, and I don’t feel like digging up an independent-measures t-test formula. Suffice it to say, these same guidelines are a conservative estimate for most score differences. Just be careful about very small groups.
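
In case anyone does want the two-group version, here is a rough sketch using the standard error of a difference between two independent means, sqrt(SD²/N₁ + SD²/N₂), as a large-sample stand-in for the independent-measures t-test; the equal SD of 25 for both groups and the 1,000-vs-10,000 sizes are just assumptions to illustrate:

```python
import math

SD_GLINT = 25.0  # assumed SD for both groups on the 0-100 Viva Glint scale
Z = 1.96         # two-tailed critical value for p = .05

def min_significant_difference(n1: int, n2: int, sd: float = SD_GLINT) -> float:
    """Smallest score gap between two independent groups that clears p = .05 (z approximation)."""
    se_diff = math.sqrt(sd**2 / n1 + sd**2 / n2)  # standard error of the difference in means
    return Z * se_diff

# Example from above: company of 1,000 vs benchmark of 10,000
print(round(min_significant_difference(1000, 10000), 1))  # ~1.6 Glint points
```

For two groups of equal size, this works out to about 1.4 times (sqrt(2)) the single-group thresholds in the table, which is another reason to be careful with very small groups.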

Please correct my logic, math, or statistics if I am wrong. Thanks. P

