The Interactive Storytelling Team is always looking for ways to make news stories more engaging for audiences, so when they secured early access to a major Australian National University report on social class in Australia they created a quiz - ‘Working class or affluent? Find out where you fit in Australia’. Developer Simon Elvery explains the technical challenges involved and how the quiz made the story more interesting and relevant to readers.
How did the social class quiz come about?
The ANU poll is a survey of public opinion conducted on behalf of the Australian National University (ANU) to gain insight into Australians’ opinions on issues of national importance. After collaborating with them last year to report on the results of their poll examining opinions on government expenditure, we were given early access to the results of their latest report looking at social class in Australia.
Reporting on opinion polls and surveys involves relating a lot of numbers, percentages, and other statistics, so it's easy to turn an interesting topic into a lifeless report laden with figures. The audience's attention can be quickly lost: it's hard to read a story chock-full of numbers and keep them all straight in your head.
When reporting the results of an opinion poll, we now often look to engage the audience by re-creating parts of the poll as a quiz to let people gauge how their own opinions compare against those of the wider population. We’ve taken this approach in the past with some success. It gives readers the ability to relate more directly to the subject matter and lets us lead them through the numbers in a way which keeps it relevant to them.
The ANU’s social class survey presented a unique opportunity due to the methodology used—it would allow us to create a short quiz so readers could discover their own social class according to the researchers’ model and the results of the original ANU survey.
What was involved in creating the quiz?
The first step in creating the quiz was building an understanding of the research. The research used a statistical technique called latent class analysis (LCA), a method for finding unobserved groupings within a set of observations (in this case, survey respondents) based on their answers. While I have a reasonable understanding of statistics, I'm by no means a statistician, and LCA was a method I wasn't previously familiar with.
After doing a lot of reading to get my head around LCA, I got to work using the ANU model to generate the data we would need for the ABC News Digital version of the survey. This was done using a statistical programming language known as R and involved a lot of back-and-forth with Jill Sheppard, one of the researchers at ANU, to ensure my calculations were correct.
The final piece of the puzzle was doing the web programming to adapt our existing quiz tool for this new purpose.
What challenges did you face?
The key challenge was making sure we understood the research and the statistical model correctly. Working closely with the researchers helped, as did doing lots of reading. I also wrote quite a bit of code to validate our understanding and test our assumptions against the model.
Another challenge was generating the data we needed to give the audience immediate feedback on their answers. Although the survey has just five questions, they produce around 24,000 possible input combinations for the model. A lot of time went into writing the code to generate the results data from the model and then verifying it against known expectations.
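To illustrate where a number like that comes from and how precomputation makes instant feedback possible (the option counts below are invented so they multiply out to 24,000, and the placeholder model stands in for the real evaluation, which was done in R), every answer combination can be enumerated and its result stored in a lookup table:

```python
from itertools import product

# Invented option counts for five questions; they multiply out to 24,000.
options_per_question = [6, 8, 10, 5, 10]

# Enumerate every possible combination of answers.
all_inputs = list(product(*(range(n) for n in options_per_question)))
print(len(all_inputs))  # 24000

def classify(answers):
    """Placeholder for evaluating the fitted model (not the real one)."""
    return sum(answers) % 3

# Precompute a result for every combination so the published quiz only
# needs a table lookup rather than running the model for each reader.
lookup = {answers: classify(answers) for answers in all_inputs}
```

Verifying a table like this against known expectations from the model — spot-checking combinations the researchers had already classified — is the kind of validation work mentioned above.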
What was the audience response?
The audience responded very well to the quiz, easily making it one of the most popular stories of the month. There was a lot of interest in the results and discussion around the research, which is a great outcome. Some audience members were critical of the classification they received, and it was fascinating to watch how research like this generalised to a wider audience.
Were there any lessons from this for future digital projects?
Publishing an interactive like this, where it's impossible to test every combination of user input against expected outcomes, always comes with a degree of apprehension. For the most part we were really satisfied with how everything worked and how it was received; however, there are definitely a couple of things that could have been done better.
If we were doing it again, we'd definitely have spent more time thinking about how to explain the research and anticipating audience questions about the quiz results. The nature of the statistical analysis behind this research and the way the social classes are defined mean the quiz won't always classify a respondent perfectly, and our readers noticed that. This reality doesn't at all reduce the validity of the research, but without adequate explanation it may raise questions of validity in the minds of some readers.
A final lesson from this story is another reminder of how long it takes to do a story like this well. We were lucky that we started planning and thinking about this story early: there were a few last-minute hitches which, had we not allowed for the possibility, would have delayed the story. Even more thorough planning would have shown in the quality of the final result.
Overall, we were very happy with how the story finally came together and with the audience response it generated. We’re really looking forward to the next chance we have to tell a story this way, and to working directly with researchers on presenting the outcomes of their work to the ABC’s audience.