Final TV debate: policy substance matters little to voters

Linguamatics’ linguistic analysis with NPCU provided insight last night into tweet sentiment towards the party leaders during the final televised UK election debate, and was immediately picked up by Rory Cellan-Jones at the BBC. The preliminary results from tweets sent during the debate, including a new view of the instant reactions to particular issues (Figure 1), showed a further narrowing of the gap between the leaders’ performances (Figure 2), but with Nick Clegg still performing best overall.

Figure 1: How twitterers reacted to particular issues in the final debate


The overall tweet analysis (Figure 2) for the three debates shows the percentage of tweets in favour of each of the leaders. Nick Clegg’s share has dropped to 37% from 43% in the second debate, Gordon Brown is down to 32% from 35%, while David Cameron rose to 31% from 22%.

Figure 2: Number of tweets showing positive sentiment towards each party leader


Top issues for the twitterers in the third debate (Figure 3 below) were immigration, banking, the economy and tax. Clegg and Brown shared the lead on immigration, Clegg was ahead on banking and tax, whilst Brown clearly won on the economy. The fact that Cameron didn't win any issues of policy substance, but nevertheless improved his performance, suggests viewers are not assessing the leaders on policy specifics - hardly a revelation, of course. Try as Labour might to shift the terrain onto policy, viewers' connection to the leaders is shaped by personality, body language and other factors on which Brown performs relatively poorly. Brown walloped his rivals on the economy, but didn't win the debate.

Figure 3: Winner per topic from number of relevant positive tweets


Tracking positive sentiment towards each of the leaders during all three debates (Figure 4) also reflects the narrowing gap between their performances.

Figure 4: Positive sentiment towards each leader over time, across all three debates


The published results come from the deep analysis of 187,000 tweets sent by 43,656 twitterers between 8.30pm and 10.00pm on the night of the third televised UK election debate.

Linguamatics’ I2E text mining software was used to find and summarize tweets that have the same meaning, however they are worded. I2E identifies the range of vocabulary used in tweets and uses linguistic analysis to collect and summarize the different ways opinion is expressed.
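Purely as an illustration of the general idea (I2E is a proprietary NLP platform, and none of the names below are part of its API), here is a minimal Python sketch of grouping differently worded tweets that express the same positive opinion about a leader. The patterns and leader lookup are hypothetical and far cruder than full linguistic analysis:

```python
import re
from collections import defaultdict

# Hypothetical, simplified patterns; the real analysis relies on
# linguistic processing rather than a handful of regular expressions.
LEADERS = {"clegg": "Nick Clegg", "brown": "Gordon Brown", "cameron": "David Cameron"}
POSITIVE_PHRASES = [
    r"is doing (really )?well",
    r"made a (good|great) point",
    r"won that (round|exchange)",
]

def positive_mentions(tweet):
    """Return the leaders a tweet praises, however the praise is worded."""
    text = tweet.lower()
    mentioned = [name for key, name in LEADERS.items() if key in text]
    if mentioned and any(re.search(p, text) for p in POSITIVE_PHRASES):
        return mentioned
    return []

counts = defaultdict(int)
for tweet in ["Clegg is doing really well on tax",
              "Brown made a good point about the economy"]:
    for leader in positive_mentions(tweet):
        counts[leader] += 1
print(dict(counts))
```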

Description of the figures

Figure 1 shows how the twitterers reacted to particular issues during the debate.
This is a timeline showing the positive tweets made about each leader in relation to audience questions or key statements made by a leader.
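A timeline like this can be built by bucketing already-classified positive tweets by minute. The sketch below assumes timestamped (time, leader) pairs produced by a positive-mention detector such as the one sketched earlier; it is not the actual pipeline behind Figure 1:

```python
from collections import Counter
from datetime import datetime

def minute_bucket(timestamp):
    """Round a tweet timestamp down to the minute for plotting over time."""
    return timestamp.replace(second=0, microsecond=0)

def positive_timeline(tweets):
    """tweets: iterable of (timestamp, leader) pairs for tweets already
    classified as positive about that leader.
    Returns {leader: Counter({minute: count})} ready to plot as a timeline."""
    timeline = {}
    for ts, leader in tweets:
        timeline.setdefault(leader, Counter())[minute_bucket(ts)] += 1
    return timeline

sample = [
    (datetime(2010, 4, 29, 20, 35, 12), "Nick Clegg"),
    (datetime(2010, 4, 29, 20, 35, 48), "Nick Clegg"),
    (datetime(2010, 4, 29, 20, 36, 5), "Gordon Brown"),
]
print(positive_timeline(sample))
```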

Figure 2 shows the number of tweets that expressed a positive sentiment towards each of the party leaders.
The analysis identified tweets saying that a particular leader was doing well or made a good point, or that they like the leader, etc. Linguistic filtering removed examples which were about expectations, e.g. “I hope the leader will do well”, questions, such as “anyone think the leader is doing well?”, and negations, such as “the leader did not do well” or “the leader made no sense”.
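A rough approximation of that filtering step is sketched below. The surface patterns are hypothetical stand-ins; the actual filtering uses linguistic analysis rather than keyword matching, but the sketch shows how expectations, questions and negations can be kept out of the positive counts:

```python
import re

def is_usable_positive(tweet):
    """Reject tweets that only express expectation, ask a question,
    or negate the positive phrase, keeping genuine positive statements."""
    text = tweet.lower().strip()
    if text.endswith("?") or text.startswith(("anyone think", "does anyone")):
        return False                      # questions
    if re.search(r"\b(hope|expect|hopefully)\b", text):
        return False                      # expectations
    if re.search(r"\b(not|no|didn't|never)\b", text):
        return False                      # negations
    return True

for t in ["Clegg made a good point",
          "I hope Brown will do well",
          "anyone think Cameron is doing well?",
          "Brown did not do well"]:
    print(t, "->", is_usable_positive(t))
```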

Figure 3 shows winner per topic from number of relevant positive tweets.
The analysis built a list of topics by identifying words or phrases that described the discussion subject; for example, Trident, nuclear weapons, armed forces, military and Eurofighter were all assigned to defence. The tweets were then analyzed to find out who was saying positive things about each leader in relation to a specific topic.
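In its simplest form, such topic assignment can be expressed as a keyword lookup. The lexicon below is illustrative only (it is not Linguamatics' actual topic vocabulary, which was derived through linguistic analysis of the tweets themselves):

```python
# Illustrative topic lexicon; keyword lists are assumptions, not the real vocabulary.
TOPIC_KEYWORDS = {
    "defence":     ["trident", "nuclear weapons", "armed forces", "military", "eurofighter"],
    "immigration": ["immigration", "immigrants", "border"],
    "economy":     ["economy", "recession", "deficit", "growth"],
    "banking":     ["bank", "bankers", "bonuses"],
    "tax":         ["tax", "vat", "national insurance"],
}

def topics_for(tweet):
    """Return the discussion topics a tweet touches on, based on keyword matches."""
    text = tweet.lower()
    return [topic for topic, words in TOPIC_KEYWORDS.items()
            if any(word in text for word in words)]

print(topics_for("Clegg is right about Trident and tax"))  # ['defence', 'tax']
```

Counting positive tweets per leader within each topic then gives the per-topic "winners" shown in Figure 3.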

Figure 4 shows the Figure 1 timeline (positive sentiment towards the leaders over time during the debate) alongside the equivalent positive sentiment results from the two earlier debates.