BICYCLE HELMET
RESEARCH
FOUNDATION

cyclehelmets.org



Seasonal variation in hospital admission for road traffic injuries in England: analysis of hospital statistics

Gill M & Goldacre M. Injury Prevention 2009;15:374-78

Summary

The authors gathered Hospital Episode Statistics (HES) data on traffic injuries in England for the six years 1999-2004 in order to analyse seasonal variation in the injuries to different road users.

Original abstract:  

Admissions for car occupants were highest in the winter months but seasonal variation was not great (highest and lowest months: December, 16% above monthly average; June, 5% below). There was a summer peak and winter trough in admissions for adult cyclists (June, 34% above average; December, 27% below) and motorcyclists (August, 33% above average; January, 43% below). Admissions for child pedestrians were highest in late spring and lowest in mid-winter (May, 24% above average; December, 28% below). By contrast, admissions for adult pedestrians were higher in winter than summer (December, 33% above average; July, 17% below). From April to September, there were more admissions for pedestrians and cyclists in England (44 875 in the six years of the study) than for car occupants (34 582). For cyclists, proportionally more injuries in the winter months were severe. Severity of injuries to car occupants did not show seasonal variation.

This commentary explains that the data are applied inappropriately, causing the risks of cycling to be over-estimated and the risks of walking to be under-estimated. The error arises from the inconsistent definition of "traffic injury" in the HES (ICD10) coding system. The authors are not alone in this confusion. For some years, the UK Department for Transport has also failed to emphasise that "traffic injury" is defined differently for cyclists and pedestrians. This has led to the erroneous belief that serious cyclist injuries are under-reported by the Police. Ironically, in reality, it is serious pedestrian injuries that are under-reported by the Police. The error renders the analysis invalid as far as cycling is concerned.

Evidence presented

Six years of summed data are presented for injuries in each month of the year and for each road user group. Each month's total relative to the monthly average is also presented as a quick measure of seasonal variation. The peak and trough months are then assessed using a Poisson regression model and a chi-squared test. All road user groups showed significant seasonal variation. These data all relate to injuries requiring admission to hospital (a generally accepted definition of 'serious injury').
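The two measures used in the paper are straightforward to reproduce. The sketch below, using invented monthly counts purely for illustration (not the paper's data), shows how percentage deviation from the monthly average is calculated and how a simple Pearson chi-squared statistic tests for seasonal variation against a uniform expectation:

```python
# Illustrative sketch, NOT the paper's data: the monthly counts below are
# invented to demonstrate the two calculations used in the paper.

months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
          "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]
admissions = [480, 470, 520, 560, 600, 670,
              650, 660, 590, 540, 500, 460]  # hypothetical cyclist admissions

mean = sum(admissions) / len(admissions)  # monthly average

# Percentage above/below the monthly average, as quoted in the abstract
deviation_pct = {m: round(100 * (n - mean) / mean, 1)
                 for m, n in zip(months, admissions)}

# Pearson chi-squared statistic for the null hypothesis that admissions
# are uniform across months (expected count = monthly mean in each month)
chi_sq = sum((n - mean) ** 2 / mean for n in admissions)

print(deviation_pct["Jun"], deviation_pct["Dec"])  # summer peak, winter trough
print(round(chi_sq, 2))  # compare against the chi-squared distribution, 11 d.f.
```

The paper itself uses Poisson regression for the formal test; the chi-squared statistic above is the simpler classical equivalent for count data.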

The authors make a fundamental error in their assessment of cyclist and pedestrian injuries. They fail to appreciate that in the HES, 'traffic injury' has a different definition for cyclists and pedestrians, thus:

Cyclist traffic injury: all injuries in collisions; all falls in the highway; all falls in unknown place (by default assumed to be in the highway).

Pedestrian traffic injury: injuries due to traffic collision only (in practice, a very few non-vehicle related injuries also get recorded).

This means that pedestrians suffering injury in falls are not recorded as traffic injuries, but cyclist falls are. Further, a large but unknown number of cyclists injured off-road will be wrongly coded as "traffic accidents" due to the default assumption that injury occurred on the highway in the absence of specific information to the contrary. This is known as the "dustbin code" effect.

The net result is over-recording of cyclist "traffic injuries" and considerable under-recording of pedestrian "traffic injuries".

The extent of the consequent distortion of the results may be appreciated if pedestrian injuries are re-defined to bring them into line with the definition used for cyclists. This is possible using data presented in the 2006 and 2007 editions of the standard publication Road Casualties Great Britain (RCGB). Even if elderly pedestrians (who are most likely to suffer falls) are excluded from this exercise, the results are revealing (single-year data):

 

Serious injury due to:                   Cyclists                        Pedestrians (<=65 y.o.)
Collision with motor vehicle             2,186                           7,688
Fall (in highway or unspecified place)   4,880                           63,500
Source                                   RCGB 2006 Chapter 6 Table 6a    RCGB 2007 Chapter 6 Table 6f

 

The inclusion of the full range of pedestrian injuries multiplies their total almost tenfold, from 7,688 to 71,188.
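The arithmetic can be checked directly from the RCGB figures quoted in the table above:

```python
# Serious-injury counts quoted from RCGB 2006/2007 (see table above).
cyclist_collisions = 2186
cyclist_falls = 4880            # falls in highway or unspecified place
pedestrian_collisions = 7688    # <=65 y.o., traffic collisions only
pedestrian_falls = 63500        # falls in highway or unspecified place

# Pedestrian total under the two competing definitions of "traffic injury"
narrow = pedestrian_collisions                    # HES/ICD10 pedestrian definition
broad = pedestrian_collisions + pedestrian_falls  # cyclist-style definition

print(broad, round(broad / narrow, 1))  # total and multiplication factor

# Falls per collision for each group under the broad definition
print(round(cyclist_falls / cyclist_collisions, 1),
      round(pedestrian_falls / pedestrian_collisions, 1))
```

The broad definition gives 71,188 pedestrian injuries, a factor of about 9.3 over the collisions-only figure, and shows pedestrians suffering far more falls per collision than cyclists once falls are counted the same way for both groups.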

It is surprising that the authors never suspected a serious problem with the data. Table 1 of their paper shows, for cyclists, about 2.5 falls for every collision, but for pedestrians almost no falls at all, even though falls while walking are the greatest cause of head injury. The authors do not appear to have questioned the content of their data.

The authors refer to a Government research report, "Road accident casualties: a comparison of STATS19 [i.e. Police-reported] data with Hospital Episode Statistics" (their ref 10), which presents "traffic injury" data but does not highlight the different definitions for cyclists and pedestrians. The Department for Transport (DfT) repeatedly claims that cyclist injuries are under-reported by the Police. The DfT is wrong: its researchers are also confused by the inconsistent definition of "traffic injury". Ironically, it is pedestrian injuries in traffic accidents that are under-reported, as is clear from the data presented in the above DfT report. That the DfT gets it wrong, however, does not excuse Gill and Goldacre from making the same mistake.

The authors go on to make a loose comparison of risk between cycling and driving, concluding that cycling is riskier. Given the foregoing discussion, it is not surprising that this conclusion is unsound. Their analysis would be flawed even with correct data, since it considers neither the differences between the cycling and motoring populations nor the different ways in which bicycles and cars are used.

A much fairer comparison of road user risk has been available for some years (Wardlaw, 2002). It shows that the risks of driving and cycling vary from country to country. Driving in Britain is safe by international standards, and the British driving population is dominated by experienced, middle-aged drivers; young drivers in the UK face risks eight times the national average. In other developed countries, such as France, driving is as risky as, or riskier than, cycling in Britain. One cannot therefore make a blanket judgement that cycling is riskier than driving, since the range of risks in driving amply contains the risks of cycling.

Conclusions

This paper does provide some useful data on its main objective, seasonal variations in traffic injuries to different road users. Concerning cycling, however, it does not handle the evidence in a useful manner. It reaches invalid conclusions about the risks of walking and cycling, and does not give proper consideration to the distinct nature of those risks relative to driving.

References

Wardlaw, 2002

Wardlaw MJ, 2002. Assessing the actual risks faced by cyclists. Traffic Engineering + Control, December 2002, pp. 352-356.