Data in sport is inherently noisy. As technology evolves and more data is generated, it is important that we quantify the boundaries of this noise (variability). Once those boundaries are defined, we can have greater confidence in judgement calls made when observations lie outside them.
Fundamentally, the confidence we can have in systems and data will be determined by their reliability and validity. This article explores how this might be achieved in an applied sporting environment.
RELIABILITY
Reliability refers to the extent to which a tool or technique produces consistent results. In essence, it deals with the repeatability of findings. For example, if a particular study were conducted several times, would it yield the same results each time? If it did, we could say the data, or the instrument that generated it, were reliable.
In the specific case of wearable tracking technology, we know that linear measures of low-velocity locomotion are more reliable than multi-directional measures of high-velocity locomotion. When working with athlete monitoring systems, it is crucial to establish the reliability of the technology, and of each metric it generates, before you can make confident decisions based on its data.
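As a simple illustration of quantifying reliability, a practitioner might run the same standardised drill several times with the same unit and look at how much the reported values vary. The Python sketch below does this with hypothetical numbers: the distances, and the 5% acceptability cut-off, are assumptions for illustration only, not published thresholds.

```python
import statistics

# Hypothetical example: total distance (m) reported by a wearable unit
# over five repetitions of the same standardised running drill.
distances = [412.0, 405.5, 418.2, 409.8, 414.1]

mean_distance = statistics.mean(distances)
between_trial_sd = statistics.stdev(distances)            # spread across repeated trials
cv_percent = (between_trial_sd / mean_distance) * 100     # coefficient of variation

print(f"Mean distance:    {mean_distance:.1f} m")
print(f"Between-trial SD: {between_trial_sd:.1f} m")
print(f"CV:               {cv_percent:.1f} %")

# Illustrative rule of thumb only: treat a CV below ~5% as acceptable
# reliability for this drill and metric.
if cv_percent < 5:
    print("Metric looks reliable for this drill.")
else:
    print("Metric is noisy; interpret small changes with caution.")
```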
VALIDITY
Validity relates to the extent to which a device measures what it claims to measure. For example, if you run 100m, is the technology used to measure that distance accurate?
In the specific case of wearable tracking technology, we know that these devices show acceptable levels of validity for measuring both distance and velocity. This is important when using wearable technology to assess training loads.
For your data to be valid, it must first be reliable. In other words, if a technology cannot produce consistent, repeatable results, then by definition it cannot accurately measure what it purports to measure. Note that the reverse does not hold: a device can be perfectly reliable yet consistently miss the true value.
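To illustrate the validity side, the hypothetical sketch below compares device-reported distances against a known criterion distance (a measured 100 m course) and reports the systematic bias and the mean absolute percentage error. All figures are illustrative assumptions rather than measured results.

```python
import statistics

# Hypothetical example: distance (m) reported by a wearable unit over
# repeated runs of a course with a known (criterion) length of 100 m.
criterion_m = 100.0
device_readings = [101.8, 99.2, 102.5, 100.9, 98.7]

errors = [reading - criterion_m for reading in device_readings]
mean_bias = statistics.mean(errors)                                   # systematic over/under-reading
mape = statistics.mean(abs(e) / criterion_m for e in errors) * 100    # mean absolute % error

print(f"Mean bias: {mean_bias:+.1f} m")
print(f"MAPE:      {mape:.1f} %")
```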
ASSESSING RELIABILITY AND VALIDITY OF ATHLETE MONITORING SYSTEMS
As the use of athlete tracking technologies has become increasingly widespread, the academic community has devoted considerable attention to scrutinising and quantifying the reliability and validity of the data these systems generate.
Reliability and validity of data can be situation- and environment-dependent. As such, practitioners are advised to conduct in-house tests within their own workspace (e.g. standardised runs) to quantify the confidence they can have in the data generated, as sketched below. These tests are unlikely to be as rigorous as those conducted by academic institutions, but they can give you a useful perspective on your systems and can inform some of the processes you put in place.
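One way to turn such in-house tests into a working rule is to use the noise estimate from repeated standardised runs to set boundaries around an athlete's typical value, and only act on observations that fall outside them. The minimal sketch below assumes a hypothetical baseline mean and between-trial SD, and a ±2 SD boundary; these are illustrative assumptions, not a validated protocol.

```python
# Minimal sketch: flag new observations that fall outside the noise
# boundaries established from in-house standardised runs.
# All figures are hypothetical and for illustration only.

baseline_mean = 410.0      # mean distance (m) from repeated standardised runs
noise_sd = 5.0             # between-trial SD from those runs

# Treat anything within +/- 2 SD of the baseline as measurement noise.
lower_bound = baseline_mean - 2 * noise_sd
upper_bound = baseline_mean + 2 * noise_sd

def flag_observation(value: float) -> str:
    """Classify a new observation relative to the noise boundaries."""
    if value < lower_bound or value > upper_bound:
        return "outside noise boundaries - worth a closer look"
    return "within noise boundaries - likely measurement variability"

for observation in [408.0, 395.0, 427.5]:
    print(f"{observation:.1f} m -> {flag_observation(observation)}")
```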