Sociodemographic factors associated with routine outcome monitoring: a historical cohort study of 28,382 young people accessing child and adolescent mental health services
Patient-reported outcome measures (PROMs) are important tools to inform patients, clinicians and policy-makers about clinical need and the effectiveness of any given treatment. Consistent PROM use can promote early symptom detection, help identify unexpected treatment responses and improve therapeutic engagement. Very few studies have examined associations between patient characteristics and PROM data collection.
We used electronic mental health records for 28,382 children and young people (aged 4–17 years) accessing Child and Adolescent Mental Health Services (CAMHS) across four South London boroughs between 1st January 2008 and 1st October 2017. We examined completion rates of the caregiver-rated Strengths and Difficulties Questionnaire (SDQ), a ubiquitous PROM for CAMHS, at baseline and 6-month follow-up.
Results and Conclusions
SDQs were present for approximately 40% (n = 11,212) of the sample at baseline and, of these, only 8% (n = 928) had a follow-up SDQ. Patterns of unequal PROM collection by sociodemographic factors were identified: males were more likely to have a record of a baseline SDQ (aOR 1.07, 95% CI 1.01–1.13), whilst older age (aOR 0.87, 95% CI 0.87–0.88), Black (aOR 0.79, 95% CI 0.74–0.84) and Asian ethnicity (aOR 0.75, 95% CI 0.66–0.86) relative to White ethnicity, and residence in the most deprived neighbourhoods (aOR 0.87, 95% CI 0.80–0.94) were associated with a lower likelihood of a baseline SDQ record. Similar results were found when examining follow-up SDQ collection in the subgroup with a baseline SDQ (n = 11,212). Our findings indicate systematic differences in the currently available PROM data and highlight which groups require increased focus if we are to achieve equitable PROM collection. We need to ensure representative PROM collection for all individuals accessing treatment, regardless of ethnic or socioeconomic background; biased data have adverse ramifications for policy and service-level decision-making.
Key Practitioner Message
- Routine patient-reported outcome measure (PROM) collection can detect emerging symptoms, highlight unexpected treatment responses and increase therapeutic engagement, thus providing an invaluable source of clinical information.
- Previous research has revealed low rates of outcome monitoring in child and adolescent mental health services (CAMHS), yet there has been little exploration of sociodemographic factors associated with variation in collection rates.
- This is the first study to examine PROM collection equality, using electronic healthcare data for a large clinical cohort of children aged 4–17 years. Poorer rates were associated with factors including female gender, older age and Black and Minority Ethnic (BAME) background.
- CAMHS could improve PROM equity in care by facilitating access for underrepresented groups, using more culturally sensitive and age-appropriate methods.
- Future work is needed to understand and address differential PROM collection if lower completion groups are to be fairly represented in clinical and service decisions reliant on PROMs.
In England, the government's National Service Framework (Department of Health, 2004) and the National Health Service (NHS) Outcomes Framework policy (Department of Health, 2010) recommend that Child and Adolescent Mental Health Services (CAMHS) use patient-reported outcome measures (PROMs) to assess the impact of their interventions. These guidelines advise that PROMs, specifically the Strengths and Difficulties Questionnaire (SDQ), are collected during a patient's first appointment, to assist clinicians' understanding of patients' presenting problems, and again six months later, or earlier if treatment closes sooner.
PROMs have proven influential in service improvement and commissioning across a variety of healthcare contexts (Clark, Fairburn & Wessely, 2008), including enhancing the quality of CAMHS practice (Bickman et al., 2011; Lambert et al., 2005; Yeung et al., 2012) and highlighting national and local level trends in wellbeing and symptomatology (Wolpert, 2014). When used systematically, they can help prevent disengagement from CAMHS among young people and their families at high risk by providing early detection of unexpected or slower-than-anticipated responses to treatment (Bickman, 2008; Bickman et al., 2011; Black, 2013; Chen, Ou & Hollis, 2013). Serial outcome measure collection has been found to be advantageous to patients: regular feedback from self-completed questionnaires provides patients with greater insight into their presenting problems and increases their engagement with therapy (Unsworth et al., 2011). Intervention effectiveness is reported to increase for patients who receive and complete repeated PROMs at higher rates (Bickman et al., 2016).
Strategies introduced as part of the government-endorsed Five Year Forward View for Mental Health (2016) mean that PROM data routinely collected across England are now submitted centrally to NHS Digital to facilitate rapid access to and dissemination of this information, which it is hoped will inform improvements to clinical services nationally. In principle, PROMs are available for all children and families accessing CAMHS, yet levels of incomplete PROM data remain high across CAMHS in England (Fleming et al., 2016). This is problematic, as missing data can produce biased estimates of improvement and lead decision makers to draw false conclusions. Clinical audits of specialist CAMH services show that as few as 6% of patients may meet the recommended level of paired outcome monitoring (Batty et al., 2013; Hall et al., 2013; Johnston & Gowers, 2005).
Importantly, patients who do not respond to or do not receive outcome measures may differ systematically from those who respond or are given the opportunity to complete such questionnaires. Sociodemographic and clinical factors are known to affect survey and PROM uptake in clinical populations (Garratt, 2009; Hutchings et al., 2012). For example, return rates for mandatory postoperative postal questionnaires in the UK are lower for patients who are male, socioeconomically deprived, younger, non-White, in worse health, or have previously received treatment for a similar condition (Hutchings et al., 2012).
Equivalent research in the context of children's mental health is lacking. Recent evidence suggests that face-to-face sessional outcome measure use occurs less frequently for children presenting with more complex needs, such as children who are looked after (CLA) or those in contact with social services (Edbrooke-Childs, Gondek, Deighton, Fonagy, & Wolpert, 2016a; Moran et al., 2012; Wolpert, Curtis-Tyler, & Edbrooke-Childs, 2016). Young people from Black and Minority Ethnic (BAME) backgrounds may be at particular risk of PROM non-use. BAME groups have lower rates of engagement with CAMHS than White British groups (Skokauskas et al., 2010). Whilst attendance rates for initial appointments are similar across ethnic groups, BAME children are more likely to withdraw from treatment prematurely and have shorter episodes of treatment, potentially resulting in less opportunity to use PROMs (Miller, Southam-Gerow & Allin, 2008). Consequently, outcome monitoring in CAMHS from nonrepresentative patient populations may provide biased information and may sustain current inequalities in clinical care provision across children's mental health services in the UK.
Current evidence alludes to a disparity between government guidelines for the use of PROMs in CAMHS and their application in routine practice, as well as potential underrepresentation of specific patient populations. To know how to develop and resource the ideal outcome monitoring system, in which no sociodemographic groups referred for treatment are disproportionally represented, we need to understand the potential biases in existing processes. As far as we are aware, there have been no large cohort studies examining sociodemographic or clinical differences in PROM completion for children and young people accessing CAMHS. The present study has two aims: first, to provide an up-to-date assessment of the proportion of children accessing mental health services in the UK who complete PROMs in keeping with national guidance (Department of Health, 2004); and second, to investigate PROM uptake by potential risk factors for non-completion, including sociodemographic and clinical characteristics.
The current study employed a historical cohort design, using data from the South London and Maudsley National Health Service (NHS) Foundation Trust (SLaM) Clinical Record Interactive Search (CRIS). This research tool was developed in 2008 via the National Institute of Health Research Biomedical Research Centre (NIHR BRC) and has been described in detail previously (Downs et al., 2019; Perera et al., 2016; Stewart et al., 2009). In brief, CRIS comprises a direct de-identified copy of the SLaM electronic health records, containing data from a range of adult and child mental health services providing care to a population of approximately 1.2 million residents living in South London. CAMHS are accessible to approximately 190,000 young people living in the local catchment areas of the South London boroughs of Croydon, Lambeth, Lewisham and Southwark and provide a range of mental health services delivered by multi-disciplinary teams in local inpatient and outpatient settings (Downs et al., 2017).
Data were obtained from children and young people whose referrals were accepted by SLaM CAMHS between 1st January 2008 and 1st October 2017. Typically, CAMHS accept referrals from primary care, social care and educational services for children and young people of school age, between 4 and 18 years old, experiencing neurodevelopmental, behavioural or emotional difficulties. Cases were included for analysis if children were aged between 4 and 18 years, were allocated to a SLaM outpatient service for treatment (including patients who began treatment and those who never attended their initial appointment) and lived within the local geographical catchment area of SLaM at the time of the accepted referral.
The success of using database registries for clinical research hinges on the type and quality of EHR data available (Perera et al., 2016). Therefore, to improve the reliability of data extraction, we opted to extract sociodemographic and clinically important variables from structured fields rather than free text, as these are more readily accessible. Data were extracted from the CRIS database using SQL queries, which returned raw, uncleaned flat data files in Microsoft Excel format.
Outcome variables: baseline and follow-up SDQ completion
The SDQ is a structured 25-item questionnaire screening for symptoms of childhood emotional and behavioural psychopathology (Goodman, 1997). The caregiver SDQ is a NICE-recommended PROM, applicable across all CAMHS services and presenting problems. It is current clinical practice to collect caregiver-rated SDQs for young people, either by post before their first face-to-face meeting using confidentially stamped and addressed envelopes, or during a clinical appointment. However, there is considerable variation both within and between services regarding when to request baseline SDQ data. In cases where there is ambiguity about how to proceed with the referral, more clinical information, including the SDQ and other NICE-recommended disorder-centric PROMs such as the Revised Children's Anxiety and Depression Scale (RCADS), a measure used to screen for anxiety and depression (Chorpita et al., 2000; Ebesutani, Bernstein, Nakamura, Chorpita, & Weisz, 2010), may be requested before a child is accepted or rejected by a service. Conversely, if the referral outcome is clear, SDQ data may be requested for the first time during the patient's initial appointment with CAMHS. In the event of long waiting times, families might be contacted by post before being offered an initial appointment, to help discern whether CAMHS input is still required or to provide an up-to-date assessment of symptomatology and family concern. The staff member who requests SDQ information (i.e. clinician, assistant psychologist or administrator) also varies depending on the time of administration. Returned SDQ data are entered into the child's electronic health record (EHR), either by scanning and uploading a copy of the completed SDQ or by input via an inbuilt electronic SDQ form. Once added to the patient's EHR, SDQ information is available to any approved member of NHS staff, as per other information collected as part of routine CAMHS care.
Baseline caregiver-rated SDQs were coded as present in CRIS if completed before, or within 30 days after, the first face-to-face consultation with a CAMHS clinician, as this was deemed the maximum time allowed to elapse for reported symptoms to still reflect presenting difficulties at the time of initial assessment. Follow-up SDQs were identified if caregivers completed them within 180 days of a baseline SDQ, irrespective of whether the questionnaires occurred within the same episode of CAMHS care. Of note, this search strategy only detects SDQ data when the electronic version of the measure has been completed, not when added to the EHR as a scanned attachment. Finally, we chose to examine caregiver SDQs rather than self-reported SDQs because all presenting CAMHS referrals are eligible for a caregiver-completed SDQ; self-rated SDQs require the child to have a reading age of at least 11, which may exclude a considerable proportion of referrals.
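The two completion windows described above reduce to simple date comparisons. The following is a purely illustrative sketch (the function names and example dates are invented for exposition; this is not the study's actual CRIS extraction code):

```python
from datetime import date, timedelta

def is_baseline_sdq(sdq_date: date, first_appointment: date) -> bool:
    """Baseline SDQ: completed before, or within 30 days after,
    the first face-to-face appointment."""
    return sdq_date <= first_appointment + timedelta(days=30)

def is_followup_sdq(sdq_date: date, baseline_date: date) -> bool:
    """Follow-up SDQ: completed within 180 days of the baseline SDQ."""
    return baseline_date < sdq_date <= baseline_date + timedelta(days=180)

# An SDQ returned 20 days after the first appointment still counts as baseline
print(is_baseline_sdq(date(2010, 1, 25), date(2010, 1, 5)))   # True
# A second SDQ 127 days after baseline counts as follow-up
print(is_followup_sdq(date(2010, 6, 1), date(2010, 1, 25)))   # True
```

Note that, as the text states, this window logic can only be applied where an electronic SDQ record with a completion date exists; scanned attachments fall outside it.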
Predictor variables: sociodemographic and clinical characteristics
Ethnicity categories within the EHR, originating from the 17 NHS data dictionary categories recorded in CRIS, were collapsed into five broad ethnic groups (according to categories defined by the UK Office for National Statistics). These were defined as follows: White (White British, Irish and Other White background), Black (African, Caribbean and Other Black), Asian (Indian, Pakistani, Bangladeshi, Chinese and Other Asian), Mixed and Other (White and Black Caribbean, White and Black African, White and Asian, Other Mixed and any other ethnic group), and Not stated (ethnicity not provided).
Other sociodemographic variables extracted at study entry included the child's age at the first CAMHS referral accepted within the data collection period, gender, and neighbourhood deprivation for the main caregiver's residence, categorised into local area deprivation quintiles (McLennan et al., 2011) computed from the social deprivation index associated with the truncated postcode provided for the child's primary caregiver.
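The quintile assignment can be illustrated as follows; the postcode prefixes and deprivation scores below are entirely invented, and the real categorisation used the published index described by McLennan et al. (2011) rather than this sketch:

```python
import statistics

# Hypothetical area-level deprivation scores (higher = more deprived)
scores = {"SE5": 34.2, "SW9": 41.7, "SE15": 28.9, "CR0": 18.4,
          "SE21": 9.6, "SE13": 25.1, "SW2": 38.0, "SE22": 14.3,
          "CR7": 30.5, "SE26": 21.0}

# Four interior cut points split the score distribution into five groups
cuts = statistics.quantiles(scores.values(), n=5)

def deprivation_quintile(score: float) -> int:
    """1 = least deprived ... 5 = most deprived."""
    return 1 + sum(score > c for c in cuts)

print(deprivation_quintile(9.6))   # 1 (least deprived)
print(deprivation_quintile(41.7))  # 5 (most deprived)
```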
A number of clinical variables were extracted from administrative data fields within CRIS, including initial treatment setting (i.e. inpatient or outpatient) and the number of completed CAMHS treatment episodes attributed to a patient prior to 2008. Other clinical data prior to 2008 were not available in CRIS.
Secondary analysis of this database source has been approved by the Oxfordshire Research Ethics Committee C (reference 08/H0606/71+5).
Strategy for analysis
All analyses were conducted using the statistical software package STATA version 13 (StataCorp, 2013). Analyses examined factors associated with baseline and follow-up SDQ completion. To compare the demographic and clinical characteristics of individuals by SDQ outcome, crude analyses were conducted using chi-squared tests for categorical variables and Student's independent t-tests for continuous variables. Crude associations between sociodemographic variables and SDQ completion were examined using logistic regression. Multivariable logistic regression was used to examine the associations between predictors and SDQ outcomes, after adjustment for the potentially confounding effects of the sociodemographic and clinical predictor variables extracted from CRIS. Factors associated with follow-up SDQ completion were examined only in patients who had a recorded baseline SDQ.
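For a single binary predictor, the crude odds ratio from a logistic regression coincides with the cross-product ratio of a 2×2 table, which is the form in which the unadjusted estimates in Tables 2 and 3 can be read. A minimal sketch with invented counts and a Wald 95% confidence interval (illustrative only; the study's analyses were run in STATA, not with this code):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude OR and Wald 95% CI from a 2x2 table:
    a = exposed & outcome,     b = exposed & no outcome,
    c = unexposed & outcome,   d = unexposed & no outcome."""
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) from cell counts
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical: 60/100 completers among exposed, 40/100 among unexposed
or_, lo, hi = odds_ratio_ci(60, 40, 40, 60)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # OR = 2.25 (95% CI 1.28-3.96)
```

The adjusted odds ratios (aORs) reported in the tables come instead from multivariable logistic regression, which conditions each estimate on the other predictors rather than on a single cross-tabulation.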
The final sample comprised 28,382 children and young people (55.3% male) accepted by SLaM CAMHS between 1st January 2008 and 1st October 2017, with a mean age at CAMHS referral of 12.0 years (SD = 3.8), accessing treatment from a range of services within SLaM.
The proportion of young people identified as having SDQ information recorded in their electronic mental health records is provided in Table 1. Approximately forty per cent (39.5%) of the total sample had a caregiver-reported SDQ present in their medical records during the CAMHS initial assessment period; only 3.3% of the total sample had a completed follow-up SDQ. Of those with a completed baseline SDQ, only a small proportion (8.3%) had a follow-up SDQ. Young people with a completed baseline SDQ were significantly younger (mean age = 10.9 years, SD = 3.6) than those without (mean = 12.7 years, SD = 3.8), t(28,380) = 41.2, p < .01.
| | Total sample (n = 28,382) | Baseline SDQ (n = 11,212) | Follow-up SDQ (n = 928) | Proportion of baseline with paired BL–FU |
|---|---|---|---|---|
| Number of recorded SDQs | – | 11,212 (100%) | 928 (100%) | 8.3% |
| Male | 15,702 (55.3%) | 6,453 (57.6%) | 510 (55.0%) | 7.9% |
| White | 11,512 (40.6%) | 5,110 (45.6%) | 453 (48.8%) | 8.9% |
| Black | 7,381 (26.0%) | 2,981 (26.6%) | 239 (25.7%) | 8.0% |
| Asian | 1,052 (3.7%) | 386 (3.4%) | 20 (2.2%) | 5.2% |
| Mixed and Other | 3,589 (12.6%) | 1,599 (14.3%) | 139 (15.0%) | 8.7% |
| Not stated | 4,848 (17.1%) | 1,136 (10.1%) | 77 (8.3%) | 6.8% |
| Child's age at CAMHS assessment, mean (SD) | 12.0 (3.8) | 10.9 (3.6) | 10.6 (3.5) | – |
| Number of referrals prior to 2008, mean (SD) | 1.1 (0.4) | 1.1 (0.3) | 1.1 (0.3) | – |
| Level of deprivation (quintiles) | | | | |
| 1st (least deprived) | 5,586 (19.9%) | 2,260 (20.3%) | 212 (23.1%) | 9.4% |
| 2nd | 5,606 (20.0%) | 2,159 (19.4%) | 187 (20.4%) | 8.7% |
| 3rd | 5,586 (19.9%) | 2,226 (20.0%) | 175 (19.0%) | 7.9% |
| 4th | 5,599 (20.0%) | 2,267 (20.4%) | 184 (20.0%) | 8.1% |
| 5th (most deprived) | 5,680 (20.2%) | 2,197 (19.8%) | 161 (17.5%) | 7.3% |
The logistic regression analyses for baseline SDQ completion are presented in Table 2. After adjusting for other potentially confounding patient characteristics and sociodemographic variables, male children were more likely to have a caregiver-completed SDQ (adjusted odds ratio [aOR] 1.07; 95% CI, 1.01–1.13; p = .01).
| | OR (95% CI) | p value | aOR^a (95% CI) | p value |
|---|---|---|---|---|
| Child's gender (male vs. female) | 0.86 (0.82–0.90) | <.001 | 1.07 (1.01–1.13) | .01 |
| Ethnicity (ref. White) | | | | |
| Black | 0.85 (0.80–0.90) | <.001 | 0.79 (0.74–0.84) | <.01 |
| Asian | 0.73 (0.64–0.83) | <.001 | 0.75 (0.66–0.86) | <.01 |
| Mixed and Other | 1.01 (0.93–1.09) | .86 | 0.96 (0.89–1.04) | .28 |
| Not stated | 0.38 (0.35–0.41) | <.001 | 0.34 (0.31–0.37) | <.01 |
| Child's age at CAMHS assessment | 0.88 (0.87–0.88) | <.001 | 0.87 (0.87–0.88) | <.01 |
| Number of referrals prior to 2008 | 0.63 (0.58–0.67) | <.001 | 0.64 (0.59–0.69) | <.01 |
| Level of deprivation (quintiles) | | | | |
| 1st (least deprived) | Reference | | Reference | |
| 2nd | 0.92 (0.86–0.99) | .04 | 0.91 (0.84–0.98) | .02 |
| 3rd | 0.98 (0.90–1.05) | .51 | 0.94 (0.87–1.02) | .11 |
| 4th | 1.00 (0.93–1.08) | .97 | 0.95 (0.87–1.02) | .16 |
| 5th (most deprived) | 0.93 (0.86–1.00) | .05 | 0.87 (0.80–0.94) | <.01 |
- aOR, adjusted odds ratio; CI, confidence interval; OR, odds ratio.
- ^a Adjusted for gender, ethnicity, age at CAMHS assessment, number of referrals prior to 2008 and level of neighbourhood deprivation.
In relation to the child's ethnicity, compared to the White ethnic group, the Black, Asian and Not stated groups were all associated with a reduced likelihood of baseline SDQ completion. A greater number of referrals prior to 2008 and older age at first referral within the observation period were also associated with reduced baseline SDQ completion. In terms of neighbourhood deprivation, relative to the most affluent areas, children resident in the most deprived areas were significantly less likely to have a baseline SDQ present (aOR 0.87 [0.80–0.94], p < .01).
Table 3 presents the unadjusted and adjusted associations between sociodemographic variables and follow-up SDQ outcomes for those cases with a completed baseline SDQ (n = 11,212). In the adjusted analyses, being female, being older at the time of CAMHS assessment, having a greater number of CAMHS episodes prior to 2008 and belonging to a non-White ethnic category were associated with a lower likelihood of follow-up SDQ completion. Relative to the least deprived areas, children from the most deprived areas of residence were less likely to have a recorded follow-up SDQ.
| | OR (95% CI) | p value | aOR^a (95% CI) | p value |
|---|---|---|---|---|
| Child's gender (male vs. female) | 1.12 (0.98–1.29) | .09 | 1.15 (0.99–1.32) | .06 |
| Ethnicity (ref. White) | | | | |
| Black | 0.90 (0.76–1.06) | .19 | 0.91 (0.77–1.07) | .25 |
| Asian | 0.56 (0.36–0.89) | .01 | 0.56 (0.36–0.90) | .02 |
| Mixed and Other | 0.98 (0.80–1.19) | .83 | 0.98 (0.81–1.20) | .87 |
| Not stated | 0.75 (0.58–0.96) | .02 | 0.72 (0.56–0.93) | .01 |
| Child's age at CAMHS assessment | 0.98 (0.96–1.00) | .05 | 0.98 (0.96–1.00) | .01 |
| Number of referrals prior to 2008 | 0.86 (0.67–1.10) | .24 | 0.86 (0.67–1.11) | .24 |
| Level of deprivation (quintiles) | | | | |
| 1st (least deprived) | Reference | | Reference | |
| 2nd | 0.92 (0.75–1.13) | .41 | 0.92 (0.75–1.13) | .42 |
| 3rd | 0.82 (0.67–1.02) | .07 | 0.83 (0.67–1.02) | .08 |
| 4th | 0.85 (0.69–1.05) | .13 | 0.85 (0.69–1.05) | .12 |
| 5th (most deprived) | 0.76 (0.62–0.95) | .01 | 0.76 (0.61–0.94) | .01 |
- aOR, adjusted odds ratio; CI, confidence interval; OR, odds ratio.
- ^a Adjusted for gender, ethnicity, age at CAMHS assessment, number of referrals prior to 2008 and level of neighbourhood deprivation.
Using electronic case register data for a very large community CAMHS cohort, we aimed to investigate current levels of PROM collection and whether sociodemographic or clinical factors are associated with differential rates of PROM completion. Our findings demonstrate that caregiver-reported SDQs were present in the health records of approximately 40% of young people accepted for CAMHS assessment, and of just 3% at 6-month follow-up. These figures are concerning, as they suggest that only a small proportion of child and adolescent mental health providers are able to adhere to repeated PROM collection in keeping with national guidance (Department of Health, 2004). Our results also provide the first indication that there may be important inequalities in mental health outcome reporting in CAMHS, particularly for non-White families and families from less affluent neighbourhoods.
Our findings reflect collection patterns for just one of several outcome measures routinely used in CAMHS, and we only examined data extracted for the caregiver-reported version of this measure, despite self-reported and teacher versions existing. Yet these figures are worrisome given that the caregiver-reported SDQ is a policy-recommended measure designed to detect symptoms of a range of mental health difficulties, which may not be captured by more disorder-focused questionnaires. Results from previous healthcare audits suggest that inadequate PROM collection is likely to be a UK-wide issue rather than specific to SLaM. For example, inspection of health records from two other NHS Trusts demonstrated repeated use of any SDQ reporting type for only 17% of their sample (Hall et al., 2013). Even within Children and Young People's Improving Access to Psychological Therapies (CYP IAPT), a service transformation model set up specifically to deliver standardised outcome-focused treatment, only 42% of young people had paired outcome measures recorded nationally (Edbrooke-Childs et al., 2015). Encouragingly, early intervention services such as the Scallywags project in Cornwall, UK, which provides eight hours a week of community support for families over six months, have achieved face-to-face time-two outcome measure collection rates of over 90%, demonstrating a potential cultural shift in PROM use (Broadhead et al., 2009). Still, strategies are clearly needed to enhance the use of clinically important, nationally recommended PROMs.
Our findings show that clinicians collect PROMs less frequently with BAME families. After controlling for potential confounders, all non-White ethnic categories were associated with lower baseline SDQ completion. However, this difference is relative to SDQ completion rates in our reference group: rather than an absolute decrease across ethnic groups, we observed a disproportionately higher rate of SDQ recording for White patients. Moreover, ethnic differences appeared less prominent in relation to follow-up SDQ completion, with the exception of children from an Asian ethnic background, who showed nearly a twofold reduction in follow-up SDQ use compared to children from White, Black, and Mixed and Other ethnic backgrounds. The fact that inequalities were observed in our adjusted model demonstrates that they cannot be attributed purely to differences in socioeconomic status or the other potentially confounding patient characteristics accounted for during analysis, and highlights that other factors may explain the relationship between ethnicity and outcome monitoring in UK CAMHS.
We surmise that poorer PROM uptake among BAME patients may be due to less contact with clinicians (i.e. potentially more drop-outs from services) and that BAME families may be less predisposed to return paper-based questionnaires. Our findings and our clinical experience suggest that current outcome collection methods need to be better tailored to engage BAME groups. We do not believe our findings demonstrate deliberate, non-equitable practices in PROM administration. Previous research has highlighted health professionals' concerns that, when used in clinical practice, routine outcome measures could pose a language barrier for non-native English-speaking families (Jette et al., 2009) and raise cultural sensitivity issues (Copeland, Taylor & Dean, 2008; Jette et al., 2009), which in turn could have adverse implications for patient engagement or questionnaire validity (CORC, 2014). Moreover, CORC outlines potential barriers to PROM uptake with BAME families: for example, despite the SDQ being available in 86 different languages, clinicians may still need to allow extra time in sessions to complete a non-English SDQ, or have to book and prepare an interpreter for PROM collection (CORC, 2014), which could deter their use.
Another possible explanation for these findings relates to the severity or type of mental health problems experienced by children from BAME backgrounds. Findings from a systematic review of mental health inequalities revealed that, relative to White British patients, children from Black and Black Caribbean backgrounds were more likely to be treated for less common but serious conditions such as psychosis, as were young females of South Asian ethnicity for disordered eating and deliberate self-harm (Stansfeld et al., 2004). In these cases, clinicians may use structured outcome measures less often due to concerns that they will discourage therapeutic engagement (Edbrooke-Childs et al., 2017; Wolpert et al., 2016), or, depending on the patient's diagnosis, may find greater therapeutic value in disorder-centric outcome measures than in the SDQ, which provides a broad assessment of emotional and behavioural issues. Moreover, the additional administrative burden required to complete these questionnaires might surpass their potential clinical benefit (Martin et al., 2011). However, as we derived our sample only from outpatient and community services, which traditionally provide care for children and young people with more common mental health conditions, it is unlikely that this was a significant factor contributing to our study findings.
In relation to socioeconomic deprivation, we observed that, at both time points, living in a more deprived neighbourhood was significantly associated with lower SDQ completion. Whilst these are the first findings to identify ethnic and socioeconomic disparities in CAMHS outcome measure collection, they align with previous findings from adult survey and PROM collection research demonstrating that both these populations are less likely to return postal questionnaires (Hutchings, Neuburger, Gross et al., 2012). This could suggest that individuals from these backgrounds are less likely to engage with PROMs, rather than being at greater risk of not receiving them in the first instance.
We speculate that caregivers from low socioeconomic backgrounds face several barriers that may reduce their motivation or ability to complete outcome measures compared to wealthier families. Because CAMHS typically operate during business hours, appointment times are often hard to attend for caregivers working in low-paid, inflexible shift roles, which may limit their opportunities to provide questionnaire data (Levy & O'Hara, 2010). Moreover, the demands and daily stressors exerted on families living in poverty may prevent them from prioritising mental health needs, especially making time to engage with paper-based reporting (Santiago, Kaltman & Miranda, 2013). Caregivers from more socially deprived backgrounds are also less likely to perceive their child as having mental health difficulties relative to caregivers from a higher socioeconomic background, even when reported SDQ scores are identical (Huang, Hiscock & Dalziel, 2019), presumably lowering their motivation to report on their child's mental state using questionnaires.
Additionally, we found that being male and presenting at an older age were initially associated with lower baseline caregiver SDQ completion. The gender association changed direction after adjusting for other sociodemographic factors, which could reflect age–gender differences in the incidence of mental health disorders affecting caregiver SDQ collection. For example, early-onset conditions including behavioural difficulties, autism spectrum disorder and attention-deficit hyperactivity disorder (ADHD) are more frequently observed in younger males, whereas adolescent-onset emotional conditions including anxiety disorders, mood disorders and eating disorders are more female-predominant (Zahn-Waxler, Shirtcliff & Marceau, 2008). We suspect that clinicians find parent-reported PROMs on internalising symptoms harder to collect in older samples, as adolescents are more likely to attend unaccompanied by their parents. Furthermore, this finding may be accounted for by low parental perceptions of the relevance of PROMs for reporting emotional symptoms, particularly for the SDQ, which includes fewer questions relating to emotional than externalising symptoms. Nonetheless, national guidance specifies the need for CAMHS to evaluate patient-reported outcomes from the viewpoints of both the young person accessing treatment and their caregivers (Department of Health, 2004).
Strengths and limitations
This research has several strengths. SLaM is the primary specialist mental health provider for the South London region; thus, our clinical sample included patients accessing the entirety of specialist and regional CAMHS outpatient services provisioned by SLaM, affording a complete representation of children and young people with mental health conditions living across a highly socioeconomically diverse urban area. Furthermore, given the inclusivity of our dataset, the study findings are unlikely to have been affected by selection bias resulting in underrepresentation of BAME groups, a problem commonly experienced in European mental health research, where ethnic minorities are less frequently recruited to prospective cohort studies or clinical trials (Brown et al., 2014). Moreover, we were able to extract data from the electronic health records of close to 29,000 CAMHS users, which afforded the statistical power required to account for a variety of possible confounders and to detect small yet important differences. Finally, our research was conducted independently of clinical practice, so the study itself could not have influenced clinicians' behaviour.
Several limitations should be kept in mind when interpreting the findings of the current research. The study was conducted on a large, densely populated and ethnically diverse catchment area within South London (Downs et al., 2017), so findings may not be representative of CAMHS across the UK. We opted to examine caregiver SDQ collection rates because of the SDQ's strong endorsement as a PROM by national programmes such as CYP IAPT (Wolpert et al., 2015) for use across all ages eligible for CAMHS provision. However, other Trusts may have different data collection protocols, which could influence individual service providers' decisions to utilise specific outcome measures. Next, given the naturalistic nature of routinely collected cohort data, it is possible that residual confounding affected our findings. Depending on the case, clinicians find it useful to collect PROMs from families at different time points, for example, by post before their first appointment or on site prior to or during their first session (Law & Wolpert, 2014). Research suggests that postal surveys generally have lower response levels than face-to-face data collection (Mathers et al., 2007). Accordingly, patients who receive the SDQ through the post may be less likely to respond than those whose SDQ is collected within a session, accompanied by a clinician.
Similarly, SDQ data which are scanned and added to EHRs as attachments are not recorded as present using the current registry-based research methodology, which may have underestimated questionnaire completion rates. However, we do not believe this would have affected the study's main findings regarding sociodemographic differences, as it is unlikely that staff scanning questionnaires would be biased towards one ethnic group or another. Moreover, scanned questionnaires are not clinically useful, as they have not been scored and therefore add no clinical value. Families' motivation to return SDQ forms may also be affected by the length of time elapsed between making their referral and receiving an SDQ, as well as by caregiver characteristics not routinely captured in medical records, including age, gender and level of education.
Furthermore, previous research investigating mental health prevalence within ethnic categories identified substantial disorder heterogeneity (Stansfeld et al., 2004); as such, our decision to collapse the more descriptive census categories into broader groupings may have hindered the discovery of important within-category inequalities. This information could help fulfil legal obligations outlined in the Department of Health's Race Equality Scheme (2005–2008), which requires the NHS to be more 'responsive' to the needs of all patients, which may vary by ethnicity, and to eradicate existing inequalities in service access and health outcomes between ethnic minority groups (King's Fund, 2006).
Patients and their families only have an opportunity to complete SDQs once initiated by a clinician, and speciality services, for example ADHD and neurodevelopmental teams, are likely to have different priorities on when to initiate and encourage parental SDQ completion and other outcome measure use. As the current study did not examine PROM collection rates according to service speciality, we may have overlooked important within-Trust variation in SDQ collection. Similarly, patients' conditions and families' engagement with services may differentially affect parental SDQ completion rates. Therefore, future research is needed to understand the impact of these potentially explanatory factors, specifically whether the effects of ethnicity and neighbourhood on outcome collection rates are confounded by service speciality and patient diagnosis.
Given the considerable time frame captured in this cohort study, it would also be interesting for future research to examine whether under-reporting of PROMs by any of the patient characteristics identified in the current research fluctuated, that is, worsened or improved over the ten-year study period, or remained consistent throughout. Correspondingly, research exploring trends in PROM collection over time for the entire cohort would be beneficial. This would allow us to examine the potential influence of national PROM improvement strategies, for example the introduction of CYP IAPT in 2011, on levels of CAMHS outcome measure reporting.
Finally, whilst we have identified sociodemographic discrepancies between children with and without completed SDQs, we lack the information needed to understand the reasons for these associations. For example, assessing rates of initial appointment non-attendance could shed light on the proportion of children who are unlikely to have received, and therefore completed, baseline mental health outcome measures. Moreover, qualitative data on families' and service providers' views of these findings could provide a set of potential causal explanations for future research (Edbrooke-Childs, Newman, Fleming, Deighton, & Wolpert, 2016b).
Routine failure to record change from a service user perspective is problematic. This information is valuable and has been found to improve patient–practitioner communication, treatment response monitoring and the identification of undiagnosed difficulties (Chen et al., 2013; Marshall et al., 2006; Wolpert et al., 2016). Moreover, evidence suggests that combined use of service user- and clinician-reported PROMs is required to accurately assess the effectiveness of delivered interventions (Hall et al., 2013). We found that, at least for our local CAMHS population, rates of paired PROM collection were very low. Unless these rates improve, PROMs are unlikely to provide useful information to improve clinical practice, service development and policy, particularly for BAME families and those from poorer neighbourhoods.
Ultimately, we believe that future research is needed to develop new digital strategies and to explore, adapt and test existing electronic data collection platforms, for example the Patient-Reported Outcomes Measurement Information System (PROMIS), in NHS settings (Bennion et al., 2017; Porter et al., 2016; Porter et al., 2011) to assist with equitable collection of routine outcome measures for all CAMHS users, including culturally and socioeconomically diverse populations.
Clinical Records Interactive Search (CRIS) is supported by the National Institute for Health Research (NIHR) Biomedical Research Centre for Mental Health (BRC) Nucleus at the SLaM NHS Foundation Trust and Institute of Psychiatry, Psychology and Neuroscience (IoPPN), King’s College London (KCL), jointly funded by the Guy’s and St Thomas’ (GSST) Trustees and the South London and Maudsley Trustees. A.C.M. is supported by the GSST Charity. J.D. is supported by an NIHR Clinician Scientist Fellowship award (CS-2018-18-ST2-014) and has received support from a Medical Research Council (MRC) Clinical Research Training Fellowship (MR/L017105/1) and a Psychiatry Research Trust Peggy Pollak Research Fellowship in Developmental Psychiatry. E.S. currently receives support from the NIHR Biomedical Research Centre (BRC) at SLaM NHS Foundation Trust (IS-BRC-1215-20018), the NIHR through a programme grant (RP-PG-1211-20016) and Senior Investigator Awards (NF-SI-0514-10073 and NF-SI-0617-10120), the European Union Innovative Medicines Initiative (EU-IMI 115300), Autistica (7237), the MRC (MR/R000832/1, MR/P019293/1), the Economic and Social Research Council (ESRC 003041/1), the GSST Charity (GSTT EF1150502) and the Maudsley Charity. All authors are affiliated with the NIHR Specialist BRC for Mental Health at the SLaM NHS Foundation Trust and the IoPPN, KCL. R.D.H. has received research funding from Roche, Pfizer, Lundbeck and Janssen. The remaining authors have declared that they have no competing or potential conflicts of interest.
The Oxfordshire Research Ethics Committee C (08/H0606/71 + 5) approved the CRIS system as an anonymised data resource for secondary analysis, with governance provided for related projects by a patient-led oversight committee.
- (2013). Implementing routine outcome measures in child and adolescent mental health services: from present to future practice. Child and Adolescent Mental Health, 18, 82–87.
- (2017). E-therapies in England for stress, anxiety or depression: what is being used in the NHS? A survey of mental health services. British Medical Journal Open, 7, e014844.
- (2008). A measurement feedback system (MFS) is necessary to improve mental health outcomes. Journal of the American Academy of Child and Adolescent Psychiatry, 47, 1114–1119.
- (2016). Implementing a measurement feedback system: a tale of two sites. Administration and Policy in Mental Health and Mental Health Services Research, 43, 410–425.
- (2011). Effects of routine feedback to clinicians on mental health outcomes of youths: results of a randomized trial. Psychiatric Services, 62, 1423–1429.
- (2013). Patient reported outcome measures could help transform healthcare. British Medical Journal, 346, f167.
- (2009). Understanding parental stress within the Scallywags service for children with emotional and behavioural difficulties. Emotional and Behavioural Difficulties, 14, 101–115.
- (2014). Barriers to recruiting ethnic minorities to mental health research: a systematic review. International Journal of Methods in Psychiatric Research, 23, 36–48.
- (2013). A systematic review of the impact of routine collection of patient reported outcome measures on patients, providers and health organisations in an oncologic setting. BMC Health Services Research, 13, 211.
- (2000). Assessment of symptoms of DSM-IV anxiety and depression in children: a Revised Child Anxiety and Depression Scale. Behaviour Research and Therapy, 38, 835–855.
- (2008). Psychological treatment outcomes in routine NHS services: a commentary on Stiles et al. (2007). Psychological Medicine, 38, 629–634.
- (2008). Factors influencing the use of outcome measures for patients with low back pain: a survey of New Zealand physical therapists. Physical Therapy, 88, 1492–1505.
- Department of Health (2004). National service framework for children, young people and maternity services: Core standards. Available from: https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/199952/National_Service_Framework_for_Children_Young_People_and_Maternity_Services_-_Core_Standards.pdf [last accessed 04 June 2020].
- Department of Health (2010). The NHS Outcome Framework 2011/12. Available from: https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/213789/dh_123138.pdf [last accessed 04 June 2020].
- (2019). An approach to linking education, social care and electronic health records for children and young people in South London: a linkage study of child and adolescent mental health service data. British Medical Journal Open, 9, e024355.
- (2017). Linking health and education data to plan and evaluate services for children. Archives of Disease in Childhood, 102, 599–602.
- (2010). A psychometric analysis of the Revised Child Anxiety and Depression Scale – parent version in a clinical sample. Journal of Abnormal Child Psychology, 38, 249–260.
- (2017). Patient reported outcome measures in child and adolescent mental health services: associations between clinician demographic characteristics, attitudes, and efficacy. Child and Adolescent Mental Health, 22, 36–41.
- (2015). Children and young people’s improving access to psychological therapies: rapid internal audit national report. Available from: https://www.ucl.ac.uk/evidence-based-practice-unit/sites/evidence-based-practice-unit/files/pub_and_resources_project_reports_rapid_internal_audit_2015.pdf [last accessed 04 June 2020].
- (2016a). When is sessional monitoring more likely in child and adolescent mental health services? Administration and Policy in Mental Health and Mental Health Services Research, 43, 316–324.
- Edbrooke-Childs, Newman, Fleming, Deighton & Wolpert (2016b). The association between ethnicity and care pathway for children with emotional problems in routinely collected child and adolescent mental health services data. European Child & Adolescent Psychiatry, 25, 539–546.
- (2016). Learning from a learning collaboration: the CORC approach to combining research, evaluation and practice in child mental health. Administration and Policy in Mental Health and Mental Health Services Research, 43, 297–301.
- (2009). The key findings report for the 2008 inpatient survey. Oxford, UK: Picker Institute Europe. Available from: http://www.nhssurveys.org/Filestore/documents/Key_Findings_report_for_the_2008__Inpatient_Survey.pdf [last accessed 04 June 2020].
- (2013). The use of routine outcome measures in two child and adolescent mental health services: a completed audit cycle. BMC Psychiatry, 13, 270.
- (2019). Parents’ perception of children’s mental health: seeing the signs but not the problems. Archives of Disease in Childhood, 104, 1102–1104.
- (2012). Factors associated with non-response in routine use of patient reported outcome measures after elective surgery in England. Health and Quality of Life Outcomes, 10, 34.
- (2009). Use of standardised outcome measures in physiotherapy practice: perceptions and applications. Physical Therapy, 89, 125–135.
- (2005). Routine outcome measurement: a survey of UK child and adolescent mental health services. Child and Adolescent Mental Health, 10, 133–139.
- King's Fund. (2006). Briefing: access to health care and minority ethnic groups. Available from: https://www.kingsfund.org.uk/sites/default/files/field/field_publication_file/access-to-health-care-minority-ethnic-groups-briefing-kings-fund-february-2006.pdf [last accessed 04 June 2020].
- (2005). Providing feedback to psychotherapists on their patients’ progress: clinical results and practice suggestions. Journal of Clinical Psychology, 61, 165–174.
- Law & Wolpert (2014). Guide to using outcomes and feedback tools with children, young people and families. Available from: https://www.corc.uk.net/media/2112/201404guide_to_using_outcomes_measures_and_feedback_tools-updated.pdf [last accessed 04 June 2020].
- (2010). Psychotherapeutic interventions for depressed, low-income women: a review of the literature. Clinical Psychology Review, 30, 934–950.
- (2006). Impact of patient-reported outcome measures on routine practice: a structured review. Journal of Evaluation in Clinical Practice, 12, 559–568.
- (2011). Practitioners’ attitudes towards the use of standardized diagnostic assessment in routine practice: a qualitative study in two child and adolescent mental health services. Clinical Child Psychology and Psychiatry, 16, 407–420.
- (2007). Surveys and questionnaires. The NIHR RDS for the East Midlands / Yorkshire & the Humber.
- (2011). The English indices of deprivation. London: Department for Communities and Local Government. Available from: https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/6320/1870718.pdf [last accessed 04 June 2020].
- (2008). Who stays in treatment? Child and family predictors of youth client retention in a public mental health agency. Child and Youth Care Forum, 37, 153–170.
- (2012). What do parents and carers think about routine outcome measures and their use? A focus group study of CAMHS attenders. Clinical Child Psychology and Psychiatry, 17, 65–79.
- (2016). Cohort profile of the South London and Maudsley NHS Foundation Trust Biomedical Research Centre (SLaM BRC) Case Register: current status and recent enhancement of an electronic mental health record-derived data resource. British Medical Journal Open, 6, e008721.
- (2016). Framework and guidance for implementing patient-reported outcomes in clinical practice: evidence, challenges and opportunities. Journal of Comparative Effectiveness Research, 5, 507–519.
- (2011). Health literacy and task environment influence parents’ burden for data entry on child-specific health information: randomized controlled trial. Journal of Medical Internet Research, 13, e13.
- (2013). Poverty and mental health: how do low-income adults and children fare in psychotherapy? Journal of Clinical Psychology, 69, 115–126.
- (2010). Ethnic minority populations and child psychiatry services: an Irish study. Children and Youth Services Review, 32, 1242–1245.
- (2004). Ethnicity, social deprivation and psychological distress in adolescents: school-based epidemiological study in east London. British Journal of Psychiatry, 185, 233–238.
- StataCorp. (2013). Stata Statistical Software (Release 13). College Station, TX: StataCorp LP. https://www.stata.com/manuals13/u.pdf [last accessed 04 June 2020].
- (2009). The South London and Maudsley NHS Foundation Trust Biomedical Research Centre (SLaM BRC) case register: development and descriptive data. BMC Psychiatry, 9, 51.
- (2011). Therapists’ and clients’ perceptions of routine outcome measurement in the NHS: a qualitative study. Counselling and Psychotherapy Research, 12, 71–80.
- (2014). Uses and abuses of patient reported outcome measures (PROMs): potential iatrogenic impact of PROMs implementation and how it can be mitigated. Administration and Policy in Mental Health and Mental Health Services Research, 41, 141–145.
- (2015). Measurement issues: review of four patient reported outcome measures – SDQ, RCADS, C/ORS and GBO – their strengths and limitations for clinical use and service evaluation. Child and Adolescent Mental Health, 20, 63–70.
- (2016). A qualitative exploration of patient and clinician views on patient reported outcome measures in child mental health and diabetes services. Administration and Policy in Mental Health and Mental Health Services Research, 43, 309–315.
- (2012). Clinical Outcomes in Measurement-based Treatment (COMET): a trial of depression monitoring and feedback to primary care physicians. Depression and Anxiety, 29, 865–873.
- Zahn-Waxler, Shirtcliff & Marceau (2008). Disorders of childhood and adolescence: gender and psychopathology. Annual Review of Clinical Psychology, 4, 275–303.