Pediatric chronic health conditions are defined as illnesses that have lasted, or are expected to last, at least six months; have a pattern of recurrence or deterioration; have a poor prognosis; and produce consequences or sequelae that impact on the individual’s quality of life (Australian Institute of Health and Welfare [AIHW], 2005; Martinez & Ercikan, 2009). Common childhood chronic health conditions include cancer, cystic fibrosis, Crohn’s disease, diabetes, epilepsy, and asthma. Approximately 20%–30% of children and young people are reported to have a chronic health condition, depending on the definition used. A relatively small percentage (1.6%) have a chronic health condition that is severe enough to impact on their functional ability to regularly attend school (Shaw & McCabe, 2008). In Australia this amounts to around 67,000 students.
For this reason, most pediatric hospitals in western developed countries have hospital-based schools or education support services. For example, the Children’s Hospital School at Great Ormond Street Hospital, London, aims to minimize the interruption and disruption to children and young people’s education so that academic progress and an interest in learning will continue, as far as medical circumstances permit. Similarly, at the Hasbro Children’s Hospital, Rhode Island, in the United States, the hospital school has the following aims: to make the transition from hospital back to school as smooth as possible for the patient and classmates, to maintain academic involvement with the patient’s home school and teachers, and to utilize hospitalization as a unique and positive learning experience. And in Melbourne, Australia, teachers at the Education Institute at the Royal Children’s Hospital work with more than 2,000 school-aged students each year to keep them engaged with their education and connected to their regular school and classmates.
The rationale for these education support programs is that children and young people with chronic health conditions are at increased risk of disengagement from school, education, and learning, and of worse academic, social, emotional, and quality-of-life outcomes in both the short and the longer term (Martinez & Ercikan, 2009; Maslow, Haydon, McRee, & Halpern, 2012; Nasuuna, Santoro, Kremer, & Silva, 2016). Furthermore, access to a quality education for all children, including those managing a chronic health condition, is a right that is enshrined in both national and international laws (UNICEF, 2006).
While the common goal of education support programs is to prevent students with a chronic health condition from disengaging from school, education, and learning and to maintain continuity in their human development processes (Seymour, 2004), many different types of services or interventions exist for this group of students, shaped by their setting and context, and described differently (Dempsey, 2019). Nonetheless, there appear to be some common elements or themes to these programs (Capurso & Dennis, 2017). The description of the education support program at the Queensland Children’s Hospital, Australia, funded by the State Department of Education, reflects many of the key common themes of education support programs, as described by Capurso and Dennis (2017). They are:
However, there is a need to generate more robust evidence about the effectiveness of these types of education support programs. A systematic review of educational support services for children and adolescents with chronic health conditions (Barnett, 2018) found just four controlled studies of a diverse range of education support programs aimed mostly at children with cancer, and inconclusive evidence about their effectiveness. It also revealed that the current state of the evidence about the effectiveness of these interventions is predominantly made up of qualitative research, case studies, and expert opinion, which are known to be more affected by bias than more rigorous controlled studies that include both an intervention group and a comparison or control group (Chalmers, 2005).
To fill this gap, this paper considers how a more rigorous controlled study of the effectiveness of education support services for students with chronic health conditions could be undertaken. Specifically, it outlines the protocol for a feasibility study as a first stage in evaluating the effectiveness of these programs. Feasibility studies are considered useful preparation for the conduct of a larger controlled evaluation of complex interventions, with the goal of reducing uncertainty and thereby increasing the chance of a successful larger study (Thabane et al., 2016). The proposed protocol is based upon and includes recommended and relevant sections of the Standard Protocol Items: Recommendations for Interventional Trials (SPIRIT) checklist (Chan, Tetzlaff, Altman, Dickersin, & Moher, 2013).
Specifically, the aims of the feasibility study are to:
Many researchers (e.g., Chalmers, 2005; Gray, 2001; Littell & Shlonsky, 2010; Shlonsky & Gibbs, 2004) consider the randomized controlled trial (RCT) the “gold standard” in effectiveness evaluations, as it is viewed as the best design for minimizing the influence of bias. But for many others in the pedagogical community, the debate on this point is still open (Biesta, 2007). In our circumstances, given that access to a quality education is a right of the child, along with the potential public sensitivity of randomly allocating hospitalized children and young people to either receive education support or not while in hospital, an RCT may be considered neither appropriate nor ethical.
Fortunately, other controlled study designs are available for consideration. Controlled studies (i.e., studies with both an intervention group and a well-matched non-intervention or control group) are considered best for evaluating the effectiveness of an intervention because they establish the counterfactual. That is, they investigate and compare both what happens when a group of people receives the intervention and what happens when a comparable group does not. This, in turn, allows stronger and more robust claims about the causality and effectiveness of a given intervention; that is, that any differences in results between the two groups can be attributed to the intervention (Chalmers, 2005; Shlonsky & Gibbs, 2004).
One such design is that of a controlled cohort study, where both the intervention group and the control group exist “naturally” in the community. Such is the case in Victoria, Australia, where the Department of Education and Training (DET) funds education support programs at some, but not all, pediatric hospitals and departments. The DET funds education programs at two pediatric hospitals (Royal Children’s Hospital and Monash Children’s Hospital), but not at four regional pediatric departments in hospitals in Warrnambool, Ballarat, Gippsland, and La Trobe. Children and young people with chronic illnesses admitted to the latter hospitals, where there is no funded education support program, therefore naturally form the control or comparison group.
The subjects for the current study will consist of children and young people under treatment for a chronic health condition at the Royal Children’s Hospital (the intervention group) and at the four regional hospitals (the control group).
An important first step in effectiveness evaluations is to develop an answerable research question in PICO format (Shlonsky & Gibbs, 2004). The general and overarching aims of education support programs have been discussed earlier. As such, the specific research question for our evaluation of the effectiveness of education support programs in Victoria, Australia, is as follows.
| Population (P) | Do hospitalized children with a chronic health condition, |
| Intervention (I) | who receive hospital-based education support, |
| Comparison (C) | as compared to those who receive no hospital-based education support, |
| Outcomes (O) | have higher levels of engagement in education and learning and quality of life? |
Participants will need to meet the following eligibility criteria:
The criterion of an expected LOS of 5+ days was informed by the work of Prof. Stephen Zubrick and colleagues, whose Australian research investigated the impact of school absence on academic performance. Their study (Hancock, Shepherd, Lawrence, & Zubrick, 2013) was based on students who were enrolled in the public school system in Western Australia from 2008 to 2012. The data included information collected by schools on students and their caregivers upon enrolment, attendance records, and the results of the National Assessment Program – Literacy and Numeracy (NAPLAN). The authors hypothesized a threshold effect of absence from school on academic performance, such that a small amount of absence from school might have minimal effect on academic performance, but beyond some threshold attendance level there would be a noticeable drop in measured academic performance. Their findings, however, generally support the notion that “every day counts.” That is, every day of absence from school is associated with progressively lower achievement in numeracy, writing, and reading.
They also found that, generally, an absence of four weeks per half year resulted in performance dropping to the national minimum standard. However, school absence had a much greater influence on the achievement of students in lower socio-economic index schools, where absences of three weeks per half year resulted in performance dropping to the national minimum standard.
In earlier research on categories of at-risk attendance, again in the Western Australian school jurisdiction and based on empirical findings from the 1993 Western Australian Child Health Survey (WACHS), Zubrick, Silburn, Gurrin, and Shepherd (1997) found that students who were, on average, absent for two weeks per half year would be considered to be at-risk for lower academic competence.
Furthermore, in 2014 the Royal Children’s Hospital Education Institute undertook an investigation into what happens when the young people/students they had worked with were discharged from hospital. Results showed that one month after discharge, one in three children were still at home and had not returned to school. Of the children who had returned to school during the month after discharge, two out of three spent a period of time at home prior to returning to school – the average length of time spent at home prior to returning to school was 13 days (ARACY, 2015).
For these reasons, in this feasibility study, the eligibility criterion of an expected in-hospital LOS of 5 or more days in a six-month period was chosen as a proxy measure of an expected total length of absence from school of potentially 14 (or more) days in a six-month period. Such a length of absence would put the child in an “at-risk” category of worse academic outcomes.
The education support program at the Royal Children’s Hospital (RCH) in Australia is funded by the Victorian DET and employs qualified teachers to work with hospitalized young people/students to help maintain engagement with the students’ regular schools and peers, and engagement in education and learning. The Education Institute teachers evaluate new patients to determine the appropriate level of educational support. Evaluations are based on need or educational risk, with eligible students being offered individual or group learning sessions, or a combination of both. Each student receives an Individual Learning Plan (ILP) prepared in collaboration with the student, families/carers, and the enrolled educational setting. The ILP is reviewed regularly and updated in line with the student’s educational progress. The hospital kindergarten program is led by qualified early childhood educators who, through engaging play-based learning, help prepare children for school. To be eligible for the program, children must be at least 4 years old before April 30 in the year of enrolment. The Education Institute offers an evidence-based program built on high-impact teaching strategies, with a focus on literacy and numeracy underpinned by the Victorian Curriculum.
Elements of the education support program are described in a program logic model developed in consultation with staff from the Centre for Program Evaluation at the University of Melbourne. It depicts a socio-ecological approach that recognizes the influence of a number of spheres within which the child functions on a daily basis (Bronfenbrenner, 1995). These include the spheres or levels of the individual, family, schools, community and service systems. (For more detail, see Figure 2: Logic Model of an Education Support Program for Children and Young People with a Chronic Health Condition; Barnett, 2018.) The key activities and variables of the intervention across the first three spheres are listed in Table 1. Finally, the education support model at the RCH also reflects the five common themes of pediatric hospital-based education support programs as described in the introduction.
| Sphere of influence/level | Key activities of education support |
| Individual young person | |
Ethics approval will be obtained for the study from a registered Australian Human Research Ethics Committee prior to commencement of the research.
Approval to undertake the feasibility study will be sought from the Victorian DET. Once such approval has been obtained, all five pediatric sites (one pediatric hospital as the intervention site and four regional hospitals with pediatric wards as the control sites) will be approached by the lead researcher and invited to participate in the feasibility study. Once consent has been obtained at these two levels, we will proceed with the rolling recruitment of individual young people. Specifically, all new inpatients who meet the eligibility criteria will be approached by the lead researcher as they present to hospital and asked if they would participate in the trial. Both the young person and his or her parent/s or caregiver/s will receive a Plain Language Statement about the study and will be asked to provide written consent.
For aim number 4 (Obtain feedback from stakeholders about the feasibility of the above with the view of conducting a larger scale trial), we will identify two key stakeholders at each of the five locations and ask them to participate in semi-structured interviews with the lead researcher. The purpose of the interviews will be explained, and verbal consent will be obtained and recorded by the lead researcher.
Sample size calculations were run to determine the number of participants required for the study to have an 80% chance of detecting a 0.3 standard deviation effect of the intervention on the outcome measures. Based on these criteria, we would need complete data from 180 participants in each arm of the study. Attrition and loss to follow-up are estimated to be in the order of 25%.
As such, we would need to recruit 240 participants to each arm of the study. However, as this is a feasibility study, we will recruit 40 participants to each arm of the trial and evaluate the procedure used, time, and feasibility of achieving a larger sample for any future larger-scale study.
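These figures can be approximately reproduced with the standard normal-approximation formula for the per-arm sample size in a two-sample comparison of means. The method behind the original calculation is not stated, so the following is an illustrative sketch only; it yields about 175 complete cases per arm, close to the stated 180, and reproduces the attrition-inflated recruitment target of 240.

```python
import math
from statistics import NormalDist

def n_per_arm(effect_size, alpha=0.05, power=0.80):
    """Approximate per-arm sample size for a two-sample comparison of
    means: n = 2 * ((z_(1-alpha/2) + z_power) / d) ** 2."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_power = NormalDist().inv_cdf(power)          # ~0.84 for 80% power
    return math.ceil(2 * ((z_alpha + z_power) / effect_size) ** 2)

def recruit_target(n_complete, attrition):
    """Inflate the complete-data target for expected loss to follow-up."""
    return math.ceil(n_complete / (1 - attrition))

print(n_per_arm(0.3))             # 175 complete cases per arm
print(recruit_target(180, 0.25))  # 180 / 0.75 = 240 to recruit per arm
```

In practice a statistical package (or a t-distribution-based calculation) would be used, which can give slightly larger numbers than this normal approximation.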
We will use a short form of the validated Attitudes to School Survey developed by Khoo and Ainley (2005) and used in the Longitudinal Study of Australian Children. The survey has been used and referred to by the authors as a measure of school engagement as it aims to measure aspects of the complex relationship between engagement, attitudes, and motivation in terms of their influence on intentions to participate in school and schoolwork. The survey includes a primary and a secondary school version each consisting of 30 statements that cover 5 domains: students’ general satisfaction with school, their motivation, their attitudes toward their teachers, their views on the opportunities their school provides, and their sense of achievement. Students are asked to indicate their level of agreement on a 4-point Likert scale ranging from strongly agree to strongly disagree.
We have developed a short-form (15-statement) version, anticipating that the full version would potentially be too burdensome for the hospitalized and chronically ill child or young person. Items were chosen on the basis of their face validity/perceived appropriateness by the author, who had more than three years’ experience working with hospital-based teachers/educators and young people at the Royal Children’s Hospital in Melbourne, Australia (2012–2016).
There are two ways to score the items: (a) calculate the total level of agreement and (b) calculate a scale score for the domains and survey as a whole. We will use the latter method as we are interested in the overall mean values of both the intervention and the control groups and related standard deviations.
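The scale-score approach can be sketched as follows, assuming items are coded 1 (strongly disagree) to 4 (strongly agree), that higher scores indicate more positive attitudes, and that each item’s domain is known. The coding scheme and the student record below are illustrative assumptions, not the survey’s actual scoring key.

```python
from statistics import mean

def domain_scores(responses):
    """Mean scale score per domain from 4-point Likert items (coded 1-4)."""
    return {domain: mean(items) for domain, items in responses.items()}

def survey_score(responses):
    """Overall scale score: mean across all items in all domains."""
    all_items = [x for items in responses.values() for x in items]
    return mean(all_items)

# Hypothetical record for one student (3 items per domain in a short form).
student = {
    "satisfaction": [4, 3, 4],
    "motivation": [3, 3, 2],
    "teachers": [4, 4, 3],
    "opportunities": [3, 2, 3],
    "achievement": [4, 3, 3],
}
print(survey_score(student))  # 3.2 for these toy data
```

Group means and standard deviations for each study arm would then be computed over these per-student scale scores.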
There has been an increased recognition of the importance of measuring the quality of life of children and young people with chronic health conditions as a standardized way of measuring treatment effectiveness within the child’s functional context (Haverman, Limperg, Young, Grootenhuis, & Klaassen, 2017; Silva, Crespo, Carona, Bullinger, & Canavarro, 2015; Wright & Majnemer, 2014). Measuring quality of life in a validated and standardized way allows for evaluating both intervention effectiveness and cost effectiveness (Aledort et al., 2012).
The Pediatric Quality of Life (PedsQL™; Farias Queiroz, Costa Amorim, Zandonade, & Monteiro de Barros Miotto, 2015) inventory is a widely used and validated modular measure of health-related quality of life. It has been designed to measure the core dimensions of health as delineated by the World Health Organization, including school functioning. The 23-item measure covers four domains: physical functioning, emotional functioning, social functioning, and school functioning. It has both self-report and parent proxy-report versions and can be scored to obtain a total scale score or summary scores for physical health and psychosocial health. However, scale scores for each of the four domains have been used in previous research on children and young people with chronic illnesses (Farias Queiroz et al., 2015). There are age-specific versions for children and young people aged 5–7, 8–12, and 13–18 years.
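Although the scoring procedure is not detailed here, PedsQL™ items are commonly answered on a 0–4 scale, reverse-scored, and linearly transformed to 0–100 so that higher scores indicate better quality of life. The sketch below assumes that convention, including the commonly used rule of not computing a scale score when more than half of a scale’s items are missing.

```python
def pedsql_item_score(raw):
    """Reverse-score a 0-4 item onto 0-100 (0 -> 100, 1 -> 75, ..., 4 -> 0)."""
    return 100 - 25 * raw

def pedsql_scale_score(raw_items):
    """Mean of transformed items, ignoring missing (None) responses.
    Returns None if more than half of the items are missing (assumed rule)."""
    answered = [pedsql_item_score(r) for r in raw_items if r is not None]
    if len(answered) * 2 < len(raw_items):
        return None
    return sum(answered) / len(answered)

print(pedsql_scale_score([0, 1, 2, 3, 4]))           # 50.0
print(pedsql_scale_score([None, None, None, 0, 1]))  # None: too much missing
```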
Family socio-economic characteristics are well-known predictors (although not determinants) of how well children and young people perform at school, both in terms of engagement and academic achievement (Daraganova, 2012; Hancock et al., 2013). We will, therefore, collect information on, and control for the impact of, family socio-economic position: a composite measure made up of the mother’s highest level of education, parental income, and occupation type/status (Daraganova, 2012).
Parental involvement in the child’s education is also understood to be a predictor of the child’s educational engagement and achievement, and has also been shown to mediate the influence of both family income and maternal education (Altschul, 2012). For this feasibility study, we will use three measures of parental school engagement that have been used in the LSAC:
1. How often do you and the study child talk about his/her school activities?
(a) Daily; (b) A few times a week; (c) About once a week; (d) A few times a month; (e) Rarely or never.
2. During this school year, how often did someone in this household help the child with his/her homework?
(a) 5 or more days a week; (b) 3 or 4 days a week; (c) 1 or 2 days a week; (d) Less than once a week; (e) Never.
3. In the last 12 months, how many times have you contacted the school about the child’s academic program for this year?
(a) Not at all; (b) Once or twice; (c) Three or four times; (d) More than four times.
Parents/carers of children and young people with a chronic health condition experience higher levels of psychological stress, anxiety, and depression than parents of children without such a condition (Barnett, Giallo, Kelaher, Goldfeld, & Quach, 2018; Muscara et al., 2015). In addition, parents’ mental health and parenting style are known to be important predictors of a child’s academic performance and engagement in school, education, and learning (Barnett et al., 2018). We will use the Depression Anxiety Stress Scales (DASS-21; Lovibond & Lovibond, 1995), which is a validated measure of anxiety, stress, and depression. The utility of the measure is enhanced by the provision of normative data.
Blinding refers to keeping study participants and personnel “unaware” of which study group they belong to (i.e., intervention group or no-intervention/control group). Blinding in controlled studies is considered an important strategy for minimizing the potential of any performance or detection bias. However, blinding of personnel and participants is often difficult when evaluating psychosocial interventions (Montgomery et al., 2013), and blinding of both groups is considered not achievable in the current study.
Baseline data will be collected by the lead researcher or hospital-based educator at the time of student recruitment to the study or within one week of recruitment. It will consist of basic demographic data of the young person (age, gender, school grade/level, type of chronic illness, cultural identity, main language spoken at home) and of the parent (socio-economic position, involvement in child’s education, and psychological stress). We will also collect baseline data on the primary and secondary outcomes using the measures described above.
We will collect primary and secondary outcome data at both the 3- and 6-month time points after baseline data collection.
In addition, we will collect information on the number and type of education support activities (as described in the program logic model; see Figure 2) that were received by the young person, his or her parent/s, and the school for both the intervention and control groups.
Aim 4 of the study is to obtain feedback from stakeholders about the feasibility of conducting a larger-scale trial based on the experience and learnings of the feasibility study. The lead researcher will conduct semi-structured interviews with two key stakeholders who have been involved in the study at each of the five sites in order to obtain information about their experience of the implementation of the feasibility study and learnings.
The primary and secondary outcomes form the dependent variables in this research. The independent variable is receipt or non-receipt of the intervention (i.e., membership of either the intervention or the control group).
Quantitative data will be analyzed descriptively. Continuous or scale data will be analyzed using analysis of variance (ANOVA), and we will report group means and standard deviations. We will control for confounders as previously described. We will use the number and type of education support activities received as a measure of dose and perform a sensitivity analysis. Qualitative data obtained from stakeholder interviews will be analyzed thematically.
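To illustrate the planned comparison of group means, a two-group one-way ANOVA can be computed from first principles. This is a sketch with toy data; in practice a statistical package would be used, with adjustment for the confounders described earlier.

```python
from statistics import mean

def one_way_anova(groups):
    """Return (F statistic, df_between, df_within) for a one-way ANOVA."""
    all_obs = [x for g in groups for x in g]
    grand = mean(all_obs)
    # Between-group sum of squares: group sizes times squared mean deviations.
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    # Within-group sum of squares: squared deviations from each group mean.
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    df_between = len(groups) - 1
    df_within = len(all_obs) - len(groups)
    f_stat = (ss_between / df_between) / (ss_within / df_within)
    return f_stat, df_between, df_within

# Hypothetical outcome scores for intervention vs. control participants:
f, df1, df2 = one_way_anova([[1.0, 2.0, 3.0], [2.0, 3.0, 4.0]])
print(f, df1, df2)  # F = 1.5 with df (1, 4) for these toy data
```

With two groups, this F test is equivalent to an independent-samples t-test; the ANOVA framing generalizes if further sites or strata are compared.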
This protocol paper provides information about how a feasibility study could be undertaken to obtain robust information about the effectiveness of an education support program for hospitalized students with chronic health conditions. While the protocol was developed to evaluate the effectiveness of the education support program at the Royal Children’s Hospital in Australia, the study design is potentially replicable for evaluating other hospital-based education support programs for children and young people with chronic health conditions.
To date, evidence of the effectiveness of education support programs for this group of children and young people is dominated by qualitative research, case studies, and expert opinion. More robust evidence is needed. Importantly, this feasibility study protocol addressed several important questions and considerations, including developing an answerable research question, choosing a controlled study design that compares the outcomes of an intervention group and a well-matched non-intervention or control group, setting eligibility criteria, selecting important and validated outcome measures, and determining how data should be analyzed and reported. The use of relevant and validated outcome measures, in particular quality of life, and consistent reporting of results using group means and standard deviations are particularly important for future research in this field. Doing so would allow results from different studies to be compared and even pooled, thus providing even more robust evidence of the effectiveness of education support programs. To date, this has not been possible due to significant inconsistencies in these areas across the small number of controlled studies in this field.
Feasibility studies are considered useful as preparation for the conduct of a larger controlled evaluation of complex interventions (Thabane et al., 2016) by providing information about the feasibility of a main or larger study, with the goal of reducing uncertainty, and thereby increasing the chance of a successful larger study. As such, the current feasibility study includes a qualitative component to review and assess important aspects of the implementation of the trial, including recruitment, administering the measurement tools, and follow-up with trial participants. Such experience and learnings would be useful for preparing for any future larger study.
The editorial team of Continuity in Education would like to express their gratitude to the reviewers who generously gave their time and expertise to improve this article and who asked to remain anonymous. The editorial processing of this article was managed by Chief Editor Michele Capurso while the copyediting was carried out by Kirsten McBride.
The authors have no competing interests to declare.
Aledort, L., Bullinger, M., Von Mackensen, S., Wasserman, J., Young, N. L., & Globe, D. (2012). Why should we care about quality of life in persons with haemophilia? Haemophilia, 18(3), e154–e157. DOI: https://doi.org/10.1111/j.1365-2516.2012.02771.x
Altschul, I. (2012). Linking socioeconomic status to the academic achievement of Mexican American youth through parent involvement in education. Journal of the Society for Social Work & Research, 3(1), 13–30. DOI: https://doi.org/10.5243/jsswr.2012.2
ARACY. (2015). Full report: School connection for seriously sick kids: Who are they, how do we know what works, and whose job is it? Retrieved from https://msmissingschool.blob.core.windows.net/assets/pages/Missing+School+Report+FULL+version+published.pdf
Barnett, T. (2018, May 14–18). What does the evidence tell us about an effective model of education support for children with chronic health conditions? Paper presented at the HOPE XI Congress, Poznan, Poland.
Barnett, T., Giallo, R., Kelaher, M., Goldfeld, S., & Quach, J. (2018). Predictors of learning outcomes for children with and without chronic illness: An Australian longitudinal study. Child: Care, Health and Development, 44(6), 832–840. DOI: https://doi.org/10.1111/cch.12597
Biesta, G. (2007). Why ‘what works’ won’t work. Evidence-based practice and the democratic deficit of educational research. Educational Theory, 57(1), 1–22. DOI: https://doi.org/10.1111/j.1741-5446.2006.00241.x
Bronfenbrenner, U. (1995). Developmental ecology through space and time: A future perspective. In P. Moen, G. H. Elder, Jr., & K. Lüscher (Eds.), Examining lives in context: Perspectives on the ecology of human development (pp. 619–647). Washington, DC: American Psychological Association.
Capurso, M., & Dennis, J. L. (2017). Key educational factors in the education of students with a medical condition. Support for Learning, 32(2), 158. DOI: https://doi.org/10.1111/1467-9604.12156
Chalmers, I. (2005). If evidence-informed policy works in practice, does it matter if it doesn’t work in theory? Evidence & Policy: A Journal of Research, Debate and Practice, 1(2), 227–242. DOI: https://doi.org/10.1332/1744264053730806
Chan, A.-W., Tetzlaff, J. M., Altman, D., Dickersin, K., & Moher, D. (2013). SPIRIT: New guidance for content of clinical trial protocols. The Lancet, 381(9861), 91–92. DOI: https://doi.org/10.1016/S0140-6736(12)62160-6
Daraganova, G. (2012). Is it OK to be away? School attendance in the primary school years. Longitudinal Study of Australian Children Annual Statistical Report. Retrieved from https://growingupinaustralia.gov.au/sites/default/files/asr2012.pdf
Dempsey, A. (2019). Pediatric health conditions in schools: A clinician’s guide for working with children, families, and educators [Published online]. Oxford, UK: Oxford University Press. DOI: https://doi.org/10.1093/med-psych/9780190687281.001.0001
Farias Queiroz, D. M., Costa Amorim, M. H., Zandonade, E., & Monteiro de Barros Miotto, M. H. (2015). Quality of life of children and adolescents with cancer: Review of the literature on studies that used the Pediatric Quality of Life Inventory™ [Calidad de vida de los niños y adolescentes con cáncer: revisión de literatura de estudios que utilizaron el Pediatric Quality of Life Inventory™]. Invest Educ Enferm, 33(2), 343. DOI: https://doi.org/10.17533/udea.iee.v33n2a17
Hancock, K., Shepherd, C., Lawrence, D., & Zubrick, S. (2013). Student attendance and educational outcomes: Every day counts. Retrieved from https://www.telethonkids.org.au/globalassets/media/documents/research-topics/student-attendance-and-educational-outcomes-2015.pdf
Haverman, L., Limperg, P. F., Young, N. L., Grootenhuis, M. A., & Klaassen, R. J. (2017). Paediatric health-related quality of life: What is it and why should we measure it? Arch Dis Child, 102(5), 393–400. DOI: https://doi.org/10.1136/archdischild-2015-310068
Khoo, S., & Ainley, J. (2005). Attitudes, intentions and participation. LSAY Research Reports. Longitudinal surveys of Australian youth research report, n.41. https://research.acer.edu.au/lsay_research/45
Littell, J. H., & Shlonsky, A. (2010). Toward evidence-informed policy and practice in child welfare. Research on Social Work Practice, 20(6), 723–725. DOI: https://doi.org/10.1177/1049731509347886
Lovibond, S. H., & Lovibond, P. F. (1995). Manual for the Depression Anxiety Stress Scales. Sydney, Australia: The Psychological Foundation of Australia. DOI: https://doi.org/10.1037/t39835-000
Martinez, Y. J., & Ercikan, K. (2009). Chronic illnesses in Canadian children: What is the effect of illness on academic achievement, and anxiety and emotional disorders? Child: Care, Health & Development, 35(3), 391–401. DOI: https://doi.org/10.1111/j.1365-2214.2008.00916.x
Maslow, G., Haydon, A., McRee, A.-L., & Halpern, C. (2012). Protective connections and educational attainment among young adults with childhood-onset chronic illness. Journal of School Health, 82(8), 364–370. DOI: https://doi.org/10.1111/j.1746-1561.2012.00710.x
Montgomery, P., Mayo-Wilson, E., Hopewell, S., Macdonald, G., Moher, D., & Grant, S. (2013). Developing a reporting guideline for social and psychological intervention Trials. American Journal of Public Health, 103(10), 1741–1746. DOI: https://doi.org/10.2105/AJPH.2013.301447
Muscara, F., McCarthy, M., Woolf, C., Hearps, S., Burke, K., & Anderson, V. (2015). Early psychological reactions in parents of children with a life threatening illness within a pediatric hospital setting. European Psychiatry, 30(5), 555–561. DOI: https://doi.org/10.1016/j.eurpsy.2014.12.008
Nasuuna, E., Santoro, G., Kremer, P., & Silva, A. M. (2016). Examining the relationship between childhood health conditions and health service utilisation at school entry and subsequent academic performance in a large cohort of Australian children. Journal of Paediatrics and Child Health, 7, 750. DOI: https://doi.org/10.1111/jpc.13183
Seymour, C. (2004). Access to education for children and young people with medical needs: a practitioner’s view. Child: Care, Health and Development, 30, 3. DOI: https://doi.org/10.1111/j.1365-2214.2004.00408.x
Shaw, S. R., & McCabe, P. C. (2008). Hospital-to-school transition for children with chronic illness: Meeting the new challenges of an evolving health care system. Psychology in the Schools, 45(1), 74–87. DOI: https://doi.org/10.1002/pits.20280
Shlonsky, A., & Gibbs, L. (2004). Will the real evidence-based practice please stand up? Teaching the process of evidence-based practice to the helping professions. Brief Treatment and Crisis Intervention, 4(2), 137–153. DOI: https://doi.org/10.1093/brief-treatment/mhh011
Silva, N., Crespo, C., Carona, C., Bullinger, M., & Canavarro, M. C. (2015). Why the (dis)agreement? Family context and child-parent perspectives on health-related quality of life and psychological problems in paediatric asthma. Child: Care, Health & Development, 41(1), 112–121. DOI: https://doi.org/10.1111/cch.12147
Thabane, L., Hopewell, S., Lancaster, G., Bond, C., Coleman, C., Campbell, M., & Eldridge, S. (2016). Methods and processes for development of a CONSORT extension for reporting pilot randomized controlled trials. Pilot and Feasibility Studies, 1. DOI: https://doi.org/10.1186/s40814-016-0065-z
Wright, F. V., & Majnemer, A. (2014). The concept of a toolbox of outcome measures for children with cerebral palsy: Why, what, and how to use? Journal of Child Neurology, 29, 1055–1065. DOI: https://doi.org/10.1177/0883073814533423
Zubrick, S., Silburn, S., Gurrin, L., & Shepherd, C. (1997). Western Australian child health survey: education, health and competence. Australian Bureau of Statistics and TVW Telethon Institute for Child Health Research, [Perth].