Cochrane Review Groups (CRGs) can ask Network Associate Edtiors for feedback on Cochrane Reviews that have been submitted for editorial review.Â There are various scenarios when it may be appropriate to contact your Associate Editor for review screening or other checks; see POSSIBLE SCENARIOS Possible scenarios for referral of reviews for screening.
Screening is primarily intended to help decide what additional work might be needed before publication. It is carried out using a Triage Tool (see Appendix 1) that focuses on three separate aspects of the review:
Further information about the Triage Tool can be found in the Operational guide to the Cochrane Editorial & Methods Department Review Triaging Tool. The process for review screening is detailed in Box 2 below (Process for review screening); please contact your Associate Editor if you have any queries.
- When CRGs have identified the need for screening (see Box 1 for possible scenarios), they should contact the Associate Editor and Senior Editor for their Network as soon as possible. Advance notice of upcoming reviews to be referred for screening, particularly those that are large or require a quick turnaround, is much appreciated. In most cases reviews will be referred for screening before copy-editing; however, it may be appropriate to run the two processes in parallel.
- Reviews can be referred for screening at any stage of the editorial process, and checks can be carried out on (but are not limited to) the:
- Results and analysis
- Implementation of GRADE and Summary of findings tables
- Abstract and plain language summary
- Discussion and conclusions
- Associate Editors will consult with other sources of advice as necessary, for example other Associate Editors, Senior Editors or the Methods Support Team (to be established during 2019).
- Associate Editors will screen the review using the Triage Tool (see Appendix 1) and return a written report to the CRG.
- Associate Editors will also specify whether they will need to rescreen the amended review before proceeding, or whether they are happy for the CRG to check that the amendments have been made.
- Please note that the turnaround time for a screening report will depend on the volume of queries received and the size of the review. An estimated completion date can be provided once a request is received.
- Archie version no.
- Implementation of protocol methods
- Summary of findings table
- Appropriate eligibility decisions
  - Check protocol comparison generated from Archie and Differences between protocol & review for any changes to the design of the review (eligibility criteria, outcomes)
  - Check for exclusions based on reporting of data
- SoF table presents main outcomes (benefits & harms) for the main comparison
  - Look at the methods section for consistency of SoF table outcomes; assess the methods for using GRADE
- Title reflects the review question
- Research question (PICO) is clear & the rationale for the review is described
- Appropriate risk of bias assessment
  - Check for omission of standard domains; check that the inclusion of any non-standard domains is explained & justified, and that the domains appear well understood (fit between explanation and domain, appropriate judgements)
- PICO (including settings) is accurate & informative
- Search date <12 months from publication
- Outcomes fully defined (i.e. time of measurement, scale of measurement, range of scores specified)
- Characteristics of included studies summarised
  - Consider copying & summarising information presented under 'Description of included studies'/'Overall completeness & applicability'; look for details in the SoF table relating to settings & participants
- Analyses match the methods section
  - MDs/SMDs; fixed/random effects; subgroup analyses. Check the protocol comparison and Differences between protocol & review to see which plans changed from the protocol
- Assumed & corresponding risks included (where appropriate)
- Findings for all important outcomes reported for the main comparison(s), including information about harms
  - Check consistency with the first SoF table & others as appropriate
- Data from non-standard designs (cluster, cross-over, etc.) appropriately incorporated where relevant
  - Check 'Unit of analysis issues' in the methods, footnotes in forest plots, and sensitivity analyses; scan the study characteristics to confirm the unit of allocation & sample sizes if in doubt
- GRADE ratings justified & adequately explained
- Direction, magnitude & confidence intervals of effects clearly described where appropriate
- Clear & accurate summary of narrative results (where appropriate)
- Reporting of results avoids emphasising statistical significance to determine the presence or absence of an effect
- Quality ratings presented for narrative results (where appropriate)
- GRADE ratings for outcomes reported in the abstract
- Multiple measurements from studies with more than one eligible comparator handled appropriately
  - Check for double counting of studies in forest plots & adjustment of events/sample sizes in control groups
- Absolute effects used to illustrate the relative effects where appropriate
- Outlying results acknowledged & explored appropriately
  - Consider how plausible the direction/size of effects are overall; explore data from studies with unusually large or discordant effects