The primary goal of the Evaluation and Tracking team is to provide rigorous, high-quality, and timely feedback on both performance and outcomes for the Weill Cornell Clinical Translational Science Center (CTSC). As internal evaluators, the Evaluation key function provides crucial input for the management of the CTSC and serves as a major source of feedback to colleagues across the CTSC. The Evaluation and Tracking team engages in a full range of evaluation activities in concert with all CTSC key functions, using methods that range from fundamental evaluation techniques to emerging, cutting-edge methods and metrics.
Our central focus is the evaluation of translational research and training innovations within a networked system that is integrated with our performance management system and that emphasizes the synergies among our local CTSC hub, our multi-hub collaborators, and the national network.
Local CTSA Evaluation: The foundation of our evaluation capability is the comprehensive range of local data that we collect about hub processes and outcomes. Each CTSC Core has a relevant set of metrics and measures. With the Informatics Core, we have integrated this evaluation information system into the CTSC management information system (WebCAMP). All regularly collected hub-level information is entered into this system. These metrics are augmented with a wide variety of additional data, such as case studies, interviews, and surveys, that provide the detail necessary for understanding metric results. The CTSC Performance Management Committee (PMC) meets every other week to coordinate and synthesize the efforts of Administration, Evaluation & Continuous Improvement, and Quality & Efficiency.
Common Metrics: The national CTSA network has developed, and will continue to roll out, a set of “common metrics” to provide high-level feedback on hub performance. The CTSC is uniquely situated to collect the common metrics locally and use them in our performance management effort. Dr. William Trochim, CTSC Director of Evaluation & Continuous Improvement, has been the sole evaluation representative to the Common Metrics Leadership Team, which oversees execution of the CTSA national network’s common metrics effort. He is co-lead of the Evaluation Common Metrics Leadership Group, Chair of the Hub Resources and Services Common Metrics Workgroup, co-lead of the Process Markers Common Metrics Workgroup, and a member of the Informatics Common Metrics Workgroup.
Multi-Hub Evaluation Projects: Another major focus of our national evaluation effort has been the development of multi-hub collaborative pilot studies of locally originated innovations. The CTSC Evaluation Core has a proven track record of establishing multi-hub teams committed to evaluating pilot projects in order to assess methodologies and interventions. These studies have traditionally included both evaluators and relevant core representatives from each participating hub. The projects have focused on areas of mutual interest (e.g., bibliometrics, IRB approval duration) and have brought together evaluators from across the CTSA network to study these systems and publish their results.
National Network: At the national network level, we will build on and extend our leadership roles in addressing major evaluation issues (e.g., through leadership on the Methods and Processes Domain Task Force and Common Metrics efforts) and in disseminating successful results.