From the outset, it is essential that the Alliance have a clear set of expected outcomes and impacts, thoughtful methods for assessing them, and strategies for collecting and analyzing the associated data. Both qualitative and quantitative dimensions are necessary, and the best way to answer the oft-asked question, “What does success look like?” is to engage the community, via the Alliance leadership team, because doing so will bring greater clarity to Alliance goals and processes. Consequently, although the present document does not address the important issue of outcomes, impacts, and metrics in detail, it does offer some examples that will guide initial planning. Indeed, the relevant outcomes and metrics may evolve with the Alliance, such that early outcomes may differ from those assessed after the Alliance has become well established. The leadership team will work with an external evaluator who can help to identify the measurable impacts and outcomes and the strategies for assessing them.
By definition, impact indicators monitor progress toward the program’s objectives, such as changes in knowledge, attitudes, and intended behavior; as such, they measure psychological variables directly. Outcome indicators, on the other hand, assess whether the program goal has been achieved, capturing longer-term changes or changes sustained over time; these may include objective measures.
Because the impacts identified by the Alliance include supporting ideas, developing skills, and fostering a sense of community, as well as facilitating co-production of knowledge and translating across disciplinary boundaries, psychological measures might include feelings of integration; the degree to which members feel they have been exposed to new methods, theories, and disciplines; and satisfaction with, or the amount learned from, workshops or other platforms. To measure impact effectively, it will be necessary to collect baseline data from the target group at the start of the program, as participants formally become engaged. Follow-up survey data could then be collected annually to identify numerical changes in impacts and to establish qualitative evidence in the form of written responses.
The outcomes identified by this document include: organizing and supporting an intellectual commons; synthesizing information regarding funding opportunities; helping scholars locate needed, specific collaborators; assisting with the preparation of proposals; providing an online system for facilitating effective interaction (including a database of researchers, research results, and collaborative opportunities); providing travel funding; developing resources; creating regular forums; and broadening participation. Most of these outcomes can be measured by counting the number of instances in a given program year. Therefore, outcome metrics are likely to include counts of members, website hits, grant proposals developed and/or evaluated, and workshops or meetings convened. Metrics might also be developed to describe the amount and diversity of information made available to the membership, such as a virtual library of relevant journal articles and solicitations, or a graphic showing the names and specialty areas of experts working in pertinent areas of the social sciences, both topically and geographically. Outcomes will be evaluated at the end of years one, two, and three, with a summative report at the end of year four.