Developing a Data Collection and Analysis Plan
Data use is central to improvement efforts: reference points are needed to define the issue and, once it is addressed, to show how well the solution performs compared with the previous process or method. By collecting and analyzing data, an institution can demonstrate the effects of its corequisite remediation in a substantive and empirical manner.
In considering how to create a data collection and analysis plan, colleges will need to determine the following:
- Types of data to use: There are several types of data that can be used to inform improvement. Quantitative data, such as data from administrative records or surveys, can show what outcomes were achieved. In some cases, colleges may also want a deeper understanding of why particular outcomes were observed, so they may want to collect data on the perspectives of key stakeholders or on barriers and facilitators to achieving outcomes. In these cases, the college may benefit from collecting qualitative data, such as through interviews or focus groups.
- Data from a sample versus the full population: Data are often collected from just a sample of individuals (rather than the full population) in order to cut down on the costs of data collection and the burden on participants and to improve the quality of the data that are collected. However, collecting data from a sample also has limitations. If a college is using quantitative data and testing for statistical significance, the sample must be large enough to detect the effects it expects to see.
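To make the sample-size point concrete, the sketch below (not from this guide) approximates how many students per group are needed to detect an effect of a given size in a two-group comparison, using only the Python standard library. The effect sizes, significance level, and power values shown are illustrative assumptions, not recommendations from this guide.

```python
import math
from statistics import NormalDist

def sample_size_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate n per group for a two-group comparison of means
    (normal approximation; effect_size is Cohen's d)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value, two-sided test
    z_power = NormalDist().inv_cdf(power)          # value needed for desired power
    return math.ceil(2 * ((z_alpha + z_power) / effect_size) ** 2)

# A small effect (d = 0.2) requires far more students per group than a
# large one (d = 0.8), which is why sample size must be planned early.
print(sample_size_per_group(0.2))  # roughly 393 per group
print(sample_size_per_group(0.8))  # roughly 25 per group
```

If the available population (for example, all students in corequisite sections) is smaller than the required sample, the college may need to plan for a longer data collection window or accept that only larger effects will be detectable.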
- Create new measures or use ones developed by others: Wherever possible, it can be useful to use survey questions and scales that have been developed by other colleges or researchers. Using existing measures can save a college time in mapping out a data collection plan, can make the evidence more relevant to other colleges, and can help ensure that the measure has already been shown to be of high quality with other data. However, there may not be other studies that have looked at the concepts a college is interested in; in these cases, the college will need to develop new measures.
- Whether existing data sources can be used: Because data are costly and take time to collect, it can be beneficial to rely on existing data sources wherever possible. If students or instructors are already surveyed or brought together for other reasons, it may be easier to add a few questions to these existing surveys or to pull together instructors for focus groups during department meetings, rather than creating separate data collection efforts.
- Determine which data should be collected. Identify the best sources of data for each evaluation question, whether this is quantitative data from administrative records or surveys; qualitative data from focus groups, interviews, observations, or surveys; or program data from program documentation and data records. Determine from how many individuals data will be collected (i.e., all participating individuals or a sample), and whether existing measures will be used or new ones will be developed.
- Determine how to construct the comparison group. Because rapid-cycle evaluation aims to produce evidence of effectiveness, it is important to have a comparison group to be able to measure this effect when looking at outcomes. For other types of measures on how the program was rolled out and whether there were barriers and facilitators, it may be less important to have comparison group data. For each measure, describe whether there will be a comparison group and how the comparison group will be constructed.
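One common way to construct a comparison group, reflected in the lottery example later in this section, is random assignment. The sketch below (an illustration, not a procedure prescribed by this guide) randomly splits a set of sections into a pilot group and a comparison group; the section names and seed are hypothetical.

```python
import random

def lottery(sections, seed=0):
    """Randomly split sections into a pilot group and a comparison group."""
    rng = random.Random(seed)  # fixed seed so the draw is reproducible
    shuffled = sections[:]     # copy so the original list is untouched
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

pilot, comparison = lottery(["A", "B", "C", "D", "E", "F"])
# Every section lands in exactly one group, so outcomes in the pilot
# sections can be measured against the unselected comparison sections.
```

Because assignment is random, differences in outcomes between the two groups can more credibly be attributed to the strategy being piloted rather than to pre-existing differences between sections.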
In developing this plan, please consider the following questions.
- Will you primarily be relying on existing data sources?
- Will you be measuring things that can be observed within a short period?
- Will you pilot new questions or instruments to ensure that they will produce the data you intend for them to produce?
- Will you collect data anonymously and inform participants about who will have access to responses, to increase the likelihood that they will provide honest and accurate responses?
- Will you provide participants with information on how the data will be used, to communicate the value of investing time in data collection and of providing honest and accurate responses?
Example evaluation plan entry:
- Evaluation question: Did the alignment strategy (requirement to meet four hours and align syllabi) lead to greater alignment of instruction?
- Who and how many in the data collection sample: All students enrolled in two-instructor sections of the corequisite
- Measures and instruments used: Add question(s) to the existing student satisfaction survey
- Comparison group: Sections with instructor pairs who were not selected by lottery to pilot the alignment strategy