Appendix 1

Developing an Assessment Vehicle
For INTD 105 Writing Seminar
Critical Writing and Reading Core Committee, Fall 1999

Mantra: "The ultimate purpose of assessment is to improve teaching and learning" (Framework for Outcomes Assessment, Middle States Commission on Higher Education)

C. A. Easton's wallet-sized process:

  1. Determine "outcomes"
  2. Evaluate achievement of outcomes
  3. Analyze data
  4. Apply to questions of "institutional effectiveness"
  5. Communicate results

Assessment focuses on a program's or course's "outcomes." The learning outcomes of INTD 105 are apparent in the Core Guidelines:

The principal skill to be acquired in INTD 105 is the ability to produce sustained, coherent, and persuasive arguments on significant issues that arise from the content at hand. In turn, students will be expected to express themselves clearly according to the conventions of standard English.

What kind of data can we collect as evidence of how well these outcomes are met? Assessment experts encourage program facilitators to integrate assessment into ongoing activities rather than create an extra burden for faculty, but they also note that assessment should yield information that is not otherwise available. Although we could look at each instructor's "assessment" of individual students (course and paper grades), this would not supply data for a program assessment. Some typical methods of data collection are:

  1. standardized tests
  2. portfolios
  3. an essay written for the course (e.g., #5 or #6), collected from all or some students in the program
  4. student satisfaction survey
  5. alumni success survey
  6. "anthropological" interviews and observation

We are not limited to one method of data collection.

We will most likely develop a vehicle for "summative" assessment: that is, we analyze our data after the "activity" (i.e., a semester's courses) is completed. This differs from the "formative" assessment individual instructors might perform during instruction to see how well a course is meeting its goals. The summative analysis of data should determine how well the outcomes for the program are being met. For example, data collection methods 1, 2, and 3 require readers to judge how well students produce sustained, persuasive arguments following the conventions of standard English. Such readers would follow an agreed-upon rubric (see attached example), scoring students in various categories on a scale from "does this poorly" to "does this excellently." This produces the data to be interpreted, and the interpretation should reflect institutional standards and values. If, for example, 30% of the essays cannot sustain an argument, we might consider that finding more significant than the fact that 50% have spelling errors.

Interviews and surveys of satisfaction and success also demand interpretive strategies that reflect institutional standards and values. Course satisfaction does not necessarily indicate that course outcomes have been met.

Whichever data collection method is used, this step in the process seeks to answer the question, "How well do our students meet the educational outcomes of this program?"

  1. We need to determine how the results of the data analysis can answer questions about "institutional effectiveness." In other words, is this program serving an actual need for Geneseo? Since assessment data can help faculty improve instruction, the data analysis should address the questions posed by all members of the College community.
  2. Finally, the assessment vehicle needs to include a plan for communicating its results to the College community. The documentation of the assessment should also be comprehensible to external bodies, such as Middle States evaluators and SUNY trustees.