Understanding Problem-based Learning
With all the talk of educational reform today, changes in classroom assessment practices are undoubtedly at the core of the discussion. The word assess derives from the Latin assidere, meaning “to sit beside.” Assessment is a process we use to observe, analyze, and provide verbal or written feedback. Put differently, assessment is a process used to assist in the development of an individual towards a goal. Authentic assessment captures this spirit of “sitting beside” our students. During authentic assessment, we observe, analyze, and provide feedback while students work towards the achievement of our outlined educational goals (standards). Therefore, authentic assessment happens alongside instruction and is formative rather than summative in nature. Authentic assessment does not discount traditional end-of-unit tests. However, only a portion of student learning is best measured in this traditional, end-of-unit way. So what sorts of assessment tools, both traditional and authentic, are available to us as we sit beside our students to monitor progress and provide feedback?
If you brainstorm all of the things we can ask students to do to show their level of understanding or competence with respect to a given goal, these options can all be grouped into one of three categories. We can ask students to write (or draw or orally explain), to do or perform a given task, or to create or produce a product. These options are sometimes referred to simply as Type I, Type II, and Type III assessments.
Type I assessments are traditional paper/pencil assessments, including multiple choice, true/false, matching, short answer, and extended responses (or essays). When Type I items are grouped together, they comprise an assessment module, as exemplified by the state achievement and diagnostic tests in Ohio. Type I assessments can be written at three levels of thinking (think Bloom’s Taxonomy revamped). The acquiring level requires students to find, access, or memorize information.
Verbs such as measure, recall, retell, or find are typically associated with acquiring-level questions, which assess rehearsed skills or knowledge (memory/recall). Processing questions, in contrast, require learners to mentally manipulate concepts during the assessment. Verbs such as compute, decide, compare, contrast, or determine are often associated with this level, and processing-level items challenge learners to use analytic rather than memory skills. Extending-level Type I assessments involve more complex problem solving. These items incorporate verbs such as solve, develop (a procedure or plan), create, or devise; they require learners to synthesize multiple ideas and generate a solution or proposal.
Project EXCITE Odyssey student sheets provide a multitude of Type I assessment options. Another example of a Type I assessment set related to the topic of drinking water quality follows:
Given a figure showing the process of treating drinking water at a water treatment plant:
- Identify possible sources of water contamination (acquiring);
- Compare and contrast threats to the quality of drinking water coming from a water treatment plant and drinking water coming from ground water wells (processing); and
- Propose a plan to improve the drinking water quality for a ground well system that was found to be sub-standard (extending).
Type II assessments can be described as performance tasks. During a Type II assessment, learners are asked to “do or perform” to demonstrate knowledge or skills. During Project EXCITE Odysseys, students plan and conduct controlled experiments, make oral presentations of their findings, make posters or multimedia presentations, and debate the appropriate actions or best-fit solutions to the problem at hand. All of these opportunities represent Type II assessment options.
The final assessment type, Type III, involves long-term events (or extended student projects). Learners are asked to create or produce artifacts to demonstrate their knowledge, skills, or dispositions (attitudes). Type III assessments involve student investigation of topics over extended periods of time, encourage student creativity, and offer students some choice and decision-making about the content and process of their project. During Project EXCITE Odysseys, students develop an Odyssey Portfolio to document their individual contributions to the examination of the problem over the course of the entire Odyssey. Moreover, EXCITE students plan and implement a Take Action Project (service learning) to apply what they have learned to help solve the problem (thus serving the community).
Checklists and scoring rubrics are matrices or guidelines that help define quality performances for Type II and Type III assessments. They enhance the objectivity of scoring and often help improve student performance, since task expectations are specified and communicated in advance. Therefore, learners can self-evaluate and modify their work prior to submission. Project EXCITE has developed a series of rubrics to score the assessments used throughout the Odyssey. The EXCITE Odyssey assessments and rubrics can be found here.
Sample Type I, II, and III assessments and scoring rubrics are also found via the Ohio Resource Center at: http://www.ohiorc.org. When searching for the ORC assessments, conduct an advanced search by grade level, topic, and assessment resource.
In order for EXCITE to make a significant impact on the teaching and learning in our local schools, we as educators must focus our efforts on research-based best practices, such as contextualizing learning through problem-based strategies and infusing environmental health science as a way to integrate the curriculum. We can then examine the effectiveness of our efforts on student learning by employing Type I, II, and III assessment strategies. We are making steadfast progress in paving the way for long-lasting improvements in teaching and learning. It is an EXCITE-ing challenge that can only be realized through the continued involvement of our schools’ most valuable resources . . . YOU!