Tag Archives: evaluation

On Practice Research Networks and the changing of thinking and practice

“How do people have their thinking changed?” is the topic of the first Collaborative Exploration (CE) in my graduate critical thinking course this semester. My exploration led me through various steps to a question for further inquiry: what moves and motivates people to make changes when working within the framework of a profession or a particular form of practice? Let me explain (and thereby update an earlier post).

On “Practice Research Networks” and critical thinking

“How do people have their thinking changed?” is the topic of the first Collaborative Exploration (CE) in my graduate critical thinking course this semester. The scenario reads:

There are many approaches to teaching or coaching, each of which aims to improve the knowledge or thinking of students or some other audience. In other words, each aims to change their thinking… We might ask how strong the basis is for any given approach to teaching or coaching. We could, in the spirit of critical thinking, scrutinize the assumptions, evidence, and reasoning behind the approach. In this case, we want you to do this scrutinizing for a teaching/coaching approach “X” (where you choose X…), but also to go further: …consider how to change the thinking of an exponent of X so that they think more critically about their approach.

The approach X I chose to examine is the Human Givens approach to therapy and mental health (HG). This approach has been developed in England since the late 1990s and has exponents in a few other countries, but very few in the United States.

How to evaluate the effect of a reflective-practice promoting workshop

Consider a workshop designed to foster reflective practice.  How might the workshop's effect on reflective practice be evaluated?  One way is to ask participants to record when they undertake a post-workshop reflection process.  This process could use guidelines recorded on an individually customized and evolving template (along the lines below).

As noted, the substance of the reflections is private—participants are not asked to share it.  However, submission of the Google form allows assessment of how frequently participants undertake the process and how often they substitute new guidelines.  These data could then be used to compare the effect of a given workshop with previous versions of the workshop or with other kinds of professional development workshops that also aim to foster reflective practice, and to compare the same workshop run with participants of different nationalities or levels of experience and status.
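The comparison just described can be sketched in code.  This is a minimal sketch, assuming the form exports one record per submission with hypothetical fields `participant`, `version` (which run of the workshop the person attended), and `new_guideline` (whether the submission substituted a new guideline into the template):

```python
from collections import defaultdict

def summarize_submissions(records):
    """Summarize reflection-log submissions per workshop version.

    Returns, for each version, the mean number of submissions per
    participant (a proxy for how often the reflection process is
    undertaken) and the proportion of submissions that substituted a
    new guideline (a proxy for how actively templates evolve).
    """
    per_participant = defaultdict(lambda: defaultdict(int))  # version -> participant -> n
    substitutions = defaultdict(int)                         # version -> n new guidelines
    totals = defaultdict(int)                                # version -> n submissions
    for r in records:
        per_participant[r['version']][r['participant']] += 1
        totals[r['version']] += 1
        if r['new_guideline']:
            substitutions[r['version']] += 1
    summary = {}
    for version, counts in per_participant.items():
        summary[version] = {
            'mean_submissions': totals[version] / len(counts),
            'substitution_rate': substitutions[version] / totals[version],
        }
    return summary

# Illustrative data only: two participants from one run, one from another.
records = [
    {'participant': 'p1', 'version': 'spring', 'new_guideline': False},
    {'participant': 'p1', 'version': 'spring', 'new_guideline': True},
    {'participant': 'p2', 'version': 'spring', 'new_guideline': False},
    {'participant': 'p3', 'version': 'fall',   'new_guideline': True},
]
print(summarize_submissions(records))
```

With real export data, the per-version summaries could then be compared across workshop versions or participant groups, as suggested above.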

A set of principles for developing creativity

revised 23 Dec. 2013

1.  Creativity as processes-in-context

An individual’s creativity happens and is recognized in some context.  Indeed, shaping the relevant context provides additional opportunities for an individual’s creativity.  An individual’s context-shaping efforts, in turn, influence the creative pursuits of others.  Such ongoing “intersecting processes” are depicted schematically here.

Evaluation of educational change

These notes capture the state of evolution, at the end of Spring ’01, of a graduate course on Evaluation of Educational Change that later became Action Research for Educational, Professional and Personal Change (using a framework described in the book Taking Yourself Seriously, http://bit.ly/TYS2012).

Evaluation of academic leaders

A. Members of an academic unit (e.g., a College) should look for a senior academic leader (e.g., a Dean) who makes multi-yearly evaluations hardly necessary.  A genuine leader makes explicit their own objectives with respect to operations of the Unit, takes stock continuously of what is working well and what needs improvement, and reports regularly to the Unit on progress on each of these objectives.

Moving beyond conventional rubrics

Judged by the following rubric, most conventional rubrics are inexpert.

The rubric:
Expert: Helps the evaluated person see how to improve and can be used formatively in self-assessment
Proficient: Helps the evaluated person see how to improve
Needs Improvement: Evaluates the person on multiple criteria
Does Not Meet Standards: Allows evaluations to be averaged out to an overall figure or category

Moreover, they waste a lot of paper. In the following example (drawn from Marshall 2009), all the information we really need is that teachers should aim to anticipate misconceptions that students are likely to have and plan how to overcome them.

(At least this example does not commit the all-too-common sin of listing multiple unrelated criteria, which often leads to the evaluated person not fitting in any box because they meet, say, some of the expert criteria, some of the proficient, and some of the needs improvement.  The fudging that goes on to assign a box anyway undermines the credibility of rubric use.)

We could strip most conventional rubrics down to one column that captures the relevant criterion.  The question still remains: how does one bring any given criterion about if it is not happening?  To help the evaluated person see how to improve, we need to move beyond the conventional rubric.

The first step I recommend to my students, some of whom are teachers, is to complete a regular plus-delta evaluation for a manageable set of criteria, say, 5-15 items that you are prepared to focus on at this point.  (With too many items, none gets much attention, or it becomes too time-consuming to keep evaluating your performance regularly.)  The evaluation might be done by an observer, or it might be done by you directly after the class or other performance being evaluated.  Paying attention to the things you did well (the plus) makes it more likely that you will work on the things that need improving or changing (the delta).  This approach assumes that you (or the person you ask to evaluate you) have ideas about ways to improve.  If some issue arose that you do not know how to address, the plus-delta evaluation puts you in a good position (with rich detail about the class or other performance fresh in mind, and in a constructive emotional state) to raise it for discussion with someone who might have more experience or ideas.
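To make the "manageable set" concrete, here is a minimal sketch of a plus-delta log.  The `plus_delta` helper and its field names are hypothetical, not part of any established tool; the cap on items reflects the 5-15 range suggested above:

```python
def plus_delta(pluses, deltas, max_items=15):
    """Record one plus-delta evaluation for a single class or performance.

    'pluses' are things that went well; 'deltas' are things to change.
    Caps the combined list at a manageable size so that each item can
    actually receive attention in regular, repeated evaluations.
    """
    items = [('+', p) for p in pluses] + [('delta', d) for d in deltas]
    if len(items) > max_items:
        raise ValueError(f'{len(items)} items; keep to {max_items} or fewer')
    return items

# Illustrative entries only, echoing the misconception example above.
log = plus_delta(
    pluses=['anticipated the misconception students were likely to have'],
    deltas=['leave more wait time after questions'],
)
for mark, note in log:
    print(mark, note)
```

Kept over successive sessions, such a log supplies the record of pluses and deltas that the discussion with a more experienced colleague, mentioned above, could draw on.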

I do not know the history of the rise of conventional rubrics.  I do know that I have encountered many people who use or advocate rubrics but have not been able to show me how rubrics are used to help the evaluated person see how to improve.


Marshall, K. 2009. Teacher Evaluation Rubrics.  http://ecologyofeducation.net/wsite/wp-content/uploads/2009/09/teacher-eval-rubrics-may-16-09.pdf