On “Practice Research Networks” and critical thinking

“How do people have their thinking changed?” is the topic of the first Collaborative Exploration (CE) in my graduate critical thinking course this semester. The scenario reads:

There are many approaches to teaching or coaching, each of which aims to improve the knowledge or thinking of students or some other audience. In other words, each aims to change their thinking… We might ask how strong the basis is for any given approach to teaching or coaching. We could, in the spirit of critical thinking, scrutinize the assumptions, evidence, and reasoning behind the approach. In this case, we want you to do this scrutinizing for a teaching/coaching approach “X” (where you choose X…), but also to go further: …consider how to change the thinking of an exponent of X so that they think more critically about their approach.

The approach X I chose to examine is the Human Givens approach to therapy and mental health (HG). This approach has been developed in England since the late 1990s and has exponents in a few other countries, but very few in the United States.

How to evaluate the effect of a reflective-practice promoting workshop

Consider a workshop designed to foster reflective practice.  How can the effect of the workshop on reflective practice be evaluated?  One way is to ask participants to record when they undertake a post-workshop reflection process.  This process could use guidelines recorded on an individually customized and evolving template (along the lines below).

As noted, the substance of the reflections is private—participants are not asked to share it.  However, submission of the Google form allows assessment of how frequently participants undertake the process and how often they substitute new guidelines.  These data could then be used to compare the effect of a given workshop with previous versions of the workshop or with other kinds of professional development workshops that also aim to foster reflective practice, and to compare the same workshop run with participants of different nationalities or levels of experience/status.
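The frequency assessment just described can be sketched in a few lines.  This is a minimal sketch, assuming hypothetical participant IDs and workshop-version labels; the real rows would come from the Google form export:

```python
from collections import Counter
from statistics import mean

# Hypothetical Google-form export rows: (participant_id, workshop_version).
submissions = [
    ("p1", "v1"), ("p1", "v1"), ("p2", "v1"),
    ("p3", "v2"), ("p3", "v2"), ("p3", "v2"), ("p4", "v2"),
]

def reflection_frequency(rows, version):
    """Mean number of logged reflection sessions per participant in one version."""
    counts = Counter(pid for pid, v in rows if v == version)
    return mean(counts.values()) if counts else 0.0

# Compare a workshop version against its predecessor.
print(reflection_frequency(submissions, "v1"))  # 1.5
print(reflection_frequency(submissions, "v2"))  # 2.0
```

The same tally, grouped instead by nationality or experience/status columns, would support the other comparisons mentioned above.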

A set of principles for developing creativity

revised 23 Dec. 2013

1.  Creativity as processes-in-context

An individual’s creativity happens and is recognized in some context.  Indeed, shaping the relevant context provides additional opportunities for an individual’s creativity.  An individual’s context-shaping efforts, in turn, influence the creative pursuits of others.  Such ongoing “intersecting processes” can be depicted schematically.

Evaluation of educational change

These notes capture the state of evolution at the end of Spring ’01 of a graduate course on Evaluation of Educational Change that later became Action Research for Educational, Professional and Personal Change (using a framework described in the book Taking Yourself Seriously, http://bit.ly/TYS2012).  They are presented in a form that keeps evaluation in the center.

A long version of the course title might be “Engagement in Educational Change with Evaluation as a Tool,” where
“Education” is construed broadly to include not only school curricula, but also educational policies and institutional arrangements, and training, coaching, and the conduct of workshops in any setting;
“Evaluation” stands for the systematic study of i) what has been happening before; and ii) the effects of any changes you implement, presumably changes designed in response to evaluating what had been happening before; and
“Engagement” denotes that the course is not only about evaluating past situations or any future changes, but also about collaborating with other people in several ways:

  • to design and bring about constructive change, which includes:
  • to reflect on what change you really desire;
  • to undertake the evaluations;
  • to dialogue and reflect on the implications of any results; and
  • to ensure that the results of the evaluation have an influence on the relevant people and groups, i.e., the potential “stakeholders.”

“Engagement” also reminds you that, in order to contribute effectively to change, you need to be engaged yourself—to have your head and heart together.  The course, therefore, provides tools for personal reflection on your practice.

Evaluation related to Engagement in Educational Change can be summarized in an “Action Research” spiral:

    Systematic study of what has been happening ->
    Reflection & dialogue ->
    Design an action/change/engagement ->
    Implement this action ->
    Systematic study of effects of the action ->
    Reflection & dialogue ->
    Revise the action to improve it and/or Promote its wider adoption ->

As the course unfolds you should come to appreciate the following flow of thought:
0.  Suppose you are concerned about some educational practice or policy or institutional arrangement, or the equivalent in some other setting [including your personal work and life].

1.  In order to influence/change what is going on, it is important to study systematically what:
a. has been happening; or
b. is about to happen (e.g., under a new mandate); or
c. could happen given a change you or someone else is designing.  (Reflective practice—the goal of the CCT Program—implies that we evaluate the effects of any changes we make and learn from that evaluation.)

2.  “Study systematically” means to evaluate the effects of changes in practices/ policies/ institutional arrangements, either by comparing before vs. after the change, or the changed situation vs. an unchanged control.
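A before-vs-after comparison of the kind meant here can be sketched minimally.  The scores and the crude effect-size calculation below are illustrative assumptions; a real evaluation would use proper statistical inference and, where possible, an unchanged control group:

```python
from statistics import mean, stdev

# Hypothetical outcome scores measured before and after a change in practice.
before = [3, 4, 2, 5, 3, 4]
after = [5, 4, 6, 5, 4, 6]

change = mean(after) - mean(before)  # raw mean difference
# Crude standardized effect (average of the two sample sds, not a pooled formula).
spread = (stdev(before) + stdev(after)) / 2
print(f"mean change = {change:.2f}, standardized ~= {change / spread:.2f}")
```

Even a rough calculation like this makes the later steps concrete: a positive change supports promoting wider adoption (3a), while a negligible one prompts revising the change (3b).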

3.  Evaluation of changes helps you to
a.  Promote their wider adoption, or
b.  Revise the changes, or advance new courses of action.

4.  Whether anyone pays attention to the evaluation depends on its political use/fulness for mobilizing support and addressing (potential) opposition.  The politics of evaluation, and of educational change more generally, could be a course in itself, but for now note that:
a.  If you build evaluation into your proposals for change, it shows your preparedness to learn from the effects of the change, and this might increase support for making changes (3b) or promoting their wider adoption (3a); and
b.  If you identify the different stakeholders and look ahead to what the research results could allow them to do, this can enter into the process of designing the change and its evaluation.

5a.  Action Research typically starts with 1a, that is, it focuses first on studying what has been happening, but it inevitably gets drawn into issues of constituency building or “stakeholder buy-in.”
b.  The Evaluation Clock (introduced in week 6) helps you keep an eye on the buy-in of sponsors or stakeholders in deciding what and how to evaluate, how to analyze results, etc.
c. Participatory Action Research (PAR) achieves buy-in through the participation of stakeholders in designing changes, implementing them, and evaluating them.

6.  Participation is enhanced by facilitation, other group processes, and reflective practices that bring out and acknowledge the different participants’ voices.

7.  In this course you develop your ability to go from concern about some educational (or related) practice/ policy/ institutional arrangement to influencing what is going on.  To this end you:
a. experience, learn, and practice various ways to promote participation and reflective practice (including your own participation);
b. examine critically the evaluations of others (or the lack of the appropriate evaluations); and
c. undertake a project in an area of your particular concern in which you design and perhaps carry out a pilot version of either i) an evaluation of a change and/or ii) facilitating participation in change or facilitating reflective practice.

Evaluation of academic leaders

A. Members of an academic unit (e.g., a College) should look for a senior academic leader (e.g., a Dean) who makes multi-year evaluations hardly necessary.  A genuine leader makes explicit their own objectives with respect to operations of the Unit, takes stock continuously of what is working well and what needs improvement, and reports regularly to the Unit on progress on each of these objectives.

“Operations” does not mean here the high-profile issues such as grants received by the Unit or new partnerships.  It means the range of everyday and recurrent practices that support members of the Unit to do the best work they can under the always-constrained circumstances, allowing faculty members in particular to preserve a balance between research & writing, service & institutional development, and teaching & mentoring.  (E.g., what email etiquette does the leader aspire to and model for the Unit members?)

Whether the current leader continues after the review or a new leader is found, we should not be left unsure of their objectives after three months, let alone by the time the multi-year evaluation takes place.

B. We should also look for a leader who ensures that staff support the faculty well, especially the work of faculty to serve students.  This requires a leader to:

1. evaluate, at regular intervals, the staff that the leader supervises, in a way that supports the staff members’ improvement and follows the necessary procedures if any staff member is not able to fulfill their duties and needs to be dismissed;

1a. ensure that faculty members know that #1 has taken place, preferably with consultation of faculty members;

2. establish supervision of staff at a level as close as possible to the faculty members served, together with a systematic means of feedback to the supervisor from those served.  (E.g., staff serving a departmental program might formally report to the Department chair, but the chair could delegate supervision to the director of a program housed within the Department); and

2a. explain to faculty the rationale for any staff re-organization, evaluate its effects, and make adjustments accordingly.

C. #A & B correspond to a model of evaluation in general, in which evaluations have four different uses.  In this situation the uses play out as below; #2, 3 & 4 should not be neglected in favor of #1:

1.  To inform decisions by superiors about whether to continue the leader’s appointment and, if so, what guidance or expectations to attach to the re-appointment;

2. For the evaluated person to identify ways to improve;

3. For the evaluators to clarify what they have learned from interacting with the evaluated leader about the ways that they want to interact with academic leaders, whether the current Unit leader or a leader of a unit at any level; and

4. To inform institutional learning about how faculty and the leader’s superiors can get the most from interacting with the current Unit leader or any other leader, and from evaluating leaders.

D. Because what actually happens in academic institutions departs from #A, B, and C, it is necessary to:

1. Develop an approach to moving from where we find ourselves towards #A, B, and C, which might start with:

2. Explaining the rationale for the points made under #A, B, and C (that is, not assuming they are self-evident).  This may be the subject of a future post.  For now, consider Taking stock as an ethical imperative.

Moving beyond conventional rubrics

Judged by the following rubric, most conventional rubrics are inexpert.

The rubric:
Expert: Helps the evaluated person see how to improve and can be used formatively in self-assessment
Proficient: Helps the evaluated person see how to improve
Needs Improvement: Evaluates the person on multiple criteria
Does Not Meet Standards: Allows evaluations to be averaged out to an overall figure or category.

Moreover, they waste a lot of paper. In the following example (drawn from Marshall 2009), all the information we really need is that teachers should aim to anticipate misconceptions that students are likely to have and plan how to overcome them.

(At least this example does not commit the all-too-common sin of listing multiple unrelated criteria, which often leads to the evaluated person not fitting in any box because they meet, say, some of the expert criteria, some of the proficient, and some of the needs improvement.  The fudging that goes on to assign a box anyway undermines the credibility of rubric use.)

We could strip most conventional rubrics down to one column that captures the relevant criterion.  The question still remains: how does one bring any given criterion about if it is not happening?  In order to help the evaluated person see how to improve, we need to move beyond the conventional rubric.

The first step I recommend to my students, some of whom are teachers, is to complete a regular plus-delta evaluation for a manageable set of criteria, say, 5-15 items that you are prepared to focus on at this point.  (With too many items, none gets much attention, or it becomes too time-consuming to keep evaluating your performance regularly.)  The evaluation might be done by an observer or it might be done by you directly after the class or other performance being evaluated.  Paying attention to the things you did well (the plus) makes it more likely that you will work on the things that need improving or changing (the delta).  This approach assumes that you or the person you ask to evaluate you have ideas about ways to improve.  If some issue arose that you do not know how to address, the plus-delta evaluation puts you in a good position (with rich detail about the performance and your emotional state still fresh) to raise it for discussion with someone who might have more experience or ideas.
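One way to keep such a plus-delta record is a simple structured log.  The criteria and the sample entry below are hypothetical, a sketch of the template rather than a prescribed format:

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative criteria: a small set (5-15 in practice) chosen by the teacher.
CRITERIA = [
    "anticipated misconceptions",
    "wait time after questions",
    "clear closing summary",
]

@dataclass
class PlusDeltaEntry:
    when: date
    plus: dict = field(default_factory=dict)   # criterion -> what went well
    delta: dict = field(default_factory=dict)  # criterion -> what to change

log = [
    PlusDeltaEntry(
        when=date(2013, 12, 23),
        plus={"wait time after questions": "held five seconds before calling"},
        delta={"anticipated misconceptions":
               "students conflated mean and median; plan a contrast example"},
    ),
]

# Reviewing deltas over time shows which criteria keep needing attention.
recurring = [c for c in CRITERIA if any(c in e.delta for e in log)]
print(recurring)  # ['anticipated misconceptions']
```

Unlike a conventional rubric, the log records what went well and what to change in the evaluated person's own words, which is what makes it usable formatively.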

I do not know the history of the rise of conventional rubrics.  I do know that I have encountered many people who use or advocate rubrics but could not show me that rubrics are used to help the evaluated person see how to improve, or how they could be.


Marshall, K. 2009. Teacher Evaluation Rubrics.  http://ecologyofeducation.net/wsite/wp-content/uploads/2009/09/teacher-eval-rubrics-may-16-09.pdf

Effective collaborators: Skills and dispositions (Re-engagement)

Re-engagement— Respect, risk, and revelation combine so that participants’ gears are re-engaged (to use a machine metaphor), allowing us to mobilize and sustain quite a high level of energy during the collaboration. But re-engagement goes beyond an individual’s enhanced enthusiasm. It is a collective or emergent result of the activities that bring people who have generative differences into meaningful interactions that can catalyze transformations. In other words, meaningful social engagement and opportunities for personal introspection contribute to participants discovering new possibilities for work with others on ideas they brought to the collaboration. Re-engaged in these ways, we:

  • inquire further on the issues that arise in our own projects.
  • select and focus on a subset of the specific plans or knowledge generated during the collaboration in our subsequent work.
  • engage actively with others.
  • inquire further into how we can support the work of others.
  • are reminded of our aspirations to work in supportive communities.
  • make the experiences of the collaboration a basis for subsequent efforts to cultivate collaborators.
  • arrange to assist or apprentice with the facilitator in a future collaboration.

(Of course, what we state in the end-of-collaboration evaluations cannot show that we will follow through on intentions to stay connected or to make shifts in our own projects and work relations. A need or desire for periodic re-charging of our ideas and intentions is evident when past participants return to subsequent collaborations.)
[See Introduction to this series of posts.]

