Thursday, March 30, 2017

Confusion May Be Contagious

In discussing a number of my recent confusions with colleagues in China, I realized that my confusions were also causing a couple of them to become confused. The thought that confusions are contagious then escaped from my lips without much thought. Now I am wondering if there is any evidence to support such a claim.

Many years ago, when working at the University of Bergen in Norway in the area of system dynamics-based learning environments, I devised a principle called UUPS – pronounced ‘oops’, standing for the Universal Underlying Principle of all Systems – namely, the notion that, when one begins working on a complex and challenging problem, something has already gone wrong. Typical early problems include (a) misdiagnosing the problem, (b) not providing sufficient resources early in the effort to address the problem, (c) focusing on just one aspect of a complex and dynamic problem, and (d) assuming that the future will resemble the past with regard to the problem situation.

A first corollary to UUPS is the notion that mistakes rarely occur in isolation. Typically, one mistake leads to another and so on. Misdiagnosing the problem can lead one to address a symptom of a problem rather than an underlying cause. One might then develop an elegant but entirely ineffective solution. Yes, I have done so on more than one occasion, although my solutions are rarely very elegant. That first corollary is related to my current thought that confusion may be contagious. Recent political events tend to reinforce this nascent belief.

I added two additional corollaries to UUPS: (a) there are rarely sufficient resources to do what you believe should be done to address a problem situation involving a complex and dynamic system, and (b) others generally have good or better ideas about what can be done to improve the situation. But then, I rarely listen to myself, much less to others.

I seem to recall a phrase treated in some depth in one of my psychology courses many years ago around the popular claim that misery loves company. The instructor asked us to read about that notion, find its origins, and offer an explanation based in psychology that would lend some credence to that popular belief. What little I recall from that class is the notion that the evidence tends to show that misery likes miserable company. Apparently, a 14th century Italian historian named Dominici de Gravina wrote, in his Chronicon de rebus in Apulia gestis, something that translates roughly as follows: “It is a comfort to those who are unfortunate to have had companions in misfortune.”

My more recent dives into the research pertaining to confusions being contagious led mainly to articles about contagious diseases. Being easily influenced by what I read, I then had the thought that perhaps confusion could be treated as a kind of disease – a disease of the mind, so to speak. This notion has promise in terms of understanding how confusions are formed, reinforced, and spread to others. Perhaps.

Confusion is a contagious disease of the mind. Onward through the fog, as they say at Oat Willie’s (see http://oatwillies.com/). Could it be that a person develops a malady that results in frequent confusions? If so, how might that happen? Many have written about habits of the mind (for example, see http://www.teachthought.com/pedagogy/what-are-the-habits-of-mind/). In general, those persons have taken a positive view of the habits of the mind and argued for their support and development in teaching and learning. However, a more neutral approach would be to view mental habits just as other habits are viewed – namely, in terms of repeated activation that leads to a relatively thoughtless repetition of a particular disposition in a certain kind of situation. The notion of reinforcement patterns in the brain’s neural network structure seems to support such a general analysis. So, it may be possible that through repeated activation a person develops a mental habit that is likely to result in confusion in some cases. Such a habit could be repeated reliance on one source or one authority or one perspective to account for a wide range of beliefs. I recall reading Quine and Ullian’s The Web of Belief (see http://emilkirkegaard.dk/en/wp-content/uploads/W.-V.-Quine-J.-S.-Ullian-The-Web-of-Belief.pdf) years ago and thinking that, when confronted with something that does not fit with prior beliefs and dispositions, one is forced to call into question an entire set of beliefs, however reluctantly.

Then I remember one of Nietzsche’s aphorisms in The Dawn of Day: “The surest way to corrupt a youth is to instruct him to hold in higher esteem those who think alike than those who think differently.” Perhaps following that advice is one way to inoculate oneself and others against confusion. One of my philosophy mentors, Oets Kolk Bouwsma, argued that many philosophical constructs were the result of conceptual confusion (see https://en.wikipedia.org/wiki/Oets_Kolk_Bouwsma). This is akin to Wittgenstein’s notion in Philosophical Investigations that language can lead one astray (see http://aprender.ead.unb.br/pluginfile.php/170854/mod_resource/content/1/RPGB%20Wittgenstein%20Phil%20Investigations.pdf). I have formed a habit of reminding myself of two of Wittgenstein’s key ideas: (a) from the Tractatus Logico-Philosophicus – we picture facts to ourselves (that is to say, people naturally create internal representations to make sense of things they experience that are new or puzzling; see http://tractatus-online.appspot.com/Tractatus/jonathan/); and (b) from Philosophical Investigations – we talk with others about those internal representations, to which no one has direct access, in the form of language games. If one only talks with those who hold similar views and dispositions, then one is not likely to question one’s assumptions or carefully examine the quality and credibility of the evidence supporting those views and dispositions (recall Nietzsche’s advice).

Well, there seems to be no lack of conceptual confusion on my part or among the general public. Perhaps confusion has reached epidemic proportions. Not to panic … one can always crawl back into Plato’s cave described in The Republic (see https://web.stanford.edu/class/ihum40/cave.pdf). Life in the shadows is sometimes easier to manage.

Tuesday, February 28, 2017

Fake News: Spinning and Winning

“Truth, crushed to earth, shall rise again” (William Cullen Bryant)

In Nelson Goodman’s (1954) Fact, Fiction and Forecast, the notion of projectible predicates arises to differentiate hypotheses based on regularities well grounded in experience from those which are not. There is a parallel treatment of counterfactuals (conditionals of the form “If X, then Y” where X is in fact false, as in “If this thing in my hand were made of copper, it would conduct electricity,” said of what is actually a wooden popsicle stick) involving relationships well grounded in experience and those which are not. What might we say about facts, fictions, and forecasts in educational research or in the current political climate?

What are some facts in the area of educational research? A 2014 study from the Program for the International Assessment of Adult Competencies (PIAAC), involving 33 countries, shows that seven of those countries scored significantly higher than the USA on a literacy scale (which measured understanding, evaluating, using, and engaging with written text) and six scored significantly lower, and that the USA was slightly below the average (see https://nces.ed.gov/fastfacts/display.asp?id=69).
Here is an associated counterfactual claim: If an adult person M (say that is me – an American) is a lifelong resident of Japan, then M is more likely to be literate than N (where N is an adult American picked at random). Is that counterfactual claim reasonable?

Here is another claim supported by extensive educational research: Directive feedback (providing corrective information) tends to work well with learners new to a topic or domain whereas facilitative feedback (providing guidance and cues) tends to work well with more advanced learners (Shute, 2007; see https://www.ets.org/Media/Research/pdf/RR-07-11.pdf). Now, suppose that P is a learner new to the domain of logic and epistemology (me, for instance – my dissertation was in that area) and someone claims that P is more likely to benefit from directive feedback in the area of logic than Q (a middle school student in rural Alabama). Is that a reasonable claim?

One way to treat counterfactuals (IF-THEN claims with the IF-clause clearly false) is to dismiss them as trivial or even meaningless. Yet some seem to make sense to some people. Other counterfactuals can be used to make jokes, as in: “If I knew everything, then I would know _______________.” I forgot to mention that this was a pop quiz. How did you fill in the blank? I used this phrase: “… then I would know where parallel lines meet.” Math humor is not so humorous to very many people.
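
For readers who like to see the logic written out, here is a minimal sketch in Python (my own illustration, not part of Goodman's treatment) of why the "trivial" reading is tempting: under the standard material-implication interpretation, any IF-THEN claim with a false IF-clause comes out true no matter what the THEN-clause says.

# A minimal sketch of the material-implication reading of IF-THEN:
# "If X then Y" is treated as (not X) or Y.
def material_conditional(x: bool, y: bool) -> bool:
    return (not x) or y

# Enumerate the truth table. Whenever the IF-clause is False, the conditional
# comes out True regardless of the THEN-clause, which is why counterfactuals
# look "trivial" on this reading.
for x in (True, False):
    for y in (True, False):
        print(f"X={x!s:5}  Y={y!s:5}  'If X then Y' = {material_conditional(x, y)}")

Counterfactuals of the popsicle-stick variety resist being flattened into this truth table, which is part of what makes them interesting.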

On to fictions. I just love fictions. Sometimes I think about my training in philosophy … one of my professors said that the never-ending business of philosophy was to help us understand the boundaries between sense and nonsense. My own take on philosophy is that it is a kind of thought in slow motion. Fictions are claims that do not hold up under scrutiny. Scrutiny, for those of you taking notes, is when you close one eye and take a closer look. There are some blatant fictions, such as this one I discovered in a book on medieval logic: “I just ate the last cannibal,” spoken in a group of monks who had taken vows of silence. Bouwsma (one of my professors) offered this example: “I just suffered a fatal heart attack.”

There are many less blatant fictions. Here is one: “Humans only came to the Grand Canyon area about 4,000 years ago.” Here is another one: “All of the fossils found in the Burgess Shale in the Canadian Rockies were fossils of creatures still living somewhere.” Consider this one: “There is no evidence that human activities contribute to climate change.” Some people believe what they want to believe and are reluctant to take a closer look at evidence or consider alternative perspectives or beliefs. There is a difference between advocacy for something and evidence supporting something. A critical issue concerns the nature of good and compelling evidence. Just as there is a fuzzy boundary between sense and nonsense, the boundary between advocacy and research is somewhat fuzzy. 

Just as counterfactuals turn out to be somewhat problematic, there is another kind of IF-THEN claim that is also problematic. I call it an unconditional conditional, and it has the general form “If X, then Y,” where, no matter what is put in for X, the person making the claim will maintain the truth of Y. No refutation of the unconditional conditional is considered possible. In such a case, one cannot conclude that Y is a fiction … one can only walk away from the advocate’s unreasonable challenge to offer evidence that Y, or the conditional with Y as its then-clause, may not be true.
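
To make the "no refutation is possible" point concrete, here is a toy sketch (again mine, with invented placeholder antecedents) of a claim that is unfalsifiable by construction: the advocate maintains Y no matter what is offered for X, so no candidate antecedent can ever serve as a counterexample.

# A toy model of an "unconditional conditional": the advocate maintains Y
# regardless of what is substituted for X, so the claim cannot be refuted.
def advocate_maintains_y(x) -> bool:
    return True  # Y is asserted no matter what the antecedent is

# Try a range of candidate antecedents (hypothetical placeholders).
candidate_antecedents = ["supporting evidence", "contrary evidence", "no evidence at all", 42, None]
counterexamples = [x for x in candidate_antecedents if not advocate_maintains_y(x)]
print("Counterexamples found:", counterexamples)  # always [] -- unfalsifiable by construction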

Does this ever happen in educational research? In medical research? In political discourse? The challenge of finding examples in each of those categories is left to the reader – this is the mid-term exam. Hint – the answer to the first set of three questions is ‘yes’ – this does not constitute timely or informative feedback. It is merely encouragement to keep on keepin’ on.
When you have completed the mid-term exam, you may want to continue on to forecasts. My forecast is that some of you will pass the mid-term. After all, it was a take-home exam … or take-to-the-bathroom exam.

Having said a few things about IF-THEN claims, it seems natural to apply some of that discussion to forecasts, as these often come in the form of complex IF-THEN statements, such as: “If W and X and Y, then Z” – W might refer to the learning or instructional context, X might refer to the students or teachers, and Y might refer to the intervention or treatment. Obviously, each of the parts of the complex IF-clause could be compound, which means that the forecast result Z depends on a conjunction of a set of contributing factors. If Z does not occur, the advocate of Z is likely to look for one or more deficiencies in the set of contributing factors. Another approach is to conduct a replication study, or a revised version of the study, to see if Z might occur. Yet another approach is to revise Z and conduct a replication study. Forecasting or predicting and then confirming or refuting or refining is not easy … it is what scientists and meteorologists and other investigators are trained to do.
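
As a rough sketch (mine, with hypothetical factor names standing in for W, X, Y, and Z) of how such a compound forecast might be checked, and of why a failed Z sends the advocate looking first at the contributing factors:

# A rough sketch of checking a compound forecast "If W and X and Y, then Z".
# The factor names and the sample values below are hypothetical placeholders.
def evaluate_forecast(context_ok: bool, learners_ok: bool, intervention_ok: bool, z_observed: bool) -> str:
    factors = {"W: context": context_ok,
               "X: students/teachers": learners_ok,
               "Y: intervention": intervention_ok}
    if not all(factors.values()):
        # The IF-clause did not hold, so the forecast was never really tested;
        # the advocate points to the deficient factor(s).
        deficient = [name for name, ok in factors.items() if not ok]
        return f"Forecast untested; deficient factors: {deficient}"
    return "Forecast supported" if z_observed else "Forecast refuted: antecedent held, but Z did not occur"

print(evaluate_forecast(True, True, False, z_observed=False))
print(evaluate_forecast(True, True, True, z_observed=True))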

I have a vague memory of reading Fact, Fiction and Forecast about 45 years ago. I was fascinated by the concept of the hypothetical predicate ‘grue’ for things that are green before some future date and blue after that date. At this time, ‘all emeralds are green’ and ‘all emeralds are grue’ are both true and confirmed by the same evidence. However, few believe that emeralds will all be blue after that future date. I also realized that I did not understand what a meteorologist meant by a forecast of a 40% chance of rain. Was it that 40% of the area covered by the forecast would surely receive rain, or that any random spot in the forecast area would have a 40% chance of rain, or that it would rain for 40% of the day, or ??? Forecasting still bewilders me. I recall a sports enthusiast being asked to predict the outcome of an event about to begin. The sports enthusiast replied, “Let’s just watch and see what happens.” My respect for sports enthusiasts rose significantly that day.
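
A small sketch (my own, with an arbitrary cutoff date standing in for Goodman's future time t) of why observations made before that date confirm "all emeralds are green" and "all emeralds are grue" equally well:

from datetime import date

# Arbitrary hypothetical cutoff standing in for Goodman's time t.
CUTOFF = date(2100, 1, 1)

def is_green(color: str) -> bool:
    return color == "green"

def is_grue(color: str, examined_on: date) -> bool:
    # "Grue": green if examined before the cutoff, blue if examined after it.
    return color == "green" if examined_on < CUTOFF else color == "blue"

# Every emerald examined so far (i.e., before the cutoff) is green, and therefore
# also grue, so the same evidence "confirms" both hypotheses even though they
# diverge about emeralds examined after the cutoff.
observations = [("green", date(1955, 6, 1)), ("green", date(2017, 2, 28))]
print(all(is_green(color) for color, _ in observations))          # True
print(all(is_grue(color, when) for color, when in observations))  # True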

I suppose we need a final exam since we have had a pop quiz and a mid-term exam. The final exam is a single multiple choice question:

Which of the following statements is true?
  1. There is someone in this room who loves all and only those persons in this room who do not love themselves.
  2. Never in the course of human history have events so resembled the present as they now do.
  3. It is a fact that X leaked Y, but that fact is fake news.
  4. There are an even number of planets in the Milky Way galaxy.
  5. If X is a human being, then X knows less than X is typically inclined to believe that X knows.
  6. More than one of the above is true.
  7. More than one of the above is false.

Truth? What is truth? I will go where you go, answered Ruth. My trumpet is louder than yours, so follow me, said someone else. The truth they are telling might only be the truth that is selling. And the slow one now will later be fast, said the Nobel laureate.

Sunday, January 15, 2017

So much beauty ... so little ...

So much beauty, so much tragedy, so much nonsense …what shall we make of it all?

From CNN - Myanmar's Mergui Archipelago:

Lake Louise in Canada:

My own picture taken in Bali in 2016:

And another one of LA (Lower Alabama) coast at sunset:
I have been so fortunate to see so much natural beauty in so many different places.

Then I think about the beautiful people I have known – especially my children and grandchildren and their spouses. The many acts of kindness and understanding I have witnessed through them make me want to be more kind and understanding.

I need to have these thoughts of beauty when I think about so many tragedies in so many places, from Aleppo to Haiti and too many other places to mention, many of which are closer to home. Natural tragedies such as hurricanes and earthquakes are beyond our control, but many tragedies are within our spheres of influence, yet they continue to occur, in my own country and elsewhere around the world.

Like Lawrence Ferlinghetti, I am waiting (see https://www.poetryfoundation.org/poems-and-poets/poems/detail/42869) – seems it has been a long time coming … and a lot of waiting for that rebirth of wonder … seems we may have to wait another 4 or 8 years or more … but we should do more than wait. Hope is not enough, either. What can one do? One can find another and create a small place in which wonder can be reborn. Or maybe it takes more than two?

Learning Context

What constitutes a learning context and why should we care?

It is a reasonably well-established view that context is in large part what determines the meaning of an utterance. Context – or, more specifically, use, which occurs in a context – in part determines meaning (see https://plato.stanford.edu/entries/pragmatics/ for an elaboration). The point is that use and context are critical factors in determining the meaning of what someone has said. Likewise, use and context are critical factors in determining whether, how, and why learning might or might not be occurring in a particular situation. So, to determine if meaningful learning is or is not occurring, or is likely to occur, one needs to consider the learning context and what the learner might be doing or not doing in that situation.

The simple approach to providing a description of a learning context is to indicate what is around the learner during a learning activity. It might be other learners, a teacher, a book, and a computer as in a typical classroom setting, or it could be music playing in a coffee shop with other people around who may or may not be learners, or it could be many other situations in which learning is intended to occur.

However, simply describing what is around a learner falls short of providing a good or complete understanding of a learning context. What the learner is doing or might do is also relevant. If the learning situation happens to be in a coffee shop with music playing, then a learner is not likely to benefit much from an audio or audio-video file played on that learner’s laptop computer. The coffee shop situation does not provide effective affordance or support for audio-based learning. Likewise, in a classroom setting, if the learner is working with a small group of students who are busy chatting on their smartphones, that context might have too many distractions to support learning to solve a complex problem, unless the chatting happens to be about that problem and is providing some insights about what to do.

The point I am trying to make is quite simple. When describing a learning context, it is reasonable to include all of the things that a learner can touch (or see or hear or smell or taste) as well as all of the things that can touch the learner. In addition, just as the speaker is part of the context when it comes to determining the meaning of what is said, the learner is part of the context when it comes to understanding to what extent (and why) meaningful learning might be occurring.


Of course what is to be learned is part of the learning context as well. What is to be learned includes the general topic, the level of understanding sought, as well as the specific knowledge, skills and attitudes that are part of the learning goal and expected outcomes. 

Tuesday, January 10, 2017

The More Things Are Called Smart …

It used to be the case that there were a few people one might call smart. Now, we call phones smart, whiteboards smart, computer programs smart, learning environments smart, technologies smart, and even automobiles and cities smart. What has become of the meaning of the descriptive adjective ‘smart’?

What I really crave is a smart chocolate ice cream cone. I suppose someone has an answer for that as well … perhaps one that has no sugar, no calories, and no reason to ask for seconds. I am wondering about the meaning of ‘smart’ because I was asked about it in a meeting with faculty and graduate students in the Educational Technology Department in the Institute of Education at Tsinghua University in Beijing.

Perhaps the location of an educational technology program within a university reflects or influences how ‘smart’ is conceptualized. One can find educational technology programs in a college or faculty of education, in a college or faculty of information or information science, and even in an arts and sciences college or faculty. In an education college or faculty, one might imagine the emphasis to be on technologies that can be used to make people smarter. In a college of information or information science, one might imagine the emphasis to be on devices and technologies that do things that people who used to be called smart used to do. It would be an interesting exercise for someone to look at where educational technology programs are located within university structures, the various emphases and perspectives reflected in those programs, and any patterns or correlations that might emerge from those data.

My own views are shaped by my experience – first with philosophy and later with computer science. My philosophy perspective is based in skepticism (e.g., recognizing the limits of knowledge and the ease with which we are inclined to believe we know more than we actually know or believe that the world happens to conform to the limits of our knowledge) while my computer science perspective is based on artificial intelligence and expert systems. I have seen the word ‘smart’ used quite differently in both of those contexts.

I would consider Socrates smart, for example. He would elicit ideas about such things as justice and virtue from the sophists and show that their beliefs led to contradictions or absurd conclusions – such as defining a virtuous ruler as one who manages to squeeze the most work and the highest taxes from citizens, applying the notion of virtue as that which benefits the stronger, with the conclusion that what is good for one is not necessarily good for another. I was fascinated early in life by the logic of such an indirect proof. I had also seen it used in mathematics, as in the proof that pi is an irrational number – assume it is rational, and then step by step one arrives at a contradiction, which means the point of departure was not sound. Pi must be irrational – aren’t we all from time to time?
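
For readers who want to see the shape of such an indirect proof written out, here is the short textbook reductio for the square root of 2 (the known proofs that pi is irrational follow the same assume-and-derive-a-contradiction pattern, but they are much longer):

Suppose, for contradiction, that $\sqrt{2} = p/q$ with $p$ and $q$ integers sharing no common factor. Squaring gives $2q^2 = p^2$, so $p^2$ is even and hence $p$ is even; write $p = 2k$. Then $2q^2 = 4k^2$, so $q^2 = 2k^2$, and hence $q$ is also even. Now both $p$ and $q$ are even, contradicting the assumption that they share no common factor. The point of departure was not sound, so $\sqrt{2}$ is irrational.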

And then to earn a living, I took up computer science and became fascinated with expert systems and the representation of human expertise in software programs. I was attracted to complexity and the notion that somehow software might be able to extend the ability of people to deal with complexity, which could be conceived as a way of making those persons smarter – able to perform more like experts.

One vivid memory I have of my time as a practicing computer scientist involved being responsible for some 400-plus programs written in Assembly Language and Fortran running simultaneously on five different computers: four were 32-bit Perkin-Elmer mini-computers and the fifth was a 30-bit graphics computer. The application was real-time simulation for air combat training, involving a number of fighter jets that would simulate firing air-to-air missiles at enemy aircraft, with probable hits and misses recorded so that the pilots could be debriefed after the exercise. One task I had was to add software tracking of the sun’s position, so that pilots could be shown the location of the sun in their cockpit views when they fired a heat-seeking missile – such a missile fired toward the sun would go after the sun and miss the target. Getting that simple change to work proved a very complex and challenging task. I got stuck trying to make it work, as every time I thought I had it figured out, the software would crash (in simulated runs rather than in actual exercises – we always made sure the software would work before using it in live exercises).

After a week of pondering over the crashes and narrowing the logic failure down to a few lines of code among the thousands of lines involved, my mentor, a senior programmer named Walt Davis, advised me to make a single random change in those few lines and see what happened. He also could not see a problem with the logic, and he agreed the problem had to be among those lines of code. The change I settled on was to reload a memory location with the same doubleword that had just been loaded in the previous line of code – yes, that was in Assembly Language. Unbelievably, the change solved the problem. What I had thought had to be a problem with the logic proved to be a timing problem. By reloading that location with the same information, the other programs running in parallel had time to catch up and do their tasks. Walt was smart. He did not know what was wrong, but he knew I needed to get more information, and making one small change was a way to start getting more information.

Perhaps a smart computer processing system would have suggested a similar change, but I have yet to see such a system perform with the smartness of a Walt Davis. I suppose that might become possible at some point.

Back to my point – what is with all these uses of the word ‘smart’? Is it not hard enough to identify a smart person, much less to identify which devices are smart and which are only being called smart so as to improve sales? Some leaders and leaders-to-be call themselves smart. The sophists with whom Socrates interacted called themselves smart. Self-attribution of smartness seems to be a likely indication of a lack of smartness.

At least devices are not yet able to call themselves smart. Yes, that will probably happen at some point. But then I recall Bob Gagné’s advice to me while we were working together at the Air Force Armstrong Laboratory in San Antonio. He said, “Our job is to help people learn – to help them become smarter … better at solving problems.” Oh yes. We might even be able to use various devices and techniques in doing that all-important job.