When we put things in boxes we have to remember that we're making an approximation. The world isn't black and white; in fact, it isn't even discrete shades of colour - it's a truly analogue world. What do I mean here? As the world becomes increasingly digital it's really important to remember that it isn't really digital at all. There are no simple states; all models are inherently an approximation of reality.

For example, we can measure the temperature of some warm water. We can use our hand and determine that it's 'warm'. We can use a thermometer - let's say the thermometer says the water is about 40 degrees Celsius (or 104 F if you're that way inclined). Want more accuracy? Great, let's use a digital thermometer. It reads 39.7C - now we're getting more accurate, right? Not really. Even if the digital thermometer is very well calibrated, we're still converting temperature from an analogue (or continuous) state to a digital one with discrete values - inherently an approximation. If we looked closely at the analogue thermometer we'd see the line hovering around the 40 mark; with more marks we might pin that number down further. Say we had 0.1 indicators and could see the line sitting around 39.7 - guess what, we're still measuring against a digital scale and converting analogue to digital ourselves using that scale. In actual fact the temperature of the water is what it is, and no matter how many digits we put into a measurement, they don't actually make it more accurate. Our initial assessment of 'warm' is in many ways no worse than the 39.74322 degrees we can get on our uber-expensive digital meter - the reality is that the temperature is what it is, regardless of what scale we put it on. The whole scale, be it Fahrenheit, Celsius or even Kelvin, is only an artificial measure that we've created to make understanding temperature easier.
Come to think of it, all the classification we learned in biology is similar in that it's an approximation - useful to us, but inherently inaccurate. Take the most basic of classifications used for plants: fruits and vegetables. When is a veggie really a fruit? We know that tomatoes are officially a fruit, right? What about cucumbers, peppers, avocados, string beans? All fruits, officially, from a science perspective. And these are examples with pretty black and white definitions, like whether or not they contain the seeds (don't start with strawberries, as they're apparently pretty controversial). So rhubarb isn't a fruit, okay, it's a veggie of sorts - that is, if we could agree on a definition of what a vegetable is. And there's the point. Classification relies upon absolutes. Classification is just black and white with a few more well-defined shades thrown in - a digital approach to an analogue world.
I'm sure some of you are aware of the observer effect in physics. In layman's terms, it's when things change because we measure them. We see this plenty in the learning world: when we start to gather feedback or assess students it changes behaviours and has an effect on both teaching and learning. This seems great for learning - knowing that by measuring what we're doing we're affecting it. Perfect, measure away! But there's more to it. I've also touched on quantum mechanics before, and ideas like Schrödinger's cat and, further, Heisenberg's Uncertainty Principle. In essence these tell us there's only so much we can really 'know'. As we measure and observe one thing, something else changes, and each known unveils another unknown. Combine this with another high-level theory (relativity, of sorts) and we get a system where measuring something changes it, pinning down one measurement means losing sight of another, and the 'fact' we have determined is only relative - true in that instant, and with other unknowns attached! It's probably fair to ask whether we know more at this point than we did before we measured.
Okay, too much physics. Let's look at this in simpler, logical terms. When we say that by the end of this course you will learn XYZ, it means we have to accurately define XYZ and then measure against it, at a point in time, for all students. What about the next week, the next month, year, decade? What are we actually measuring? Our simplistic quantification of learning is only true (even if it were properly and accurately measured) at the instant it was measured. Now add to that the difficulty of measuring XYZ at all. Writing effective assessments is a difficult task for the most experienced of learning designers, and testing that goes beyond recall of 'knowledge' is probably in the minority. So our learning objectives tend to rely heavily on a classification that we've already seen is something of an approximation - a digital version of the analogue world. They tend to be badly designed and badly measured too, so we end up with an approximation, built on dodgy measurements, that's only good for the moment it was measured. How good do those learning objectives really look?
Now let's add some more of those oh-so-dodgy classification types into our learning to make things foggier still. How about learning 'types' - remember when everyone was hot on auditory learners and the like? Back to our earlier point about quantifying and classifying: there's no such thing as an auditory learner. It's a simplistic approximation of the way some people may learn best, and even that's been questioned in recent years. How about those Myers-Briggs personality types? Heck, I'm still unsure at 44 whether I'm an introvert or an extrovert, let alone the deeper analysis - and just because I was an extrovert yesterday doesn't mean I'll be one tomorrow, or that I'll take that role consistently throughout an exercise.
You could also twist the uncertainty principle in here again when we talk digital and analogue: the more highly defined something becomes, the less confidence we can actually have in its accuracy. For learning objectives, does that mean the more SMART we make them, the less smart they really are? Maybe, but what it really means is that the deeper we try to classify something, the less sure we can be that we've put it in exactly the right place.
So that's the conundrum we face: the more we quantify, the less confident we can be about what we've defined. As a friend of mine once said in a drunken moment, 'the more you know, the less you know, aye, know what I mean?'