Tuesday, January 18, 2011

Your cognitive toolkit

Edge asks: what scientific concept would improve everybody’s cognitive toolkit?  And lots of scientists answer.  There are some interesting ones, although I warn you that it’s a lot of reading and hard to stop once you’ve begun.  I don’t have a single answer to the question (see answer 2), but here are some of the contenders.

1. Estimation.  Some of the problems people have in understanding the world (and, I suspect, some of what puts people off science) come from difficulty with quantity, scale and especially probability.  Two major issues arise from this: first, it’s hard to see the solution to a problem if you don’t know what ballpark to aim for; and second, it can be hard to see that there’s even a problem if you don’t understand the scale of what you’re looking at.  People are frequently wrong by orders of magnitude when estimating numbers outside their range of experience, such as how many atoms there are in a pin or how big the universe is.  But they are also frequently wrong about familiar things when it comes to probability, because much of probability is counterintuitive.  The most famous example is the Monty Hall problem, but there are plenty of simpler examples of how our expectations can be fooled.  Consider the probability that two people in a room share a birthday.  It is (obviously) 100% when there are 367 people in the room, but it surprises most people that it’s 99% with 57 people and 50% with just 23, assuming a birth is equally likely on every day of the year.  One reason our intuition is fooled is that we’re not trying to match a particular person’s birthday; we’re looking for a match between the birthdays of any two people in the room, which is far more likely.  A reliable intuitive grasp of quantities, probabilities and scale is an immensely useful tool for a scientist, or for anyone trying to make sense of the world.
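If you want to check those birthday figures for yourself, here’s a minimal sketch (mine, not from the Edge essays) that computes the collision probability directly, assuming 365 equally likely birthdays and ignoring leap years:

```python
def birthday_collision_probability(n, days=365):
    """Probability that at least two of n people share a birthday,
    assuming each of `days` possible birthdays is equally likely."""
    if n > days:
        return 1.0  # pigeonhole: a match is guaranteed
    p_no_match = 1.0
    for i in range(n):
        # the (i+1)-th person must avoid all i birthdays already taken
        p_no_match *= (days - i) / days
    return 1.0 - p_no_match

for n in (10, 23, 57):
    print(f"{n:>2} people: {birthday_collision_probability(n):.1%} chance of a shared birthday")
# 10 people: 11.7% chance of a shared birthday
# 23 people: 50.7% chance of a shared birthday
# 57 people: 99.0% chance of a shared birthday
```

The counterintuitive part is visible in the loop: each new person has to avoid every birthday already taken, so the number of potential pairs (and hence of chances for a match) grows much faster than the number of people.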

2. I think you’ll find it’s a bit more complicated than that.  Science deals in simplifications.  We look at something and slice away a lot of the complexity so we can understand the core of what’s going on.  XKCD puts it this way.  This is called reductionism and it is an excellent tool.  We use it all the time in computer science: we make assumptions about the environment to make systems tractable.  However, it’s all too easy to throw out the baby with the bathwater by assuming that something is less crucial to a system than it actually is.  When we do this, we sometimes find that we’ve been asking the wrong question all along.  It sounds like a cliché, but we frequently make assumptions about people’s motivations and behaviour that lead us to build systems that don’t do what anyone wants.  This is especially true of very large systems, such as social and health systems, and can lead to systemic problems like those behind the tragic cases of Peter Connelly and Victoria Climbié.  One of the hardest things to do in science is to make sure you’re asking the right questions.  Sometimes it’s necessary to focus on the complexities rather than assuming them away.  The guy in the XKCD cartoon is infuriated because the physicist assumes that the complexity can be bolted on afterwards, when sometimes it’s the very heart of the thing.  Unfortunately, it often seems to be relegated to the ‘future work’ section of papers.  The complexity of an environment should shape how we engineer solutions for it, and failing to account for it is a leading cause of systemic failure.  Understanding complexity well enough to make informed decisions about which questions to ask and how to go about answering them is therefore another important skill.  It is, however, one that’s hard to teach and not always easy to learn through experience.
