06 October 2012

Things Short-haired People Don’t Understand

This should really be titled, “Husband – JUST READ THIS AND QUIT NAGGING ME.”  But I figured I’d write for a more general audience, so here you go – things you people who’ve never had long hair need to know about my lifestyle, because sometimes it just doesn’t seem to get through.

1.  I need more shampoo than you.

Your hair is like ONE INCH long.  At most.  In some places it’s shorter than your eyebrow hair.  You don’t shampoo your eyebrows, do you?  Why should you need to shampoo that part of your scalp at all?  But even so, for the sake of this discussion let’s assume your hair is one inch long, and mine is ten inches long.  I will need to use – you guessed it – TEN TIMES AS MUCH shampoo as you to get the same amount of coverage on my hair.  So don’t be telling me I’m using too much shampoo.  I’m using twice as much shampoo as you.  Three times at most.  So really, YOU are the one using too much shampoo. 

2.  I should not shampoo every day.

The thing about short hair is that it has not been on your head long.  ALL of your hair on your entire head has been there for less time than the bit of hair that’s two inches away from my head.  I did extensive, thorough research on this subject (thank you, Answers.com) and have determined that hair grows at a rate of about half an inch per month.  That means ALL YOUR HAIR has been around a maximum of, say, three or four months.  Mine?  These ends have been with me for two YEARS.  So while you can happily destroy your hair by shampooing away replenishing scalp oils every single day because you’re not even going to see that hair half a year from now, I NEED that oil to maintain this mane I hope will still be treating me right two years down the road.  I don’t want some kind of split-end mutiny on my hands.  Do you even know what split ends are?  Can you get a split end in three months?

This applies to hair dye too, by the way.  When I go to dye my hair, I’ve got to worry about how it’s going to affect my look two years down the road.  How old am I going to be?  What will I be doing with my life?  Will this affect any future job I could try to get?  Because some colors are easier to dye over than others.  I’m just lucky I can go back to brown whenever I have to – I salute all you brave blondes out there.

3. Your bad haircut is as nothing compared to my bad haircut.

Again, if you get a bad haircut, the absolute longest you have to worry about it is three months.  And I’m pretty sure in two weeks it’s going to settle in just fine and you can get it fixed, no problem.  You can move on with your life.  Two weeks isn’t even long enough to really notice the roots under my dye job.  When my hairstylist messes up, she chops off three extra INCHES, not millimeters – and we’ve already discussed that it can take months to recover from that kind of error. 

Now, you may be saying, “But you still have a lot of hair to keep cutting and get the shape right.”  NO.  If I wanted to take another three inches off my hair, I would have done it the FIRST time.  Now I have to wait another SIX MONTHS to get it even to the point where it SHOULD HAVE BEEN WHEN I WENT IN.  That’s HALF A FREAKING YEAR.  If I want to cut more off and reshape it, I basically have to resign myself to an entirely different hairstyle and look.  Maybe I don’t even have the right clothes or earrings to pull off that mop.  I could have to invest in a whole new wardrobe.  So don’t tell me your awful haircut is worse than mine.

4.  My hair takes a lot longer than yours to get pretty every day.

If I want to do my hair and make it look actually pretty, it takes me an hour.  I have to do it in layers, one row at a time, getting each section right before I move on to the next part.  This is a complicated work of art I’m sculpting, here.  I’ve watched you short-haired people “doing your hair.”  It takes like ten minutes.  It doesn’t even involve any kind of iron.  So if I say I need to get ready to go out somewhere, you can assume that I need to get my hair done, and that’s going to add an hour to whatever time you were estimating for yourself.  And that’s assuming you’re also doing your makeup like I am.  No?  No makeup?  Add another half hour.  Being beautiful takes WORK, bitch.

5.  A ponytail is a legitimate hairstyle.

I don’t want to take an hour out of every single day to get my hair looking gorgeous.  You’ll be lucky if you get that once a week.  Once a week for me is about equivalent to all the time you’ve racked up over the week doing your hair daily, anyway.  If I don’t do my hair up nice, though, it’s utterly hideous because I also have curly hair (and that is just a whole other rant for later).  It’s not only ugly, it gets in my way.  I can put up with it getting in my way if it’s pretty, but if it’s going to be hideous too then that is just unacceptable.  So if I shove my hair up in a ponytail all day long, DO NOT make fun of me and my childish-looking hairstyle.  It’s convenient and comfortable.  End of discussion.

I hope you’ve learned something.

24 July 2012

Is it a healthy sense of caution if you’re constantly envisioning your own death?

I’m on a plane over New Mexico right now (well, not right now right now, as in when I’m posting this or when you’re reading it.  I mean, maybe I am.  It would just be a highly unlikely coincidence.)

I’ve been saying to a lot of people lately that I’m not afraid of flying, and I see now that I was so, so very wrong about that.  I thought I was telling the truth.  But sitting on this plane right now, I don’t actually think I’ve gone a whole minute without being fully aware of a pervasive sense that I’m stuck in a poorly ventilated tin can death trap.

Let me give you (future me who’s reading this and trying to convince herself she’s really not afraid of flying) a few examples that I’ve come to realize do not connote a healthy level of fear:

1. As I sent that last-minute text to my husband before I had to turn the phone off on the tarmac, I wondered whether he would think to post my message to my friends on FB when I died so they could know the last sweet sentiment I said to anyone I loved.

2. I’ve repeatedly cycled through all my dozens of plane crash stories, trying to figure out which one best applies to my current flying environment and whether I’d die if any one of a wide variety of malfunction or human-error scenarios occurs.

3. When we lifted off I was looking out the window watching the city get smaller and smaller, and with every minuscule lag in acceleration (typical of even a successful takeoff), I was Zen-preparing myself to watch that ground start to tilt and get bigger again.

4. I was pretty convinced that the drawn-out grinding sound I heard on the ascent was an engine failing.

5. I practically ran back from the bathroom because there was a small jolt of turbulence and I needed to get back to the safety of my seatbelt before a panel ripped off the plane and I got sucked out the hole like that one lady did in that one Cracked article I read that one time.

6. When we landed on my first flight we turned into the airport at an angle, and all I could imagine was the plane barrel-rolling out of control and plummeting into the earth.

7. Whenever we went into a cloud I was ready for the moment another unseen plane collided headlong with ours, and I couldn’t decide just how likely I was to even know what hit me in the fractions of a second it’d take for me to get crushed or exploded to death.  (I mean, in a head-on collision our plane and the other plane would each probably be going ~500 mph, for a closing speed of ~1000 mph, or about 450 m/s, and if our plane were in the neighborhood of 100 m long, then at row 25 I’d be dead in about a tenth of a second, and it’s arguable whether all of that sensory information could manifest as a conscious acknowledgement in that time, although I have a sinking feeling I might get to enjoy a few milliseconds of perfect imminent-death awareness.  P.S. that is why you learn algebra, my friends.)
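For anyone who wants to check that parenthetical, the arithmetic sketches out in a few lines of Python.  The cruise speed, fuselage length, and seat position are all rough guesses of mine, not aviation data:

```python
# Back-of-the-envelope check of the head-on collision numbers above.
# Assumptions (mine): each plane cruises ~500 mph, the fuselage is
# ~100 m long, and row 25 sits roughly 30 m behind the nose.
mph_each = 500
closing_mph = 2 * mph_each               # ~1000 mph closing speed
closing_ms = closing_mph * 0.44704       # mph -> m/s, about 447 m/s
row_25_from_nose_m = 30                  # rough guess at seat position
time_to_row_25 = row_25_from_nose_m / closing_ms

print(f"closing speed: ~{closing_ms:.0f} m/s")       # ~447 m/s
print(f"impact reaches row 25 in ~{time_to_row_25 * 1000:.0f} ms")
```

So “about a tenth of a second” is, if anything, generous – under these guesses it’s closer to 70 milliseconds.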

I know that air travel is safe.   I know this.  I know that even if problems occur I’m likely to make it out just peachy.  But none of that matters when you’re dealing with a phobia.  Talking yourself out of a death phobia is pretty useless.

And I still fly.  Regularly, even.  At the beach I still swim out into water that’s probably deep enough to hold great white sharks and I inadvertently do my best injured seal impression trying to stay afloat.  I’m totally willing to drive on the Lake Pontchartrain Causeway even though I’m pretty sure the bridge is going to collapse and I’m going to survive both the impact and the threat of drowning only to be shredded alive by a pack of ravenous alligators.  I sometimes even lean against railings on high balconies, although that just seems foolhardy when I can get the same view just standing near the edge rather than risking death-by-shoddy-railing-craftsmanship.

It’s just I feel nauseous every single time I get on a plane.

02 July 2012

The difference between Necessary and Sufficient, or, Why your emoticons should not have noses

In this crazy new cyber-world we’re living in, the entire rich array of human emotional facial expressions is being reduced to nothing more than a select few humble punctuation marks grouped together to look like caveman scratchings turned on their side.  In social media conversations, these so-called “emoticons” (also called “smilies”, for those of you not hip enough to be up on your “cyber-lingo”) have assumed the vital role normally played by our naturally expressive faces, becoming the sole representation of our emotions toward the people with whom we interact.  This is distressing in and of itself, but it’s not the point of my discussion today.

Correctly typed, the most common standard emoticons consist of virtual “eyes” and a virtual “mouth”, made using punctuation marks.  The simplest of these is the basic colon-plus-end-parenthesis – :) – though many other variations exist:  ;)  :D  :(  :’(

But a deeply bothersome trend has managed to grow and fester deep in the bowels of the emoticon world: the dash-nose.  This hideous abomination has wormed its way into all the great emoticons, a defilement I’ve never abided graciously:  :-)  ;-)  :-D  :-(  :’-(

And today, I finally figured out why that nose bothers me so much.

Humans have a very limited range of physical features they like to monitor during social interactions.  When we see another human face, we attend most to the eyes and mouth because these are the expressive features that move and tell us how we’re supposed to respond to their owner.  But a nose?  No one cares what a nose does.  A nose stays pretty much the same no matter what we’re doing, and outside of augmenting a very select few emotional expressions (e.g. the scrunch of disgust, the flaring nostrils of fuming rage), our noses are practically pointless.

Which brings me to Necessary and Sufficient.  These terms are regularly used in the sciences to describe two unique aspects of how important a certain factor is in creating a given outcome.  A factor that is necessary must be present to produce an outcome, while a factor that’s sufficient is all that’s required to produce that outcome.  So if it’s necessary, you absolutely have to have it, and if it’s sufficient then it’s all you actually need.  (And it is possible for a thing to be both necessary and sufficient – or neither.)

These are both readily testable properties.  To determine if something is necessary for a certain outcome, you just remove it and see if you obliterate the outcome.  To determine if something is sufficient to produce an outcome, you remove everything else and leave only it, and see if the outcome remains the same. 

For example, removing a necessary facial feature will prevent you from recognizing an emotional expression (like a smile), while leaving only a sufficient facial feature present will still allow you to recognize that expression.

Let me demonstrate on myself.  Say hello to me:


I hope you were gracious enough to at least offer a greeting.  I mean look at that big ol’ toothy grin.  That is a smile.  How could you ignore that kind of smile?  And how can you tell it’s a smile?  Well, the corners of the mouth are turned way up, the eyes are happily scrunched, and the nose… yeah, it’s not doing much. 

Now, let’s look at what happens when I take the liberty of altering each of these three facial features (mouth, eyes, and nose) independently. 

Let’s start with Necessary.  Is any of these three features necessary for you to be able to tell that I’m grinning at you?

The truth is, no.  As long as you have any combination of the other two features (eyes and nose, mouth and nose, eyes and mouth), you can tell I’m meant to be smiling at you.  That said, the third smile with both eyes and a mouth present is definitely the most informative of the three faces, in that it looks the most like it’s smiling.  This suggests that the nose is the least necessary component of the smile.

So how about Sufficient?  Would any of these features alone be enough for you to tell I’m still smiling?

Well, how about that?  My mouth and eyes are each sufficient, but my nose does absolutely nothing toward helping you figure out if I’m smiling.  In fact, if that nose picture still looks like I might be smiling at you, it’s only because I didn’t go and doctor the dimple out of that freakishly sculpted right cheek so you’re still getting the impression of a mouth-smile.
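If you like, the remove-it/keep-only-it procedure can be written out as a toy program.  The “recognizer” below is entirely my own stand-in – it just hard-codes this post’s finding that either upturned mouth corners or happily scrunched eyes reads as a smile:

```python
# A toy sketch of the necessary/sufficient tests described above.
def looks_like_smile(features):
    # Stand-in recognizer: a mouth-smile or eye-smile is enough.
    return "mouth" in features or "eyes" in features

ALL_FEATURES = {"mouth", "eyes", "nose"}

def is_necessary(feature):
    # Remove only this feature: does the outcome disappear?
    return not looks_like_smile(ALL_FEATURES - {feature})

def is_sufficient(feature):
    # Keep only this feature: does the outcome survive?
    return looks_like_smile({feature})

for f in sorted(ALL_FEATURES):
    print(f, "necessary:", is_necessary(f), "sufficient:", is_sufficient(f))
# eyes  necessary: False  sufficient: True
# mouth necessary: False  sufficient: True
# nose  necessary: False  sufficient: False
```

No single feature is necessary, the eyes and mouth are each sufficient, and the nose comes up empty on both counts – exactly the result of the photo experiment.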

So what does this tell you about your use of :-) and :-( and ;-) ? 

It says that the only thing the nose-dash is doing is making you take longer to generate your virtual expression, and making others take longer to observe and evaluate it.  The extra dash adds nothing at all of value.  In fact, if you were to do my same necessary/sufficient experiment with an emoticon, you’d find that BOTH the eyes and mouth are necessary to convey information, but the nose is neither necessary nor sufficient for anything – see how it’s the exact same dash for every emoticon you type? 
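In fact, if you wanted to automate the cleanup, a couple of lines of Python would do it.  This pattern is my own quick sketch and only covers the emoticon variants listed in this post:

```python
import re

def denose(text):
    # ":" or ";" eyes, an optional tear apostrophe, the dash-nose,
    # then a mouth ( ")", "(", or "D" ).  Drop the dash, keep the rest.
    return re.sub(r"([:;]'?)-([)(D])", r"\1\2", text)

print(denose("LOL :-D !!!  also :-) ;-) :-( :'-("))
# LOL :D !!!  also :) ;) :( :'(
```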

The emoticon nose is, in short, a waste of a character.  This could have a profound impact on the quality of your tweets, people.  Think about that the next time you write another, “LOL :-D !!!!1!1!”

24 June 2012

I went up the mountain to kill a skunk

Tucson, Arizona is a small city nestled in a gorgeous desert ringed by mountains, remnants of an old volcano, and the city lights are stunning on stormy nights like this one.  So tonight, on my way home from visiting with friends, I decided to embrace my childhood and geological heritage and head up Catalina Highway into the mountains to marvel at the nighttime view from the Babad Do’ag lookout point up at mile marker three.

Everything was going so well at first.  I had all the windows down and the music on but (for once in my life) turned low, and there was a storm brewing to the south and I could just catch flashes of lightning off in the distance beyond the city as I navigated the winding mountain road up toward the lookout point.

I was going to stop at Babad Do’ag, like I said, except that for some reason red and blue lights were flashing as I approached and I saw a couple of cop cars stopped at the lookout, and I decided that for the sake of my own tranquility and enjoyment I would just move on and find a better spot higher up on the mountain to stop and revel in the beauty of the night.

It was dark out, obviously, and there were enough cars on the road that I didn’t have my high beams on.  So when something small and dark entered my field of vision, I barely had enough time to slam on my brakes.  Seriously, this was the hardest I’d ever put my foot down on a pedal in my life.  The smell of burning rubber wafted up into my car and my purse flew onto the floor at my side and the distant car behind me got far closer in the rearview than I would have liked, but I narrowly – narrowly – avoided hitting the skunk that then meandered out from under my bumper and happily went on its merry way into the night. 

I was a little shaken, but after I was sure the critter was well off to the side of the road I continued up the mountain.  I made it a couple more turns before I decided enough was enough and I didn’t want to risk any more heart-attack situations in the pursuit of a nice view I’d seen plenty enough already.  So at the next pull-out I turned around and headed back down the mountain.

At this point I was going five under.  I took extra precautions as I neared the area where I’d just seen the skunk, hoping to see it earlier than I did last time even though I was pretty sure it would avoid the road completely after it almost died.

But here’s the thing about skunks – from the side they’re pitch black.  And while I was driving away that bastard had turned right around just like I did, and it put itself square in the middle of my lane again on my way back down the mountain.

If I thought I slammed on my brakes hard the first time, I was mistaken.  That second time I hammered that pedal to the floor so hard I was pretty sure the car was going to snap. 

But this time I was heading downhill.  The skunk went under my bumper and I felt a little jitter even before the car stopped, and by then I just had to keep moving because it was already past the wheels.

I wasn’t totally sure if I really hit it.  The skunk was in the exact center of the lane so maybe the car just went over it, maybe the jitter I felt wasn’t real, I didn’t know.  I turned the car around again and headed back up the mountain to check because if the skunk was injured I was damn well going to take it to a vet.

But alas, when I got there the poor skunk was lying slumped in the road, and as I slowed to examine it there was no movement, nothing.  Two other cars had come down the mountain while I turned to head back for my skunk, so it might have been one of them that did it, but I’m pretty damn sure I was the one that really killed it.

I didn’t end up stopping at any lookout points.  The cops were still at my favorite spot as I passed them for a fourth time on the way back down the mountain, and I thought to stop and tell them about my poor little skunk, but they looked quite busy with whatever delinquent they’d cornered up there in the parking lot so I let it go.

So really, in the end, I went up the mountain tonight to kill a skunk.  That was pretty much the sole existential purpose of my well-intentioned detour this evening.  I think I’m going to probably go cry a little and sleep it off and try to convince myself it wasn’t my fault.

06 May 2012

URGENT! How to help a stroke victim

Watch me use this blog for some good!

This is easily the most important thing you will learn today – unless you learn how to give CPR or how to solve world hunger or something.  I’m going to tell you how to detect a stroke and how to cure a certain type of stroke.  You can easily save someone’s life with this simple information.

Strokes are caused by a loss of oxygenated blood to parts of the brain, and they can kill you or cause serious lifelong debilitation.  They can be caused by head injuries and the like, or spring up out of nowhere.  Even young people in their twenties can have sudden strokes, so don’t just think it’s an old-people thing.

So first, how can you be sure someone’s had a stroke?  Well, a favorite mnemonic is the first three letters of STROKE:

Smile.  Have the person try to smile at you – check to make sure their face is symmetric and that the smile is natural. 
Talk.  Have the person say a full sentence or two – make sure they are coherent. 
Raise both arms.  Have the person lift both arms above their head.

If a person has problems with ANY ONE of these, get them to a hospital like right freaking now.  If they pass the test, keep checking over the next few hours to make sure things haven’t changed.  Strokes are messy and things will often change.  The moment they do, get that person to a hospital like right freaking now.  (Your time window is a few hours wide at best.)


Before I explain the most critical reason why you take them to that hospital (like right freaking now), you have to know there are two major types of stroke: hemorrhagic and ischemic.  Hemorrhagic strokes involve a hemorrhage – a burst blood vessel or something that causes a bleed in the brain (nasty falls often cause hemorrhagic strokes).  Ischemic strokes are caused by a blockage preventing the movement of blood – for example a blood clot that blocks an artery.  (And don’t worry – you shouldn’t have to know these words when you walk into the hospital, they should know it for you.)

If you get your stroke victim to the hospital and tell the staff you THINK they have had a STROKE (note the emphasis on both of those critical words), the very first thing the hospital ought to be doing is getting an MRI or a CT scan of that person’s brain.  If the hospital doesn’t order a brain image for your stroke victim, INSIST that they do it IMMEDIATELY.

Here’s why: You can tell the difference between a hemorrhagic and an ischemic stroke using these imaging methods.  Ischemic strokes (caused by clots) can be treated with blood thinners, while giving blood thinners to someone with a hemorrhagic stroke (caused by a bleed) can kill them.  Hemorrhagic strokes can be dealt with surgically instead.

So for example, if the stroke is ischemic and you’ve caught it within 2-3 hours, the hospital can administer blood-thinning drugs...

...And that person can walk away from their stroke with no problems whatsoever.  Zero.

That, my friends, is a miracle.  TELL THIS TO EVERYONE YOU KNOW AND SAVE LIVES.

30 April 2012

Another Five Abominable Brain Myths

The day after I posted my first list of five brain misunderstandings that make me cringe, I remembered another cringe-worthy myth and have therefore been forced to compile an additional list.  For the record, #1 on this list really should have been #2 on the Top Five…

#5. Drugs will put holes in your brain.

Sure, hardcore drugs can kill you, but they don’t do it by putting holes in your brain.  I’m pretty sure D.A.R.E. is primarily responsible for spreading this myth, at least according to a few of my fellow nineties kids. 

First, let’s tackle the bit about what a “hole” is.  Generally speaking, the fastest (and only) way to get an actual hole into your brain is with a bullet or a tamping iron or something (see, for instance, Phineas Gage).  Otherwise, what one might call a “hole” is most often really a region of damaged brain which, while damaged and nonfunctional, is still packed with all sorts of fluids and tissue and whatnot. 

Your brain is a very densely-packed organ full of cells, bathed in cerebrospinal fluid and encased in a skull.  If you endure damage to the brain that doesn’t open up the skull in the process, then the damaged brain regions are still going to be filled with all that kind of stuff.  If you happen to see an MRI of such a lesion (say, from a stroke or a tumor or something) it might look like a dark space in an otherwise bright brain, but all that means is that the water moving around in that region is not neatly organized like it is in the rest of the brain.

So keeping that caveat in mind, not too many drugs even damage your brain in a way that would create proper lesions like the kind described above.  In fact, I couldn’t come up with one.  Let me give you a list of popular drugs that certainly don’t put holes in your brain even when abused: heroin, cocaine, meth, pot, alcohol, ecstasy, LSD, prescription pills, mescaline, bath salts, roofies… I think I’m starting to stretch it here.  Some of these will mess you up in all sorts of ways, but unless they give you a stroke they’re not causing a brain lesion that would look like a hole to anyone.

Still, kids, just say no to drugs.

#4. Your brain is some other-entity that is separate from you.

I’m going to admit I managed to confuse myself no end trying to figure out what I want to say here, so bear with me.

I’m a hundred percent guilty of this idea of an other-entity brain.  I do it all the time in these very posts, suggesting that your brain is something different from you that does its own thing and occasionally gets up to mischief.  It’s called anthropomorphizing, assigning human characteristics to things that aren’t human. 

Anthropomorphism is an easy white lie that allows us to speak simply about complicated processes.  Your DNA wants to replicate itself, your brain decides things, and so forth.  Assigning wants and needs and decisions to probabilistic biological processes is a completely inaccurate representation of what’s really happening.  And I don’t have a problem with it, so long as it’s recognized as a rhetorical device that simplifies a conversation.  But it’s a problem if you see it as the whole picture.

Anthropomorphizing the brain is especially easy, because while a brain is simultaneously just an organ, a big glob of mush inside a person’s head, it is also the very essence of what makes that person that person.  I want to avoid getting into arguments about a soul and whatnot – that’s not what I mean.  I mean that it is easy to think of your brain as “you”, and also easy to see it as only a “thing”, and that makes talking about it difficult.  And complicating this matter further is that pesky word “mind”, which is somehow different still from a “brain” and falls somewhere else on this “you” versus “thing” spectrum.

The concept of “self” would take me all year and a few hundred pages to tap into, so all I’m going to say is that trying to determine who “you” are is a real bitch no matter how you cut it, and your “self” is an ever-changing, many-headed beast that is exceedingly difficult – if not impossible – to define.  And the brain is a vital facet of it.  A brain is nothing more than a bundle of neurons and synapses and electrical impulses, and it is also the material substrate of the emergent Self.  Just like electrons with their particle and wave properties, both the mundane biological Brain and the lofty cognitive Mind have to be thought of as two aspects of a singular whole, not discrete entities.  It’s an ugly and difficult undertaking.

As humans we need to have agents separate from ourselves to explain certain of our actions, like addictions (“I try so hard to stop but my brain just won’t let me”).  Your brain has to be that cognizant little scapegoat and it has to be something separate from you.  And it really feels that way, too.  You feel like there’s some ugly little demon sitting inside your head telling you to do bad things.  It’s that fabled devil on your shoulder exactly.  You can have that cognitive dissonance and I wouldn’t dream of taking it from you.

Here’s the thing.  Your brain isn’t separate from your mind or yourself – it’s all one big package.  At the same time, your brain is not a conscious entity.  Believing that your brain wants or needs or decides, that’s incorrect.  Your brain is a well-organized chemical soup that operates according to certain biological principles, and from that soup your glorious conscious Self emerges.  So you can trust that when I say your brain wants something, I’m only doing it to simplify a point.

#3. The brain is a muscle.

I don’t know whether people saying this mean “the brain is literally a muscle” or “the brain is like a muscle in that the harder you work it, the bigger/better it gets,” but I’m going to shoot down the entirety of the former and a major assumption of the latter.

First off – and I think this one is obvious – the brain is not literally an actual muscle.  Not in any way.  They’re made of wildly different tissue types and everything.

The second statement, that the brain is like a muscle, contains a critical caveat.  In some ways, the brain is like a muscle, in that you have to use it to keep it strong.  But no matter how hard people try to sell you their guaranteed fitness regimen to get you ready for the Brain Olympics or whatever, the unfortunate fact is that the vast majority of it is total bullshit designed to take your money.

I want to be careful what I say because it’s very important not to throw the baby out with the bath water, and I don’t want you to give up on those crosswords just yet.

Here are some things we know about the benefits of mental exercise.  People with higher education tend to fare better cognitively as they age.  So do people with mentally-challenging occupations.  This could be because those things help people get stronger brains, or it could be because people with strong brains tend to get higher education and mentally-challenging occupations.  Also, training people how to do various mental tricks can have long-lasting beneficial results (that’s the whole definition of learning, right?).

Here are some things we know about the limitations of mental exercise.  Generally speaking, a lot of the things we do to hone our mental abilities – crosswords, puzzles, list-learning, et cetera – just make us very good at tasks like crosswords, puzzles, list-learning, et cetera.  In other words, many mental exercises don’t generalize very well across entire cognitive domains like memory or processing speed. The idea of performing a set of discrete tasks to give yourself a better memory in particular is preposterous, so don’t believe any $50 computer program that promises to train you into having a better memory.  It simply doesn’t work that way.  But that’s not to say exercising your brain is hopeless.

If you want to make your brain into a lean, mean, information-crunching machine, then you need to engage it in a variety of novel tasks.  Go ahead and do that crossword – but also learn a new musical instrument and join a chess club and take up painting and go ride a bike.  The most important thing you should do to exercise your brain is to regularly teach yourself NEW things.  Don’t just get good at the same old things.  Get outside your comfort zone.  I mean that.  Get outside your comfort zone.  NEW and DIFFICULT things help your brain improve.

And keep in mind – none of this is going to make it any easier to remember that new acquaintance’s name at a cocktail party.  If you want to do that, go look up mnemonic tricks for how to remember people’s names and be done with it.

#2. Internet IQ tests give an accurate representation of one’s true IQ.

I’m going to show you a bell curve:

The middle line represents the average (µ) – in this case, the average IQ score.  Each line on either side of that represents one standard deviation (σ) from the average, and each deviation is a fixed number of points wide.  The colors represent what percent of the population falls between each pair of those lines. 

The standard Intelligence Quotient test (IQ test) is designed to fit on such a bell curve.  Researchers have tested thousands of people and then normalized the scores so that the average score (µ) is 100, and each standard deviation (σ) is 15 points away from that.  This means that 68.2% of the population has an IQ from 85–115, 95.4% of the population has an IQ from 70–130, and 99.7% of the population has an IQ from 55–145. 

To rework that just a little, about 99.87% of the population has an IQ less than 145.  Keep that fact in mind.
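For the record, those band percentages fall straight out of the normal distribution.  Here’s a quick sanity check using nothing but Python’s standard library, with the post’s mean of 100 and standard deviation of 15:

```python
from math import erf, sqrt

def normal_cdf(x, mu=100.0, sigma=15.0):
    # P(IQ < x) for a normal distribution with mean 100, SD 15
    return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))

# The three standard-deviation bands from the bell curve
for lo, hi in [(85, 115), (70, 130), (55, 145)]:
    pct = (normal_cdf(hi) - normal_cdf(lo)) * 100
    print(f"IQ {lo}-{hi}: {pct:.1f}% of the population")

print(f"IQ below 145: {normal_cdf(145) * 100:.2f}%")
```

The exact figures are 68.27%, 95.45%, and 99.73% for the three bands, and 99.87% of the population falls below 145.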

I’m going to use myself as an example here.  I have an above-average IQ.  I don’t know exactly what it is.  The last time my IQ was tested I was five years old, and IQ tests of five-year-olds are notoriously difficult because they tend to vary a lot depending on the environment and the kid’s energy level, etc.  Before I could get my IQ tested as an adult, I learned how to administer an IQ test and now I’m ruined for IQ tests forever because they depend on me not knowing all the answers and tricks.  Nevertheless, based on other standardized tests I’m confident that I have a moderately above-average IQ.

Every single time I’ve taken an IQ test online (even long before I learned how to give the test), I’ve gotten a score anywhere from 140-170. 

That’s impossible.  For me, I mean.  That score would mean I’m smarter than 99.6% – 99.9998% of the American population.  Guys… I’m pretty smart, but I’m not that freaking smart. 

Put another way, an IQ over 155 (the middle of my internet score range) happens for about 1 out of every 8,000 people.  There are fewer than forty thousand people with an IQ over 155 in the entire United States.  Do I really think I’m among the top 40,000 in the entire country?  Definitely not.  (And by the way, if I really had an IQ of 170 I’d be in the top 500, which is just laughable – hell, the test frankly starts to break down as a good measure once the numbers get that high.)
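If you want to check my arithmetic on those tail-end figures, the same distribution does the job.  A quick sketch (the 310 million US population here is my own rough round number for the era, not anything from a census table):

```python
from statistics import NormalDist

iq = NormalDist(mu=100, sigma=15)
us_population = 310_000_000  # rough US population (assumption)

for score in (155, 170):
    tail = 1 - iq.cdf(score)  # fraction of people scoring above this
    print(f"IQ > {score}: about 1 in {round(1 / tail):,}; "
          f"roughly {round(tail * us_population):,} people in the US")
```

That works out to roughly 1 in 8,000 people above 155 (just under 40,000 Americans), and only a few hundred Americans above 170 – consistent with the numbers above.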

I’ve tested some people with IQs this high, and they are incredibly brilliant.  Incredibly brilliant.  I’d kill to be that smart.  I’ve also tested people with IQs around 80, and they’re also pretty darn smart.  I guess what I’m saying is that if you’ve got an IQ above 100, you should be very proud of yourself.  You’re smarter than half your country.  If you have an IQ above 115, congratulations!  You’re in the top 16% and that is very impressive indeed.

But don’t ever trust a free internet IQ test.  These tests aren’t structured like real IQ tests, they don’t probe even a fraction of the cognitive abilities a real IQ test does, they haven’t been correctly tested against thousands of people to get proper averages and standard deviations, and many are designed to inflate your ego because they want you to come back and click some more.  (I don’t want to say they’re always higher than your real IQ, by the way – they’re just not trustworthy and accurate.)

It’s also worth noting that even a standard IQ test probes a variety of cognitive abilities which in many ways have nothing to do with how well you function in society or as a human being.  IQ tests don’t tell you how engaging or charismatic you are, they can’t say whether you can run a company, they don’t test your ingenuity or your perseverance or your ability to empathize with your fellow man.  Those traits are all at least as important as your “intelligence”.  Some of the most amazing people I've ever met had an IQ less than 70.  So who really cares how many points you can rack up with the click of a mouse?

#1. Some people are “left-brained” and some people are “right-brained”.


Oh sure, I know what people mean when they say that.  They mean that some people (“right-brained” people) have brains that make them very creative and intuitive and free-thinking and whatnot – while other people (“left-brained” people) have brains that are more analytical and logical and objective.  Generally the people saying this are the ones who happily label themselves “right-brained” and despise “left-brained” people for being stiffs.

Before I explain the problem I’m going to repeat myself, because you can’t lose sight of this: THERE IS NO SUCH THING AS LEFT-BRAINED AND RIGHT-BRAINED.

This ill-conceived notion came about in the wake of observations that certain brain functions tend to be lateralized to either the left or the right brain hemisphere.  For instance, your right brain hemisphere receives sensory input from and delivers motor commands to the left side of your body, while your left hemisphere controls the right side of your body.

Also, in most right-handed people, language is supported predominantly by left brain regions.  For lefties like me, on the other hand (pun not intended), language is more often spread across both brain hemispheres or dominant on the right – which makes sense, since we write out our language using our left hands and the left hand is controlled by the right hemisphere.

That said, the vast majority of brain functions don’t fall cleanly into one brain hemisphere or the other.  In fact, even for the ones in which one side of the brain does most of the work under normal operating conditions, if that side gets damaged the other side can usually pick up the slack.  (Practically the only cases in which this doesn’t occur very smoothly are in the above-mentioned sensation, motor control, and language production functions).  It is likely that each of the two hemispheres is processing somewhat unique aspects of the same information, because there are two of them and what’s the point of doing the exact same thing twice when you could be getting more out of what you’ve got?  (Ah, anthropomorphizing is so easy!)

The “X-brained” problem really started when poorly-informed people began making up a whole bunch of junk about all sorts of brain functions supposedly being fully lateralized when really they aren’t.   Then another layer was added when people started pigeonholing these constellations of “lateralized” functions into personality types, even though those “types” are inconsistently described and don’t match known lateralization patterns and have never been substantiated by any actual science (quite the opposite, in fact).  Finally and without any supporting evidence, people started arguing that they were “right-brained” or “left-brained” because they never understood math or because they were artistic geniuses or because they were ridiculously super-geeky, and all because we needed yet another label to attach to ourselves.

Whether we realize it or not, we seem to like calling ourselves names.  “Right–” and “left-brained” are particularly ugly to me, because self-labeling in this way is a blatant unwarranted dismissal of half of one’s potential merits.  It’s like saying “I’m just not good at math.”  I hate that phrase.  I’m not saying it’s not sometimes true.  I’m saying that by stating it, you are giving yourself a bye on having to apply any mathematical effort.  If you call yourself “left-brained”, you’re giving yourself an undeserved escape route off the creative path.  Math is not easy.  Creativity is not easy.  Saying you’ve got a certain type of brain or that you’re just not good at something is fabricating an untrue “unconquerable” biological obstacle that you don’t have a right to give yourself.

So what if you’re not good at math?  All that means is you’ve got to work harder, not that you get to quit.  If you’re not good at sports, or you’ve never been artistic, or you just can’t dance, work harder and actually test the bounds of what you can and can’t do.  There’s a difference between recognizing your limitations and giving up before you’ve even started, and “I’m not very good at X” and “I’ve always been X-brained” are both just ways of artificially limiting yourself.

In short, both “right-brained” and “left-brained” are nothing but meaningless self-deprecating insults, and now you should know better.

21 April 2012

The Top Five Brain Misunderstandings That Will Drive a Neuroscientist to the Brink

I was reminded by a comment on my last post that there are a lot of common misunderstandings out there about the brain.  So here they are, the top five TOTALLY RIDICULOUS things said about the brain that make me want to fly into a homicidal rage when I see them being propagated in popular media:

#5. To grow old is to grow senile.

This one is kind of personal because my research is on human aging.  My grand design when I was fifteen was to cure Alzheimer’s disease, so I’ve been cogitating on it for a while.  The truth is that there is a normal, healthy aging process, and there are a host of separate pathological aging processes that unfortunately tend to get lumped in with healthy aging as What Happens To You When You Get Old. 

Diseases like Alzheimer’s and Parkinson’s aren’t normal.  Dementia is not – I repeat NOT – something that happens to everyone.

This isn’t to say that healthy aging doesn’t involve some declines in certain cognitive functions.  First and foremost, the speed of information processing decreases (meaning older adults are simply slower to do things than they used to be, and we young things must cultivate some patience).  Older adults have also been reported to show declines in focused attention, meaning they can be more easily distracted.  Finally, older adults tend to show moderate declines in some aspects of memory – for example in remembering certain names, or where the keys have run off to, or coming up with the right word in a sentence.  Healthy aging is NOT, however, associated with the profound impairments of memory seen in diseases like Alzheimer’s, which at its worst robs people of the ability even to remember who they are.  See the difference?

Older adults also show improvements with age in other cognitive abilities.  Vocabulary and other crystallized knowledge (i.e. knowledge of facts) keep increasing throughout the lifespan.  Empathy and the ability to reason emotionally and socially also come far more easily to older adults than to young adults.

We are a society that fears aging and death, so much so that even words like “old” and “elderly” have acquired a negative connotation.  And misunderstandings like #5 here serve only to help propagate this fearful, antagonistic sentiment toward older people.  News flash – unless you plan to jump off a bridge at age forty, you’re probably going to reach old age someday.  It’s a stage of your life, just like childhood or adolescence or middle age.  And, according to a lot of the people I’ve interviewed at my job, it can be totally awesome if you let it.

#4. You are born with all the brain cells you will ever have.

First off, I just want to tackle this notion of being born with all of anything.  In and of itself, that idea is kind of silly, because I think we all recognize that the version of you that existed a day before the miraculous moment you were born is just about the same as the version of you that existed the day after – minus that whole breathing and eating through your mouth instead of your bellybutton thing.  It’s not like your fetal body is busily building and building right up until you pop out of your mother’s vagina and then all of a sudden you’re in decay mode, you know?

So with that aside, I’m here to tell you that not only are you not done making neurons when you’re born – you’re not done when you hit adolescence, or adulthood, or even that dreaded old age.  You are probably done when you’re dead (but then again, that’s leaving aside the whole philosophical argument about exactly when that other mysterious life-capping moment actually happens and whether your cells still do their thing for a few hours after you’ve reportedly kicked the bucket).

The word we’re looking to investigate here is neurogenesis, or the creation of new neurons.  For a long time we couldn’t find any brain regions that continued to make new neurons on into adulthood, but this was mostly a problem of detection – we didn’t have the right tools to find neurogenesis, so of course we assumed it never happened.  Now that we can properly search for it, neurogenesis is cropping up everywhere – in our friend the hippocampus, the cerebellum, any number of cortical regions – and it appears to be a very important part of continued brain maintenance and function.  (Actually, there’s a really cool story here about hippocampal neurogenesis that was still sort of hand-wave-y last time I checked, but even my two fellow brain researchers fell asleep during the talk we went to about it, so I’ve elected not to share it with you fine people.  Count your blessings.)

Here are some fun facts about your brain growth after birth.  Your head is disproportionately big when you’re born, but it’s not as big as it’s going to be when you grow up (otherwise your mum would be complaining a hell of a lot more than she already is).  That increase in brain size from birth to adulthood comes with a slight increase in the number of neurons you have, but more importantly a large increase in the number of connections they make.  In childhood your neurons shoot out all sorts of projections all over the place, and then during adolescence your brain goes through a massive pruning binge to take out the connections that aren’t doing you any good.  Throughout your teenage years your neurons continue to gain efficiency as they get wrapped in layers of cell membrane called myelin, which lets them propagate their signals quickly and reliably.  All told, your brain isn’t even fully developed until your early twenties.  You heard me right – your early twenties.  And boys, your brains take a couple years longer to get there than girls’ do.  Explains a lot, doesn’t it?

You can tell this is an old chart because the “cell birth” line (among others) doesn't keep on going out through adulthood.  Also, see how we’ve known for a long time that myelination and synaptic elimination and pruning continue into adulthood.

The upshot is that at this very moment, your brain is still making brand-new neurons.  It’s a very modest amount compared with the total number of neurons in your whole brain, true, but new neurons are still being born.  It’s happening right now.  And it’s going to keep on doing that for a very long time, hopefully.

#3. Smoking copious amounts of pot has no lasting detrimental effect on your brain.

Seriously?  Are you high?  First off, if you’ve ever met a chronic pot smoker, you already know this isn’t true.  You know it.  I shouldn’t even have to back this up with studies, but I’m going to anyway so we never have to have this argument again.

Here’s a very select smattering of the results from papers published since 2010 alone: (A) Chronic pot users perform more poorly on measures of executive function (that means planning, reasoning and decision-making) than non-users, and this effect is worse for those who started before age 16; (B) rats given cannabinoids in adolescence showed reversible impairments on many cognitive tasks but irreversible deficits in short-term memory measures; (C) chronic pot use has been associated with reduced hippocampal volume (although, to be fair, having skimmed that paper I have some concerns about the methods – for a better review of the effects of pot on brain metabolism and structure I encourage you to look at this paper); (D) and luckily, treatment for pot addiction with gabapentin significantly reduced pot use and increased performance on cognitive tests.  Hell, if you want a recent review of cognitive deficits associated with pot use, just read this paper.

People make similar arguments about alcohol, and I’m here to say that large enough quantities of alcohol will totally mess with your head over time.  (This one I have some personal experience in!  I swear I used to be quicker than I am now, and I’m blaming vodka-tonics.)  I won’t take you through a lit review here; I’ll just leave it at Korsakoff’s syndrome.  If you think that’s unfair, fine, I’ll tell you all about run-of-the-mill alcoholics’ impairments in perceptual-motor skills, visual-spatial function, learning and memory abilities, abstract reasoning and problem solving.  Don’t make me go there.

Please don’t try and justify your habits by saying they don’t affect you.  It’s a no-brainer (pardon the expression) that excessive amounts of intentionally mind-altering substances will, over time, affect the brain and its function.  I’m sorry, but that’s just the way it is.  Just embrace the fact that you’re willingly damaging yourself and move on.

#2. Pretty much anything movies ever say about brain disorders and treatments.

As a writer, I struggle with convincingly portraying anything about which I know absolutely nothing.  I understand the plight.  But let me give you three of my favorite examples that will let you know just how disastrously misguided most movies and shows (and news outlets) are about the brain:

(A) One episode of Boston Legal starts with William Shatner’s character animatedly yammering away in an MRI machine spouting off baseball statistics for a pair of scientists who are quizzing him while watching “brain activation” blobs flit across a still image of his brain.  The scene cuts to a doctor’s office where the neurologist (?) informs him he has Alzheimer’s disease.

PROBLEMS (well, some of them): You cannot move more than a few millimeters in an MRI scanner or the image will be totally messed up; No one watches brain activation patterns in real time because (a) it’s not feasible and (b) some guy eyeballing a bunch of blobs in real time isn’t remotely as valid as running actual statistics on the data; THIS IS NOT THE WAY YOU DIAGNOSE ALZHEIMER’S and it’s a disgrace to imply that it is; Using MRI to detect Alzheimer’s is only now becoming a realistic possibility, it hasn’t reached clinicians yet, and even then the focus is on brain structure, not function; Semantic information (like baseball statistics) is one of the few things that’s preserved in early-to-moderate stages of Alzheimer’s; And you would never flatly tell someone, “the tests show you have Alzheimer’s disease,” because Alzheimer’s can’t actually be confirmed until after death (while you’re still alive you really have probable Alzheimer’s), and a good doctor would break that news more gently, with a caregiver present to help the patient absorb it.

(B) In my favorite example from House, House has been in a bad situation he can’t remember because he was blitzed out on substances at the time.  He gets his buddy to perform deep brain stimulation to jog his memory.  The first jolt allows him to see a fuzzy, silent, black-and-white still image or two, so he says to crank up the juice – and then suddenly he can see the full memory playing like it’s a proper modern movie.

PROBLEMS: I don’t even know where to start.  House always pulls out a single term like deep brain stimulation (DBS) and makes a total mockery of it, because apparently not a single writer on that show has ever taken a single class in medical school.  So to summarize a bare minimum of points – first off, your brain simply does not behave in this way when you electrify it.  When electrically stimulated, you will probably perceive some anomalous sensations, but you will not replay the one memory you’re looking for, and it will not be a 1920’s silent film at low voltage and a modern movie at high voltage.  Also, memories do not ever play back perfectly like that for you anyway, and you all know that.  Finally, DBS would never be used for such a purpose – mostly it’s an extreme therapeutic option for people with Parkinson’s disease, severe depression, and certain other disorders, and no one has a good handle on why it works.

(C) Any movie in which “getting amnesia” means “losing all knowledge of yourself and sense of who you are.”

PROBLEMS: This phenomenon of losing yourself is real, and is called a dissociative fugue.  It is psychological in nature and usually occurs in response to a deeply traumatic situation (not a blow to the head).  It’s also transient, lasting a few days or weeks at most.  Amnesia is a totally different thing, best characterized by the movie Memento.  It can be caused by head injury, surgical resection of temporal lobe areas, oxygen starvation, and a number of other insults.  When a neuroscientist (or even Wikipedia) talks about amnesia, by default they mean anterograde (forward-looking) amnesia, in which patients cannot form new memories (remember HM…?).  These patients often, but not always, also show retrograde (backward-looking) amnesia, meaning that they cannot remember past events.  Amnesia is not transient.  These patients will live their whole lives in never-ending cycles of few-minute increments.  Also, they will never forget facts about themselves like their name and birth date and where they grew up, because this is a different type of memory unaffected by amnesia.  This whole movie problem is really one of semantics – how freaking hard would it have been for that first major movie exec to do a little research and say, “Hey, this isn’t amnesia, it’s a fugue – let’s say this guy has a fugue instead”?  They could have avoided decades of ridiculous misunderstanding! 

And let me just make this clear: You will never – under any circumstances – get hit on the head, lose all memory of yourself and your past, and then regain it miraculously a few weeks later.  Never.

(Update: Dammit, okay, I thought of a circumstance.  If the blow to the head is part of a deeply traumatic situation, you could enter a fugue state - but that's not due to the head bump! My point stands.)

#1. You only use 10% of your brain – imagine how much smarter you would be if you used 100%!

You know what happens when you use 100% of your brain?  EPILEPSY.  Freaking epilepsy.  That’s the definition of epilepsy.  If you want to have a grand mal seizure, go right ahead and “use 100% of your brain” all at once.

This “10%” notion gained credence because when we first started imaging human brains (using methods like functional MRI), researchers found that the little colored blobs lighting up during performance of any given task covered only a small portion of the total brain.  Why?  Because different parts of our brains do different things.  During verbal tasks, language areas light up; during reasoning tasks, reasoning areas light up… you get my point.

Suggesting that we should find a way to use 100% of our brains is like suggesting we should use 100% of our cars when we drive them.  It would obviously be totally efficient if your car drove forward and backward at the same time in all gears while also honking the horn and wiping the windshield and flashing the lights and blaring every possible radio station and opening and shutting the doors.  That’s clearly the best way to get to your local grocery store.  

Our brains are, in some ways, no more than a conglomeration of specialized little parts which each do their own mental tasks.  At any one time, you “use” about 10% of your brain to—

No, you know what?  I can’t even say that oversimplification in good taste.  This whole “10%” thing is 100% bullshit.  What does that even mean, “using 10% of your brain”?  I dare anyone to quantify with our current methods (A) how much of our (B) brains we (C) use at any one moment.  All three of those points need a better definition before you even start down that road, and anyone who takes the first step on it is asking exactly the wrong question based on a host of incorrect assumptions – for one, that the brain is a singular computational entity operating at X capacity like a damn CPU.  The idea is utter crap on every possible level.

Actually, that CPU bit brings me to an important historical point – brains have always been compared, by default, to whatever technology is hottest at the moment.  In this day and age, we equate them part and parcel with computers.

It’s such a handy metaphor, right?  Like they were made for each other.  You know what people said your brain was like in the old days?  A switchboard.

Extra points to the kids who know what a switchboard is.

We don’t have a good sense for what the brain is really doing, so we associate it with metaphors and then generate silly notions based on what we know about those metaphorical devices.  The point is this: Our brains are well optimized just the way they are, using different bits for different jobs.  Just know that you do NOT want all your neurons to fire all at once.  You will immediately die.

15 April 2012

Déjà vu

Déjà vu is one of the coolest phenomena ever.  I know I already said that about change blindness.  I also said I’d change my mind.

I love déjà vu in the same way that I love sneezes and yawning and blind spots and dreams and migraines.  These things all make me very happy.  They’re small reminders that my brain is still there, it’s organic, it does things I can’t predict.  They point out that we are laughably unaware of the mucky mushy underpinnings of our lofty cognitive musings.  Déjà vu makes us remember we’re only human.

Well, maybe that’s not true for everyone.  Déjà vu means different things to different people.  What it certainly is not is a literal re-experiencing of a moment that happened in that exact same way at some previously unspecified time.  This is an incorrect interpretation of the phrase, because even though the direct translation of “déjà vu” is “already seen,” the definition of the word includes the notion that one is re-seeing something one knows one couldn’t possibly have seen before. 

Now, some say that déjà vu is some special form of extra-sensory perception, or it’s a signal the Matrix has been altered, or it tells us things about our past lives, or it’s some sort of breakdown between all the versions of our lives we’re simultaneously living.  Here’s the thing.  Déjà vu is already a beautiful miracle without making it anything paranormal or supersensory.  It’s a truly incredible process and a delightful experience.  This may sound weird coming from an urban fantasy writer, but I just don’t like the taste of forcing supernatural elements where they don’t belong.

Scientific theories posit numerous explanations for déjà vu, most having to do with the medial temporal lobe (remember the hippocampus…?).  Keep in mind that researchers very rarely have the opportunity to study déjà vu, given how transient and unpredictable it is – in fact, apparently only about 60-70% of people report having ever experienced the phenomenon at all (yet another variable thing I thought was common to everyone!).  So even some of the so-called “scientific research” discussed here tends to wax philosophical.

According to these researchers, déjà vu may occur because:
(1) Some aspect of the current experience excites the brain pathways that produce a sense of familiarity with the event, but not those that support proper recollection of a previous event, creating a disconnect that makes us feel like we know it without being able to pull out exactly when or where we experienced it before.  
(2) Our brains probably store memory in such a way that a small stimulus (a smell, a color) can trigger the incomplete recall of a real but different memory… and in some cases this might give us a sense that the current experience has already been experienced.  (The first part of this is certain – it’s the second that’s up in the air.)
(3) Our two brain hemispheres might sometimes get slightly out of sync when processing an input, such that one side gets that direct input fractions of a second earlier than usual and therefore misinterprets the added information from the other half of the brain as a repeat of an already-experienced memory. 
(4) We “experience” many types of things in media like books and movies, which allow us to feel strong familiarity for things we’ve never actually experienced in real life – and when we see it in real life for the first time we might accidentally think we’ve already seen it. 
(5) Some researchers believe that precognitive dreams (i.e. dreams which predict future events) may create a sense of déjà vu later on when they are properly experienced.  I’ll tackle this one shortly.  
(6) And lastly – and this is the least controversial of the theories because it’s the most testable – déjà vu can occur as a result of an epileptic event, like a seizure, in the medial temporal lobe. 

(There are plenty of other theories I’ve decided to let you discover on your own, seeing how long that paragraph has become already.)

I like aspects of a lot of these, but I want to put my money down on the first and last – the disembodied familiarity thing and the seizure thing.

There’s a lot of evidence that one’s concrete knowledge of a previously-experienced event (call it recollection) and one’s comparatively vague sense of familiarity with an event are different things that are processed differently by different brain regions – recollection by the hippocampus, and familiarity by… well, parahippocampal and/or perirhinal cortex, depending on who you talk to (they’re both structures basically adjacent to the hippocampus).  In the rare déjà vu experience, it’s possible that something about the current environment differentially stimulates the familiarity and recollection brain structures, creating a detached sense of familiarity.

Notice that in the previous sentence, I said it was something about the external environment causing the brain activation.  But it’s also possible that your brain just does this stuff to itself, without any outside help.  For example, people with temporal lobe epilepsy sometimes report feeling déjà vu right before a seizure strikes.  But you don’t have to have epilepsy to have epileptiform brain activity, and in fact every single person on the planet has endured some level of seizure-like activity in his or her brain.  Basically, every once in a while some tiny group of neurons goes a little haywire and activates for no good reason, but it’s natural and nothing to worry about.  Mostly these events don’t impact our conscious lives at all.  But maybe, sometimes these events occur in just the right place at the right time, activating our familiarity structures out of the blue, and suddenly the whole world around us feels like we’ve done it before.

Regardless of whether it’s externally or internally generated, it makes sense that déjà vu is an innocent brain mistake which makes us feel something that’s not really real.  It helps explain why we sometimes feel recursive déjà vu – the sense that we’ve even had this particular sense of déjà vu before, and that we’ve had a déjà vu of that déjà vu of a déjà vu, and so forth.  That’s just our brain accidentally and repeatedly triggering a feeling that this event has occurred before when it hasn’t.  So I’m pretty darn confident that when you experience déjà vu, that exact experience has never happened to you before – no matter how much you want to believe that.  That want, that need – that’s just your brain talking.

Which brings me to precognitive dreams.  I will certainly insult people with my opinion about this, but I’m willing to take that hit and say that the ability to actually foresee future events in a dream is literally impossible.  Let me rephrase that so I can be totally clear – precognitive dreams cannot be the true experience of a real-life event before it happens.

There are just too many problems with the idea that dreams can be pre-plays of real events (not least the violation of causality).  I’ll name a small few.  (1) The vast majority of things that happen to us happen repeatedly, so it’s practically impossible to avoid dreaming up scenarios which will be similar to later life events; also, “similar” is not at all the same as “identical”.  (2) If you compare every dream that you’ve ever dreamed with every event that has ever happened to you, you will absolutely come up with matches, and it has nothing to do with foreseeing anything.  (3) Our brains can make us feel conviction about things we actually can’t remember very well, so when those similar real events happen we can be duped into accidentally overwriting our dreams to match the events (someday I’ll write a post about this point).

Okay, enough of that.  I don’t want to give the impression I don’t believe dreams can be predictive.  Brains are prediction machines.  Especially human ones.  It’s arguably what we do best.  So it’s totally reasonable that your brain makes very, very good predictions about the future while you’re dreaming, using information you might not consciously piece together while going about your daily routine.  I am a happy believer when someone tells me that they always dream of a white elephant before someone dies – so long as they also tell me the white elephant is their brain’s way of assimilating a host of (subconscious) clues indicating that someone was about to die.  Such a dream would be entirely plausible, and maybe even probable.

What I’m saying is that déjà vu serves as a reminder that our brains are doing a lot of things behind the scenes.  In fact we don’t have conscious access to the majority of the things our brains do.  (Go ahead, try and stop your heart just by thinking it.)  When magical things like déjà vu and prescient dreams happen to us, we can congratulate our brains for being so gosh-darn brilliant without us even knowing it.  They really are capable of miraculous feats.

P.S. I got really sick of seeing the phrase “déjà vu all over again” in article titles as I looked all this up.  I used to love saying that and now it’s tainted for me forever.  So sad…

01 April 2012

The object of your dreams

Every single one of my writing ideas has surfaced in a dream.  The making of Canine, for instance, started with a dream about my departed dog – well, about a sentient dog-creature which I identified in the dream as both my sweet girl but also a male wolf-dog.  You know how dreams go.

I had occasion recently to question the nature of dreams – specifically, dream protagonists.  And I was shocked to find that not everyone dreams the same way about dream protagonists.

I want you to think about all the dreams you’ve had – recently or over your life, I don’t care.  Actually, it would be interesting to consider whether your dreams have evolved over time, too.  But I want you to think about the main characters in those dreams, and to assess the following three qualities (please keep in mind that it is feasible to have all possible combinations of these qualities as you consider them):

1. Identity.  Who is the protagonist in your dreams?  Is that person yourself, or someone else?  In other words, what does your protagonist look like?  (If you answer, “Well, it’s like a version of me that does things I would never or can’t do,” then for this purpose, your answer is “yourself”.) 

2. Agency.  Are you the protagonist, or are you more like a camera following someone else?  Do you identify that protagonist as yourself, no matter what they look like?

3. Perspective.  Are you seeing things from the protagonist’s perspective (first-person) or are you watching the protagonist from the outside (third-person)? 

I ask you these questions because I was really surprised to hear the answers from my friends.  Many people, when I’ve queried them, have said that (1) their real self is the only protagonist they dream about, (2) they always identify as the protagonist, and (3) because dreams will be dreams, they see it in first person or in some combination of first and third person.

I thought this was CRAZY. 

Like seriously crazy.  For as long as I can remember, I’ve been dreaming either as myself or as other people or animals or characters (I told you about my dream where I was Raphael from the Ninja Turtles, and in the first dream I remember I was a deer, and I’ve been my stories’ protagonists and strangers I don’t recognize).  Generally when I dream as a woman, I’m myself – but even that changed a couple weeks ago when I finally had a dream as another woman.  Most of the time I identify as the protagonist, but I’ve had dreams where I’m just following someone else around, like I’m the camera.  And lastly, I’m pretty consistent with everyone else in that I have dreamed in the first, first-and-third, or third person (it’s ever-changing within single dreams, usually).  I still have a hard time believing my friends have never had a dream like that.

The point it really hammered home for me is that my experience is not everyone else’s experience.  Of course I knew this, but not in so concrete a way – and it took me 26 years to figure out that not everyone has dreams in which they are a different sex or species.  I wonder now how many other assumptions I’m making about so-called “human experience” that are just my personal idiosyncrasies.  It’s really kind of concerning.

Please fill up my comment field with your answers to my above three questions, because I would really, really like some more data about this.  You may answer in proportions – e.g. you dream as yourself 80% of the time but someone else 20% of the time, etc.  Thank you!