Sunday, October 12, 2014

Too Young to Remember

There's a common belief that if a child was exposed to trauma when xe was very young, too young for xim to consciously remember the trauma, it won't really affect xim. Since xe doesn't remember the trauma happening, and (unless told about it) probably won't know it happened at all, xe shouldn't have any lasting effects, right?


Wrong. Laboratory rats demonstrate infantile amnesia pretty clearly. Like human toddlers, 18-day-old rats soon forget many of the memories they've laid down (such as the association between a sound and an electric shock). Forgetting is faster in rats than in humans, but the mechanism is thought to be the same.

If a rat can't remember being 18 days old, it certainly can't remember being 14 days old or younger. But when rat pups are removed from the nest for 3 hours each day (a stressful experience for a rat pup) from 2 to 14 days old, they show long-lasting changes in physiology and behavior. Even though they presumably have no memory of being removed from the nest, it still makes them anxious, hypersensitive to stress, and prone to alcoholism. (Little-known fact: rats like alcoholic drinks about as much as humans do, with the same range of individual variation in voluntary drinking.)

The same is true of humans. Studies of children adopted from institutions* between around six months and three years old (when most children will have little or no memory of their life pre-adoption) have shown that these children nevertheless tend to function more poorly than children adopted at younger ages or non-adopted children, with different ages being crucial for different specific symptoms. In this study of children from Russian orphanages, for example, the latest age at adoption was 27 months (a little over 2 years), but even so, later-adopted children had poorer self-control than earlier-adopted children. And this study found that children adopted from well-run but emotionally deprived institutions between 13 and 24 months old had significantly more behavior problems than children adopted from similar institutions before 13 months. (Interestingly, though, few studies have shown any lasting effect of trauma occurring before 6 months, even though some studies show short-term effects. It seems that experiences in later infancy and toddlerhood can completely reverse the effects of experiences before 6 months of age.)

In both humans and rats, conscious memories of trauma are only one part of the impact that trauma can have on an individual. Far more significant is the impact that extreme stress can have on brain structure and function - and since young children's brains are growing and changing much more dramatically than those of older children or adults, the effects of trauma on the brain can be even greater** in these children.

So don't discount the effect of a trauma the child was too young to remember. It could have reshaped the child's brain, changing the way xe thinks and feels in ways xe can't tie to any conscious memories.

* Adopted children are the best group in which to study the effects of early childhood trauma, because the change in home environment usually means the child suffers no trauma in later childhood that could confound the results.
** The study linked to, though old, is the only one I know of that directly compares adopted children who experienced good infant care and trauma in later childhood to children experiencing early trauma. Unfortunately, it's only available through institutional access. If you'd like to read it, let me know your e-mail address in the comments and I'll e-mail it to you.

Sunday, July 27, 2014

Daughters of Neanderthal Men - Haldane's Law, Culture, or Both?

Genetic research has now proven that Neanderthals and homo sapiens interbred, and that Neanderthal-human hybrids contributed to the human genetic pool. (I myself am 3% Neanderthal, according to the genetic test I had.)

Oddly enough, however, neither Neanderthal mitochondrial DNA (passed down from mothers to their children) nor Neanderthal Y chromosomes (passed down from fathers to sons) have survived in the human population. This implies that our Neanderthal ancestry doesn't come equally from all of the possible types of hybrid children. Instead, we appear to be specifically descended from women with Neanderthal fathers, who inherited human mitochondrial DNA and no Y chromosome. Why?

Two research articles I've seen have put forward two possible explanations. This article suggests Haldane's Law as an explanation. Haldane's Law is a pattern often seen in interspecies hybridization, where the heterogametic offspring (males in mammals) have poorer fertility than the homogametic offspring (females in mammals). This would predict that among Neanderthal-human hybrids, the daughters would be fertile, but the sons would be sterile. However, this doesn't explain why daughters of Neanderthal women (with Neanderthal mitochondrial DNA) didn't contribute to our genetic pool.

Other accounts have suggested cultural explanations, rather than biological ones. Perhaps, for whatever reason, daughters of Neanderthal men were the only hybrids who actually mated with humans. The others, rather than being sterile, either remained celibate or mated only with Neanderthals. But what would cause this pattern of behavior?

Both chimpanzees and bonobos, despite their behavioral differences, show female-biased dispersal - males stay with their home troop, while their sisters leave and find new troops to join. This study suggests that Neanderthals showed a similar pattern of sex-biased dispersal, with the men in a Neanderthal tribe being more related to each other than the women were. Modern human cultures vary quite a bit in this practice, but given our relatives, patrilocality was probably the ancestral pattern for us as well. So, for the sake of argument, let's assume both humans and Neanderthals were patrilocal.

It's also important to note that humans show two different mating strategies. As far as I know, every culture has a normative expectation that men will live with and support the mother (or mothers) of their children. (If someone knows of a culture where this is not the norm, let me know.) However, in most cultures, a subset of men buck this pattern, impregnating women (through rape, voluntary flings, or feigned commitment) and then playing no part in the support of the resulting children. Let's assume, for the sake of argument, that Paleolithic human men showed the same two patterns of behavior, fathering children both in and out of committed relationships. Maybe Neanderthal men did so as well.

The big question is - which strategies resulted in hybrids?

If men of both species - or just human men - took wives of the other species occasionally, then we'd expect their wives to come live in their tribe, and the hybrid children would grow up with their father's people. Patrilocality would predict that the sons would stay in the tribe, while the daughters went off to marry men of one or both species. Sons of Neanderthal women would have a human Y chromosome, and their Neanderthal mitochondrial DNA would not be passed on to their children. However, their sisters would pass on Neanderthal mitochondrial DNA. So unless human men were willing to marry Neanderthal women but not hybrid women (which seems unlikely), intermarriage should have led to Neanderthal mitochondrial DNA surviving in human populations.

Conversely, if men had sex with the other species but didn't marry them, then the hybrids would be raised by their mother, either as a single mother or with a stepfather of the same species as her (who may or may not have realized the kids weren't his - though hybrids would undoubtedly look pretty unusual). Children with Neanderthal mothers would have grown up among Neanderthals, and both genders would most likely have taken Neanderthal partners, with their descendants dying out along with the other Neanderthals. However, children with human mothers would have lived with humans and intermixed with humans.

This account neatly explains why no Neanderthal mitochondrial DNA survived in human populations. But why didn't Y chromosome sequences survive? After all, hybrid men with Neanderthal fathers are expected to have married human women. But if we combine this account with Haldane's Law, then these men would have been sterile, and left no descendants. Only their sisters succeeded in passing on their genes, mingling down the generations until everyone in their group had a little bit of Neanderthal ancestry.
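To keep the inheritance logic straight, here's a minimal sketch (my own illustration for this post - the function name and labels are invented, not taken from either article) enumerating the four first-generation hybrid types and which uniparental markers each would pass on under the combined account:

```python
def uniparental_legacy(father_species, mother_species, child_sex):
    """Return (mtDNA source, Y-chromosome source or None) for a hybrid child."""
    mtdna = mother_species  # mitochondrial DNA is strictly maternal
    y_chrom = father_species if child_sex == "son" else None  # Y is strictly paternal
    return mtdna, y_chrom

for father, mother in [("Neanderthal", "human"), ("human", "Neanderthal")]:
    for sex in ("son", "daughter"):
        mtdna, y = uniparental_legacy(father, mother, sex)
        # Under the account above: hybrids raised among Neanderthals (human
        # father, Neanderthal mother) died out with the Neanderthals, and
        # Haldane's Law would make the hybrid sons sterile anyway.
        contributes = (mother == "human") and (sex == "daughter")
        print(f"{sex} of {father} father x {mother} mother: "
              f"mtDNA={mtdna}, Y={y}, contributes={contributes}")
```

Running it, the only type flagged as contributing is the daughter of a Neanderthal father and a human mother - who carries human mtDNA and no Y chromosome, matching exactly what we see in modern genomes.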

Wednesday, June 18, 2014

Almost Human

Recently, I've been fascinated by human evolution, and the different species that were closely related to us.

We humans like to think we're special, so different from all the other species. We tend to see it in black-and-white – either you are a person, or you aren't. But in human evolution, it wasn't black-and-white. There was no one point in time when we became human. Instead, different traits of humanity appeared at different times, and depending on what you think is most crucial, you'd draw the line at different points. Compassion for others and basic tool use were most likely present before our ancestors split off from chimps and bonobos. Upright walking and smaller jaws distinguished Lucy and other australopithecines from the other apes, but their brains were mostly unchanged. Then brain sizes increased, and the first stone tools appeared, skillfully crafted by homo habilis. At first, our ancestors only made one kind of tool, but then came an explosion of tool-making diversity, and we started making technological advances, with a steady improvement in tool designs over time. Then deliberate burials, carved statues and cave paintings began to appear, suggesting the birth of imagination and religion.

There is a lot of disagreement over when certain crucial human behaviours appeared. It used to be thought that homo habilis was the first hominid to make and use tools, until we discovered that many primates make simple tools, such as stripping a stem of leaves to fish for termites. There has been a lot of debate about whether creativity and deliberate burial were unique to homo sapiens or could also be seen in Neanderthals and our common ancestor homo heidelbergensis. (My impression, from the research, is that all three species did this, but homo sapiens did it more extensively.) There have been a lot of debates about language, when and how it first emerged. We used to think Neanderthals didn't talk, but genetic evidence suggests they did (and may have even used a tonal language!). Now, the bigger question is whether homo erectus could talk, and how well.

This debate, for many people, involves an element of looking for the crucial step, the crucial point at which we became 'fully human'. In this way, it mirrors how many people think about severely disabled people – where is the line between a person who struggles with X and Y and someone who is not really a person anymore?

When homo sapiens first appeared, we shared our world with four other hominid species – Neanderthals, Denisovans, homo erectus and homo floresiensis (nicknamed 'hobbits'). We're not sure where Denisovans fit in (one suggestion is that they were a cousin to Neanderthals), but both we and the Neanderthals descended from homo heidelbergensis, which descended from homo erectus. Homo floresiensis, who were tiny little guys, were another branch off of homo erectus.

All of these species lived fairly similar lives, making tools, eating a mix of meat and plant products, living in small, tight-knit social groups. Two of these species were actually close enough to us that we could produce fertile offspring, although probably with difficulty. (DNA research suggests that only daughters with Neanderthal fathers and homo sapiens mothers contributed to the small percentage of Neanderthal DNA in all non-African people. Their brothers were probably infertile, and the reverse crossing may not have been viable or may have only produced sterile offspring.)

I sometimes wonder how we might see ourselves and other species differently, if those other hominids had survived as separate populations. If, as in many fantasy stories, we shared our world with other species who are different and yet so similar, would we see them as people, or as talking animals? Would we even see such a divide? Would we still think of ourselves as so special and unique, if our closest relatives were still around to show us how non-unique we were? Or would we just move the line over a bit?

It used to be that we did not see personhood in such a black-and-white way. In the medieval era, a nobleman was more of a 'person' than his wife, and both of them were more 'people' than their servants were. Their servants, in turn, were more 'people' than a different ethnic group would be. Personhood was a spectrum. Over time, this idea fell out of favour, mainly because it led to some vicious prejudice.

But in some ways, our current black-and-white divide isn't that good a concept to replace it with. I don't think the same ethical standards apply to me as to my cat. If we saw all species as having the same moral rights, then my cat would be no different from Jeffrey Dahmer – both of them killed and ate other living creatures, not because they had to do so to survive, but because they enjoyed it. But I think a cat killing and eating mice for fun is very different from a human killing and eating another human for fun.

But where it gets messy is when the difference is less clear. If we didn't see Neanderthals as people, how would we see their children? How much Neanderthal ancestry would you need, before you weren't considered a person? (Incidentally, I have 3% Neanderthal ancestry, which is the same percentage I'd have if I had one great-great-great grandparent who was a Neanderthal.)
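For the curious, the arithmetic behind that 3% figure is just repeated halving. This is a toy calculation of the *expected* fraction - in reality, recombination passes DNA on in chunks, so actual inherited percentages drift around this value:

```python
# Expected fraction of your genome coming from one ancestor n generations back.
# Idealized: real inheritance is chunky, so actual fractions vary around this.
def ancestry_fraction(generations_back):
    return 0.5 ** generations_back

# parent = 1 generation back, grandparent = 2, great-grandparent = 3, ...
# so a great-great-great grandparent is 5 generations back
print(ancestry_fraction(5))  # 0.03125, i.e. about 3%
```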

Well, let's say we did see Neanderthals as people. What about homo erectus? They had an average brain size about two-thirds of our own. There's no evidence that they buried their dead, or showed any sign of imagination. There's a lot of debate about whether they used language or not. If they did use language, they'd have conveyed much simpler ideas, and may have had a simpler language structure. And yet they made tools and may have used fire to cook their food. And they loved their families, cared for the sick, injured or disabled, and worked together to achieve common goals.

It would be so interesting, getting to know a homo erectus. But for many people, it would probably also be quite threatening. They were so similar to us, but at the same time, they were so different. In speciation terms, there is no evidence that we successfully interbred with them – either we couldn't interbreed at all, or all offspring that resulted were sterile. But a homo sapiens and a homo erectus could certainly become friends, if both were open to the possibility. What kind of friendship would that have been?

Sunday, June 15, 2014

A Look At the Grandroids Demo

Grandroids now has a playable, backers-only demo!

It's been out since April, and I was waiting to see if it would be OK to release this on my blog, but now I believe it is.

I've been making a series of videos showing my tinkering with the game. You can watch them here:

So far, the creatures just have the visual system and the gait control in place, but it's amazing how alive they seem even with only that.

I've been tinkering with their inner workings to figure out how they walk. In the later videos, you can see me giving them brain-based limps, leg tics and slow-moving limbs. It's a lot of fun.

Thursday, June 05, 2014

Euthanasia and the Slippery Slope

Normally, I hate slippery slope arguments. But in this case, I think it's warranted.

I'm OK with euthanasia being an option for terminally ill patients who are suffering a great deal. My problem with euthanasia is that, with our society's attitudes to disability, it will not stop there.

And I have evidence to back this up. Look at pets.

For pets, euthanasia is an accepted option - no controversy about whether or not it should be allowed. My own parents have euthanized several of our pets, when they were terminally ill and in a lot of pain.

For example, my dog Sasha, at age 12, was found to have breast cancer that had travelled all along her stomach and partly down one leg. Doctors told us there was little they could do, her condition was pretty much certain to kill her even if we tried aggressive treatment. They were willing to surgically remove some of the tumors, but if they removed them all, they'd cause such extensive damage that she'd probably die simply from shock. We chose to euthanize her.

Another pet we euthanized was my 20-year-old cat Timmy. He'd been bleeding from the mouth and meowing while trying to eat, and it turned out half of his jaw was one big tumor. Doctors told us that in other cases, when they'd removed a tumor like this, the cats had refused to eat and starved themselves to death. If they didn't remove it, it would keep growing and causing more and more pain until Timmy could no longer eat. We chose to euthanize him.

These are both cases where, if they'd been humans with the same issues, I would be in favour of allowing them the option of euthanasia. (Timmy's issues are unlikely to occur in any human, because part of his problem was a quirk of cat behavior that isn't present in humans, but Sasha's illness occurs in humans and, if not caught early enough, can cause the serious problems she had.) Unlike animals, most humans can tell us whether they want to fight the illness or not, and the decision should be up to the patient. But I don't have a problem with a woman with very advanced metastatic cancer choosing to die a bit earlier in a more comfortable way, when her death is pretty much guaranteed either way.

But we've also had some animals that vets counselled euthanasia for, that we refused to euthanize. Most people, in our position, would have euthanized these pets, and missed out on the happy years they had left.

Charlie was a 15-year-old cat with diabetes. She'd been getting skinnier and skinnier, and then she got frostbite on her ear when it shouldn't have been cold enough for frostbite, so we took her in. We were expecting either 'she's fine, nothing to worry about' or 'there's nothing we can do, she's terminally ill'. Instead, we got a diagnosis of diabetes - a condition we knew full well was manageable with medication. The doctors taught us how to give her injections. At first, Charlie hated them, but over time we got better at giving them and she realized they made her feel better, so she tolerated them. And we had two more years of cuddling and purring and bumping against my Dad's recorder as he played. Her diabetes had no impact on her quality of life. At least, not until it killed her kidneys, and she died on the way to the vet.

Anja was a rat, about a year old. (Rats live around 2-3 years.) We're not sure exactly what happened, but our best guess is that one of the toys in her cage fell on her and broke her back. In any case, her hindquarters were paralyzed, and vets recommended we put her down. We refused, and they gave us advice on how to care for her. They told me to keep her isolated in a restricted area for a while so she could heal, and gave me advice on softer bedding that wouldn't hurt her as she dragged herself along it. But over a couple of weeks, her mobility improved dramatically, until her only impairment was an inability to jump or climb. We didn't bother changing to softer bedding, just put her back in her cage with her friend (rats are social, and get depressed if kept alone too long). She went on to live a happy life, filled with treats, teeth grinding and cuddles, until she died suddenly, most likely from old age.

Charlie and Anja had lives that were worth living, just as humans with diabetes or spinal cord injuries do. Even if Anja's recovery hadn't been so dramatic, we could've given her a good life. Rat advice websites state that some male rats develop progressive hindquarter paralysis in later life, and these rats still live a good life when their back legs can't move at all.

But I'm pretty certain that many people, if they'd had Charlie or Anja, would have put them down. Similarly, people often put down cats with cerebellar hypoplasia, but YouTube abounds with videos of happy CH kitties wobbling around. (Buddy, for example.) Our society has the attitude that disability, any disability, is a horrible fate. And if euthanasia were available, many people would approve of it even for minor conditions. Worse yet, many newly-disabled people, without giving themselves time to see if they can adapt and live well, would jump to the conclusion that their good life is over and seek euthanasia.

Another example of how many people will choose death over disability, even for very minor conditions, can be seen in abortion rates for prenatally diagnosed conditions. Most babies prenatally diagnosed with a disability, any disability, are aborted. In one study, out of 40 prenatally-detected cases of sex chromosome aneuploidy, 25 (63%) were aborted. To give some background, there are four types of common sex chromosome aneuploidies - XXX, XYY, XXY and monosomy X.

XXX and XYY are very mild, causing a slight drop in IQ (around 5-10 points) and a higher risk of learning disabilities and ADHD, as well as making kids a bit taller than expected. Many people with these two conditions are never diagnosed, because they have few or no symptoms and no one thinks to test their chromosomes.

XXY, also known as Klinefelter's Syndrome, is a bit more significant. In addition to the same traits seen in XXX and XYY, Klinefelter boys sometimes experience feminine body changes at puberty (breast growth, getting curves). These changes can be readily prevented by testosterone treatment. In addition, Klinefelter Syndrome usually causes infertility, which can't be prevented. Even so, many men are never diagnosed, or are only diagnosed in adulthood due to infertility.

Monosomy X, also called Turner Syndrome, is the only sex chromosome aneuploidy often detected at birth. It causes a distinctive set of minor physical changes, such as extra skin around the neck, which have no significant impact but allow doctors to spot the condition. In addition, Turner girls are infertile, shorter than expected, often don't go through puberty unless given estrogen treatments, and have an increased risk of congenital heart defects. Turner Syndrome also causes nonverbal learning disability and difficulty reading nonverbal cues.

Even the most severe of these conditions is easily managed, and poses no real impediment to a full and happy life. And two of these conditions are barely detectable at all. Yet when parents are given the option to abort, two-thirds of these kids are aborted. Even a condition as manageable as Turner Syndrome is seen as a serious enough problem that, in their parents' eyes, not living at all is preferable.

And until this changes, I don't want yet another way for disability prejudice to kill us off.

Sunday, May 25, 2014

What Studying Autism Won't Teach You

There is a lot of research into autism. Some of it is primarily motivated by practical, real-life questions about how best to educate and accommodate autistic people. But other research is motivated by more scientific questions - what can autism tell us about the social brain?

Unfortunately, the answer is - not much.

Autism is a behaviourally defined condition. It is also a highly heterogeneous condition. What this means is that the category of autism lumps together a wide variety of kids, with very different underlying conditions, who share a set of behaviours.

This is a big problem for anyone trying to do scientific research on autism. Studying social skills in autism is semi-tautological - at best, all you'll learn is whether your test predicts real-life behaviour. Because autistic people are heterogeneous and defined by social skill impairment, any social skill test that is correlated with real-life impairment will show impairment in autism, by definition.

Case studies of autism would actually be more useful, because then the individual differences aren't smoothed and averaged out. But case studies also suffer from having no statistical power - no ability to tell coincidence from correlation. And we don't yet know the best way to subdivide autistics so we get more homogeneous groups to study.

Because autistics are pre-defined by social impairment, but have very different sets of impairments in other areas, studying them as a group will give you the impression that social skills are separable from these other skills, even if they aren't.

To illustrate, I'll discuss a few hypothetical cases.

Imagine one kid who has visual processing problems, and can't read facial expressions for the same reason he confuses a cat with a dog. If you test him alone, his impairment in facial expression recognition will be accompanied by severe impairments in every other visual skill, and he won't be impaired in understanding tone of voice, making it pretty obvious his impairment isn't primarily social.

Imagine another kid who has auditory processing problems. He struggles in even identifying speech as speech, much less understanding it. Obviously, he has a language delay, and this delay in language deprives him of the chance to 'look inside' another person's mind, and check his assumptions about what they're thinking and feeling. As a result, his social cognition is delayed, although his ability to read facial expressions is OK. Speech therapy addresses his language delays, teaching him to talk, but he doesn't get training in social skills except what he needs to cooperate with speech therapy.

Imagine a third kid, who is unable to feel embarrassed. He is impulsive and socially inappropriate, and because he's never embarrassed by his behaviour, he doesn't get why he shouldn't do it. The emotions he feels, he can recognize in others, but because he never feels embarrassed, he can't tell when others are feeling it.

Imagine a fourth kid, who has trouble shifting attention. Because many social cues are brief and fleeting, he'll miss them unless he happens to be paying attention to the right person at the right time. He also misses many other things, like the curb cut he tripped over because he wasn't watching where he was walking, or the little bird that spotted him walking past and flew away. On tests, his attention difficulties result in scattered performance - sometimes he gets it, sometimes he doesn't. And many skills are poor because he hasn't been paying attention to the right things at the right times to learn them.

Each kid has at least some social impairment, and could potentially be diagnosed as autistic. So let's say we average together their scores on various tests. This will obscure the one kid's visual impairment, because the other kids process visual stimuli just fine. It'll also obscure the other kid's auditory and language impairments. On a test of attention, the last kid's attention impairments will be obscured, because the other kids shift attention just fine. And if they even think to test embarrassment, they won't find much difference, because most of the kids get embarrassed fairly readily. The only areas where these kids' impairments overlap are in social areas, and so the average will show social impairment with no other impairments.
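The averaging effect is easy to demonstrate with made-up numbers. In this sketch (invented scores, purely for illustration - not data from any real study), each of the four hypothetical kids gets a severe deficit in his own domain plus a shared, milder social deficit, and then we average:

```python
# z-scores: 0 = typical, negative = impaired (all numbers invented)
test_scores = {
    "visual kid":    {"visual": -3, "auditory": 0, "embarrassment": 0, "attention": 0, "social": -2},
    "auditory kid":  {"visual": 0, "auditory": -3, "embarrassment": 0, "attention": 0, "social": -2},
    "unembarrassed": {"visual": 0, "auditory": 0, "embarrassment": -3, "attention": 0, "social": -2},
    "attention kid": {"visual": 0, "auditory": 0, "embarrassment": 0, "attention": -3, "social": -2},
}

domains = ["visual", "auditory", "embarrassment", "attention", "social"]
for domain in domains:
    mean = sum(kid[domain] for kid in test_scores.values()) / len(test_scores)
    print(f"{domain:>13} group mean = {mean:+.2f}")
```

Each kid's defining impairment gets diluted to a quarter of its size (-0.75, mild and easy to dismiss), while the one shared deficit survives averaging intact at -2.00 - producing the illusion of a selective 'social' deficit at the group level, even though no individual kid actually fits that profile.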

This apparent 'generalized social impairment without other impairments' is what a lot of autism research has found. And this has led to speculation that there may be some sort of 'social module' to the brain, some brain area or network that subserves all social skills, but isn't crucial to any other skill area. But as I've shown here, this apparent modularity could simply be an effect of averaging together results from many people with only one area of overlap in impairments.

What would really teach us about the social brain is to study groups pre-defined by biological, not behavioural measures. People with Fragile X Syndrome, Turner Syndrome, Williams Syndrome, Down Syndrome, etc. Or people with damage to brain regions thought to relate to social interaction, such as frontal lobes, amygdala, or anterior cingulate cortex.

There needs to be more research into the social skills of individuals with biologically-defined conditions. But what research has been done already suggests a very different view - social skills are not a unitary construct. There are face-specific impairments. There are emotion-specific impairments (like the never-embarrassed kid described above). There are specific impairments to social attention (or enhancements of it, in Williams Syndrome). Social skills are not a generalized skill area, but a collection of skills that work together in a certain context.

It's also important to consider the role of experience, and groups defined by certain kinds of experiences can be useful as well. Research into deaf people who acquired language late has highlighted the contribution that language skills make to social skills. Research into traumatized children has shown that emotional experiences can bias interpretation of social situations, making them extra sensitive to signs of danger. And research into younger siblings has shown that a social partner who is just a bit ahead of you can accelerate your social development.

So if you want to understand the social brain, don't study autism. Study groups defined by something other than their behavioural traits.

Saturday, May 17, 2014

The Time I Used A Wheelchair

[in reply to this post, by a wheelchair user who's now walking a lot more]

I'm pretty sure I have undiagnosed mild hypermobility, and I can definitely relate to standing being harder than walking. I mostly deal with it by squatting whenever I'm expected to stand for a long period, which gets me odd looks sometimes but usually works well. But even squatting gets tiring if I do it too long, and often I don't want to just sit down on the floor. Plus, getting back up from squatting hurts a bit, if I do it too many times in succession.

Tours are pretty much the worst thing for my walking issues. Walking a short distance, then stopping to look at something, and then walking some more only to stop again. Even if I squat whenever I stop (if I can do that and still look at whatever I'm wanting to look at), pretty quickly I get to the point where getting back up hurts almost as much as if I'd stood the whole time. In any other situation, I can handle my mobility issues with minimal pain and no need for assistance, but when I'm doing a tour of some kind, it's really unpleasant.

In one autism scale I read, there was a question where you had to indicate whether you'd prefer to go to a museum or a theatre. It was supposed to reflect whether you're more interested in looking at things or watching people, but I chose theatre because going to a museum tends to cause me pain. Whenever I go to a museum, I have a choice of not being able to really experience what the museum has to offer, or else causing myself pain while trying to look at all of the displays. The only time this doesn't happen is when the displays are designed so I can easily enjoy staying (without standing) at one display long enough for my body to rest a bit, such as with interactive displays.

But one time, when I had to go to a museum for a class assignment, I worked up the courage to ask to use their wheelchair. It took a lot, emotionally, for me to be willing to do this, because despite all the disability acceptance stuff I've come to believe in, part of me still feels that I'm just a faker and that's a really bad thing. (I did enjoy faking disabilities as a kid, simply as a form of play, and I don't objectively think there's anything wrong with that, but my parents told me off for it and so I feel guilty about it.) I knew I had a good reason to use the wheelchair, and that the whole reason museums have borrowable wheelchairs is for people who can walk in other situations but find walking in a museum difficult or uncomfortable. I was also lucky that the person behind the counter didn't seem to think it was at all odd for a young, healthy-looking woman to walk up and ask to borrow a wheelchair.

And I'm glad I did, because that was pretty much the only museum visit I've ever had where I wasn't in pain for any of it. It's amazing how big a difference pain makes to your enjoyment - I found myself actually being more interested in the displays and more curious about the subject matter, simply because I wasn't hurting while looking at them. It was incredible. It's not a particularly exciting museum, but I really enjoyed it anyway.

At the same time, I also noticed how strange it felt to be in a wheelchair when I'm used to walking. My legs got an odd feeling to them, like they thought they must be moving because I was going forward and couldn't understand why they weren't moving. (I got the same feeling when I first started driving, until I got used to it.) It was also strange to have trouble with a steep ramp, to be rolling backwards unintentionally while trying to look at something, and to have to fit my wheelchair into spaces instead of just my body fitting there.

People's reactions were also a surprise. There weren't many people there, but those who were seemed quicker to offer me help than people usually are, and once or twice someone tried to explain a display to me, as if I couldn't read it myself. I wasn't any different, but I could tell people saw me differently. It was the one thing I didn't like about the experience. I did get some help that was useful, but I also felt kind of uncomfortable. Part of me was afraid to stand up where the other patrons could see me, for fear they'd accuse me of faking, while another part wanted to launch into a big explanation of why I was in the wheelchair and how I usually walked instead.

I haven't used a wheelchair since then, but then, I haven't been to a museum either, or any other place where I'm expected to do a lot of starting and stopping and there are borrowable wheelchairs. But I'm thinking next time I'm in that situation, I should do it again.