Thursday, December 18, 2014

'But It's Not Healthy!' Why Fat-Shaming Cannot Be Excused Based on the Health Effects of Fat

A few years ago, my brother was a regular soccer player, liked to go swimming, and was starting to get interested in karate. He's always been a big boy, tall for his age, with broad shoulders and lots of muscle, but back then, his weight was fine. He liked to eat, but he didn't eat more than he needed. Although he wasn't super-athletic, he was healthy.

Since then, he's had a couple of moves and some bullying, with kids calling him 'fat' - a label that was initially inaccurate. He's had a gym teacher shame him for not being able to run as many laps as other kids, and his high school forced him to do a calorie-counting assignment.

And now, he's noticeably overweight. He's quit soccer, rarely goes swimming, and never goes to karate. He spends most of his time inside, playing video games, and tells me he often feels like he can't go outside because he doesn't want people seeing him. He eats not only out of actual hunger, but also to comfort himself when he's feeling down - and he's been feeling down a lot.

I would like him to lose weight - not because I think he looks bad, but because his doctor says he has high cholesterol and may be at risk for heart disease. But whenever I make the slightest reference to his weight, his diet, or his level of exercise, he gets depressed. I can look at the cholesterol levels of my own snacks and substitute a low-cholesterol snack for a high-cholesterol one without getting upset. He can't. Merely thinking about cholesterol sends him into a downward spiral.

So, when I hear people say that fat acceptance is bad because fat is unhealthy, I get really upset. If being 'fat' hadn't been treated as grounds for insults and humiliation, then, ironically, my brother might never have become fat. And even if he had, I could be helping him make the changes he needs to lose weight, just like I've done with myself.

Depression is not conducive to eating healthy or getting plenty of exercise. Depression saps your energy, makes you want to hide inside. Depression messes up your ability to regulate eating, making you eat more or less than you should. Depression makes you want to go for comfort foods, foods that taste good and make you feel a bit better, instead of the food you know is healthy.

If you hate your body, any reminder of how you look will trigger depression - an emotional state associated with lack of motivation and energy, comfort eating, and poor regulation of eating. This makes it harder, not easier, to make a positive change in your eating and exercise habits. Sure, some people do it anyway, but those people are exceptions, just like the former alcoholics who can sit in a bar with friends and not drink. It's not a tactic that will work for most people.

So if you think making fun of someone for being fat, or rejecting them because of their weight is in any way justified, think again. You're not helping them, you're hurting them. You're making it harder for them to lose weight, and you're making their life miserable.

So stop being an asshole, and try being nice instead. Leave comments about weight to doctors, who (hopefully) actually know what they're talking about, and give the person a compliment instead. If you want to help them lose weight, invite them to go swimming with you, or to do something else fun and active. Tell them not to worry about how they look or how well they can do the activity - the point is just to move around and enjoy it.

Thursday, November 06, 2014

Can Fiver the Raccoon Count? An SPSS Post.

A while back, I came across these two videos:

The videos depict a wild female* raccoon ('Fiver') being trained to give 'high fives' in exchange for dog kibble. On the first night, after Fiver spontaneously taps the woman's hand while getting kibble, the woman decides to start telling Fiver to 'give me five' and rewarding pats to her hand with kibble. Fiver's variable number of pats, and her seeming confusion and hesitation, lead the woman to wonder whether Fiver may have been expecting more pats to lead to more food. On the second night, therefore, the woman starts giving Fiver one kibble piece per pat, to see if Fiver learns the association.


I decided to tabulate the results of these two videos and run some statistical analyses on Fiver's behavior.


Hypothesis

If Fiver does know how to count, and can tell that more pats lead to more kibble, what should we expect?


Assuming she wants to maximize her kibble reward, Fiver should react to the link between her pats and the kibble she gets by increasing the number of pats over time. Therefore, she should provide more pats per trial on the second night than on the first night. In addition, over the second night, she should increase her number of pats as she learns the contingency.


Method


I coded Fiver's pats based on careful viewing of the videos and the woman's comments. I also coded the amount of kibble Fiver received. In addition, I recorded whether the woman commented on the number of pats Fiver gave her. Each exchange of pats and treats was considered a single trial.


On two occasions on night 2, Fiver was interrupted by another raccoon coming near (the first time, the other raccoon actually displaced Fiver and got some kibble). Since Fiver seemed to take a while to get back into the flow of things after these interruptions, I decided to treat each interruption as the end of a 'session'. Therefore, over the two nights, Fiver had four sessions - one session of 10 trials on night 1 and three sessions of 25, 58 and 24 trials on night 2 - for a total of 117 trials.


I analyzed correlations between pats, trial number and kibble received for each session individually, and also compared the same variables across nights and sessions.
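The analysis itself was done in SPSS, but for readers who want to see how the coded data might be laid out, here's a minimal sketch in Python (pandas) of the kind of trial-level table I worked from and the per-session correlations. The file name and column names are purely illustrative assumptions of mine, not the actual dataset.

import pandas as pd

# One row per trial. Illustrative columns: night (1 or 2), session (1-4),
# trial (trial number within the session), pats, kibbles.
data = pd.read_csv("fiver_trials.csv")  # hypothetical file of the coded trials

for session, grp in data.groupby("session"):
    # Note: if one variable is constant within a session (e.g. kibbles on
    # night 1 once the outlier trial is dropped), the correlation is
    # undefined and comes out as NaN.
    r_pk = grp["pats"].corr(grp["kibbles"])   # pats vs. kibbles received
    r_tp = grp["trial"].corr(grp["pats"])     # trial number vs. pats
    print(f"Session {session}: pats~kibbles r={r_pk:.3f}, trial~pats r={r_tp:.3f}")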


Results


Over both nights, Fiver gave a single pat 51% of the time, two pats 28% of the time, and 3-7 pats on the remaining trials. The trial with 7 pats (on the first night) appears to be an outlier, with the next-highest number of pats in one trial being 5. She received one kibble in 59% of trials, two kibbles in 25% of trials, and 3-5 kibbles in the remaining trials.


Apart from the 7-pat trial, in which she received 4 kibbles, all other trials on the first night resulted in 1 kibble per trial, despite her giving 1-4 pats (1 pat in 2 trials, 2 pats in 3 trials, 3 pats in 1 trial and 4 pats in 3 trials). As a result, once the outlier was excluded, the number of kibbles was constant, so no correlation between pats and kibbles could be computed. In addition, the woman never commented on the number of pats Fiver gave her (though she commented on the quality of the pats at times). There was no correlation between trial number and number of pats, suggesting that Fiver showed no trend towards increasing or decreasing her pats over the course of this night.


On the second night, Fiver received 1-5 kibbles for 1-5 pats, with a correlation of .965 between pats and kibbles. This correlation did not vary appreciably between sessions (.933-.991). Despite this, Fiver showed a negative correlation between trial number and number of pats (-.256), suggesting that she gave fewer pats as the night went on - the opposite of the predicted result. Within session, the negative correlation was stronger in session 4 (-.442) and nonsignificant in sessions 2 and 3 (.030 and .184). The woman gave frequent verbal feedback on trials, saying a single number in 45% of trials (92% of which were 1-pat trials, and the rest were 2-pat trials) and counting aloud in 38% of trials. In all cases where the woman commented on the number of pats, she gave the same number of kibble pieces as her comment indicated.


Comparing the two nights, Fiver showed significantly greater variance in number of pats on the first night than on the second night (SD 1.83 vs. 0.94). In addition, contrary to the hypothesis that Fiver would produce more pats on the second night, a t-test revealed a nearly significant difference in the opposite direction (p = .053), with Fiver producing an average of 3 pats per trial on the first night and 1.71 pats on the second night.
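(For anyone following along outside SPSS, a rough Python equivalent of this comparison might look like the sketch below. It assumes the same hypothetical trial-level table as the earlier sketch, and uses Welch's t-test plus Levene's test as stand-ins for how SPSS labels its output.)

import pandas as pd
from scipy.stats import ttest_ind, levene

data = pd.read_csv("fiver_trials.csv")  # same hypothetical table as above
night1 = data.loc[data["night"] == 1, "pats"]
night2 = data.loc[data["night"] == 2, "pats"]

# Did the variability in pats differ between nights?
lev_stat, lev_p = levene(night1, night2)

# Did the mean number of pats differ between nights?
# (Welch's t-test, which doesn't assume equal variances.)
t_stat, t_p = ttest_ind(night1, night2, equal_var=False)

print(f"Levene: W={lev_stat:.2f}, p={lev_p:.3f}")
print(f"t-test: t={t_stat:.2f}, p={t_p:.3f}; "
      f"mean pats {night1.mean():.2f} vs {night2.mean():.2f}, "
      f"SD {night1.std():.2f} vs {night2.std():.2f}")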


An ANOVA revealed a significant difference between sessions (p < .001). Post-hoc tests revealed that Fiver produced significantly more pats in session 1 (on the first night) than in sessions 3 (p = .003) and 4 (p < .001). In addition, Fiver produced significantly more pats per trial in session 2 (at the start of the second night) than in session 4 (p = .022). All other comparisons were non-significant.
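(Again, the numbers above come from SPSS. A rough Python sketch of the same idea - a one-way ANOVA across sessions followed by Tukey HSD post-hoc comparisons, which is my stand-in for whichever post-hoc test SPSS applied - might look like this, assuming the same hypothetical trial-level table as before.)

import pandas as pd
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

data = pd.read_csv("fiver_trials.csv")  # same hypothetical table as above

# One-way ANOVA: do mean pats per trial differ across the four sessions?
groups = [grp["pats"].values for _, grp in data.groupby("session")]
f_stat, p_val = f_oneway(*groups)
print(f"ANOVA: F={f_stat:.2f}, p={p_val:.4f}")

# Pairwise post-hoc comparisons between sessions (Tukey HSD)
print(pairwise_tukeyhsd(data["pats"], data["session"]))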


Discussion

Overall, the results found were the opposite of what the hypothesis predicted. Rather than increasing her number of pats per trial, Fiver actually decreased the number of pats she produced over the second night.


These results could indicate that Fiver was unable to perceive a link between the number of pats and the number of kibbles she received. Alternatively, Fiver may not have wanted to receive multiple kibbles at one time. The presence of rival raccoons may have made her wary of potential theft, leading to a strategy of requesting only as many kibbles as she could eat in one bite. In addition, since Fiver performed 107 trials on the second night, the decline in number of pats over time could be attributed either to fatigue or to satiation. Certainly, Fiver seemed less motivated as the night went on, although part of this is undoubtedly because she was watching out for her rivals.


With the current data, we can't determine whether Fiver's behavior represented a deliberate strategy or a simple lack of understanding.


Anecdotally, Fiver does seem to respond more promptly and with less coaxing on the second night than on the first. However, this is not necessarily due to the change in contingency, since several other factors might have influenced her behavior.


Firstly, from what I can gather, the session on the first night started after Fiver had already been given a lot of kibble noncontingently. The sudden shift from noncontingent to contingent reward may have confused and/or frustrated Fiver. In contrast, on the second night, the reward was contingent to begin with, and Fiver had already had several trials of contingent reward the night before.


Secondly, Fiver is a wild raccoon, and the woman who posted the videos describes her as 'new' in the first video. This suggests that Fiver was unfamiliar with the woman, and may have been a bit afraid of her. (Fiver often backed away between trials, supporting the fear interpretation.) Once again, on the second night, Fiver knew better what to expect - this woman would feed her kibble and not try to hurt her. Wild animals are often neophobic, especially towards humans. Though city-dwelling animals often have a shorter flight distance than rural animals, coming close enough to eat from the woman's hand may still have been a bit outside of Fiver's comfort zone.


Thirdly, miscommunication may have played a part. On three occasions on the first night, immediately after accepting the reward, Fiver did a very fast, very light pat, which the woman refused to reward her for. After each of those quick pats, Fiver paused and acted as if she was waiting to be rewarded, and seemed to get confused when the reward didn't come. On the second night, I saw only one such unrewarded pat, which occurred early on in session 2. It's unclear whether Fiver learnt to give better pats or the woman became more consistent in rewarding all of her pats, but the proportion of unrewarded pats was certainly far lower on the second night.




* The woman mistakenly refers to Fiver as male in the video, but Fiver has since had pups, making it clear that she's actually female.

Sunday, October 12, 2014

Too Young to Remember

There's a common belief that if a child was exposed to trauma when xe was very young, too young for xim to consciously remember the trauma, it won't really affect xim. Since xe doesn't remember the trauma happening, and (unless told about it) probably won't know it happened at all, xe shouldn't have any lasting effects, right?

Wrong.

Laboratory rats demonstrate this effect pretty clearly. Like human toddlers, 18-day-old rats soon forget many of the memories they've laid down (such as the association between a sound and an electric shock). The rate of forgetting is quicker in rats than in humans, but the mechanism is thought to be the same.

If a rat can't remember being 18 days old, it certainly can't remember being 14 days old or younger. But when rat pups are removed from the nest for 3 hours each day (a stressful experience for a rat pup) from 2-14 days old, they show long-lasting changes in physiology and behavior. Even though they presumably have no memory of being removed from the nest, the experience still makes them anxious, hypersensitive to stress, and prone to alcoholism. (Little-known fact: rats like alcoholic drinks about as much as humans do, with the same range of individual variation in voluntary drinking.)

The same is true of humans. Studies of children adopted from institutions* between around six months and three years old (when most children will have little or no memory of their life pre-adoption) have shown that these children nevertheless tend to function more poorly than children adopted at younger ages or non-adopted children, with different ages being crucial for different specific symptoms. In this study of children from Russian orphanages, for example, the latest age at adoption was 27 months old (a little over 2 years), but even so, later-adopted children had poorer self-control than earlier-adopted children. And this study found that children adopted from well-run but emotionally deprived institutions between 13-24 months old had significantly more behavior problems than children adopted from similar institutions before 13 months. (Interestingly, though, few studies have shown any lasting effect of trauma occurring before 6 months, even though some studies show short-term effects. It seems that experiences in later infancy and toddlerhood can completely reverse the effects of experiences before 6 months of age.)

In both humans and rats, conscious memories of trauma are only one part of the impact that trauma can have on an individual. Far more significant is the impact that extreme stress can have on brain structure and function - and since young children's brains are growing and changing much more dramatically than those of older children or adults, the effects of trauma on the brain can be even greater** in these children.

So don't discount the effect of a trauma the child was too young to remember. It could have reshaped the child's brain, changing the way xe thinks and feels in ways xe can't tie to any conscious memories.

* Adopted children are the best group in which to study the effects of early childhood trauma, because the change in home environment usually means the child suffers no trauma in later childhood that could confound the results.
** The study linked to, though old, is the only one I know of that directly compares adopted children who experienced good infant care and trauma in later childhood to children experiencing early trauma. Unfortunately, it's only available through institutional access. If you'd like to read it, let me know your e-mail address in the comments and I'll mail it to you.

Sunday, July 27, 2014

Daughters of Neanderthal Men - Haldane's Law, Culture, or Both?

Genetic research has now proven that Neanderthals and homo sapiens interbred, and that Neanderthal-human hybrids contributed to the human genetic pool. (I myself am 3% Neanderthal, according to the genetic test I had.)

Oddly enough, however, neither Neanderthal mitochondrial DNA (passed down from mothers to their children) nor Neanderthal Y chromosomes (passed down from fathers to sons) have survived in the human population. This implies that our Neanderthal ancestry doesn't come equally from all of the possible types of hybrid children. Instead, we appear to be specifically descended from women with Neanderthal fathers, who inherited human mitochondrial DNA and no Y chromosome. Why?

Two research articles I've seen have put forward two possible explanations. This article suggests Haldane's Law as an explanation. Haldane's Law is a pattern often seen in interspecies hybridization, where the heterogametic offspring (males in mammals) have poorer fertility than the homogametic offspring (females in mammals). This would predict that among Neanderthal-human hybrids, the daughters would be fertile, but the sons would be sterile. However, this doesn't explain why daughters of Neanderthal women (with Neanderthal mitochondrial DNA) didn't contribute to our genetic pool.

Other accounts have suggested cultural explanations, rather than biological ones. Perhaps, for whatever reason, daughters of Neanderthal men were the only hybrids who actually mated with humans. The others, rather than being sterile, either remained celibate or mated only with Neanderthals. But what would cause this pattern of behavior?

Both chimpanzees and bonobos, despite their behavioral differences, show female-biased dispersal - males stay with their home troop, while their sisters leave and find new troops to join. This study suggests that Neanderthals showed a similar pattern of sex-biased dispersal, with the men in a Neanderthal tribe being more related to each other than the women were. Modern human cultures vary quite a bit in this practice, but given our relatives, patrilocality was probably the ancestral pattern for us as well. So, for the sake of argument, let's assume both humans and Neanderthals were patrilocal.

It's also important to note that humans show two different mating strategies. As far as I know, every culture has a normative expectation that men will live with and support the mother (or mothers) of their children. (If someone knows of a culture where this is not the norm, let me know.) However, in most cultures, a subset of men buck this pattern, impregnating women (through rape, voluntary flings, or feigned commitment) and then playing no part in the support of the resulting children. Let's assume, for the sake of argument, that Paleolithic human men showed the same two patterns of behavior, fathering children both in and out of committed relationships. Maybe Neanderthal men did so as well.

The big question is - which strategies resulted in hybrids?

If men of both species - or just human men - took wives of the other species occasionally, then we'd expect their wives to come live in their tribe, and the hybrid children would grow up with their father's people. Patrilocality would predict that the sons would stay in the tribe, while the daughters went off to marry men of one or both species. Sons of Neanderthal women would have a human Y chromosome, and their Neanderthal mitochondrial DNA would not be passed on to their children. However, their sisters would pass on Neanderthal mitochondrial DNA. So unless human men were willing to marry Neanderthal women but not hybrid women (which seems unlikely), intermarriage should have led to Neanderthal mitochondrial DNA surviving in human populations.

Conversely, if men had sex with the other species but didn't marry them, then the hybrids would be raised by their mother, either as a single mother or with a stepfather of the same species as her (who may or may not have realized the kids weren't his - though hybrids would undoubtedly look pretty unusual). Children with Neanderthal mothers would have grown up among Neanderthals, and both genders would most likely have taken Neanderthal partners, with their descendants dying out along with the other Neanderthals. However, children with human mothers would have lived with humans and intermixed with humans.

This account neatly explains why no Neanderthal mitochondrial DNA survived in human populations. But why didn't Y chromosome sequences survive? After all, hybrid men with Neanderthal fathers would be expected to have married human women. But if we combine this account with Haldane's Law, then these men would have been sterile and left no descendants. Only their sisters succeeded in passing on their genes, mingling down the generations until everyone in their group had a little bit of Neanderthal ancestry.

Wednesday, June 18, 2014

Almost Human

Recently, I've been fascinated by human evolution, and the different species that were closely related to us.

We humans like to think we're special, so different from all the other species. We tend to see it in black-and-white - either you are a person, or you aren't. But in human evolution, it wasn't black-and-white. There was no one point in time when we became human. Instead, different traits of humanity appeared at different times, and depending on what you think is most crucial, you'd draw the line at different points. Compassion for others and basic tool use were most likely present before our ancestors split off from chimps and bonobos. Upright walking and smaller jaws distinguished Lucy and other australopithecines from the other apes, but their brains were mostly unchanged. Then brain sizes increased, and the first stone tools appeared, skillfully crafted by homo habilis. At first, our ancestors made only one kind of tool, but then came an explosion of tool-making diversity, and technological advancement began, with a steady improvement in tool designs over time. Then, deliberate burials, carved statues and cave paintings began to appear, suggesting the birth of imagination and religion.

There is a lot of disagreement over when certain crucial human behaviours appeared. It used to be thought that homo habilis was the first hominid to make and use tools, until we discovered that many primates make simple tools, such as stripping a stem of leaves to fish for termites. There has been a lot of debate about whether creativity and deliberate burial were unique to homo sapiens or could also be seen in Neanderthals and our common ancestor homo heidelbergensis. (My impression, from the research, is that all three species did this, but homo sapiens did it more extensively.) There has also been a lot of debate about language - when and how it first emerged. We used to think Neanderthals didn't talk, but genetic evidence suggests they did (and may have even used a tonal language!). Now, the bigger question is whether homo erectus could talk, and how well.

This debate, for many people, involves an element of looking for the crucial step, the crucial point at which we became 'fully human'. In this way, it mirrors how many people think about severely disabled people – where is the line between a person who struggles with X and Y and someone who is not really a person anymore?

When homo sapiens first appeared, we shared our world with four other hominid species - Neanderthals, Denisovans, homo erectus and homo floresiensis (nicknamed 'hobbits'). We're not sure where Denisovans fit in (one suggestion is that they were a cousin to Neanderthals), but both we and Neanderthals descended from homo heidelbergensis, which descended from homo erectus. Homo floresiensis, who were tiny little guys, were another branch off of homo erectus.

All of these species lived fairly similar lives, making tools, eating a mix of meat and plant products, living in small, tight-knit social groups. Two of these species were actually close enough that we could produce fertile offspring, although probably with difficulty. (DNA research suggests that only daughters with Neanderthal fathers and homo sapiens mothers contributed to the small percentage of Neanderthal DNA in all non-African people. Their brothers were probably infertile, and the reverse crossing may not have been viable or may have only produced sterile offspring.)

I sometimes wonder how we might see ourselves and other species differently, if those other hominids had survived as separate populations. If, as in many fantasy stories, we shared our world with other species who are different and yet so similar, would we see them as people, or as talking animals? Would we even see such a divide? Would we still think of ourselves as so special and unique, if our closest relatives were still around to show us how non-unique we were? Or would we just move the line over a bit?

It used to be that we did not see personhood in such a black-and-white way. In the medieval era, a nobleman was more of a 'person' than his wife, and both of them were more 'people' than their servants were. Their servants, in turn, were more 'people' than a different ethnic group would be. Personhood was a spectrum. Over time, this idea fell out of favour, mainly because it led to some vicious prejudice.

But in some ways, our current black-and-white divide isn't that good a concept to replace it with. I don't think the same ethical standards apply to me as to my cat. If we saw all species as having the same moral rights, then my cat would be no different from Jeffrey Dahmer – both of them killed and ate other living creatures, not because they had to do so to survive, but because they enjoyed it. But I think a cat killing and eating mice for fun is very different from a human killing and eating another human for fun.

But where it gets messy is when the difference is less clear. If we didn't see Neanderthals as people, how would we see their children? How much Neanderthal ancestry would you need, before you weren't considered a person? (Incidentally, I have 3% Neanderthal ancestry, which is the same percentage I'd have if I had one great-great-great grandparent who was a Neanderthal.)
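(As a rough check of that figure: each generation back halves an ancestor's expected genetic contribution, so a single Neanderthal great-great-great grandparent - five generations back - would, on average, account for about (1/2)^5 = 1/32, or roughly 3.1%, of my genome.)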

Well, let's say we did see Neanderthals as people. What about homo erectus? They had an average brain size about two-thirds of our own. There's no evidence that they buried their dead, or showed any sign of imagination. There's a lot of debate about whether they used language or not. If they did use language, they'd have conveyed much simpler ideas, and may have had a simpler language structure. And yet they made tools and may have used fire to cook their food. And they loved their families, cared for the sick, injured or disabled, and worked together to achieve common goals.

It would be so interesting, getting to know a homo erectus. But for many people, it would probably also be quite threatening. They were so similar to us, but at the same time, they were so different. In speciation terms, there is no evidence that we successfully interbred with them – either we couldn't interbreed at all, or all offspring that resulted were sterile. But a homo sapiens and a homo erectus could certainly become friends, if both were open to the possibility. What kind of friendship would that have been?

Sunday, June 15, 2014

A Look At the Grandroids Demo

Grandroids now has a playable, backers-only demo!

It's been out since April, and I was waiting to see if it would be OK to post about it on my blog, but now I believe it is.

I've been making a series of videos showing my tinkering with the game. You can watch them here:

So far, the creatures just have the visual system and the gait control in place, but it's amazing how alive they seem even with only that.

I've been tinkering with their inner workings to figure out how they walk. In the later videos, you can see me giving them brain-based limps, leg tics and slow-moving limps. It's a lot of fun.

Thursday, June 05, 2014

Euthanasia and the Slippery Slope

Normally, I hate slippery slope arguments. But in this case, I think it's warranted.

I'm OK with euthanasia being an option for terminally ill patients who are suffering a great deal. My problem with euthanasia is that, with our society's attitudes to disability, it will not stop there.

And I have evidence to back this up. Look at pets.

For pets, euthanasia is an accepted option - no controversy about whether or not it should be allowed. My own parents have euthanized several of our pets, when they were terminally ill and in a lot of pain.

For example, my dog Sasha, at age 12, was found to have breast cancer that had travelled all along her stomach and partly down one leg. Doctors told us there was little they could do - her condition was pretty much certain to kill her even if we tried aggressive treatment. They were willing to surgically remove some of the tumors, but if they removed them all, they'd cause such extensive damage that she'd probably die simply from shock. We chose to euthanize her.

Another pet we euthanized was my 20-year-old cat Timmy. He'd been bleeding from the mouth and meowing while trying to eat, and it turned out half of his jaw was one big tumor. Doctors told us that in other cases, when they'd removed a tumor like this, the cats had refused to eat and starved themselves to death. If they didn't remove it, it would keep growing and causing more and more pain until Timmy could no longer eat. We chose to euthanize him.

These are both cases where, if they'd been humans with the same issues, I would be in favour of allowing them the option of euthanasia. (Timmy's issues are unlikely to occur in any human, because part of his problem was a quirk of cat behavior that isn't present in humans, but Sasha's illness occurs in humans and, if not caught early enough, can cause the serious problems she had.) Unlike animals, most humans can tell us whether they want to fight the illness or not, and the decision should be up to the patient. But I don't have a problem with a woman with very advanced metastatic cancer choosing to die a bit earlier in a more comfortable way, when her death is pretty much guaranteed either way.

But we've also had some animals for which vets counselled euthanasia, and we refused. Most people, in our position, would have euthanized these pets, and missed out on the happy years they had left.

Charlie was a 15-year-old cat with diabetes. She'd been getting skinnier and skinnier, and then she got frostbite on her ear when it shouldn't have been cold enough for frostbite, so we took her in. We were expecting either 'she's fine, nothing to worry about' or 'there's nothing we can do, she's terminally ill'. Instead, we got a diagnosis of diabetes - a condition we knew full well was manageable with medication. The doctors taught us how to give her injections. At first, Charlie hated them, but over time we got better at giving them and she realized they made her feel better, so she tolerated them. And we had two more years of cuddling and purring and bumping against my Dad's recorder as he played. Her diabetes had no impact on her quality of life. At least, not until it killed her kidneys, and she died on the way to the vet.

Anja was a rat, about a year old. (Rats live around 2-3 years.) We're not sure exactly what happened, but our best guess is that one of the toys in her cage fell on her and broke her back. In any case, her hindquarters were paralyzed, and vets recommended we put her down. We refused, and they gave us advice on how to care for her. They told me to keep her isolated in a restricted area for a while so she could heal, and gave me advice on softer bedding that wouldn't hurt her as she dragged herself along it. But over a couple of weeks, her mobility improved dramatically, until her only impairment was an inability to jump or climb. We didn't bother changing to softer bedding, just put her back in her cage with her friend (rats are social, and get depressed if kept alone too long). She went on to live a happy life, filled with treats, teeth grinding and cuddles, until she died suddenly, most likely from old age.

Charlie and Anja had lives that were worth living, just as humans with diabetes or spinal cord injuries do. Even if Anja's recovery hadn't been so dramatic, we could've given her a good life. Rat advice websites state that some male rats develop progressive hindquarter paralysis in later life, and these rats still live a good life when their back legs can't move at all.

But I'm pretty certain that many people, if they'd had Charlie or Anja, would have put them down. Similarly, people often put down cats with cerebellar hypoplasia, but YouTube abounds with videos of happy CH kitties wobbling around. (Buddy, for example.) Our society has the attitude that disability, any disability, is a horrible fate. And if euthanasia were available, many people would approve of it even for minor conditions. Worse yet, many newly-disabled people, without giving themselves time to see if they can adapt and live well, would jump to the conclusion that their good life is over and seek euthanasia.

Another example of how many people will choose death over disability, even for very minor conditions, can be seen in abortion rates for prenatally diagnosed conditions. Most babies prenatally diagnosed with a disability, any disability, are aborted. In one study, out of 40 prenatally-detected cases of sex chromosome aneuploidy, 25 (63%) were aborted. To give some background, there are four types of common sex chromosome aneuploidies - XXX, XYY, XXY and monosomy X.

XXX and XYY are very mild, causing a slight drop in IQ (around 5-10 points) and a higher risk of learning disabilities and ADHD, as well as making kids a bit taller than expected. Many people with these two conditions are never diagnosed, because they have few or no symptoms and no one thinks to test their chromosomes.

XXY, also known as Klinefelter's Syndrome, is a bit more significant. In addition to the same traits seen in XXX and XYY, Klinefelter boys sometimes experience feminine body changes at puberty (breast growth, getting curves). These changes can be readily prevented by testosterone treatment. In addition, Klinefelter Syndrome usually causes infertility, which can't be prevented. Even so, many men are never diagnosed, or are only diagnosed in adulthood due to infertility.

Monosomy X, also called Turner Syndrome, is the only sex chromosome aneuploidy often detected at birth. It causes a distinctive set of minor physical changes, such as extra skin around the neck, which have no significant impact but allow doctors to spot the condition. In addition, Turner girls are infertile, shorter than expected, often don't go through puberty unless given estrogen treatments, and have an increased risk of congenital heart defects. Turner Syndrome also causes nonverbal learning disability and difficulty reading nonverbal cues.

Even the most severe of these conditions is easily managed, and poses no real impediment to a full and happy life. And two of these conditions are barely detectable at all. Yet when parents are given the option to abort, two-thirds of these kids will be aborted. Even a condition as mild as Turner Syndrome is seen as a serious enough problem to make not living at all preferable in their eyes.

And until this changes, I don't want yet another way for disability prejudice to kill us off.