Tuesday, May 17, 2016

We Can't All Get Along

I came across this article recently.

Unfortunately, comments were closed on it, explicitly because it 'could turn bad and hurt people' and the owner of the community 'wants all people who come here to feel safe to express something'. A strange justification for shutting down conversation about a serious issue, in my opinion. But in any case, that's why I'm posting a blog entry about it instead of just having my say in the comments.

Essentially, the article imagines a hypothetical disability conference that brings together everyone in the 'disability community' to speak their piece, with one rule - no one can say they're offended by anything. If you get offended, and show it, you'll be booed out of the hall. This is painted as a wonderful thing, but it sounds absolutely terrible to most self-advocates.

What the 'let's all get along' camp doesn't seem to get is that there really are irreconcilable differences. There are people in the 'disability community' who think people like me are better off dead. This is not hyperbole; this is serious. There are also people in the 'disability community' who openly endorse practices that I have nightmares about, practices that deeply wounded me as a child. (They weren't practiced under the same name, because I was undiagnosed, but the damaging aspects are the same.)

And I've had it relatively good, among autistic self-advocates. There are many people who have been hurt far worse than me. Think about it - this 'inclusive conference' would invite both Issy Stapleton and her mother - who, if you've followed the news story, you'll know was convicted of trying to murder Issy. This conference would place a survivor of attempted murder in the position of being asked to listen to her would-be murderer defend her actions, and to take no offense at that. And if she had the reaction that most victims of such a serious crime would have, she'd be 'booed out of the hall'.

There are also people who've been seriously hurt by less extreme and more widely accepted practices in the disability community. Practices such as restraints, ABA, genital examinations and others have been reported by self-advocates to have caused significant trauma and PTSD symptoms. It's not reasonable to expect someone not to get offended while you describe their traumatic experiences and declare that doing these things is a good idea. It would be an exceptional person who could stay calm under those circumstances.

If you're a supporter of LGBT rights, imagine a conference like this one, but about LGBT issues. And we invite everyone - not just LGBT people and PFLAG members and supportive doctors and therapists, but also ex-gay ministries, church leaders, parents who abuse their LGBT kids, therapists like Kenneth Zucker (who teaches trans kids to act their birth gender) - everyone who has an interest in LGBT people in any form.

And then you say that even if someone is endorsing practices that drove you to past suicide attempts or caused years of misery in your life, you're not allowed to get offended. If you act offended, your voice will be shut out.

Does that really sound like a good thing?

Thursday, April 28, 2016

Are We Getting Smarter?

I have a book somewhere called Are We Getting Smarter?. It's written by James R. Flynn, the man for whom the Flynn effect is named. For those of you who haven't heard of it before: essentially, average IQ scores have been rising every decade. It's estimated that, using an IQ test normed in 1997, the average IQ in 1932 would have been 80 (whereas by definition, the average for the population a test is normed on is 100).
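
Just to put a number on that claim: those figures imply a gain of about three IQ points per decade. A quick back-of-the-envelope sketch, using only the numbers cited above:

```python
# Back-of-the-envelope: the rate of IQ gain implied by the figures above.
old_score = 80          # estimated 1932 average, scored on 1997 norms
new_score = 100         # by definition, the norming population averages 100
years = 1997 - 1932     # span between the two time points

gain_per_decade = (new_score - old_score) / years * 10
print(f"Implied gain: {gain_per_decade:.1f} IQ points per decade")
# -> Implied gain: 3.1 IQ points per decade
```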

Flynn's argument is that this change does not reflect a real gain in intelligence. Instead, it reflects changes in test-taking ability. Essentially, increased exposure to tests - due to higher education rates, more frequent testing within education, and so forth - makes people better at taking tests, and this inflates their IQ scores.

This is certainly possible. But I doubt it accounts for all of the increase. There are a lot of reasons why we should actually be getting smarter, and not just better at taking tests.

First, nutrition. Severe malnutrition in infancy can lower a child's IQ by around 20 points compared to better fed children. In regions where many children are malnourished, a significant correlation between height and IQ is generally found, because both are reduced by malnutrition.

There's been an increase in the rates of obesity and obesity-related diseases throughout most developed countries. This is a concern, certainly, but the bright side of this change has been a decline in the rate of severe malnutrition in all age groups, including children. The historical increase in average height in these countries shows the same trend. On top of that, vitamins were discovered starting in 1912, and the first vitamin supplements appeared in the 1930s.

In addition, the nutritional status of infants and pregnant women has been subject to particular changes. Like all adults, pregnant women have seen an increase in overall nutrition. However, nutritional supplements have also been marketed to them far more heavily than to the general population, and they are more likely to take these supplements. Birth control has also improved the health and nutritional status of mothers. Larger families cost more to feed, and repeated pregnancies put more strain on the mother's body - especially if she's also breastfeeding. With birth control, women have more control over how many children they have and when, and typically choose to have a small number of children who are well spaced apart.

And speaking of breastfeeding, the ratio of bottle to breastfeeding has gone through two distinct transitions. Most of us today are taught that 'breast is best', but when formula was first invented, this was not necessarily the case. A malnourished mother will generally produce poorer-quality breastmilk, to the point where if she has access to clean drinking water and enough quality infant formula, her baby could very well be more healthy if formula fed. (In modern times, unfortunately, many malnourished mothers lack access to safe water and/or can't afford to buy enough formula to actually meet their child's needs. If the formula is watered down to make it last, or if the water used to mix the formula contains pathogens, breastfeeding is definitely better regardless of maternal nutritional status. However, due to the lack of birth control in that time period, many mothers from otherwise affluent backgrounds were malnourished purely because of back-to-back pregnancies.)

By the 1970s, however, maternal nutrition had improved substantially. It was around this time that the shift back to breastfeeding began, in part spurred on by evidence that breastfed infants appeared to be doing better than formula fed infants. Soon after, the first studies linking breastfeeding to higher IQ were performed.

Finally, on the health side, we've seen the adoption of public vaccination programs. The intended impact, of course, was to reduce the rate of serious complications from viral illnesses, such as congenital rubella syndrome and measles encephalitis (both of which can cause severe cognitive disability). Those complications are rare even when infection rates are high, and therefore would have only a minimal impact on average IQ. But illness also interacts with malnutrition. Fighting illness takes resources, and a child who is frequently sick will need to eat more when healthy to compensate for the work of fighting off infections. A child who is both malnourished and frequently sick will be in a worse nutritional state than a child with the same diet who has been vaccinated against those illnesses.

So changes in infant and childhood nutrition and health certainly can account for some increase in IQ in the past hundred years. But this isn't the only factor that we've changed.

Second, lead. From ancient times, lead has been used to make many different things, such as pipes for drinking water. The Romans used lead pipes despite at least some awareness of the dangers of lead poisoning. Medieval Europeans, on the other hand, seem to have forgotten that lead posed any danger at all, and this continued into the industrial and modern periods, when the uses of lead expanded from pipes to include paint, gasoline and other products.

The clinical symptoms of acute lead poisoning are severe and obvious, but such poisoning has always been fairly rare. However, chronic low-level lead exposure in the first five years of life has been shown to lower IQ slightly in children without any clinical symptoms of lead poisoning. Such exposure would have been nearly ubiquitous before we knew the dangers of lead, and has been declining steadily since we removed lead from gasoline and paint and began gradually removing existing lead sources from our lives.

And lead is not the only environmental toxin we have reduced our exposure to, though it's the best documented. While there are probably some new toxins in our environment that we don't yet know the risks of, overall, we have gotten much more careful about what we have in our food, drinking water and the air we breathe - especially for young children. Though we don't know the impact of many of these other toxins on IQ, it's likely that at least some of them can decrease IQ in children.

Lastly, there are also nonbiological environmental effects on IQ. During the first two years of life, the brain prunes unneeded synaptic connections to make room for the ones we will need. And one big determinant of whether a connection makes the cut is how much it's being used - for example, a 6-month-old baby can distinguish all the possible phonemes in all human languages, but by 9 months, babies can only distinguish phonemes that are important in the languages they've regularly heard spoken around them. (For example, a 9-month-old exposed only to Japanese will have lost the 'l'/'r' distinction.)

This becomes particularly crucial when we consider the most disadvantaged children in society. At the beginning of the last century, if a child was orphaned or their parents couldn't care for them, society's answer was to put them in an institutional setting. A child who spends the first few years of life institutionalized will frequently end up with an IQ in the borderline to mild cognitive disability range, as well as suffering a wide range of behavioural and emotional problems. Fortunately, with greater awareness of the impact of orphanages on child development, most developed countries have eliminated or greatly reduced their use - replacing them with foster care, which, while still problematic emotionally, does not have a noticeable effect on the child's IQ score.

But it's not just orphanages that can result in pruning important neurons because of insufficient stimulation. Among children living in family environments, children who are victims of parental neglect typically show a decrease in IQ compared to adequately cared for children. The most dramatic examples are children like Genie, who spent the first 13 years of her life in a single room, chained to a potty chair for most of the time. Although it's uncertain whether Genie's IQ was normal to begin with, her early pre-isolation development was definitely not consistent with the severe cognitive and language impairments she showed in her teens and adulthood.

Of course, cases like Genie's are extremely rare, but many more children are exposed to subtler neglect. For example, a parent who regularly leaves her baby to be babysat by a five-year-old sibling not only places the physical safety of both children at risk - it also means the baby is exposed to less adult conversation and less competent scaffolding of early interaction and play. Similarly, a parent who is suffering from serious depression tends to interact less with their infant, resulting in poorer language and social skills.

The good news is that exposure to child maltreatment is also decreasing. It used to be that children were not apprehended at all, even for the most severe abuse - only children who were orphaned or willingly relinquished wound up in state care. Since then, more and more children have been removed against their parents' will, including neglected children like the hypothetical five-year-old babysitter and infant sibling described above.

Even when children are not removed, their care is improving. Psychiatric treatments and parenting skills programs are more readily available and have become more effective. For example, before the invention of antidepressants, the only treatments for depression were hospitalization or expensive psychotherapy, neither of which was as effective. Several decades later, CBT was developed, and has since become the front-line psychotherapy for depression. Both antidepressants and CBT are as effective in treating depressed parents as in treating anyone else, and are certainly reducing the rate of infants exposed to chronic parental depression. Parent-specific programs, such as parental sensitivity training, have also expanded tremendously.

In addition, we have also seen an increase in programs aimed specifically at children. Head Start, a program that provides education and support to toddlers and preschoolers from low-income families, was first implemented in the 1960s. Participation in Head Start appears to improve cognitive ability in children. The same effect may also be seen among low-income children attending preschools in general - and preschool attendance has certainly increased tremendously.

Wednesday, April 13, 2016

Parenting Impact on Autistic Kids: You Can't Have Your Cake and Eat It Too

In the 1960s, autism (and childhood schizophrenia, which included many kids who'd now be diagnosed with autism) was thought to be caused by bad mothers. The theory referred to 'refrigerator mothers' - mothers so cold and distant that the child turned to autism as a coping strategy.

Now, of course, we know this is nonsense, and very hurtful to the mothers who were so unjustly blamed. Parents of autistic kids don't consistently differ from parents of non-autistic kids in their parenting skills. But many people in the autism community* go too far in the opposite direction - denying any impact of parents on their autistic kids.

What they don't seem to realize is that if parenting styles don't affect autistic kids, then a lot of autism therapies would also be useless, because these therapies involve deliberately and systematically doing things that some parents do on their own.

The easiest example is relational therapies such as DIR/Floortime. The interactional style that therapists take in these therapies is pretty much the same as the parenting dimension known as 'parental sensitivity' - a very well-studied parenting dimension that has a lot of important implications for child development in both typical and disabled children. Parents high in parental sensitivity tend to have children who are more securely attached, have fewer behaviour problems, have better social-cognitive skills, and even have slightly better language skills (especially if they have a disability affecting language development, such as deafness).

Based on this, we would predict that an intervention mimicking sensitive parenting behaviour should reduce behaviour problems, improve social skills and improve language development, as well as improving attachment security. At least two of those effects have been documented as a result of relational therapies, with this study, this study, this study and this study all finding improvements in social skills, and this study finding improvements in expressive language in autistic children receiving relational therapies. But in order for this treatment to work, autistic kids must also be affected by naturally occurring differences in parental sensitivity (such as differences due to the parent's own attachment style or marital conflict).

ABA is less easily equated to parenting styles, because there are two distinct aspects to ABA - direct teaching and prompting of skills, and consistent rewards and punishments to modify behaviour. In parenting styles research, those two components split up into separate domains of parenting behaviour.

The impact of consistent rewards and punishments has been very extensively studied under the dimension of consistent discipline. Children who get consistent discipline tend to show fewer behaviour problems and better attention and impulse control. So it stands to reason that ABA, which includes consistent discipline, would reduce behaviour problems, and the research supports this. But similarly, naturally occurring variations in how consistently parents discipline their children (such as differences due to parental depression) must also affect autistic children's risk of behaviour problems.

Parental teaching has been studied far less than consistent discipline. However, parental direct teaching is associated with improved emergent literacy skills, mathematics skills and earlier toilet training (most parents directly teach toilet training, but the parents who start earlier tend to have kids who are fully toilet trained earlier). So, at the very least, this suggests that ABA's direct teaching should improve academic skills and self-care skills. This study and many others have found that ABA improves self-care, but I couldn't find any data on ABA's impact on academic skills. Once again, if ABA can directly teach skills to autistic kids, individual differences in parents' tendency to directly teach skills must also affect their kids.

Almost all of the practices commonly used in autism therapy are also things that a subset of parents do on their own. So it's intellectually dishonest to claim that these therapies can affect autistic kids' development while insisting that naturally occurring differences in their parents' behaviour can't. And if you claim those treatments can cure autism (a claim that isn't supported by the data, by the way), then parents who do the same things spontaneously should logically also be able to cure autism (or prevent it - a really early cure is indistinguishable from prevention).

If parents can't cause autism, they also can't cure it. And neither can a therapist who only does things that many parents do anyway.

* Note - I'm using 'autism community' to refer to the community of mostly parents and professionals, while 'autistic community' refers to the community of mostly autistic adults.

Tuesday, March 01, 2016

Are Most Autistic People Low Functioning? The Answer is NO!

You often see this claim in the comment sections of news articles about high functioning autism. People say 'Well, they're the lucky few. Most people with autism are low functioning, like my child.' And then they go on to describe their child in the most negative light possible.

Which made me wonder - what's the truth? I always figured that high functioning is more common, just like mild cognitive disability is more common than severe cognitive disability, because we're the tail end of a normal spectrum. But does the data back me up on that?
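
Before digging into the data, it's worth spelling out what that normal-curve reasoning predicts. Here's a quick sketch, assuming IQ is normally distributed with mean 100 and SD 15 (real IQ distributions only approximate this - there's a known excess of very low scores due to organic causes, so observed rates run somewhat higher):

```python
from scipy.stats import norm

# IQ tests are standardized to mean 100, SD 15; assume a pure normal curve.
iq = norm(loc=100, scale=15)

below_70 = iq.cdf(70)   # conventional cutoff for cognitive disability
below_50 = iq.cdf(50)   # conventional cutoff for the moderate-to-severe range

print(f"IQ < 70: {below_70:.2%}")   # -> IQ < 70: 2.28%
print(f"IQ < 50: {below_50:.3%}")   # -> IQ < 50: 0.043%
```

On this model, scores between 50 and 70 outnumber scores below 50 by roughly fifty to one - that's the 'tail end of a normal spectrum' intuition in numbers.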

Now, it's important to keep in mind that low functioning autism tends to be diagnosed earlier and more easily than high functioning autism. So if we look only at already-diagnosed autistic people, low functioning people will be over-represented. I will be looking at autism screening studies that drew a random selection from the general population, finding ones that a) provide some information relevant to functioning level, and b) used a design capable of detecting both extremes of functioning (e.g. screened children older than toddlers, and did not select based on a sign of good functioning such as attending mainstream school). In addition, I will only be looking at studies published in 1990 or later, and they must have found at least 15 autistic people.

The next question is how to define functioning level. I've seen a number of definitions - IQ score, language level, adaptive functioning, even the presence or absence of certain behaviour problems. It gets complicated. In this analysis, I'll be focusing on IQ score, language level and adaptive functioning, using the following definitions:

High functioning autism (all three of these):
* normal or above average baseline verbal functioning (though may have nonverbal episodes due to overload or other issues)
* normal or above average IQ
* adaptive functioning is mildly impaired or better (note - studies usually find a gap between IQ and adaptive functioning among autistic people with normal IQ)

Low functioning autism (at least two of these):
* baseline minimally verbal or nonverbal (though may use AAC devices)
* IQ score less than 50
* adaptive functioning is moderately to profoundly impaired

Anyone who doesn't meet criteria for either group is medium functioning.

If data is given on only one of these metrics, I'll base my judgments of functioning level on that metric alone, and make it clear that I'm doing so. Unfortunately, I only found three relevant studies, and all reported solely on IQ.
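
To make the rule concrete, here's a minimal sketch of how I'm applying these definitions (the function and value names are mine, purely for illustration):

```python
def functioning_level(verbal=None, iq=None, adaptive=None):
    """Classify functioning level from the three metrics above.

    Each metric is 'high', 'mid' or 'low' (None = not reported); e.g.
    iq='high' means normal-or-above IQ, iq='low' means IQ below 50.
    """
    reported = [m for m in (verbal, iq, adaptive) if m is not None]
    lows_needed = 2 if len(reported) > 1 else 1   # single-metric fallback

    if reported and all(m == "high" for m in reported):
        return "high functioning"    # high on every reported metric
    if sum(m == "low" for m in reported) >= lows_needed:
        return "low functioning"     # at least two 'low' criteria met
    return "medium functioning"      # everyone in between

# With IQ-only data (as in all three studies), judgments rest on IQ alone:
print(functioning_level(iq="high"))   # -> high functioning
print(functioning_level(iq="low"))    # -> low functioning
```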

The first study was performed in a South Korean community in 2011. They screened both a random population sample of 7-12 year old children and a high risk sample, but I'll only discuss the general population sample here. They found a prevalence of 2.64% autistic kids in the general population sample, and ascertained 201 children.

In this study, functioning level data was based on IQ. The autistic kids from the general population sample had an average IQ of 98, which is clearly in the normal range - indeed, not significantly different from the general population average of 100. Only 16% of the children had an IQ below 70, with the percentage below 50 not being reported (note - in the general population, 3.5% have an IQ below 70). Meanwhile, 12% had IQs over 120, in the high-average to gifted range. Therefore, the proportion of autistic kids in this sample who are high functioning is estimated at 84%.

The next study screened children in two UK communities in 2001. They screened 2.5 to 6.5 year old children. They found a prevalence of 0.6% and ascertained 97 children.

In this study, functioning level data was based on IQ. They do not report the average IQ, but 25.8% of the children had an IQ (or DQ, for the younger children) less than 70, with the prevalence of IQ less than 50 not being reported. Therefore, the proportion of autistic kids in this sample who are high functioning is estimated at 74.2%.

The last study was performed in Toyota, Japan in 2008. All children were screened for autism at their 18-month and 36-month check-ups. They found a prevalence of 1.81% and ascertained 228 autistic children.

In this study, functioning level data was based on IQ. They found that 66.4% had an IQ of 70 or higher, and 16.1% had an IQ of 50 or less. Therefore, the proportion of autistic kids in this sample who are high functioning is estimated at 66.4% and the proportion who are low functioning is estimated at 16.1%.
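
Pulling the three studies together, here's a quick tabulation of the figures reported above (using an IQ of 70 or above as a rough proxy for not being low functioning):

```python
# Reported IQ figures from the three screening studies discussed above.
studies = [
    {"name": "South Korea 2011",   "n": 201, "pct_iq_70_plus": 84.0},
    {"name": "UK 2001",            "n": 97,  "pct_iq_70_plus": 74.2},
    {"name": "Toyota, Japan 2008", "n": 228, "pct_iq_70_plus": 66.4},
]

total_n = sum(s["n"] for s in studies)
weighted = sum(s["n"] * s["pct_iq_70_plus"] for s in studies) / total_n
print(f"Weighted average with IQ >= 70: {weighted:.1f}% of {total_n} children")
# -> Weighted average with IQ >= 70: 74.6% of 526 children
```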

The three studies all found a very high proportion of high functioning children, with 66-84% of the autistic kids having an IQ of 70 or above. While not all of these children will be high functioning according to my criteria, most probably are. In addition, the older the sample of children, the higher the proportion of high functioning children, suggesting that HFA may be more difficult to diagnose in children under age 4, or that some kids may move from medium-low functioning to high functioning during this period. However, even the Japanese study, which performed its second screening at 3 years of age, found a majority of children with average IQs.

Only the Japanese study provided data on how many autistic children had an IQ below 50, finding 16.1%. However, the other two studies almost certainly had even lower rates - particularly the South Korean study, in which only 16% had IQs below 70.

Clearly, those commenters are wrong. Descriptions of high functioning autism are actually a far better representation of the majority of autistic people than descriptions of low functioning autism. Despite the scare tactics used by many 'awareness' campaigns, most autistic people have an average IQ. The severe, low functioning end is actually a minority among autistic children.

Wednesday, February 10, 2016

Autistic or Person With Autism?

A Wrongplanet member by the name of tetris recently ran a survey asking autistic people a few questions about how they prefer to be described. The results are posted here, but I'll also summarize them below and compare them with the only similar study I have found, this UK study.

Tetris had 321 respondents and four questions, with each question being responded to by 318-320 people. Only a small proportion skipped any questions.

The first question was 'Do you prefer Autistic or person with autism?' In response, 292 people (91.82%) said they preferred 'autistic'. Only 26 people (8.18%) preferred 'person with autism'.

The second question asked 'Do you mind if people use person with autism? (When it is maybe used interchangeably with autistic, this is not necessarily when it is insisted upon)' In response, 101 people (31.56%) said they would not mind, and 219 people (68.44%) said they would mind.

The wording of this second question is a bit confusing to me. I had to do a double-take and read it over carefully, because in my experience, people sometimes answer 'do you mind?' questions in either direction (yes = 'you can do it' or yes = 'you shouldn't do it'). If others were similarly confused, it might make the results of this question unreliable. However, the results of this question do line up well with questions 1 and 3, so it might not be a major issue.

The third question asked 'Do you like it if people insist it should be person with autism?' This question, like the first one, got an overwhelmingly consistent response - 7 people (2.19%) said they liked it, 62 (19.38%) didn't care and 251 people (78.44%) disliked this.

It's interesting to note that, if we assume all those who disliked insistence on 'person with autism' self-described as 'autistic', that still leaves 13.38% (91.82% minus 78.44%) who use 'autistic' but don't mind others insisting on 'person with autism'. I wonder if these people have only a slight preference for 'autistic' over 'person with autism', and are happy using either term to describe themselves?

The fourth question addressed a different aspect of autism labeling - it asked 'Do you agree with functioning levels/labels? (LFA, HFA)'. In response, 52 people (16.25%) agreed with those labels, 56 people (17.50%) didn't care, and 212 people (66.25%) did not agree with those labels.
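
As a sanity check, all of these percentages can be recomputed from the raw response counts (counts as reported in the survey thread):

```python
# Recompute tetris' percentages from the raw response counts.
questions = {
    "Q1: 'autistic' / 'person with autism'":      [292, 26],
    "Q2: would not mind / would mind":            [101, 219],
    "Q3: like / don't care / dislike insistence": [7, 62, 251],
    "Q4: agree / don't care / disagree (levels)": [52, 56, 212],
}

for label, counts in questions.items():
    total = sum(counts)
    pcts = ", ".join(f"{100 * c / total:.2f}%" for c in counts)
    print(f"{label} (n={total}): {pcts}")

# Expected percentages:
#   Q1 (n=318): 91.82%, 8.18%
#   Q2 (n=320): 31.56%, 68.44%
#   Q3 (n=320): 2.19%, 19.38%, 78.44%
#   Q4 (n=320): 16.25%, 17.50%, 66.25%
```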

The UK study, meanwhile, asked people to tick off multiple labels from a list to indicate which ones they find acceptable for discussing autism. They studied autistic people, family members and professionals, but I will only discuss the results for autistic people. Note that they did not ask which terms the autistic people used to describe themselves, but more generally which terms they'd use to describe any autistic person.

To compare with tetris' first question, the UK study also found that 'autistic' was preferred over 'person with autism', but this preference was less pronounced - around 25% endorsed 'person with autism' and 60% endorsed 'autistic'.

Partly, this difference may be due to people being allowed to choose multiple options - so a person who finds both 'autistic' and 'person with autism' acceptable may have chosen both in the UK study, but would have had to choose between the two in tetris' study. This could explain the increased acceptance of 'person with autism' in the UK study, but it doesn't explain the reduced acceptance of 'autistic'.

However, the UK study also had 'autistic person' as an option (endorsed by 35%). It's possible that some people might be OK with being called 'an autistic person' but not with being called 'an autistic' (i.e. they're OK with 'autistic' as an adjective but not as a noun). The fact that these were two separate options in the UK study might have led people to read the 'autistic' option as implying use of that word as a noun, whereas the way tetris' first question is framed doesn't clearly imply either use. If we assume that everyone who chose 'autistic person' left 'autistic' unchecked, then the two together would make up 95% of the sample, similar to tetris' sample.

With regards to tetris' second question, the UK study found 25% endorsing 'person with autism', while tetris found that 31.56% did not mind if 'person with autism' was used. This similar percentage could imply that both questions are primarily tapping those individuals who use both terms interchangeably - probably preferring 'autistic', judging from tetris' question 1, but not strongly preferring that term.

The UK study did not have an equivalent to tetris' third question. But with regards to tetris' fourth question, the UK study did include 'high functioning autism' and 'low functioning autism' in their list of options. Approximately 30% of autistic respondents endorsed 'high functioning autism', while only around 5% endorsed 'low functioning autism'. This leaves 65-70% who did not endorse either term, lining up very well with the 66% in tetris' sample who did not agree with functioning labels. It is interesting, though, to consider that a substantial proportion of the UK sample were apparently OK with 'high functioning autism' but not 'low functioning autism'. I wonder what term(s) they'd prefer for the non-HFA individuals?

In any case, both studies agree on one important point - most autistic people prefer being called 'autistic people' rather than 'people with autism'. Although both studies found a few who preferred 'person with autism', over half of each sample endorsed 'autistic' or 'autistic person'.

So which shows more respect - to use a phrase deemed by non-autistic people to be 'respectful', or to describe people the way they want to be described?

Friday, February 05, 2016

Evolution is Not Incompatible With Religion, Part 1 - Why Literalism Is Wrong

I'm getting really sick of creationists. And the thing that especially bothers me is how they equate creationism with Christianity and evolution with atheism.

Well, there are plenty of Christians who believe in evolution, my own parents included. You don't have to hide your head in the sand and ignore the overwhelming evidence in favour of evolution, just to keep your belief in God. You can instead see evolution as the tool that God used to make all life, including us.

A related idea is that of taking the Bible literally - thinking every bit of it is the literal word of God. Which is quite frankly ridiculous, given the history of the Bible and its stories.

When I was a kid, at some sort of summer camp (might have been Girl Guides, I can't remember), one of the camp leaders led us in a game called 'telephone'. In that game, the kids sit in a row, and the leader hands the first kid in line a note. The kid reads the note and whispers what it says, word for word, in the ear of the kid beside him or her. Each kid down the line then whispers the message to the next kid, doing their best to copy it precisely.

Of course, the message seldom comes through exactly the same. Every single time we played this game, the message was changed - sometimes it was almost unrecognisable. And the same must be true of the Bible.

Historians are not certain exactly when the books of the Old Testament were written. We do know that by the time they were first written down, most of the tales they contained were already very old. Before the Bible was written, these tales were passed down by oral tradition. And no matter how precisely people tried to maintain these stories, the game of telephone shows us what happens when a message is passed from person to person orally - it gets changed.

Even siblings can disagree about what happened during a memorable childhood event - neither of them lying, but simply remembering the same events differently. And this can be seen in the New Testament, which was written by a mix of the Apostles and some of the early Christians. Most, if not all, of the New Testament was written many years after Jesus' death. Even the parts attributed to the original twelve Apostles don't all agree, just as any story told by many people will not perfectly agree. And other parts were written by men who joined the church later, such as the Apostle Paul (who was not one of the original twelve).

Furthermore, the Bible was not written in English. The Old Testament was written in ancient Hebrew, the language of the Israelites at the time, and the New Testament was written in Koine Greek (the common form of Greek used in the eastern half of the Roman Empire during the time of Jesus). Both testaments were translated into Latin around 400 AD (and though Greek translations of the Old Testament were available, the Hebrew version was used as the source, as it was felt to be more accurate).

This Latin version continued to be used throughout the Middle Ages, long past the death of Latin as a living language (by which point there were only second-language speakers of Latin). During that time, translation of the Bible into the vernacular was generally forbidden, for fear that a Bible that could be read by laypeople would be misinterpreted by them. Although a few translators broke this rule, it was not until 1611 that the most widely accepted English Bible, the King James Bible, was completed - commissioned in 1604 and translated from the Hebrew, Greek and Latin versions, with reference to earlier English attempts.

In the early 1600s, of course, English was spoken differently than it is now. This was during the lifetime of Shakespeare, and just as many people today find Shakespeare's plays hard to understand, they have similar issues with the King James Bible. So many versions of the Bible today have been translated yet again into a more modern form of English, often using the King James Bible as a base.

So, the Bibles owned by most people today are at best translations, and often translations of translations! This is important because translation is itself a source of error. Many concepts don't map perfectly across languages. I'm French-English bilingual, and I can think of some examples where French concepts don't map perfectly to English - for example, there are certain verb tenses that are not shared across the two languages. (A more relevant example is that in the form of Greek used in the New Testament, homosexuality and pedophilia were arguably both covered by the same word, arsenokoitai, making it unclear whether the Apostle Paul condemned gays, pedophiles, or both.) The more distantly related two languages are, the worse this mismatch becomes. While Greek, Latin and English are all Indo-European languages, Hebrew is a Semitic language, so the Hebrew-to-Latin translation of the Old Testament must have been especially tricky. This makes any literal interpretation of the English Bible especially prone to error.

Besides that, from the quotes of Jesus' parables, it's clear that Jesus did not speak literally. When he talked about a man sowing seeds that either grew or failed to grow, he wasn't just giving farming advice - he was drawing an analogy between seeds and believers. Since Jesus' parables are so clearly intended to be interpreted rather than taken literally, why would we expect the rest of God's word to be literal? Why can't Old Testament tales be just as figurative as Jesus' parables?

Thursday, January 28, 2016

How to Make Online Spending More (In)Accessible

Extra Credits recently released this video:

[embedded video]

In it, they discuss how the EU has recently passed some laws trying to protect children from predatory free-to-play games, and how children really aren't the big target of these games. For those of you not well versed in the game industry, free-to-play games are games which cost absolutely nothing to acquire - but you can spend money to unlock various upgrades, such as cosmetic changes, increases in power or more options.

While responsible free-to-play games expect most of their players to spend a small amount of money on those items of most interest to them, predatory free-to-play games attempt to get players to spend thousands of dollars on their game. As you might expect, most children aren't able to spend thousands on one game. Most parents don't allow kids free access to their credit cards. Children playing free-to-play games - or doing any other activity involving online purchases - usually have to clear their purchases with a parent, who is usually not invested in the game and therefore not likely to spend more than they can afford on it. Sure, there are exceptions, but this is not the real danger from free-to-play games.

The real danger is what free-to-play does to adults with weak executive functions. Maybe they're people using a game to cope with a mental health issue, and therefore not putting the game in its proper perspective; or people who have trouble understanding the value of money; or maybe just people who are impulsive and/or compulsive in most things they do. In other words, people like me - or like I would be, if I didn't know myself so well.

And it's not just free-to-play. All electronic purchases carry this risk. Electronic money isn't tangible. It's hard to get a visceral sense of how much - or how little - you have. If it's getting harder and harder to find the bills in my pocket, I know I'm running out of cash. But how can I get the same feedback from a card? With my bank card, I only know I'm out when I try to buy something and fail, which is why I prefer to pay cash instead.

With a credit card, you don't even get that much feedback - instead of declining a purchase when you run out of money, it'll just send you into debt. And by the time a credit card actually refuses a purchase, you'll be hundreds of dollars in debt.

So, how can we protect people who have trouble monitoring how much they spend? By making it more tangible and inconvenient to spend money. On my iPhone, I don't have a payment method programmed in for the app store. I do this deliberately, so that if I accidentally or mistakenly choose an option that costs money, it'll throw up an error. The one time I decided to spend money on the app store, I got a pre-paid card - it can be used in place of a credit card, but has only a very limited amount of money and can't have money added to it.

I programmed it in, and bought the apps I wanted, spending the card down to zero. Then I decided to surf the app store for some more free apps, and was horrified. The app store had stopped telling me the prices of the apps, or even whether they cost money at all. The first sign I'd get that an app cost money was an error popping up because the store had tried to charge my empty card. It was also unexpectedly difficult to make my iPhone forget about the card and revert to thinking I had no payment method.

To make online spending more accessible for people with poor executive functions, we need to (ironically) make it less accessible. Every purchase should be a deliberate decision, and one requiring several steps. It should require you to transition from your current activity to do something else in order to purchase. It should be an annoying process. Not difficult, but annoying. And most of all, you should always, always, know exactly what purchases you're making and how much they cost.
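
To make that concrete, here's a minimal sketch of what a deliberately friction-ful purchase flow could look like (entirely hypothetical - the steps and names are mine, for illustration):

```python
# Hypothetical checkout flow: every cost is explicit, nothing is silent,
# and confirming takes several deliberate steps.
def confirm_purchase(item: str, price_cents: int, balance_cents: int) -> bool:
    price = price_cents / 100
    balance = balance_cents / 100

    # Step 1: always show the exact price and the current balance up front.
    print(f"'{item}' costs ${price:.2f}. Your balance is ${balance:.2f}.")
    if price_cents > balance_cents:
        print("Insufficient funds - purchase blocked, no silent debt.")
        return False

    # Step 2: make the buyer type the price back, not just tap 'OK'.
    typed = input(f"Type {price:.2f} to continue, anything else to cancel: ")
    if typed.strip() != f"{price:.2f}":
        print("Cancelled.")
        return False

    # Step 3: show what will be left afterwards, then confirm once more.
    print(f"After this purchase you will have ${balance - price:.2f} left.")
    return input("Confirm? (yes/no): ").strip().lower() == "yes"
```

The specific steps don't matter much; what matters is that the cost, the remaining balance and the consequences are impossible to miss.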

Most companies are never going to do this of their own free will, of course. They want us to lose control of our spending. They want us to spend more than we can afford, to spend money on their product that we'd have otherwise spent on necessities or on building a brighter future. The fundamental truth about capitalism is that they don't really care about us. All that matters to them is lining their own pockets.

And so the only way that companies will do what I propose is if they're forced to.