In Hive Mind, Garett Jones draws on an array of research from psychology, economics, management, and political science to make the case that IQ scores are a strong predictor of national prosperity.
Garett Jones is Associate Professor of Economics at the Center for Study of Public Choice, George Mason University. Garett's research and commentary have appeared in The New York Times, Wall Street Journal, Washington Post, Forbes, and Businessweek.
Figures
Acknowledgments
Introduction: The Paradox of IQ
1. Just a Test Score?
2. A da Vinci Effect for Nations
3. James Flynn and the Quest to Raise Global IQ
4. Will the Intelligent Inherit the Earth?
5. Smarter Groups Are More Cooperative
6. Patience and Cooperation as Ingredients for Good Politics
7. Informed Voters and the Question of Epistocracy
8. The O-Ring Theory of Teams
9. The Endless Quest for Substitutes and the Economic Benefits of Immigration
10. Poem and Conclusion
Data Appendix
Notes
Bibliography
Index
JUST A TEST SCORE?
HERE'S THE MOST IMPORTANT FACT ABOUT IQ TESTS: skill in one area predicts skill in another. If a person has an above-average score on one part of an IQ test — the vocabulary section, for instance — she probably has an above-average score on any other part of the test. A thorough IQ test such as the Wechsler or the Stanford-Binet actually contains about a dozen separate tests. So check whether that person did well on the vocabulary test: if she did, she's probably better than average at memorizing a long list of numbers, she could probably look at the drawing of a person talking to a police officer and instantly realize that the officer is standing knee-deep in water, and she probably did better than average on the wood block puzzle.
That's the real surprise of IQ tests and other cognitive tests: high scores in one area tend to go along with high scores in other areas, even ones that don't outwardly appear similar. Psychologists often talk about the "general factor of intelligence," the "g factor," or the "positive manifold," but let's call it "the da Vinci Effect," since Leonardo's excellence spanned so many subjects from painting to clock design to military engineering. The da Vinci Effect means that our parents and grandparents are usually wrong when they tell us "everything balances out in the end" or "if you're weak in one area that just means you're stronger in another." When it comes to IQ tests — on average — if a person is stronger in one area, that's a sign the person is probably stronger at other tasks as well.
We'll return to the notion of the da Vinci Effect a lot, so it's a concept worth understanding well. The claim isn't that every relationship between mental skills is always strongly positive — there are always exceptions to every rule, just as there are people who smoke two packs a day and live to be ninety. But, as we'll see in this chapter, many of the most commonly recognized general skills have strong positive relationships, and it's rare to find any sort of negative relationship across large groups of people.
IQ tests are often the stuff of controversy. What can they really tell us? What can they actually measure? What real-world outcomes can they help us to predict? That's exactly what we'll discuss in this chapter. It's going to focus exclusively on studies done in rich countries, studies in which test subjects are reasonably healthy and have some prospect of a real education. And I make a claim that, in these settings, the mainstream of psychology is also comfortable making: IQ tests are a rough, imperfect measure of what the typical person would call general intelligence.
Of course, a test score is just a test score until we've seen real evidence that it predicts something beyond other test scores. But when we see that the da Vinci Effect turns up repeatedly during IQ tests in today's rich countries, we know we're getting closer to the real-world version of intelligence: the ability to solve a variety of problems, quickly recall different types of information, and use deductive reasoning in multiple settings. When ordinary people say someone is intelligent, they usually mean that the person has mental skills that span a wide range. They mean that that person's mental skills have at least a touch of the da Vinci Effect.
"True on Average"
I discuss a lot of facts in this book and make a lot of claims about general tendencies. It should go without saying but bears repeating when discussing the important topic of human intelligence: these statements are only true on average. There are many exceptions; in fact almost every case is an exception, with about half of the cases turning out better than predicted and half turning out worse.
It would be tedious if I had to repeat the phrases "true on average" or "this relationship has many exceptions" or "tends to predict" every single time I make a factual claim. So I won't. But remember: every data-driven claim in this book is only a claim about the general tendency, and there are always exceptions. Every person we meet, every nation we visit, is an exception to the rules — but it's still a good idea to know the rules.
Intelligence: As with Strength or Size, Oversimplification Often Helps
Suppose you were given a hundred computers and told your job was to figure out which ones were faster than others. There's one catch: you don't know the actual processor speed of any of the computers. How would you rank them? You might try running ten or twenty different pieces of software on each of them — a video game or two, a spreadsheet, a word processor, a couple of web browsers. For each computer, you could write down, on a scale of 1 to 100, how fast the computer runs each piece of software, and then average those numbers together to create a computer speed index for each computer. Of course, the process won't be entirely fair — maybe you unintentionally chose a spreadsheet program that was designed specifically for one type of computer — but it's a step in the right direction. Further, it's probably better than just trying out one or two applications indiscriminately on each computer for half an hour and then writing up a subjective review of each machine. Structuring the evaluation process probably makes it fairer.
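The speed-index idea can be put in a few lines of code. This is only a toy sketch: the machine names, program names, and 1-to-100 ratings below are all invented for illustration, not real benchmark data.

```python
# Toy sketch of the "computer speed index" described above: average several
# per-program speed ratings (1-100) into one number per machine, then rank.
# All machine names, program names, and ratings here are made up.

def speed_index(ratings):
    """Average a machine's per-program speed ratings into one index."""
    return sum(ratings.values()) / len(ratings)

machines = {
    "machine_a": {"game": 90, "spreadsheet": 85, "browser": 80},
    "machine_b": {"game": 40, "spreadsheet": 55, "browser": 45},
}

# Rank machines from fastest to slowest by their index.
ranked = sorted(machines, key=lambda m: speed_index(machines[m]), reverse=True)
```

A single averaged index hides detail (machine_b might still win on one specific program), but as the text argues, it usually beats an unstructured subjective review.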
Now suppose you were trying to assess the overall physical strength of a hundred male Army recruits. You know that some people are great at carrying rocks and some are great at pushups and so on, but you also suspect that, on average, some people are just "stronger" than others. There will be tough cases to compare, but perhaps you could create a set of ten athletic events — call it a decathlon. People who do better in each event get more points. Wouldn't the people with the ten highest scores generally be quite a bit stronger — in the common sense of the word — than those with the ten lowest scores? Of course they would. There'd be an exception here and there, but the ranking would work pretty well. And here's a big claim you'll probably agree with: recruits who did the best in the decathlon would usually be better at other lifting-punching-carrying tasks that weren't even part of the decathlon. The decathlon score would help predict nondecathlon excellence.
Again, an index, an average, will hide some features that might be important. But for large, diverse populations, there is almost surely a da Vinci Effect for strength. It's not impossible for an adult male who benches only seventy-five pounds to be great at pullups, but it will be relatively rare. Usually, strength in one area will predict strength in others. Some people are on average "stronger" overall. You get the point: the da Vinci Effect comes up in areas of life other than discussions of mental skill. In these other, less sensitive areas, it's easy to see the value of a structured test. We get the same benefit by measuring intelligence in a structured way.
It was psychologist Charles Spearman who began the century-long study of the da Vinci Effect. In a 1904 study of students at a village school in Berkshire, England, Spearman looked at student performance in six different areas: the classics (works written in Greek and Latin), as well as French, English, math, discrimination of musical pitch, and musical talent. And while it's perhaps obvious that people who did better at French would usually be better at Greek and Latin, it's not at all obvious that people with better musical pitch would be substantially better at math — and yet that's what Spearman found.
But Spearman went further than that — he asked whether it was reasonable to sum up all of the data into just two categories: a "general factor" of intelligence, and a residual set of skills in each specific area. If you tried to sum up a person's various academic skills — or later, his test scores — with just one number, just one "general factor," how much information would you throw away? We do this kind of data reduction every time we sum up your body temperature with just one number. (You know you're not the same temperature everywhere, right?) We also do this when we sum up a national economy's productivity by its "gross domestic product per person" (which hides the various strengths and weaknesses of the medical sector, the restaurant sector, and so on), or even when we describe a person as simply "nice" or "mean." Whether the simplification works well is a practical matter — so how practical is it to sum up all of your cognitive skills on a variety of tests with just one number?
As it turns out, it actually works pretty well. Here's one way to sum it up for modern IQ tests: this "general factor," this "g factor," this weighted average of a large number of test scores, can summarize 40 to 50 percent of all of the differences across people on a modern IQ test. Some people do better on math sections, some do better on verbal sections, some do better on visual puzzles — but almost half the overall differences across all tests can be summed up with one number. Not bad for an oversimplification.
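The "one number summarizes roughly half the variance" claim can be illustrated with simulated data. The sketch below does not use real test scores: it invents six subtests that each load on a single latent factor (with loadings and noise levels chosen arbitrarily for the example), then checks how much of the total variance the first principal component of the correlation matrix captures.

```python
import numpy as np

# Illustrative simulation (not real IQ data): six subtests that each load on
# one latent "g" factor plus independent noise. The loading (0.7) and noise
# scale (0.7) are arbitrary choices for the example.
rng = np.random.default_rng(0)
n_people, n_tests = 2000, 6
g = rng.normal(size=(n_people, 1))              # latent general factor
loadings = np.full((1, n_tests), 0.7)           # each test loads 0.7 on g
scores = g @ loadings + rng.normal(scale=0.7, size=(n_people, n_tests))

corr = np.corrcoef(scores, rowvar=False)        # 6x6 correlation matrix
eigvals = np.linalg.eigvalsh(corr)[::-1]        # eigenvalues, largest first
share = eigvals[0] / eigvals.sum()              # variance captured by PC1
# With these made-up loadings, "share" comes out at roughly half, in the same
# ballpark as the 40 to 50 percent figure quoted for real IQ batteries.
```

The remaining eigenvalues correspond to the "residual set of skills in each specific area" — the information the single general factor leaves on the table.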
At the same time, this g factor in mental skills helps to explain why reasonable, well-informed people can dispute the value of IQ tests. On the one hand, it's great to know that one number can sum up so much. On the other hand, a little more than half of the information is still left on the table — so if you're hiring someone just to solve math problems or just to write good prose, you'd obviously want to know more than just that one overall IQ score. What the g factor can tell you is that your math expert probably has a good vocabulary.
Measuring Cognitive Skills: A Rainbow of Diverse Methods
It's worth noting that the most comprehensive IQ tests aren't like normal tests; they're structured more like interviews. Some skeptics dismiss IQ tests as just measuring whether you're good at staring at a piece of paper, coming up with an answer, and writing it down. But the comprehensive IQ test used most often today — the Wechsler mentioned earlier — involves little paper-staring and almost no pencils. The person giving the test (a psychologist or other testing expert) asks you why the seasons change or asks you to recite a list of numbers that she reads out to you. You answer verbally. Later you are handed some wooden puzzle blocks and you try to assemble them into something meaningful.
And on one section, you do actually take a pencil to mark down your answers. Your job on this "coding test" is to translate small, made-up characters into numbers using the coding key at the bottom of the page. The circle with a dot inside stands for 4; an "X" with a parenthesis next to it stands for 7. Code as many as you can in a minute or two. (Note that I am not using actual items from IQ tests here. I just use examples that are similar. One doesn't give away answers to IQ test questions.)
However, some more rudimentary IQ tests really are just written multiple-choice exams, and one of them plays an important role throughout this book and in economic research: Raven's Progressive Matrices. Take a look at Wikipedia's sample Raven's question (Figure 1.1): What kind of shape in the lower-right corner would complete the pattern? Fortunately, the real Raven's is multiple choice, so you needn't solve it yourself. In all these questions, the goal is to look for a visual pattern and then choose the option that completes the pattern.
The questions eventually get quite difficult. The lower-right corner is always blank, and you choose the best multiple-choice response. Raven's is popular because it can easily be given to a roomful of students at once (no need for one tester per student) and because it appears — and the emphasis belongs on "appears" — to have fewer cultural biases than some other IQ tests: the test doesn't measure your vocabulary, your exposure to American or British history, your skill at arithmetic, or any other obviously school-taught skill. Most people don't practice Raven's-style questions at school or at home, so training (which obviously can distort IQ scores artificially) might not be much of a concern.
Verbal Scores Predict Visual Scores Predict Verbal Scores
The g factor or da Vinci Effect means that your scores on one part of an IQ test predict your scores on other parts. But how well do they do that? Is it almost perfect? And if so, what does an "almost perfect" relationship look like in the real world? Here's one example: the relationship between the heights of identical twins. Identical twins are almost always almost exactly the same height as each other.
Throughout this book, when two measures have a relationship that strong, I'll call that a "nearly perfect" or "almost perfect" relationship. The two measures don't have to be recorded in the same units: the average monthly Fahrenheit temperature in Washington, D.C., has a nearly perfect relationship with the average monthly centigrade temperature in Baltimore, for instance, rising and falling together over the course of a year. Another example of a "nearly perfect" relationship is your IQ measured this week versus your IQ measured next week. A few people have exceptionally good or bad test days, but they're not common enough to weaken the nearly perfect relationship. Even more relevant: in one study, a person's adult IQ has an almost perfect relationship with his IQ five years later.
A slightly weaker but still strong relationship exists between the body mass index (BMI) of identical twins raised apart. BMI is a complicated ratio of weight and height that is used to measure whether people are over- or underweight. You can imagine why this relationship might be a bit weaker than the height relationship: some parents feed their kids more calories, some kids live in towns where sports are popular, and so on. But the rule that identical twins have similar BMI is still extremely useful. This is what we'll call a "strong" or "robust" relationship. This is like the relationship between your IQ when you're a teenager and your IQ when you're in middle age, at least in the rich countries. High scorers in tenth grade are almost always above-average scorers in middle age, with some doing noticeably better than before and some doing noticeably worse. Here, the exceptions are interesting, noticeable, an area for future research, but only a fool would ignore the rule. For instance, the link between national average test scores and national income per person is strong.
Slightly weaker relationships need their own expression, and we'll call those "modest" or "moderate" relationships. Here, big exceptions are extremely common, but if you're comparing averages of small groups of people, you'll still see the rule at work. An example we're all familiar with is the relationship between height and gender. Men are usually taller than women, but enormous exceptions abound: indeed, few would protest the statement "men are taller than women" because we all know it's just a generalization. These "modest" or "moderate" relationships sometimes exist between different parts of an IQ test or across very different kinds of IQ tests. For example, one study of third graders found a moderate relationship between a child's Raven's score and her vocabulary scores — but the same study found a strong, robust relationship between vocabulary scores and overall reading skills in the third grade, and by the fifth grade even the Raven's score had a robust relationship with reading skills. As people get older, the relationships across different parts of an IQ test tend to grow more robust.
This is one of the surprising yet reliable findings of the past century: visual-spatial IQ scores have moderate to robust relationships with verbal IQ scores, so you can give one short test and have a rough estimate of how that person would do on other IQ tests. My fellow economists and I have taken advantage of this aspect of the da Vinci Effect in our research. We often have test subjects take the Raven's matrices since it has a moderate to robust relationship with other IQ test scores and it's quite easy to hand out copies of the written test to groups of students.
Anything less than a "modest" relationship I'll call a "weak" relationship. That's like the relationship between height and IQ. The relationship is positive, but much taller people have only slightly higher-than-average IQs. The relationship isn't nothing, but it's an effect that will be noticeable only when you compare averages over large numbers of people. A group of women who are six feet tall will typically be just a little bit smarter than a group of women who are five feet tall, with the emphasis on "just a little bit." You should still do the job interview even if she walks through the door at 4'11".
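The strength labels above map onto familiar correlation coefficients. Here is a sketch with simulated (not real) data: three outcomes share the same underlying trait, and adding more independent noise weakens the measured correlation. The noise scales are chosen only to produce illustrative "nearly perfect," "strong," and "weak" relationships.

```python
import numpy as np

# Simulated illustration of the correlation-strength labels used in the text.
# Each outcome is the same latent trait x plus noise; more noise -> weaker r.
rng = np.random.default_rng(1)
n = 5000
x = rng.normal(size=n)                                # shared latent trait

nearly_perfect = x + rng.normal(scale=0.1, size=n)    # like identical-twin heights
strong = x + rng.normal(scale=0.6, size=n)            # like teen vs. midlife IQ
weak = x + rng.normal(scale=3.0, size=n)              # like height vs. IQ

for name, y in [("nearly perfect", nearly_perfect),
                ("strong", strong),
                ("weak", weak)]:
    r = np.corrcoef(x, y)[0, 1]
    print(f"{name}: r = {r:.2f}")    # roughly 0.99, 0.86, and 0.32
```

The "weak" case makes the text's point concrete: with r around 0.3, individual predictions are nearly useless, but averages over large groups still drift in the predicted direction.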
IQ Without a Test
Wouldn't it be wonderful if we could get a rough measure of someone's IQ, their average set of mental skills, without having to give any test at all? That way, all the arguments about test bias, language skills, and who went to a good school could fade into the background and we could have a useful, if only somewhat accurate, measure of a person's IQ. Fortunately, the past few decades have presented us with just such a measure, and it comes from an MRI machine. Yes, magnetic resonance imaging, the same device that's used to scan for tumors and heart disease.
Excerpted from Hive Mind by Garett Jones. Copyright © 2016 Board of Trustees of the Leland Stanford Junior University. Excerpted by permission of STANFORD UNIVERSITY PRESS.