Everything is great as long as the cities burn

July 10, 2020

When the George Floyd demonstrations began, there was a lot of justified concern that they would act as super-spreader events and set back the progress we have made against covid. (One couldn’t help but notice that the same people who had been highly critical of public gatherings just days before were fine with gatherings in what they deemed to be a good cause.) This paper argues that, against all expectation, the demonstrations actually slowed the spread of covid.

The paper has met with some skepticism, but the methodology seems reasonable to me. I think it’s very plausible. However, its results are widely misunderstood, and to some degree it is the authors’ fault.

The paper’s authors explicitly acknowledge that they did not even attempt to look at whether covid spread at the demonstrations. Instead, the paper uses cell-phone data to determine whether, on balance, the public congregated more during them. (ASIDE: The paper also looks a little at growth in covid cases, and that part is less convincing.) They found that the general public was so scared of the violence at the riots that they stayed home, and that effect outweighed the effect of the demonstrations themselves.

This makes sense. As large as the riots were, the vast majority of people did not participate, so even a modest negative effect among the majority could outweigh the riots themselves.
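To see how the arithmetic can work out, here is a back-of-envelope sketch in Python. Every number in it is an assumption chosen for illustration; none comes from the paper.

```python
# Back-of-envelope: a large contact increase among a small protesting
# minority vs. a small contact decrease among everyone else.
# All numbers below are illustrative assumptions, not estimates.

protester_share = 0.02            # assume 2% of the population protests
protester_contact_change = +0.50  # assume protesters' contacts rise 50%
public_contact_change = -0.05     # assume everyone else's contacts fall 5%

net_change = (protester_share * protester_contact_change
              + (1 - protester_share) * public_contact_change)
print(f"Net change in population-wide contact: {net_change:+.1%}")  # -3.9%
```

Even with protesters' contacts up by half, a 5% pullback by the other 98% swamps it.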

This is being spun as a defense of the demonstrations. Partly, that is the fault of the authors who asserted in their conclusion that “public speech and public health did not trade off against each other in this case.” This is a bizarre take.

Yes, public speech and public health may have coexisted, but the price of their coexistence was burning cities, many deaths, immeasurable property damage, and countless ruined lives. This is not a positive result.

The paper compounds the problem by failing to look at the key question: what happened when the riots settled down but the protests continued? At that point we would expect the negative effect to fade and the positive effect to dominate. Alas, the paper does not look at that question, so we don’t have the cell-phone data, but we can look at public data on covid spread.

According to the notes I took at the time, there was relative peace in the cities starting June 2. We expect a week’s incubation time between exposure and symptoms, so we would expect to see a surge in covid cases starting June 9. And, if we look at the data, that is exactly what we see.

To summarize, the burning of our cities masked the anticipated effect of the demonstrations on covid, but the anticipated rise happened as soon as the demonstrations became peaceful.

(Via Daily Wire.)


Heterogeneous measures

June 26, 2020

The CDC observes that the coronavirus outbreaks in Arizona, California, Florida, and Texas skew towards younger people, so the consequences will probably be less severe than those of the earlier outbreaks in the northeast:

Ongoing outbreaks of COVID-19 in Arizona, California, Florida and Texas are “significant,” but the younger average age of confirmed cases in these states might mean the “consequences” will be less severe, U.S. Centers for Disease Control and Prevention officials said Thursday.

In light of that, I wanted to revisit this paper out of CMU and Pitt. It shows that heterogeneous measures lead to significantly fewer deaths than homogeneous measures. In fact, heterogeneity matters much more than the strictness of the measures: even a more-moderate heterogeneous measure is significantly better than a stricter homogeneous measure. In plain English, what that means is we should be protecting the vulnerable tightly, but be looser with the less vulnerable. (And keep in mind this is just about deaths, without any consideration of ruined livelihoods or economic damage.)

This CDC observation suggests to me that these heterogeneous measures are finally happening. That’s a good thing.

POSTSCRIPT: Heterogeneous measures are better assuming they are not upside-down, as they were in New York and some other states, which locked down the general public tightly but pushed coronavirus patients into nursing homes. That was insanity.

(Via Instapundit.)


The scientific tea party

May 22, 2014

This will come as no surprise to actual Tea Party people, but a major shock to the left:

Yale Law professor Dan M. Kahan was conducting an analysis of the scientific comprehension of various political groups when he ran into a shocking discovery: tea party supporters are slightly more scientifically literate than the non-tea party population.

Shocking? Well, maybe to readers of the New York Times and the Huffington Post (i.e., liberals):

I’ve got to confess, though, I found this result surprising. As I pushed the button to run the analysis on my computer, I fully expected I’d be shown a modest negative correlation between identifying with the Tea Party and science comprehension.

But then again, I don’t know a single person who identifies with the Tea Party. All my impressions come from watching cable tv — & I don’t watch Fox News very often — and reading the “paper” (New York Times daily, plus a variety of politics-focused internet sites like Huffington Post & Politico).

I’m a little embarrassed, but mainly I’m just glad that I no longer hold this particular mistaken view.

(Via Monster Hunter Nation.)

In related news, conservative Republicans are the most likely to know that the earth revolves around the sun, and the least likely to believe in astrology. Liberal Democrats are the most likely to believe in astrology. Conservative and moderate Democrats are the least likely to know the earth revolves around the sun. (In fairness, liberal Democrats do decently well on heliocentrism.)

POSTSCRIPT: And, for the record: a rebuttal to a rebuttal of the Democratic astrology result.


The scientific method

March 1, 2014

The scientific method is about turning hypotheses into testable predictions, and then testing them. So who are the scientists here?

It turns out that a 200-year-old publication for farmers beats climate-change scientists in predicting this year’s harsh winter as the lowly caterpillar beats supercomputers that can’t even predict the past.

Last fall, the National Oceanic and Atmospheric Administration’s Climate Prediction Center (CPC) predicted above-normal temperatures from November through January across much of the continental U.S. The Farmers’ Almanac, first published in 1818, predicted a bitterly cold, snowy winter.

The Maine-based Farmers’ Almanac’s still-secret methodology includes variables such as planetary positions, sunspots, lunar cycles and tidal action. It claims an 80% accuracy rate, surely better than those who obsess over fossil fuels and CO2.

Now I can’t testify to the accuracy of any of these claims; they are — as they say — too good to check. But it is certainly the case that the climate scientists haven’t been making predictions that come true.

(Via Instapundit.)


STAP cells

February 9, 2014

Scientists appear to have discovered a new way to create stem cells without killing embryos that’s even easier than iPS cells (induced pluripotent stem cells). They found that simply exposing blood cells to acid turns them into stem cells. They call the process STAP (stimulus-triggered acquisition of pluripotency).


Concealed carry = fewer murders

January 18, 2014

Give law-abiding citizens the ability to protect themselves, and crime goes down:

Using data for the period 1980 to 2009 and controlling for state and year fixed effects, the results of the present study suggest that states with restrictions on the carrying of concealed weapons had higher gun-related murder rates than other states. It was also found that assault weapons bans did not significantly affect murder rates at the state level. These results suggest that restrictive concealed weapons laws may cause an increase in gun-related murders at the state level. The results of this study are consistent with some prior research in this area, most notably Lott and Mustard (1997).

Imagine that!


Naivete considered harmful

January 9, 2014

When I write about climate science, I take care to remember my limitations: As a computer scientist, not a climate scientist, I’m not really qualified to comment on the details of their work.

They would do well to do the same. Consider Gavin Schmidt’s naive remarks on programming:

when are scientists going to stop writing code in fortran?

as crimes go, using fortran is far worse than anything revealed in “climategate”…

[Response: You might think that, but it’s just not true. Fortran is simple, it works well for these kinds of problems, it compiles efficiently on everything from a laptop to massively parallel supercomputer, plus we have 100,000s of lines of code already. If we had to rewrite everything each time there was some new fad in computer science (you know, like ‘C’ or something ;) ), we’d never get anywhere. – gavin]

Fad? Structured programming has been the dominant paradigm since the late sixties. That’s only about ten years less than programming languages themselves have existed. Back when structured programming was invented, climate scientists were still talking about global cooling!


Global warming review

January 9, 2014

In honor of the passing of the polar vortex, I’d like to review what I think we know about the state of climate science:

  1. The earth is warming, probably as a result of human carbon dioxide emissions.
  2. The amount the earth has warmed in modern times so far, although measurable, is negligible. The warming trend we see (as in the famous “hockey-stick” charts) uses a smoothing function, and is much, much smaller than the year-to-year variation.
  3. Thus, anyone who says this or that weather is the result of global warming is a fool, a liar, or both. Generally climate scientists refrain from making such statements, but climate activists and politicians do not. (This problem seems particularly prevalent in Britain, where both parties’ leaders believe global warming is affecting current weather.)
  4. We can calculate the direct impact of increased carbon dioxide on the climate as a straightforward physics problem. The direct impact is small.
  5. Predictions of dire consequences are based on feedback loops. For example, warming causes ice to melt, which results in more clouds, which either increase or decrease warming depending on where and how they form. Climate scientists differ on whether positive or negative feedbacks will dominate, but the former camp (i.e., feedback leads to more warming) seems to be larger, and is certainly more influential and better funded.
  6. There is no way to test whether the positive or negative feedbacks dominate, and by how much, so climate scientists build models. Predictions of future climate are made on the basis of these models.
  7. Most climate scientists (at least the influential, well-funded ones) believe that the strong-positive-feedback models are more convincing.
  8. KEY POINT: However, there is no way actually to test the long-term predictions of these models without waiting for the long term. The short-term predictions of the models have generally not come true. (In fact, in 2005 Gavin Schmidt could only point to one instance in which a climate model made a prediction that was subsequently validated.)
  9. Consequently, we just don’t know what long-term effects increased carbon dioxide will have, scientifically speaking.
  10. However, we do know that cutting carbon dioxide emissions to a degree that would make a difference (according to the models) would not only be disastrous, but is literally impossible, barring an unforeseen technological advance.
  11. Thus, we ought to be looking at reasonable, cost-effective ways to limit CO2 emissions (e.g., nuclear energy, carbon sequestration), but not ridiculous ways (e.g., everything the left wants). We should also be looking at geoengineering in case the worst comes to pass.

Note that above I am not criticizing any of the work of any climate scientists. I simply don’t have the background to do it. Other people who do have the background have criticized their work (the media calls them “climate skeptics”), but I have no way to judge who is in the right. Thus, I’m relying on the consensus view (by which I mean actual consensus, not the consensus of just one side, which is how the media seems to use the term), and — in points 8 and 9 — my basic understanding of the scientific method.

One area in which I do have the background to criticize their work is in their programming. Much of climate science relies heavily on data sets that must be processed by computer. Unfortunately, it seems that their standard of programming is very low, at least if the story of HARRY_READ_ME.txt is typical. This means that the data they are using is suspect. (And, unfortunately, the raw data doesn’t exist any more!)

Worse, there is very good reason to worry that the academic process itself in climate science is badly broken:

  • Climate scientists refuse to share their data, and actually delete their data when they might be forced to release it. (They also lie about it, and even break the law.) From scientists this is astonishing and horrifying, and it tells us that we simply cannot believe their results.
  • Influential climate scientists have subverted the peer-review process. This corrupts their entire field, not just their own work.
  • On occasion, they tell outright lies.
  • Alas, there’s no indication that anything has improved in the field since all the above misconduct came to light.

So where does this leave us? Climate scientists have a tough job: they can’t run controlled experiments and their most important predictions can’t be tested. You can’t blame them for that. You can blame them for shoddy programming, and for academic misconduct.


CDC report debunks anti-gun tropes

August 19, 2013

After the Newtown massacre, President Obama ordered the CDC “to research the causes and prevention of gun violence”. He evidently thought that it would come back with a report supporting gun control, which makes him a fool, but at least a true believer. In fact:

On the contrary, that study refuted nearly all the standard anti-gun narrative and instead supported many of the positions taken by gun ownership supporters.

For example, the majority of gun-related deaths between 2000 and 2010 were due to suicide and not criminal violence . . .

In addition, defensive use of guns “is a common occurrence,” according to the study . . .

Accidental deaths due to firearms has continued to fall as well, with “the number of unintentional deaths due to firearm-related incidents account[ing] for less than 1 percent of all unintentional fatalities in 2010.”

Furthermore, the key finding the president was no doubt seeking — that more laws would result in less crime — was missing. The study said that “interventions,” such as background checks and restrictions on firearms and increased penalties for illegal gun use, showed “mixed” results, while “turn-in” programs “are ineffective” in reducing crime. The study noted that most criminals obtained their guns in the underground economy — from friends, family members, or gang members — well outside any influence from gun controls on legitimate gun owners.

And:

There was one startling conclusion which, taken at face value, seemed to give the president what he was looking for. The study reported that “the U.S. rate of firearm-related homicide is higher than that of any other industrialized country: 19.5 times higher than the rates in other high-income countries.” . . . [However:] “If one were to exclude figures for Illinois, California, New Jersey and Washington, DC, the homicide rate in the United States would be in line with any other country.” These areas, of course, are noted for the most restrictive gun laws in the country, thus negating any opportunity for the president to celebrate the report’s findings.

(Via Instapundit.)


Calorie labels don’t help

August 12, 2013

Research by CMU (my employer) finds that calorie information on menus does not induce people to consume fewer calories:

Despite the lack of any concrete evidence that menu labels encourage consumers to make healthier food choices, they have become a popular tool for policymakers in the fight against obesity. Carnegie Mellon University researchers recently put menu labels to the test by investigating whether providing diners with recommended calorie intake information along with the menu items’ caloric content would improve their food choices. . .

The results showed no interaction between the use of calorie recommendations and the pre-existing menu labels, suggesting that incorporating calorie recommendations did not help customers make better use of the information provided on calorie-labeled menus. Further, providing calorie recommendations, whether calories per-day or per-meal, did not show a reduction in the number of calories purchased.

Here’s a thought: Maybe people who order high-calorie food in restaurants are not ignorant of what they are doing. Maybe they are just making different decisions than the busybodies want them to make.

In any case, this research makes clear that the government should stop mucking with menus.


Unclean at any speed

July 22, 2013

Research suggests that electric cars are bad for the environment.


End the war on salt

July 22, 2013

The CDC has finally admitted that efforts to reduce people’s salt intake to nearly zero were completely misguided.

Others have known this for years.

(Previous post.)


End the war on salt!

May 23, 2013

Another blow to the war on salt:

In a report that undercuts years of public health warnings, a prestigious group convened by the government says there is no good reason based on health outcomes for many Americans to drive their sodium consumption down to the very low levels recommended in national dietary guidelines.

To save you the trouble of scrolling back through my archives, here’s the story on the war against salt: People are not all the same. Some people need more salt, some people need less. However, a small number of the people who need less salt can suffer serious health effects from having too much, more serious than the detrimental effects on people at the other end of the spectrum having too little.

Thus, the epidemiologists decided that if they were going to have a one-size-fits-all health policy (not having one never seems to have occurred to them), they should err on the side of less salt. But, having decided on the policy, they needed to punch up the rhetoric to make it effective, so they told everyone that they should cut salt nearly to zero, whether or not that was true for them as individuals. Thus, they “made a commitment to salt education that goes way beyond the scientific facts.”

(Previous post.) (Via Tom Maguire.)


Read it and weep

March 13, 2013

If you’re a fan of electric cars, you won’t like this:

A 2012 comprehensive life-cycle analysis in Journal of Industrial Ecology shows that almost half the lifetime carbon-dioxide emissions from an electric car come from the energy used to produce the car, especially the battery. . . When an electric car rolls off the production line, it has already been responsible for 30,000 pounds of carbon-dioxide emission. The amount for making a conventional car: 14,000 pounds.

While electric-car owners may cruise around feeling virtuous, they still recharge using electricity overwhelmingly produced with fossil fuels. Thus, the life-cycle analysis shows that for every mile driven, the average electric car indirectly emits about six ounces of carbon-dioxide. This is still a lot better than a similar-size conventional car, which emits about 12 ounces per mile. But remember, the production of the electric car has already resulted in sizeable emissions—the equivalent of 80,000 miles of travel in the vehicle.

So unless the electric car is driven a lot, it will never get ahead environmentally. And that turns out to be a challenge. Consider the Nissan Leaf. It has only a 73-mile range per charge. Drivers attempting long road trips, as in one BBC test drive, have reported that recharging takes so long that the average speed is close to six miles per hour—a bit faster than your average jogger.

It gets worse:

To make matters worse, the batteries in electric cars fade with time, just as they do in a cellphone. Nissan estimates that after five years, the less effective batteries in a typical Leaf bring the range down to 55 miles. As the MIT Technology Review cautioned last year: “Don’t Drive Your Nissan Leaf Too Much.”

And if you replace the batteries, you get to start your carbon footprint all over. Bottom line:

If a typical electric car is driven 50,000 miles over its lifetime, the huge initial emissions from its manufacture means the car will actually have put more carbon-dioxide in the atmosphere than a similar-size gasoline-powered car driven the same number of miles. . . Even if the electric car is driven for 90,000 miles and the owner stays away from coal-powered electricity, the car will cause just 24% less carbon-dioxide emission than its gas-powered cousin.

(Emphasis mine.)

But, electric cars put a lot of money into the pockets of Obama’s cronies, so they have that going at least.


Yes, Virginia

December 29, 2012

Yes, the 2008 financial meltdown was substantially caused by the Community Reinvestment Act, which:

  • Required banks to lend more to low-income communities.
  • Directed Fannie and Freddie to buy up mortgages and turn them into securities.
  • Directed Fannie and Freddie to buy up high-risk mortgages, thereby encouraging banks to make more high-risk loans.

The left is desperate to deny this, since the financial meltdown was their entire pretext, not just for staying in power despite an appalling economic record, but also for ruinous regulation of the financial sector. So far, with the help of their media allies, they have been largely successful at keeping the connection out of the public consciousness.

But that hasn’t kept economists from studying the subject, and a new paper shows a strong connection:

Did the Community Reinvestment Act (CRA) Lead to Risky Lending?

Yes, it did. We use exogenous variation in banks’ incentives to conform to the standards of the Community Reinvestment Act (CRA) around regulatory exam dates to trace out the effect of the CRA on lending activity. . . We find that adherence to the act led to riskier lending by banks. . . These patterns are accentuated in CRA-eligible census tracts and are concentrated among large banks. The effects are strongest during the time period when the market for private securitization was booming.

POSTSCRIPT: Because we ought to be reminded of it every few years, after the jump are some excerpts from the anti-prophetic 1999 LA Times article that covered nearly every aspect of the CRA that caused the financial crisis without seeing any problem with any of them:



Economics 101

November 4, 2012

It’s literally on the first day of a typical introductory economics course that students are taught that price caps lead to shortages. Shortages lead to non-price rationing schemes for what supply is available, such as long lines.

Unfortunately, our politicians seem to have less than one day of economics training. Laws against “price gouging” — that is, laws that forbid the market to adopt the market-clearing price dictated by supply and demand — are nothing more than price caps, and lead directly to shortages. We see this playing out once again in the wake of Hurricane Sandy:

Without “price gouging” laws, the price would rise, thereby encouraging distributors to ship more gas to the area, and also discouraging people from buying gas they don’t need. Also, it would put a stop to people waiting hours for gas.
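Here is a minimal sketch of that Econ-101 point, using assumed linear supply and demand curves; the numbers are illustrative, not estimates for any real gasoline market.

```python
# Price caps and shortages with toy linear supply and demand curves.
# All parameters are illustrative assumptions.

def demand(p):              # gallons demanded per day at price p
    return 10_000 - 1_000 * p

def supply(p):              # gallons supplied per day at price p
    return 2_000 * p - 2_000

p_market = 4.0              # market-clearing: 10,000 - 1,000p = 2,000p - 2,000
assert demand(p_market) == supply(p_market)   # 6,000 gallons trade, no shortage

p_cap = 3.0                 # an anti-gouging cap below the clearing price
shortage = demand(p_cap) - supply(p_cap)      # 7,000 demanded, 4,000 supplied
print(f"Shortage at capped price: {shortage:,.0f} gallons/day")  # 3,000
```

At the capped price, buyers want more gas than sellers are willing to deliver, and the gap gets rationed by waiting in line instead of by price.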


Bag bans kill

August 24, 2012

It’s always fascinating to observe through practice the hierarchy of liberal causes. For example: when it’s convenient to them, liberals would have you believe that safety is the top priority for society (“think of the children!”). But while liberals favor safety over nearly any individual liberty, it’s at nearly the bottom when measured against other liberal priorities.

Especially environmentalism. Leftists across the country are enacting bans or heavy taxes on plastic grocery bags, and brushing off the scientific evidence that the reusable tote bags that they prefer are unsafe because they create a fertile breeding ground for harmful bacteria.

For instance, in a story headlined “Bacteria May Grow In Reusable Grocery Bags, But Don’t Fret”, NPR argued that:

Dr. Susan Fernyak, director of San Francisco’s Communicable Disease and Control Prevention division, tells Shots, “Your average healthy person is not going to get sick from the bacteria that were listed.” . . .

San Francisco banned disposable plastic shopping bags three years ago. San Francisco’s bag ban hasn’t affected the rates of E. coli infection in town, Fernyak says.

That was a strange position to take, from people who usually think that the slightest possibility of danger justifies a ban. Moreover, according to the latest research it simply isn’t true. A study at the University of Pennsylvania found that food-borne illness did spike after San Francisco enacted its ban, and that people died as a result.


This just keeps getting better

July 28, 2012

The CFL bulbs that the government wants to force us to use can damage human skin. Great.

(Via Instapundit.)


The salt is a lie!

June 6, 2012

Gary Taubes has an op-ed piece in the New York Times blasting the war on salt. As I’ve noted before, the anti-salt campaign is far out in front of the actual scientific evidence, and has been for decades. One key quote:

When I spent the better part of a year researching the state of the salt science back in 1998 — already a quarter century into the eat-less-salt recommendations — journal editors and public health administrators were still remarkably candid in their assessment of how flimsy the evidence was implicating salt as the cause of hypertension.

“You can say without any shadow of a doubt,” as I was told then by Drummond Rennie, an editor for The Journal of the American Medical Association, that the authorities pushing the eat-less-salt message had “made a commitment to salt education that goes way beyond the scientific facts.”

And another:

An N.I.H. administrator told me back in 1998 that to publicly question the science on salt was to play into the hands of the [food] industry. “As long as there are things in the media that say the salt controversy continues,” he said, “they win.”

I should add that the war on salt has been personally detrimental to me and my family. Some people (like me) need more salt than most, and the anti-salt warriors have sometimes made it difficult to obtain.

POSTSCRIPT: The anti-salt campaigners were happy to tell us that the science was settled, when it wasn’t remotely settled. That’s something to keep in mind when other campaigners tell us the same.

(Previous post.)


Game theory

April 22, 2012

Awesome:

(Via Althouse.)


The scientific method, the social sciences, and proof

April 8, 2012

The New York Times has an opinion piece decrying the efforts of social scientists to employ the scientific method:

Many social scientists contend that science has a method, and if you want to be scientific, you should adopt it. The method requires you to devise a theoretical model, deduce a testable hypothesis from the model and then test the hypothesis against the world. If the hypothesis is confirmed, the theoretical model holds; if the hypothesis is not confirmed, the theoretical model does not hold. If your discipline does not operate by this method — known as hypothetico-deductivism — then in the minds of many, it’s not scientific.

Such reasoning dominates the social sciences today. . . But we believe that this way of thinking is badly mistaken and detrimental to social research. For the sake of everyone who stands to gain from a better knowledge of politics, economics and society, the social sciences need to overcome their inferiority complex, reject hypothetico-deductivism and embrace the fact that they are mature disciplines with no need to emulate other sciences.

There’s no question that the social sciences are handicapped by the difficulty of doing controlled experiments. But does that mean they shouldn’t even try? The authors offer no real argument in support of their position.

In fact, the argument that they try to make exposes a complete misunderstanding of the scientific method. They argue that the scientific method is unnecessary because of various examples in which the sciences made do with mathematical proofs in place of experiments.

Okay, great! But understand that mathematical proof is better than experimental evidence, not worse. Proof offers certainty, but it is available only in certain domains. Unlike mathematics, logic, and much of computer science, the physical sciences have to settle for experimental evidence, because they cannot get the certainty that comes from mathematical proof.

I’ll happily allow that social scientists, or any scientists, can set aside experimental testing of their hypotheses in favor of something better. But the authors aren’t arguing for that. Instead they make the unjustifiable leap to the notion that because experimental testing is not always necessary, we can settle for something worse.  That’s nonsense.


Radio does not cause cancer, but the WHO waffles

February 18, 2012

The Economist reports (subscription required):

Although the myth that mobile phones cause cancer has been laid to rest, an implacable minority remains convinced of the connection. Their fears have been aggravated of late by bureaucratic bickering at the World Health Organisation (WHO). Let it be said, once and for all, that no matter how powerful a radio transmitter—whether an over-the-horizon radar station or a microwave tower—radio waves simply cannot produce ionising radiation. The only possible effect they can have on human tissue is to raise its temperature slightly.

In the real world, the only sources of ionising radiation are gamma rays, X-rays and extreme ultra-violet waves, at the far (ie, high-frequency) end of the electromagnetic spectrum—along with fission fragments and other particles from within an atom, plus cosmic rays from outer space. These are the sole sources energetic enough to knock electrons out of atoms—breaking chemical bonds and producing dangerous free radicals in the process. It is highly reactive free radicals that can damage a person’s DNA and cause mutation, radiation sickness, cancer and even death.

By contrast, at their much lower frequencies, radio waves do not pack anywhere near enough energy to produce free radicals. The “quanta” of energy (ie, photons) carried by radio waves in, say, the UHF band used by television, Wi-Fi, Bluetooth, cordless phones, mobile phones, microwave ovens, garage remotes and many other household devices have energy levels of a few millionths of an electron-volt. That is less than a millionth of the energy needed to cause ionisation.

Unfortunately:

All of which leaves doctors more than a little puzzled as to why the WHO should recently have reversed itself on the question of mobile phones. In May the organisation’s International Agency for Research on Cancer (IARC) voted to classify radio-frequency electromagnetic fields (ie, radio waves) as “a possible carcinogenic to humans” based on a perceived risk of glioma, a malignant type of brain cancer. . .

The Group 2B classification the IARC has now adopted for mobile phones designates them as “possible”, rather than “probable” (Group 2A) or “proven” (Group 1) carcinogens. This rates the health hazard posed by mobile phones as similar to the chance of getting cancer from coffee, petrol fumes and false teeth.

The WHO’s unfortunate waffling gives alarmists somewhere to hang their hat. As a result, we get misleading stories such as one in the January 2012 Consumer Reports, which conveyed the impression of a risk by failing to observe that the “possible” designation was the lowest risk level, the same as coffee, and by allocating much more space to equivocal studies that failed to disprove a risk than to definitive studies that did disprove a risk.
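The Economist’s energy comparison is easy to check from first principles using E = h·f. A quick sketch; the 2.4 GHz frequency and hydrogen’s 13.6 eV ionization energy are just convenient reference points:

```python
# Photon energy E = h * f for a typical UHF-band signal, compared with
# the energy needed to ionize an atom.

h = 4.135667e-15        # Planck's constant in eV*s
f = 2.4e9               # 2.4 GHz: Wi-Fi / microwave-oven band

photon_energy = h * f
print(f"Photon energy: {photon_energy:.2e} eV")   # ~9.9e-06 eV

ionization = 13.6       # eV, hydrogen's ionization energy (a typical scale)
print(f"Fraction of ionization energy: {photon_energy / ionization:.1e}")
# ~7.3e-07, i.e., less than a millionth, just as the article says
```

A few millionths of an electron-volt per photon, against roughly ten electron-volts to break a chemical bond: the numbers in the quote check out.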


The unscientific method

February 15, 2012

As I’ve written before, the main problem with global warming alarmism isn’t the reconstructions of the Earth’s past climate (which I haven’t the scientific expertise to debate), but the predictions for the future. The basis of the scientific method is to generate testable hypotheses, and then test them. The models that climate scientists use to predict long-term changes to climate simply cannot be tested over the short term. As regards long-term changes, they are simply unscientific.

Worse, when those same models are used to make short-term predictions, which can be tested, they tend not to come true. Two cases in point:

First, a new report from the British Met Office and the (infamous) Hadley CRU shows that the Earth has not warmed since 1997. Now, any particular year is mostly noise, but this is 15 years without any trend.

(Via Instapundit.)

Second, a new study from the University of Alabama shows that snowfall in the Sierra Nevada hasn’t changed for 130 years:

The analysis of snowfall data in the Sierra going back to 1878 found no more or less snow overall – a result that, on the surface, appears to contradict aspects of recent climate change models.

(Via Instapundit.)

The science of global warming, at least as regards the future, isn’t settled. Strictly speaking, it hasn’t even begun.


Electric cars are not so green

February 14, 2012

Researchers have found that electric cars can be worse for the environment than conventional cars. Specifically, the study found that, in China, the pollution resulting from generating electricity for electric cars is worse than the pollution generated by gasoline-powered cars.

The trade-off is likely to be different in America, where we have cleaner electric generation, but it’s a reminder that environmental trade-offs can be much more subtle than the green industry would have us believe.

(Via Instapundit.)

UPDATE: I had forgotten that this isn’t the only recent study to question the green-ness of electric cars.


Today’s least-surprising scientific discovery

December 5, 2011

Yes, Virginia, policy uncertainty does suppress economic growth.


A stem cell success

November 27, 2011

Researchers have achieved the biggest success yet in stem cell therapy. And, like all actual progress in stem cell therapies (as opposed to political hype), it uses adult stem cells.

(Via Instapundit.)


“Balance” doesn’t work

October 30, 2011

A study published by the National Bureau of Economic Research looks at how various countries tried to deal with budget deficits. It finds that those countries that focused on spending cuts were successful, while those that attempted a “balanced” approach of tax hikes and spending cuts were not.

This is very timely, since Democrats are pushing the latter strategy. (Of course, even that would be an improvement over the usual Democratic strategy, which is to promise a “balanced” approach, but never deliver the spending cuts.)


Dry lab

October 24, 2011

Anthony Watts says that the greenhouse-effect lab experiment in Al Gore’s “Climate Reality Project” was faked. He charges that the video was bogus, and that the experiment could not be done the way the narrator (Bill Nye) claimed that it was. I find his evidence very compelling, especially for the former charge.

There’s no question that the greenhouse effect exists; Gore is on completely solid scientific footing for that at least. So why fake the experiment? I think the answer is pure showmanship. He wanted a very simple experiment to illustrate to the viewer how elementary the greenhouse effect is. Unfortunately, a legitimate demonstration of the greenhouse effect would be too complicated to serve the purpose. Rather than settle for reality, Gore preferred to fake the experiment.

POSTSCRIPT: Just to emphasize: Although this is a telling indication of Al Gore’s lack of honesty, it has no bearing on the global-warming debate. There’s no doubt that the greenhouse effect exists. We can calculate the direct effect of rising CO2 levels on the temperature of the Earth. That direct effect is modest. The real question is what happens next: Do the secondary effects amplify or counter the direct effects, and to what degree? It’s impossible to run an experiment, so no one knows.

(Via Instapundit.)


Lights out in Pakistan

October 24, 2011

The Economist had an article (subscription required) earlier this month about Pakistan’s disastrous electricity shortage. In the middle of the article was a little hint as to why the shortage exists:

Insufficient capacity is not even the biggest problem. That is a $6 billion chain of debt, ultimately owed by the state, that is debilitating the entire energy sector. Power plants are owed money by the national grid and the grid in turn cannot get consumers (including the Pakistani government) to pay for the electricity they use. This week, the financial crunch meant that oil supply to the two biggest private power plants was halted, because the state-owned oil company had no cash to procure fuel.

(Emphasis mine.)

Suppliers aren’t being paid and a shortage results? Imagine that!


Grains of salt required

September 30, 2011

Stories like this remind us to maintain a healthy skepticism about anything we hear from the IPCC:

The Intergovernmental Panel on Climate Change (IPCC), set up by the UN in 1988 to advise governments on the science behind global warming, issued a report [in May] suggesting renewable sources could provide 77 per cent of the world’s energy supply by 2050. But in supporting documents released this week, it emerged that the claim was based on a real-terms decline in worldwide energy consumption over the next 40 years – and that the lead author of the section concerned was an employee of Greenpeace. Not only that, but the modelling scenario used was the most optimistic of the 164 investigated by the IPCC.

(Via Instapundit.)

POSTSCRIPT: Since I haven’t had occasion to mention it for a while, my position continues to be that the science on historical climate change seems to be reasonably solid (with a few major exceptions), but the science that purports to predict the future is little better than guesswork.


Relativity in question

September 23, 2011

This would be big if it holds up:

A startling find at one of the world’s foremost laboratories that a subatomic particle seemed to move faster than the speed of light has scientists around the world rethinking Albert Einstein and one of the foundations of physics.

Now they are planning to put the finding to further high-speed tests to see if a revolutionary shift in explaining the workings of the universe is needed – or if the European scientists made a mistake. . .

CERN reported that a neutrino beam fired from a particle accelerator near Geneva to a lab in Italy traveled 60 nanoseconds faster than the speed of light. Scientists calculated the margin of error at just 10 nanoseconds, making the difference statistically significant.

(Via the Corner.)


In other news, the earth is still round

September 3, 2011

President Obama’s new appointee for chairman of the Council of Economic Advisers is Alan Krueger, known for an infamous study that argued that minimum wage increases increase employment in fast-food restaurants. The paper contradicted one of the two basic tenets of economics, the Law of Demand, which says that as the price of a commodity (labor in this case) increases, people buy less.

Liberals loved the study, but given its astonishing and absurd finding, they should not have been surprised when it turned out to be junk. Its data were essentially random. The study that debunked it (which is quite accessible to the layman), concluded:

The data base used in the New Jersey fast food study is so bad that no credible conclusions can be drawn from the report.

and:

These serious mistakes and omissions have resulted in a study doomed to become a textbook example of how not to collect data.

The Law of Demand and the Law of Supply are the two basic tenets of economic theory. They are literally taught on the first day in any introductory economics course. Appointing a demand-curve denier as the president’s chief economist is comparable to appointing a flat-earther to head the US Geological Survey.

(Via Instapundit.)


Keep your science off my Keynesian economics

August 25, 2011

Economist Robert Barro has written yet another take-down of Keynesian economics for the Wall Street Journal. In his latest, he observes that the theories being employed by the Obama administration are unencumbered by empirical validation.

(Via The Other McCain.)


Rally to the flag

August 23, 2011

A new psychology study finds (subscription required) that viewing the American flag pushes people to greater affinity for the Republican party:

The conclusion, which Dr Ferguson reports in a paper in Psychological Science, was that participants’ voting intentions were, indeed, affected by seeing the flag. The possible average scores on presidential voting intentions ranged from -10 (definitely voting for Mr Obama, definitely not voting for Mr McCain) to +10 (definitely voting for Mr McCain, definitely not voting for Mr Obama). The actual scores of those subsequently assigned to the two groups did not differ significantly the first time round. The second time, though, those who had been shown the flag were more weakly pro-Obama and more strongly pro-McCain, with a score of -3.0, than those who had not been shown the flag, who averaged -4.8.

For the political-party-warmth ratings, the potential score range was between -500 (extreme warmth towards Democrats, extreme cold towards Republicans) and +500 (extreme warmth towards Republicans, extreme cold towards Democrats). The team found that flag-viewers were cooler towards Democrats and warmer towards Republicans, with average scores of -90, while those who never saw a flag had scores that averaged -173.

The Economist also reports that earlier research on the Israeli flag suggested that seeing the flag might push people to more moderate positions, so one theory is that Republicans are subconsciously seen as more moderate than Democrats. As appealing as I might find that theory, I don’t think it’s right. Tim Groseclose’s work found that Barack Obama is only slightly more liberal (37.3 left of center) than John McCain is conservative (34.6 right of center), so voting for McCain is not a dramatically more centrist position than voting for Obama.


Scientific literacy and global warming

July 28, 2011

A new paper (well, new a month ago) finds that the more scientifically literate a person is, the less likely he is to believe that global warming poses a catastrophic threat.

Now, as much as global warming skeptics would like to draw the conclusion that the scientific evidence opposes global warming, that’s not what the study finds. It finds that in regard to the issue of global warming, scientific literacy tends to reinforce a person’s pre-existing biases: those predisposed to skepticism become more skeptical, and vice versa. The effect is stronger for the skeptics, which accounts for the overall finding.

What is interesting is that the effect does not appear to apply in general. In regard to nuclear power, the more scientifically literate a person is, the less likely he is to believe that nuclear power poses a safety risk, regardless of the person’s predispositions.

The paper notes that, in regard to nuclear power, the effect once again is stronger for those predisposed to think nuclear power is safe. Consequently, greater scientific literacy leads to greater polarization, just as with global warming. This leads the authors to the conclusion of their paper, which I won’t go into.

But curiously — unless I missed it when I skimmed the paper — they didn’t discuss, or even conjecture, why the effect was different for global warming and for nuclear power.

My guess is there are two effects being mixed together. One effect is the degree to which the scientific evidence points in one clear direction and people find that evidence convincing, and the second effect is the polarization effect that the paper emphasizes. The paper shows clearly that the first effect is not the whole story. But it seems likely to me that in nuclear power, the scientific evidence is so compelling that it dominates the polarization effect. Hence both populations move in the same direction. For global warming the polarization effect dominates, so both populations move in opposite directions.

This makes sense to me, because my reading of the climate science is that it does not support strong conclusions about the future either way. Global warming could be dangerous, or not; we just don’t know.

This is all conjecture. Hopefully someday someone will tease those effects apart.  But the one thing this study says clearly is that skepticism of global warming is not the result of scientific illiteracy.


End the war on salt!

July 21, 2011

A new study contradicts what nutritionists have been telling us for years:

For years, doctors have been telling us that too much salt is bad for us. Until now. A study claims that cutting down on salt can actually increase the risk of dying from a heart attack or a stroke. The research has left nutritionists scratching their heads.

Its findings indicate that those who eat the least sodium – about one teaspoon a day – don’t show any health advantage over those who eat the most.

ASIDE: I’ll note that this isn’t the first study to report findings such as this.

Personally, I welcome this news, whether it holds up or not. The truth is that different people need different amounts of salt, regardless of what the averages say. I’ve known for many years that I happen to be one of the people who needs more salt than most. Unfortunately, the anti-salt campaign has occasionally made it difficult to get it. Anything that hinders the anti-salt campaign is good for me.

POSTSCRIPT: There’s an amusing addendum to attach to this story. The New York Times, in its reporting on this story, shows that media bias is not limited to politics:

Low-Salt Diet Ineffective, Study Finds. Disagreement Abounds.

A new study found that low-salt diets increase the risk of death from heart attacks and strokes and do not prevent high blood pressure, but the research’s limitations mean the debate over the effects of salt in the diet is far from over.

The article continued with four paragraphs telling why no one should believe the study before it deigned to report what the study actually found.

UPDATE: According to Scientific American, the anti-salt campaign has always been on shaky scientific footing. For example:

For every study that suggests that salt is unhealthy, another does not. Part of the problem is that individuals vary in how they respond to salt. “It’s tough to nail these associations,” admits Lawrence Appel, an epidemiologist at Johns Hopkins University and the chair of the salt committee for the 2010 Dietary Guidelines for Americans. One oft-cited 1987 study published in the Journal of Chronic Diseases reported that the number of people who experience drops in blood pressure after eating high-salt diets almost equals the number who experience blood pressure spikes; many stay exactly the same.

Indeed. People are trying to cut my salt intake, even though I need more than average. Of course, this is always the problem with one-size-fits-all policy.

(Via Hot Air.)

UPDATE: In light of this article, I’m going to strengthen this post’s title.

UPDATE: I said that the New York Times’s hit piece on the salt study proves that media bias isn’t limited to politics, but on further reflection, I think it’s entirely political. The New York Times is, after all, located in New York, where the mayor has waged a high-profile war on salt (and just about anything else that people enjoy, it would seem). If New Yorkers learn that Bloomberg’s entire war on salt was based on false information, they might wonder what other infringements of their personal liberty are unnecessary and/or counterproductive.


Machines don’t need health care

June 20, 2011

President Obama says the problem with our economy is that we have too many machines replacing people. He specifically mentioned ATMs.

This is a classic economic fallacy. Capital, such as machines, makes labor more productive. One person can produce more using a machine than he can without it. That makes labor more valuable, not less. Any introductory economics class covers this. It’s sad that the president of the United States does not understand it.

Yes, technology can produce temporary disruptions as its ramifications work their way through the economy, but it has always been thus. It’s not a special problem now. (Besides, ATMs have been around for ages.)

Moreover, any disruption in today’s economy caused by technology is dwarfed by the disruption caused by Obama’s multiplication of the regulatory state. And that brings up the one sense in which the accumulation of capital could be seen as a negative sign: If the relative costs of labor and capital shift, so that labor becomes relatively expensive, businesses do have an incentive to substitute capital for labor. That is, capital does not cause labor to become less valuable (Obama notwithstanding, it’s quite the contrary), but capital can become more attractive if labor becomes less so.

This might actually be happening. As Tom Blumer put it, ATMs are exempt from Obamacare.
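A toy cost comparison makes the substitution mechanism concrete. Every number here is invented for illustration; only the direction of the logic matters.

```python
# Factor substitution: a per-worker mandate raises the relative price of
# labor and can flip the cost-minimizing choice toward capital.
# All figures are invented for illustration.

teller_cost = 30_000    # assumed annual cost of employing a teller
mandate_cost = 6_000    # assumed per-employee cost of a new mandate
atm_cost = 33_000       # assumed annualized cost of an ATM doing the same work

def cheaper(labor_cost):
    return "teller" if labor_cost < atm_cost else "ATM"

print("Before mandate:", cheaper(teller_cost))                 # teller
print("After mandate: ", cheaper(teller_cost + mandate_cost))  # ATM
```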

UPDATE: Another good rebuttal, along the same lines.


Arbitrary justice

May 25, 2011

Another reason not to trust the government (any government): The Economist (subscription required) reports on a new study that finds that even well-meaning justice systems can be quite arbitrary.

The government might deliver even-handed justice, but you certainly can’t count on it.


Whatever happened to Okun’s Law?

May 24, 2011

Jonah Goldberg points out that we are in the midst of the lousiest recovery on record:

Okun’s Law says that recoveries should come with a drop in unemployment, but our last three recoveries have been jobless ones. The weakness of the current recovery explains some of it, but it can’t be a complete explanation. Is Okun’s Law dead? What did we do to it?
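For reference, here is the textbook rule-of-thumb form of Okun’s Law for the US. The 3% trend growth and 0.5 coefficient are the commonly cited approximations, not estimates of mine:

```python
# Okun's Law, rule-of-thumb form:
#   change in unemployment rate (pp) ~ -0.5 * (real GDP growth - trend growth)

def okun_unemployment_change(gdp_growth, trend_growth=3.0, coefficient=0.5):
    """Predicted percentage-point change in the unemployment rate."""
    return -coefficient * (gdp_growth - trend_growth)

# A recovery year growing at 5% should cut unemployment by about a point:
print(okun_unemployment_change(5.0))   # -1.0
# The puzzle: recent recoveries grew above trend without any such drop.
```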

A little bit of googling found me an article from the New York Fed, written during the much-more-mild jobless recovery of 2001-2003. They found:

We advance the hypothesis that structural changes—permanent shifts in the distribution of workers throughout the economy—have contributed significantly to the sluggishness in the job market.

We find evidence of structural change in two features of the 2001 recession: the predominance of permanent job losses over temporary layoffs and the relocation of jobs from one industry to another. The data suggest that most of the jobs added during the recovery have been new positions in different firms and industries, not rehires. In our view, this shift to new jobs largely explains why the payroll numbers have been so slow to rise: Creating jobs takes longer than recalling workers to their old positions and is riskier in the current uncertain environment.

From a policy perspective, this suggests two things to me. First, there are a lot of reasons why the economy might be shifting from one industry to another, and there’s no way to stop that sort of trend (if we even wanted to). We should get it over with. What I mean by that is we should not be trying to prop up shrinking industries. We can’t do it in the long run, and our attempts serve only to push the adjustment process into recessions (reality sets in when times are tough).

Second, we should make it easier and less risky to create new jobs in new industries. That means encouraging investment and cutting regulation.

Unfortunately, our current administration is doing exactly the opposite of all the above.


Money, not ideology, drives green car sales

May 20, 2011

Autoblog reports:

Research conducted by Auto Trader suggests that money, not the environment, is the main driving force behind motorists’ interest in eco-friendly vehicles, at least in Great Britain. The majority of UK motorists (73 percent) would consider “going green” to save money on fuel, compared to just 41 percent of drivers admitting that environmental concerns would motivate them to purchase a greener vehicle.

This is good (if unsurprising) news for an efficient economy. In a competitive market, the best measure of the resources expended to make a product is price. When people base decisions on price, they are optimizing resource allocation.

The thing about price is it values all resources even-handedly, according to their scarcity. When environmentalists push us to spend more for a green product, they are encouraging sub-optimal resource allocation. Specifically, they want us to increase our use of more-expensive (and therefore more scarce) resources in certain categories (such as labor or precious metals), in order to conserve less-expensive (and therefore less scarce) resources in other categories (such as oil). Green advocates feel that the latter categories are more important despite being less scarce.

(Via Instapundit.)


CRASH and PROCEED

February 7, 2011

Since Glenn Reynolds commented on DARPA’s CRASH and PROCEED programs:

Well, my only thought is that if you have cyber-security that’s modeled on the human immune system, the result will be a generation of hackers expert in overcoming the human immune system. I can see longer-term problems with that . . . .

CRASH’s immune-system motivation is really just a metaphor; it has nothing to do with the human immune system, so you don’t need to worry about that. The most important thing about CRASH is that it pushes for “clean-slate” operating systems work, which means starting over rather than trying to make incremental improvements to existing faulty operating systems.

PROCEED would be a bigger deal if someone could figure out how to make fully homomorphic encryption practical. So far, no one has any idea how to do that. (There’s a somewhat layman-friendly explanation of what fully homomorphic encryption is about here.)
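To give a flavor of what “homomorphic” means, here is a minimal sketch using textbook RSA, which is homomorphic for multiplication only; fully homomorphic encryption extends the same idea to arbitrary computation. The key size below is absurdly small and completely insecure, purely for illustration.

```python
# Textbook RSA is multiplicatively homomorphic:
#   E(a) * E(b) mod n == E(a * b mod n)
# so a third party can multiply values it cannot read. FHE generalizes
# this to arbitrary functions. Toy parameters; utterly insecure.

p, q = 61, 53
n = p * q               # 3233
e = 17                  # public exponent

def encrypt(m):
    return pow(m, e, n)

a, b = 7, 12
assert (encrypt(a) * encrypt(b)) % n == encrypt((a * b) % n)
print("E(7) * E(12) mod n equals E(84): the multiply happened on ciphertexts.")
```

The open problem PROCEED targets is doing this for any program, fast enough to matter.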


“If Godzilla appeared on the Mall this afternoon, Al Gore would say it’s global warming.”

February 5, 2011

Heh, but making a serious point:

GORDON PETERSON, HOST: It’s been a terrible winter. If global warming is the problem, why are we having such a tough winter? Well Al Gore told Gail Collins of the New York Times there’s about a four percent more water vapor in the air now in the atmosphere than there was in the ’70s because of warmer oceans and warmer air, and it returns to earth as heavy rain and heavy snow. That’s what Al Gore says.

CHARLES KRAUTHAMMER: Look, if Godzilla appeared on the Mall this afternoon, Al Gore would say it’s global warming…

[Laughter]

…because the spores in the South Atlantic Ocean, you know, were. Look, everything is, it’s a religion. In a religion, everything is explicable. In science, you can actually deny or falsify a proposition with evidence. You find me a single piece of evidence that Al Gore would ever admit would contradict global warming and I’ll be surprised.

The study of historical temperature is science, even if the peer review processes that are supposed to ensure that sound science prevails have been seriously corrupted. It operates by proposing hypotheses and testing them against data.

The projections of future temperature are not science. They rely on models that cannot be tested in the short term because temperature is too noisy. It will take decades to gather enough data to substantiate or reject them. Neither can you test the models on historical data (even assuming accurate historical data), primarily because of the problem of over-fitting the data, and secondarily because the historical data lies in a different part of the curve than the projections are interested in.

When Al Gore says the science of climate change is settled, it’s not true. It would be more accurate to say the science of climate change has yet to begin.

(Via Instapundit.)


Dilbert would understand

January 27, 2011

The phenomenon wherein incompetent people don’t know enough to realize they are incompetent is easy to see, but I didn’t realize that it has a name (the Dunning-Kruger effect) and has been studied scientifically.

And yes, there’s a political angle.


Autism study was an “elaborate fraud”

January 8, 2011

CNN reports:

A now-retracted British study that linked autism to childhood vaccines was an “elaborate fraud” that has done long-lasting damage to public health, a leading medical publication reported Wednesday.

An investigation published by the British medical journal BMJ concludes the study’s author, Dr. Andrew Wakefield, misrepresented or altered the medical histories of all 12 of the patients whose cases formed the basis of the 1998 study — and that there was “no doubt” Wakefield was responsible.

(Via Instapundit.)

UPDATE: Wakefield planned to profit from the fraud. (Via Instapundit.)


Superwheat

December 15, 2010

Australian scientists have found that increased carbon-dioxide levels can dramatically increase crop yields. Higher carbon dioxide may also reduce crops’ need for water. Interesting.

(Via Instapundit.)


Finally

December 7, 2010

A serious, scientific look at what it takes to kill a zombie.


How not to do science

December 6, 2010

Are the TSA’s much-reviled body scanners safe? Some scientists aren’t sure. They say that existing analyses, which were designed for X-ray machines, aren’t appropriate to body scanners. X-ray machines distribute their radiation throughout the body, but body scanners apply it all to the skin, resulting in a much higher exposure than an X-ray-based analysis would suggest.

In response, the government was able to muster only this:

The FDA asserts that its method is correct. “This is how we measure the output of X-ray machines and how we’ve done it for the past 50 years,” says [FDA spokeswoman Kelly] Classic.

“We’ve always done it this way” is not a scientific rebuttal.

POSTSCRIPT: Some of the dubious scientists think the machines probably are safe (“You would have to be a heavy traveler to accumulate a large dose.”), but others aren’t sure (“At this point, until I knew more information, I’d tell people to take the pat-down.”).


Nuclear forensics

November 12, 2010

Scientists are learning how to trace the origin of a nuclear weapon by examining the aftermath of its blast. This is very important, since it seems all but certain that Iran will soon be a nuclear power. For us to have any hope of deterring Iran from turning nuclear weapons over to terrorists, we need the capacity to track those weapons back to them.

(Via Instapundit.)


The Fed considers deliberate inflation

October 9, 2010

The Federal Reserve is considering bringing back inflation, on purpose:

The Federal Reserve spent the past three decades getting inflation low and keeping it there. But as the U.S. economy struggles and flirts with the prospect of deflation, some central bank officials are publicly broaching a controversial idea: lifting inflation above the Fed’s informal target.

The rationale is that getting inflation up even temporarily would push “real” interest rates—nominal rates minus inflation—down, encouraging consumers and businesses to save less and to spend or invest more.

Idiots. We have established that this trick does not work. More precisely, it works only to the extent that you can trick people into thinking inflation will be low; otherwise people will simply compensate for the higher price level. More pithily: inflation does not stimulate the economy; only surprise inflation stimulates the economy.

The problem is that people quickly learn to expect inflation and the trick stops working. (And the fact that the Fed is being open about its plans doesn’t help!) Furthermore, once people start expecting inflation, they won’t believe you when you say you’re stopping. It’s very hard to put that genie back in the bottle again.
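
A toy calculation (all numbers invented) makes the point:

    # Only *surprise* inflation moves the realized real rate; once inflation
    # is expected, nominal rates adjust and the trick stops working.
    nominal = 0.05               # agreed nominal interest rate
    expected_inflation = 0.02    # what people planned around
    actual_inflation = 0.04      # the Fed delivers a two-point surprise

    planned_real = nominal - expected_inflation   # 3.0%
    realized_real = nominal - actual_inflation    # 1.0%: the one-time stimulus
    print(planned_real, realized_real)

    # Next period people expect 4% inflation and demand ~7% nominal,
    # so the ex-ante real rate is right back at 3%.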

(Via Instapundit.)

POSTSCRIPT: Of course, if the Fed wants to bring back inflation, it’s making a good start.


iPS cells without retroviruses

October 1, 2010

A team of scientists has invented a technique to convert adult cells to stem cells without using retroviruses. Previous iPS techniques used retroviruses, which make them dangerous for medical use, and scientists have been looking for alternatives. The new technique uses messenger RNA instead and seems to have no down side. And if that weren’t enough, it’s twice as fast and up to 100 times as efficient.

iPS cells are superior to embryonic stem cells because they don’t require the destruction of embryos, and because they can be acquired from the patient, thereby eliminating the risk of rejection.

(Via the Corner.)


Video games improve decision making

September 30, 2010

A new study has found that playing video games trains people to make correct decisions faster, and that skill applies to everyday activities as well. Interestingly, the effect applies to “action video games” but not “slow-moving strategy games”.

(Via Instapundit.)


Administration covered up abstinence study

August 27, 2010

Remember the old canard about how the Bush administration put politics ahead of science? Now we have an administration that actually does that, and not just in regard to economics and oil spills either. Officials at the Department of Health and Human Services tried to keep the lid on the results of an HHS study on sex and abstinence, even going so far as to deny a Freedom of Information Act request.

HHS was ultimately forced to release the study when faced with a deluge of FOIA requests. The study found that the vast majority of parents (no surprise) and adolescents (a bit more surprising) believe that sex should wait until marriage.

POSTSCRIPT: Incidentally, this is a different study than the Penn study published earlier this year that showed that abstinence education works.


This could be useful

August 25, 2010

A clinical study in South Korea finds that a drug is effective in treating Starcraft addiction.


Public jobs cost jobs

July 24, 2010

Veronique de Rugy writes:

In this paper, published in Economic Policy Journal, economists Yann Algan, Pierre Cahuc, and Andre Zylberberg looked at the impact of public employment on overall labor-market performance. The authors use data for a sample of OECD countries from 1960 to 2000, and they find that, on average, the creation of 100 public jobs eliminated about 150 private-sector jobs, decreased overall labor-market participation slightly, and increased by about 33 the number of unemployed workers.


Barro on the stimulus

March 5, 2010

Harvard economist Robert Barro has a piece in the Wall Street Journal estimating the results of the 2009 stimulus bill. He estimates the multiplier (that’s the amount by which the economy grows for a given amount of fiscal stimulus) at 0.4 for the first year and 0.6 for the second. Let’s call it 0.5 overall.

That means that $300 billion in government spending comes at the cost of $150 billion in reduced private-sector activity. So, if at least half the stimulus spending is worthwhile, society as a whole is better off, having obtained more than it gave up. On the other hand, if at least half of the stimulus is wasted, then society is worse off. You can judge for yourself which is more likely.

But that’s just one side of the ledger. Increased government spending comes at the price of higher taxes, either now or at some future date. Barro estimates the tax multiplier at -1.1. (He notes Christina Romer, the president’s chief economic advisor, has found the tax multiplier to be even worse.) That means that the taxes to pay for $300 billion in spending result in $330 billion in reduced economic activity.

Put these together and you get a combined multiplier of -0.6. That means that every $300 billion in fiscal stimulus makes society worse off by $180 billion. And that’s assuming that not one cent of the stimulus is wasted. If half of the stimulus is wasted (which strikes me as a bare minimum), we’re $330 billion worse off.
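
To put the arithmetic in one place (all figures are Barro’s estimates as quoted above, combined as described):

    # Net effect of $300 billion of stimulus under Barro's multipliers.
    spending = 300e9
    spending_multiplier = 0.5    # ~0.4 in year one, ~0.6 in year two
    tax_multiplier = -1.1        # for the taxes that eventually pay for it

    extra_activity = spending * spending_multiplier   # +$150 billion
    tax_drag = spending * tax_multiplier              # -$330 billion
    net = extra_activity + tax_drag
    print(net / 1e9)             # -180.0: a combined multiplier of -0.6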

In short, the 2009 stimulus bill was a complete disaster. And we just passed another one.

POSTSCRIPT: Interestingly, the Obama administration assumed a spending multiplier of about 1.5. It’s completely unsupported (they basically concede this), but it’s an interesting number. It means that after the -1.1 tax multiplier, we would still be 0.4 ahead, so the stimulus would better society if at least 60% of its spending were worthwhile. It strikes me that 40% is just about the lowest number that could possibly be argued for stimulus waste, so it seems as though the administration’s multiplier guess is the lowest number that could justify its plan. In other words, it seems that the Obama administration determined the number by working backwards from the plan’s desired outcome.


Cohabitation hurts

March 3, 2010

A new study finds that cohabitation before marriage hurts the likelihood of a successful marriage. Some people find this counterintuitive, but it makes sense to me. By living together without a commitment to stay together, you’re not practicing for marriage, you’re practicing for separation.

(Via the Corner.)


Lord Acton was right

February 6, 2010

Here’s the latest in the annals of unsurprising science. Scientists have devised a psychology experiment that supports the hypothesis that power corrupts.

What was a bit more surprising was a second finding: power corrupts people who believe they deserve their power. Those who didn’t believe it exhibited a strange phenomenon the scientists dubbed hypercrisy (i.e., the opposite of hypocrisy), in which they were harder on themselves than on others.

I’m not sure how significant the second finding is in practical terms. Should we choose our leaders by lottery? There’s a certain appeal to that, but I doubt it would really work out well. Also, I think that people pretty quickly come to believe that they deserve their good fortune.


Abstinence education works

February 4, 2010

A widespread belief among liberal social engineers is that abstinence education doesn’t work and “safe sex” is better. But no one had ever looked carefully at the subject, until now. A new study shows that the conventional wisdom has it completely backwards:

Sex education classes that focus on encouraging children to remain abstinent can persuade a significant proportion to delay sexual activity, researchers reported Monday in a landmark study that could have major implications for U.S. efforts to protect young people against unwanted pregnancies and sexually transmitted diseases.

Only about a third of sixth- and seventh-graders who completed an abstinence-focused program started having sex within the next two years, researchers found. Nearly half of the students who attended other classes, including ones that combined information about abstinence and contraception, became sexually active. . .

The study is the first to evaluate an abstinence program using a carefully designed approach comparing it with several alternative strategies and following subjects for an extended period of time, considered the kind of study that produces the highest level of scientific evidence.

The study shows not only that abstinence education works, but “safe sex” education is harmful. Students who were taught both abstinence and safe sex did slightly better than the control group but much worse than the abstinence-only group. Students who were taught safe sex only (not abstinence) did slightly worse than the control group:

Over the next two years, about 33 percent of the students who went through the abstinence program started having sex, compared with about 52 percent who were taught only safe sex. About 42 percent of the students who went through the comprehensive program started having sex, and about 47 percent of those who learned about other ways to be healthy did.

(Via the Corner.)


The psychology of extended warranties

January 3, 2010

New research attempts to explain why people buy overpriced, useless extended warranties.


Dense plasma focus

January 1, 2010

A new, highly preliminary, approach to fusion.


Vaccine allocation

December 26, 2009

According to a new paper, vaccines are being allocated the wrong way. Rather than using scarce vaccines on the population that is most at risk from a disease, the authors argue that they should be used on the population that is most likely to spread the disease. This would create herd immunity and save more lives overall.
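
A toy calculation shows why that can work. Under the standard proportionate-mixing assumption, the effective reproduction number scales with the contact-weighted susceptibility of the population. The sketch below (group sizes, contact rates, and dose counts are entirely my own invention, not the paper’s) spends the same vaccine doses two ways:

    # Under proportionate mixing, R scales with sum(n*s*a^2) / sum(n*a), where
    # n = group size, a = contact rate, s = fraction still susceptible.
    # All numbers below are invented for illustration.
    def r_eff(groups, r0=2.5):
        norm = sum(n * a for n, a, _ in groups)
        base = sum(n * a * a for n, a, _ in groups) / norm
        cur = sum(n * s * a * a for n, a, s in groups) / norm
        return r0 * cur / base

    # (size, contact rate, susceptible fraction)
    no_vax     = [(0.25, 4, 1.0), (0.75, 1, 1.0)]
    vax_atrisk = [(0.25, 4, 1.0), (0.75, 1, 0.55 / 0.75)]  # 20% of pop, low-contact group
    vax_spread = [(0.25, 4, 0.05 / 0.25), (0.75, 1, 1.0)]  # same doses, high-contact group

    print(r_eff(no_vax))      # 2.50
    print(r_eff(vax_atrisk))  # ~2.40: the epidemic still takes off
    print(r_eff(vax_spread))  # ~0.82: below 1, herd immunity

With these numbers, vaccinating the low-contact group barely dents the epidemic, while the same doses aimed at the high-contact group push R below 1, protecting the vulnerable indirectly.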


Transfer payment

December 18, 2009

A group of scholars at George Mason University has done an econometric analysis of spending under the stimulus bill. They wanted to see what factors influenced the allocation of stimulus funds: for example, whether variables such as unemployment or mean income had a significant impact, which might indicate that the bill was tailored to stimulate the economy.

Their analysis found no correlation between stimulus spending and any economic variable. In fact, only one variable had a clearly significant impact, and that was which party represented the district: Democratic districts obtained significantly more stimulus funding than Republican ones. Party was significant at the 99.9% confidence level (p < 0.001).

The other variable with some impact was whether the district was politically “marginal”, meaning that the district’s representative won by less than 5% of the vote. Politically marginal districts received less funding than non-marginal ones. Marginality was significant at the 90% level but not the 95% level.

Put these two together and you get a clear picture of how to get stimulus funding: be strongly Democratic. Weakly Democratic districts get less, and Republican districts get much less.

So the purpose of the stimulus travesty is empirically revealed. The biggest peacetime spending scheme in history is a massive transfer payment from Republicans and independents to Democrats.

(Via the Washington Examiner, via Instapundit.)


The benefits of openness

December 9, 2009

RealClimate.org this past week linked an older Real Climate post titled “Peer Review: A Necessary But Not Sufficient Condition”. The post makes the point that peer review is a necessary condition but not a sufficient condition for credibility, observing that bad papers sometimes make it through peer review. (The bad papers they want us to ignore are all skeptical of a human impact on climate change.)

In fact, I would say peer review is neither necessary nor sufficient for credibility. Bad and even horrible papers are sometimes accepted. Good papers are sometimes rejected. Some good papers, for one reason or another, are never submitted to peer review. So peer review is not a magic wand; it is simply a process that adds value by subjecting scientific work to skeptical scrutiny.

Anyway, one of the post’s main examples of the failure of peer review is a paper by Ross McKitrick and Patrick Michaels (a prominent global warming skeptic) that purported to find economic signals in the temperature records. (It doesn’t really matter what this means. The point is that the non-skeptics didn’t like it.)

It turns out that the work was flawed, because McKitrick and Michaels’s data was in degrees but their trigonometric library measured angles in radians. They acknowledge the error. (ASIDE: They also found that the correction does not affect the overall result. On the other hand, Real Climate alleges that there are other problems with the paper as well.)

But here’s the point: The error was discovered because McKitrick and Michaels made their code available! If they had withheld their code, no one ever could have found the error.
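
This class of bug is easy to reproduce. A minimal sketch, with a made-up latitude:

    # Classic unit bug: Python's math.cos, like most trig libraries, expects
    # radians. Feed it degrees and you silently get plausible-looking junk.
    import math

    lat = 45.0                            # a latitude, in degrees
    wrong = math.cos(lat)                 # cos(45 radians)  ~ 0.525
    right = math.cos(math.radians(lat))   # cos(45 degrees)  ~ 0.707
    print(wrong, right)                   # no error, no warning

Nothing crashes and nothing warns; the only way to catch it is for someone to read, or run, the code.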

Alas, withholding the code seems to be a common practice in the climate science community. The Hadley CRU would not release its code, and it took a leak to expose the fact that its code was broken. (And unlike the previous case, CRU’s code does not admit an easy fix, or perhaps even any fix at all.)

In climate science, the facts may be on the majority’s side (although I’m not as confident as I once was), but they can learn from the skeptics something about the process of science.


NASA conceals climate data?

December 5, 2009

A global warming skeptic alleges that NASA is concealing its climate data. We’ll see what develops, but in the wake of the climate scandal, one has to assume that the allegation is likely true.

(Via Instapundit.)


Fair is fair

December 1, 2009

Michael Mann is facing some embarrassment for a statement he made on RealClimate.org in 2004. In response to this remark:

Whatever the reason for the divergence, it would seem to suggest that the practice of grafting the thermometer record onto a proxy temperature record – as I believe was done in the case of the ‘hockey stick’ – is dubious to say the least.

He replied:

No researchers in this field have ever, to our knowledge, “grafted the thermometer record onto” any reconstruction. It is somewhat disappointing to find this specious claim (which we usually find originating from industry-funded climate disinformation websites) appearing in this forum.

But he was misinformed. As reported in the New York Times, that’s exactly what Phil Jones did when he prepared the cover graph for the 1999 World Meteorological Organization report. Jones discussed doing so in his infamous “hide the decline” email.

This is embarrassing for Mann, whose intemperate response to the suggestion that anyone in his field would have done such a thing now looks foolish.

It’s even more embarrassing to Jones and his defenders, because they are now trying to argue that Jones’s presentation was appropriate, and Mann’s intemperate response implicitly concedes that it was not. In fact, in Mann’s comments on the scandal, he says that Jones was not trying to fudge the data, but he stops short of defending Jones’s presentation of the data.

But Christopher Horner has gone further and accused Mann of dishonesty. He points out that Jones got the “trick” from Mann’s 1998 article in the journal Nature. How then could Mann claim to be unaware that anyone does it?

That isn’t fair. In his intemperate response, Mann went on to make a distinction:

Often, as in the comparisons we show on this site, the instrumental record (which extends to present) is shown along with the reconstructions, and clearly distinguished from them.

In other words, Mann says, it’s okay to graft the instrumental record onto the reconstruction, as long as it is clearly marked which is which. That’s what he did in the Nature article, plotting the instrumental record with a dotted line that did not exactly stand out, but was clearly distinguishable. That’s also exactly what Jones did not do in the WMO report.

(Via Instapundit.) (Previous post.)


Withholding the data

December 1, 2009

I thought I’d made my last remarks about the climate scandal, but another development has occurred that requires comment. First some background.

One of the serious criticisms of the climate science community that predates the leak of the CRU documents is its frequent refusal to disclose their data and methodology. Real Climate tries to argue that doing so is perhaps unfortunate, but no worse than that:

From the date of the first FOI request to CRU (in 2007), it has been made abundantly clear that the main impediment to releasing the whole CRU archive is the small % of it that was given to CRU on the understanding it wouldn’t be passed on to third parties. Those restrictions are in place because of the originating organisations (the various National Met. Services) around the world and are not CRU’s to break. As of Nov 13, the response to the umpteenth FOI request for the same data met with exactly the same response. This is an unfortunate situation, and pressure should be brought to bear on the National Met Services to release CRU from that obligation. It is not however the fault of CRU.

But that’s crap. CRU researchers have made it clear that they have no desire to release their data, and their reasons have nothing to do with agreements with third parties. For example, in 2005 Phil Jones (the now-embattled head of CRU) wrote to an Australian scientist named Warwick Hughes:

We have 25 years or so invested in the work. Why should I make the data available to you, when your aim is to try and find something wrong with it?

So Jones was not punctiliously observing his agreements with national met services; he was protecting his data from skeptical scrutiny.

That was known before the climate scandal hit. Now, the leaked emails [1107454306, 1106338806, 1212166714, 1155333435] make it even more clear that CRU was looking for a pretext not to release their data:

  • Phil Jones wrote of “hiding behind” various excuses for withholding the data (including agreements with third parties). He gives no hint that he really wants to release the data but unfortunately cannot. Quite the contrary.
  • Jones also suggests that he will “delete the file rather than send [it] to anyone”.
  • Tim Osborn wrote to people asking if their emails were intended to be confidential. When at first he didn’t get the desired response, he made his intention explicit: if they said yes, “we will use this as a reason to decline the [freedom of information] request”.
  • Keith Briffa wrote that he was just too busy to release his data, but “Will supply the stuff when I get five minutes!!” That was in 2005, and it seems it never happened.

That brings us to today. It is now revealed that CRU deleted their raw data:

SCIENTISTS at the University of East Anglia (UEA) have admitted throwing away much of the raw temperature data on which their predictions of global warming are based. . .

The UEA’s Climatic Research Unit (CRU) was forced to reveal the loss following requests for the data under Freedom of Information legislation.

The data were gathered from weather stations around the world and then adjusted to take account of variables in the way they were collected. The revised figures were kept, but the originals — stored on paper and magnetic tape — were dumped to save space when the CRU moved to a new building.

It seems safe to say that if they really wanted to release the data, they would not have deleted it. Real Climate’s claim notwithstanding, the “main impediment” to releasing the data is not the national met services; the “main impediment” is that the data is gone!

You might think that over at Real Climate they would be hanging their heads in shame, but no. They say that the data is not lost, because:

The original data is curated at the met services where it originated.

Terrific. The data wasn’t destroyed; someone could go collect it all over again.

This is disingenuous for at least four reasons:

  1. Who is to say that the met services still have the data? Governments (I still believe) are more corrupt than scientists. If CRU deleted the data, maybe some of the met services have as well. And barring that, maybe some of the data has been destroyed accidentally.
  2. If the data is still there, someone would still have to go collect it again. That would take ages in the best of circumstances; and with the debate as politicized as it now is, we are hardly in the best of circumstances.
  3. No academic is going to go to the effort to collect that data again. Academia runs on publications and you cannot publish something that’s been published before. Recreating the CRU data is not likely to result in an original research result, so no academic will take the time to do it. Those skeptics who might be so inclined generally don’t have the expertise to do it.
  4. Here’s the kicker. In any case, no one can reproduce the raw data set that CRU used. Even if someone re-collected all the raw data, realistically it would not be exactly the same data set. It’s not as though there is a definitive list of agencies, each of whom has a file called “the definitive historical meteorological data” that has been unchanged since CRU first collected it. And even if, beyond all probability, someone did manage to re-create the exact data set that CRU destroyed, they could never know that they had done so. Consequently, no one can analyze what CRU actually did with the data.

In short, the data is gone. It’s highly unlikely anyone will ever reconstruct it, and if they do, it won’t be exactly the same, so no one can ever check CRU’s work.

(Previous post.)

UPDATE: I agree with just about everything Megan McArdle writes here.


The climate scandal

November 28, 2009

As an outsider to the global warming debate, as nearly all of us are, it is hard to evaluate the claims and counter-claims. Some say there is a consensus. Is that true, and if so, is the consensus right or merely the product of group-think?

Last year, I was able to have a conversation with a mainstream (i.e., not a “skeptic”) climate scientist and got his account of the state of play in the field. He said that the evidence is very good that the climate is warming and that carbon dioxide levels are increasing, and that it seems very likely that humans are responsible. In regard to projections of future climate, he said that the direct effect of increased carbon dioxide is not very large, and estimating the indirect effects depends on computer models.

Unfortunately (this is my opinion now, not his), we cannot rely on the computer models, because they do not make predictions that we can test, so we really don’t have any good science for predicting the future. Nevertheless, we have a pretty good idea about the past.

That’s what I thought, but my confidence was shaken two months ago when I read a National Review article alleging that prominent climate researchers refused to reveal their data and methodology. The allegation is serious; if true, it completely undermines their work. We don’t accept scientists’ word for their results. Even if a scientist is honest, he might make a mistake. We need to be able to verify the results. Refusing to reveal the data and methodology is like a mathematician claiming a theorem and refusing to provide the proof. Such a result is worthless.

Even worse, the article alleged that the researchers withheld their data specifically to keep it from skeptical scrutiny, saying: “Why should I make the data available to you, when your aim is to try and find something wrong with it?” The answer is: because that’s how science works.

Still, the article seemed to be based on interviews with just one side, so perhaps they weren’t a fair account of what happened. I did some googling, but I was unable to verify the claims independently. So I waited, hoping something would come out to clarify matters. And now, of course, something has.

Unless you’ve been living in a cave without internet access, or you get your news from the mainstream media, you’ve heard of the leaked emails and documents from the Hadley CRU. They show that the National Review article was absolutely accurate. The Hadley researchers were determined to withhold their data, specifically to protect it from skeptical scrutiny [1228330629, 1059664704, 1074277559, 1254345174, 1256735067]. They were even prepared to delete it, rather than release it to the wrong people [1107454306, 1212073451].

The emails also show a deliberate campaign to corrupt the peer-review system. They discussed submitted papers (which are supposed to be confidential) and how to sabotage them [1054756929, 1077829152, 1233249393]. They worked to oust editors with a skeptical bent, or who were even suspected of a skeptical bent [1051190249, 1106322460]. They spoke explicitly of “plugging the leak” at journals that sometimes published the work of skeptics [1132094873]. So when people speak of a consensus among peer-reviewed research, it turns out that doesn’t mean as much as you might think.

But let’s return to the withholding of data and methodology, because it turns out they had much to conceal. I’m not sure if the data was included in the leak; if so I haven’t seen an analysis of it yet. However, the code is part of the leak, and the code, frankly, is complete crap.

One file, the now infamous HARRY_READ_ME.txt, chronicles the effort of one poor programmer (Ian Harris, according to Real Climate) to maintain the code base for one of their temperature databases. It documents endless problems with the code: subscripts out of range, segmentation faults, overflow (e.g., causing a sum of squares to become negative), underflow, division by zero, silently ignoring exceptions. Pretty much a complete disaster. (UPDATE: Good link as of January 2014.)
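
The negative sum of squares, at least, is easy to reproduce. A sketch (mine, obviously not CRU’s code) of 32-bit integers silently wrapping:

    # A sum of squares goes negative when 32-bit integers silently wrap:
    # 50_000**2 = 2.5e9, past the int32 maximum of about 2.15e9.
    import numpy as np

    x = np.array([50_000, 50_000, 50_000], dtype=np.int32)
    print(np.sum(x * x, dtype=np.int32))      # -1089934592 (!)
    print(np.sum(x.astype(np.int64) ** 2))    # 7500000000, the correct value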

ASIDE: The last item on that list, silently ignoring exceptions, is so appalling it’s worth a short look. (Fuller story here.) At one point in Harry’s tale he had to deal with the code that determines whether a station contributes to a cell (whatever that means). This amounts to determining whether two points are within a certain range of each other. Rather than do the necessary geometric calculation, the code uses the graphics library instead (!):

..well that was, erhhh.. ‘interesting’. The IDL gridding program calculates whether or not a station contributes to a cell, using.. graphics. Yes, it plots the station sphere of influence then checks for the colour white in the output.

But better yet, when IDL occasionally generates a plotting error, the code simply ignores it and moves on. (You can find this in documents/cru-code/idl/pro/quick_interp_tdm2.pro).
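
For comparison, the “necessary geometric calculation” the code avoids is a few lines of great-circle arithmetic. A minimal sketch, with made-up coordinates:

    # Great-circle (haversine) distance: is a station within range of a cell?
    import math

    def km_apart(lat1, lon1, lat2, lon2, radius_km=6371.0):
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp, dl = p2 - p1, math.radians(lon2 - lon1)
        h = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * radius_km * math.asin(math.sqrt(h))

    # e.g., London to Berlin, against a 1000 km radius of influence
    print(km_apart(51.5, -0.1, 52.5, 13.4) < 1000)   # ~932 km, so True

No plotting, and no checking the output for the colour white.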

There’s much, much more. Near the end of Harry’s tale of horror comes this:

I am seriously close to giving up, again. The history of this is so complex that I can’t get far enough into it before by head hurts and I have to stop. Each parameter has a tortuous history of manual and semi-automated interventions that I simply cannot just go back to early versions and run the update prog. I could be throwing away all kinds of corrections – to lat/lons, to WMOs (yes!), and more.

The bottom line is that nothing their code produces can be trusted.

Some people are delighted by all this. I am not. Instead, I am furious. As a libertarian, I would like to believe that global warming is a myth, but I have thought it unlikely that an entire scientific field could be wrong. But now, who’s to say? Since it now appears that the peer-review process in climate science is corrupt, an outsider cannot begin to assess which claims are valid and which are not; we can only assess which ones are in and which are out. I feel as though I’ve been lied to. (And perhaps I have. How can I know?)

It would be exaggerating to draw from this that there is no science of climate, but not by all that much. Here’s the thing: climate change matters, if it is real (and I still think it probably is, retrospectively at least). It would be useful for us to know something about it.

(Previous post.)


More on the Hadley scandal

November 24, 2009

The most troubling aspect of the scandal arising from the Hadley CRU emails is the perversion of the peer review process. True, there is no doubt that the Hadley researchers withheld their data from skeptical scrutiny, and it appears that they may have even fudged their data, but those offenses taint only their own work. But by subverting the peer review process, they have tainted their entire field. The documents make clear that the Hadley researchers and their correspondents worked successfully to oust journal editors who had views contrary to theirs, or who they even suspected had such views:

Proving bad behavior here is very difficult. If you think that Saiers is in the greenhouse skeptics camp, then, if we can find documentary evidence of this, we could go through official AGU channels to get him ousted.

Worse, they did not follow proper protocol in refereeing submitted papers. Submitted papers are supposed to be confidential until accepted for publication, but the emails are rife with discussion of papers under review. For example:

With free wifi in my room, I’ve just seen that M+M have submitted a paper to IJC on your H2 statistic – using more years, up to 2007. They have also found your PCMDI data -laughing at the directory name – FOIA? Also they make up statements saying you’ve done this following Obama’s statement about openness in government! Anyway you’ll likely get this for review, or poor Francis will. Best if both Francis and Myles did this. If I get an email from Glenn I’ll suggest this.

Examples such as these make it clear that the peer-review process is not working properly in climate science. Now, I don’t believe, as some do, that this scandal somehow washes away all the evidence that the earth’s climate is warming, likely due to human activity. There is still much convincing evidence that it is.

What the scandal does do is destroy any notion that there is a consensus among scientists on the matter. Whatever consensus might appear to exist is because some have conspired (I use the word deliberately) to silence opposing views. Climate science, as a field, needs to take affirmative steps to right its ship, and quickly.

As George Monbiot (an impeccably credentialed British environmental activist) put it:

It’s no use pretending this isn’t a major blow. The emails extracted by a hacker from the climatic research unit at the University of East Anglia could scarcely be more damaging. I am now convinced that they are genuine, and I’m dismayed and deeply shaken by them.

I’m also deeply disillusioned by RealClimate.org. In their view, the leak did not expose a problem in the field, but rather the leak itself is the problem, and they are focusing on explaining away the damning material in it. Worse is this email from the leak:

Anyway, I wanted you guys to know that you’re free to use RC [RealClimate.org] in any way you think would be helpful. Gavin and I are going to be careful about what comments we screen through, and we’ll be very careful to answer any questions that come up to any extent we can. On the other hand, you might want to visit the thread and post replies yourself. We can hold comments up in the queue and contact you about whether or not you think they should be screened through or not, and if so, any comments you’d like us to include.

You’re also welcome to do a followup guest post, etc. think of RC as a resource that is at your disposal to combat any disinformation put forward by the McIntyres of the world. Just let us know. We’ll use our best discretion to make sure the skeptics dont’get to use the RC comments as a megaphone…

(Emphasis mine.) This makes clear that Real Climate is simply not, as it is billed, an impartial scientific resource. Pity. It would have been nice to have such a thing.

I culled this information mainly from two summaries: by Charlie Martin and Iain Murray.

POSTSCRIPT: I want to note again that for those fighting ruinous government proposals like cap and trade, this is the wrong hill to die on. The work discussed here is primarily paleoclimatology, the study of past climate. What matters to policy is future climate, and the work in climate projection is very weak, as I discussed here. Our purposes are not served by making paleoclimatology the center of the debate.

(Via Instapundit.) (Previous post.)


Scientists behaving badly

November 22, 2009

In my previous post, I discussed my take on global warming. While climate scientists have learned much, the science of predicting the future climate is not done, contrary to what some activists claim. In fact, it would be more accurate to say that it has yet to begin. This isn’t the fault of the climate scientists; they are doing the best they can, but you simply cannot test long-term models in the short term.

That was my position until a couple of days ago. In broad strokes, my position has not changed, but I’ve been forced to revise one aspect. In light of the leaked emails and documents from the Hadley Climate Research Unit, I can only say that most climate scientists are doing the best they can.

A hacker broke into the computer system at the Hadley CRU and obtained hundreds of emails, data sets, and other files, and then released them on the internet. At first there was some question whether the files were genuine, but it now appears that they are authentic, or at least substantially so.

The files show that the Hadley scientists and their correspondents have been behaving in a very unscientific manner. The best summaries I’ve found are by John Hinderaker (here and here) and Bishop Hill. They show:

  • These scientists worked very hard to control access to their data, allowing access only to those who could be trusted to support their conclusions. They were willing to forgo publishing in journals that required data to be made public. They were even willing to go so far as to delete data rather than release it in response to Freedom of Information requests. (In light of this article, I’m not as shocked by this as I might have been.)
  • The scientists failed to observe proper protocol in refereeing papers.
  • The scientists plotted to discredit a journal that sometimes published articles skeptical of anthropogenic global warming.
  • The scientists took the debate personally and emotionally, even to the extent of celebrating the death of a prominent skeptic.

Some of the emails appear to admit fudging data, but we don’t know the context of the messages, so I wouldn’t be surprised to find that they have a benign explanation. I think it’s better to focus on the broader pattern.

The broader pattern is that the scientists at Hadley are behaving not as scientists but as activists. Rather than subjecting their data and methodology to scrutiny, they are hiding them from it. Rather than welcoming new ideas, they are fighting to silence them. Their behavior is utterly unacceptable, and it has tarnished their entire field.

The field of climate science now needs to take a serious look at itself and figure out how it can restore its credibility. Releasing all their data would be a good — indeed, essential — first step. Alas, if the folks at RealClimate.org are any guide, it’s not likely to happen. They argue that there’s nothing to see here; everything is fine. It’s not. If they think this sort of thing is okay, then their field has a bigger problem than just Hadley.

There are those of us who would like to defend the scientists and their work, even while we debate what can and should be done about it. But they’re making it a lot harder.

UPDATE: My point exactly:

Astonishingly, what appears, at least at first blush, to have emerged is that (a) the scientists have been manipulating the raw temperature figures to show a relentlessly rising global warming trend; (b) they have consistently refused outsiders access to the raw data; (c) the scientists have been trying to avoid freedom of information requests; and (d) they have been discussing ways to prevent papers by dissenting scientists being published in learned journals.

There may be a perfectly innocent explanation. But what is clear is that the integrity of the scientific evidence on which not merely the British Government, but other countries, too, through the Intergovernmental Panel on Climate Change, claim to base far-reaching and hugely expensive policy decisions, has been called into question. And the reputation of British science has been seriously tarnished. A high-level independent inquiry must be set up without delay.

There may be an innocent explanation for (a). I hope there is. But (b), (c), and (d) are certainly true, unless there is widespread forgery throughout the documents, and all indications are to the contrary. The climate science community (I’m thinking particularly of the Real Climate guys) needs to get in front of this and take a serious, public look at itself. If it persists in its failure to see how damning this is, it will seriously damage the field.

(Via Instapundit.)


The global warming debate

November 21, 2009

In the global warming (aka climate change) debate, I have been skeptical of both sides: those who say that the sky is falling and also those who say that the whole thing is a fraud. In my own conversations with climate scientists, I have found that they are much more careful and measured than the politicians and activists.

It is clear that carbon-dioxide levels are increasing and it seems likely that humanity is the principal cause. If we project future carbon-dioxide levels (an educated guess), it’s a simple physics calculation to determine the direct effect on the climate. The direct effect is not very large. But then there are the indirect effects. For example, slightly higher temperatures lead to polar ice melting, which leads to more water vapor, which either accelerates or counteracts climate change depending on the altitude at which the new clouds form. Predicting the future path of climate change depends entirely on the indirect effects, and we simply don’t know what they will be.

To try to guess the indirect effects, climate scientists have turned to computer models. Many of the models show dire consequences from increasing CO2 levels, and some do not. Which, if any, of the models is accurate we simply do not know. The models generally do not make predictions that we can test, and those few that do have not performed well.

So when professional alarmists like Al Gore say the science of climate change is complete, they have it almost completely backwards. If we’re talking about projecting the future (which is what matters to public policy, after all), it would be more accurate to say that the science has not yet begun.

I am not a climate scientist, but I am a scientist. Science happens when you pose a hypothesis and devise an experiment that can test the hypothesis. (Or, better yet, when you prove a theorem, but that’s not in the cards for climate science.) In the physical sciences you can never prove a hypothesis conclusively, but if enough experiments fail to disprove it, you start to consider it confirmed.

When it comes to climate projection, all the models tell us is what will happen if the world responds in a particular manner. They have not been tested against the real world, and they can’t be. This isn’t the fault of the climate scientists; they’re doing the best they can. You simply cannot test long-term predictions in the short term. One does not have to be a climate scientist to recognize this fact.

Now it might be that the potential consequences of global warming are so dire that we must undertake a program of remediation without knowing what will happen. But it is plainly dishonest to claim that the science is done when it is not. Moreover, when it comes to policy, we must also recognize that there is another side of the equation, the economic question of what is possible. And there is considerable evidence that proposed strategies to fight global warming simply cannot be accomplished with existing technology.

My personal opinion is that we should look at reasonable, cost-effective steps for controlling CO2 emissions, such as expanding nuclear energy and researching new technologies such as carbon sequestration. We should also look seriously at geoengineering in case the worst comes to pass.

POSTSCRIPT: This post was occasioned by the scandal arising from the leaked emails and documents from the Hadley CRU. I’ll be writing about that shortly. (UPDATE: Here.)

UPDATE: Richard Lindzen, a well-respected and impeccably credentialed climate scientist at MIT, has an op-ed about climate feedback (what I called indirect effects) here. (Via Volokh.)


Vaccination slows flu mutation

October 29, 2009

A new study finds that flu vaccination not only slows the spread of flu through the population; it also slows the rate at which flu mutates.


Cancer treatment advance

October 22, 2009

Science Daily reports:

Researchers at the University of Pittsburgh School of Medicine and the National Cancer Institute (NCI), part of the National Institutes of Health, may be hot on the heels of a Holy Grail of cancer therapy: They have found a way to not only protect healthy tissue from the toxic effects of radiation treatment, but also increase tumor death.

(Via Instapundit.)


Magnetic monopoles

October 16, 2009

According to an article in Popular Science, scientists have succeeded in creating magnetic monopoles (that is, magnets with only one pole). Monopoles have been the province of science fiction for so long, it’s staggering to think that they’re real now.

(Via Instapundit.)


Scientists make working liver cells from skin cells

October 12, 2009

Another exciting step forward for induced pluripotent stem cells. (Via Instapundit.)


The cash-for-clunkers disaster

September 22, 2009

Two economists have analyzed the cash-for-clunkers program and found that its economic costs outweighed its benefits by roughly $2,000 per car:

With per vehicle environmental benefits at $596 and the costs at $2,600 per vehicle, the clunker program is a net drain on society of roughly $2,000 per vehicle. Given the approximately 700,000 vehicles in the program, we estimate the total welfare loss to be about $1.4 billion. The welfare loss would be even greater if we fine tuned our estimate of the social cost per gallon to account for the spatial mix of clunkers. Clunkers, especially the trucks that comprise a large percentage of the traded-in vehicles, may have been retired disproportionately from rural locations where the social costs of pollutants are significantly lower. Also, if the average value of clunkers exceeds our conservative figure of $1000, then cost of the program would be higher. Even if the environmental gains were double our estimate, the net drain would still be close to $1 billion. While a more rigorous analysis would no doubt adjust these figures, we doubt that the basic conclusion would change.

(Via the Corner.)

We literally would have been better off if the government had simply burned $1 billion in cash.
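
The quoted arithmetic checks out:

    # Figures exactly as quoted from the paper.
    benefit_per_vehicle = 596
    cost_per_vehicle = 2_600
    vehicles = 700_000

    net_drain = cost_per_vehicle - benefit_per_vehicle  # $2,004, "roughly $2,000"
    print(net_drain * vehicles / 1e9)                   # ~1.4 (billion dollars)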


Economic illiteracy

September 22, 2009

CBS News has an article about five health care promises that the president won’t keep. For two of them that’s a good thing, including this one:

Allow Drug Importation

During the campaign, Mr. Obama said his plan (PDF) would “Allow consumers to import safe drugs from other countries” because “some companies are exploiting Americans by dramatically overcharging U.S. consumers.”

As noted above, the Obama administration secretly conceded to forgo the importation of cheaper drugs in its deal with the pharmaceutical industry.

(Via Instapundit.)

Indeed, Obama did make that promise (one doesn’t want to trust CBS for this sort of thing):

Allow consumers to import safe drugs from other countries. The second-fastest growing type of health expenses is prescription drugs. Pharmaceutical companies should profit when their research and development results in a groundbreaking new drug. But some companies are exploiting Americans by dramatically overcharging U.S. consumers. These companies are selling the exact same drugs in Europe and Canada but charging Americans a 67 percent premium. Barack Obama and Joe Biden will allow Americans to buy their medicines from other developed countries if the drugs are safe and prices are lower outside the U.S.

This is sheer foolishness, as basic economics will tell you. In fact, one doesn’t need to know any economics; mere common sense should tell you that we cannot realize real savings by shipping a product to Canada and back.

What is happening here is called price discrimination. This happens when the supplier of a product or service sells it at a different price to different customers. This can only arise with an uncompetitive market (typically a monopoly); a competitive market will compete away such price differences. Also, it can only arise when the supplier is able to segment his market into (at least) two parts and prevent arbitrage between the segments.

Price discrimination is not inherently bad. Abstractly, it leads to more efficient resource allocation. (In the limit, called perfect price discrimination, commodities are supplied in the same quantity as they would be in a competitive market.) Concretely, it lowers prices for those who are less able to afford a commodity, and raises them for those who are better able to afford it. A classic example is different rates at museums and zoos for children, adults, and seniors. (Progressives ought to be all in favor of this!)

Prescription drug companies have been able to break their market into U.S. and Canadian segments. (Additionally, many drugs are sold at an even lower price in Africa.) They then sell the drugs more cheaply in the less wealthy market segment, Canada. (An additional complication arises from Canada’s monopsonistic drug purchasing, but that doesn’t affect the analysis.)

So what will happen if we permit drug reimportation (i.e., arbitrage)? At a small scale, nothing at all. If the drug companies gain more from price discrimination than they lose from arbitrage, they’ll stick with it. But suppose we allow drug reimportation on a national scale? Then the drug companies’ ability to segment their market is gone, and price discrimination will disappear overnight.

That means that U.S. and Canadian prices will equalize at some price in the middle. We won’t see much benefit, though. Since the U.S. market is much larger than the Canadian market, price discrimination has not affected our price significantly. Therefore, the new price will be only imperceptibly lower than the current U.S. price.
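
A toy model illustrates the asymmetry. Give a monopolist one big market and one small one (both demand curves invented by me), let it price them separately, then force a single price:

    # Segmented monopoly prices vs. one unified price, zero marginal cost.
    def us_demand(p): return max(0.0, 100 - p)         # big market
    def ca_demand(p): return max(0.0, 5 - 0.1 * p)     # small market

    def best_price(demand):
        # brute-force the revenue-maximizing price over a fine grid
        return max((p / 10 for p in range(1001)), key=lambda p: p * demand(p))

    print(best_price(us_demand))                              # 50.0
    print(best_price(ca_demand))                              # 25.0
    print(best_price(lambda p: us_demand(p) + ca_demand(p)))  # ~47.7

Even with this crudely large “Canada”, the unified price lands only a few percent below the old U.S. price while the old Canadian discount disappears entirely; with realistic relative market sizes the U.S. move would be smaller still.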

In summary, large-scale drug reimportation would screw Canada over, while obtaining no significant benefit for us. If this proposal were ever to see the light of day (and fortunately it appears it will not), we could expect Canada to fight hard against it. Barack Obama’s campaign call for reimportation was either economic illiteracy, or (more likely) simple demagoguery.

POSTSCRIPT: As I recall, John McCain supported reimportation as well; not that that matters any more.


Youth employment hits record low

August 28, 2009

Cause:

The U.S. Department of Labor reminds employers and employees that the federal minimum wage will increase to $7.25 on Friday, July 24. With this change, employees who are covered by the federal Fair Labor Standards Act (FLSA) will be entitled to pay no less than $7.25 per hour.

“This administration is committed to improving the lives of working families across the nation, and the increase in the minimum wage is another important step in the right direction,” said Secretary of Labor Hilda L. Solis.

Effect:

The proportion of people ages 16 to 24 who were employed in July was 51.4 percent, the lowest July rate since records began in 1948 and 4.6 percentage points lower than in July 2008.

(Via Hot Air.)

It’s basic economics (typically taught on the second day) that price floors cause surpluses. In the labor market, a minimum wage causes unemployment. Period.
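
For anyone who skipped the second day, here is the textbook picture with invented numbers:

    # A price floor above the market-clearing wage creates a surplus of labor,
    # i.e. unemployment. Invented linear supply and demand curves.
    def jobs_demanded(wage): return 100 - 10 * wage      # employers hire less as wages rise
    def workers_supplied(wage): return -20 + 10 * wage   # more people seek work as wages rise

    # The market clears at wage = 6 (demand = supply = 40).
    floor = 7.25
    print(workers_supplied(floor) - jobs_demanded(floor))  # 25.0 can't find work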

Nevertheless, politicians seem to think that the inevitable consequences of their actions can be avoided through the power of their good intentions. “Improving the lives of working families?” Perhaps, if you’re lucky enough to have work.

UPDATE: Originally I erroneously titled this post “Youth unemployment hits record high” (which lives on in the permalink). Of course, low employment and high unemployment aren’t quite the same thing. But rest assured, youth unemployment is also at a record.


Modelling zombies

August 17, 2009

A group of mathematicians has written what has to be the first published paper to model a zombie outbreak. (Via Instapundit.)

Unfortunately, the model appears to be flawed. In the zombie literature, opinions differ as to whether a zombie epidemic can affect those who are already dead. Although many zombie works do have zombies arising from graveyards and such, the author of the most (faux) serious work on zombies, Max Brooks, asserts that the zombie virus affects only the living. But, in either case, the literature is unanimous that destroying the brain puts down a zombie permanently.

In contrast, the paper’s model posits that zombies can arise from the dead population, and that dead population includes not only dead from natural (non-zombie) causes, but zombies that have been put down. This is clearly wrong.

Fortunately, Brooks’s scenario can be recovered by setting to zero the parameter that dictates how quickly the dead become undead. With that parameter set to zero, it doesn’t matter that the dead population is too large. However, the more typical scenario cannot be recovered without a new model, necessitating a new solution.
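
To make that concrete, here is a minimal sketch of an SZR-style (susceptible/zombie/removed) model; the parameter values are mine and purely illustrative, and I drop the paper’s background birth and death terms:

    # Susceptible / Zombie / Removed toy model, integrated with Euler steps.
    # zeta is the rate at which the removed rise again as zombies; zeta = 0
    # is the Brooks scenario: destroyed zombies stay destroyed.
    beta, alpha, zeta, dt = 0.0095, 0.005, 0.0, 0.01

    S, Z, R = 500.0, 1.0, 0.0
    for _ in range(int(30 / dt)):
        bitten = beta * S * Z         # humans turned into zombies
        destroyed = alpha * S * Z     # zombies put down by humans
        risen = zeta * R              # the dead becoming undead (zero here)
        S += -bitten * dt
        Z += (bitten - destroyed + risen) * dt
        R += (destroyed - risen) * dt
    print(round(S), round(Z), round(R))   # with zeta = 0, R only ever grows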

Alas, the world may have to wait a little longer for a serious mathematical treatment of the zombie problem.


Cell phones and driving: what the science says

July 20, 2009

The New York Times is running a series on the evils of cell phone use while driving. Its latest article is chock-full of anecdotes about serious accidents caused by cell-phone use, and laments how people are not responding to the research:

Extensive research shows the dangers of distracted driving. Studies say that drivers using phones are four times as likely to cause a crash as other drivers, and the likelihood that they will crash is equal to that of someone with a .08 percent blood alcohol level, the point at which drivers are generally considered intoxicated. Research also shows that hands-free devices do not eliminate the risks, and may worsen them by suggesting that the behavior is safe.

A 2003 Harvard study estimated that cellphone distractions caused 2,600 traffic deaths every year, and 330,000 accidents that result in moderate or severe injuries.

Yet Americans have largely ignored that research.

This line of argument has always struck me as asinine. It makes good sense to require drivers to have both hands available, so I’ve never been opposed to hands-free requirements. But opposing cell-phone use because it is distracting misses one major point: there are lots of distractions while we drive. We listen to the radio, we talk to people in the car, we have screaming children. (And those are just the reasonable things; there are also stupid things like eating or applying make-up.) How do the risks of cell phone usage compare to all those other distractions? It seems likely that using a hands-free cell phone is more distracting than listening to the radio, about the same as talking to someone in the car, and quite a bit less than screaming children. But no one is talking about banning carpools, which surely lead to in-car conversations, or banning the transporting of children.

The NYT article did have something useful though. It had a link to a Department of Transportation publication containing an up-to-date (as of 2005) bibliography of research on the subject. On the assumption that the most respectable research appeared in peer-reviewed journals (this is the case in most fields, although not as much in mine), I looked up a handful of the articles on-line. In three cases I was able to find the actual paper, and in three I had to settle for the abstract.

What I found surprised me. None of the abstracts (nor the full papers when I could find them) discussed comparisons between cell-phone use and other distractions. In this, I imagine I was just unlucky (surely someone has looked at the question). But I did notice that all the studies were more measured than the reporting would suggest.

The most interesting was the 2003 Harvard study to which the NYT article alluded. As an aside, it did estimate that eliminating cell phone use would save 2600 lives. But that is not what the study was about. The study did a cost-benefit analysis. It built an economic model and compared the costs of a cell phone ban against the benefits in terms of lives and property saved.

The study found that eliminating cell phone use would be a break-even proposition. In fact, it would be a small net loss. And that doesn’t take into account the intangible cost of the reduction of our liberty. The study also found that eliminating cell phone use is a very expensive way to save lives when compared with other possible safety measures. Finally, it’s worth noting that even a total ban on cell phone usage would not eliminate cell phone usage, nor would it probably even come close (see below).

More after the jump.



Fetuses have memory

July 17, 2009

The Washington Times reports:

The unborn have memories, according to medical researchers who used sound and vibration stimulation, combined with sonography, to reveal that the human fetus displays short-term memory from at least 30 weeks gestation – or about two months before they are born.

“In addition, results indicated that 34-week-old fetuses are able to store information and retrieve it four weeks later,” said the research, which was released Wednesday.

Scientists from the Department of Obstetrics and Gynecology at Maastricht University Medical Centre and the University Medical Centre St. Radboud, both in the Netherlands, based their findings on a study of 100 healthy pregnant women and their fetuses with the help of some gentle but precise sensory stimulation.

(Via the Corner.)


Fischer-Tropsch progress

April 19, 2009

A new article published in the journal Science claims a breakthrough improving the efficiency of the Fischer-Tropsch process, which converts coal to liquid fuel suitable for engines.  The article itself is behind a registration barrier, so I can’t even read the full abstract, but the Armed and Dangerous blog says they’re reporting a 3-fold improvement in efficiency.  (Via Instapundit.)  The abbreviated abstract also reports a reduction in carbon dioxide emissions.

The United States has by far the world’s largest coal reserves.  I’m not sure what level of efficiency is necessary before FT fuel is competitive with imported oil, but this has to get us much closer (at least to sour crude, such as from Venezuela).


How to beat malaria

April 19, 2009

Efforts to eradicate malaria by exterminating mosquitoes have always failed because the mosquitoes have evolved resistance to insecticides.  A research group at Penn State has an ingenious idea for how to get around the evolved-resistance problem.  It turns out that only old female mosquitoes transmit malaria, because they first have to acquire the parasite and allow time for it to develop before they can infect a person.  Thus, malaria can be controlled by killing only older mosquitoes, and since those mosquitoes would already have reproduced, doing so would not trigger nearly as great an evolutionary response.
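The evolutionary logic is easy to see in a toy model. Here is a minimal Python sketch, with invented fitness numbers, comparing how fast a resistance gene spreads when an insecticide kills mosquitoes before they reproduce versus after:

    # Toy model of resistance evolution; all parameters are invented.
    # Fitness = expected offspring.  A conventional insecticide kills
    # susceptible mosquitoes before they reproduce; a late-acting one
    # kills them only afterward, so it barely selects for resistance.
    def resistant_fraction(susceptible_fitness, generations=50, start=0.01):
        freq = start
        for _ in range(generations):
            r = freq * 1.0                         # resistant: unaffected
            s = (1 - freq) * susceptible_fitness   # susceptible: maybe culled early
            freq = r / (r + s)
        return freq

    print(f"kills young and old: {resistant_fraction(0.2):.0%} resistant")
    print(f"kills only the old:  {resistant_fraction(1.0):.0%} resistant")

In the first case the resistance gene takes over within a few dozen generations; in the second it gains no advantage at all and never spreads.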


Speed bumps kill

April 10, 2009

Instapundit links an article at Cracked.com that says that speed bumps, by delaying emergency traffic, kill 85 people for every one they save.  Cracked cites “a study in Boulder, Colorado” and quips:

Holy [expletive]! We think landmines have a better ratio.

For some reason, I didn’t want to accept Cracked.com as a primary source, so I tracked down the study.  A little googling found that the study was authored by one Ronald Bowman and it is cited throughout the internet by speed bump opponents everywhere.  The link, unfortunately, has gone bad (it was hosted at a defunct AOL site), but it’s in the Internet Archive here.  I don’t know Bowman’s credentials, but the study exists and looks plausible.

Also widely cited on the internet is a master’s thesis from the University of Texas, Austin by Leslie Bunte, Jr., the assistant fire chief of Austin.  That link has also gone stale, but it can be found on the Internet Archive here.  It is much more detailed than the Bowman study, but arrives at a similar result (page 152).  (Note: if you have trouble opening the file within your browser, try saving it to the desktop.)


Measuring without measuring

April 9, 2009

One of the weirdest things about quantum mechanics (perhaps the weirdest thing) is how it reacts to measurement.  A quantum mechanical system contains more information than can be measured, and when a measurement is made, the extra information is destroyed and replaced with the result of the measurement.

Together with the “no cloning” theorem, which says that a quantum system cannot be duplicated, this means that one can never learn the actual state of a system.  You are limited to a single measurement, and that one measurement cannot reveal everything.
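For intuition, here is a purely classical Python toy that imitates the bookkeeping; it is ordinary arithmetic mimicking the rules, not a real quantum simulation. The state carries two amplitudes, but a measurement hands back a single bit and wipes the amplitudes out:

    import math
    import random

    # A one-qubit toy: the state holds two amplitudes -- more information
    # than any single measurement can return -- and measuring destroys them.
    state = [1 / math.sqrt(2), 1 / math.sqrt(2)]   # equal mix of |0> and |1>

    def measure(state):
        p0 = state[0] ** 2                         # probability of reading 0
        outcome = 0 if random.random() < p0 else 1
        state[:] = [1.0, 0.0] if outcome == 0 else [0.0, 1.0]   # collapse
        return outcome

    print(measure(state))   # 0 or 1, each with probability 1/2
    print(state)            # the original amplitudes are gone for good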

Or so it’s been thought.  But new research suggests that it may be possible to learn something about a quantum system without measuring it:

[Two teams of researchers] managed to do what had previously been thought impossible: they probed reality without disturbing it. Not disturbing it is the quantum-mechanical equivalent of not really looking. So they were able to show that the universe does indeed exist when it is not being observed.

The reality in question—admittedly rather a small part of the universe—was the polarisation of pairs of photons, the particles of which light is made. The state of one of these photons was inextricably linked with that of the other through a process known as quantum entanglement.

The polarised photons were able to take the place of the particle and the antiparticle in Dr Hardy’s thought experiment because they obey the same quantum-mechanical rules. Dr Yokota (and also Drs Lundeen and Steinberg) managed to observe them without looking, as it were, by not gathering enough information from any one interaction to draw a conclusion, and then pooling these partial results so that the total became meaningful.

This result is so surprising that it seems likely the Economist is garbling the story, or that I’m misunderstanding its significance.  I am reminded, however, of a counterintuitive result in computer science theory that shows how information can be extracted from a complete lack of information.  If this story is right, I wonder if they used anything like it.

The construction goes like this: Consider a room containing an odd number of people, each of whom has a bit (zero or one) imprinted on his forehead.  No one can see his own bit, and they are not allowed to communicate.  The people in the room want to determine the overall parity of the room (whether the bits’ sum is even or odd), but with no way to learn their own bits, no one can do any better than flip a coin.

No one individually has any information about the parity, and yet there exists a strategy whereby they can usually determine it by voting. Each person looks at the room and sees which bit is in the majority for the people he can see.  He then determines what the parity would be if he had the same bit as the majority, and votes for that answer.

Individually, each person is just as likely to be wrong as right, but collectively, the majority of people belong to the majority.  Thus, the people who are voting for the correct answer will outvote those who are voting incorrectly.

The only time the strategy fails is when the room is nearly evenly balanced.  If the room contains n zero bits and n-1 one bits, then when the people in the majority look at the room, they see a room that is evenly divided.  They have to flip a coin to decide how to vote, and half of them on average will vote incorrectly.  In the minority, everyone will vote incorrectly, so the incorrect result will almost always win.

In all other cases, the majority remains the majority even with the loss of one person, so the strategy succeeds in generating information where none existed individually.
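The strategy is easy to check by simulation. Here’s a minimal Python sketch; the room size and trial count are arbitrary choices of mine:

    import random

    def run_trial(n):
        # One room of n people (n odd), each assigned a random bit.
        bits = [random.randint(0, 1) for _ in range(n)]
        total_ones = sum(bits)
        true_parity = total_ones % 2
        correct_votes = 0
        for b in bits:
            ones = total_ones - b        # one-bits this person can see
            zeros = (n - 1) - ones       # zero-bits this person can see
            if ones != zeros:
                assumed = 1 if ones > zeros else 0   # assume own bit matches majority
            else:
                assumed = random.randint(0, 1)       # tie: flip a coin
            vote = (ones + assumed) % 2  # parity if the assumption is right
            correct_votes += (vote == true_parity)
        return correct_votes > n // 2    # did the majority vote correctly?

    n, trials = 101, 10_000
    wins = sum(run_trial(n) for _ in range(trials))
    print(f"the vote is right {wins / trials:.1%} of the time")

For n = 101 the vote should come out right roughly 84% of the time: the nearly balanced rooms that make up the failure case occur about 16% of the time, and the strategy loses essentially all of them. No individual, remember, can do better than 50%.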


The prices right?

April 7, 2009

If this story is accurate, a new research paper finds that the “toxic assets” are correctly priced already.  That would be very bad news for the Geithner plan, which presumes those assets are underpriced and tries to correct that.  If those assets are correctly priced already, the Geithner plan will simply be throwing taxpayer money away.

(Via Instapundit.)


School vouchers work

April 6, 2009

The Washington Post reports:

A U.S. Education Department study released yesterday found that District students who were given vouchers to attend private schools outperformed public school peers on reading tests, findings likely to reignite debate over the fate of the controversial program. . .  Congress has cut off federal funding after the 2009-10 school year unless lawmakers vote to reauthorize it.

Overall, the study found that students who used the vouchers received reading scores that placed them nearly four months ahead of peers who remained in public school. However, as a group, students who had been in the lowest-performing public schools did not show those gains. There was no difference in math performance between the groups. . .

The study, conducted by the Education Department’s research arm, the Institute of Education Sciences, compared the performance and attitudes of students with scholarships with those of peers who were eligible but weren’t chosen in a lottery. Parents of students in the program were more satisfied with their children’s new schools and considered the schools safer, the report found. Students showed no difference in their level of satisfaction.

(Via Volokh.)

The usual counterattack by the education statists against vouchers is that private schools engage in “creaming,” taking the best students and therefore producing the best results.  That argument will be very hard to make now, since the DC participants were selected by lottery. I suppose the new argument will draw from the fact that the students from the very worst schools did not improve, but I’m not sure how they’ll frame it.

Still, I’m not wild about vouchers, for a completely different reason. What the government funds, it always ends up trying to control. Private schools that accept vouchers are setting themselves up for government interference, which will ultimately hurt educational performance, as well as degrade some of the other attributes that make them more attractive than public schools.  I’ve always felt that the best way to implement school choice is via refundable tax credits, rather than vouchers.  That way there’s no money flowing directly from the government to schools.


First impressions work

March 31, 2009

From the annals of surprising research results: new research suggests that one can get an accurate sense of someone’s creditworthiness by looking at their face. That is to say, physiognomy has statistically significant predictive value that is independent of other measurable factors.


Inspiration vs. perspiration

March 26, 2009

Scientific American:

More than three decades of research shows that a focus on effort—not on intelligence or ability—is key to success in school and in life.

Glenn Reynolds adds:

Hey, I’ve got an idea — why don’t we organize society so that it rewards hard work! We could even see that people who work harder and do better make more money! And then their efforts would pay off in more general societal prosperity, making life better for everyone! And we could . . . Naaaah.

Get serious, Glenn.

Related item here.


Study confirms ABA bias against conservative nominees

March 18, 2009

The National Law Journal has the story.  The study also finds bias against minorities, and against former congressional staffers.  (Via Volokh.)


Study suggests women executives get better pay, same promotions

February 19, 2009

A new study from CMU’s Tepper School suggests that the “glass ceiling” is a thing of the past, at least at the executive level:

Female executives who break through the “glass ceiling” in corporate America are rewarded with higher overall compensation than their male counterparts and benefit from the same rate of promotion, according to new research from the Tepper School of Business at Carnegie Mellon University. However, the study also found that the number of females in top executive positions remains a mere fraction of business leadership overall largely due to the tendency of women to leave the workforce earlier than men.

The findings, gleaned from tracking the career paths and compensation of more than 16,000 executives over a 14-year period, identified that female executives actually earned a total of about $100,000 more per year than men of the same age, educational background and job experience. . .

“Women aren’t climbing as many rungs on the executive ladder because they are more likely than males to retire earlier or switch careers,” said Robert A. Miller, professor of economics and strategy at the Tepper School and one of the study’s co-authors. “Although women may still be likely to face gender discrimination through unpleasant work environments or tougher, less rewarding assignments, our results find that there does appear to be equal pay and equal opportunity for women if they stay in the workforce and get to the executive level.” . . .

The study indicates that job turnover and tenure, as well as education, are better overall indicators of compensation than gender. In terms of compensation, an executive’s history of career turnover and the presence of an MBA or other advanced degree tend to have the greatest impact.


A cure for HIV?

February 12, 2009

Wow:

A 42-year-old HIV patient with leukemia appears to have no detectable HIV in his blood and no symptoms after a stem cell transplant from a donor carrying a gene mutation that confers natural resistance to the virus that causes AIDS, according to a report published Wednesday in the New England Journal of Medicine.

“The patient is fine,” said Dr. Gero Hutter of Charite Universitatsmedizin Berlin in Germany. “Today, two years after his transplantation, he is still without any signs of HIV disease and without antiretroviral medication.”

The case was first reported in November, and the new report is the first official publication of the case in a medical journal. . .

While promising, the treatment is unlikely to help the vast majority of people infected with HIV, said Dr. Jay Levy, a professor at the University of California San Francisco, who wrote an editorial accompanying the study. A stem cell transplant is too extreme and too dangerous to be used as a routine treatment, he said.

“About a third of the people die [during such transplants], so it’s just too much of a risk,” Levy said.

(Via Instapundit.)


Stimulus consensus

February 10, 2009

The Administration tells us that there is a consensus of economists for a big stimulus package.  As Vice-President Biden puts it:

Every economist, as I’ve said, from conservative to liberal, acknowledges that direct government spending on a direct program now is the best way to infuse economic growth and create jobs.

This is patently false.  Biden is either lying or shockingly misinformed. Brian Riedl runs down a list of economists that Biden has apparently never heard of:

Nobel Laureates Ed Prescott, James Buchanan, and Vernon Smith recently joined 200 other economists signing a letter opposing the legislation. Other notable economists critical of the stimulus package include Nobel Laureate Gary Becker, as well as Robert Barro, Greg Mankiw, Arthur Laffer, and Larry Lindsey.  Martin Feldstein, who had been the only notable conservative economist loudly supporting the stimulus, has since changed his mind.

More liberal economists such as Alice Rivlin and Alan Blinder have also strongly criticized certain aspects of the spending bill. 

Even President Obama’s own economic advisers—who are leading the fight for the “stimulus” bill—previously criticized the bill’s economic underpinnings.

Riedl goes on to list several criticisms of the plan from the president’s own advisers.  Greg Mankiw adds a few more names to the list, including Robert Lucas, a Nobel Prize winner formerly at CMU.  Mankiw goes on to speculate about what Biden might have been thinking, which seems like a thoroughly unprofitable enterprise to me.

In the political battle over economics, one significant player is the master of economic misinformation, Paul Krugman.  Robert Barro writes of Krugman, “He just says whatever is convenient for his political argument. He doesn’t behave like an economist.”  Will Wilkinson has a more extensive analysis (and perhaps a more helpful one):

Like the president, Krugman seems firmly caught in the paradox of countercyclical macroeconomic politics. The intermediate-level textbook theory says that at times like these we need a certain kind of policy to steady the economy’s nerves and lubricate consumption and investment. The economics says we need confidence. But political reality says we need panic. So we try to induce panic so that we can later induce confidence. This seems an extremely awkward and implausible approach, but that doesn’t keep anyone from trying it.

The deeper problem, I think, is that the textbook theory doesn’t have any politics in it. . . But of course, there is politics, which trashes hope of either consensus on or compliance with theory. And that’s how we ended up with the legislative monstrosity actually under consideration in Congress.

The economists can duke it out over the possibility of successful fiscal stimulus. But is there any reason based in up-to-date economic theory to believe that this trillion dollar deficit-spending bill is not, as Barro says, garbage?

Krugman is plumping for it anyway. Hard. . . Perhaps more than any economist of his caliber, Krugman understands that policy is largely determined by the outcome of the public opinion shoutfest. Yet this recognition seems to have no effect on Krugman’s ideas. Rather than bring inside his models disagreement over economic theory and the lack of political incentive to faithfully apply them, which would lead him to radically revise his prescriptions, Krugman leaves his textbook theory untouched and simply tries to win the shoutfest.  Krugman’s often unbearable stridency seems to reflect an attempt to overcome the problems of democratic disagreement and incentive compatibility through sheer force of will.

(Via Asymmetrical Information.)


More Barro

February 6, 2009

The Atlantic’s Conor Clarke has an interesting interview with Robert Barro.  On Paul Krugman:

Q. Do you read Paul Krugman’s blog?

A. Just when he writes nasty individual comments that people forward.

Q. Oh, well he wrote a series of posts saying he thought the World War II spending evidence was not good, for a variety of reasons, but I guess…

A. He said elsewhere that it was good and that it was what got us out of the depression. He just says whatever is convenient for his political argument. He doesn’t behave like an economist. And the guy has never done any work in Keynesian macroeconomics, which I actually did. He has never even done any work on that. His work is in trade stuff. He did excellent work, but it has nothing to do with what he’s writing about.

On the stimulus bill:

Q. The last thing is just about the stimulus bills as it stands. Two things here. One thing is what do you think about the ratio of spending to tax relief in the bill. And the second is, if you judge it by Larry Summers standard — that stimulus be temporary, timely and targeted — does it clear the bar?

A. This is probably the worst bill that has been put forward since the 1930s. I don’t know what to say. I mean it’s wasting a tremendous amount of money. It has some simplistic theory that I don’t think will work, so I don’t think the expenditure stuff is going to have the intended effect. I don’t think it will expand the economy. And the tax cutting isn’t really geared toward incentives. It’s not really geared to lowering tax rates; it’s more along the lines of throwing money at people. On both sides I think it’s garbage. So in terms of balance between the two it doesn’t really matter that much.

On where people got the idea that government spending stimulates the economy:

Q. Are there any conditions under which you might think spending could have a positive effect on output or is it always going to be the case that as a relative matter that tax cuts are going to be better?

A. Tax cuts are bound to be better. I think the best evidence for expanding GDP comes from the temporary military spending that usually accompanies wars — wars that don’t destroy a lot of stuff, at least in the US experience. Even there I don’t think it’s one for one, so if you don’t value the war itself it’s not a good idea. You know, attacking Iran is a shovel-ready project. But I wouldn’t recommend it.

There’s much more, too, but none of it will help you feel any better about this stimulus trainwreck.  Incidentally, Robert Barro is the world’s third-most influential economist, according to the RePEc/IDEAS measure.

(Via Truth on the Market, via Volokh.)  (Previous post.)


Destroying nuclear waste via fusion

February 2, 2009

A group of scientists say they are close to being able to build a system that breaks down nuclear waste into safer material using excess neutrons from a fusion reactor.  (Via Instapundit.)  No one has yet been able to build a fusion reactor that generates more power than it uses, but one isn’t needed for this system to work.

Other technologies to dispose of most nuclear waste (mostly through re-use) have existed for decades, but were discontinued in the United States in 1976 due to concerns that they might lead to nuclear proliferation.  I’d be interested to hear how the new technology compares.


Rainforests expand

February 2, 2009

The NY Times reports:

These new “secondary” forests are emerging in Latin America, Asia and other tropical regions at such a fast pace that the trend has set off a serious debate about whether saving primeval rain forest — an iconic environmental cause — may be less urgent than once thought. By one estimate, for every acre of rain forest cut down each year, more than 50 acres of new forest are growing in the tropics on land that was once farmed, logged or ravaged by natural disaster. . .

About 38 million acres of original rain forest are being cut down every year, but in 2005, according to the most recent “State of the World’s Forests Report” by the United Nations Food and Agriculture Organization, there were an estimated 2.1 billion acres of potential replacement forest growing in the tropics — an area almost as large as the United States. . . The area of secondary forest is increasing by more than 4 percent yearly, Dr. Wright estimates.

Are the new rainforests “real”?  Many say yes:

The idea has stirred outrage among environmentalists who believe that vigorous efforts to protect native rain forest should remain a top priority. But the notion has gained currency in mainstream organizations like the Smithsonian Institution and the United Nations, which in 2005 concluded that new forests were “increasing dramatically” and “undervalued” for their environmental benefits. . .

“Biologists were ignoring these huge population trends and acting as if only original forest has conservation value, and that’s just wrong,” said Joe Wright, a senior scientist at the Smithsonian Tropical Research Institute here, who set off a firestorm two years ago by suggesting that the new forests could substantially compensate for rain forest destruction.

“Is this a real rain forest?” Dr. Wright asked, walking the land of a former American cacao plantation that was abandoned about 50 years ago, and pointing to fig trees and vast webs of community spiders and howler monkeys.

“A botanist can look at the trees here and know this is regrowth,” he said. “But the temperature and humidity are right. Look at the number of birds! It works. This is a suitable habitat.”

(Via Instapundit.)


The macroeconomic debate

January 25, 2009

Bruce Bartlett has an interesting column on the macroeconomic debate over the past eighty years.  I don’t agree with all of it, but it’s interesting and generally fair.  (Via the Corner.)


Keynes must go

January 23, 2009

The legendary economist Robert Barro explains in the Wall Street Journal why the stimulus package will not work.  (Via the Corner.)

One interesting bit that I did not know is Barro’s estimates of the Keynesian multiplier.  For a peacetime stimulus, the multiplier is “insignificantly different from zero.”  That means that the “stimulus” does not stimulate the economy, and serves only to shift production away from consumption and investment.  Even during World War II, when it supposedly worked, the “multiplier” was just 0.8, meaning that the economy grew by less than the amount of the stimulus.  The Obama Administration is reportedly assuming a multiplier of 1.5.
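To make the arithmetic concrete (the $100 billion figure below is mine, chosen only for round numbers): with multiplier k, a stimulus of ΔG is supposed to change output by

    ΔY = k × ΔG

so at Barro’s wartime k = 0.8, a $100 billion package produces only $80 billion of output, a $20 billion net loss, while the Administration’s assumed k = 1.5 would turn the same package into $150 billion. The whole case for the bill rides on which side of 1 the multiplier falls.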


Guns and freedom

January 1, 2009

A new paper shows that gun ownership correlates with various measures of freedom and prosperity.  It strikes me as not too surprising that gun rights would correlate with other freedoms.


Spaceguard progress

December 25, 2008

Progress is being made on a Spaceguard-style system to watch the skies for dangerous asteroids:

Until now, these efforts have been carried out with existing telescopes, and researchers think they have found about three-quarters of the 1,000 or so neighbouring asteroids that have a civilisation-wrecking diameter of 1km or more. But to locate the rest, and to look for smaller objects that could still wreak local devastation, they need more specialised tools.

This month they will start to get them. On December 6th the University of Hawaii will activate a telescope designed specifically to look for dangerous asteroids. It is called PS1, a contraction of Panoramic Survey Telescope & Rapid Response System, and it is the first of four such instruments that will be used to catalogue as many as possible of the 100,000 or so near-Earth asteroids that measure between 140 metres and a kilometre across. . .

The other three telescopes should be completed by 2012, at a cost of about $100m for all four. They should take about ten years to catalogue 90% of the remaining dangerous asteroids thought to be out there.


Are articles in top journals more likely to be wrong?

December 22, 2008

A Greek epidemiologist makes the striking claim that articles in top journals are more likely to be wrong than those in lesser journals:

In economic theory the winner’s curse refers to the idea that someone who places the winning bid in an auction may have paid too much. Consider, for example, bids to develop an oil field. Most of the offers are likely to cluster around the true value of the resource, so the highest bidder probably paid too much.

The same thing may be happening in scientific publishing, according to a new analysis. With so many scientific papers chasing so few pages in the most prestigious journals, the winners could be the ones most likely to oversell themselves—to trumpet dramatic or important results that later turn out to be false. This would produce a distorted picture of scientific knowledge, with less dramatic (but more accurate) results either relegated to obscure journals or left unpublished. . .

The assumption is that, as a result, such journals publish only the best scientific work. But Dr Ioannidis and his colleagues argue that the reputations of the journals are pumped up by an artificial scarcity of the kind that keeps diamonds expensive. And such a scarcity, they suggest, can make it more likely that the leading journals will publish dramatic, but what may ultimately turn out to be incorrect, research.

Dr Ioannidis based his earlier argument about incorrect research partly on a study of 49 papers in leading journals that had been cited by more than 1,000 other scientists. They were, in other words, well-regarded research. But he found that, within only a few years, almost a third of the papers had been refuted by other studies. For the idea of the winner’s curse to hold, papers published in less-well-known journals should be more reliable; but that has not yet been established.

I’m not familiar with Ioannidis’s research, so I can’t comment specifically, but there’s some room for skepticism. There are refutations and there are refutations. Did the refutations of these papers refute their specific findings, or the more general conclusions they drew from those findings? If it’s the latter, one could argue that there’s really no problem.
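The selection mechanism itself is easy to demonstrate, though. Here is a toy Monte Carlo in Python, with every number invented, showing how publishing only the most dramatic estimates guarantees overselling:

    import random

    # Toy winner's-curse model; every parameter is invented.  Each of
    # 10,000 studies measures a smallish true effect plus noise; the
    # "top journal" publishes only the 200 most dramatic estimates.
    random.seed(1)
    true_effects = [random.gauss(0.1, 0.1) for _ in range(10_000)]
    estimates = [t + random.gauss(0, 0.3) for t in true_effects]

    ranked = sorted(zip(estimates, true_effects), reverse=True)
    published = ranked[:200]

    avg_claimed = sum(e for e, _ in published) / len(published)
    avg_actual = sum(t for _, t in published) / len(published)
    print(f"effect claimed in the top journal: {avg_claimed:.2f}")
    print(f"actual effect of those studies:    {avg_actual:.2f}")

The “winning” papers claim effects several times larger than the studies actually found: selecting on the estimate selects on the noise, and regression to the mean does the rest.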

Still, it’s good to work in a field where you can prove your results with mathematical certainty (and, increasingly, with mechanical verification).