Antibiotics in Meat Do Lead to MRSA in Humans

I was extremely disturbed to see in the NYT’s letters a veterinarian’s defense of the overuse of antibiotics in animals that suggested transmission of resistant organisms does not occur. Nonsense! It is abundantly clear that antibiotic use in animals results in resistant strains that then colonize humans. Livestock are now being recognized as the newest reservoir for strains of MRSA.

Unlike the GMO nonsense, this is a clear public health issue with a plausible (and demonstrated) mechanism of transmitted risk to humans. The author of the letter, Charles Hofacre, says two wildly misleading things. First, he suggests the antibiotics used in livestock are somehow substantively different from those used in humans: “About a third of livestock antibiotics used today are not used at all in human medicine.” Well, that means two-thirds are the same, and just because we don’t use the exact same antibiotics doesn’t mean they don’t share the exact same mechanism. If he’s trying to suggest resistance to livestock antibiotics isn’t relevant to human pathogens, he is just wrong, wrong, wrong.

Second, he says, “There is no proven link to antibiotic treatment failure in humans because of antibiotic use in animals for consumption — a critical point that is often missed.” This is such a misleading statement I can’t believe an academic would say such a thing, as it assumes we’re just idiots. It suggests that there is no transmission issue, or at least none of clinical relevance. But this is also wrong. There is extensive documentation of methicillin-resistant Staphylococcus aureus (MRSA) becoming more common in livestock, being transmitted to humans, and appearing in hospitals. There hasn’t been a “treatment failure” because we still have antibiotics that work against MRSA, and MRSA is usually not pathogenic on its own without some failure of the host defenses: a compromised immune system, broken skin or a non-sterile injection, surgery, chemo, etc. That doesn’t mean we should go around spreading MRSA! We have to take out the big guns to deal with MRSA infections when they do occur (we don’t treat colonization), and the more we expose these bacteria to the better antibiotics, the more we’ll train them for resistance to those drugs.
But it should be made clear: the transmission of resistant bacteria from farm animals to humans has been documented, and just because the patients didn’t die doesn’t mean that there’s no problem here. This is just shameful.

Antibiotic resistance has existed since before we even used antibiotics, and it will only get worse the more we train the organisms to grow in the presence of antibiotics. These genes for resistance aren’t “new,” but not all bacteria carry them, because there is an energy cost associated with producing the resistance proteins; if they don’t benefit survival, the strains wasting energy on them will become less common. If we constantly create a selective pressure on bacteria to maintain resistance genes, we are going to increase the proportion of bacteria that carry resistance, and thus the resistant organisms we are exposed to. Then, as we have to use more and more powerful antibiotics to address resistance, we create additional selective pressure on the organisms to carry more and better resistance genes (not all beta-lactamases are created equal), and as they mutate to become more effective, those strains will eventually give rise to bacteria for which we have no therapeutic option. These are already starting to emerge, as anyone who followed reports of the multidrug-resistant Klebsiella outbreak at the NIH knows.
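The selection logic in this paragraph can be sketched with a toy population-genetics model. This is purely illustrative; the fitness values are hypothetical, chosen only to show the direction of the effect, not measured for any real organism:

```python
# Toy model: frequency of a resistance gene under selection.
# Fitness values are made up for illustration: carrying a resistance
# gene costs energy (fitness < 1 without antibiotic), but is strongly
# favored when the antibiotic is present.

def next_freq(p, w_resistant, w_susceptible):
    """One generation of selection on resistance-gene frequency p."""
    mean_w = p * w_resistant + (1 - p) * w_susceptible
    return p * w_resistant / mean_w

def simulate(p, generations, antibiotic_present):
    if antibiotic_present:
        w_r, w_s = 1.0, 0.2   # antibiotic kills most susceptible bacteria
    else:
        w_r, w_s = 0.95, 1.0  # resistance carries a small metabolic cost
    for _ in range(generations):
        p = next_freq(p, w_r, w_s)
    return p

# A rare resistance gene sweeps to near-fixation under constant exposure...
p_exposed = simulate(0.001, 20, antibiotic_present=True)
# ...and only slowly declines once the pressure is removed.
p_relaxed = simulate(p_exposed, 500, antibiotic_present=False)
print(p_exposed, p_relaxed)
```

With these (made-up) parameters the gene still persists at appreciable frequency even after hundreds of generations without the drug, which matches the point that resistance genes never disappear entirely; they just become rarer once the selective pressure is relaxed.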

In my GMO thread I used the analogy that the beta-lactamase used by molecular biologists for genetic modification of organisms is like a “sharpened stick” bacteria can use against weaker penicillins. This is why those resistance genes aren’t a danger for humans. They’ve been around forever anyway; all the bacteria that are going to carry them already do, so we don’t even bother using weaker penicillins on those types of infections, and they can’t beat our stronger beta-lactam drugs like the anti-staph and extended-spectrum beta-lactams. The multiply-resistant and pan-resistant bugs we are finding in our ICUs are the “multiple nuclear warhead” bugs, because they beat multiple classes of drugs as well as our extended-spectrum drugs. We’ve created these bugs by the steady application of selective pressure, exposing the organisms to progressively more powerful antibiotics. The continued injudicious use of antibiotics in animals will invariably lead to the same phenomenon, just all over the place in communities and the workplace rather than just in the ICU. We are going to see a higher prevalence of resistant bacteria; those bacteria will mutate their resistance genes to become more and more effective; they’re already crossing over to humans and hospitals; and we’re going to have to use our big guns more, which will speed up the loss of our antibacterials’ efficacy.

Some caveats. One, this represents more of a threat to farm workers than consumers, as MRSA is not carried in the meat itself, although the meat will likely be contaminated at higher frequency via slaughterhouse contamination as prevalence increases (this has indeed been shown). MRSA usually colonizes the outside of the animal, the nares, etc., not the inside. Two, standard food-handling practices will decrease, but not eliminate, our risk. Cooking meat and washing hands with soap after handling meat (which should be your standard practice) kill MRSA. Don’t prepare hamburger and then pick your nose, people. Clean the surfaces on which meat has been prepared. However, the packaging, your cutting board, and your trash can are all likely to get contaminated if the meat was surface-contaminated. Three, realize MRSA is not pathogenic in normal healthy people. But something as simple as a cut can introduce staph and create a serious infection. Staph is everywhere, and the human body generally has no problem handling it; but when those defenses are down, MRSA reduces our therapeutic options. You don’t want that. Four, this is just one bug we may be exposed to; we’re also training the animals’ E. coli and Enterobacter to become resistant, and with poor food prep and exposure, you can get colonized with these bugs as well.

From a public-health standpoint it’s important that we reduce the prevalence of resistant bacteria we’re exposed to, so fewer of our infections will require the big-gun antibiotics. There is good news though, and we shouldn’t just develop a fatalistic attitude towards this problem. As we stop the overuse of antibiotics, the relaxed selective pressure will cause some bacteria to shed their resistance genes, since there will no longer be a reason to maintain and improve them. Without consistent exposure to antibiotics, bacteria have far less selective pressure to produce proteins and maintain plasmids that provide them no advantage. While the resistance genes will still be out there (they always have been, and always will be), we can still benefit from common-sense measures that decrease their prevalence, and thus our individual risk of exposure to resistant organisms. And the less we have to take out the big guns to treat infections, the fewer multiply-resistant organisms we’ll see.

The Web of Web Lobbying

The Wall Street Journal reported on a battle developing between privacy advocates and internet companies concerning AB 1291, a transparency measure that is in part based upon some of my privacy research:

The industry backlash is against the “Right to Know Act,” a bill introduced in February by Bonnie Lowenthal, a Democratic assemblywoman from Long Beach. It would make Internet companies, upon request, share with Californians personal information they have collected—including buying habits, physical location and sexual orientation—and what they have passed on to third parties such as marketing companies, app makers and other companies that collect and sell data.

Instead of discussing the merits of the bill, here I want to show an aspect of industry association lobbying. As noted previously, these groups are useful to companies for several reasons: they can be used to “launder” policy, they can air controversial views without attribution to any one company, they can help hide companies’ advocacy when it appears to conflict with previous commitments, and they deflect critical reporting. They also amplify power, because they place legislators in a house of mirrors: trade groups allow companies to mask the provenance of their advocacy and to multiply it. This creates a kind of echo chamber for companies.

The Journal’s Vauhini Vara and Geoffrey Fowler reported:

The coalition includes such trade groups as the Internet Alliance, TechNet and TechAmerica, all of which represent major Internet companies

This past week, Will Gonzalez, a Facebook lobbyist based in Sacramento, aired concerns in a meeting about how the bill would hurt Facebook’s business, according to a legislative aide. Mr. Gonzalez didn’t respond to requests for comment.

Representatives for Facebook and Google declined to comment on the bill.

Vara and Fowler are on the right path–break through these groups and talk to their principals about their stance on the bill. Facebook and Google won’t comment to the Journal, I imagine, because AB 1291 is fundamentally a transparency measure. Opposition to it creates some dissonance with these companies’ rational choice/transparency/openness rhetoric.

But back to my point–the trade groups help companies hide their advocacy positions, and amplify them. Check out my poor man’s version of the web of web advocacy below.

This is the letterhead of the opposition letter submitted by tech companies against California's AB 1291.

Conspiracy belief prevalence, according to Public Policy Polling, is as high as 51%

And it may be even more when one considers that there is likely non-overlap between many of these conspiracies. It really is unfortunate that there isn’t more social pushback against those who express conspiratorial views. Given both the historical and modern tendency of some conspiracy theories to be used to direct hate towards one group or another (scratch a 9/11 truther and guess what’s underneath), and that they’re basically an admission of one’s own defective reasoning, why is it socially acceptable to espouse conspiracy theories? They add nothing to discussion, and instead hijack legitimate debate because one contributor has abandoned all pretense of using actual evidence. Conspiracy theories are used to explain a belief in the absence of real evidence. Worse, they are so often just a vehicle to direct vitriol and hate.

We need less hate and partisanship. We should be able to disagree with a president without saying that he’s part of an agenda21/commoncore/obamacare/nazi/fascist/communist/North Korean conspiracy to make American citizens 3rd-world slaves (not an exaggeration). We should be able to disagree with a corporation’s policies without asserting that its objective is mass murder. What is the benefit of this rhetoric? It’s just designed to poison our discourse, and inspire greater partisanship, divisiveness and incivility. Conspiracy theories are often a more subtle way to mask vile invective towards whichever group you hate. As you look underneath these theories you see it’s really just irrational hatred for somebody: liberals, conservatives, homosexuals, different races or religions, governments, or even certain professions. This is because at the root of the need for conspiratorial thinking is some irrational, overvalued idea, and often the open expression of the belief would result in social scorn.

In my experience, almost everyone carries one really cranky belief they can’t seem to shake, no matter how evidence-based their other positions are (probably because we are all capable of carrying some overvalued ideas). But it’s worth peering through PPP’s full results to see the nature of some of these associations.

For one, I think some of these associations are spurious, poorly questioned, or just reflect misinformation rather than conspiracy. For instance:

44% of voters believe the Bush administration intentionally misled the public about weapons of mass destruction to promote the Iraq War, while 45% disagree. 72% of Democrats believed the statement while 73% of Republicans did not. 22% of Democrats, 33% of Republicans and 28% of independents believe Saddam Hussein was involved in the 9/11 terrorist attacks.

Many have questioned the inclusion of this question because, in reality, there were no weapons of mass destruction found in Iraq. So the question of whether we were “misled” or “intentionally misled” puts us in the murky position of having to guess at the motivations of individuals like Bush and Cheney. Mind-reading is a dubious activity, and I tend to subscribe to the Napoleonic maxim that you shouldn’t ascribe to malice that which can be explained by incompetence (also known as Hanlon’s razor). Is it conspiratorial to think maybe they were more malicious than incompetent? While I think that administration really were “true believers,” of course I don’t know for sure, and I don’t think it’s fair to describe such thinking as conspiratorial reasoning. Instead it’s just the dubious but common practice of guessing at the intentions of others. The generally similar numbers on the Saddam Hussein/9/11 connection, I believe, just suggest ignorance rather than active belief in a conspiratorial framework (keeping in mind that with a margin of error of about 3%, these aren’t huge partisan differences like those over WMD).
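For a sense of scale on that margin of error: under the standard simple-random-sample formula, a poll of roughly a thousand respondents gives about a three-point margin at 95% confidence. The n = 1,000 below is my illustrative assumption, not PPP’s actual sample size:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a simple random sample proportion.

    Uses p = 0.5, the worst case, which is the convention pollsters
    quote; z = 1.96 is the 95% normal critical value.
    """
    return z * math.sqrt(p * (1 - p) / n)

# A hypothetical sample of 1,000 respondents:
moe = margin_of_error(1000)
print(f"{moe:.1%}")  # → 3.1%
```

Quadrupling the sample only halves the margin (the error shrinks with the square root of n), which is why national polls rarely get much tighter than a couple of points.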

One of the most disappointing numbers was on belief in a conspiracy behind JFK’s assassination:

51% of Americans believe there was a larger conspiracy at work in the JFK assassination, while 25% think Lee Harvey Oswald acted alone.

That’s 51% conspiratorial belief, 24% probably showing ignorance of one of the most important events of the last century, and 25% actually informed. This is pretty sad. The movements of Oswald were so thoroughly investigated and known, the hard evidence for his planning and involvement is so clear, the conspirators so unlikely (the mob/CIA/LBJ/KGB hiring a crackpot loser communist for assassinations?), and the fabrications of the conspiracists so plain (asserting the shots couldn’t be made despite their being easily replicated, and even improved on, by everyone from the Warren Commission to the Discovery Channel; disparaging his marksmanship when LHO was a Marine sharpshooter; altering the positions of the occupants of the car to make the bullet path from JFK to Connally appear unlikely; etc.) that it’s sad so many have bought into this nonsense. The historically bogus picture JFK, by Oliver Stone, may also play a large part in this, and is an example of why Oliver Stone is really a terrible person. People that misrepresent history are the worst. If anyone wants a good book about the actual evidence of what happened that day, one that also demolishes the conspiracy position, Reclaiming History by Vincent Bugliosi is my favorite, as well as the most thorough.

But there is one redeeming feature of conspiracy about the JFK assassination. For the most part, conspiratorial ideas on the subject aren’t due to some dark part in people’s souls, as for many other conspiracies, but rather the very human need to ascribe more to such earth-shattering events as the assassination of a president than just the madness of a pitiable loser. The imbalance between the magnitude of the event, and the banal crank that accomplished it, is simply too much. There’s no way that a 24-year-old, violent, wife-beating, Marxist roustabout could be responsible for the death of a man like JFK right? Sadly no. The evidence shows even a man that pathetic can destroy the life of a much greater man with a cheap rifle and a simple plan.

The conspiracy theories embedded in this poll that really disturb me, because I think they demonstrate the effect of irrational hate, are ones like the question of whether President Obama is the antichrist (although is that even really a conspiracy?). 13% of respondents believed this; 5% of those that voted for him still answered in the affirmative (really? you voted for the antichrist?), as opposed to 22% of those that voted for Romney. Do we really need to elevate political disagreement to the level of labeling people the antichrist? Around 9% thought the government adds fluoride for “sinister” reasons, and 11% believe in the LIHOP 9/11 conspiracy theory. These people clearly think very little of their fellow Americans, and believe some really demonic things about our government. Our government is neither competent enough nor evil enough to engage in, then successfully cover up, either of these things. Our top spy couldn’t even hide a tawdry affair.

Other conspiracy theories seem to indicate there is a baseline of people, at about 15%, who will believe just about anything, from the moon landing being hoaxed to bigfoot. I would have actually pegged this number higher, given my pessimism about rational thought, but that seems to be what we can read from this. However, without being able to see whether it was the same people answering yes to each individual absurd conspiracy, from reptilians to “government adds secret mind-controlling technology to television broadcast signals,” it’s possible this number is actually much larger. I would be curious to see the data on the overlap between these questions, as the phenomenon of crank magnetism is well known.

Ultimately, I read this data as saying that Americans have a big problem with conspiracy theories entering our political discourse. We should be embarrassed that as many as 37% of us believe that global warming is a “hoax.” That requires belief in a grand conspiracy of scientists, policy-makers, journals, editors, etc., all acting together to somehow fabricate data for a single objective – often described as a world-government conspiracy to cede our sovereignty to the UN. Somehow, every single national scientific body, all those national academies, all those journals, all those scientists, and all those governments are working in perfect secrecy according to some master plan (which I’m often accused of being a part of, though I’m sure I’m missing the memo), and this is plausible how? The answer is, it’s not, unless you remain steadfastly ignorant of how science actually works and progresses.

Everyone, of any political persuasion, should be embarrassed by the conspiracy-theorists in their ranks. This isn’t healthy thinking, it isn’t rational discourse, and it only serves to divide us and make us hate. Enough of this already.

There Are Legitimate Criticisms of Obamacare – Hospitals Should Not Be Penalized for Readmissions

Crazy ranting about impending socialism/fascism aside, there are legitimate critiques to be made of Obamacare. One policy in particular that raises my ire is penalizing hospitals over performance metrics, and penalizing readmissions in particular. The way it works is this: patients are admitted to the hospital, treated, and eventually discharged, but if the patient then bounces back and is readmitted shortly after hospitalization, it is taken as an indicator of a failure of adequate care:

Under the new federal regulations, hospitals face hefty penalties for readmitting patients they have already treated, on the theory that many readmissions result from poor follow-up care.

It makes for cheaper and better care in the long run, the thinking goes, to help patients stay healthy than to be forced to readmit them for another costly hospital stay.

So hospitals call patients within 48 hours of discharge to find out how they are feeling. They arrange patients’ follow-up appointments with doctors even before a patient leaves. And they have redoubled their efforts to make sure patients understand what medicines to take at home.

Seems reasonable, right? These are things that are part of good medical care: good follow-up, clarity with prescriptions, etc. It should be the responsibility of hospitals to get patients plugged into the safety net, assign social workers, and make sure patients won’t fail because they lack resources at home. However, the problem arises when the ideal of punishing readmissions as “failures” crashes into the reality of the general failure of our social safety net:

But hospitals have also taken on responsibilities far outside the medical realm: they are helping patients arrange transportation for follow-up doctor visits, get safe housing or even find a hot meal, all in an effort to keep them healthy.

“There’s a huge opportunity to reduce the cost of medical care by addressing these other things, the social aspects,” said Dr. Samuel Skootsky, chief medical officer of the U.C.L.A. Faculty Practice Group and Medical Group.

Medicare, which monitors hospitals’ compliance with the new rules, says nearly two-thirds of hospitals receiving traditional Medicare payments are expected to pay penalties totaling about $300 million in 2013 because too many of their patients were readmitted within 30 days of discharge. Last month, the agency reported that readmissions had dropped to 17.8 percent by late last year from about 19 percent in 2011.

But increasingly, health policy experts and hospital executives say the penalties, which went into effect in October, unfairly target hospitals that treat the sickest patients or the patients facing the greatest socioeconomic challenges. They say a hospital’s readmission rate is not a clear measure of the quality of care it provides, noting that hospitals with higher mortality rates may also have fewer returning patients.

“Dead patients can’t be readmitted,” Dr. Henderson said.

This is a problem with the careless application of rewards and penalties tied to medical outcomes. While I think it’s a healthy response that hospitals are taking on more of the social work that formerly would have been the arena of government programs, there is another defense mechanism used when government creates perverse incentives in health care. When you create payment incentives for good outcomes, you run the risk of patient selection, discrimination, and fraud. My favorite paper on this topic comes from the British NHS and its attempt to reward physicians for better clinical outcomes. My advice with this paper (and with most papers, frankly) is to ignore what the authors say about their data (and the amazing success of their program!) and just look at the data for yourself. What they found when rewarding physicians based on health metrics was that doctors who treated the young, healthy, and rich did well; those with more patients, poorer patients, and older patients did more poorly. Finally, physicians who filed lots of “exception reports” to eliminate all their poorly performing patients did great (yay, fraud!).

Metrics are good for identifying problems, but the mistake is assuming that poor performance on a metric has everything to do with the physicians or the hospitals, or that slapping a penalty on poor performance will fix the problem. Sometimes you’re studying society, not medical care. Incentive structures that put the burden on hospitals to take care of the most basic needs of their patients are going to penalize the hospitals that take care of the neediest, sickest, oldest patients, and reward those who treat insured, wealthy, younger, and fewer patients. Worse, if you penalize hospitals for taking care of difficult patient populations, I can predict the outcome: more bogus (and occasionally dangerous) transfers, more patients dumped on public and university hospitals, and all the other tricks of patient selection private hospitals can engage in to avoid getting stuck with the economic losses. That is, patients who are really sick, really poor, really old, and most in need of care will get transferred, obstructed, and dumped. Hospitals that are referral centers – major university and public hospitals that can’t refuse or transfer problem patients – will end up with a disproportionate share of the penalties, because they are often the healthcare providers of last resort. Not surprisingly, the early data already show this is happening:

The second important development was the release of data on who will be penalized: two thirds of eligible U.S. hospitals were found to have readmission rates higher than the CMS models predicted, and each of these hospitals will receive a penalty. The number of hospitals penalized is much higher than most observers would have anticipated on the basis of CMS’s previous public reports, which identified less than 5% of hospitals as outliers. In addition, there is now convincing evidence that safety-net institutions (see graphs: “Proportion of Hospitals Facing No Readmissions Penalty” [Panel A] and “Median Amount of Penalty” [Panel B], according to the proportion of a hospital’s patients who receive Supplemental Security Income), as well as large teaching hospitals, which provide a substantial proportion of the care for patients with complex medical problems, are far more likely to be penalized under the HRRP. Left unchecked, the HRRP has the potential to exacerbate disparities in care and create disincentives to providing care for patients who are particularly ill or who have complex health needs, particularly if the penalties are larger than hospitals’ margins for caring for these patients.

It would be unfortunate if, in the course of creating incentives for better care, we fall into the same old trap of punishing those who take care of the neediest. What we need instead is to acknowledge that one major source of bad outcomes is a broken social safety net. We can’t just keep creating unfunded mandates that put all the onus of taking care of the uninsured, the poor, and the elderly on hospitals, and punish the centers that already carry the largest social burdens with responsibility for our nation’s failure to take care of its own. Unfortunately, our answer to problems like these is always to create one more shell game that hides the real, unavoidable costs of taking care of people by shifting them around. This will just result in higher bills for the insured, more crazy chargemaster fees, overburdened public and university hospitals, and ultimately, a system of regressive taxation.

Natural News' Mike Adams Adds Global Warming Denialism to HIV/AIDS denial, Anti-vax, Altie-med, Anti-GMO, Birther Crankery

I still think that list is pretty incomplete (the RationalWiki has more), but it’s interesting to see a potential internal ideological conflict as Adams sides with big business and the fossil fuel industry to suggest CO2 is the best gas ever. While he doesn’t appear to directly deny that CO2 is a greenhouse gas, he’s managed to merge his anti-government conspiratorial tendencies with his overriding naturalistic fantasy to decide the government (and Al Gore) are conspiring to destroy our power infrastructure with carbon taxes, and to deny the world the benefit of 1,000 ppm CO2 in the atmosphere. His solution? Pump coal power exhaust into greenhouses growing food. I’m not kidding:

This brings up an obvious answer for what to do with all the CO2 produced by power plants, office buildings and even fitness centers where people exhale vast quantities of CO2. The answer is to build adjacent greenhouses and pump the CO2 into the greenhouses.

Every coal-fired power plant, in other words, should have a vast array of greenhouses surrounding it. Most of what you see emitted from power plant smokestacks is water vapor and CO2, both essential nutrients for rapid growth of food crops. By diverting carbon dioxide and water into greenhouses, the problem of emissions is instantly solved because the plants update the CO2 and use it for photosynthesis, thus “sequestering” the CO2 while rapidly growing food crops. It also happens to produce oxygen as a “waste product” which can be released into the atmosphere, (slightly) upping the oxygen level of the air we breathe.

He seems to have forgotten about all the mercury, lead, cadmium, volatile organics, sulfur, etc., emitted by burning coal. I wonder how these different crank theories manage to occupy the same brain, as his mercury paranoia appears temporarily overwhelmed by his anti-government conspiracism. I mean, he’s defending burning coal. It boggles the mind. I’m not exactly the biggest food-purity buff, but even I find the idea of growing food in coal-fired exhaust somewhat, well, insane? Mad? Totally bonkers? What’s the right word for it? Maybe we need a new word for this level of craziness. Maybe we should name it after Adams and call it Adamsian. You could say “Adamsian nuttery” to refer to a truly bizarre level of crankery. Unless it’s an April Fools’ Day prank – but then, it was published on the 31st… nope, I think he’s just that nuts.

Anti-GMO writers show profound ignorance of basic biology and now Jane Goodall has joined their ranks

It’s a sad day for the reality-based community: within the critiques of Jane Goodall’s new book, Seeds of Hope, we find that in addition to plagiarism and sloppiness with facts, she’s fallen for anti-GMO crank Jeffrey Smith’s nonsense.

When asked by The Guardian whom she most despised, Goodall responded, “The agricultural company Monsanto, because I know too much about GM organisms and crops.” She might know too much, but what if what she knows is completely wrong?

Many of the claims in Seeds of Hope can also be found in Genetic Roulette: The Documented Health Risks of Genetically Engineered Foods, a book by “consumer advocate” Jeffrey Smith. Goodall generously blurbed the book (“If you care about your health and that of your children, buy this book, become aware of the potential problems, and take action”) and in Seeds of Hope cites a “study” on GMO conducted by Smith’s “think tank,” the Institute for Responsible Technology.

Like Goodall, Smith isn’t a genetic scientist. According to New Yorker writer Michael Specter, he “has no experience in genetics or agriculture, and has no scientific degree from any institution” but did study “business at the Maharishi International University, founded by the Maharishi Mahesh Yogi.” (In Seeds of Hope, Goodall also recommends a book on GM by Maharishi Institute executive vice president Steven M. Druker, who also has no scientific training). As Professor Bruce Chassy, an emeritus food scientist at the University of Illinois, told Specter, “His only professional experience prior to taking up his crusade against biotechnology is as a ballroom-dance teacher, yogic flying instructor, and political candidate for the Maharishi cult’s natural-law party.” Along with fellow food scientist Dr. David Tribe, Chassy runs an entire website devoted to debunking Smith’s pseudoscience.

And it apparently escaped Goodall’s notice that Smith’s most recent book—the one that she fulsomely endorsed—features a foreword by British politician Michael Meacher, who, after being kicked out of Tony Blair’s government in 2003, has devoted a significant amount of time to furthering 9/11 conspiracy theories.

Goodall is, of course, not the first scientist of fame and repute to fall for crankery and pseudoscience. From Linus Pauling to Luc Montagnier, even Nobel Prize-winning scientists have fallen for pseudoscientific theories. However, we should always be saddened when yet another famous scientist decides to go emeritus and abandon the reality-based community.

There always seem to be a couple of factors at play when this happens. For one, such scientists appear to have reached such a status that it becomes very difficult for others to criticize them. It’s like a state of ultra-tenure, in which you practically have to insult the intelligence of an entire continent before people will object to your misbehavior. The second common factor seems to be that they start operating in a field in which they lack expertise, but assume their standing in unrelated fields should allow them to weigh in. This appears to be the case with Goodall, as even someone with rudimentary knowledge of molecular biology should be able to see the gaping holes in the anti-GMO movement’s logic.

For example, let’s start with the easy pickings at Natural News. A recent article by Jon Rapaport entitled “Brand new GMO food can rewire your body: more evil coming” is a perfect example of how the arguments made against GMO foods are based on a fundamentally unsound understanding of biology. The author writes:

It’s already bad. Very bad. For the past 25 years, the biotech Dr. Frankensteins have been inserting DNA into food crops.

The widespread dangers of this technique have been exposed. People all over the world, including many scientists and farmers, are up in arms about it.

Countries have banned GMO crops or insisted on labeling.

Now, though, the game is changing, and it’ll make things even more unpredictable. The threat is ominous and drastic, to say the least.

GM Watch reports the latest GMO innovation: designed food plants that make new double-stranded (ds) RNA. What does the RNA do? It can silence a gene. It can activate a gene that was silent.

If you imagine the gene structure as a board covered with light bulbs, in the course of living some genes light up (activation) and some genes go dark (silent) at different times. This new designed RNA can change that process. No one knows how.

No one knows because no safety studies have been done. If you have genes lighting up and going dark in unpredictable ways, the functions of a plant or a body can change randomly.

Pinball, roulette, use any metaphor you want to; this is playing with the fate of the human race. Walk around with designer-RNA in your body, and who knows what effects will follow.

At this point, I think anyone familiar with the science of RNA interference (RNAi) has slapped themselves in the forehead (for anyone who wants a decent introduction, the Wiki does a pretty good job). It’s clear that the author is projecting his own ignorance of RNAi onto the rest of us. Briefly, until about 20 years ago, the so-called “central dogma of molecular biology” was a one-way road: DNA is transcribed into RNA, which is then translated into a functional protein. Even this is a pretty gross simplification, but it’s fair to say that, prior to the discovery of RNAi, RNA was thought to be little more than a messenger in the cell, serving as an intermediary between the DNA code and the protein product. Yes, we knew that some RNA had enzymatic function, was incorporated into some proteins, etc., but it wasn’t seen as much of a regulatory molecule.

Then, after a few intriguing findings in plants, Fire and Mello discovered that RNA itself could control the translation of other genes in C. elegans. Almost by accident, they found that if you introduced a double-stranded RNA molecule corresponding to an RNA transcript, that transcript would be degraded and the protein it encoded wouldn’t be expressed. It was a surprising finding. One would think that what would work would be the anti-sense strand of RNA, which would bind the sense strand and somehow inhibit its entry into the ribosomal machinery, ultimately interfering with translation. Instead, what they found was that double-stranded RNA had a function all of its own, with a previously unknown cellular machinery specifically purposed with processing dsRNA and inhibiting gene function through an entirely different mechanism. Subsequently we’ve also found that RNAi not only can directly regulate the levels of RNA transcripts, but can also regulate gene suppression and activation directly at promoter sequences on the DNA itself.
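The targeting step at the heart of all this boils down to simple base-pair complementarity, which can be sketched in a few lines of code. This is only a toy illustration of the matching idea — the sequences and function names below are invented for the example, and real RNAi involves dedicated cellular machinery (Dicer, the RISC complex) rather than naive string matching:

```python
# Toy sketch of the core idea behind RNAi: a short guide strand derived
# from double-stranded RNA base-pairs with a complementary stretch of a
# messenger RNA, marking that transcript for degradation.
# Illustrative model only -- sequences here are hypothetical.

def reverse_complement(rna: str) -> str:
    """Return the reverse complement of an RNA sequence (A-U, G-C pairing)."""
    pairs = {"A": "U", "U": "A", "G": "C", "C": "G"}
    return "".join(pairs[base] for base in reversed(rna))

def is_silenced(transcript: str, guide: str) -> bool:
    """A transcript is 'silenced' if the guide strand is complementary
    to some contiguous stretch of it."""
    return reverse_complement(guide) in transcript

# A made-up transcript, and a guide strand complementary to part of it.
mrna = "AUGGCUACGGAUCCGUUAAGC"
guide = reverse_complement("GCUACGGAUCCG")  # guide pairs with this region

print(is_silenced(mrna, guide))  # True: the complementary stretch is found
```

The point of the sketch is only that specificity comes from sequence: a guide strand can only direct degradation of transcripts it can base-pair with, which is why an inhibitory RNA is not some universal gene-scrambler.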

It’s amazing that, decades after the discovery of RNA and the understanding of its primary function, we discovered this new and incredibly complex layer of genetic regulation by RNA molecules, involved in everything from development to disease. But what does that mean for us? Should we be worried about gene-regulating RNA molecules in our food?

Of course not! RNAi is an intrinsic function of most eukaryotes. Just about every food you’ve ever eaten in your entire life is chock-full of RNA molecules, including double-stranded inhibitory RNAs involved in the normal biological processes occurring within the cell. If other organisms could affect us by poisoning us with RNA, we wouldn’t last a minute. Weirdly, in GMO-paranoia world, however, whatever we consume has the potential to take over our bodies: the basic molecules of all life, which exist in everything we eat, take on new powers once handled by human scientists. The paper hinted at as evidence of this risk (but of course never actually cited by the author), which suggests miRNA may have “cross-kingdom” effects, is a great example of crank cherry-picking, as the evidence suggesting the finding may be an artifact is of course not mentioned. And we shouldn’t be surprised, as it would be a pretty extraordinary hole in our defenses if other organisms could so easily modify our gene expression.

One of the great limitations of gene therapy has been that it’s extremely difficult to introduce genes, or specifically regulate them, with external vectors. If it were as simple as just feeding us RNA, that would be something. For better or worse (likely better), your body is extremely resistant to other organisms tinkering with its DNA or cellular machinery.

Ok, but then you say, “Hey, that’s Natural News, we know they’re morons.” Fine, then how about Clair Cummings in Common Dreams, panic-posting this week about the GMO threat to our water supply? Great evidence that “progressive” is no insulation against “anti-science”:

Today is World Water Day. The United Nations has set aside one day a year to focus the world’s attention on the importance of fresh water. And rightly so, as we are way behind in our efforts to protect both the quantity and quality of the water our growing world needs today.

And now, there is a new form of water pollution: recombinant genes that are conferring antibiotic resistance on the bacteria in the water.

Researchers in China have found recombinant drug resistant DNA, molecules that are part of the manufacturing of genetically modified organisms, in every river they tested.

Genetically engineered organisms are manufactured using antibiotic resistant genes. And these bacteria are now exchanging their genetic information with the wild bacteria in rivers. As the study points out, bacteria already present in urban water systems provides “advantageous breeding conditions for the(se) microbes.”

Antibiotic resistance is perhaps the number one threat to public health today.

Transgenic pollution is already common in agriculture. U.C. Berkeley Professor Ignacio Chapela was the first scientist to identify the presence of genetically engineered maize in local maize varieties in Mexico. He is an authority on transgenic gene flow. He says it is alarming that “DNA from transgenic organisms have escaped to become an integral component of the genome of free-living bacteria in rivers.” He adds that “the transgenic DNA studied so far in these bacteria will confer antibiotic resistance on other organisms, making many different species resistant to the antibiotics we use to protect ourselves from infections.”

Our expensive attempts to filter and fight chemicals with other chemicals are only partially effective. Our attempts to regulate recombinant DNA technology has failed to prevent gene pollution. The only way to assure a sustainable source of clean water is to understand water for what it is: a living system of biotic communities, not a commodity. It is a living thing and as such it deserves our respect, as does the human right to have abundant fresh clean water for life.

You heard it: now they’re making up a new category of pollution, “gene pollution.”

Let’s go back to some of the basic science here, so, again, we can display just how silly and uninformed these Chicken Littles are. When molecular biologists wish to produce large quantities of a DNA or protein, what they usually do is insert the sequence into an easy-to-grow organism like E. coli, or yeast, or some other cell, and then have the biologic machinery of those cells produce it for us. This is one of the simplest forms of genetic modification, and we use it for everything from making plasmid DNA in the lab to producing recombinant human insulin for diabetics. In order to make sure your organism is making your product of interest, you include a gene that encodes resistance to an antibiotic (in bacteria, most commonly ampicillin), so that when you grow your bug with that antibiotic in the mix, the only cells growing are the ones that are working for you. Other resistance genes we use are often for antibiotics we don’t use in humans, like hygromycin or neomycin, which is nephrotoxic if injected (but also poorly absorbed).

“That’s terrible!”, you say, “how could we teach so many bacteria to be resistant to antibiotics! Surely this will kill us all!”

Um, no. For one, the resistance genes we use aren’t novel or made de novo by humans; they already existed before a single human was ever treated with an antibiotic. The first antibiotic discovered, penicillin, is a natural product. It’s an ancient agent in an ongoing war between microorganisms. The antidote for penicillin and related molecules was actually discovered at about the same time as penicillin itself. Beta-lactamase, which breaks open the structure of the penicillins and inhibits their antibiotic effects, was around long before humans figured out how to harness antibiotics for our own purposes. The gene, which we clone into plasmids to make our GMO bacteria work for us, came from nature too. Now, if we were growing bacteria in vancomycin or linezolid, yeah, I’d be pissed, but that’s not what’s happening. And even though we still use older penicillins clinically, it’s with full knowledge that resistance has been around for decades, and they are used for infections that we know never become resistant to the drugs, like group B strep (or syphilis). The war over penicillin is over. We lost. Any bug that’s going to become resistant to penicillin already is.

The antibiotic resistance that plagues our ICUs and hospitals doesn’t come from GMOs being taught to fight ampicillin; it comes from overuse of more powerful antibiotics in humans. The genes that are providing resistance to even beta-lactamase-resistant antibiotics like the carbapenems or methicillin are the result of a more classic form of genetic modification – natural selection.

So what is the risk to humans from the DNA encoding a wimpy beta-lactamase or whatever being detected in water? Zilch. Nada. Zip.

The paranoia over recombinant DNA has persisted for decades despite no rational basis for a threat to humans or other living things. The continued paranoia over rDNA is a sign that the GMO paranoids get their science from bad movies, not from textbooks or any serious knowledge of the risks and benefits of this technology. rDNA is why we have an unlimited supply of insulin; it’s how we have virtually all of our knowledge of molecular biology; it’s even how we understand how things like antibiotic resistance work. It’s been around since the 70s, and how many times have you heard of it actually hurting a person?

This is the state of the argument over genetically modified organisms. To the uninitiated this stuff sounds like it might be kind of scary. But with any real understanding of the molecular mechanisms of these technologies, the plausibility of their risk drops to zero. Sadly, Goodall has not only shown a pretty poor level of scholarship with this new book, but has also fallen in with cranks promoting implausible risks of this biotechnology. It’s unfortunate, because she should be respected for her previous work as an environmentalist and a conservationist. This is what is so annoying about anti-GMO paranoia. It makes environmentalists look like idiots, as it distracts from actual threats to the environment with invented threats and irrational fears of biotech. I’m sure I’ll now be accused of being in the pocket of big ag, as I am in every thread on GMO, but I assure you, I have no financial interests in, or any dealings with, these companies. I’m irritated with the anti-GMO movement because it’s an embarrassment. It’s Luddism and ignorance masquerading as environmentalism. It’s bad biology. It’s the progressive equivalent of creationism or global warming denial. It’s classic anti-science, and we shouldn’t tolerate it.

Fixing the Chargemaster Problem for the Uninsured

For those disturbed by the evils of the hospital chargemaster as exposed by Brill’s piece in Time, Uwe E. Reinhardt’s proposed solution is a must-read.

While hospitals are never going to voluntarily charge the uninsured the same rate they charge Medicare (and will probably be less forgiving the more they think they can get out of you), that’s no reason we can’t force them to with state law. Apparently that’s what Reinhardt had them do in Jersey:

In the fall of 2007, Gov. Jon Corzine of New Jersey appointed me as chairman of his New Jersey Commission on Rationalizing Health Care Resources. On a ride to the airport at that time I learned that the driver and his family did not have health insurance. The driver’s 3-year-old boy had had pus coming out of a swollen eye the week before, and the bill for one test and the prescription of a cream at the emergency room of the local hospital came to more than $1,000.

By circuitous routes I managed to get that bill reduced to $80; but I did not leave it at that. As chairman of the commission, I put hospital pricing for the uninsured on the commission’s agenda.

After some deliberation, the commission recommended initially that the New Jersey government limit the maximum prices that hospitals can charge an uninsured state resident to what private insurers pay for the services in question. But because the price of any given service paid hospitals or doctors by a private insurer in New Jersey can vary by a factor of three or more across the state (see Chapter 6 of the commission’s final report), the commission eventually recommended as a more practical approach to peg the maximum allowable prices charged uninsured state residents to what Medicare pays (see Chapter 11 of the report).

Five months after the commission filed its final report, Governor Corzine introduced and New Jersey’s State Assembly passed Assembly Bill No. 2609. It limits the maximum allowable price that can be charged to uninsured New Jersey residents with incomes up to 500 percent of the federal poverty level to what Medicare pays plus 15 percent, terms the governor’s office had negotiated with New Jersey’s hospital industry.
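As a rough sketch, the bill’s pricing rule as Reinhardt summarizes it reduces to a simple calculation. The function and the dollar figures below are hypothetical illustrations (the Medicare rate and poverty-level number are made up), not actual reimbursement data:

```python
# Sketch of the pricing rule in New Jersey's Assembly Bill No. 2609 as
# described above: uninsured residents with incomes up to 500% of the
# federal poverty level can be billed at most what Medicare pays plus 15%.
# All dollar amounts below are hypothetical.

def max_allowable_charge(medicare_rate: float,
                         income: float,
                         federal_poverty_level: float) -> float:
    """Cap the bill at Medicare + 15% for qualifying uninsured patients;
    otherwise the chargemaster price is unconstrained by this rule."""
    if income <= 5.0 * federal_poverty_level:
        return round(medicare_rate * 1.15, 2)
    return float("inf")  # the bill's cap does not apply

# Hypothetical example: a $1,000 chargemaster bill for a service Medicare
# reimburses at $120, billed to a patient at 300% of the poverty level.
capped = max_allowable_charge(medicare_rate=120.0,
                              income=3.0 * 23_550,   # made-up FPL figure
                              federal_poverty_level=23_550)
print(capped)  # 138.0 -- versus the $1,000 chargemaster price
```

The contrast between the capped figure and the chargemaster price is the whole point: the rule pegs the uninsured patient’s exposure to a benchmark actually related to cost.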

Reinhardt also makes clear that the problem of excess cost is not the chargemaster or hospital profits, which are not so extraordinary (at least compared to excess drug costs, insurance administration, inefficiently delivered services, unnecessary services, etc.), as I did in my original piece. But the injustice of the uninsured facing these inflated bills, which are designed as bargaining positions against large payers like health insurance companies, should be addressed. You can’t bleed a radish, and hospitals should stop trying to when it comes to the uninsured. Since they won’t without government encouragement, such legislation should be considered at the state and national levels.

New homebirth statistics show it's way too dangerous, and Mike Shermer on liberal denialism

Two links today for denialism blog readers, both pretty thought-provoking. The first, from Amy Tuteur, is on the newly released statistics on homebirth in Oregon. It seems that her crusade to have the midwives share their mortality data is justified: when they were forced to release these data in Oregon, planned homebirth was about 7-10 times more likely to result in neonatal mortality than planned hospital birth.

I’m sure Tuteur won’t mind me stealing her figure and showing it here (the original source of the data is Judith Rooks’s testimony):

Oregon homebirth neonatal mortality statistics, from the Skeptical OB.

Armed with data such as these, obstetricians and midwives need to make it a point of discussion that out-of-hospital births carry a dramatically higher neonatal mortality, and that this is worse for midwives without nursing training (the direct-entry midwives, or DEMs). It’s their body and their decision, but this information should be crucial to informing women as to whether or not they should take this risk. These figures also reflect only neonatal mortality; one could assume they speak to higher rates of morbidity as well, as longer distances and poorer recognition of fetal distress and complications will lead to worse outcomes even when the child survives. It should be noted that these data are also consistent with nationwide CDC data on homebirth DEMs, and are actually better than the midwife data for some states, like Colorado.
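For readers unfamiliar with the statistic, the "7-10 times more likely" figure is a relative risk, which is straightforward to compute. The counts below are hypothetical placeholders, not the Oregon numbers; they are chosen only so the ratio falls in the reported range:

```python
# Sketch of a relative risk (risk ratio) calculation comparing neonatal
# mortality between planned homebirth and planned hospital birth.
# The counts are invented for illustration, NOT the Oregon data.

def relative_risk(deaths_a: int, births_a: int,
                  deaths_b: int, births_b: int) -> float:
    """Risk ratio of group A relative to group B."""
    return (deaths_a / births_a) / (deaths_b / births_b)

# Hypothetical: 8 deaths per 2,000 planned homebirths vs.
# 25 deaths per 50,000 planned hospital births.
rr = relative_risk(8, 2_000, 25, 50_000)
print(round(rr, 1))  # 8.0 -- i.e., eight times the baseline risk
```

Note that a risk ratio says nothing about absolute risk, which is why both numbers belong in any informed-consent conversation.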

The second article worth pointing out today (even though it’s old) is from Michael Shermer in Scientific American, on the liberal war on science. Regular readers know that I’m of the belief that there isn’t really a difference between left- and right-wing ideology in acceptance of science; each side just rejects different findings that collide with its ideology.

The left’s war on science begins with the stats cited above: 41 percent of Democrats are young Earth creationists, and 19 percent doubt that Earth is getting warmer. These numbers do not exactly bolster the common belief that liberals are the people of the science book. In addition, consider “cognitive creationists”—whom I define as those who accept the theory of evolution for the human body but not the brain. As Harvard University psychologist Steven Pinker documents in his 2002 book The Blank Slate (Viking), belief in the mind as a tabula rasa shaped almost entirely by culture has been mostly the mantra of liberal intellectuals, who in the 1980s and 1990s led an all-out assault against evolutionary psychology via such Orwellian-named far-left groups as Science for the People, for proffering the now uncontroversial idea that human thought and behavior are at least partially the result of our evolutionary past.

There is more, and recent, antiscience fare from far-left progressives, documented in the 2012 book Science Left Behind (PublicAffairs) by science journalists Alex B. Berezow and Hank Campbell, who note that “if it is true that conservatives have declared a war on science, then progressives have declared Armageddon.” On energy issues, for example, the authors contend that progressive liberals tend to be antinuclear because of the waste-disposal problem, anti–fossil fuels because of global warming, antihydroelectric because dams disrupt river ecosystems, and anti–wind power because of avian fatalities. The underlying current is “everything natural is good” and “everything unnatural is bad.”

Whereas conservatives obsess over the purity and sanctity of sex, the left’s sacred values seem fixated on the environment, leading to an almost religious fervor over the purity and sanctity of air, water and especially food.

I’m worried that Shermer has confused liberal Luddism with denialism, and I would argue some anti-technology skepticism is healthy and warranted. While I agree that the anti-GMO movement does delve into denialist waters with regularity, these are not good examples he has chosen. One needs to be cautious with technology, and it’s a faith-based assumption that technology can solve all ills. I’m with Evgeny Morozov on this one: the assumption that there is (or should be) a technological fix for every problem has become almost a religious belief system. Appropriately including the potential perils of a technology in its cost-benefit analysis is not a sign of being anti-science. Even overblowing specific risks because of individual values isn’t really anti-science either. It might be anti-human to put birds before human needs, as with wind turbines, but no one is denying that wind turbines generate electricity. And while liberals may be overestimating the risk of, say, nuclear waste generation relative to carbon waste generation (guess which is a planet-wide problem!), it doesn’t mean they don’t think nuclear power works or is real. They just have an arguably skewed risk perception, which is an established problem in cases of ideological conflict with science or technology. There is also reasonable debate to be had over the business practices of corporations (Monsanto, in his example), which need and deserve strong citizen push-back and regulation to prevent anti-competitive or abusive behavior.

Anti-science requires the specific rejection of data, the scientific method, or strongly supported scientific theory due to an ideological conflict, not because one possesses superior data or new information. I don’t think Shermer actually listed very good examples of this among liberals. If you’re going to talk about GMO denialism, don’t complain about people fighting with Monsanto; talk about how anti-GMO advocates make up crazy claims about the foods (see Natural News, for example), such as that they cause autism or cancer. And even then it’s difficult to say this is a completely liberal form of denialism, as Kahan’s work again shows a pretty even ideological split on GMO.

I agree that liberals are susceptible to anti-science, and the mechanism is the same – ideological conflict with scientific results. However, the liberal tendency towards skepticism of technology is healthy in moderation, and anti-corporatism is not automatically anti-science. In an essay that was striving to say we must be less ideological and more pragmatic, Shermer has wrongly lumped technological skepticism and anti-corporatism in with science denial.

Lead Industry & the Deck of Cards

Helen Epstein has an interesting review of Lead Wars: The Politics of Science and the Fate of America’s Children by Gerald Markowitz and David Rosner, in the current New York Review of Books. The review is worth reading to better understand the public policy problem of lead in products and the environment. But I cannot help but point out that the article could be used to provide more footnotes to the Denialists’ Deck of Cards:

… The lead companies also paid scientists who produced flawed studies casting doubt on the link between lead exposure and child health problems. When University of Pittsburgh professor Herbert Needleman first showed that even children with relatively modest lead levels tended to have lower intelligence and more behavioral problems than their lead-free peers, some of these industry-backed researchers claimed that his methods were sloppy and accused him of scientific misconduct (he has since been exonerated).

The companies also hired a public relations firm to influence stories in The Wall Street Journal and other conservative news outlets, which characterized Needleman as part of a leftist plot to increase government spending on housing and other social programs…

The Good, Not So Good, and Long View on Bmail

Denialism blog readers, especially those at academic institutions that have outsourced or are considering outsourcing email, may be interested in my essay on UC Berkeley’s migration to Gmail.  This is cross-posted from the Berkeley Blog.

Many campuses have decided to outsource email and other services to “cloud” providers.  Berkeley has joined in by migrating student and faculty email to bMail, operated by Google.  In doing so, it has raised some anxiety about privacy and autonomy in communications.  In this post, I outline some advantages of our outsourcing to Google, some disadvantages, and how we might improve upon our IT outsourcing strategy, especially for sensitive or especially valuable materials.

Why outsourcing matters

Many of us welcome possible alternatives to CalMail, which experienced an embarrassing, protracted outage in fall 2011.  Many of us welcomed the idea of migrating to Gmail, because we use it personally, have found it user-friendly and reliable, and because it is provided by a hip company that all of our students want to work for.

But did we really look before we leaped?  Did we really consider the special context of higher education, one that requires us to protect both students and faculty from outside meddling and university-specific security risks?  Before deciding to outsource, we have to be sure that there are service providers that understand our obligations, norms, and the academic context.

In part because of the university’s particular role, our email is important and can be unusually sensitive to a variety of threats.  Researchers at Berkeley are conducting clinical trials with confidential data and patient information.  We are developing new drugs and technologies that are extremely valuable.  Some of us perform research that is classified, export-controlled, or otherwise could, if misused, cause great harm.  Some of us consult for Fortune 500 companies, serve as lawyers with duties of confidentiality, or serve as advisors to the government.  Some of us are the targets of extremist activists who try to embarrass us or harm us physically.  Some of us are critical of companies and repressive governments.  These entities are motivated to find out the identities of our correspondents and our strategic thinking, through either legal or technical means.  And not least, our email routinely contains communications with students about their progress, foibles, and other sensitive information, including information protected by specific privacy laws, such as the Family Educational Rights and Privacy Act (FERPA). We have both legal and ethical duties to protect this information.

Our CalMail operators know these things, and as I understand it, they have been very careful in protecting the privacy of campus communications. Outsourcing providers such as Google, however, may be far less likely to be familiar with our specific duties, norms, and protocols, or to have in place procedures to implement them. Outsource providers may be motivated to provide services that they can develop and serve “at scale” and that do not require special protocols. As described below, this seems to have been the case with Google’s contracts with academic institutions.

Finally, communications platforms are powerful.  They are the focus of government surveillance and control because those who control communications can influence how people think and how they organize.  Universities have historically experienced periodic pressures to limit research, publication, teaching, and speech. Without communications confidentiality, integrity, and availability, the quality of our freedom and the role we play in society suffers.  And thus the decision to entrust the valuable thoughts of our community to outsiders requires some careful consideration.

The Good

There are some clear benefits to outsourcing to Google.  They include:

  • An efficient, user-friendly communications system with a lot of storage.  The integration of Google Apps, such as Calendar, is particularly appealing, given the experience we have had with CalAgenda.  Google Drive is a pleasure compared to the awkward AFS.
  • Our communications may in some senses be more securely stored in the hands of Google.  Google has some of the best information security experts in the world.  They are experienced in addressing sophisticated, state-actor-level attacks against their network.  To its credit, Google has been more transparent about these attacks than other companies.
  • Although it is not implemented at Berkeley, Google offers two-factor authentication.  This is an important security benefit not offered by CalMail that could reduce the risk that our accounts are taken over by others.  Those of us using sensitive data, or who are at risk of retaliation by governments, hackers, activists, etc., should use two-factor authentication.
  • As a provider of services to the general public, Google is subject to a key federal communications privacy law.  This law imposes basic obligations on Google when data are sought by the government or private parties.  It is not clear that this law binds the operations of colleges and universities generally.  However, this factor is not very important with respect to Berkeley’s adoption of bMail, as we have adopted a strong electronic communications policy protecting emails systemwide.
  • Google recently announced that it will require government agents to obtain a probable cause warrant for user content.  This is important, because other providers release “stale” (that is, over 180 days old) data to government investigators with a mere subpoena.  A subpoena is very easy to obtain, whereas a probable cause warrant standard requires the involvement of a judge, an important check against overzealous law enforcement.  Google’s position protects us from the problem that our email archives can be obtained by many government officials who need only fill out and stamp a one-page form.

The Not So Good

Still, there are many reasons why outsourcing, and outsourcing to Google specifically, creates new risks.  While our IT professionals did an in-depth analysis of Google and Microsoft, it seems that the decision to outsource was made before the alternatives available to us were really evaluated.

  • We must consider issues around contract negotiations and whether services provided fulfill the requirements I set forth above. In initial negotiations, Google treated Berkeley IT professionals like ordinary consumers—it presented take-it-or-leave-it contracts.  Google was resistant to, though it eventually accepted, assuming obligations under FERPA, a critical concession for colleges and universities.  Google also used a gag clause in its negotiations with schools.  This made it difficult for our IT professionals to learn from other campuses about the nuances of outsourcing to Google.  As a result, much of what we know about how other campuses protected the privacy of their students and faculty is rumor that cannot be invoked, as it implicitly violates the gag clause.
  • On the most basic level, we should pause to consider that both companies the campus considered for outsourcing are the subject of 20-year consent decrees for engaging in deceptive practices surrounding privacy and/or security.  Google in particular, with its maximum transparency ideology, does not seem to have a corporate culture that appreciates the special context of professional secrecy.  The company is not only a fountainhead of privacy gaffes but also benefits from shaping users’ activities towards greater disclosure.
  • As discussed above, UC and Berkeley routinely handle very sensitive information, and many of us on campus have special obligations or particularized vulnerabilities.  Companies with valuable secrets do not place crown jewels in clouds.  When they do outsource, they typically buy “single-tenant” clouds, computers where a single client’s data resides on the machine.  Google’s service is a “multi-tenant” cloud, and thus Berkeley data will only be separated from others on a logical level.  Despite the contract negotiation, Google’s is a consumer-level service and our contract has features of that type of service.  There is a rumor that one state school addressed this issue by negotiating to be placed in Google’s government-grade cloud service, but because of the secrecy surrounding Google’s negotiations, I cannot verify this.
  • Third parties are a threat to communications privacy, but so are first parties—communications providers themselves.  While we may perceive cloud services as being akin to a locker that the user secures, in reality these are services where the provider can open the door to the locker.  In some cases, there is a technical justification for this, in other cases, companies have some business justification, such as targeting advertising or engaging in analysis of user data.
  • It is rumored that some campuses understood this risk, and negotiated a “no data mining clause.”  This would guarantee that Google would not use techniques to infer knowledge about users’ relationships with others or the content of messages.  Despite our special responsibilities to students to protect their information and our research and other requirements, we lack this guarantee.
  • Despite the good news about Google’s warrant requirement, we still need to consider intelligence agency monitoring of our data.  Any time data leaves the country, our government (and probably others) captures it at the landing stations and at repeater stations deep under the ocean.  And the bad news is our contract does not keep Berkeley data in the U.S.  Even while the data is stored in the country, there are risks.  For instance, the government could issue a national security letter to Google, demanding access to hundreds or even thousands of accounts while prohibiting notice to university counsel.  Prior to outsourcing, those demands would have to be delivered to university officials because our IT professionals had the data.  Again, to its credit, Google is one of the most forthcoming companies on the national security letter issue, and its reporting on the topic indicates that some accounts have been subject to such requests.
  • Google represented that its service meets a SAS 70 standard in response to security concerns, but it is not clear to me that this certification is even relevant.  SAS 70 speaks to the internal controls of an organization, and specifically to data integrity in the financial services context.  The University’s concerns are broader (confidentiality and availability are key elements) and apply to both external and internal controls and the University’s rights to monitor and verify.  There are notable examples of SAS 70 compliant cloud services with extreme security lapses, such as Epsilon (confidentiality) and AWS (availability).  SAS 70 allows the company, which is the client of the auditor, and the auditor itself, to agree upon what controls are to be assured.
  • Google will have few if any incentives to develop privacy-enhancing technologies for our communications platform, such as a workable encryption infrastructure.  As it stands, the contract creates no incentives or requirements for development of such technologies, and in fact, such development runs counter to Google’s interests.
  • In the end, CalMail was being very effectively maintained by only a few employees. It is not clear to me that an outsourced solution—which, in order for the security and other issues to be managed properly, requires Berkeley personnel to interface with the system and with Google—is necessarily less costly. This is especially concerning in light of the fact that we appear to have lost the connection to IT personnel who understand the sensitivity of the data we handle, and moved to a much more consumer-oriented product.

The long view

Looking ahead, we should carefully consider how we could assume the best posture for outsourcing. Instead of experimenting with Google, we would be better served by an evaluation of the campus needs that includes regulatory and ethical obligations and that captures the norms and values of our mission.  Provider selection should be broader than choosing between Google and Microsoft.

As a first step, we should charge our IT leadership with forming alliances with other institutions to jointly share information and negotiate with providers.  Google’s gag provision harmed our ability both to recognize risks and to address them.

We need to be less infatuated with “the cloud,” which to some extent is a marketing fad.  Many of the putative benefits of the cloud are disclaimed in these services’ terms of service.  For instance, a 2009 survey of 31 contracts found that, “…In effect, a number of providers of consumer-oriented Cloud services appear to disclaim the specific fitness of their services for the purpose(s) for which many customers will have specifically signed up to use them.”  The same researchers found that providers’ business models were related to the generosity of terms.  This militates towards providers that charge some fee for service as opposed to “free” ones that monetize user data.

We should charge our IT professionals with the duty of documenting problems with outsourced services.  To more objectively understand the cloud phenomenon, we should track the real costs associated with outsourcing, including outages, the costs of managing the relationship with Google, and the technical problems that users experience.  Outsourcing is not costless.  We could learn that employees have simply been transferred from the operation of CalMail to the management of bMail.  We should not assume that outsourced systems mean fewer people—they may appropriately require meaningful staffing to fulfill our needs. As the expiration date of the systemwide Google contract approaches in June 2015, these metrics will help us make an economical decision.

Finally, there are technical approaches that, if effective, could blunt, but not completely eliminate, the privacy problems created by cloud services.  Encryption tools, such as CipherCloud, exist to mask data from Google itself.  This can help hide the content of messages, reduce data mining risks from Google, and cause the government to have to come to Berkeley officials to gain access to content.  The emergence of these services indicates that there is a shared concern about storing even everyday emails in cloud services.  These services cost real money, but if we continue to think we can save money by handing over our communications systems to data mining companies, we are likely to end up paying in other ways.
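The core idea behind such gateway-encryption products can be sketched in a few lines: the campus holds the key, and only ciphertext ever reaches the provider, so neither data mining nor a direct government demand to the provider yields readable content.  The sketch below is purely illustrative (it is not CipherCloud’s actual design, and its HMAC-based toy stream cipher is not production cryptography); the message text and function names are my own hypotheticals.

```python
import hashlib
import hmac
import secrets


def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudorandom keystream from HMAC-SHA256 run in counter mode."""
    out = b""
    counter = 0
    while len(out) < length:
        block = nonce + counter.to_bytes(8, "big")
        out += hmac.new(key, block, hashlib.sha256).digest()
        counter += 1
    return out[:length]


def encrypt(key: bytes, plaintext: bytes) -> tuple[bytes, bytes]:
    """Encrypt on campus; only (nonce, ciphertext) is handed to the provider."""
    nonce = secrets.token_bytes(16)
    stream = keystream(key, nonce, len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, stream))
    return nonce, ciphertext


def decrypt(key: bytes, nonce: bytes, ciphertext: bytes) -> bytes:
    """Decrypt on campus, after retrieving the stored ciphertext."""
    stream = keystream(key, nonce, len(ciphertext))
    return bytes(c ^ k for c, k in zip(ciphertext, stream))


# The key never leaves campus; the provider stores only opaque bytes.
key = secrets.token_bytes(32)
message = b"Advising notes for student X (FERPA-protected)"
nonce, stored_at_provider = encrypt(key, message)
assert stored_at_provider != message          # provider cannot read it
assert decrypt(key, nonce, stored_at_provider) == message
```

The point of the exercise is the trust boundary, not the cipher: because decryption requires a key held by Berkeley, a subpoena or data-mining pipeline aimed at the provider alone recovers nothing, and requests for content must come to campus officials, which is the pre-outsourcing state of affairs the post describes.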